11/24/2018

Linux Test Project Primer

For testing your Linux system functionality, you can use the Linux Test Project:

Clone LTP

First clone the source code from the project:

git clone https://github.com/linux-test-project/ltp

Compile LTP

cd ltp
./build.sh

Install LTP

sudo make install

Run LTP Suite

sudo su -
cd /root/ltp-install
./runltp
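
By default, runltp works through the full set of test suites, which can take hours. If you only want to exercise one area, the -f option takes a comma-separated list of scenario files; for example, the following (an illustration, not part of the run above) limits the run to the syscalls suite:

./runltp -f syscalls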





11/20/2018

Linux Capabilities Example

An example of setting and getting a Linux capability

If you run across a command that reports an error suggesting a missing capability like NET_ADMIN, then you may want to use capabilities to allow a non-root user to execute special commands such as iotop:

[kwright@ryzen5 cvs]$ iotop
Netlink error: Operation not permitted (1)

The Linux kernel interfaces that iotop relies on now require root privileges
or the NET_ADMIN capability. This change occurred because a security issue
(CVE-2011-2494) was found that allows leakage of sensitive data across user
boundaries. If you require the ability to run iotop as a non-root user, please
configure sudo to allow you to run iotop as root.

Please do not file bugs on iotop about this.
[kwright@ryzen5 cvs]$ setcap
usage: setcap [-q] [-v] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]


SETCAP(8)                             System Manager's Manual                            SETCAP(8)

NAME
       setcap - set file capabilities

SYNOPSIS
       setcap [-q] [-v] (capabilities|-|-r) filename [ ... capabilitiesN fileN ]

DESCRIPTION
       In  the  absence  of  the -v (verify) option setcap sets the capabilities of each specified
       filename to the capabilities specified.  The -v option is used to verify that the specified
       capabilities are currently associated with the file.

       The capabilities are specified in the form described in cap_from_text(3).

       The special capability string, '-', can be used to indicate that capabilities are read from
       the standard input. In such cases, the capability set is terminated with a blank line.

       The special capability string, '-r', is used to remove a capability set from a file.

       The -q flag is used to make the program less verbose in its output.

EXIT CODE
       The setcap program will exit with a 0 exit code if successful. On failure, the exit code is
       1.

SEE ALSO
       cap_from_text(3), cap_set_file(3), getcap(8), capabilities(7)


[kwright@ryzen5 cvs]$ sudo setcap cap_net_admin+eip /usr/sbin/iotop

[kwright@ryzen5 cvs]$ echo $?
0

[kwright@ryzen5 cvs]$ getcap /usr/sbin/iotop
/usr/sbin/iotop = cap_net_admin+eip

Conclusion

Despite setting the capability reported in the error message, the iotop command still reports the same error. Capabilities have to be carefully supported by the executable itself, or setting them may still be ineffective; executing as root may be the only quick workaround.
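
If a capability you granted turns out to be ineffective, it can be removed again with the -r option, shown here on the same iotop binary used above:

[kwright@ryzen5 cvs]$ sudo setcap -r /usr/sbin/iotop
[kwright@ryzen5 cvs]$ getcap /usr/sbin/iotop    # no output means no file capabilities remain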


ZFS on Linux Quick Start

The point of this entry is to document how to use ZFS, and not all of the reasons why you might want to do so, or how it works. If you refer to the sources, then you'll find many compelling reasons to use ZFS, and a deeper understanding of how it works.
I recently had my Buffalo NAS device fail, and decided to try ZFS on mirrored disks in my computer as a replacement until a new device can be found. Over the last few days, I've used the following sources and my own experimentation to produce this entry.

Sources

https://www.open-zfs.org

https://zfsonlinux.org/

https://github.com/zfsonlinux/zfs/wiki/Fedora

https://github.com/zfsonlinux/zfs/

https://wiki.gentoo.org/wiki/ZFS

https://docs.joyent.com/private-cloud/troubleshooting/disk-replacement

https://www.thegeekdiary.com/how-to-backup-and-restore-zfs-root-pool-in-solaris-10


ZFS Basics

My reason for wanting to use ZFS is that it offers all the advantages of ACLs, backup, deduplication, logical volume management, quotas, restore, and software RAID within an efficient and resilient filesystem. ZFS works by combining devices into pools, which can be used to create filesystems (volumes) and snapshots.

The pool and devices are managed with the zpool command and the filesystems and snapshots with the zfs command. Devices (vdevs) can be used for write buffering (log) devices, read caching (cache) devices, spare devices, clones or as data devices in a mirrored (mirror) array or a RAID-like array with single (raidz1), double (raidz2), or triple (raidz3) parity. Both commands can be used to get or set properties which determine the configuration of the pool or the filesystem/snapshot.

The zfs command can make filesystems or snapshots created from the space available in the pool. It can also be used to send or receive snapshots.

Getting Started with ZFS on Linux

ZFS is not in the mainline Linux kernel because it is licensed under the CDDL, which is not compatible with the GPL. However, the ZFS source can be redistributed and compiled under Linux using dynamic kernel modules (dkms) on Fedora and on other distributions such as Arch, Debian, Gentoo, and Ubuntu, according to https://zfsonlinux.org/.

To install the repository configuration for zfsonlinux.org on Fedora:

# dnf install http://download.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm

To install the zfs package, the kernel-devel, and dependencies to build the zfs kernel modules:

# dnf install kernel-devel zfs

To enable the necessary services for systemd, execute:

# systemctl preset zfs-import-cache zfs-import-scan zfs-import.target zfs-mount zfs-share zfs-zed zfs.target


A reboot is recommended (and was necessary in my case) so that the ZFS kernel modules load cleanly:

# systemctl reboot

Jumping in zPool

Depending on the number of disks that you want to use, you can create pools with single or multiple disks in mirrored (mirror) or raid-like (raidz?) configurations. The man page for the zpool command gives examples of many of these configurations.

The following zpool command creates a mirrored pool with two disks, /dev/sda and /dev/sdb.

# zpool create ztank mirror /dev/sda /dev/sdb

It is best to use entire disks for maximum efficiency, although partitions can be used.
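
If you have three or more disks, a parity-based pool may make more sense than a mirror. As a sketch (the pool name and device names here are just placeholders), a single-parity raidz pool could be created with:

# zpool create zraid raidz1 /dev/sdb /dev/sdc /dev/sdd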

Common zpool Commands

zpool status - display status of pool(s)

zpool iostat - show io statistics for pool(s)

zpool list - show details for pool(s)

zpool add - add a new vdev to a pool for log and cache

zpool remove - remove a log or cache vdev from a pool

zpool attach - attach a new vdev

zpool detach - detach a vdev

zpool online - activate a vdev in a pool

zpool offline - deactivate a vdev in a pool


Other zpool Commands

zpool import - activate a ZFS pool

zpool export - deactivate a ZFS pool

zpool upgrade - show or upgrade a ZFS pool

zpool scrub - check a pool and repair data errors

zpool history - show command history of pool


History Example with zpool

# zpool history

History for 'ztank':
2018-11-18.17:20:02 zpool create ztank mirror /dev/sda /dev/sdb
2018-11-18.17:21:38 zfs create ztank/keith
2018-11-18.17:21:52 zfs create ztank/pattie
2018-11-18.17:22:01 zfs create ztank/chris
2018-11-18.17:22:34 zfs create ztank/gallery
2018-11-18.17:24:52 zfs set mountpoint=/var/zfs/gallery ztank/gallery
2018-11-18.17:26:57 zfs create ztank/backup
2018-11-18.17:34:11 zfs set dedup=verify ztank
2018-11-18.17:50:30 zpool add -f ztank log /dev/sdd1
2018-11-18.17:50:44 zpool add -f ztank cache /dev/sdd2
2018-11-18.18:49:58 zfs create ztank/isos
2018-11-18.19:51:52 zfs set logbias=throughput ztank
2018-11-18.20:41:08 zfs set compression=lz4 ztank
2018-11-18.20:48:28 zfs set mountpoint=none ztank/gallery
2018-11-18.20:50:53 zfs set mountpoint=/ztank/gallery ztank/gallery
2018-11-19.00:21:40 zfs snapshot -r ztank/keith@20181119-002133
2018-11-19.00:23:40 zfs snapshot -r ztank/pattie@20181119-002334
2018-11-19.00:23:53 zfs snapshot -r ztank/chris@20181119-002348
2018-11-19.00:24:06 zfs snapshot -r ztank/backup@20181119-002359
2018-11-19.01:15:26 zpool scrub ztank
2018-11-19.19:37:40 zpool import -c /etc/zfs/zpool.cache -aN
2018-11-19.20:26:54 zpool import -c /etc/zfs/zpool.cache -aN
2018-11-19.22:55:21 zpool import -c /etc/zfs/zpool.cache -aN
2018-11-19.23:37:44 zpool import -c /etc/zfs/zpool.cache -aN
2018-11-20.01:38:13 zfs create ztank/VMS
2018-11-20.10:45:52 zfs snapshot -r ztank@backup


Managing Volumes and Snapshots with zfs

The zfs command shows up repeatedly in the above zpool history output. As shown, the first step after creating a pool is to create volumes:

# zfs create ztank/keith

# zfs create ztank/pattie

# zfs create ztank/chris

# zfs create ztank/gallery

# zfs create ztank/isos

# zfs create ztank/VMS



To create snapshots of volumes, a pool/volume@snapshot syntax is used:


# zfs snapshot -r ztank/keith@20181119-002133

# zfs snapshot -r ztank/pattie@20181119-002334

# zfs snapshot -r ztank/chris@20181119-002348

# zfs snapshot -r ztank/backup@20181119-002359



The zfs snapshot -r option makes the snapshot recursive throughout the filesystem and its descendants.

To view filesystems, volumes, and snapshots, use the zfs list -t all command.

The zfs destroy command can be used to remove snapshots and volumes.
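
For example, to list just the snapshots and then remove one of the snapshots created above (a sketch; adjust the names to your own pool):

# zfs list -t snapshot

# zfs destroy ztank/chris@20181119-002348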

Managing Properties with zfs

Don't trust the man page for the default values of filesystem properties: I found that the actual dedup and compression values disagreed with the defaults documented there. To view all the current values for a filesystem or snapshot, use:

# zfs get all ztank/chris

NAME         PROPERTY       VALUE                  SOURCE
ztank/chris  type           filesystem             -
ztank/chris  creation       Sun Nov 18 17:21 2018  -
ztank/chris  used           1.11G                  -
ztank/chris  available      1.39T                  -
ztank/chris  referenced     1.11G                  -
ztank/chris  compressratio  1.00x                  -
ztank/chris  mounted        yes                    -
ztank/chris  quota          none                   default
ztank/chris  reservation    none                   default
...


Properties set on a parent are inherited by a child in ZFS, so for example, properties set on the pool will be inherited by the volumes in the pool unless overridden.

A few examples of setting properties to be inherited by all volumes:

# zfs set dedup=verify ztank

# zfs set logbias=throughput ztank

# zfs set compression=lz4 ztank


Properties can also be set on a volume or a snapshot. If you want to restrict usage on a volume, a quota can be set. To ensure that space will be allocated to a volume, a reservation can be set:

# zfs set quota=4G ztank/chris

# zfs set reservation=2G ztank/chris

# zfs get all ztank/chris   # verify that the quota and reservation are updated

NAME         PROPERTY              VALUE                  SOURCE
ztank/chris  type                  filesystem             -
ztank/chris  creation              Sun Nov 18 17:21 2018  -
ztank/chris  used                  1.11G                  -
ztank/chris  available             2.89G                  -
ztank/chris  referenced            1.11G                  -
ztank/chris  compressratio         1.00x                  -
ztank/chris  mounted               yes                    -
ztank/chris  quota                 4G                     local
ztank/chris  reservation           2G                     local
...

Backing Up

To create a backup image of a pool or filesystem, first create a snapshot and then send that snapshot to a file or another host:

# zfs snapshot -r ztank@backup

# zfs send -v ztank@backup > /mnt/ztank.dump

To restore, the image can be received:

# zfs receive ztank@backup < /mnt/ztank.dump
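
After the initial full send, later backups can be much smaller by sending only the changes between two snapshots with the -i option. A sketch, assuming a newer recursive snapshot named ztank@backup2 has been taken:

# zfs snapshot -r ztank@backup2

# zfs send -v -i ztank@backup ztank@backup2 > /mnt/ztank-incr.dump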

Maintenance and Repair

Maintenance and repair of a ZFS pool and filesystems should be automatic, but periodically running the following will search for corrupted blocks and repair them:

# zpool scrub ztank
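
The progress of a scrub, and any errors it finds, can be checked at any time (ztank being the pool created earlier):

# zpool status -v ztank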

11/11/2018

Network Manager - nmcli

If your Linux Distribution is using Network Manager, then nmcli is a great interface to configure connections from the command line interface.

View Connections

To view connections using nmcli, use:

# nmcli

or

# nmcli connection

NAME                UUID                                  TYPE             DEVICE 
docker0             1948720d-d589-4a3f-99d6-5fefcb5a3380  bridge           docker0
Wired connection 1  13f3f876-c0ab-3b31-a086-360d972043d8  802-3-ethernet   enp4s0 
tun0                c4e462c4-89d9-4d6d-8042-2c2eda44c6da  tun              tun0   
virbr0              f3aedff9-2b78-4a7f-b24e-b3d878daefbf  bridge           virbr0 
Droidz              a940b5dd-ac3a-41b8-9d90-ed408c5f0910  802-11-wireless  --     

To see details use the show action with the connection name:

# nmcli connection show Droidz 

Rename Connection

Change the connection name from "Wired connection 1" to "enp4s0" to match the device name:

nmcli connection modify Wired\ connection\ 1 con-name enp4s0

Set IP Address

Set a single IP address for the enp4s0 connection:

nmcli connection modify enp4s0 ipv4.addresses 10.0.0.234/8
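
If the connection was previously using DHCP, you may also want to switch it to a manual method so the static address is not combined with a DHCP-assigned one. For example:

nmcli connection modify enp4s0 ipv4.method manual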

Use Multiple Addresses

Configure multiple IP addresses for the enp4s0 connection:

nmcli connection modify enp4s0 ipv4.addresses 10.0.0.234/8,192.168.0.234/24

Modifying Other Connection Settings

If using BASH shell completion, many other connection settings can be discovered and modified easily. For example, typing:

nmcli connection modify enp4s0 ipv4. 

and then pressing the TAB key would show: 


ipv4.addresses           ipv4.dhcp-send-hostname  ipv4.dns-search          ipv4.method
ipv4.dad-timeout         ipv4.dhcp-timeout        ipv4.gateway             ipv4.never-default
ipv4.dhcp-client-id      ipv4.dns                 ipv4.ignore-auto-dns     ipv4.route-metric
ipv4.dhcp-fqdn           ipv4.dns-options         ipv4.ignore-auto-routes  ipv4.routes
ipv4.dhcp-hostname       ipv4.dns-priority        ipv4.may-fail 


Setting the Default Gateway

The command for setting the default gateway is:

nmcli connection modify enp4s0 ipv4.gateway 192.168.0.1


Activating Changes

After changing the configuration of a connection, bring the connection up to activate the changes:

nmcli connection up enp4s0

Interactive Editing

To use an interactive editing mode from the command line, use the edit action for the connection:

nmcli connection edit enp4s0

Use the  help command to get started with editing. 
Use the describe command to determine what to provide for a setting.

Hot tip: to keep values from being appended when you set a property, remove the existing values first. For example, to return to a single IP address from multiple addresses, the other addresses must be removed first.

describe ipv4.addresses
remove ipv4.addresses
set ipv4.addresses 192.168.0.234/24
verify
save
activate
quit

Alternatives

The nmtui and nm-connection-editor commands provide a text menu and graphical interface to Network Manager.

11/07/2018

Try mandb if makewhatis is not found

Are you unable to get expected results when executing the following?

man -k

or:

man -f

In the past, it was typical to update the index files for searching the man pages with the command:

makewhatis

Recently, this command has been superseded by the command:

mandb
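
For example, after rebuilding the index databases, keyword searches should work again (the search term here is just an example):

sudo mandb
man -k partition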



10/23/2018

Automounting NFS with Systemd

Systemd Automount


These files would be placed in /etc/systemd/system.
They are named after the directory that is to be (auto)mounted.
The automount unit can be enabled and started to allow auto-mounting of the directory.
If the mount unit is enabled and started, then the directory will be persistently mounted.

[root@ryzen5 system]# cat home-lf.mount
[Unit]
  Description=nfs mount script
  Requires=network-online.target
  After=network-online.target

[Mount]
  What=10.0.0.46:/home/lf
  Where=/home/lf
  Options=rsize=8192,wsize=8192
  Type=nfs

[Install]
  WantedBy=multi-user.target

[root@ryzen5 system]# cat home-lf.automount
[Unit]
  Description=nfs mount script
  Requires=network-online.target
  After=network-online.target

[Automount]
  Where=/home/lf
  TimeoutIdleSec=10

[Install]
  WantedBy=multi-user.target
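
For reference, enabling the automount behavior for the files above would look something like this (using the unit names shown; your paths will differ):

systemctl daemon-reload
systemctl enable home-lf.automount --now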

1/04/2018

Kubernetes Installation on Fedora 27 Cloud Base

Kubernetes Installation on Fedora 27 Cloud Base

Getting Started

I tried numerous ways to get a Kubernetes Master node installed on my bare-metal Fedora 25 distribution without success. With Kubernetes under such rapid development, it can be difficult to find a distribution platform which is able to keep up. Fedora 26 represents a milestone in the development of Kubernetes: instead of running the components of Kubernetes directly as services on the Master node, they have been containerized.

This post explains how to install a Kubernetes Master node using containers, inside a Fedora 27 Cloud Base guest running on a host with Fedora 25. The host already has libvirt installed, which Vagrant will use to run the guest.

Managing Cloud Base Container

The Fedora 27 Cloud Base instance that will run the Kubernetes containers will be managed using Vagrant. Although the Vagrant documentation recommends installing directly from https://www.vagrantup.com/downloads.html, there is no package for the Fedora distribution there. If you are on a Windows, Mac, CentOS, or Debian platform, then you can install the vagrant software from there. On Fedora, it was installed with:

dnf install vagrant

Obtaining the Fedora 27 Cloud Base container



vagrant box add fedora/27-cloud-base

Initializing the Vagrant Environment


To download the image, a "box" first has to be added to vagrant, as shown above. Next, initialize a project directory, which creates a Vagrantfile you can use to customize how the image is deployed:

mkdir k8s-master; cd k8s-master
vagrant init fedora/27-cloud-base 


You can customize many options for the instance, such as memory, networking, shared directories, port forwarding, etc. If the above command had been executed with the minimal option -m, comments would not have been included in the Vagrantfile. To get started, you need to allocate more than the 500 megabytes of memory that is normally provided. If you can afford the memory, allocate 2,048 megabytes (2 GB) by using the customization below.

Assuming you are using libvirt, add the following lines to the Vagrantfile after the 'config.vm.box = "fedora/27-cloud-base"' line:

  config.vm.hostname = "k8s-master"
  config.vm.provider "libvirt" do |libvirt, override|
    libvirt.memory = 2048
    libvirt.nested = true
  end

Starting and Accessing the Fedora 27 Cloud Base Container


vagrant up 
vagrant ssh
sudo -i




Create the Kubernetes Repository File

cat > /etc/yum.repos.d/kubernetes.repo << HERE
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

HERE

Install the Packages

dnf install kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 kubernetes-cni.x86_64 bash-completion rsyslog docker -y


Configure the Kubelet service

In the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, append the following to the $KUBELET_KUBECONFIG_ARGS value:

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
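
For reference, after the edit the Environment line in 10-kubeadm.conf should look something like the following; the existing arguments vary by kubeadm version, so treat this only as a sketch of where the two new flags go:

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"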

Reload systemd

For the updated kubelet configuration to be recognized, systemd must be reloaded.

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet


Enable and Start the Services

systemctl enable docker --now
systemctl enable rsyslog --now
systemctl enable kubelet --now

Initialize the Kubernetes Cluster

kubeadm init --pod-network-cidr 10.244.0.0/16


Prepare Home Directory 

exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'source <(kubectl completion bash)' >> .bashrc
source .bashrc

Apply the Weave Network

kubectl apply -f https://git.io/weave-kube-1.6

Untaint the Master Node


kubectl taint nodes --all node-role.kubernetes.io/master-


If necessary, the node can be re-tainted by executing:

kubectl taint nodes --all node-role.kubernetes.io/master=:NoSchedule

Get Cluster Information


kubectl cluster-info

Kubernetes master is running at https://192.168.121.9:6443

KubeDNS is running at https://192.168.121.9:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

12/16/2017

Git Protocol with Systemd

How to use Git Protocol with Systemd


The documentation from https://git-scm.com/book/gr/v2/Git-on-the-Server-Git-Daemon explains how to execute the git daemon manually and gives an example of an Upstart script to start the daemon. On my Fedora system, I wanted to use Systemd to start this daemon as a service.

I've found the files to make it possible to start the git daemon with Systemd in the /usr/lib/systemd/system directory. They were installed from the git-daemon package. They could be installed with the following command run as root:

# dnf install -y git-daemon

Here's what they look like:

git.socket

[Unit]
Description=Git Activation Socket

[Socket]
ListenStream=9418
Accept=true

[Install]
WantedBy=sockets.target

git@.service

[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)

[Service]
User=nobody
ExecStart=-/usr/libexec/git-core/git-daemon --base-path=/var/lib/git --export-all --user-path=public_git --syslog --inetd --verbose
StandardInput=socket

Enable and Start git.socket

If you don't need to customize the service, then you can enable it and start it now. Otherwise, you may want to wait until you have customized the git@.service file.

# systemctl enable git.socket --now

Allow Git Home Directory SELinux Access

If getenforce returns either permissive or enforcing, and you want the daemon to serve repositories from user home directories, you should set the SELinux boolean that allows it by executing:

# setsebool -P git_system_enable_homedirs=true
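
You can confirm the boolean took effect with getsebool, which should report something like:

# getsebool git_system_enable_homedirs
git_system_enable_homedirs --> on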

Verify Listening git.socket

# ss -tlpn '( sport = :9418 )'

State       Recv-Q Send-Q       Local Address:Port                      Peer Address:Port
LISTEN      0      128                     :::9418                                :::*                   users:(("systemd",pid=1,fd=32))


Open Firewall Port

# firewall-cmd --add-port 9418/tcp --permanent
# firewall-cmd --add-port 9418/tcp 

Customizing git@.service

With the --export-all option used in the git@.service file by default, repositories do not even need to contain the magic file git-daemon-export-ok.

If you have root access, then you can change the git@.service file, but first copy it to the /etc/systemd/system directory.

# cp /usr/lib/systemd/system/git@.service /etc/systemd/system

Then, modify the /etc/systemd/system/git@.service file, so your changes will not be overwritten on system package updates.

Here are some things to consider changing:
--base-path to use a different directory instead of /var/lib/git
--user-path to use a different directory instead of public_git for user home directories
--export-all remove to require the daemon-export-ok file before exporting a directory
User=nobody to specify a different user to run the service

The User that is specified will need to have permission to access the directories specified with either the --base-path or --user-path options. 

On my system, the repositories that were under the --user-path were failing until I discovered I needed to allow execute(x) access to my home directory. I didn't like the idea of giving that permission to the nobody user, so I added a gitd user:

useradd -r -d /var/lib/git -s /sbin/nologin gitd

Here's my updated /etc/systemd/system/git@.service:

[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)

[Service]
User=gitd
ExecStart=-/usr/libexec/git-core/git-daemon --base-path=/var/lib/git --user-path=public_git --syslog --inetd --verbose 
StandardInput=socket 
# removed --export-all and changed from nobody to gitd User

Git Repository Sharing

To share your git repository over your network, you can place its directory under /var/lib/git. By default, only the root user has permission to add files in this directory. If you are using SELinux, be sure that you copy files or clone them into this location and do not move them there!

As an ordinary user, you would do the following one time:

$ setfacl -m u:nobody:x $HOME # nobody is the User for the service

If you customized the git@.service file, then be sure to use the User specified in that file, such as "gitd" instead of "nobody".

$ mkdir $HOME/public_git # create the directory for --user-path
$ restorecon -Rv $HOME/public_git # if using SELinux

For each repository to share, an ordinary user would do:

$ cd  $HOME/public_git  
$ git clone --bare (repository) (repository).git
$ touch (repository).git/git-daemon-export-ok # needed when --export-all is not used

Accessing Remote Repositories

Base Path Repositories (root user)

If the repository is placed in a directory under the --base-path=/var/lib/git such as /var/lib/git/git-new, then it could be cloned remotely by:

git clone git://(host or ip)/git-new


Using tail -f /var/log/messages shows the log from the server 10.0.0.5 when I connected from the client 10.0.0.46:

Dec 16 17:47:32 future audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@1-10.0.0.5:9418-10.0.0.46:41646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 17:47:32 future git-daemon[25679]: Connection from 10.0.0.46:41646
Dec 16 17:47:32 future git-daemon[25679]: Extended attributes (13 bytes) exist
Dec 16 17:47:32 future git-daemon[25679]: Request upload-pack for '/git-new'

Dec 16 17:47:32 future audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@1-10.0.0.5:9418-10.0.0.46:41646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'


User Path Repositories (normal user)


If the repository is placed in a directory under the --user-path=public_git such as:

/home/kwright/public_git/simple.git

Then, it could be cloned remotely by one of the following:

git clone git://(host or ip)/~kwright/simple.git

git clone git://future/~kwright/simple.git

git clone git://10.0.0.5/~kwright/simple.git

Notice that the public_git portion of the path must be omitted in the request.
Using tail -f /var/log/messages shows the log from the server 10.0.0.5 when I connected from the client 10.0.0.46:

Dec 16 17:48:31 future audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@2-10.0.0.5:9418-10.0.0.46:41634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 17:48:31 future git-daemon[25583]: Connection from 10.0.0.46:41634
Dec 16 17:48:31 future git-daemon[25583]: Extended attributes (13 bytes) exist
Dec 16 17:48:31 future systemd[1]: Started Git Repositories Server Daemon (10.0.0.46:41634).
Dec 16 17:48:31 future git-daemon[25583]: Request upload-pack for '~kwright/simple.git'
Dec 16 17:48:31 future git-daemon[25583]: userpath , request <~kwright/simple.git>, namlen 8, restlen 8, slash
Dec 16 17:48:54 future audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@2-10.0.0.5:9418-10.0.0.46:41634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'



11/29/2017

Direct Rules for Firewalld

Why Firewalld Direct Rules?

  1. You need more power than what's available with simply adding or removing services.
  2. You want to make exceptions for certain hosts.
  3. You want to make exceptions for certain networks.
  4. You have experience with the iptables, ip6tables, or ebtables commands needed for direct rules.

The documentation for Direct Rules can be found with:

man firewalld.direct

The basic structure of a rule is:

ipv - "ipv4|ipv6|eb" # Whether the rule is iptables, ip6tables, or ebtables based
table - "table" # Location of the rule in the filter, mangle, nat, etc. table
chain - "chain" # Location of the rule in the INPUT, OUTPUT, FORWARD, etc. chain
priority - "priority" # Rules with lower priority values take precedence over higher values
rule

If you have experience with the iptables command, then you should feel comfortable with basic Direct Rules. Instead of starting with "iptables", the command will start with "firewall-cmd --permanent --direct --add-rule", followed by a rule in the basic structure above. These rules must be added with the --permanent option and the firewalld daemon reloaded or restarted.

One simple firewall scenario

The web server service should only be available to one host and reject all others. Both actions should be logged.

Whitelist one host for one service

In this scenario, the host 10.0.0.107 would be allowed access to the http service, but any other host (the 0.0.0.0/0 network) would be rejected. The number following INPUT determines the priority of the rule: 0 is the highest priority, and rules with larger numbers are evaluated later. Beware, any reject or drop rules are evaluated before accept rules.

firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 0 \
-p tcp --dport 80 -s 10.0.0.107 \
-j LOG --log-prefix "DIRECT HTTP ACCEPT"  


firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 1 \
-p tcp --dport 80 -s 10.0.0.107 \
-j ACCEPT


firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 2 \
-p tcp --dport 80 \
-j LOG --log-prefix "DIRECT HTTP REJECT"    

firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 3 \
-p tcp --dport 80 ! -s 10.0.0.107 \
-j REJECT --reject-with icmp-host-unreachable

Since these rules were added with the --permanent option, they are not active in the runtime rules, yet. So, to make the permanent rules active, use the --reload option.

firewall-cmd --reload

I discovered that you have to "get" the rules instead of querying for a "list" of them:

firewall-cmd --direct --get-all-rules

ipv4 filter INPUT 0 -p tcp --dport 80 -s 10.0.0.107 -j LOG --log-prefix 'DIRECT HTTP ACCEPT'
ipv4 filter INPUT 1 -p tcp --dport 80 -s 10.0.0.107 -j ACCEPT
ipv4 filter INPUT 2 -p tcp --dport 80 -j LOG --log-prefix 'DIRECT HTTP REJECT'
ipv4 filter INPUT 3 -p tcp --dport 80 ! -s 10.0.0.107 -j REJECT --reject-with icmp-host-unreachable
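
Direct rules are removed the same way they were added, just with --remove-rule in place of --add-rule, followed by a reload. For example, to drop the logging rule at priority 2 shown above:

firewall-cmd --permanent --direct --remove-rule \
ipv4 \
filter \
INPUT 2 \
-p tcp --dport 80 \
-j LOG --log-prefix "DIRECT HTTP REJECT"

firewall-cmd --reload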


Rich rules for Firewalld

Why Firewalld Rich Rules?


  1. You need more power than what's available with simply adding or removing services
  2. You want to make exceptions for certain hosts.
  3. You want to make exceptions for certain networks.
  4. You don't have experience with the iptables command needed for direct rules.

Basic Documentation

The man page for firewall-cmd does not cover rich rules. To get the man page about them, use:

   man firewalld.richlanguage

which describes details like...

       A rule is part of a zone. One zone can contain several rules. If some rules
       interact/contradict, the first rule that matches "wins".

       Rich rule structure


           rule
             [source]
             [destination]
             service|port|protocol|icmp-block|icmp-type|masquerade|forward-port|source-port
             [log]
             [audit]
             [accept|reject|drop|mark]


Summary Firewalld Rich Rules common options:

rule [family="ipv4|ipv6"] 
source [not] address="address[/mask]"|mac="mac-address"|ipset="ipset"
destination [not] address="address[/mask]"
port port="port value" protocol="tcp|udp"
log [prefix="prefix text"] [level="log level"] [limit value="rate/duration"]
accept [limit value="rate/duration"]
reject [type="reject type"] [limit value="rate/duration"]
drop [limit value="rate/duration"]
mark set="mark[/mask]" [limit value="rate/duration"]


Working with reject action

Actually, I think for the reject action above, the type argument is mandatory. For the reject action, the type must use one of: 

icmp-host-prohibited, host-prohib, icmp-net-unreachable, net-unreach, icmp-host-unreachable, host-unreach, icmp-port-unreachable, port-unreach, icmp-proto-unreachable, proto-unreach, icmp-net-prohibited, net-prohib, tcp-reset, tcp-rst, icmp-admin-prohibited, admin-prohib 

In the whitelist used below you can notice a reject action:

reject type="icmp-host-prohibited"

Two simple firewall scenarios

Let's take two services running on a host, a web server and a dns server. The web server service should accept one host and reject all others, a simple whitelist using a reject action. The dns server service should accept all hosts except one, and drop all others, a simple blacklist using a drop action.

Whitelist one host for one service

In this scenario, the host 10.0.0.107 would be allowed access to the http service, but any other host (the 0.0.0.0/0 network) would be rejected. Beware, any reject or drop rules are evaluated before accept rules.

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="10.0.0.107" 
service name="http" 
log prefix="RICH HTTP ACCEPTED" 
accept' 

firewall-cmd --add-rich-rule='
rule family=ipv4 
source not address="10.0.0.107" 
service name="http" 
log prefix="RICH HTTP REJECTED " 
reject type="icmp-host-prohibited"'

Monitoring the RICH HTTP firewall log

To see connection attempts being accepted or rejected, try to access the web server from the hosts 10.0.0.107 and 10.0.0.108, respectively, after executing the following on a host like 10.0.0.5 with the httpd service running:

tail -f /var/log/messages | grep 'RICH HTTP '

Blacklist one host for one service

In this scenario, the host 10.0.0.107 would be blacklisted from accessing the DNS service, and its attempts to connect will be dropped. All other hosts will be accepted for access:

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="10.0.0.107" 
service name="dns" 
log prefix="RICH DNS DROPPED " 
drop'

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="0.0.0.0/0" 
service name="dns" 
log prefix="RICH DNS ACCEPTED " 
accept' 

Monitoring the RICH DNS firewall log

To see attempts to connect to the DNS server being dropped or accepted, try to access the dns server from the hosts 10.0.0.107 and 10.0.0.108, respectively, after executing the following on a host like 10.0.0.5 with the dns service running:

tail -f /var/log/messages | grep 'RICH DNS '

Rich Rule Persistence

For the rules entered above to be maintained across restarting firewalld or the system, they need to be either added again with the --permanent option, or you can use the --runtime-to-permanent option to preserve the rules in the default zone (You can also create rich rules in other zones using the --zone option).


firewall-cmd --runtime-to-permanent

After the above command is executed, the rules are saved to a file like /etc/firewalld/zones/public.xml, based upon the default active zone. Although you can use the firewall-cmd --remove-rich-rule option to delete rich rules that you no longer want, you can also edit the zone xml file directly, and then use:

firewall-cmd --reload
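
For example, to permanently remove the blacklist drop rule added earlier instead of editing the xml by hand:

firewall-cmd --permanent --remove-rich-rule='
rule family=ipv4 
source address="10.0.0.107" 
service name="dns" 
log prefix="RICH DNS DROPPED " 
drop'

firewall-cmd --reload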

Other Useful firewall-cmd commands:

firewall-cmd --get-active-zones
firewall-cmd --list-all
firewall-cmd --list-all-zones
firewall-cmd --list-rich-rules
firewall-cmd --help


11/26/2017

Installing Kubernetes on CentOS 7 with kubeadm

Kubernetes on CentOS 7

Prepare CentOS 7 for Kubernetes for Master and Worker

Disable SELinux Enforcement

Update the file /etc/selinux/config:

SELINUX=permissive

To avoid rebooting to have that become effective, execute:

setenforce 0


Disable swap

Swap must be disabled for the kubeadm init process to complete. Edit the /etc/fstab file and comment out the line(s) containing swap. For example:

#/dev/sda5 swap                    swap    defaults        0 0

To avoid rebooting to have that become effective, execute:

swapoff -a


Configure the firewall services

Create the k8s-master.xml and k8s-worker.xml files

cd /etc/firewalld/services

wget \
https://raw.githubusercontent.com/wrightrocket/k8s-firewalld/master/k8s-master.xml

wget \
https://raw.githubusercontent.com/wrightrocket/k8s-firewalld/master/k8s-worker.xml



Reload the firewall 


To make the new services available for use, the firewall must be reloaded. Execute the following to avoid rebooting:

firewall-cmd --reload

Apply the firewall rules


On the master execute:

firewall-cmd --add-service k8s-master 
firewall-cmd --add-service k8s-master --permanent

On worker nodes execute:
firewall-cmd --add-service k8s-worker
firewall-cmd --add-service k8s-worker --permanent


Create Kubernetes Yum Repository

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


Install the packages


yum install -y docker kubelet kubeadm kubectl 

Configure the Kubelet service

In the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, append the following to the $KUBELET_KUBECONFIG_ARGS value:

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice

Reload systemd

For the updated kubelet configuration to be recognized, systemd must be reloaded.

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Enable the Docker service

systemctl enable docker --now

Create the needed sysctl rules

cat > /etc/sysctl.d/k8s.conf << HERE
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
HERE

Apply the sysctl rules

sysctl --system

Installing Kubernetes on CentOS 7 on Master

Initialize the Master Node

Since the flannel network will be used with the Kubernetes cluster, the --pod-network-cidr option specifies the pod network, which must match the network in the kube-flannel.yml file applied later.

kubeadm init --pod-network-cidr 10.244.0.0/16

Configure kubectl for user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify node is Ready

kubectl get nodes

NAME           STATUS    ROLES     AGE       VERSION
kate.lf.test   Ready     master    2m        v1.8.3


Verify kube-system Pods are Ready

kubectl get pods --all-namespaces

NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
default       website-7cd5577444-xfp6s               1/1       Running   0          8m
kube-system   etcd-kate.lf.test                      1/1       Running   4          2d
kube-system   kube-apiserver-kate.lf.test            1/1       Running   5          2d
kube-system   kube-controller-manager-kate.lf.test   1/1       Running   7          2d
kube-system   kube-dns-545bc4bfd4-9tgcv              3/3       Running   14         2d
kube-system   kube-flannel-ds-gbzhp                  1/1       Running   2          1d
kube-system   kube-proxy-l9fts                       1/1       Running   3          2d
kube-system   kube-scheduler-kate.lf.test            1/1       Running   6          2d

Retrieve the Configuration for Flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Apply the Flannel Network 

kubectl apply -f kube-flannel.yml

Installing Kubernetes on CentOS 7 on a Worker

Retrieve the Token

On the Master node, retrieve the token that was generated during the installation.

kubeadm token list

TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS


If no token is shown, then a new token can be generated. The original installation token expires after one day, but the option --ttl 0 can be used with kubeadm token create to create a token that never expires.

kubeadm token create --ttl 0
33d628.3d1c0bf58ab1a68a


Join the Cluster

On the Worker node, join the cluster. Use the token from the previous step and the IP address of your master node.

kubeadm join --token 33d628.3d1c0bf58ab1a68a 10.0.0.108:6443

Install the flannel package

yum -y install flannel

This package is installed after the flannel network so that the flanneld and docker services will start correctly.

Configure flannel

The etcd prefix value in the file /etc/sysconfig/flanneld is not correct, so flanneld will fail to start because it cannot retrieve the network configuration at the given prefix. The value of FLANNEL_ETCD_PREFIX must be changed to the following:

#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/coreos.com/network"

Enable and start flanneld

systemctl enable flanneld --now

This enables and starts flanneld. Since docker has a dependency on flanneld, it will also be restarted, so it may take a while.

Configure kubectl for user

mkdir -p $HOME/.kube


sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify Nodes are Ready

It may take several minutes for all the nodes to get to "Ready" status.

kubectl get nodes

NAME           STATUS    ROLES     AGE       VERSION
kate.lf.test   Ready     master    10d       v1.8.3
kave.lf.test   Ready     <none>    4d        v1.8.3



About Me - WrightRocket


I've worked with computers for over 30 years, programming, administering, using and building them from scratch.

I'm an instructor for technical computer courses, an editor and developer of training manuals, and an Android developer.