11/24/2018

Linux Test Project Primer

For testing your Linux system functionality, you can use the Linux Test Project:

Clone LTP

First clone the source code from the project:

git clone https://github.com/linux-test-project/ltp

Compile LTP

cd ltp
./build.sh

Install LTP

sudo make install

Run LTP Suite

sudo su -
cd /root/ltp-install
./runltp





11/20/2018

Linux Capabilities Example

An example of setting and getting a Linux capability

If you run across a command that reports an error suggesting a missing capability, like NET_ADMIN, then you may want to use capabilities to allow a non-root user to execute special commands such as iotop:

[kwright@ryzen5 cvs]$ iotop
Netlink error: Operation not permitted (1)

The Linux kernel interfaces that iotop relies on now require root privileges
or the NET_ADMIN capability. This change occurred because a security issue
(CVE-2011-2494) was found that allows leakage of sensitive data across user
boundaries. If you require the ability to run iotop as a non-root user, please
configure sudo to allow you to run iotop as root.

Please do not file bugs on iotop about this.
[kwright@ryzen5 cvs]$ setcap
usage: setcap [-q] [-v] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]


SETCAP(8)                             System Manager's Manual                            SETCAP(8)

NAME
       setcap - set file capabilities

SYNOPSIS
       setcap [-q] [-v] (capabilities|-|-r) filename [ ... capabilitiesN fileN ]

DESCRIPTION
       In  the  absence  of  the -v (verify) option setcap sets the capabilities of each specified
       filename to the capabilities specified.  The -v option is used to verify that the specified
       capabilities are currently associated with the file.

       The capabilities are specified in the form described in cap_from_text(3).

       The special capability string, '-', can be used to indicate that capabilities are read from
       the standard input. In such cases, the capability set is terminated with a blank line.

       The special capability string, '-r', is used to remove a capability set from a file.

       The -q flag is used to make the program less verbose in its output.

EXIT CODE
       The setcap program will exit with a 0 exit code if successful. On failure, the exit code is
       1.

SEE ALSO
       cap_from_text(3), cap_set_file(3), getcap(8), capabilities(7)


[kwright@ryzen5 cvs]$ sudo setcap cap_net_admin+eip /usr/sbin/iotop

[kwright@ryzen5 cvs]$ echo $?
0

[kwright@ryzen5 cvs]$ getcap /usr/sbin/iotop
/usr/sbin/iotop = cap_net_admin+eip

Conclusion

Despite setting the capability reported in the error message, the iotop command still reports the same error. Capabilities must be carefully programmed into an executable; otherwise they may be ineffective, and executing as root may be the only quick workaround.
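One way to check whether a process actually gained a capability is to read its capability mask from /proc. A minimal sketch; the decode step assumes capsh, which ships in the same libcap package as setcap and getcap:

```shell
# Show the effective capability bitmask of the current shell
grep CapEff /proc/self/status

# Decode the bitmask into capability names (requires capsh from libcap):
# capsh --decode=$(awk '/CapEff/ {print $2}' /proc/self/status)
```

If cap_net_admin does not appear for the running process, the file capability did not take effect.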


ZFS on Linux Quick Start

The point of this entry is to document how to use ZFS, and not all of the reasons why you might want to do so, or how it works. If you refer to the sources, then you'll find many compelling reasons to use ZFS, and a deeper understanding of how it works.
I recently had my Buffalo NAS device fail, and decided to try ZFS on mirrored disks in my computer as a replacement until a new device can be found. Over the last few days, I've used the following sources and my own experimentation to produce this entry.

Sources

https://www.open-zfs.org

https://zfsonlinux.org/

https://github.com/zfsonlinux/zfs/wiki/Fedora

https://github.com/zfsonlinux/zfs/

https://wiki.gentoo.org/wiki/ZFS

https://docs.joyent.com/private-cloud/troubleshooting/disk-replacement

https://www.thegeekdiary.com/how-to-backup-and-restore-zfs-root-pool-in-solaris-10


ZFS Basics

My reason for wanting to use ZFS is that it offers all the advantages of ACLs, backup, deduplication, logical volume management, quotas, restore, and software RAID within an efficient and resilient filesystem. ZFS works by combining devices into pools, which can be used to create filesystems (volumes) and snapshots.

The pool and devices are managed with the zpool command and the filesystems and snapshots with the zfs command. Devices (vdevs) can be used for write buffering (log) devices, read caching (cache) devices, spare devices, clones or as data devices in a mirrored (mirror) array or a RAID-like array with single (raidz1), double (raidz2), or triple (raidz3) parity. Both commands can be used to get or set properties which determine the configuration of the pool or the filesystem/snapshot.

The zfs command can make filesystems or snapshots created from the space available in the pool. It can also be used to send or receive snapshots.

Getting Started with ZFS on Linux

ZFS is not in the mainstream Linux kernel, as it is licensed under the CDDL, which is not compatible with the GPL. However, the ZFS source can be redistributed and compiled under Linux using dynamic kernel module support (dkms) on Fedora and on other distributions such as Arch, Debian, Gentoo, and Ubuntu, according to https://zfsonlinux.org/.

To install the repository configuration for zfsonlinux.org on Fedora:

# dnf install http://download.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm

To install the zfs package, the kernel-devel, and dependencies to build the zfs kernel modules:

# dnf install kernel-devel zfs

To enable the necessary services for systemd, execute:

# systemctl preset zfs-import-cache zfs-import-scan zfs-import.target zfs-mount zfs-share zfs-zed zfs.target


Rebooting the system afterwards was both necessary and recommended:

# systemctl reboot

Jumping in zPool

Depending on the number of disks that you want to use, you can create pools with single or multiple disks in mirrored (mirror) or raid-like (raidz?) configurations. The man page for the zpool command gives examples of many of these configurations.

The following zpool command creates a mirrored pool with two disks /dev/sda and /dev/sdb.

# zpool create ztank mirror /dev/sda /dev/sdb

It is best to use entire disks for maximum efficiency, although partitions can be used.

Common zpool Commands

zpool status - display status of pool(s)

zpool iostat - show io statistics for pool(s)

zpool list - show details for pool(s)

zpool add - add a new vdev to a pool for log and cache

zpool remove - remove a vdev from a pool for log and cache

zpool attach - attach a new vdev

zpool detach - detach a vdev

zpool online - activate a vdev in a pool

zpool offline - deactivate a vdev in a pool


Other zpool Commands

zpool import - activate a ZFS pool

zpool export - deactivate a ZFS pool

zpool upgrade - show or upgrade a ZFS pool

zpool scrub - check and repair data in a ZFS pool

zpool history - show command history of pool


History Example with zpool

# zpool history

History for 'ztank':

2018-11-18.17:20:02 zpool create ztank mirror /dev/sda /dev/sdb

2018-11-18.17:21:38 zfs create ztank/keith

2018-11-18.17:21:52 zfs create ztank/pattie

2018-11-18.17:22:01 zfs create ztank/chris

2018-11-18.17:22:34 zfs create ztank/gallery

2018-11-18.17:24:52 zfs set mountpoint=/var/zfs/gallery ztank/gallery

2018-11-18.17:26:57 zfs create ztank/backup

2018-11-18.17:34:11 zfs set dedup=verify ztank

2018-11-18.17:50:30 zpool add -f ztank log /dev/sdd1

2018-11-18.17:50:44 zpool add -f ztank cache /dev/sdd2

2018-11-18.18:49:58 zfs create ztank/isos

2018-11-18.19:51:52 zfs set logbias=throughput ztank

2018-11-18.20:41:08 zfs set compression=lz4 ztank

2018-11-18.20:48:28 zfs set mountpoint=none ztank/gallery

2018-11-18.20:50:53 zfs set mountpoint=/ztank/gallery ztank/gallery

2018-11-19.00:21:40 zfs snapshot -r ztank/keith@20181119-002133

2018-11-19.00:23:40 zfs snapshot -r ztank/pattie@20181119-002334

2018-11-19.00:23:53 zfs snapshot -r ztank/chris@20181119-002348

2018-11-19.00:24:06 zfs snapshot -r ztank/backup@20181119-002359

2018-11-19.01:15:26 zpool scrub ztank

2018-11-19.19:37:40 zpool import -c /etc/zfs/zpool.cache -aN

2018-11-19.20:26:54 zpool import -c /etc/zfs/zpool.cache -aN

2018-11-19.22:55:21 zpool import -c /etc/zfs/zpool.cache -aN

2018-11-19.23:37:44 zpool import -c /etc/zfs/zpool.cache -aN

2018-11-20.01:38:13 zfs create ztank/VMS

2018-11-20.10:45:52 zfs snapshot -r ztank@backup


Managing Volumes and Snapshots with zfs

The zfs command shows up repeatedly in the zpool history output above. As shown, the first step after creating a pool is to create volumes:

# zfs create ztank/keith

# zfs create ztank/pattie

# zfs create ztank/chris

# zfs create ztank/gallery

# zfs create ztank/isos

# zfs create ztank/VMS



To create snapshots of volumes, the pool/volume@snapshot syntax is used:


# zfs snapshot -r ztank/keith@20181119-002133

# zfs snapshot -r ztank/pattie@20181119-002334

# zfs snapshot -r ztank/chris@20181119-002348

# zfs snapshot -r ztank/backup@20181119-002359



The zfs snapshot -r option makes the snapshot recursive throughout the filesystem and its descendants.

To view filesystems, volumes, and snapshots, the zfs list -t all command can be used.

The zfs destroy command can be used to remove snapshots and volumes.
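For example, a date-stamped snapshot like the ones above can be created, listed, and removed as follows. This is a sketch: the zfs commands themselves require root and an existing ztank pool, so they are shown commented.

```shell
# Build a date-stamped snapshot name in the same style as the history above
SNAP="ztank/chris@$(date +%Y%m%d-%H%M%S)"
echo "$SNAP"

# zfs snapshot -r "$SNAP"    # create the snapshot recursively
# zfs list -t all            # list filesystems, volumes and snapshots
# zfs destroy "$SNAP"        # remove the snapshot when no longer needed
```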

Managing Properties with zfs

Do not trust the man page for the default values of filesystem properties; I found that the actual dedup and compression values disagreed with the defaults documented there. To view all the current values of a filesystem or snapshot, use:

# zfs get all ztank/chris

NAME         PROPERTY       VALUE                  SOURCE
ztank/chris  type           filesystem             -
ztank/chris  creation       Sun Nov 18 17:21 2018  -
ztank/chris  used           1.11G                  -
ztank/chris  available      1.39T                  -
ztank/chris  referenced     1.11G                  -
ztank/chris  compressratio  1.00x                  -
ztank/chris  mounted        yes                    -
ztank/chris  quota          none                   default
ztank/chris  reservation    none                   default
...


Properties set on a parent are inherited by a child in ZFS, so for example, properties set on the pool will be inherited by the volumes in the pool unless overridden.

A few examples of setting properties to be inherited by all volumes:

# zfs set dedup=verify ztank

# zfs set logbias=throughput ztank

# zfs set compression=lz4 ztank


Properties can also be set on a volume or a snapshot. If you want to restrict usage on a volume, then a quota can be set. To ensure that space will be allocated to a volume, a reservation can be set:

# zfs set quota=4G ztank/chris

# zfs set reservation=2G ztank/chris

# zfs get all ztank/chris

The output below verifies that the quota and reservation are updated:

NAME         PROPERTY              VALUE                  SOURCE
ztank/chris  type                  filesystem             -
ztank/chris  creation              Sun Nov 18 17:21 2018  -
ztank/chris  used                  1.11G                  -
ztank/chris  available             2.89G                  -
ztank/chris  referenced            1.11G                  -
ztank/chris  compressratio         1.00x                  -
ztank/chris  mounted               yes                    -
ztank/chris  quota                 4G                     local
ztank/chris  reservation           2G                     local
...

Backing Up

To create a backup image of a pool or filesystem, first create a snapshot and then send that snapshot to a file or another host:

# zfs snapshot -r ztank@backup

# zfs send -v ztank@backup > /mnt/ztank.dump

To restore, the image can be received:

# zfs receive ztank@backup < /mnt/ztank.dump

Maintenance and Repair

Maintenance and repair of a ZFS pool and filesystems should be automatic, but periodically running the following will search for corrupted blocks and repair them:

# zpool scrub ztank

11/11/2018

Network Manager - nmcli

If your Linux distribution is using Network Manager, then nmcli is a great interface for configuring connections from the command line.

View Connections

To view connections using nmcli, use:

# nmcli

or

# nmcli connection

NAME                UUID                                  TYPE             DEVICE
docker0             1948720d-d589-4a3f-99d6-5fefcb5a3380  bridge           docker0
Wired connection 1  13f3f876-c0ab-3b31-a086-360d972043d8  802-3-ethernet   enp4s0
tun0                c4e462c4-89d9-4d6d-8042-2c2eda44c6da  tun              tun0
virbr0              f3aedff9-2b78-4a7f-b24e-b3d878daefbf  bridge           virbr0
Droidz              a940b5dd-ac3a-41b8-9d90-ed408c5f0910  802-11-wireless  --

To see details use the show action with the connection name:

# nmcli connection show Droidz 

Rename Connection

Change the connection name from "Wired connection 1" to "enp4s0", naming it after the device:

nmcli connection modify Wired\ connection\ 1 con-name enp4s0

Set IP Address

Set a single IP address for the enp4s0 connection:

nmcli connection modify enp4s0 ipv4.addresses 10.0.0.234/8

Use Multiple Addresses

Configure multiple IP addresses for the enp4s0 connection:

nmcli connection modify enp4s0 ipv4.addresses 10.0.0.234/8,192.168.0.234/24

Modifying Other Connection Settings

If you are using BASH shell completion, many other connection settings can be discovered easily for modification. For example, typing:

nmcli connection modify enp4s0 ipv4. 

and then pressing the TAB key would show: 


ipv4.addresses           ipv4.dhcp-send-hostname  ipv4.dns-search          ipv4.method
ipv4.dad-timeout         ipv4.dhcp-timeout        ipv4.gateway             ipv4.never-default
ipv4.dhcp-client-id      ipv4.dns                 ipv4.ignore-auto-dns     ipv4.route-metric
ipv4.dhcp-fqdn           ipv4.dns-options         ipv4.ignore-auto-routes  ipv4.routes
ipv4.dhcp-hostname       ipv4.dns-priority        ipv4.may-fail 


Setting the Default Gateway

The command for setting the default gateway is:

nmcli connection modify enp4s0 ipv4.gateway 192.168.0.1


Activating Modified Changes

After changing the configuration of a connection, bring the connection up to activate the changes:

nmcli connection up enp4s0

Interactive Editing

To use an interactive editing mode from the command line, use the edit action for the connection:

nmcli connection edit enp4s0

Use the  help command to get started with editing. 
Use the describe command to determine what to provide for a setting.

Hot tip: values are appended when set, so to replace a setting, first remove it. For example, to return to a single IP address from multiple addresses, the extra addresses must be removed first:

describe ipv4.addresses
remove ipv4.addresses
set ipv4.addresses 192.168.0.234/24
verify
save
activate
quit

Alternatives

The nmtui and nm-connection-editor commands provide a text menu and graphical interface to Network Manager.

11/07/2018

Try mandb if makewhatis is not found

Are you unable to get expected results when executing the following?

man -k

or:

man -f

In the past, it was typical to update the index files for searching the man pages with the command:

makewhatis

Recently, this command has been superseded by the command:

mandb



10/23/2018

Automounting NFS with Systemd

Systemd Automount


These files are placed in /etc/systemd/system.
They are named after the directory that is to be (auto)mounted, with the leading slash removed and the remaining slashes replaced by dashes.
The automount unit can be enabled and started to allow on-demand mounting of the directory.
If the mount unit itself is enabled and started, then the directory will be persistently mounted instead.

[root@ryzen5 system]# cat home-lf.mount
[Unit]
  Description=nfs mount script
  Requires=network-online.target
  After=network-online.target

[Mount]
  What=10.0.0.46:/home/lf
  Where=/home/lf
  Options=rsize=8192,wsize=8192
  Type=nfs

[Install]
  WantedBy=multi-user.target

[root@ryzen5 system]# cat home-lf.automount
[Unit]
  Description=nfs mount script
  Requires=network-online.target
  After=network-online.target

[Automount]
  Where=/home/lf
  TimeoutIdleSec=10

[Install]
  WantedBy=multi-user.target
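The unit file names above are derived from the mount point. A sketch of deriving the name and enabling the units; the systemctl commands assume the files above are already in place (for paths with special characters, systemd-escape --path does this conversion):

```shell
# Derive the systemd unit name from the mount point: /home/lf -> home-lf
MOUNTPOINT=/home/lf
UNIT="$(echo "${MOUNTPOINT#/}" | tr / -)"
echo "$UNIT"   # home-lf, matching home-lf.mount and home-lf.automount

# systemctl daemon-reload
# systemctl enable --now "${UNIT}.automount"   # on-demand (auto)mounting
# systemctl enable --now "${UNIT}.mount"       # or: persistent mounting
```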

1/04/2018

Kubernetes Installation on Fedora 27 Cloud Base

Getting Started

I tried numerous ways to install a Kubernetes Master node on my bare-metal Fedora 25 distribution without success. With Kubernetes under such rapid development, it can be difficult to find a distribution platform that is able to keep up. Fedora 26 represents a milestone in the development of Kubernetes: instead of running the components of Kubernetes directly as services on the Master node, they have been containerized.

This post explains how to install a Kubernetes Master node using containers on a Linux host running Fedora 25. This host already has libvirt installed, which stores the container metadata.

Managing Cloud Base Container

The container which will run the Kubernetes containers will be managed using Vagrant. Although the Vagrant documentation recommends installing directly from https://www.vagrantup.com/downloads.html, there is no package for the Fedora distribution there. If you are on a Windows, Mac, CentOS, or Debian platform, then you can install the vagrant software from that page. On Fedora, it was installed with:

dnf install vagrant

Obtaining the Fedora 27 Cloud Base container



vagrant box add fedora/27-cloud-base

Initializing the Vagrant Environment


To download the container image, first a "box" has to be added to vagrant (done above). Next, create a Vagrantfile to customize how you deploy this image:

mkdir k8s-master; cd k8s-master
vagrant init fedora/27-cloud-base 


You can customize many options for the container like memory, networking, shared directories, port forwarding, etc. If the above command had been executed with the minimal option -m, the comments would not have been included. To get started, you need to allocate more than the 500 megabytes of memory that is normally provided. If you can afford the memory, allocate 2,048 megabytes by using the customization below.

Assuming you are using libvirt, add the following lines to the Vagrantfile after the config.vm.box = "fedora/27-cloud-base" line:

  config.vm.hostname = "k8s-master"
  config.vm.provider "libvirt" do |libvirt, override|
    libvirt.memory = 2048
    libvirt.nested = true
  end

Starting and Accessing the Fedora 27 Cloud Base Container


vagrant up 
vagrant ssh
sudo -i




Create the Kubernetes Repository File

cat > /etc/yum.repos.d/kubernetes.repo << HERE
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

HERE

Install the Packages

dnf install kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 kubernetes-cni.x86_64 bash-completion rsyslog docker -y


Configure the Kubelet service

Append the following flags to the $KUBELET_KUBECONFIG_ARGS variable in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file:

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
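One way to script that edit is with sed. This is a sketch run against a temporary copy containing a representative KUBELET_KUBECONFIG_ARGS line; on a real node, point CONF at the actual drop-in path instead:

```shell
# On a real node: CONF=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# A temporary copy with a representative (assumed) line is used here.
CONF=$(mktemp)
echo 'Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"' > "$CONF"

# Append the cgroup flags inside the quoted KUBELET_KUBECONFIG_ARGS value
sed -i 's|\(KUBELET_KUBECONFIG_ARGS=[^"]*\)|\1 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice|' "$CONF"

grep KUBELET_KUBECONFIG_ARGS "$CONF"
```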

Reload systemd

For the updated kubelet configuration to be recognized, systemd must be reloaded.

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet


Enable and Start the Services

systemctl enable docker --now
systemctl enable rsyslog --now
systemctl enable kubelet --now

Initialize the Kubernetes Cluster

kubeadm init --pod-network-cidr 10.244.0.0/16


Prepare Home Directory 

exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'source <(kubectl completion bash)' >> .bashrc
source .bashrc

Apply the Weave Network

kubectl apply -f https://git.io/weave-kube-1.6

Untaint the Master Node


kubectl taint nodes --all node-role.kubernetes.io/master-


If necessary the node can be re-tainted by executing:

kubectl taint nodes --all node-role.kubernetes.io/master="":NoSchedule

Get Cluster Information


kubectl cluster-info

Kubernetes master is running at https://192.168.121.9:6443

KubeDNS is running at https://192.168.121.9:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

About Me - WrightRocket


I've worked with computers for over 30 years, programming, administering, using and building them from scratch.

I'm an instructor for technical computer courses, an editor and developer of training manuals, and an Android developer.