Creating an Encrypted Partition in CentOS 7

First, prepare a partition with fdisk. In this example, it is assumed that /dev/sdb1 has been created.
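
If you have not created the partition yet, one possible way to do it non-interactively, assuming /dev/sdb is an empty disk you can dedicate to this, is with parted:

parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
lsblk /dev/sdb   # verify that /dev/sdb1 now exists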

Then, use cryptsetup to luksFormat the partition with a passphrase.

cryptsetup -y luksFormat /dev/sdb1

You will need to type YES, in uppercase, if you are sure you want to continue.

You will be prompted to enter and verify your passphrase.  Choose one that is at least 8 characters and not too simple, since cryptsetup now checks passphrase complexity.  Once the format completes, you will need this passphrase whenever you open the device, and you can open it under any name you want to appear underneath /dev/mapper/.

Next, use cryptsetup to luksOpen the partition to a name like "confidential" that will become part of the path to the new /dev/mapper/confidential device.

cryptsetup luksOpen /dev/sdb1 confidential

You will then be prompted for the passphrase that you used when you executed cryptsetup with the luksFormat subcommand.

The device will now appear as /dev/mapper/confidential, but it will actually be a symbolic link to a /dev/dm* device.

Format the open (unencrypted) device by making a filesystem.

mkfs.ext4 /dev/mapper/confidential

Create a mount point and mount the new filesystem.

mkdir /var/lib/confidential
mount /dev/mapper/confidential /var/lib/confidential

Put the data that you want to be encrypted onto the filesystem. For example, to copy confidential data from a user's home directory to the encrypted device, you could execute something like:

cp /home/user/confidential.data /var/lib/confidential

Unmount the filesystem and use cryptsetup to luksClose the device.

umount /var/lib/confidential
cryptsetup luksClose confidential
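
To work with the encrypted data again later, reverse the last steps: open the LUKS device with the same passphrase and mount it. A minimal sketch, reusing the names from this example:

cryptsetup luksOpen /dev/sdb1 confidential
mount /dev/mapper/confidential /var/lib/confidential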


Creating an iSCSI target and initiator with CentOS 7

First, you will probably need to install the necessary packages, as they are not installed by default. For the iSCSI server (the target portal), you will need the scsi-target-utils and targetcli packages, and on the client, the iscsi-initiator-utils package.  For testing purposes, it may be useful to have all three installed on the server, especially if you are just trying out iSCSI on a single system for practice.  The following command will install all three packages:

yum -y install scsi-target-utils targetcli iscsi-initiator-utils

iSCSI Qualified Name (IQN) 

You will need to assign a unique iSCSI Qualified Name (IQN) for your server, client, and each target.  The IQN starts with "iqn." followed by the year and month after which the target will be available, like "2015-11".  After that comes the domain name in reverse, like "com.example", and finally a colon followed by the name of the specific entity, like ":lun0" or ":centos7".  Put together, an IQN looks like this: "iqn.2015-11.com.example:lun0" or "iqn.2015-11.com.example:centos7".

If you want to have an IQN generated for your system that should be universally unique, which you can place into /etc/iscsi/initiatorname.iscsi instead of setting your own, you can execute:

iscsi-iname

which should output an IQN beginning with "iqn.1994-05.com.redhat:" followed by a random suffix.


iSCSI Target Portal (Server) Configuration

Your system is already identified with an IQN in the file /etc/iscsi/initiatorname.iscsi.  You can modify this to something unique for your network (or the world), such as iqn.2015-11.com.example:centos7.  You will also need the IQNs for all the clients that will connect to your iSCSI portal. After updating this file, restart the iscsid service:

systemctl restart iscsid
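
For example, assuming the IQN above, the file contains a single InitiatorName line (your IQN will differ):

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-11.com.example:centos7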

Rather than having to edit configuration files by hand, the targetcli command provides an interface for managing the targets of your portal, using a directory metaphor for organization and navigation.  Start the interface by executing:

targetcli

First navigate, and then create an appropriate backing store.  If you have a block device, like /dev/sdd, then you could create a backing store named back1 by executing:

cd /backstores/block
create back1 /dev/sdd

To use a file image backing store with a size of 100 MB, you could execute:

cd /backstores/fileio
create back1 /var/lib/iscsi-lun0.img 100M

Next, to create target IQNs, you can create entries under /iscsi.  For example, to create a  target of iqn.2015-11.com.example:lun0, you would execute:

cd /iscsi
create iqn.2015-11.com.example:lun0

The backing store created earlier must be associated with the target IQN. You do this by navigating under the IQN, the target portal group, and the luns directory like /iscsi/iqn.2015-11.com.example:lun0/tpg1/luns.

Pay attention to which /backstores entry you used previously. If you created the block backstore /backstores/block/back1 earlier, then you would execute:

cd /iscsi/iqn.2015-11.com.example:lun0/tpg1/luns
create /backstores/block/back1 

If you created the fileio backstore earlier, then you would execute:

cd /iscsi/iqn.2015-11.com.example:lun0/tpg1/luns
create /backstores/fileio/back1

Then, for each client, an acl must be added. Begin by changing to the acls under your IQN/tpg1:

cd /iscsi/iqn.2015-11.com.example:lun0/tpg1/acls

For each client, add the acl by creating an IQN entry:

create iqn.2015-11.com.example:centos7

Optionally, add authentication information (this matches the initiator configuration below):

cd iqn.2015-11.com.example:centos7
set auth userid=student
set auth password=password

If you return to the targetcli interface later, you can navigate to this "directory" and use the info command to view the authentication information for this client (info is useful in other "directories" within targetcli, too):

cd /iscsi/iqn.2015-11.com.example:lun0/tpg1/acls/iqn.2015-11.com.example:centos7
info

which outputs:

chap_password: password
chap_userid: student


When you are done, you can leave the program by typing exit.


Hot Tip! Each time you exit targetcli, it informs you that it has updated the /etc/target/saveconfig.json file, which can be edited directly.  Also, targetcli keeps a copy of the last ten configurations you have used in /etc/target/backup, so it is easy to edit the current configuration or restore one of these backups by copying the /etc/target/backup/saveconfig-[TIMESTAMP].json file over /etc/target/saveconfig.json and then restarting the target service.
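
For example, restoring a backup might look like the following sketch (the timestamp in the file name is only a placeholder; use one that actually exists in /etc/target/backup):

cp /etc/target/backup/saveconfig-20151106-160218.json /etc/target/saveconfig.json
systemctl restart target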

After you have finished adding an acl for each client, you should review the configuration.  You can navigate the configuration like a normal filesystem with cd and ls, and use info and help to get information specific to each directory of the configuration:

cd /
cd /backstores/fileio
cd /iscsi

If you are satisfied, then exit the interface by executing:

exit

Finally, enable and start the target service so that the saved configuration is restored at boot:

systemctl enable target
systemctl start target

If this is working correctly, then port 3260/tcp should be listening, as shown by the following command:

ss -tln | grep 3260

which should show:

LISTEN     0      5                         *:3260                     *:*  

You may need to open the port through firewalld, which is used for the firewall by default.  There are more secure ways to do this, but this example assumes that you want to open the port for all addresses:

firewall-cmd --zone public --add-port 3260/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-all
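
If you would rather not open the port to everyone, one of the more secure alternatives is a rich rule that limits access to a single initiator address; in this sketch, 192.0.2.10 is only an example address:

firewall-cmd --permanent --zone public --add-rich-rule 'rule family="ipv4" source address="192.0.2.10" port port="3260" protocol="tcp" accept'
firewall-cmd --reload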

iSCSI Initiator (Client) Configuration

Just as was done on the server, the client should be identified by an IQN in the /etc/iscsi/initiatorname.iscsi file. Make sure that you use the same IQN for each client that you used to create the acl entries on the server.  Don't forget to update the /etc/iscsi/initiatorname.iscsi file and restart the iscsid service:

systemctl restart iscsid

If you set a userid and password in the acl you created on the server, then in /etc/iscsi/iscsid.conf on the client, uncomment and modify the userid and password to match the one that you used:

node.session.auth.authmethod = CHAP
node.session.auth.username = student
node.session.auth.password = password

Now, you need to discover the targets at the portal by executing:

iscsiadm -m discovery -t sendtargets -p # where the IP is the target portal server above

The above command should show the IQNs available at the target portal server.  You can attempt to log in to see if you have any errors, especially if you are using authentication:

iscsiadm -m node --login

For information about the session that should now exist, you can use iscsiadm in session mode.  In session mode, you can print session information with increasing verbosity by setting the -P option from 0 (the lowest) to 3 (the highest).  For example, here's iscsiadm run in session mode at medium-high verbosity:

iscsiadm -P 2 -m session

which had the output of:

Target: iqn.2015-11.com.example:lun0 (non-flash)
Current Portal:,1
Persistent Portal:,1
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2015-11.com.example:centos7
Iface IPaddress:
Iface HWaddress:
Iface Netdev:
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
username: student
password: ********
password_in: ********
Negotiated iSCSI params:
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Once you have resolved any issues with making a connection, you should enable and start the iscsi service:

systemctl enable iscsi
systemctl start iscsi

If everything has gone successfully, then a new SCSI disk device should appear with a name found by listing /dev/sd*.  In this example, the new disk appears as /dev/sdb.

ls /dev/sd*

Shows the output:

/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb 

Most likely, the new device will be the last one shown like /dev/sdb above.  To get it ready for a filesystem, you can use fdisk and the new device name to create a partition.  In the following example, one new partition is created that uses all the space on the device:

fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (8192-204799, default 8192): 
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-204799, default 204799): 
Using default value 204799
Partition 1 of type Linux and of size 96 MiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now executing the following command should show the new partition, /dev/sdb1:

ls /dev/sd*

Shows the output:

/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb /dev/sdb1

To prepare the partition for mounting, create a filesystem on it; in this case, an ext4 filesystem will be created:

mkfs -t ext4 /dev/sdb1

Create the directory where you want to mount the new filesystem, such as /mnt/lun0:

mkdir /mnt/lun0

Verify that the filesystem can be successfully mounted:

mount /dev/sdb1 /mnt/lun0
mount | grep lun0

If successful, this should output something like:

/dev/sdb1 on /mnt/lun0 type ext4 (rw,relatime,seclabel,stripe=4096,data=ordered)

When you want to make this mount permanent, you have to be careful to add the "_netdev" mount option to your /etc/fstab entry.  It is also a good idea to use UUID identifiers instead of device names, as device names may change depending on the order in which devices are detected.  To discover the UUID for the new device, execute:

blkid
or for this specific example

blkid /dev/sdb1

which had the output of:

/dev/sdb1: UUID="3735827d-b4f4-48ed-aca1-a264a3ec956e" TYPE="ext4"

The entry in this example would look similar to the following, but your UUID will be different.

UUID=3735827d-b4f4-48ed-aca1-a264a3ec956e /mnt/lun0  ext4 _netdev 0 0

After adding the entry, unmount your new device, and then mount all /etc/fstab entries to verify that your new entry is correct.

umount /dev/sdb1
mount -a
mount | grep lun0

should output:

/dev/sdb1 on /mnt/lun0 type ext4 (rw,relatime,seclabel,stripe=4096,data=ordered,_netdev)

If that works, and you won't disturb anyone else on the system, you might reboot the system(s) starting with the server first, and then the client, to verify that everything has been enabled correctly for automatic mounting of the iSCSI device. After the systems have rebooted, check that the client is still mounting the lun0 target with:

mount | grep lun0

If you made it this far, then congratulations! You have a persistent iSCSI target portal server and an iSCSI initiator client able to perform CHAP authentication.

Wrap-up and Troubleshooting

If you are still having issues, then review the files that were updated and the firewall settings.  For example, the wrong IQN for a client will cause a failure to authorize, and the wrong userid or password, a failure to authenticate.  Also, revisit the targetcli interface and review the configuration information.  Here's a quick tour of some troubleshooting commands and their output.

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-11.com.example:centos7

fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d9dbb

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39938047    19968000   83  Linux
/dev/sda2        39938048    41943039     1002496   82  Linux swap / Solaris

Disk /dev/sdb: 104 MB, 104857600 bytes, 204800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0x761d8fba

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            8192      204799       98304   83  Linux

blkid
/dev/sda1: UUID="cfc6be43-cf4b-4cb5-9bf3-67f24d1d5205" TYPE="ext4" 
/dev/sda2: UUID="d5c08700-0ff1-4062-a13b-f3782b80c66b" TYPE="swap" 
/dev/sdb1: UUID="3735827d-b4f4-48ed-aca1-a264a3ec956e" TYPE="ext4"

cat /etc/fstab
# /etc/fstab
# Created by anaconda on Fri Nov 6 16:02:18 2015
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
UUID=cfc6be43-cf4b-4cb5-9bf3-67f24d1d5205 /                       ext4    defaults        1 1
UUID=d5c08700-0ff1-4062-a13b-f3782b80c66b swap                    swap    defaults        0 0

UUID=3735827d-b4f4-48ed-aca1-a264a3ec956e /mnt/lun0  ext4 _netdev 0 0 

grep -Ev '^#|^$' /etc/iscsi/iscsid.conf  # exclude comments and blank lines
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
node.startup = automatic
node.leading_login = No
node.session.auth.authmethod = CHAP
node.session.auth.username = student
node.session.auth.password = password
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
# remaining output omitted

firewall-cmd --list-all
public (default, active)
  interfaces: enp0s3
  services: dhcpv6-client ssh
  ports: 23/tcp 3260/tcp 23/udp
  masquerade: no
  rich rules:

/iscsi> cd /
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 0]
  | o- fileio ............................................. [Storage Objects: 1]
  | | o- back1 ....... [/var/lib/iscsi-lun0.img (100.0MiB) write-back activated]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 1]
  | o- iqn.2015-11.com.example:lun0 .................................. [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ...................................................... [ACLs: 1]
  |     | o- iqn.2015-11.com.example:centos7 .................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ............................ [lun0 fileio/back1 (rw)]
  |     o- luns ...................................................... [LUNs: 1]
  |     | o- lun0 ..................... [fileio/back1 (/var/lib/iscsi-lun0.img)]
  |     o- portals ................................................ [Portals: 1]
  |       o- ................................................. [OK]
  o- loopback ..................................................... [Targets: 0]

/> cd /iscsi/iqn.2015-11.com.example:lun0/tpg1/acls/iqn.2015-11.com.example:centos7/
/iscsi/iqn.20...ample:centos7> info
chap_password: password
chap_userid: student

iscsiadm -m discovery -t sendtargets -p # the -p must be the correct IP for the portal
,1 iqn.2015-11.com.example:lun0

iscsiadm -m node -v --login
,1 iqn.2015-11.com.example:lun0

iscsiadm -P3 -m session 
iSCSI Transport Class version 2.0-870
Target: iqn.2015-11.com.example:lun0 (non-flash)
Current Portal:,1
Persistent Portal:,1
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2015-11.com.example:centos7
Iface IPaddress:
Iface HWaddress:
Iface Netdev:
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
username: student
password: ********
password_in: ********
Negotiated iSCSI params:
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
Attached SCSI devices:
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running

Good Luck!

Creating a loop device for testing filesystems in Linux

If you want a disk for testing different filesystems, but you are not able to add one to your system, you might think you are stuck. By creating a filesystem within a file, you can use a loop-mounted device instead, and you are not stuck at all!

For example, you could create a 100M /var/lib/testdisk file with the following command:

dd if=/dev/zero of=/var/lib/testdisk bs=1M count=100

Then, you can set up that file as a loop device:

losetup --find # to find the next available loop device, typically /dev/loop0 as shown used below
losetup /dev/loop0 /var/lib/testdisk

Next, you can format the loop device with a filesystem:
mkfs -t ext4 /dev/loop0

Use the filesystem by creating a directory and mounting it:

mkdir /test-ext4
mount /dev/loop0 /test-ext4

In the future, to mount this filesystem image, you can simply use the command:

mount -o loop /var/lib/testdisk /test-ext4

To make this mount permanent, you could add the following line to /etc/fstab:

/var/lib/testdisk       /test-ext4              ext4    loop,nodev      0 0
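
When you are finished testing, you can unmount the filesystem and release the loop device:

umount /test-ext4
losetup -d /dev/loop0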


Fast Forward to the Nikon D5500 from the D3100 - Part II


In the previous part of this series, I mentioned several features that I wanted to have on my camera but that were not available on the Nikon D3100:

  • Bracketing of Exposure and Shutter Speed
  • Wireless remote control
  • Intervalometer
  • Advanced Flash integration
  • GPS
  • Wi-Fi
Another motivating factor was that I really wanted a new lens!  Between my 18-55mm AF-S f/3.5-5.6G lens and my 55-300mm AF-S f/3.5-5.6G ED, I would need to keep switching lenses to go from a wide-angle shot to a telephoto shot.  I wanted one lens that I could keep on my camera and still be able to get both shots.

As I shopped for the camera upgrade, I noticed that the D5500 was often bundled with an 18-140mm AF-S f/3.5-5.6G ED.  So far, this new lens has been almost perfect for all my needs.  Occasionally, I still reach for the 55-300mm AF-S f/3.5-5.6G ED for a really up-close shot, but most of the time I'm happy with the range provided by the 18-140mm lens.

Comparable Specifications

The following table compares the specifications of the two cameras where a quantitative comparison can be made:

Specification                         D5500           D3100
Processor                             Expeed 4        Expeed 3
Photo Frames Per Second
Movie 1920x1080 Frames Per Second     60              24
ISO                                   100 to 25600    100 to 12800

The Biggest Difference 

The most noticeable difference between the Nikon D5500 and the D3100 is that the D5500 has an articulating (movable) touch screen monitor.  Congratulations to Nikon for having the first serious DSLR camera to have a touch screen.

I know it's not just me!  If I'm shown a TFT display, I want to touch it to control it.  On the Nikon D3100, I would often catch myself trying to pinch to zoom in on a picture displayed on its fixed 3.0" diagonal monitor.

Not only do I appreciate the pinch to zoom for a detailed look at shots in the field, but it's nice to flick through photos quickly on the D5500.  The D5500 also presents an excellent touch screen interface for configuring the settings of the camera. 

When put into LiveView mode, the D5500 touch screen can also be used to select a focus point and take a photo.  Since the D5500 monitor screen can be articulated, whereas the D3100 could not, you can turn it 180 degrees, so you can take selfies too!

The one thing that frustrates me about the touch screen interface is that double-tap is only implemented in a few places.  I really want to be able to double-tap to zoom all the way back out of a picture that I had zoomed all the way in on.

Get a Grip

The grip on the newer Nikon D5500 feels deeper than that of the D3100, so I feel as if I have a better, stronger grip on the camera while just holding it in my right hand.  Surprisingly, the D5500 is slightly smaller and about one ounce lighter (14.9 oz) than the D3100 (16 oz).

More To Come

I'll write more about the Nikon D5500 soon!  I've got to spend a bit more time with it to really get to know its pros and cons, but I thought I'd share my first impressions with everyone for now.

Upgrading to Windows 10 (and how an SSD really matters)

Upgrading to Windows 10 

Right now I am wondering why I did it.  I started an upgrade to Windows 10 last night, and my primary work laptop is stuck in a boot loop.  I hadn't even received a notification yet, even though I had "reserved" my upgrade a long time ago.  Since I decided I couldn't wait, I searched and found the MediaCreationToolx64.exe from Microsoft at: https://www.microsoft.com/en-us/software-download/windows10.

I know I must be good at finding bugs, or they must be good at finding me.  On my first attempt at using the MediaCreationToolx64.exe tool, I didn't choose to upgrade and was just letting it download to a USB key.  I was busy doing other things at the same time, and one of them required that I reboot my system.  The tool was reporting a status of about 50% complete at the time I executed:

shutdown /r /f

After restarting, the MediaCreationToolx64 kept reporting that it wasn't able to start properly, and suggesting rebooting to solve the problem.  That recommendation was not helpful, but I was able to use information in this link to fix the problem I created: https://answers.microsoft.com/en-us/insider/wiki/insider_wintp-insider_install/how-to-troubleshoot-common-setup-and-stop-errors/324d5a5f-d658-456c-bb82-b1201f735683

This was the procedure that was successful for me:
a. Press Windows key + X on the desktop screen of the computer.
b. Select Command Prompt (Admin)
c. In the open Command Prompt window, copy and paste the commands (all at once):
net stop wuauserv
net stop cryptSvc
net stop bits
net stop msiserver
ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
ren C:\Windows\System32\catroot2 catroot2.old
net start wuauserv
net start cryptSvc
net start bits
net start msiserver

Next, I tried to Upgrade this PC, instead of making a USB disk for installation on another computer.  The first stage of the installation went successfully to 100%, and then my computer rebooted.  I am now stuck with it continually rebooting.  There seems to be a phantom entry in the UEFI, as there are two choices to boot the system: one that is still labeled Windows 8.1 and another without a label.  Neither one will boot my system, and none of the choices on the boot menu are helpful either.

As I write, I am now downloading Windows 10 for installation on another computer from my spare laptop.  According to the information found at the site of the last link, I should be able to boot the USB to repair the start up of the system.  I hope so!

My download just finished.  The MediaCreationToolx64 first verified the download, and now it is creating the Windows 10 Media.  Creating the media seems to take a long time... about as long as the download!  For the first 50% of the time it is preparing the media, and then for the second 50% it writes it out to the USB drive.

No luck using the Windows 10 setup program booting off the USB drive. I was not able to repair the start up of the system, despite that option being available within the Windows 10 setup program.  I was also unable to restore any previous system restore points that I had created.  

In order to upgrade, the setup program told me that I had to boot that version of Windows, which was still not happening.  When I tried to do a fresh install I was told that I was not allowed because the only partition big enough to hold the new Windows was a "reserved OEM partition".  

The Good News

One option that was available in the Windows 10 setup program was helpful.  It at least allowed me to open a command prompt.  This setup mode of Windows also allowed me to have access to not only the internal hard drive partitions, but also to any external hard drives or USB flash drives.

After connecting my Western Digital "My Passport Ultra" 2 TB external hard drive, it was recognized as the G: drive.  Oddly, what was normally my C: drive showed up as D:.  To back up all of my users' data, I used the following commands:

mkdir g:\users
robocopy /r:0 /w:0 /s d:\users  g:\users

The /r:0 option attempts 0 retries on a failed copy, the /w:0 option waits 0 seconds between retries, and the /s option makes it act recursively on the source directory d:\users.

When I saw it was copying a very deep and unwanted directory, I used CTRL-C to stop the copying.  I then used the following command to prune a directory and all of its contents:

rmdir /s d:\users\Keith\Documents\Github

The Best News

Since I don't have time to spend backing up other parts of the original drive outside of the "users" directory, I ran out and got a new Toshiba Q Series Pro SSD.  Best Buy had a great deal on them for the same price I'd pay for it on Amazon, so I jumped on it.  

It will be a bit hard to say whether the boost in speed comes from the new Solid State Drive, from Windows 10 versus Windows 8.1, or just from having a fresh, clean install.  But if it is slower than before, I'd be really surprised.

The setup of Windows 10 was so fast I was really surprised.  What it had to do from the USB drive was done in less than ten minutes.  Then, it took about another 20 minutes of running from the hard drive to set up the apps and the updates.

The best news:  My work laptop is now booting Windows 10!  It would be better news if it wasn't running Windows, but it's my work laptop, so it must!

Update - SSD is a MUST Upgrade

The speed of my system is dramatically improved! I can attribute the speed increase to nothing more than the Toshiba Q Series Pro SSD replacing a 5400 RPM Toshiba traditional hard drive.  Applications with exactly the same code that was running a couple of days ago on Windows 8.1 now load so much faster, it is unbelievable to me!  If you are a person who swears at your computer for being slow, then you MUST upgrade to an SSD drive!


Fast Forward to the Nikon D5500 from the D3100

Part I - The Nikon D3100


A little more than a year ago, I took the leap back into serious photography and started doing business at http://wrightrocket.smugmug.com.  With a very limited budget, I started with the entry-level Nikon D3100, which provides beginning users with a guide mode, and intermediate or advanced users with the basics of a DSLR with the sophistication and quality of Nikon.  At the time, I really wanted the Nikon D5300, but didn't have the budget for it, so I settled for a little less than I wanted to get what I really needed.

Here are the technical specifications of the D3100:
  • Expeed 3 Processor
  • 14.2 Megapixels
  • DX Sensor 23.1mm x 15.4mm
  • 3 Frames per second continuous
  • ISO 100 to 12800
  • HD 1920x1080 at 24 frames per second
  • 3.0 inch diagonal non-touchscreen monitor
  • 16.0 ounces weight for camera body

The kit that I bought included 18-55mm and 55-200mm f/3.5-5.6G AF-S Nikkor VR lenses.  Over time, I have enjoyed using the camera very much, taking over 6,000 photographs in about a year. Here's a couple of my favorites:

I added a Nikkor AF-S 35mm f/1.8G prime VR lens and a 55-300mm AF-S f/3.5-5.6G ED VR II lens, and lots of other stuff.  Here's one of my favorites from the 300mm, which has VR II, so taking hand-held photos like this is possible:

After exploring more advanced photography techniques through reading and experimentation, I found several features which I wished were built-in to the camera, but were not. Here's a short list of features I wish that the Nikon D3100 had:
  • Bracketing of Exposure and Shutter Speed
  • Wireless remote control
  • Intervalometer
  • Advanced Flash integration
  • GPS
  • Wi-Fi
The following explains how I dealt with these shortcomings of the D3100 prior to upgrading.


Bracketing of Exposure and Shutter Speed

It was not much of a problem to overcome the lack of bracketing controls by simply varying the exposure or shutter speed manually, but doing it effectively required a tripod to keep the camera at the same view.  For the outdoor photography that I tend to do, I usually would set up the camera on the tripod and set it to aperture (A) mode. Then, I would simply turn the adjustment knob for the aperture between each frame.  Otherwise, I might set it up with the camera set to shutter speed mode (S) and adjust the shutter speed as shown below.

1/15th of a second, f10, ISO 100

1/20th of a second

1/25th of a second

1/30th of a second

1/40th of a second

1/50th of a second

Wireless Remote Control

I bought a wired intervalometer for taking time-lapse photography, and it also served as a way to release the shutter remotely, at a shorter distance, but like a wireless remote control.  It's also nice because in manual mode (M), if I set the shutter speed to Bulb, then I can hold the release button for as long as I would like to create long exposures like these:

Advanced Flash Integration

The small built-in flash is only adequate for up-close or very small room photography.  Any outdoor or large space photography required more.  By going with the Nikon SB-700 Speedlight flash, I was able to get an integrated flash that could provide Commander capabilities for the Nikon Creative Lighting System.  This flash isn't quite as large or as powerful as the SB-910, but is more than adequate even for outdoor or photos taken in moderate to large rooms.


GPS

There is a GPS port on the D3100, but I never acquired the Nikon GP-1A module, which lists for $312 as of today at www.nikonusa.com. Within Adobe Lightroom or Photoshop, as well as other tools, there are ways to embed location information, although I never bothered.


Wi-Fi

There is no Wi-Fi on the Nikon D3100, so you have to wait until you can get to a computer to import your photos.  There is also no built-in Eye-Fi support. Although I didn't explore this option, I should note that there is some Eye-Fi support built in on the Nikon D5500.  Eye-Fi cards allow the SD card in the camera to upload photos to a Wi-Fi access point.

Current Offerings

Nikon no longer offers the D3100, except possibly as refurbished; I saw one today at their site for $749, which made me smile.  The D5500 kit that I purchased, with the 18-140mm f/3.5-5.6G ED VR, is listed at $1,049.95 after a $350 instant savings, but it lacks many of the extras that I got in my kit, which you can read about in the next part of this post.

Much improved entry-level cameras can be found in the D3200 and D3300, still offered at $449.95 and $499.95, respectively.  They now feature 24.2 megapixels.  The D3200 still has the same Expeed 3 processor as the D3100, but the D3300 has the newer Expeed 4 processor. With an extra WU-1a Mobile Adapter module for $59.95, you can connect to both of these cameras through Wi-Fi on your smartphone.

Part II - Fast Forward to the Nikon D5500 from the D3100 - The D5500

The second part of this post will be coming soon! I plan to share the benefits and downsides of the Nikon D5500 in comparison to the D3100, as well as a few new photos!


Firewalld and iptables

Firewalld and Iptables

The Problem

The iptables command line interface to control the Netfilter functions in the kernel is being superseded by Firewalld's firewall-cmd.  Firewalld provides not only a command line interface, but also a very powerful graphic one. 

The problem is that once you enable the Firewalld service, you should only use firewall-cmd from the command line for configuration.  Attempts to modify the firewall configuration directly with iptables commands will fail.  However, iptables commands can still be used to query the rules that are created by the Firewalld GUI, or by firewall-cmd commands.  For example, after using firewall-cmd to create rules, you could execute the following iptables command to view the actual Netfilter rules:

iptables -nvL 

HOT TIP: Take advantage of the command completion feature while working with firewall-cmd.  If you forget an option, just press TAB twice and wait a second for the list of available options!  Executing "firewall-cmd --help" also provides a good summary of the available options before you have to start reading the man page.

Service Rules

There are many services that have rules predefined by Firewalld, which can make it much easier to enable access to a service.  To find out which services Firewalld already knows about, use the following command:

firewall-cmd --get-services

To enable access to the service through the firewall persistently, you can execute:

firewall-cmd --add-service=dns --zone public --permanent

The above command does not affect the state of the current firewall.  To add the service immediately, you can execute the above command without the --permanent option, or else use:

firewall-cmd --reload

Alternatively, you could add all the services and other rules you want without the --permanent option, until the runtime configuration reflects what you want, and then execute:

firewall-cmd --runtime-to-permanent
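
For example, you might build up the runtime configuration one service at a time and then save it all at once; here is a sketch assuming you want dns and http allowed in the public zone:

firewall-cmd --add-service=dns --zone public
firewall-cmd --add-service=http --zone public
firewall-cmd --runtime-to-permanent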

Adding a New firewalld Service Definition

If a service that you want to enable through firewalld is not defined, then you can define it in an xml file under /etc/firewalld/services.  Here is an example of /etc/firewalld/services/quake.xml:

<?xml version="1.0" encoding="utf-8"?>
<service>
  <description>Quake is an on-line game</description>
  <port port="26000" protocol="tcp"/>
  <port port="26000" protocol="udp"/>
</service>

Here is another example, /etc/firewalld/services/iscsi.xml:

<?xml version="1.0" encoding="utf-8"?>
<service>
  <description>iSCSI default target portal port</description>
  <port port="3260" protocol="tcp"/>
</service>

After creating xml files like these in /etc/firewalld/services, you need to reload the firewalld configuration with:

firewall-cmd --reload

Then, you would be able to see the new services by executing:

firewall-cmd --get-services

which outputs:

RH-Satellite-6 amanda-client bacula bacula-client dhcp dhcpv6 dhcpv6-client dns ftp high-availability http https imaps ipp ipp-client ipsec iscsi kerberos kpasswd ldap ldaps libvirt libvirt-tls mdns mountd ms-wbt mysql nfs ntp openvpn pmcd pmproxy pmwebapi pmwebapis pop3s postgresql proxy-dhcp quake radius rpc-bind samba samba-client smtp ssh telnet tftp tftp-client transmission-client vnc-server wbem-https

Next, you could add the new services to a zone like public, for example:

firewall-cmd --add-service={iscsi,quake} --zone public --permanent
firewall-cmd --reload

To see the set of all rules in the current (runtime) configuration, you can use:

firewall-cmd --list-all

Direct Rules

Direct rules are similar to rules that used to be added with the iptables command.  Instead of starting with "iptables -I INPUT", you start with "firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0".  Like adding services, a permanent rule is not immediately active, but you can make it so by reloading the firewall rules.  For example, to open port tcp/8200, you could use:

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 \
-s -p tcp --dport 8200 -j ACCEPT

firewall-cmd --reload

I discovered that you have to "get" the rules instead of querying for a "list" of them:

firewall-cmd --direct --get-all-rules

Rich Rules

Rich rules are designed to accept a more natural language than Direct rules.  Both require more knowledge of the workings of the firewall than Service rules.

firewall-cmd --list-rich-rules
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="" port port="2049" protocol="tcp" accept'
firewall-cmd --remove-rich-rule 'rule family="ipv4" source address="" port port="2049" protocol="tcp" accept'


Minimal Linux with CentOS 7

I've been on a quest to create a minimal desktop using CentOS 7.  Starting with a minimal install of CentOS 7, I've slowly been adding packages to provide normal command line functionality instead of only minimal command line functionality.

yum -y install net-tools vim-enhanced ncurses-devel readline-devel bash-doc kernel-doc mlocate ksh zsh words attr ftp nmap-frontend telnet strace 

If you need to compile any kernel modules for virtualization or other purposes, you should add to that list: kernel-devel, kernel-headers, make, and gcc.
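
For example:

yum -y install kernel-devel kernel-headers make gcc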

Minimal Graphical User Interface Desktop for CentOS 7

To get the minimal desktop environment (GUI) for CentOS, I had to install the "xfce-desktop" package group with the following command run as root:

yum -y group install xfce-desktop

To add a few packages for web development and viewing I also ran as root:

 yum -y install bluefish firefox mate-terminal

Since I've been working on a VMware virtual machine, and the current versions of the vmware-open-vm-tools and vmware-tools packages were having issues capturing the mouse properly, I had to uninstall them and install the following packages:

yum -y install xorg-x11-drv-vmmouse xorg-x11-drv-vmware

Now, I can click into the desktop window, and my mouse stays captured in the guest, whereas before the mouse would too easily return to the host.

For those of you who may be running VirtualBox, make sure that you activate the Guest Additions CD through the Devices menu of the VirtualBox application. Then, you can click back into the VirtualBox guest and execute the following from the command line as the root user:

mount /dev/cdrom /mnt
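
Then run the Guest Additions installer from the mounted CD (the script name may vary slightly between VirtualBox versions):

sh /mnt/VBoxLinuxAdditions.run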

Gnu Free Mono Fonts

Having started from a minimal install of CentOS 7, I was not at all pleased with how the terminal looked in the minimal xfce-desktop or in the mate-terminal.  After searching with:

yum search "font monospace"

I found the gnu-free-mono-fonts package.  After installing it with:

yum -y install gnu-free-mono-fonts

the terminal and desktop fonts immediately changed, becoming fixed-width and looking like proper monospace fonts.


Sharing Shotwell in CentOS 7

In CentOS 7, the database for Shotwell is kept under the user's home directory in the ~/.local/share/shotwell/data directory, along with a backup.  The thumbnails are stored in the user's home directory in the ~/.cache/shotwell/thumbs directory.

To share these directories between multiple users on my system, I created a group called "shotwell":
groupadd -r shotwell

I added each user to the group:

usermod -aG shotwell keith
usermod -aG shotwell wright

After importing all the multimedia possible into Shotwell, I backed up the data to make it easy to extract to the destination location:

cd ~/.local/share/shotwell/
tar cvzf shotwell-database.tar.gz  data/
cd  ~/.cache/shotwell/
tar cvzf shotwell-thumbs.tar.gz thumbs/

Next, the data was extracted to the destination location:

cd /usr/share/shotwell
tar xf shotwell-database.tar.gz
tar xf shotwell-thumbs.tar.gz

Make sure the permissions on the new directories allow members of the group to write to the directories and files, and use setgid (g+s) so that the group will own any new files created:

cd /usr/share/shotwell/
chgrp -R shotwell data
find data -type d -exec chmod 775 {} \;
find data -type f -exec chmod 664 {} \;
chgrp -R shotwell thumbs
find thumbs -type d -exec chmod 775 {} \;
find thumbs -type f -exec chmod 664 {} \;
chmod g+s data thumbs

For each user, you need to either remove their previous database and cache directories or rename them. Then, you can create symbolic links to the shared directories for the database (data) and the thumbnails (thumbs).

For the first user:

cd ~keith/.local/share/shotwell/
mv data data-orig # or rm data
ln -s /usr/share/shotwell/data data

cd ~keith/.cache/shotwell/
mv thumbs thumbs-orig
ln -s /usr/share/shotwell/thumbs thumbs

For the second user:

cd ~wright/.local/share/shotwell/
mv data data-orig # or rm data
ln -s /usr/share/shotwell/data data

cd ~wright/.cache/shotwell/
mv thumbs thumbs-orig
ln -s /usr/share/shotwell/thumbs thumbs

Etc... for each user

Shotwell Summary

The first time you set up the directories, make sure you log out and log back in before trying to use Shotwell again.  Also, Shotwell is not designed for multiuser use, so do not allow multiple users to run the program at the same time.

It can take a long time to import your multimedia with Shotwell. It may crash, but if you restart it, it will continue.  It also seems to help not to import too much at once; for example, import one month at a time instead of trying to import a whole year.

Gate One - Command line applications from any HTML5 browser

Gate One

Gate One is a service that can be run on a system to provide secure access to the command line applications of the server, as well as an SSH client.  No plugins are required for access, only an HTML5-compliant web browser.  There are both commercial and open source versions of this product available at http://liftoffsoftware.com/Products/GateOne.

Installation - Git it!

To install Gate One, you can use a git client to download it, and then execute the python setup.py install command.

  • First change to an appropriate directory as the root user
cd /usr/local

  • Clone the git repository
git clone https://github.com/liftoff/GateOne

  • Install it with setup.py in the GateOne directory
cd GateOne
python setup.py install

  • Start gateone to create a default configuration
gateone &
Once it runs, break out of the service



Configure the service with the JSON files 10server.conf, 20authentication.conf, and 50terminal.conf.  Two things about the file names are important: they are processed in alphabetical order, and they are only processed if they have a .conf suffix.

The port to use is set in 10server.conf; I changed it because I already had 443 in use.  The configuration files are found in the /etc/gateone/conf.d directory.

cd /etc/gateone/conf.d
Edit to your liking.  Here is my  modified 10server.conf:

// This is Gate One's main settings file.
    // "gateone" server-wide settings fall under "*"
    "*": {
        "gateone": { // These settings apply to all of Gate One
            "address": "",
            "ca_certs": null,
            "cache_dir": "/tmp/gateone_cache",
            "certificate": "/etc/gateone/ssl/certificate.pem",
            "cookie_secret": "NGNiMjBhYjQ0M2FiNDgxYmFjOGE0ZmNkMWI1MGI0MzlhN",
            "debug": false,
            "disable_ssl": false,
            "embedded": false,
            "enable_unix_socket": false,
            "gid": "0",
            "https_redirect": false,
            "js_init": "",
            "keyfile": "/etc/gateone/ssl/keyfile.pem",
            "locale": "en_US",
            "log_file_max_size": 100000000,
            "log_file_num_backups": 10,
            "log_file_prefix": "/var/log/gateone/gateone.log",
            "log_to_stderr": null,
            "logging": "info",
            "multiprocessing_workers": null,
            "origins": ["localhost", "", "localhost.localdomain", "localhost4", "localhost4.localdomain4", "localhost6", "localhost6.localdomain6"],
            "pid_file": "/var/run/gateone.pid",
            "port": 10443,
            "session_dir": "/tmp/gateone",
            "session_timeout": "5d",
            "syslog_facility": "daemon",
            "uid": "0",
            "unix_socket_path": "/tmp/gateone.sock",
            "url_prefix": "/",
            "user_dir": "/var/lib/gateone/users",
            "user_logs_max_age": "30d"

Here is my 50terminal.conf.  I added Perl, Python, and Ruby applications by adding entries to the "commands" object for the terminal.

// This is Gate One's Terminal application settings file.
    // "*" means "apply to all users" or "default"
    "*": {
        "terminal": { // These settings apply to the "terminal" application
            "commands": {"SSH": {"command": "/usr/lib/python2.7/site-packages/gateone-1.2.0-py2.7.egg/gateone/applications/terminal/plugins/ssh/scripts/ssh_connect.py -S '%SESSION_DIR%/%SESSION%/%SHORT_SOCKET%' --sshfp -a '-oUserKnownHostsFile=\\\"%USERDIR%/%USER%/.ssh/known_hosts\\\"'", "description": "Connect to hosts via SSH."}, 
            "PYTHON": {"command": "/bin/python", "description": "Start Python Shell"},
            "PERL": {"command": "/bin/perl -d -e42", "description": "Start Perl Debugger Interactively"},
            "RUBY": {"command": "/bin/irb", "description": "Start Interactive Ruby Shell"}},
            "default_command": "SSH",
            "dtach": true,
            "enabled_filetypes": "all",
            "environment_vars": {"TERM": "xterm-256color"},
            "session_logging": true,
            "syslog_session_logging": false

Here is what I added to the 50terminal.conf:

            "PYTHON": {"command": "/bin/python", "description": "Start Python Shell"},
            "PERL": {"command": "/bin/perl -d -e42", "description": "Start Perl Debugger Interactively"},
            "RUBY": {"command": "/bin/irb", "description": "Start Interactive Ruby Shell"}}

If you want to customize how things work beyond these configuration files, then you can edit various files under the directory where Gate One was installed on your system.
For example, to change the branding on the main screen from "Gate One - Applications", the /usr/lib/python2.7/site-packages/gateone-1.2.0-py2.7.egg/gateone/static directory contains the file gateone.js, where I updated line 3342 to the following:

titleH2.innerHTML = gettext("OCS Learning Gateway");

Running GateOne Unprivileged

By default, the gateone.service systemd unit file has the service run as the root user.  Since I wanted to be able to start programming shells, this was not something that I wanted to allow.  So, I modified the systemd unit file for gateone.service found at /usr/lib/systemd/system/gateone.service.

In the [Service] block, I added:

User=gateone

Update: In 10server.conf, the uid value can also be changed from 0 to the uid of an unprivileged user.  This allows the server to start with root privileges to bind to a port and then drop them.
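
For example, assuming the gateone service account created below ends up with uid 992 and gid 990 (your values will differ; check them with id gateone), the relevant lines in 10server.conf would look something like:

            "gid": "990",
            "uid": "992",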

Here's what the whole gateone.service file looks like now:
Description=Web-based terminal



Next, I created the service account to match and set the proper ACLs on the user's home directory:

useradd -r -s /sbin/nologin -d /var/lib/gateone gateone
setfacl -Rm d:u:gateone:rwx /var/lib/gateone
setfacl -Rm u:gateone:rwx /var/lib/gateone

Then, I switched from the root account to the gateone user with sudo:

sudo -u gateone bash

As the gateone user, I executed gateone to create the default configuration:

gateone

Once it runs, break out of the service


The configuration files for the gateone user were then updated by copying them from /etc/gateone/conf.d:

cp /etc/gateone/conf.d/*.conf ~gateone/.gateone/conf.d/


To send a message to the screen, you can get an application to use the JavaScript:
GateOne.Visual.displayMessage('Message notification');

To send text to the terminal application, you can use the JavaScript:


Working with Drupal

The last couple of weeks have been very exciting, as I've managed to migrate two websites (www.onecoursesource.com and www.technicaltrainingresources.com) that were created in Drupal, and helped to build a new one in Drupal (www.missionbaymassage.com).  In the last year, I've been doing more and more PHP programming directly on our company's Point Of Sale system.  At first, I found working with Drupal confusing, as what I thought would be a web page file was really just an entry in the Drupal database.

What Got Me Started

The Fedora server that I set up several years ago to run postfix/dovecot/squirrelmail/ftp/http for our two company domains was at end-of-life for software updates, so we needed to upgrade.  Rather than actually upgrading, a new server was installed, and I was tasked with making the new server do everything that the old server was doing.

How I Fixed Problems


Drush is the Drupal shell.  It gives you an awesome amount of power to work with Drupal.  You can use it to perform updates, download and install modules and themes.  You can do backups and restores.  It also allows you to execute PHP and SQL code for troubleshooting, and more.  I highly recommend this tool for managing a Drupal installation.

Download drush and install it.  

drupal.org does keep files for releases of drush, but the project is now maintained at GitHub in the repository: https://github.com/drush-ops/drush

There is documentation available at http://docs.drush.org/ and specifically for installation for English users at http://docs.drush.org/en/master/install/

Start by executing the following commands (the exact file name may vary):

wget http://ftp.drupal.org/files/projects/drush-7.x-5.9.tar.gz  
echo $PATH

Examine the PATH directories listed in the output for one where you have write permission.
In my case, I was limited to ~/bin under my home directory, so I used the -C option with my home directory as the extraction target:

mkdir ~/bin
tar -xvf drush-7.x-5.9.tar.gz -C ~/

Next, create the links to the executables in the directory where you have write permission:

ln -s ~/drush/drush ~/bin/drush
ln -s ~/drush/drush.php ~/bin/drush.php

Now you can use drush, but before you execute it, you should change to the directory where Drupal is installed.  In a few of my sites this is /usr/share/drupal, but in this example for HostGator, it is ~/public_html. In order to use drush, cd to the directory where Drupal is installed:

cd ~/public_html
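
As a quick sanity check that drush can find your Drupal installation, you can run a couple of its sub-commands from that directory, for example (both appear in the help output below):

drush status
drush pm-list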

If drush executes successfully, running it without any arguments shows all the sub-commands available:
Execute a drush command. Run `drush help [command]` to view command-specific help.  Run
`drush topic` to read even more documentation.

Global options (see `drush topic core-global-options` for the full list):
 -d, --debug                               Display even more information, including    
                                           internal messages.                          
 -h, --help                                This help system.                           
 -ia, --interactive                        Force interactive mode for commands run on  
                                           multiple targets (e.g. `drush @site1,@site2 
                                           cc --ia`).                                  
 -n, --no                                  Assume 'no' as answer to all prompts.       
                                           The absolute path to your PHP interpreter,
                                           if not 'php' in the path.                   
 -p, --pipe                                Emit a compact representation of the        
                                           command for scripting.                      
 -r , --root=                  Drupal root directory to use (default:      
                                           current directory).                         
 -s, --simulate                            Simulate all relevant actions (don't        
                                           actually change the system).                
 -l ,             URI of the drupal site to use (only needed  
 --uri=           in multisite environments or when running   
                                           on an alternate port).                      
 -v, --verbose                             Display extra information about the         
 --version                                 Show drush version.                         
 -y, --yes                                 Assume 'yes' as answer to all prompts.      

Core drush commands: (core)
 archive-dump (ard,    Backup your code, files, and database into a single file.       
 archive-backup, arb)                                                                  
 archive-restore       Expand a site archive into a Drupal web site.                   
 cache-clear (cc)      Clear a specific cache, or all drupal caches.                   
 cache-get (cg)        Fetch a cached object and display it.                           
 cache-set (cs)        Cache an object expressed in JSON or var_export() format.       
 core-config (conf,    Edit drushrc, site alias, and Drupal settings.php files.        
 core-cron (cron)      Run all cron hooks in all active modules for specified site.    
 core-execute (exec,   Execute a shell command. Usually used with a site alias.        
 core-quick-drupal     Download, install, serve and login to Drupal with minimal       
 (qd)                  configuration and dependencies.                                 
 core-requirements     Provides information about things that may be wrong in your     
 (status-report, rq)   Drupal installation, if any.                                    
 core-rsync (rsync)    Rsync the Drupal tree to/from another server using ssh.         
 core-status (status,  Provides a birds-eye view of the current Drupal installation,   
 st)                   if any.                                                         
 core-topic (topic)    Read detailed documentation on a given topic.                   
 drupal-directory      Return path to a given module/theme directory.                  
 help                  Print this help message. See `drush help help` for more         
 image-flush           Flush all derived images for a given style.                     
 php-eval (eval, ev)   Evaluate arbitrary php code after bootstrapping Drupal (if      
 php-script (scr)      Run php script(s).                                              
 queue-list            Returns a list of all defined queues                            
 queue-run             Run a specific queue by name                                    
 search-index          Index the remaining search items without wiping the index.      
 search-reindex        Force the search index to be rebuilt.                           
 search-status         Show how many items remain to be indexed out of the total.      
 self-update           Check to see if there is a newer Drush release available.       
 shell-alias (sha)     Print all known shell alias records.                            
 site-alias (sa)       Print site alias records for all known site aliases and local   
 site-install (si)     Install Drupal along with modules/themes/configuration using    
                       the specified install profile.                                  
 site-reset            Reset a persistently set site.                                  
 site-set (use)        Set a site alias to work on that will persist for the current   
 site-ssh (ssh)        Connect to a Drupal site's server via SSH for an interactive    
                       session or to run a shell command                               
 test-clean            Clean temporary tables and files.                               
 test-run              Run tests. Note that you must use the --uri option.             
 updatedb (updb)       Apply any database updates required (as with running            
 usage-send (usend)    Send anonymous Drush usage information to statistics logging    
                       site.  Usage statistics contain the Drush command name and the  
                       Drush option names, but no arguments or option values.          
 usage-show (ushow)    Show Drush usage information that has been logged but not sent. 
                        Usage statistics contain the Drush command name and the Drush  
                       option names, but no arguments or option values.                
 variable-delete       Delete a variable.                                              
 variable-get (vget)   Get a list of some or all site variables and values.            
 variable-set (vset)   Set a variable.                                                 
 version               Show drush version.                                             
 watchdog-delete       Delete watchdog messages.                                       
 (wd-del, wd-delete)                                                                   
 watchdog-list         Show available message types and severity levels. A prompt will 
 (wd-list)             ask for a choice to show watchdog messages.                     
 watchdog-show         Show watchdog messages.                                         
 (wd-show, ws)                                                                         

Runserver commands: (runserver)
 runserver (rs)        Runs a lightweight built in http server for development. 

Field commands: (field)
 field-clone           Clone a field and all its instances.                         
 field-create          Create fields and instances. Returns urls for field editing. 
 field-delete          Delete a field and its instances.                            
 field-info            View information about fields, field_types, and widgets.     
 field-update          Return URL for field editing web page.                       

Project manager commands: (pm)
 pm-disable (dis)      Disable one or more extensions (modules or themes).           
 pm-download (dl)      Download projects from drupal.org or other sources.           
 pm-enable (en)        Enable one or more extensions (modules or themes).            
 pm-info (pmi)         Show detailed info for one or more extensions (modules or     
 pm-list (pml)         Show a list of available extensions (modules and themes).     
 pm-refresh (rf)       Refresh update status information.                            
 pm-releasenotes       Print release notes for given projects.                       
 pm-releases (rl)      Print release information for given projects.                 
 pm-uninstall          Uninstall one or more modules.                                
 pm-update (up)        Update Drupal core and contrib projects and apply any pending 
                       database updates (Same as pm-updatecode + updatedb).          
 pm-updatecode (upc)   Update Drupal core and contrib projects to latest recommended 

SQL commands: (sql)
 sql-cli (sqlc)        Open a SQL command-line interface using Drupal's credentials. 
 sql-connect           A string for connecting to the DB.                            
 sql-create            Create a database.                                            
 sql-drop              Drop all tables in a given database.                          
 sql-dump              Exports the Drupal DB as SQL using mysqldump or equivalent.   
 sql-query (sqlq)      Execute a query against the site database.                    
 sql-sync              Copy and import source database to target database. Transfers 
                       via rsync.                                                    

User commands: (user)
 user-add-role (urol)  Add a role to the specified user accounts.                   
 user-block (ublk)     Block the specified user(s).                                 
 user-cancel (ucan)    Cancel a user account with the specified name.               
 user-create (ucrt)    Create a user account with the specified name.               
 user-information      Print information about the specified user(s).               
 user-login (uli)      Display a one time login link for the given user account     
                       (defaults to uid 1).                                         
 user-password (upwd)  (Re)Set the password for the user account with the specified 
 user-remove-role      Remove a role from the specified user accounts.              
 user-unblock (uublk)  Unblock the specified user(s).                               

Other commands: (make)
 make                  Turns a makefile into a working Drupal codebase.  
 make-generate         Generate a makefile from the current Drupal site. 

What Kept Me Going

It was frustrating and difficult to migrate the existing websites because I was not involved in creating them.  At the same time, my girlfriend was frustrated that the company hosting her website was not responsive to her requests.  I had been reluctant to create a website for her, as I did not want to be the one who would have to be responsive to all of her requests.

I had reached a tipping point.  I decided that if I helped my girlfriend create a new site in Drupal, then I might gain a better understanding of the whole framework.  I was right!  While my girlfriend still has plenty of requests, I'm gradually teaching her how to use Drupal (and HTML) to solve them on her own.

About Me - WrightRocket


I've worked with computers for over 30 years, programming, administering, using and building them from scratch.

I'm an instructor for technical computer courses, an editor and developer of training manuals, and an Android developer.