Using Vagrant to Virtualize Multiple Hard Drives

Recently I have been working on upgrading my home server. I keep a home server where my wife and I can back up our files and store our media. The server is really just a spare desktop with some hard drives, not specialized server hardware, but it serves our purpose well.

I built the machine in 2012, so by now it is running an older version of Ubuntu (12.04). I got some new hard drives for the server to expand the storage and felt it was a good time to also upgrade the software.

My main goal was to make the backup system more robust. After reading around on other blogs and Reddit I settled on a solution that uses mergerfs to pool the drives under a single mount point and SnapRAID to maintain a parity disk for backups. This was a good fit for me because I don't really care about real-time backups of the data, so SnapRAID can run weekly and update the parity disk (if I store something critical, I can always invoke SnapRAID manually). For more information on SnapRAID/mergerfs I recommend this article by Linux Server IO.
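
To give a rough idea of how the two pieces fit together, here is a minimal sketch; the mount points and file locations are placeholders for illustration, not my actual server config. mergerfs pools the data disks under one mount point via an /etc/fstab entry, and SnapRAID is pointed at those same disks plus the parity disk through snapraid.conf.

# /etc/fstab - pool the data disks under /mnt/storage with mergerfs
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other  0  0

# /etc/snapraid.conf - parity on its own disk, content files on the data disks
parity /mnt/parity/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

A weekly cron job (or a manual run after storing something critical) then just calls snapraid sync to update the parity.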

My next task was to model all this using Vagrant + Ansible. I essentially wanted a Vagrant environment I could use to test the new OS, the new disk configuration, and my Ansible playbook for provisioning the machine. This will be helpful in the future as well: when I want to install something new on my server, I can run the changes on the VM and test before pushing the update to the real machine.

To accurately write and test my Ansible playbook and my configuration for mergerfs and SnapRAID, I needed to virtualize multiple hard drives using Vagrant and VirtualBox. My final server has five hard drives in total, and I wanted to make sure the migration would go smoothly.

Below is the complete Vagrantfile I ended up with to attach additional hard drives to a Vagrant VM:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|  
  config.vm.box = "geerlingguy/ubuntu1604"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network "private_network", ip: "192.168.30.200"

  parityDisk = './parityDisk.vdi'
  dataDisk1 = './dataDisk1.vdi'
  dataDisk2 = './dataDisk2.vdi'

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"

    # Building disk files if they don't exist
    if not File.exists?(parityDisk)
      vb.customize ['createhd', '--filename', parityDisk, '--variant', 'Fixed', '--size', 10 * 1024]
    end
    if not File.exists?(dataDisk1)
      vb.customize ['createhd', '--filename', dataDisk1, '--variant', 'Fixed', '--size', 10 * 1024]
    end
    if not File.exists?(dataDisk2)
      vb.customize ['createhd', '--filename', dataDisk2, '--variant', 'Fixed', '--size', 10 * 1024]

      # Adding a SATA controller that allows 4 hard drives
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata', '--portcount', 4]
      # Attaching the disks using the SATA controller
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', parityDisk]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', dataDisk1]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', dataDisk2]
    end
  end

  config.vm.provision "shell", inline: <<-SHELL
    sudo mkfs.ext4 /dev/sdb
    sudo mkfs.ext4 /dev/sdc
    sudo mkfs.ext4 /dev/sdd
  SHELL

end  

The first thing of interest in the Vagrantfile is declaring the names of the disk files we will use. These files will appear in your working directory (I recommend adding them to your .gitignore).

parityDisk = './parityDisk.vdi'  
dataDisk1 = './dataDisk1.vdi'  
dataDisk2 = './dataDisk2.vdi'  

Now we need to specify the actual VirtualBox command to create these hard drive files. The snippet below checks whether the hard drive file exists and, if not, executes the "createhd" command of the VirtualBox command line to create it. In my case I am creating a fixed-size hard drive of 10GB.

if not File.exists?(parityDisk)  
      vb.customize ['createhd', '--filename', parityDisk, '--variant', 'Fixed', '--size', 10 * 1024]
end  
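
If you want to double-check that the disk images were actually created and registered after the first vagrant up, VirtualBox can list them (this is just an optional sanity check, not part of the Vagrantfile):

VBoxManage list hdds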

After creating the last hard drive we need to attach the disks to the virtual machine. To do that, we create a SATA storage controller and use it to attach the hard drive files to the VM. Note that the SATA controller is configured for 4 ports, and when attaching the storage I assign a different port to each drive.

if not File.exists?(dataDisk2)  
      vb.customize ['createhd', '--filename', dataDisk2, '--variant', 'Fixed', '--size', 10 * 1024]

      # Adding a SATA controller that allows 4 hard drives
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata', '--portcount', 4]
      # Attaching the disks using the SATA controller
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', parityDisk]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', dataDisk1]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', dataDisk2]
end  
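
Once the VM is up, a quick way to confirm the extra disks made it into the guest is to list the block devices over SSH; you should see sdb, sdc, and sdd alongside the boot disk:

vagrant ssh -c "lsblk"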

Now that the drives are attached to the VM we want to format them so they are ready to use. The shell provisioner can be used to execute the correct commands to format the drives as EXT4.

config.vm.provision "shell", inline: <<-SHELL  
    sudo mkfs.ext4 /dev/sdb
    sudo mkfs.ext4 /dev/sdc
    sudo mkfs.ext4 /dev/sdd
SHELL  
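
If you also want the new drives mounted and ready to use inside the VM, the same provisioner could be extended along these lines (the mount points here are placeholders for illustration, not the layout on my real server):

sudo mkdir -p /mnt/parity /mnt/disk1 /mnt/disk2
sudo mount /dev/sdb /mnt/parity
sudo mount /dev/sdc /mnt/disk1
sudo mount /dev/sdd /mnt/disk2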

Overall, this method worked well for me, and I was able to write and test my larger Ansible playbook on a VM much closer to my target production hardware. Vagrant's vb.customize API lets you build some fairly complex virtual machine setups.


Automated Deployment of MSP430 Firmware (Part 1)

I have now completed Part 2 of the article, where I discuss the Ansible playbook to deploy the firmware.

I have recently started looking into all the DevOps tools that show up constantly on Hacker News and Reddit lately. I use Linux on all my machines around the apartment (mostly Debian-based) and wanted a better way to control and configure them. As someone who works with Python often, Ansible caught my eye. After reading about this cool demo, I wanted to try out Ansible on a Raspberry Pi cluster of my own, but also add in some MSP430s getting programmed.

For those who don't know, the MSP430 is an ultra-low-power microcontroller made by Texas Instruments. A great starting point for information is the TI LaunchPad website.

As a side project I sought out an automated way to deploy MSP430 firmware images to MSP430F5529 LaunchPads connected to Raspberry Pis. The master Ansible server is responsible for compiling the MSP430 firmware images; it then uses Ansible to push the images to the Raspberry Pis, which program the firmware onto their attached LaunchPads.

Here's a simple organization chart showing how the firmware images flow.
[Figure: organization chart of the firmware image flow]

Since the Raspberry Pi will be doing the programming of the MSP430F5529, it needs a tool for downloading firmware images. MSPDebug is a command-line tool for programming MSP430s, and the rest of this post covers compiling it for use on the Raspberry Pi.

All the following compiling steps were run on a Raspberry Pi 2 running Raspbian.

Compiling HIDAPI

The first step will be compiling HIDAPI. From their GitHub page, HIDAPI is described as "A Simple library for communicating with USB and Bluetooth HID devices on Linux, Mac, and Windows." The library is used by libmsp430.so that we will be building later on.

There are a few dependencies I found I needed to build the library, so go ahead and install those first.

sudo apt-get update  
sudo apt-get install libusb-1.0-0-dev  
sudo apt-get install libudev-dev  

Now we can download the source code for version 0.7.0 from their GitHub Releases Page. We'll download and build in a "build" directory.

mkdir ~/build  
cd ~/build  
wget https://github.com/signal11/hidapi/archive/hidapi-0.7.0.zip  
unzip hidapi-0.7.0.zip  

After extracting we can now build the library.

cd hidapi-hidapi-0.7.0/linux  
make -j4 CXXFLAGS="-Wall -g -lpthread -lrt"  

If the make successfully completes you should have a hid-libusb.o file located in your current directory.
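
If you want to be sure the object file was built for the Pi's ARM architecture, the file utility will tell you (purely an optional check):

file hid-libusb.o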

Compiling MSPDebugStack

Disclaimer: Texas Instruments does not officially support the MSPDebugStack on Raspbian or the Raspberry Pi. This is meant as a learning exercise rather than a production solution.

The next step will be compiling the MSPDebugStack from Texas Instruments. The source is available from the TI website.

There are a few dependencies we are going to need to install for this as well. Go ahead and run the following:

sudo apt-get install libasio-dev  
sudo apt-get install libboost-all-dev  

Now we can download and extract the source:

cd ~/build  
wget http://www.ti.com/lit/sw/slac460k/slac460k.zip  
unzip slac460k.zip  

Now we need to copy the hidapi library we built previously and the hidapi header file.

cd MSPDebugStack_OS_Package/ThirdParty  
mkdir include lib  
cp -p ~/build/hidapi-hidapi-0.7.0/linux/hid-libusb.o lib  
cp -p ~/build/hidapi-hidapi-0.7.0/hidapi/hidapi.h include  

Now that we have copied the dependencies, we can build the libmsp430 library from the root of the package.

cd ..  
make -j4 STATIC=1  

Go ahead and grab a beer because this will probably take a while on your Raspberry Pi. Once it's done you will have a libmsp430.so in your current directory; run the following to copy the library to your library path.

sudo make install  
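
Assuming the default install prefix of /usr/local, you can confirm the library landed in the right place with:

ls -l /usr/local/lib/libmsp430.so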

Compiling MSPDebug

The final piece will be compiling MSPDebug itself. This is the tool that will actually program the MSP430F5529 in our automated deployment.

First let's install the dependencies:

sudo apt-get install libusb-dev  
sudo apt-get install libreadline-dev  

Now we can download the source and extract.

cd ~/build  
wget http://downloads.sourceforge.net/project/mspdebug/mspdebug-0.23.tar.gz  
tar -zxvf mspdebug-0.23.tar.gz  

Now we can compile the source:

cd mspdebug-0.23  
make -j4  

Once the process is complete you will have an executable file, mspdebug. Go ahead and install it to your PATH by running:

sudo make install  

Testing MSPDebug

Now that everything has been compiled to run on a Raspberry Pi we can finally connect to our device using mspdebug!

Go ahead and plug in your MSP430F5529 LaunchPad and then launch mspdebug with the following commands. The first puts the libmsp430.so we compiled previously on the linker library path, and the second launches mspdebug. The arguments tell mspdebug to use the TI library (the one we built) and to allow a firmware update if the debugger firmware is out of date.

export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}  
mspdebug tilib --allow-fw-update  

If everything works you should see something like this:
[Screenshot: MSPDebug launched and connected]

This indicates you are now connected to the MSP430!

Next Steps

Next we will be using Ansible to distribute MSP430F5529 firmware images to a group of Raspberry Pis which will then use the mspdebug tool to download to their connected LaunchPads.
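
As a preview of that step, mspdebug can also be driven non-interactively by passing commands on the command line, which is what makes it easy to script from Ansible. A minimal example (blink.elf is just a placeholder firmware image name):

export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
mspdebug tilib "prog blink.elf"

mspdebug programs the image and then exits, which is exactly what an automated deployment needs.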

If you had trouble following any of the steps or have suggestions/improvements for the guide, please leave a comment below! If you're having trouble getting it to work and just want the binaries, they are checked into my GitHub repository.
