Cross Platform Compile using Bazel

I've seen this come up often on the Bazel Discuss forums and Stack Overflow and wanted to make a quick blog post about it. For reference, all code is contained within this repo.

C-based projects often need to work cross-platform and require special linking or compiling settings to achieve this. The setup below shows how to configure your Bazel project so that Bazel does the "right thing" when compiling for the host platform.

Let's say you want to embed a Python application within your C application. Some of the challenges we will see when compiling are:

  • Python is installed in a different place on Windows and Linux, so we need Bazel to choose the right location
  • The Python library we need to link against is named differently on each host system (libpython3.5.so vs python35.lib)
  • The directory containing the Python header files differs slightly per host system

Setting Up the Bazel Workspace

First off, we need to tell Bazel about the installed Python. To tell Bazel about any third party dependency that is not checked into your git repository you typically use one of the Workspace Rules.

In our case we'll use new_local_repository because the directory already exists on our machine and it doesn't have its own Bazel BUILD file.

A completed WORKSPACE file to support Windows and Linux can be seen below:

new_local_repository(  
    name = "python_linux",
    path = "/usr",
    build_file_content = """
cc_library(  
    name = "python35-lib",
    srcs = ["lib/python3.5/config-3.5m-x86_64-linux-gnu/libpython3.5.so"],
    hdrs = glob(["include/python3.5/*.h"]),
    includes = ["include/python3.5"],
    visibility = ["//visibility:public"]
)
    """
)

new_local_repository(  
    name = "python_win",
    path = "C:/Python35",
    build_file_content = """
cc_library(  
    name = "python35-lib",
    srcs = ["libs/python35.lib"],
    hdrs = glob(["include/*.h"]),
    includes = ["include/"],
    visibility = ["//visibility:public"]
)
    """
)

To break this down, the rule for each host OS contains:

  • name: a unique name that identifies this third-party dependency
  • path: the path to the root of the external dependency
  • build_file_content: essentially the contents of a BUILD file for the third-party dependency. If this becomes too complex you can put it in its own BUILD file and reference it using build_file instead of build_file_content

The BUILD file contents consist of:

  • Name: a simple alias we can use to reference the library throughout Bazel
  • srcs: the actual precompiled library to link against
  • hdrs: a list of header files that anything compiling against will need
  • includes: an include path to add to anything compiling with this library
  • visibility: making them public allows any BUILD file in Bazel to reference them

Setting Up a Build

Now that Bazel knows about our precompiled libraries we need to create an application that links against them. I found a quick and dirty C program that runs embedded Python and created the main.c below:

#include "Python.h"

int main(int argc, char *argv[])
{
  /* Python 3 expects a wchar_t* program name, so decode argv[0] first */
  wchar_t *program = Py_DecodeLocale(argv[0], NULL);
  Py_SetProgramName(program);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print('Today is', ctime(time()))\n");
  Py_Finalize();
  PyMem_RawFree(program);
  return 0;
}

One thing to notice about the C program is that it does #include "Python.h". This works in our cross-platform build because each Python library exposes its include directory using the includes attribute of the cc_library rule.

To build this C program we can create the following BUILD file:

config_setting(  
    name = "linux_x86_64",
    values = {"cpu": "k8"},
    visibility = ["//visibility:public"],
)

config_setting(  
    name = "windows",
    values = {"cpu": "x64_windows"},
    visibility = ["//visibility:public"],
)

cc_binary(  
    name = "python-test",
    srcs = [
        "main.c",
    ],
    deps = select({
        "//:linux_x86_64": [
            "@python_linux//:python35-lib"
        ],
        "//:windows": [
            "@python_win//:python35-lib"
        ]
    })
)

The real power of Bazel here is in the config_setting and select() functionality.

The BUILD file starts by defining two config settings: the first tells Bazel that cpu=k8 means a Linux build is happening, and the second that cpu=x64_windows means a Windows build is happening. Bazel sets these values automatically when building on the corresponding hosts.

The cc_binary rule includes the main.c we mentioned earlier, but to actually reference the library it uses the select() function. The select() function takes a dictionary whose keys are configurations and whose values are the possible results; select() returns the value for whichever configuration is active. Here we use it to choose the appropriate library to link against. You could also use it to set custom copts, linkopts, srcs, or any other rule attribute.
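Concretely, once the BUILD file above is in place, the same build command works on both hosts and Bazel picks the matching select() branch automatically. You can also force a cpu value to sanity-check each branch (a command sketch; it assumes the BUILD file sits at the workspace root):

```
# On either host, Bazel resolves select() using the current --cpu value:
bazel build //:python-test

# Forcing the cpu explicitly to exercise each branch:
bazel build --cpu=k8 //:python-test           # Linux branch
bazel build --cpu=x64_windows //:python-test  # Windows branch
```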

Should I Just Check it In?

One other approach that simplifies some of the setup is to check the Python libraries and headers into your repository. There are loads of JARs and other precompiled code in the Bazel repository itself, so it's definitely a common practice. However, some teams may be reluctant to check so many binary artifacts into their repository. Either way, the select() function is still needed to choose the correct library at link time.

In conclusion, Bazel is a powerful tool and is changing how I manage large builds that need to work across platforms (and even architectures). With a little added logic, you can manage a cross-platform build easily with Bazel.


Running Vagrant Build on Jenkins

For my projects I make heavy use of Vagrant so that other developers can work in the same environment I am using. This also makes continuous integration easier because you don't need to worry about a build machine becoming out of date or maintaining the correct packages; everything is managed in each VM.

This blogpost explains how I'm using Vagrant + Jenkins for continuous integration.

Jenkins Job

We don't use any of the Vagrant plugins for Jenkins (they seemed like overkill); instead we just use the build script below to boot up Vagrant and launch our build.

#!/usr/bin/env bash

vagrant up  
vagrant rsync  
vagrant provision

vagrant ssh -c "cd /vagrant; ./build.sh"  
result=$?

vagrant suspend

exit ${result}  

All that's really happening here is booting up the VM, provisioning it, and then executing a build.sh script from the project (this can be your equivalent build or test script). The VM is suspended at the end of the job to make bring-up faster during the next execution. The exit code from executing the build script is saved so Jenkins can correctly mark the job as failing or passing.
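One subtlety worth calling out is the order of operations around the exit code: the script saves $? into result immediately after the vagrant ssh step, because vagrant suspend would otherwise overwrite it. A minimal sketch of the pattern, with stand-ins for the Vagrant commands:

```shell
( exit 3 )            # stand-in for `vagrant ssh -c "cd /vagrant; ./build.sh"` failing with code 3
result=$?             # capture the build result before anything else runs
:                     # stand-in for `vagrant suspend`, which succeeds and would reset $?
echo "build exit code: ${result}"
```

Exiting with ${result} at the end is what lets Jenkins mark the job red or green based on the build inside the VM, not on whether the suspend succeeded.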

Modifying Vagrantfile to Make Use of Jenkins Resources

If you have dedicated Linux machines running your Vagrant-based builds, this additional tip may be of interest. We have a pool of servers with much more memory than our typical development environments. We want the Vagrant VMs to make use of all those resources when running on our Jenkins servers, but not on our development machines.

Because a Vagrantfile is a Ruby script, we can make it gather information about its environment before building the VM.

One assumption made is that Jenkins will always run Vagrant-based jobs under the username jenkins. If your environment is different it should be easy to adapt; you just need a programmatic way to determine you are running on Jenkins rather than on a development machine.

The Vagrantfile checks which user is running the machine. If it is a user named jenkins, it determines the number of CPUs and amount of memory on the machine and grows the VM accordingly. The VM also uses rsync for sharing content rather than VirtualBox shared folders, which perform poorly.

# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'etc'

Vagrant.configure(2) do |config|

  config.vm.box = "ubuntu/trusty64"

  if Etc.getlogin == "jenkins" then
    cpus = Etc.nprocessors - 1
    # Get total memory by calling "free" command and parsing output
    total_memory = %x(free -m).split(" ")[7].to_i
    memory = total_memory - 1024
    # Use rsync for syncing
    config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"
  else
    memory = 1024
    cpus = 1
  end

  config.vm.provider "virtualbox" do |vb|
    vb.memory = memory
    vb.cpus = cpus
    vb.linked_clone = true
  end
end  

Overall, this is a pretty simple setup for Jenkins + Vagrant that doesn't rely on any plugins.


Home IoT Network (Part 1)

The goal for this project is to create a simple home IoT network using BLE (Bluetooth Low Energy). I'm using a BeagleBone Black to act as a central hub for the IoT topology and the TI CC2650 for BLE compatible devices to collect sensor data and transmit that data to the hub.

This first part of the project will work on setting up the BeagleBone Black (BBB) with a usable Debian image with BLE capabilities.

Getting Started with BeagleBone Black

This section of the guide will help you get a BeagleBone Black set up and running Debian Jessie with Bluetooth support.

First off, make sure you have the required hardware ready.

For starters, I'm not relying on bridging my internet connection over USB to give the BBB access to the internet. Instead I'm using a 5V supply and an Ethernet cable to give it its own dedicated power source and connection.

Getting the Image

Head to https://beagleboard.org/latest-images and download the latest Debian image for the BeagleBone Black. I'm using the Debian 8.5 2016-05-13 4GB SD LXQT image for my project. This getting started guide gives a good overview on how to write this to a MicroSD card. If you're like me and running with a Linux host you can just execute the following (change /dev/sdX to be your MicroSD card).

sudo dd if=./bone-debian-8.4-lxqt-4gb-armhf-2016-05-13-4gb.img of=/dev/sdX status=progress  

Once the image has been written to the card we can boot from it. Simply insert the MicroSD card into the BBB and plug it in. After a while the device should be booted and you will see a "heartbeat" LED blinking.

Setting up the BBB

Now we should be able to log in to the BBB! Go ahead and try SSHing into the machine using the username debian and password temppwd. Notice I'm using beaglebone.local as the hostname; you can also use the IP address of the device if you know it.

ssh debian@beaglebone.local  

There are two ways to run an operating system on the BBB: one is the eMMC and the other is the MicroSD card. We will continue to use the MicroSD card because it gives us more space for our application. However, the *.img file we wrote to the SD card earlier was only 4GB; thankfully we can expand the OS to use the whole card. Execute the following to do so (taken from elinux.org):

cd /opt/scripts/tools/  
git pull  
sudo ./grow_partition.sh  
sudo reboot  

After the reboot occurs, log back into the device and update all packages so that our system will be up to date.

sudo apt-get update  
sudo apt-get dist-upgrade  

We now have a fresh BBB image using our whole MicroSD card. Next step will be making sure Bluetooth works.

Getting Started with Bluetooth

For this part of the project I was mostly interested in ensuring that my Bluetooth adapter works with the BBB. Later in the project I will focus on connecting devices programmatically and reading data.

Now that you have a fresh BBB it's time to ensure Bluetooth is working. If you haven't already, plug in the USB Bluetooth adapter. I recommend rebooting after plugging it in to allow the system to detect it properly; many users have reported issues with "hot plugging" the adapter while the OS is running.

First off, make sure you have the bluetooth metapackage installed.

sudo apt-get install bluetooth  

Using the Debian 8.5 2016-05-13 image it was already installed for me, so don't be surprised if it's ready to go for you too. Next, let's check whether the Bluetooth adapter is working with the BBB and the Debian system.

Run the following to see if the adapter can be found.

sudo hcitool dev  

Next let's scan for Bluetooth devices. This will return the addresses of all Bluetooth devices within range.

sudo hcitool lescan  

Results from my scan

If both of these execute correctly then we have some confidence that our Debian operating system can use the USB Bluetooth adapter.

In the next blog post I plan on setting up the TI CC2650 BLE device for reading data. I'll be using the CC2650 as the sensor nodes in the IoT network, communicating with the BeagleBone Black hub we have just set up.


Using Vagrant to Virtualize Multiple Hard Drives

Recently I have been working on upgrading my home server, where my wife and I back up our files and store our media. The server is really just a spare desktop with some hard drives, not specialized server hardware, but it serves our purpose well.

I built the machine in 2012, so by now it is running an older version of Ubuntu (12.04). I got some new hard drives to expand the storage and felt it was a good time to also upgrade the software.

My main goal was to make the backup system more robust. After reading around on other blogs and reddit I settled on a solution that uses mergerfs to pool the drives at a central mount point and SnapRAID to maintain a parity disk for backups. This was a good fit because I don't really care about real-time backups; SnapRAID can run weekly and update the parity disk (if I store something critical, I can always invoke SnapRAID manually). For more information on SnapRAID/mergerfs I recommend this article by Linux Server IO.
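To make that concrete, here is a rough sketch of the two pieces; every path and disk name below is a hypothetical example, not my actual configuration:

```
# /etc/snapraid.conf (sketch): one parity disk, two data disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

# /etc/fstab (sketch): mergerfs pools the data disks at one mount point
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other  0  0
```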

My next task was to model all this using Vagrant + Ansible. I essentially wanted a Vagrant environment I could use to test the new OS, the new disk configuration, and my Ansible playbook for provisioning the machine. This will be helpful in the future too: when I want to install something new on my server I can run the changes on the VM and test before pushing the update to the real server.

To write and test my Ansible playbook and my mergerfs/SnapRAID configuration accurately, I needed to virtualize multiple hard drives using Vagrant and VirtualBox. My final server has 5 hard drives in total, and I wanted to make sure the migration would go smoothly.

Below is the complete Vagrantfile I ended up with to attach additional hard drives to a Vagrant VM:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|  
  config.vm.box = "geerlingguy/ubuntu1604"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network "private_network", ip: "192.168.30.200"

  parityDisk = './parityDisk.vdi'
  dataDisk1 = './dataDisk1.vdi'
  dataDisk2 = './dataDisk2.vdi'

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"

    # Building disk files if they don't exist
    if not File.exists?(parityDisk)
      vb.customize ['createhd', '--filename', parityDisk, '--variant', 'Fixed', '--size', 10 * 1024]
    end
    if not File.exists?(dataDisk1)
      vb.customize ['createhd', '--filename', dataDisk1, '--variant', 'Fixed', '--size', 10 * 1024]
    end
    if not File.exists?(dataDisk2)
      vb.customize ['createhd', '--filename', dataDisk2, '--variant', 'Fixed', '--size', 10 * 1024]

      # Adding a SATA controller that allows 4 hard drives
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata', '--portcount', 4]
      # Attaching the disks using the SATA controller
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', parityDisk]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', dataDisk1]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', dataDisk2]
    end
  end

  config.vm.provision "shell", inline: <<-SHELL
    sudo mkfs.ext4 /dev/sdb
    sudo mkfs.ext4 /dev/sdc
    sudo mkfs.ext4 /dev/sdd
  SHELL

end  

The first thing in the Vagrantfile of interest is declaring the disk names that we will use. These files will appear in your working directory (I recommend adding them to your .gitignore).

parityDisk = './parityDisk.vdi'  
dataDisk1 = './dataDisk1.vdi'  
dataDisk2 = './dataDisk2.vdi'  

Now we need the actual VirtualBox commands to create these hard drive files. The snippet below checks whether the hard drive file exists and, if not, executes the "createhd" subcommand of the VirtualBox command line to create it. In my case I am creating a fixed-size hard drive of 10GB.

if not File.exists?(parityDisk)  
      vb.customize ['createhd', '--filename', parityDisk, '--variant', 'Fixed', '--size', 10 * 1024]
end  

After creating the last hard drive we need to attach the drives to the virtual machine. To do that, we create a SATA storage controller and use it to attach the hard drive files to the VM. Note that the SATA controller supports 4 disks and that each drive is attached on a different port.

if not File.exists?(dataDisk2)  
      vb.customize ['createhd', '--filename', dataDisk2, '--variant', 'Fixed', '--size', 10 * 1024]

      # Adding a SATA controller that allows 4 hard drives
      vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata', '--portcount', 4]
      # Attaching the disks using the SATA controller
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', parityDisk]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', dataDisk1]
      vb.customize ['storageattach', :id,  '--storagectl', 'SATA Controller', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', dataDisk2]
end  

Now that the drives are attached to the VM we want to format them so they are ready to use. The shell provisioner executes the commands to format the drives as ext4.

config.vm.provision "shell", inline: <<-SHELL  
    sudo mkfs.ext4 /dev/sdb
    sudo mkfs.ext4 /dev/sdc
    sudo mkfs.ext4 /dev/sdd
SHELL  
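After vagrant up completes, one quick way to confirm the extra drives made it into the VM is to list the block devices from inside it (device names assume the SATA port ordering above; output will vary):

```
vagrant ssh -c "lsblk -o NAME,SIZE,TYPE"
# sdb, sdc and sdd should each show up as 10G disks alongside the root disk
```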

Overall, this method worked well for me and I was able to write and test my larger Ansible playbook on a VM closer to my targeted production hardware. Using the vb.customize API of Vagrant allows you to do some complex setups for virtual machines.


Automated Deployment of MSP430 Firmware (Part 2)

I recommend reading Part 1 to get a general understanding of what this project is trying to achieve and the first steps we took to get our system in place.

Now that we have all the tools needed to deploy firmware, we can start to build an Ansible playbook to automate deployment of new firmware to the MSP430s on the network.

Looking back at the incredibly detailed flowchart I presented in the first section:
Deployment Flowchart

We have compiled the tools to flash the MSP430F5529 firmware using the Raspberry Pi, now we will be setting up the Ansible playbook to automate the firmware deployment.

The hardware setup I'm using consists of two Raspberry Pis (one Raspberry Pi B and one Raspberry Pi 2), along with two MSP-EXP430F5529LP connected over USB to the Raspberry Pis. The setup:
Setup

If you want to skip ahead or run into issues along the way, check out the completed Ansible playbook on my GitHub page.

I highly recommend going through the Ansible docs to learn more about how Ansible works and best practices. The rest of the tutorial will just target our use case and explain the playbook I set up.

Installing Ansible

First off, we will need to install Ansible, the tool we will use to manage the Raspberry Pis on the network. The Ansible Docs are a great resource and should provide steps on how to install in your environment.

Hopefully this is pretty straightforward. I will be running on an Ubuntu machine, but the rest of this post should be the same regardless of OS.

Setting Up Hosts File

First, create a folder to hold our playbook. The first file we want to create is the hosts file, which lists the machines we will use to deploy the firmware. Go ahead and create a file called hosts in that directory and add something like this:

[pis]
192.168.0.47 ansible_ssh_user=pi  
192.168.0.48 ansible_ssh_user=pi  

You will need to swap in the IP addresses or hostnames of the machines you will be using. If you're only using one machine, that's fine; Ansible works the same way no matter how many machines are in the system. Also note the ansible_ssh_user keyword next to each machine: it defines the user used to SSH into the machine, so change it if you're not using the default Raspbian user. Ansible configures machines over SSH.

After creating the file I recommend pinging the machines to ensure ssh is working and Ansible can communicate with the machines. Try running:

ansible -i hosts all -m ping  

Let's break down this command. First we tell Ansible to use our hosts file instead of the default inventory with -i hosts. Next we tell it to run against all hosts found in that file. Finally we define the module to run with -m ping.
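Ad-hoc commands like this also work with other modules and with group targeting from the hosts file, which is a quick way to sanity-check the inventory (an illustration, not part of the playbook):

```
# Run an arbitrary command on just the [pis] group:
ansible -i hosts pis -m shell -a "uname -a"
```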

Setting Up Roles

Now that Ansible can communicate with the Raspberry Pis on the network, we can define a playbook to deploy the firmware. Our playbook consists of two roles that machines can be assigned to. The first is a "common" role, which installs some useful packages and settings we want on all machines. The second is an "mspdebug" role, designed to be reusable: it installs all the dependencies needed to get mspdebug working on a Raspberry Pi, including the mspdebug binary we built earlier.

The "Common" Role

For this simple example, every machine we configure also deploys firmware. In a more complex setup, you might keep one machine as a "build" machine for building the firmware or mspdebug; that machine would need a different set of packages and environment. This is where the "common" role comes into play: it defines configuration that applies to every machine in the setup, not just the ones dedicated to firmware deployment.

So start off by creating a directory for the common role and the tasks we will be creating.

mkdir -p roles/common/tasks  

In this directory we will create a main.yml file that defines the tasks to complete. Here is my example:

---
# Generic things to get machines up to date and usable

- name: Install packages
  apt:
    pkg={{item}}
    state=installed
  sudo: yes
  with_items:
    - vim
    - zsh
    - git
    - tmux
    - htop

- name: Change default shell
  user:
    name=pi
    shell=/usr/bin/zsh
  sudo: yes

This configuration is pretty personal: just some general packages I want installed if I need to work on a machine directly. The first task installs useful optional packages and the second changes the default shell to zsh. Feel free to adjust these to your preference. The playbook uses the apt module and the user module to install packages and configure the user.

Ideally our firmware deployment system will be completely automated... but if issues do crop up, having a decent toolset already installed on every machine makes debugging easier. That's where the common role comes in handy.

The "MSPDebug" Role

The next role our Raspberry Pis will use is a mspdebug role. This role will use the mspdebug binary we built in the first tutorial to actually program our firmware to the MSP430 LaunchPads.

mkdir -p roles/mspdebug/tasks  

In this directory we will have two separate playbooks: one to install our mspdebug application and one to deploy the firmware. This separation means you can update the firmware without reinstalling mspdebug, and update mspdebug without reflashing the connected LaunchPad.

We also need to create a files directory within the role to hold the binaries we'll be copying over to the remote machines (the copy module looks there for its src files):

mkdir -p roles/mspdebug/files  

Looking back at my example on GitHub, you can see I have checked in the binaries we built in the last post.

In the tasks directory for mspdebug we can create an install.yml file that installs mspdebug. Here is my example:

---
# Playbook to install msp debug

- name: Install libmsp430.so
  copy:
    src=libmsp430.so
    dest=/usr/local/lib
  sudo: yes

- name: Install mspdebug
  copy:
    src=mspdebug
    dest=/usr/local/bin
    mode=775
  sudo: yes

- name: Add /usr/local/lib to LD_SEARCH_PATH
  lineinfile:
    dest=/etc/ld.so.conf
    line=/usr/local/lib
    state=present
  sudo: yes
  register: ld

- name: Rebuild LD cache
  command: /sbin/ldconfig
  sudo: yes
  when: ld.changed

This playbook is broken into 4 steps.

  1. Copy the libmsp430.so library to /usr/local/lib
  2. Copy the mspdebug binary to /usr/local/bin and make it executable
  3. Add /usr/local/lib to the library search path
  4. Rebuild the library path if we needed to change the library search path

In this playbook I'm using the copy module, which simply copies files to the remote servers; the lineinfile module, for simple edits to a text file on remote systems; and the command module, to execute arbitrary commands. Once the installation playbook has run, all dependencies are met and the Raspberry Pi is ready to flash an MSP430.

Alongside the install.yml playbook I also created an update_firmware.yml playbook to actually run the commands to download to the MSP430. Here is my playbook implementation:

---
# Playbook to update firmware on connected devices

- name: Create a directory to store the firmware
  file:
    path=/var/ansible
    state=directory
    owner=pi
  sudo: yes

- name: Copy the firmware to the hosts
  copy:
    src={{ firmware_name }}.out
    dest=/var/ansible

- name: Download the firmware to the devices
  shell: mspdebug tilib --allow-fw-update --force-reset "prog /var/ansible/{{ firmware_name }}.out"
  sudo: yes

The new feature in this playbook is the use of Ansible variables. Here the {{ firmware_name }} variable substitutes the filename of the firmware image. This way, when a new firmware image comes along I can just update the variable and it will propagate through the playbook.
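For that variable to resolve, it has to be defined somewhere Ansible looks, such as a group_vars/all file next to the playbook. A minimal sketch (the image name here is just an example):

```
---
# group_vars/all - variables applied to every host in the inventory
firmware_name: Blink
```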

Finally we need a high level playbook to map the roles to our different hosts. In our example every host we connect to will be deploying firmware. My top level playbook site.yml consists of the following:

---
# This playbook deploys firmware to all hosts

- name: Configure and deploy firmware
  hosts: all
  remote_user: pi
  roles:
    - common
    - mspdebug

This playbook simply tells ansible that all hosts will have the "common" and "mspdebug" role.

If you're following along I recommend comparing your own setup to my GitHub repository.

Running the Code

Now that we have all of our Ansible configuration written and the tools we need compiled for the Raspberry Pi, we can start the automatic deployment of our firmware.

To recap, the playbook will do the following:

  1. Install some common tools on all the Pis (vim, tmux, etc...)
  2. Install an ARM compiled mspdebug on all the Pis, along with associated libraries
  3. Copy the firmware to the Raspberry Pis
  4. Invoke mspdebug with the firmware and flash the MSP430

To run the playbook we can invoke it using ansible-playbook:

ansible-playbook -i hosts site.yml  

This tells Ansible which hosts file to use and which playbook to run. When we are ready to move to a new firmware version we can either:

  1. Update the group_vars/all file to include the name of the new firmware image
  2. Pass the name of the new firmware image on the command line when invoking ansible-playbook.

To pass the firmware image name as a variable on the command line you can invoke:

ansible-playbook -i hosts site.yml --extra-vars "firmware_name=Blink_v2"  

To demo the setup in action, I programmed two different firmware images: one that blinks a red LED and one that blinks a green LED.

Next Steps

There are a ton of ways to improve this setup. Some next steps might be:

  • Flash multiple MSP430s connected to a single Raspberry Pi
  • Dedicate a Raspberry Pi on the network to building mspdebug and needed libraries. Once an update occurs, push that to all Pis in the system
  • Keep a database of which MSP430s are connected to which Pis, deploy different firmware to different connected devices
  • Integrate with a Continuous Integration tool like Jenkins, so that if all Jenkins tests pass, the firmware gets automatically deployed
  • Flash a device over BSL using the GPIO pins from the Raspberry Pi rather than the on board debugger on the LaunchPad
  • Use ansible tags to skip parts of the playbook that do not need to be run every time when deploying firmware
