Publishing Data to Azure Event Hubs from Particle Core using Webhooks

The creators of the most excellent Spark Core device are now known as Particle!  Please keep in mind as a reader that the following information applies to the Spark Core device as well as the Particle Core.

In a previous post I discussed sending messages from a Spark Core device to Azure Event Hubs by means of an Azure Mobile Service proxy.  That solution raises a variety of security concerns and is a bit cumbersome to implement.  It was then revealed to me that Spark (now Particle) offers a WebHooks service that can trigger a request to a remote endpoint.  I had the pleasure of working with David Middlecamp from Particle’s engineering team to create an extension for enabling Azure Event Hubs through this service.  In this post, I will guide you through setting up a Particle WebHook capable of sending data to an Azure Event Hub.



  1. Create an Azure Event Hub and configure a Shared Access Policy with the Send permission enabled
  2. Install the Particle CLI tool on your operating system of choice (You may skip the portion on enabling DFU-Util)
  3. Create a new file named webhook.json with a structure similar to the following (See: Particle Webhooks Documentation):
Note: There is a maximum send size of 255 characters from the Particle Core device, so keep this in mind when naming variables!  Also, the “azure_sas_token” field is very important, as it is used server-side by the WebHooks service to appropriately forward requests to your Event Hub’s REST API.
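As a sketch, webhook.json might look like the following.  The field names besides “azure_sas_token” are illustrative placeholders here; consult the Particle Webhooks documentation for the exact schema, and substitute your own namespace, hub name, and Shared Access Policy values:

```json
{
  "event": "NAME_OF_YOUR_EVENT",
  "url": "https://YOUR_NAMESPACE.servicebus.windows.net/YOUR_EVENT_HUB/messages",
  "requestType": "POST",
  "azure_sas_token": {
    "key_name": "YOUR_SEND_POLICY_NAME",
    "key": "YOUR_SEND_POLICY_KEY"
  },
  "mydevices": true
}
```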
  1. Launch the Particle CLI tool (Open CMD prompt on Windows) and type “particle login” then login with your credentials
  2. Navigate to the folder containing webhook.json and type “particle webhook create webhook.json”
  3. Verify your webhook was created with “particle webhook list”
  4. Now in the Particle Web IDE you can send data to your Azure Event Hub using Spark.publish("NAME_OF_YOUR_EVENT", payload);
  5. Verify your data is sending appropriately from the Particle CLI tool by running “particle subscribe mine”

Conclusion: The Particle Core is an excellent device for microcontroller prototyping, especially where web connectivity is required.  I have found my Particle Core devices to be extremely resilient, remaining 100% operational after non-stop use for months at a time.  The device is even smart enough to reconnect if the network goes out.  As a result, I believe this device to be an excellent contender in the Internet of Things space.  Furthermore, now that the device supports Azure Event Hubs via WebHooks, one can relatively easily craft a scenario involving up to 1 million messages per second coming in for processing via Microsoft Azure.  For a full working example, check out this implementation of Particle WebHooks in the open-source ConnectTheDots project from MSOpenTech.

Streaming Xbox One Games to Windows 10 Preview – Tutorial

Earlier today, a variety of exciting announcements were made from Microsoft at this year’s E3 gaming conference regarding Xbox One.  These include backwards compatibility with Xbox 360, playing Xbox One games on Oculus Rift, and streaming games to Windows 10 devices!  I was unaware that this was already available until it was alluded to by a tweet from XboxQwik.  Having the Preview bits for Windows 10 already installed, I decided to explore and wound up figuring out how to enable Xbox One streaming to your Windows 10 device!




Please keep in mind, Xbox One to Windows 10 streaming is currently in preview and subject to change.  For more information, check the official streaming FAQ.  It is very exciting to see the direction the Xbox team has taken with integrating into Windows 10.  I can’t wait to see the final product when Windows 10 is officially released in late July!  Feel free to post your findings and experience in the comments!

Happy Hacking and Game On!

RPi + WinPhone + MS Band + Azure + Excel + Audio-Controlled LEDs = Hot Tub Time Machine From the Future

Hot Tub Time Machine from The Future – Music Entertainment System

The Internet of Things and Houston weather have one thing very much in common.  They are sooooo hot right now!  Inspired by this, I have been thinking a lot about outdoor projects that interact with the cloud, for example my recent Spark Core powered Hot Tub monitor.  This trend is only just now beginning to take off with plenty of exciting projects forming in the space including Rachio’s IoT sprinkler system and this most excellent homebrew soil monitor running on an Intel Edison.  These examples highlight how we can operate on data to produce interactions and inferences which apply to the physical world.  This, I believe, is the core of IoT’s ability to change our lives in the future.

I propose that if the Internet of Things is the future, then projects which incorporate it bring the future to those “things” involved.  Deriving from my personal passion for music and entertaining, I decided to explore how IoT could assist in amplifying those passions.  As a software developer, there is no better feeling than creatively applying our talent to produce extensions of our interests which serve to enhance our experience.   Today’s project combines an array of seemingly disparate technologies to produce a voice-controlled music entertainment system combined with flashing lights and a good old cloud-enabled Excel report for analyzing playback data.   I call it, “Hot Tub Time Machine From the Future”.



All code with instructions on use and configuration can be found in the MusicNet repository.  Simply follow the instructions in the Readme and deploy the Windows Phone project to your device.

How it works:

We leverage the PiMusicBox project to turn the Raspberry Pi into a network-enabled jukebox.  This project is amazing and allows for playback from a variety of sources including YouTube, Spotify, and SoundCloud, in addition to SMB shares and local files.  After installing and configuring PiMusicBox, simply plug in some speakers, and anyone on your home network can access the device by IP or using the “musicbox.local” hostname.  We then modify the Last.FM Scrobbler plugin on the PiMusicBox to push the playback result into an Azure Mobile Service table.  We can then connect to this data source via Excel and provide a variety of visualizations by using a pivot table over Artist and TrackName.

The Windows Phone app connects to the Mopidy service running on PiMusicBox to allow for API-level access for controlling things like Pause, Play, and Next Track.  Using the speech API on the phone, we define a series of voice commands that can launch the app from Cortana and speak to the PiMusicBox through the aforementioned Mopidy service.  As a result, this just works from the Microsoft Band with no modification needed, because the Band supports Cortana out of the box!
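Mopidy exposes a JSON-RPC 2.0 API over HTTP (on PiMusicBox, reachable at http://musicbox.local:6680/mopidy/rpc), and each voice command ultimately maps to a call against it.  The mapping below is an illustrative sketch, not the app’s actual command table; the method names follow Mopidy’s core playback API.

```javascript
// Build one Mopidy JSON-RPC 2.0 request body.
function buildRpcCall(method, id) {
  return JSON.stringify({ jsonrpc: '2.0', id: id, method: method });
}

// A few voice-command-to-RPC mappings one might define
// (POST each body to http://musicbox.local:6680/mopidy/rpc):
var voiceCommands = {
  'pause':      buildRpcCall('core.playback.pause', 1),
  'play':       buildRpcCall('core.playback.play', 2),
  'next track': buildRpcCall('core.playback.next', 3)
};
```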

Finally, the blinking lights connect to the LED Audio Controller and are mounted.  Make sure to place the LED Audio Controller within reasonable proximity to the speaker system connected to the Pi.


Voice-controlled music playback with blinky lights and Azure-powered Excel reporting is awesome!  Now the question is how to take it further!  What if we took the result of the current playing track and displayed it along with current listener satisfaction on a projector of sorts, allowing dissatisfied listeners to upvote or downvote in real time?  What if Cortana controlled the hot tub itself?  What if the music genre changed depending on the temperature of the hot tub?  What if the “Hot Tub Time Machine” knew what the best music was for the mood based on weather data?  Feel free to leave your suggestions in the comments.  Until next time, Happy Hacking!


Spark Core + DS18B20 + Azure Event Hubs = IoT Hot Tub / Pool Monitor



Summer is here in Houston and there is no better time to get in the water to cool off or warm up.  I am particularly fond of the latter, especially with a good group of friends, food, and refreshments.  The problem is,  it can be hard to tell when the hot tub is ready without actually getting in or looking at a temperature gauge.

Enter Spark Core, a Wi-Fi enabled microcontroller that can be programmed remotely from a web-based IDE.  I absolutely love this device for its ease of programming, portability, and resilience.  Let me reiterate on resilience: I have successfully run my Spark Core devices for weeks at a time with zero downtime.  I think these things are pretty much impervious to breakdown so far.  To give a visual output of the temperature, I added a Spark Button device to glow a specific color in relation to the current reading.

I have had a lot of fun getting this device to work with the ConnectTheDots project from MSOpenTech.  This project allows for connecting a variety of sensors, either directly or through a gateway, into Azure Event Hubs for real-time Streaming Analytics processing.  The results are then displayed in an Azure web portal.  I absolutely love this project for its ability to walk through a serious end-to-end IoT solution, and I have enjoyed both contributing and delivering IoT Dev Camps based on the project.

Build your Own!

The hot tub monitor solution leverages my previous post on “Sending messages to Azure Event Hub with Spark over AMS API Proxy“.  All code has been contributed to the ConnectTheDots project.  To replicate, simply follow the Spark+DS18B20 setup documentation and insert the sensor into your pool or hot tub!  The only modification I made was to use a portable 5V battery source, which could be greatly improved with a 5V solar cell.


He’s heating up!


He’s on FIRE!








You may be wondering, is this really useful?  I actually ended up using the device over the weekend and was happy to see the ring turn red, indicating to guests that the hot tub was in fact ready!  This no doubt drew a lot of questions and spawned some discussion on improving the hot tub experience further.  Stay tuned for an update on what we plan to implement.  A few hints: it involves a Microsoft Band, a Windows Phone app, a Raspberry Pi, and an audio-responsive LED driver!


Microsoft IoT DevCamps Announced!

Microsoft has recently announced a series of Microsoft IoT DevCamps taking place across the United States through May and June, with more being planned.  Stay tuned here for updates, or check the official announcement.


Date      Speaker          Locale

5/12/15   Paul DeCarlo     Chicago
5/15/15   Stacey Mulcahy   New York
5/29/15   Bret Stateham    Sunnyvale


We just wrapped up the first event in Chicago, Illinois with an excellent group of industry professionals and IoT enthusiasts.  The content is very exciting, as it leverages connecting the popular Raspberry Pi 2 device + Arduino + WeatherShield and .NET Gadgeteer to an Azure Event Hub for real-time processing and visualization of data through Streaming Analytics and an Azure Websites front-end.  This is a truly hands-on lab where we outfit attendees with the hardware and Azure services to walk through a complete end-to-end IoT solution in the cloud.  The lab content comes from the ConnectTheDots project from MSOpenTech.



Introducing IoT


Hands-On Development


Raspberry Pi + Gadgeteer Kits


Rasp Pi + Arduino



Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Evvvvvveryboddddddy!

Discover how to use Microsoft Azure services as a full, end-to-end IoT solution.  We’ll put Event Hubs, Streaming Analytics, and Websites to the task of presenting data from a variety of hardware devices.

Sending messages to Azure Event Hub with Spark over AMS API Proxy

In this article, I will describe how to publish data from a Spark Core to an Azure Event Hub for real-time processing using Azure Mobile Services as a message proxy.

Spark OS is a distributed operating system for the Internet of Things that brings the power of the cloud to low-cost connected hardware.  Spark provides an online IDE for programming a Wi-Fi enabled, Arduino-like device known as the Spark Core.  Azure Event Hubs is a highly scalable publish-subscribe ingestor that can intake millions of events per second, so that you can process and analyze the massive amounts of data produced by your connected devices and applications.  Once collected into Event Hubs, you can transform and store data using any real-time analytics provider or with batching/storage adapters.

To begin, I took the approach of using the Event Hubs REST API Send Event operation.  This seemed straightforward: simply create a request with the appropriate request headers over HTTP, as both HTTP and HTTPS are mentioned as supported in the documentation.  However, when sending this request over HTTP with the necessary “Authorization” header included, I received “Transport security is required to protect the security token”.  This poses a bit of a problem, as the lightweight Spark device is unable to perform the computations necessary to send SSL requests.
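To make the constraint concrete, here is a sketch of the Send Event request shape (namespace, hub name, and token values are placeholders).  The service only honors the Authorization header over HTTPS on port 443, which is exactly the SSL work the Spark Core cannot perform on its own.

```javascript
// Build the options for a POST to the Event Hubs "Send Event" REST endpoint.
// All parameter values are placeholders for illustration.
function buildSendRequest(namespace, hubname, sasToken, payload) {
  return {
    hostname: namespace + '.servicebus.windows.net',
    port: 443, // transport security (HTTPS) is mandatory when a SAS token is attached
    path: '/' + hubname + '/messages',
    method: 'POST',
    headers: {
      'Authorization': sasToken, // SAS token; rejected if sent over plain HTTP
      'Content-Type': 'application/atom+xml;type=entry;charset=utf-8',
      'Content-Length': payload.length
    }
  };
}
```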

Azure Mobile Services to the rescue!

I first created a new Azure Mobile Service with a JavaScript backend:

1- Create Mobile Service

1.5 - Create Mobile Service

Next, I created a new API within the service named “temp”:

2 - Create API


Finally, create an Azure Service Bus with an Event Hub by following the instructions in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API“.


The idea being that I could successfully send data to the Mobile Service API as documented in Brian Sherwin’s “Wiring Up the Spark Core To Azure”, then forward this data to the Event Hub using the information provided in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API“, essentially creating a proxy via Azure Mobile Services to get data from the Spark into an Azure Event Hub.


Let’s begin by building the Event Hub proxy in the “temp” API.  This API will require custom Node.JS packages that can be installed by following Redbit’s “Using Custom NodeJS Modules with Azure Mobile Services“.  Follow the instructions and be sure to run an npm install for https, crypto, and moment, as these are required to generate the SAS key for sending data through the Event Hubs REST service.

The actual API code is below (with heavy reliance on Hypernephelist’s example).  You will need to modify it, either by editing in the Azure API editor within the Azure portal, or by modifying it on disk after cloning per Redbit’s instructions.  Be sure to edit the namespace, hubname, my_key_name, and my_key variables with the appropriate values from your Azure Event Hub.

3 - Azure API Editor

/************************Begin AMS Code**********************/

var https = require('https');
var crypto = require('crypto');
var moment = require('moment');

exports.post = function(request, response) {

    function sendTemperature(payload) {
        // Event Hubs parameters
        var namespace = 'EVENTHUBNAMESPACE';
        var hubname = 'EVENTHUBNAME';

        // Shared access key (from Event Hub configuration)
        var my_key_name = 'KEYNAME';
        var my_key = 'KEY';

        // Payload to send
        //payload = "{ \"temp\": \"100\", \"hmdt\": \"78\", \"subject\": \"wthr\", \"dspl\": \"test\"," + "\"time\": " + "\"" + new Date().toISOString() + "\" }";

        // Full Event Hub publisher URI
        var my_uri = 'https://' + namespace + '.servicebus.windows.net' + '/' + hubname + '/messages';

        // Create a SAS token
        // See the Event Hubs shared access signature documentation
        function create_sas_token(uri, key_name, key) {
            // Token expires in one hour
            var expiry = moment().add(1, 'hours').unix();

            var string_to_sign = encodeURIComponent(uri) + '\n' + expiry;
            var hmac = crypto.createHmac('sha256', key);
            hmac.update(string_to_sign);
            var signature = hmac.digest('base64');
            var token = 'SharedAccessSignature sr=' + encodeURIComponent(uri) + '&sig=' + encodeURIComponent(signature) + '&se=' + expiry + '&skn=' + key_name;

            return token;
        }

        var my_sas = create_sas_token(my_uri, my_key_name, my_key);

        // Send the request to the Event Hub
        var options = {
            hostname: namespace + '.servicebus.windows.net',
            port: 443,
            path: '/' + hubname + '/messages',
            method: 'POST',
            headers: {
                'Authorization': my_sas,
                'Content-Length': payload.length,
                'Content-Type': 'application/atom+xml;type=entry;charset=utf-8'
            }
        };

        var req = https.request(options, function(res) {
            //console.log("statusCode: ", res.statusCode);
            //console.log("headers: ", res.headers);

            res.on('data', function(d) {
                //console.log(d.toString());
            });
        });

        req.on('error', function(e) {
            console.error(e);
        });

        req.write(payload);
        req.end();
    }

    // Forward the incoming request body and acknowledge the caller
    sendTemperature(JSON.stringify(request.body));
    response.send(200, "Message forwarded to Event Hub");
};

/************************End AMS Code**********************/

Finally, we need to set up the Spark Core with appropriate code to push data to the API in our Mobile Service. I leveraged HttpClient as it has great logging features for debugging and is a bit easier to wield than Spark’s lightweight TCPClient. I also import SparkTime.h to generate the timestamp for messages from the Spark itself. Simply flash this code to your Spark device, taking care to appropriately modify the AzureMobileService, AzureMobileServiceAPI, AzureMobileServiceKey, and deviceName variables. Note that the payload sent in this particular example corresponds to the expected payload in the Connect the Dots project from MSOpenTech. This implies that there will soon be support for the Spark Core in this amazing project!

4 - Spark Editor

/************************Begin Spark Code******************/

// This #include statement was automatically added by the Spark IDE.
#include "HttpClient/HttpClient.h"

// This #include statement was automatically added by the Spark IDE.
#include "SparkTime/SparkTime.h"

String AzureMobileService = "MOBILESERVICENAME.azure-mobile.net";
String AzureMobileServiceAPI = "APINAME";
char AzureMobileServiceKey[40] = "MOBILESERVICEKEY";
char deviceName[40] = "SparkCore";

UDP UDPClient;
SparkTime rtc;
HttpClient http;

void setup() {
    rtc.begin(&UDPClient, "pool.ntp.org");
    rtc.setTimeZone(-5); // gmt offset
}

void loop() {
    unsigned long currentTime;
    currentTime = rtc.now();
    String timeNowString = rtc.ISODateUTCString(currentTime);
    char timeNowChar[64]; // fixed buffer; sizeof(String) would give the object size, not the string length
    strcpy(timeNowChar, timeNowString.c_str());
    char payload[120];
    snprintf(payload, sizeof(payload), "{ \"temp\": \"76\", \"hmdt\": \"32\", \"subject\": \"wthr\", \"dspl\": \"%s\", \"time\": \"%s\" }", deviceName, timeNowChar);
    http_header_t headers[] = {
        { "X-ZUMO-APPLICATION", AzureMobileServiceKey },
        { "Cache-Control", "no-cache" },
        { NULL, NULL } // NOTE: Always terminate headers with NULL
    };
    http_request_t request;
    http_response_t response;
    request.hostname = AzureMobileService;
    request.port = 80;
    request.path = "/api/" + AzureMobileServiceAPI;
    request.body = payload;
    http.post(request, response, headers);
    Serial.print("Application>\tResponse status: ");
    Serial.println(response.status);
    Serial.print("Application>\tHTTP Response Body: ");
    Serial.println(response.body);
    delay(10000); // pause between messages
}

/************************End Spark Code********************/

Voila! I am able to verify my Spark is appropriately forwarding data to my ConnectTheDots portal!

5 - CTD Portal

We can also verify / debug by connecting to our Spark Core over serial and monitoring the output of HttpClient.

6 - Putty
I absolutely love developing on the Spark device due to its simplicity to update and its convenient online IDE. Now with the power of Azure, we can analyze data coming from one of these devices in real time!

You can find the latest code included in this project at the DXHacker/SparkEventHub repo on Github.

Training Kinect4NES to Control Mike Tyson’s Punch-Out!

Kinect4NES @ HackRice – First Time Player Knocks out Glass Joe

In a previous post, I talked about how to create an interface to send controller commands to an NES based on interaction with the Kinect v2.  The idea was successful, but I received a bit of feedback on the control being less than optimal and a suggestion that it would likely work well with a game like Mike Tyson’s Punch-Out.

This raised an interesting challenge: could I create a control mechanism that would allow me to play Mike Tyson’s Punch-Out using Kinect4NES with enough stability to reliably beat the first couple of characters?

Let’s first look at how control was achieved in the first iteration of Kinect4NES.  There are essentially two ways of reacting to input on the Kinect: a heuristic approach based on relatively inexpensive positional comparisons of tracked joints, or gesture-based tracking (either discrete or continuous).  For my initial proof of concept, I used the following heuristic approach:


Taken from CalcController(Body body) in MainWindow.xaml.cs

* DPad from Calc

var dpadLeft = ((leftWrist.Position.Y > mid.Position.Y - 0.20) && (leftWrist.Position.X < mid.Position.X - 0.5));
var dpadRight = ((rightWrist.Position.Y > mid.Position.Y - 0.20) && (rightWrist.Position.X > mid.Position.X + 0.5));
var dpadUp = ((leftWrist.Position.Y > head.Position.Y) || (rightWrist.Position.Y > head.Position.Y));
var dpadDown = ((spineBase.Position.Y - knee.Position.Y) < 0.10);
var start = ((head.Position.Y < shoulder.Position.Y));


As you can see, this is a basic approach that simply compares current joint positions; if a condition is satisfied, it activates that controller input.

Ideally, we would like to have natural body movements drive our interaction with Mike Tyson’s Punch-Out.  To begin, we need to familiarize ourselves with the way the game is controlled by the NES controller.  I was lucky enough to come across a copy of the game at a local flea market around the time this project idea was brewing, the same flea market where I had found boxed NES controllers a couple of weeks earlier.  I found an online manual which described the various game inputs and used these as a basis for defining my gestures.


-)  : Dodge to right
(-  : Dodge to left
DOWN: Once: block
      Twice rapidly: ducking

--- Left body blow (B + UP = Punch to left face)
|    -- Right body blow (A + UP = Punch to right face)
|    |
B    A

(When Mac is knocked down, press rapidly and he'll get up)

SELECT: If pressed between rounds, Doc's encouraging
        advice can increase Mac's stamina
START:  Uppercut (If the number of stars is 1 or more)


Take note of how some of these inputs are button combinations or rapid presses.  We will revisit later how I optimized the mechanism to account for these cases.

To begin creating the gestures, I started a new solution using the Visual Gesture Builder Preview included in the Kinect v2 SDK, with a series of Discrete Gesture projects for each of the behaviors identified in the Punch-Out manual.


For each of these projects, I had my brother perform a decided gesture with approximately 20 positive cases (gestures that should be considered performed successfully) and 5 or so negatives (gestures that should not be considered performed successfully).  For example, for the Uppercut, he would perform 20 uppercuts with the right hand for positive cases and a few regular left and right punches for the negative cases.  This way, we won’t accidentally register an uppercut when a regular left or right punch is thrown.

Kinect Studio

After obtaining a successful recording, we add the clip to the appropriate project in our Visual Gesture Builder solution.  Here we meticulously tag the key frames to indicate where a successful gesture is performed.  As a result, areas that are not tagged are considered negative cases.


We then perform a build of the project, which uses the AdaBoost algorithm to learn the intended positions of the joints and create a state machine for determining a successful gesture.  Each project outputs a .gba file; these are composed into a .gbd file when building the solution.


We repeat this for all of our projects and then verify the .gbd with “File => Live Preview” in Visual Gesture Builder.  This allows us to see the signal generated by our current pose for all produced gesture projects, very handy for determining whether a given gesture creates interference with another.  In the image below, you see a very clear signal is generated by the uppercut pose.


With the recorded gestures verified, I looked at the sample code used in the “Visual Studio Gesture Builder – Preview” project included in the Kinect SDK browser.



From here, I incorporated the relevant bits into GestureDetector.cs.  In my original implementation, I iterated through all recorded gestures and employed a switch to perform the button press when one was detected.  This proved to be inefficient and created inconsistent button presses.  I improved this significantly in my second update by using a dictionary to hold a series of Actions (anonymous functions that return void) and a parallel foreach, allowing me to eliminate the cyclomatic complexity of the previous switch while processing all potential gestures in parallel.  I also created a Press method for simulating presses.  This allowed me to send in any combination of buttons to perform behaviors like HeadBlow_Right (UP + A).  I also implemented a Hold method to make it possible to perform the duck behavior (press down, hold down).  In the final tweak, I implemented a method to produce a RapidPress for the Recover gesture.  This allowed me to reproduce a well-known tip in Punch-Out where you can regain health in between matches by rapidly pressing Select.

This was a rather interesting programming exercise: imagine coding at 2 in the morning with the goal of optimizing code for the intent of knocking out Glass Joe in a stable, repeatable manner.  The end result wound up working well enough that a ‘seasoned’ player can actually TKO the first two characters with relative regularity.  In the video at the top of this post, the player had actually never used the Kinect4NES and TKO’d Glass Joe on his first try.  As a result, I am satisfied with this experiment; it was certainly a fun project that allowed me to become more familiar with programming for the Kinect while also having the joy of merging modern technology with the classic NES.  For those interested in replicating, you can find the source code on GitHub. If you have any ideas on future games that you would like to see controlled with Kinect4NES, please let me know in the comments!

Porting Open Source Libraries to Windows for IoT (mincore)

Microsoft is bringing Windows to a new class of small devices. Riding the crest of the “Internet of Things” movement, Microsoft is looking to capitalize on devices and sensor capabilities of popular development boards. Recently, members of the Windows Developer Program for IoT have been able to gain access to a build of Windows which supports the Intel Galileo chipset.

Bringing Windows to small devices is a huge feat that opens the door to many development opportunities.  Of course, this means a lot of existing code can be brought over to aid in creating IoT solutions.  This post aims to identify the specifics of compiling two open source libraries for this new version of Windows.

The libraries in question are Apache Qpid Proton, a lightweight messaging framework for sending AMQP messages, and OpenSSL, an open-source library implementing the Secure Sockets Layer protocols.

Why these two libraries?  They were necessary for creating a Win32 application capable of sending AMQPS messages up to an Azure Event Hub as part of Galileo device support in the super awesome Connect the Dots project from MSOpenTech. More importantly, we get to encounter two rather distinct compilation exercises.  Apache Qpid can send AMQP (without the S) messages on its own, but Azure requires that they be sent over SSL, so we need to compile Apache Qpid against OpenSSL to get AMQPS support.  In addition, Apache Qpid gives us a Visual Studio solution to work with, while OpenSSL is built using a makefile in combination with Perl and Python scripts for producing the makefile itself.  This post will explain these scenarios and the necessary changes required to target Windows for IoT through the Visual Studio project for Apache Qpid and the makefile for OpenSSL.

Let’s begin by looking at the default property configuration for the Apache Qpid Visual Studio Project:


Normally, when compiling a Win32 application for a desktop PC, we will compile against Win32 libraries contained in C:\Windows\System32.  When targeting the Intel Galileo board you will notice that the default Intel Galileo Wiring App template contained in the Windows Developer Program for IoT MSI links against a single library, mincore.lib.  Jer’s blog goes into the best known detail on what this is. Long story short, we need to compile against mincore.lib in order to obtain code capable of running on the Galileo as the mappings for System and Win32 functions are completely different in Windows for IoT and contained in this particular lib.  This sets the basis for rules #1 and #2.


1. Remove all references to System 32 libs and replace with a reference to Mincore.lib



2. For all references removed in step 1, add these libraries to the IgnoreDefaultLibraries collection; this ensures that the linker will not attempt to link against them, as we want to link against references in mincore.lib only.  Note: I have added compatible OpenSSL binaries to Additional Dependencies to enable OpenSSL support


In addition, we need to consider the hardware present on the Galileo board itself.  Intel outfits the board with an Intel® Quark™ SoC X1000 application processor, a 32-bit, single-core, single-thread, Intel® Pentium® processor instruction set architecture (ISA)-compatible, operating at speeds up to 400 MHz.  This processor does not support enhanced instruction sets including SSE, SSE2, AVX, or AVX 2.    This sets the basis for rule #3.


3.  Ensure all code is compiled with the /arch:IA32 compiler flag



You can now build Apache Qpid Proton to target Windows for IoT on the Intel Galileo; however, in order to be useful, we need to compile against OpenSSL, as Azure Event Hubs requires that we send messages using AMQPS.  Without OpenSSL support, we can only send AMQP messages, which will be ignored by the Azure Event Hub.

There is an excellent article on compiling Apache Qpid Proton against OpenSSL for Windows.

I don’t want to reproduce the content there, so let’s talk about the changes necessary to target Windows on the Galileo board.

In step A.3 the author describes the process for compiling the OpenSSL dynamic link libraries using “nmake -f ms\ntdll.mak install”.  Nmake is Microsoft’s build tool for building makefiles.  To use the tool, you can access it within a Visual Studio command prompt, from its actual location in C:\Program Files (x86)\Microsoft Visual Studio X.X\VC\bin, or call C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\vsvars32.bat in a standard command prompt to make the path to nmake available in your current shell.  The problem is that the default makefile is configured to build against Win32, i.e. Win32 on the desktop.

Let’s take what we learned above and apply it to the ntdll makefile:

Inside the untouched ntdll.mak you will see the following:

# Set your compiler options
APP_CFLAG= /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /Zi /Fd$(TMP_D)/lib -D_WINDLL
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=ws2_32.lib gdi32.lib advapi32.lib crypt32.lib user32.lib

# The OpenSSL directory

LFLAGS=/nologo /subsystem:console /opt:ref /debug


We essentially have a section of the makefile which outlines compiler flags and linker flags.  Here we can apply the rules from above to create a makefile that will produce a Win32 compatible library which targets the Intel Galileo.

Applying Rule #1, we remove the libs mentioned in EX_LIBS and replace them with mincore.lib

Applying Rule #2, we take the libs that were in EX_LIBS and add them to the linker flags (LFLAGS) as /NODEFAULTLIB:NAMEOFLIBRARY

Applying Rule #3, we add /arch:IA32 to each compiler flag (*CFLAG)


This yields the following changes:

# Set your compiler options
APP_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/lib -D_WINDLL
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=mincore.lib

# The OpenSSL directory

LFLAGS=/NODEFAULTLIB:kernel32.lib /NODEFAULTLIB:ws2_32.lib /NODEFAULTLIB:gdi32.lib /NODEFAULTLIB:advapi32.lib /NODEFAULTLIB:crypt32.lib /NODEFAULTLIB:user32.lib /nologo /subsystem:console /opt:ref /debug

I have posted the complete changes made to ntdll.mak, compatible with Windows for IoT on the Intel Galileo, @


We can now build the OpenSSL libraries, but you will notice you receive a variety of errors.  This is due to functions missing from mincore.lib that were available in the original System32 DLLs.

For example:

Creating library out32dll\libeay32.lib and object out32dll\libeay32.exp
cryptlib.obj : error LNK2019: unresolved external symbol __imp__DeregisterEventSource@4 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__RegisterEventSourceA@8 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__ReportEventA@36 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetProcessWindowStation@0 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetUserObjectInformationW@20 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__MessageBoxA@16 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetDesktopWindow@0 referenced in function _OPENSSL_isservice
rand_win.obj : error LNK2019: unresolved external symbol __imp__CreateCompatibleBitmap@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__DeleteObject@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDeviceCaps@8 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDIBits@28 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetObjectA@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDC@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__ReleaseDC@8 referenced in function _readscreen

You will notice that these errors actually make sense.  Recall that Windows for IoT (mincore) is stripped down to approximately 171 MB.  As a result, many unnecessary functions are removed, such as GetProcessWindowStation and MessageBoxA as shown above (there isn’t a GUI available on the stripped-down mincore).  We now need to modify the source (as safely as possible) to resolve these externals.  In my case, I simply commented out the missing method calls where necessary.  Of course, this may have unintended side effects, but because most of the missing calls deal with the GUI, you are probably okay.

Continue this until the only errors you receive are in creating e_capi.obj.

Now run “nmake -i -f ms\ntdll.mak install” (-i will ignore compilation errors, namely the ones coming from e_capi).

Capi is one of the engines used by OpenSSL, and it is probably important, but I could not get around the compilation errors without essentially breaking it completely, so I left it out.  This will still produce a valid libeay32.dll and ssleay32.dll.  You can verify by copying these DLLs along with the created openssl.exe and noting that it runs on the Galileo! (Note: you can resolve the error mentioned by copying the produced openssl.cnf to the directory mentioned)



Now, to truly compile Apache-Qpid-Proton with OpenSSL support, you would continue forward from step B of

Upon recreating and opening the Apache-Qpid-Proton Visual Studio solution, you would need to modify all the proton projects using Rules #1–#3 as defined above.

Of course, if you wish to obtain the precompiled binaries and see an example of using Apache-Qpid-Proton with OpenSSL support in a Galileo Wiring app, you may refer to this pull request in the Connect the Dots Project by MS OpenTech:


Happy Hacking!  Here’s to great ideas and developments on Windows for IoT!

Mo’ Code Movember – A Month of Code happening in Houston!


It looks like November is shaping up to be an event-packed month with various hackathons / coding challenges taking place at UH, Rice, and near UHD in the EaDo area.

To participate in Mo’ Code Movember you will want to join the official Facebook Event and view the contest rules!

Seeing these events laid out, I can’t help but notice the natural progression they propose for participants.

If you are female, every one of the events above brings attention to Women in Technology when you participate!  It’s also a great time to encourage group involvement and learning by hosting workshops that coincide with any of the events listed above.

Ok, so what exactly is Mo’Code Movember?  It is a month-long event hosted by CougarCS @ the University of Houston that aims to bring quality to the typical 24-hour hackathon format by running for a full month with weekly check-ins, while requiring that submissions themselves are checked in to GitHub.  We want to teach and encourage proper software engineering practices over quick-and-dirty mish-mashing of APIs.  Have an itch you need to scratch?  Mo’Code Movember is the opportunity to get it done!

CodeDay Houston is a 24-hour event where students passionate about technology get together and build cool things! You pitch ideas, form teams, and build an app or game in 24 hours! The best projects will even be rewarded with prizes.  The event is open to high school and college students and boasts participation from 27 cities around the US!

Hacktoberfest is a hackathon with a tip of the hat to fall, Oktoberfest, and Bavarian flair!  If you work in interface design or graphic design, or you currently build websites, apps, and services, you are probably a good fit!

3 Day Startup is an entrepreneurship education program designed for university students, with an emphasis on learning by doing.  It teaches entrepreneurial skills in an extremely hands-on environment.

Let’s assume you are a CS student with an awesome idea you have been dying to work on.  Maybe you just want to try coding something challenging, or you are new to the scene and want to learn what all this development stuff is about.  You could vet that idea at the Mo’Code Movember kickoff and learn everything you need to know to get that code placed up on GitHub, where it can be properly maintained (KILLER resume line!).  The next day, you bring your idea out to CodeDay Houston and find some like-minded individuals whom you recruit to your project; you bang out a minimum viable product, and you end up placing in the top 5!  Next week, you take it out to Hacktoberfest and get the UI polished by the wonderful design divas in attendance.  Sweet, thankfully you had visions of grandiosity early on, so you beat the October 28 deadline to sign up for 3 Day Startup.  You bring your baby out and discover some pals in the business school who prove out your product using market analysis, product validation, and other things you never considered.  You pitch the idea that Sunday and now see a clear vision of what you want to do.  The following week, you showcase your project to the most difficult audience of all, your family at Thanksgiving dinner, and they provide the feedback everyone before was scared to share!  Taking into account those nitpicky suggestions Uncle Jeff gave you, you incorporate them into your project and show off a polished product at the Mo’Code Movember showcase.  You now have invaluable experience of what it takes to be an indie developer, OR maybe none of this happened but you learned invaluable skills along the way!  Oh, and bonus if you are female, because you did all this and represented Women in Technology!


Full text schedule with hyperlinks:

October 11 to December 12:
International Women’s Hackathon –

November 7:
University of Houston Mo’Code Movember Kickoff – A GitHub-hosted, month-long competition whose creations will be showcased at UH in early December

November 8 & 9:
Code Day Houston –

November 13:
Mo’Code Movember Checkpoint

November 13 to 14:
Hacktoberfest –

November 20:
Mo’Code Movember Checkpoint

November 21 to 23:
3 Day Startup – – APPLY BY OCTOBER 20!

November 27 to 28:
Thanksgiving Holiday – Show your family the awesome thing(s) you created and add those finishing touches!

December 5: Mo’Code Movember Showcase – Show your colleagues the awesome thing(s) you created!

Kinect4NES – Control your classic NES with the Power of Kinect v2


Recently, I have found myself becoming involved in the exciting world of “IoT”, or the Internet of Things.  All of this started while attending a presentation on the subject put on by my fellow colleague Bret Stateham.  If you are unfamiliar with the concept of “IoT”, I like to think of it as a programmable system composed of input sensor(s) / polling service(s) that interact with a physical device, optionally storing the data received by the sensors to a web service where it can be processed for patterns to facilitate things like forecasting.  In short, we are connecting your things that are inherently offline to the internet.

This particular project does not quite fit the definition above, as the end result is a non-network-connected thing (a classic NES console) connected to a modern sensor (the Kinect V2).  However, it could easily be modified to fit such a description.  For example, this concept could be extended to allow the public to access and control a physical NES through a web interface (think Twitch Plays Pokémon) ~Coming Soon~ OR it could allow users to upload Kinect gesture profiles online that could be pulled down through the application to allow better control in certain games.  Nonetheless, it leverages concepts that are integral to most “IoT” projects, specifically hardware interface construction, software interfacing, and application development.

I’d like to elaborate a little more on the inspiration for this project, as I would really like to tear down any barriers currently holding back able developers from breaking into this field.  At the end of Bret’s presentation, he showed off a quick demo of an Intel Galileo board running Windows for IoT that controlled a blinking LED, with breakpoints set in Visual Studio.  Not exactly jaw-dropping at surface value, but when looked at for what it can enable rather than what it is specifically doing, you may find an opportunity to expand the possibilities of your code.  It dawned on me that this blinking LED demo was all I needed to know to allow computer code to interact with physical objects.  I began thinking about everything like a blinking-LED project: SMS notifications from my washer/dryer/dishwasher when a cycle is complete, automating the addition of chemicals to a swimming pool, or firing a rocket when a threshold of retweets is achieved on a particular hashtag.  All of these become comparatively simple problems when looked at through the lens of turning on a light when a certain condition is met!  I soon found myself pondering the idea of mixing old, nostalgic technology with the bleeding edge.  What if I could control a classic NES with a Kinect V2 device?  Not through an emulator, but a physical, rectangular, Gray-Box NES from 1984.



  • An NES console with game to test
  • An NES controller OR some wiring skills and a CD4021BE 8-bit shift register
  • 12 strands of wire, recommend Kynar
  • 8 1k resistors (technically any value from 1k to 50k should suffice)
  • 2 3.6k resistors (again, higher is not necessarily bad)
  • An IoT board capable of running Firmata, e.g. an Intel Galileo or Arduino Uno
  • Kinect V2 Sensor for Windows OR recently announced Kinect V2 Adapter and existing Xbox One Kinect Sensor
  • Machine capable of running the Kinect V2 SDK


In my writeup, I am going to assume you have zero experience with hardware development, which is fitting because I literally had no idea how to even blink an LED when I started this project last week.  I am also going to assume you want to know how to achieve the final product from a blank slate, so let’s start by breaking down the problem into sub-problems.



We want to use a Kinect V2 Sensor to control games on a physical NES



1. We need to interface with the NES controller port using computer code

2. We need to speak to that hardware interface through a software interface, preferably in C#

3. We need to create an application that takes input from the Kinect V2 Sensor and processes it through the software interface, into the hardware interface, where it can reproduce button presses on the NES console based on defined gestures.


Sub-Problem #1 – Creating a controller interface in hardware

We need to understand how an NES controller works.  I found an excellent article on the subject @ the PoorStudentHobbyist blog; I highly recommend giving it a read.  We learn that the NES controller operates using a 4021B 8-bit shift register wired to 8 inputs (the 4 D-pad directions plus the Select, Start, A, and B buttons).  Given this knowledge, we can build an interface in a few ways: use an 8-bit shift register emulator like the one described @ MezzoMill, leverage the physical hardware within an NES controller, or substitute a comparable 8-bit shift register.  Coincidentally, I came across 2 NES controllers, brand new in the box, at a local flea market for $6 a week before starting this project.  I considered it a sign and went through with deconstructing the controllers to get the parts I needed.

I removed the screws on the back of one of the controllers, opened it up, desoldered the 5-wire braid that connects to the controller port, and also desoldered the 4021B shift register.  With the board disassembled, you can determine the pinouts to see which wires / buttons are attached to which pins on the 4021B using a multimeter or the painstaking process of tracing with your finger.

The 4021B Shift register:


Let’s assume you have never used an Arduino and have no idea what it is or does.  All you really need to know is that it is a programmable device that can turn certain digital pins on/off through programs referred to as sketches.  Those on/off pins will essentially become our buttons, where a button press is signaled by sending a low signal (see the PoorStudentHobbyist blog for details).  Ergo, we are basically going to create a circuit and wire up a glorified blinking-LED demo, except the blinking light is going to be the NES controller buttons.

I followed the pinouts and wiring guide used @ the PoorStudentHobbyist blog, but I did not use the proposed switch.  At this point, try running a sample program to see that your interface works by sending a low signal to the start button every few seconds or so.  The blog post above includes a sample for exactly that.

Here is a picture of the completed interface:


Sub-Problem #2 – Speaking to our Hardware Interface from C#

We are going to leverage open-source software to interface with our board via C#.  A very common scenario when developing for IoT boards is controlling the pinouts via an external interface, i.e. a Web API or, in our case, the serial port.  Lucky for us, Firmata is an open-source protocol for doing exactly that!  Firmata is so pervasive that it is actually included as a default sketch in the Arduino IDE.  Simply upload the standard Firmata sketch to your device.  Now we need to set up communication via C#.  Again, lucky for us, we can leverage Arduino4Net, which lets us speak Firmata to control our board via C#.  Bonus: Arduino4Net can be brought in easily using NuGet!  At this point, you will want to create a simple test to verify that Arduino4Net is properly passing signals to your board.  I have included one as part of the Kinect4NES project.


Sub-Problem #3 – Creating the Kinect Application to simulate control signals based on gestures

We are going to connect the Kinect V2 sensor and create a gesture scheme to signal button presses through our interface!  The Kinect V2 SDK includes something called the SDK Browser 2.0.  Inside you will find a Body Basics XAML sample.  Install the sample and copy it out to somewhere you can modify it.

The Kinect V2 SDK Browser:


We begin by intercepting the Reader_FrameArrived method when dataReceived is true.  Keeping in mind that the Kinect can track more than one body, we take one of those bodies and call CalcController(Body body).  Inside this method, we set up the logic for choosing which pin to signal based on defined gestures, which are determined from the joint tracking points.  All of this starts with trial and error, but essentially you make decisions based on where the joints are in relation to each other.  We could also train a gesture using the Gesture Builder tool included with the SDK, but that is for a later post =)  Working with my colleague Jared Bienz, we were able to blindly construct the gestures during a visit to the local Microsoft Store.  Simply find some space, get a body, start doing some gestures, and determine how best to capture them!

The Kinect4NES Application:




Once you have all this, put the pieces together and turn on your favorite game!  I chose the pinnacle classic Super Mario Bros. 3, which worked well enough with our scheme to actually allow you to play through the first level!  The next thing to consider is trying out other games and possibly allowing for multiple gesture profiles.  For example, I have created a stub in the hosted project to play Mario by physically jumping and running as opposed to using hands.  All in all, this was an extremely fun hack that allowed me to bridge my interest in classic video games with modern gaming peripherals!

If you want to get the bits and follow along with updates or even contribute to this project, you may want to check out the GitHub Project Page for Kinect4NES.