Spark Core + DS18B20 + Azure Event Hubs = IoT Hot Tub / Pool Monitor



Summer is here in Houston and there is no better time to get in the water to cool off or warm up.  I am particularly fond of the latter, especially with a good group of friends, food, and refreshments.  The problem is,  it can be hard to tell when the hot tub is ready without actually getting in or looking at a temperature gauge.

Enter Spark Core, a Wi-Fi enabled microcontroller that can be programmed remotely from a web-based IDE.  I absolutely love this device for its ease of programming, portability, and resilience.  Let me elaborate on resilience: I have successfully run my Spark Core devices for weeks at a time with zero downtime.  So far, these things seem pretty much impervious to breakdown.  To give a visual output of the temperature, I added a Spark Button device to glow a specific color in relation to the current reading.

I have had a lot of fun getting this device to work with the ConnectTheDots project from MSOpenTech.  This project allows for connecting a variety of sensors, either directly or through a gateway, into Azure Event Hubs for real-time Streaming Analytics processing.  The results are then displayed in an Azure web portal.  I absolutely love this project for its ability to walk through a serious end-to-end IoT solution, and I have enjoyed both contributing and delivering IoT Dev Camps based on the project.

Build your Own!

The hot tub monitor solution leverages my previous post, “Sending messages to Azure Event Hub with Spark over AMS API Proxy“.  All code has been contributed to the ConnectTheDots project.  To replicate, simply follow the instructions in the Spark+DS18B20 setup documentation and insert the sensor into your pool or hot tub!  The only modification I made was to use a portable 5v battery source, which could be greatly improved using a 5v solar cell.


He’s heating up!


He’s on FIRE!








You may be wondering, is this really useful?  I actually ended up using the device over the weekend, and was happy to see the ring turn red, indicating to guests that the hot tub was in fact ready!  This no doubt got a lot of questions and spawned some discussion on improving the hot tub experience further.  Stay tuned for an update on what we plan to implement.  A few hints: it involves a Microsoft Band, a Windows Phone app, a Raspberry Pi, and an audio-responsive LED driver!


Microsoft IoT DevCamps Announced!

Microsoft has recently announced a series of Microsoft IoT DevCamps taking place across the United States through May and June, with more being planned.  Stay tuned here for updates, or check the official announcement.


Date        Speaker           Locale

5/12/15     Paul DeCarlo      Chicago
5/15/15     Stacey Mulcahy    New York
5/29/15     Bret Stateham     Sunnyvale


We just wrapped up the first event in Chicago, Illinois with an excellent group of industry professionals and IoT enthusiasts.  The content is very exciting, as it leverages connecting the popular Raspberry Pi 2 device + Arduino + WeatherShield and .NET Gadgeteer to an Azure Event Hub for real-time processing and visualization of data through Streaming Analytics and an Azure Websites front-end.  This is a truly hands-on lab where we outfit attendees with the hardware and Azure services to walk through a complete end-to-end IoT solution in the cloud.  The lab content comes from the ConnectTheDots project from MSOpenTech.



Introducing IoT


Hands-On Development


Raspberry Pi + Gadgeteer Kits


Rasp Pi + Arduino



Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Dots Evvvvvveryboddddddy!

Discover how to use Microsoft Azure services as a full, end-to-end IoT solution.  We’ll put Event Hubs, Streaming Analytics, and Websites to the task of presenting data from a variety of hardware devices.

Sending messages to Azure Event Hub with Spark over AMS API Proxy

In this article, I will describe how to publish data from a Spark Core to an Azure Event Hub for real-time processing using Azure Mobile Services as a message proxy.

Spark OS is a distributed operating system for the Internet of Things that brings the power of the cloud to low-cost connected hardware.  Spark provides an online IDE for programming a Wi-Fi enabled Arduino-like device known as the Spark Core.  Azure Event Hubs is a highly scalable publish-subscribe ingestor that can intake millions of events per second, so that you can process and analyze the massive amounts of data produced by your connected devices and applications.  Once collected into Event Hubs, you can transform and store data using any real-time analytics provider or with batching/storage adapters.

To begin, I took the approach of using the Event Hubs REST API Send Event operation.  This seemed straightforward: simply create a request with the appropriate request headers over HTTP, as both HTTP and HTTPS are mentioned as supported in the documentation.  However, when sending this request over HTTP, I received “Transport security is required to protect the security token” when including the necessary “Authorization” header.  This poses a bit of a problem, as the lightweight Spark device is unable to perform the computations necessary to send SSL requests.
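
For reference, the shape of that Send Event request can be sketched in Node.js as below.  The helper name and the namespace/hub values are placeholders of my own, not part of the Event Hubs API; the key detail is the Authorization header carrying the SAS token, which is exactly what the service refuses to honor over plain HTTP:

```javascript
// Sketch of the Event Hubs "Send Event" REST request (placeholder names).
// Passing options like these to http.request on port 80 instead of
// https.request on 443 is what triggers:
//   "Transport security is required to protect the security token"
function buildSendOptions(namespace, hubname, sasToken, payload) {
  return {
    hostname: namespace + '.servicebus.windows.net',
    port: 443,                        // must be HTTPS; plain HTTP is rejected
    path: '/' + hubname + '/messages',
    method: 'POST',
    headers: {
      'Authorization': sasToken,      // SAS token; only honored over TLS
      'Content-Length': Buffer.byteLength(payload),
      'Content-Type': 'application/atom+xml;type=entry;charset=utf-8'
    }
  };
}

var opts = buildSendOptions('myns', 'myhub', 'SharedAccessSignature sr=...', '{ "temp": "76" }');
console.log(opts.hostname + opts.path);
```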

Azure Mobile Services to the rescue!

I first created a new Azure Mobile Service with a JavaScript backend:

1- Create Mobile Service

1.5 - Create Mobile Service

Next, I created a new API within the service named “temp”:

2 - Create API


Finally, create an Azure Service Bus with an Event Hub by following the instructions in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API“.


The idea was that I could successfully send data to the Mobile Service API as documented in Brian Sherwin’s “Wiring Up the Spark Core To Azure”, then forward this data to the Event Hub using the information provided in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API“.  Essentially, this creates a proxy via Azure Mobile Services to get data from the Spark into an Azure Event Hub.


Let’s begin by building the Event Hub proxy in the “temp” API.  This API will require custom Node.js packages that can be installed by following Redbit’s “Using Custom NodeJS Modules with Azure Mobile Services“.  Follow the instructions and be sure to run an npm install for https, crypto, and moment, as these are required to generate the SAS key for sending data through the Event Hub REST service.

The actual API code is below (with heavy reliance on Hypernephelist’s example); you will need to modify this by editing it in the API editor within the Azure portal, or by modifying it on disk after cloning per Redbit’s instructions.  Be sure to edit the namespace, hubname, my_key_name, and my_key variables with the appropriate values from your Azure Event Hub.

3 - Azure API Editor

/************************Begin AMS Code**********************/

var https = require('https');
var crypto = require('crypto');
var moment = require('moment'); = function(request, response) {

    function sendTemperature(payload) {
        // Event Hubs parameters
        var namespace = 'EVENTHUBNAMESPACE';
        var hubname = 'EVENTHUBNAME';

        // Shared access key (from Event Hub configuration)
        var my_key_name = 'KEYNAME';
        var my_key = 'KEY';

        // Payload to send
        //payload = "{ \"temp\": \"100\", \"hmdt\": \"78\", \"subject\": \"wthr\", \"dspl\": \"test\"," + "\"time\": " + "\"" + new Date().toISOString() + "\" }";

        // Full Event Hub publisher URI
        var my_uri = 'https://' + namespace + '.servicebus.windows.net' + '/' + hubname + '/messages';

        // Create a SAS token
        function create_sas_token(uri, key_name, key) {
            // Token expires in one hour
            var expiry = moment().add(1, 'hours').unix();

            var string_to_sign = encodeURIComponent(uri) + '\n' + expiry;
            var hmac = crypto.createHmac('sha256', key);
            hmac.update(string_to_sign);
            var signature = hmac.digest('base64');
            var token = 'SharedAccessSignature sr=' + encodeURIComponent(uri) + '&sig=' + encodeURIComponent(signature) + '&se=' + expiry + '&skn=' + key_name;

            return token;
        }

        var my_sas = create_sas_token(my_uri, my_key_name, my_key);

        // Send the request to the Event Hub
        var options = {
            hostname: namespace + '.servicebus.windows.net',
            port: 443,
            path: '/' + hubname + '/messages',
            method: 'POST',
            headers: {
                'Authorization': my_sas,
                'Content-Length': payload.length,
                'Content-Type': 'application/atom+xml;type=entry;charset=utf-8'
            }
        };

        var req = https.request(options, function(res) {
            //console.log("statusCode: ", res.statusCode);
            //console.log("headers: ", res.headers);

            res.on('data', function(d) {
                process.stdout.write(d);
            });
        });

        req.on('error', function(e) {
            console.error(e);
        });

        req.write(payload);
        req.end();
    }

    // Forward the body posted by the Spark on to the Event Hub
    sendTemperature(JSON.stringify(request.body));
    response.send(200, '');
};

/************************End AMS Code**********************/

Finally, we need to set up the Spark Core with the appropriate code to push data to the API in our Mobile Service.  I leveraged HttpClient, as it has great logging features for debugging and is a bit easier to wield compared to Spark’s lightweight TCPClient.  I also import SparkTime.h to generate the timestamp for messages from the Spark itself.  Simply flash this code to your Spark device, taking care to appropriately modify the AzureMobileService, AzureMobileServiceAPI, AzureMobileServiceKey, and deviceName variables.  Note that the payload sent in this particular example corresponds to the expected payload in the Connect the Dots project from MSOpenTech.  This implies that there will soon be support for the Spark Core in this amazing project!

4 - Spark Editor

/************************Begin Spark Code******************/

// This #include statement was automatically added by the Spark IDE.
#include "HttpClient/HttpClient.h"

// This #include statement was automatically added by the Spark IDE.
#include "SparkTime/SparkTime.h"

String AzureMobileService = "MOBILESERVICENAME.azure-mobile.net"; // your Mobile Service hostname
String AzureMobileServiceAPI = "APINAME";
char AzureMobileServiceKey[40] = "MOBILESERVICEKEY";
char deviceName[40] = "SparkCore";

UDP UDPClient;
SparkTime rtc;
HttpClient http;

void setup() {
    Serial.begin(9600);                    // serial output for debugging
    rtc.begin(&UDPClient, "pool.ntp.org"); // NTP server (assumed; original value was stripped)
    rtc.setTimeZone(-5); // gmt offset
}

void loop() {
    unsigned long currentTime;
    currentTime =;
    String timeNowString = rtc.ISODateUTCString(currentTime);
    char timeNowChar[32];
    strcpy(timeNowChar, timeNowString.c_str());

    char payload[120];
    snprintf(payload, sizeof(payload), "{ \"temp\": \"76\", \"hmdt\": \"32\", \"subject\": \"wthr\", \"dspl\": \"%s\", \"time\": \"%s\" }", deviceName, timeNowChar);

    http_header_t headers[] = {
        { "X-ZUMO-APPLICATION", AzureMobileServiceKey },
        { "Cache-Control", "no-cache" },
        { NULL, NULL } // NOTE: Always terminate headers with NULL
    };

    http_request_t request;
    http_response_t response;
    request.hostname = AzureMobileService;
    request.port = 80;
    request.path = "/api/" + AzureMobileServiceAPI;
    request.body = payload;, response, headers);

    Serial.print("Application>\tResponse status: ");

    Serial.print("Application>\tHTTP Response Body: ");

    delay(10000); // pause between messages
}

/************************End Spark Code********************/

Voila! I am able to verify my Spark is appropriately forwarding data to my ConnectTheDots portal!

5 - CTD Portal

We can also verify / debug by connecting to our Spark Core over serial and monitoring the output of HttpClient.

6 - Putty
I absolutely love developing on the Spark device due to its simplicity to update and its convenient online IDE. Now with the power of Azure, we can analyze data coming from one of these devices in real time!

You can find the latest code included in this project at the DXHacker/SparkEventHub repo on GitHub.

Training Kinect4NES to Control Mike Tyson’s Punch-Out!

Kinect4NES @ HackRice – First Time Player Knocks out Glass Joe

In a previous post, I talked about how to create an interface to send controller commands to an NES based on interaction with the Kinect v2.  The idea was successful, but I received a bit of feedback that the control was less than optimal, along with a suggestion that it would likely work well with a game like Mike Tyson’s Punch-Out.

This posed an interesting challenge: could I create a control mechanism that would allow me to play Mike Tyson’s Punch-Out using Kinect4NES with enough stability to accurately beat the first couple of characters?

Let’s first look at how control was achieved in the first iteration of Kinect4NES.  There are essentially two ways of reacting to input on the Kinect: a heuristic-based approach built on relatively inexpensive positional comparisons of tracked joints, or gesture-based tracking (either discrete or continuous).  For my initial proof of concept, I used the following heuristic-based approach:


Taken from CalcController(Body body) in MainWindow.xaml.cs

* DPad from Calc

var dpadLeft = ((leftWrist.Position.Y > mid.Position.Y - 0.20) && (leftWrist.Position.X < mid.Position.X - 0.5));
var dpadRight = ((rightWrist.Position.Y > mid.Position.Y - 0.20) && (rightWrist.Position.X > mid.Position.X + 0.5));
var dpadUp = ((leftWrist.Position.Y > head.Position.Y) || (rightWrist.Position.Y > head.Position.Y));
var dpadDown = ((spineBase.Position.Y - knee.Position.Y) < 0.10);
var start = ((head.Position.Y < shoulder.Position.Y));


As you can see, this is a basic approach that just compares current joint positions; if a condition is satisfied, it activates that controller input.

Ideally, we would like to have natural body movements drive our interaction with Mike Tyson’s Punch-Out.  To begin, we need to familiarize ourselves with the way the game is controlled by the NES controller.  I was lucky enough to come across a copy of the game at a local flea market around the time this project idea was forming, the same market where I had found boxed NES controllers a couple weeks earlier.  I found an online manual which described the various game inputs and used these as a basis for defining my gestures.


-)  : Dodge to right
(-  : Dodge to left
DOWN: Once: block
      Twice rapidly: ducking

--- Left body blow (B + UP = Punch to left face)
|    -- Right body blow (A + UP = Punch to right face)
|    |
B    A

(When Mac is knocked down, press rapidly and he'll get up)

SELECT: If pressed between rounds, Doc's encouraging
        advice can increase Mac's stamina
START:  Uppercut (If the number of stars is 1 or more)


Take note of how some of these inputs are button combinations or rapid presses.  We will revisit later how I optimized the mechanism to account for these cases.

To begin with creating the gestures, I started a new solution using the Visual Gesture Builder Preview included in the Kinect v2 SDK to create a series of Discrete Gesture projects for each of the behaviors identified in the Punch-Out manual.


For each of these projects, I had my brother perform a chosen gesture, with approximately 20 positive cases (gestures that should be considered as performed successfully) and 5 or so negative cases (gestures that should not be considered performed successfully).  For example, for the uppercut, he would perform 20 uppercuts with the right hand as positive cases and a few regular left and right punches as negative cases.  This way, we won’t accidentally perform an uppercut when a regular left or right punch is thrown.

After obtaining a successful recording in Kinect Studio, we add the clip to the appropriate project in our Visual Gesture Builder solution.  Here we meticulously tag the key frames to indicate the frames where a successful gesture is performed.  As a result, areas that are not tagged are considered negative cases.


We then perform a build of the project, which uses the AdaBoost algorithm to learn the intended positions of the joints and create a state machine for determining a successful gesture.  Each project outputs a .gba file, and these are composed into a .gbd file when building the solution.


We repeat this for all of our projects and then verify the .gbd with “File => Live Preview” in Visual Gesture Builder.  This allows us to see the signal generated by our current pose for all produced gesture projects, very handy for determining whether a given gesture creates interference with another.  In the image below, you see a very clear signal is generated by the uppercut pose.


With the recorded gestures verified, I looked at the sample code used in the “Visual Studio Gesture Builder – Preview” project included in the Kinect SDK browser.



From here, I incorporated the relevant bits into GestureDetector.cs.  In my original implementation, I iterated through all recorded gestures and employed a switch statement to perform the button press when one was detected.  This proved to be inefficient and created inconsistent button presses.  I improved this significantly in my second update by using a dictionary to hold a series of Actions (anonymous functions that return void) together with a parallel foreach, allowing me to eliminate the cyclomatic complexity of the previous switch while processing all potential gestures in parallel.  I also created a Press method for simulating presses.  This allowed me to send in any combination of buttons to perform behaviors like HeadBlow_Right (UP + A).  I also implemented a Hold method to make it possible to perform the duck behavior (press down, hold down).  In the final tweak, I implemented a RapidPress method for the Recover gesture.  This allowed me to reproduce a well-known tip in Punch-Out where you can regain health in between matches by rapidly pressing Select.

This was a rather interesting programming exercise; imagine coding at 2 in the morning with the goal of optimizing code for the intent of knocking out Glass Joe in a stable, repeatable manner.  The end result wound up working well enough that a ‘seasoned’ player can actually TKO the first two characters with relative regularity.  In the video at the top of this post, the player had actually never used the Kinect4NES and TKO’d Glass Joe on his first try.  As a result, I am satisfied with this experiment; it was certainly a fun project that allowed me to become more familiar with programming for the Kinect while also having the joy of merging modern technology with the classic NES.  For those interested in replicating, you can find the source code on GitHub.  If you have any ideas on future games that you would like to see controlled with Kinect4NES, please let me know in the comments!

Porting Open Source Libraries to Windows for IoT (mincore)

Microsoft is bringing Windows to a new class of small devices. Riding the crest of the “Internet of Things” movement, Microsoft is looking to capitalize on devices and sensor capabilities of popular development boards. Recently, members of the Windows Developer Program for IoT have been able to gain access to a build of Windows which supports the Intel Galileo chipset.

Bringing Windows to small devices is a huge feat that opens the door to many development opportunities.  Of course, this means, a lot of existing code can be brought over to aid in creating IoT solutions.  This post aims to identify the specifics of compiling two open source libraries to this new version of Windows.

The libraries in question are apache-qpid-proton, a light-weight messaging framework for sending AMQP messages, and OpenSSL, an open-source library implementing the Secure Sockets Layer protocols.

Why these two libraries?  They were necessary for creating a Win32 application capable of sending AMQPS messages up to an Azure Event Hub as part of Galileo device support in the super awesome Connect the Dots project from MSOpenTech.  More importantly, we get to encounter two rather distinct compilation exercises.  Apache Qpid can send AMQP (without the S) messages on its own, but Azure requires that these be sent over SSL.  So we need to compile Apache Qpid against OpenSSL to get AMQPS support.  In addition, Apache Qpid gives us a Visual Studio solution to work with, while OpenSSL is built using a makefile in combination with Perl and Python processors for producing the makefile itself.  This post will explain these scenarios and the necessary changes required to target Windows for IoT through the Visual Studio project for Apache Qpid and the makefile for OpenSSL.

Let’s begin by looking at the default property configuration for the Apache Qpid Visual Studio Project:


Normally, when compiling a Win32 application for a desktop PC, we compile against the Win32 libraries contained in C:\Windows\System32.  When targeting the Intel Galileo board, you will notice that the default Intel Galileo Wiring App template contained in the Windows Developer Program for IoT MSI links against a single library, mincore.lib.  Jer’s blog goes into the best known detail on what this is.  Long story short, we need to compile against mincore.lib in order to obtain code capable of running on the Galileo, as the mappings for System and Win32 functions are completely different in Windows for IoT and are contained in this particular lib.  This sets the basis for rules #1 and #2.


1. Remove all references to System32 libs and replace them with a reference to mincore.lib



2. For all references removed in step 1, add these libs to the Ignore Specific Default Libraries collection; this ensures that the linker will not attempt to link against them, as we want to link against references in mincore only.  Note: I have added compatible OpenSSL binaries to Additional Dependencies to enable OpenSSL support


In addition, we need to consider the hardware present on the Galileo board itself.  Intel outfits the board with an Intel® Quark™ SoC X1000 application processor, a 32-bit, single-core, single-thread, Intel® Pentium® processor instruction set architecture (ISA)-compatible, operating at speeds up to 400 MHz.  This processor does not support enhanced instruction sets including SSE, SSE2, AVX, or AVX 2.    This sets the basis for rule #3.


3.  Ensure all code is compiled with the /arch:IA32 compiler flag
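
Taken together, rules #1 through #3 map to project settings along these lines in the .vcxproj (a sketch only; the exact library list and configuration names will vary by project):

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <!-- Rule 3: the Quark X1000 has no SSE/SSE2/AVX support; equivalent to /arch:IA32 -->
    <EnableEnhancedInstructionSet>NoExtensions</EnableEnhancedInstructionSet>
  </ClCompile>
  <Link>
    <!-- Rule 1: link against mincore.lib only -->
    <AdditionalDependencies>mincore.lib;%(AdditionalDependencies)</AdditionalDependencies>
    <!-- Rule 2: keep the linker away from the desktop System32 libs -->
    <IgnoreSpecificDefaultLibraries>kernel32.lib;user32.lib;gdi32.lib;advapi32.lib;ws2_32.lib;crypt32.lib</IgnoreSpecificDefaultLibraries>
  </Link>
</ItemDefinitionGroup>
```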



You can now build Apache-Qpid-Proton to target Windows for IoT on the Intel Galileo; however, in order for it to be useful, we need to compile against OpenSSL, as Azure Event Hubs require that we send messages using AMQPS.  Without OpenSSL support, we can only send AMQP messages, which will be ignored by the Azure Event Hub.

There is an excellent article on compiling Apache-Qpid Proton against OpenSSL for Windows @

I don’t want to reproduce the content there, so let’s talk about the changes necessary to target Windows on the Galileo board.

In step A.3 the author describes the process for compiling the OpenSSL dynamic link libraries using “nmake –f ms\ntdll.mak install”.  Nmake is Microsoft’s build tool for building makefiles.  To use the tool you can access it within a Visual Studio command prompt, from its actual location in C:\Program Files (x86)\Microsoft Visual Studio X.X\VC\bin, or call C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\vsvars32.bat in a standard command prompt to make the path to nmake available in your current shell.  The problem is that the default makefile is configured to build against Win32, i.e. Win32 on the desktop.

Let’s take what we learned above and apply it to the ntdll makefile:

Inside the untouched ntdll.mak you will see the following:

# Set your compiler options
APP_CFLAG= /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /Zi /Fd$(TMP_D)/lib -D_WINDLL
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=ws2_32.lib gdi32.lib advapi32.lib crypt32.lib user32.lib

# The OpenSSL directory

LFLAGS=/nologo /subsystem:console /opt:ref /debug


We essentially have a section of the makefile which outlines compiler flags and linker flags.  Here we can apply the rules from above to create a makefile that will produce a Win32 compatible library which targets the Intel Galileo.

Applying Rule #1, we remove the libs mentioned in EX_LIBS and replace them with mincore.lib

Applying Rule #2, we take the libs that were in EX_LIBS and add them to the linker flags (LFLAGS): /NODEFAULTLIB:NAMEOFLIBRARY

Applying Rule #3, we add /arch:IA32 to each compiler flag (*CFLAG)


This yields the following changes:

# Set your compiler options
APP_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/lib -D_WINDLL
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=mincore.lib
# The OpenSSL directory

LFLAGS=/NODEFAULTLIB:kernel32.lib /NODEFAULTLIB:ws2_32.lib /NODEFAULTLIB:gdi32.lib /NODEFAULTLIB:advapi32.lib /NODEFAULTLIB:crypt32.lib /NODEFAULTLIB:user32.lib /nologo /subsystem:console /opt:ref /debug

I have posted the complete changes made to ntdll.mak, compatible with Windows for IoT on the Intel Galileo, @


We can now build the OpenSSL libraries, but you will notice that you receive a variety of errors.  This is due to functions that are missing from mincore.lib but were available in the original System32 DLLs.

For example:

Creating library out32dll\libeay32.lib and object out32dll\libeay32.exp
cryptlib.obj : error LNK2019: unresolved external symbol __imp__DeregisterEventSource@4 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__RegisterEventSourceA@8 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__ReportEventA@36 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetProcessWindowStation@0 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetUserObjectInformationW@20 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__MessageBoxA@16 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetDesktopWindow@0 referenced in function _OPENSSL_isservice
rand_win.obj : error LNK2019: unresolved external symbol __imp__CreateCompatibleBitmap@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__DeleteObject@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDeviceCaps@8 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDIBits@28 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetObjectA@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDC@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__ReleaseDC@8 referenced in function _readscreen

You will notice that these errors actually kind of make sense.  Recall that Windows for IoT (mincore) is stripped down to approximately 171 MB.  As a result, many unnecessary functions are removed, such as GetProcessWindowStation and MessageBoxA as shown above (as there isn’t a GUI available on the stripped-down mincore).  We now need to modify the source (as safely as possible) to resolve these externals.  In my case, I simply commented out the missing method calls where necessary.  Of course, this may have unintended side effects, but due to the fact that most of the missing calls deal with the GUI, you are probably okay.

Continue this until the only errors you receive are in creating e_capi.obj.

Now run nmake -i -f ms\ntdll.mak install (-i will ignore compilation errors, namely the ones coming from e_capi)

CAPI is one of the engines used by OpenSSL, and it is probably important, but I could not get around the compilation errors without essentially breaking it completely, so I left it out.  This will still produce a valid libeay32.dll and ssleay32.dll.  You can verify by copying these DLLs along with the created openssl.exe, and note that it runs on the Galileo!  (Note: you can resolve the error mentioned by copying the produced openssl.cnf to the directory mentioned)



Now, to truly compile Apache-Qpid-Proton with OpenSSL support, you would continue forward from step B of

Upon recreating and opening the Apache-Qpid-Proton Visual Studio solution, you would need to modify all of the proton projects using Rules #1 through #3 as defined above.

Of course, if you wish to obtain the precompiled binaries and see an example of using Apache-Qpid-Proton with OpenSSL support in a Galileo Wiring app, you may refer to this pull request in the Connect the Dots Project by MS OpenTech:


Happy hacking!  Here’s to great ideas and developments on Windows for IoT!

Mo’ Code Movember – A Month of Code happening in Houston!


It looks like November is shaping up to be an event-packed month with various hackathons / coding challenges taking place at UH, Rice, and near UHD in the EaDo area.

To participate in Mo’ Code Movember you will want to join the official Facebook Event and view the contest rules!

Seeing these events laid out, I can’t help but notice the natural progression they propose for participants.

Assuming you are female, every one of the events above brings attention to Women in Technology when you participate!   It’s also a great time to encourage group involvement and learning by hosting workshops that coincide with any of the events listed above.

Ok, so what exactly is Mo’Code Movember?  This is a month-long event hosted by CougarCS @ the University of Houston which aims to bring quality to the typical 24-hour hackathon format by using a month-long running format with weekly check-ins, while requiring that submissions themselves are checked in to GitHub.  We want to teach and encourage proper software engineering practices over quick and dirty mish-mashing of APIs.  Have an itch you need to scratch?  Mo’Code Movember is the opportunity to get it done!

CodeDay Houston is a 24-hour event where students passionate about technology get together and build cool things!  You pitch ideas, form teams, and build a cool app or game in 24 hours!  The best projects will even be rewarded with prizes.  The event is open to high school and college students and boasts participation from 27 cities around the US!

Hacktoberfest is a hackathon with a tip of the hat to fall, Oktoberfest, and Bavarian flair!  If you work in interface design or graphic design, or you currently build websites, apps, and services, you are probably a good fit!

3 Day Startup is an entrepreneurship education program designed for university students with an emphasis on learning by doing. 3 Day Startup teaches entrepreneurial skills to university students in an extreme hands-on environment.

Let’s assume you are a CS student and you have an awesome idea you have been dying to work on.  Maybe you just want to try coding something challenging, or you want to learn what all this development stuff is about because you are new to the scene.  You could vet that idea at the Mo’Code Movember kickoff and learn everything you need to know to get that code placed up on GitHub where it can be properly maintained (KILLER resume line!).  The next day, you bring your idea out to CodeDay Houston and find some like-minded individuals who you recruit to your project, you bang out a minimum viable product, and you end up placing in the top 5!  Next week, you take it out to Hacktoberfest and get the UI polished by the wonderful design divas who will be in attendance.  Sweet, thankfully you had visions of grandiosity early on, so you beat the October 28 deadline to sign up for 3 Day Startup.  You bring your baby out and discover some pals in the business school who prove out your product using market analysis, product validation, and other things you never considered.  You pitch the idea that Sunday and now see a clear vision of what you want to do.  The following week, you showcase your project to the most difficult audience of all, your family at Thanksgiving dinner, and they provide you feedback that everyone before was scared to share!  Taking into account those nitpicky suggestions Uncle Jeff gave you, you incorporate them into your project and show off a polished product at the Mo’Code Movember showcase.  You now have invaluable experience at what it takes to be an indie developer, OR maybe none of this happened but you learned invaluable skills along the way!  Oh, and bonus if you are female, because you did all this and represented Women in Technology!


Full text schedule with hyperlinks:

October 11 to December 12:
International Women’s Hackathon –

November 7:
University of Houston Mo’Code Movember Kickoff – A GitHub-hosted, month-long competition where creations will be showcased at UH on December 5

November 8 & 9:
Code Day Houston –

November 13:
Mo’Code Movember Checkpoint

November 13 to 14:
Hacktoberfest –

November 20:
Mo’Code Movember Checkpoint

November 21 to 23:
3 Day Startup – APPLY BY OCTOBER 20!

November 27 to 28:
Thanksgiving Holiday – Show your family the awesome thing(s) you created and add those finishing touches!

December 5: Mo’Code Movember Showcase – Show your colleagues the awesome thing(s) you created!

Kinect4NES – Control your classic NES with the Power of Kinect v2


Recently, I have found myself becoming involved in the exciting world of “IoT”, or the Internet of Things.  All of this started while attending a presentation on the subject put on by my fellow colleague Bret Stateham.  If you are unfamiliar with the concept of “IoT”, I like to think of it as a programmable system of input sensors or polling services that interact with a physical device, optionally storing the data those sensors receive in a web service where it can be processed for patterns to facilitate things like forecasting.  In short, we are connecting things that are inherently offline to the internet.

This particular project does not quite fit the definition above, as the end result is a non-network-connected thing (a classic NES console) connected to a modern sensor (the Kinect V2).  However, it could easily be modified to fit such a description.  For example, this concept could be extended to allow the public to access and control a physical NES through a web interface (think Twitch Plays Pokémon) ~Coming Soon~ OR it could allow users to upload Kinect gesture profiles online that could be pulled down through the application to allow better control in certain games.  Nonetheless, it leverages concepts that are integral to most “IoT” projects, specifically hardware interface construction, software interfacing, and application development.

I’d like to elaborate a little more on the inspiration for this project, as I would really like to tear down any barriers currently holding able developers back from breaking into this field.  At the end of Bret’s presentation, he showed a quick demo of an Intel Galileo board running Windows for IoT that controlled a blinking LED, with breakpoints set in Visual Studio.  Not exactly jaw-dropping at surface value, but when looked at for what it can enable rather than what it is specifically doing, you may find an opportunity to expand the possibilities of your code.  It dawned on me that this blinking LED demo was all I needed to know to let computer code interact with physical objects.  I began thinking about everything like a blinking-LED project: SMS notifications from my washer/dryer/dishwasher when a cycle completes, automating the addition of chemicals to a swimming pool, or firing a rocket when a threshold of retweets is reached on a particular hashtag.  All of these become comparatively simple problems when looked at through the lens of turning on a light when a certain condition is met!  I soon found myself pondering the idea of mixing old, nostalgic technology with the bleeding edge.  What if I could control a classic NES with a Kinect V2 device?  Not through an emulator, but a physical, rectangular, gray-box NES from 1984.



  • An NES console with a game to test
  • An NES controller OR some wiring skills and a CD4021BE 8-bit shift register
  • 12 strands of wire (Kynar recommended)
  • 8 × 1 kΩ resistors (technically any value from 1 kΩ to 50 kΩ should suffice)
  • 2 × 3.6 kΩ resistors (again, higher is not necessarily bad)
  • An IoT board capable of running Firmata, e.g. an Intel Galileo or Arduino Uno
  • A Kinect V2 Sensor for Windows OR the recently announced Kinect V2 Adapter and an existing Xbox One Kinect sensor
  • A machine capable of running the Kinect V2 SDK


In my writeup, I am going to assume you have zero experience with hardware development, which is fitting because I literally had no idea how to even blink an LED when I started this project last week.  I am also going to assume you want to know how to achieve the final product from a blank slate, so let’s start by breaking the problem into sub-problems.



We want to use a Kinect V2 Sensor to control games on a physical NES



1. We need to interface with the NES controller port using computer code

2. We need to speak to that hardware interface through a software interface, preferably in C#

3. We need to create an application that takes input from the Kinect V2 Sensor and processes it through the software interface, into the hardware interface, where it can reproduce button presses on the NES console based on defined gestures.


Sub-Problem #1 – Creating a controller interface in hardware

We need to understand how an NES controller works.  I found an excellent article on the subject @ the PoorStudentHobbyist Blog.  I highly recommend giving it a read.  We learn that the NES controller operates using a 4021B 8-bit shift register wired to 8 inputs (the 4 D-Pad directions plus Select, Start, A, and B buttons).  Given this knowledge, we can build an interface in a couple of ways.  One would be to use an 8-bit shift register emulator like the one described @ MezzoMill; alternatively, we can leverage the physical hardware within an NES controller to create the interface, or substitute a comparable 8-bit shift register.  Coincidentally, I came across 2 NES controllers brand new in the box at a local flea market for $6 a week before starting this project.  I considered it a sign and went through with deconstructing the controllers to get the parts I needed.

I removed the screws on the back of one of the controllers, opened it up, and desoldered the 5-wire braid that connects to the controller port, along with the 4021B shift register.  With the board disassembled, you can determine the pinouts to see which wires and buttons attach to which pins on the 4021B using a multimeter, or via the painstaking process of tracing with your finger.

The 4021B Shift register:


Let’s assume you have never used an Arduino and have no idea what it is or does.  All you really need to know is that it is a programmable device that can turn certain digital pins on and off through programs referred to as sketches.  Those pins will essentially become our buttons, where a button press is signaled by driving a pin low (see the PoorStudentHobbyist blog for details).  Ergo, we are basically going to create a circuit and wire up a glorified blinking-LED demo, except the blinking light is now an NES controller button.

I followed the pinouts and wiring guide used @ PoorStudentHobbyistBlog, but I did not use the proposed switch.  At this point, try running a sample program to verify that your interface works by sending a low signal to the Start button every few seconds or so; the blog post above includes a sample for exactly that.
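To make the shift-register behavior concrete, here is a small simulation of the read protocol (an illustration only, not code from the project; I am using Python for readability even though the real interface is an Arduino sketch).  The console pulses a latch line to snapshot all eight buttons into the 4021B, then clocks out one bit per button, with a pressed button reading low:

```python
# Simulation of the NES controller read protocol: latch, then clock out
# eight bits, active low. Button order follows the standard NES order.
BUTTON_ORDER = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

class ShiftRegister4021:
    """Models the parallel-load / serial-out behavior of the 4021B."""

    def __init__(self):
        self.bits = [1] * 8  # all high = nothing pressed

    def latch(self, pressed):
        # Parallel load: capture current button states (0 = pressed).
        self.bits = [0 if name in pressed else 1 for name in BUTTON_ORDER]

    def clock(self):
        # Serial shift: emit the next bit on each clock pulse.
        return self.bits.pop(0)

def read_controller(register, pressed):
    register.latch(pressed)
    return [register.clock() for _ in range(8)]

if __name__ == "__main__":
    reg = ShiftRegister4021()
    # Holding A and Right: the first and last bits read low.
    print(read_controller(reg, {"A", "Right"}))  # [0, 1, 1, 1, 1, 1, 1, 0]
```

Our Arduino interface sits on the parallel-input side of this picture: driving a pin low is equivalent to holding that button down when the console latches.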

Here is a picture of the completed interface:


Sub-Problem #2 – Speaking to our Hardware Interface from C#

We are going to leverage open-source software to interface with our board via C#.  A very common scenario when developing for IoT boards is controlling the pinouts via an external interface, i.e. a Web API or, in our case, the serial port.  Lucky for us, Firmata is an open-source protocol for doing exactly that!  Firmata is so pervasive that it is actually included as a default sketch in the Arduino IDE.  Simply upload the StandardFirmata sketch to your device.  Now we need to set up communication via C#.  Again, lucky for us, we can leverage Arduino4Net, which lets us speak Firmata to control our board from C#.  Bonus: Arduino4Net can be brought in easily using NuGet!  At this point, you will want to create a simple test to verify that Arduino4Net is properly passing signals to your board.  I have included one as part of the Kinect4NES project.
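The project itself talks to Firmata through Arduino4Net in C#; as a language-neutral sketch of the same control flow, here is the idea in Python against a stand-in board object (the pin numbers are hypothetical, use whatever pins you wired your interface to):

```python
# Sketch of the Firmata-style control flow: map NES buttons to digital
# pins and pulse a pin low to simulate a press. The pin assignments are
# made up for illustration.
import time

BUTTON_PINS = {"A": 2, "B": 3, "Select": 4, "Start": 5,
               "Up": 6, "Down": 7, "Left": 8, "Right": 9}

class FakeBoard:
    """Stand-in for a Firmata connection; records digital writes."""
    def __init__(self):
        self.writes = []
    def digital_write(self, pin, value):
        self.writes.append((pin, value))

def press(board, button, hold_s=0.05):
    # A button press is an active-low pulse on the mapped pin.
    pin = BUTTON_PINS[button]
    board.digital_write(pin, 0)   # low = pressed
    time.sleep(hold_s)            # hold long enough for the console to latch
    board.digital_write(pin, 1)   # high = released

if __name__ == "__main__":
    board = FakeBoard()
    press(board, "Start", hold_s=0)
    print(board.writes)  # [(5, 0), (5, 1)]
```

Swapping the fake board for a real Firmata connection is the only change needed to drive actual hardware; the press/release sequencing stays the same.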


Sub-Problem #3 – Creating the Kinect Application to simulate control signals based on gestures

We are going to connect the Kinect V2 sensor and create a gesture scheme that signals button presses through our interface!  The Kinect V2 SDK includes something called the SDK Browser 2.0.  Inside you will find a Body Basics XAML sample.  Install the sample and copy it somewhere you can modify it.

The Kinect V2 SDK Browser:


We begin by intercepting the Reader_FrameArrived method when dataReceived is true.  Keeping in mind that the Kinect can track more than one body, we take one of those bodies and call CalcController(Body body).  Inside this method, we set up the logic for choosing which pin to signal based on defined gestures, which are determined from the joint tracking points.  All of this starts with trial and error, but essentially you make decisions based on where the joints are in relation to each other.  We could also train a gesture using the Gesture Builder tool included with the SDK, but that is for a later post =)  Working with my colleague Jared Bienz, we were able to blindly construct the gestures during a visit to the local Microsoft Store.  Simply find some space, get a body tracked, start doing some gestures, and determine how best to capture them!
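To give a flavor of what that joint-comparison logic looks like, here is a rough Python sketch (the real CalcController is C#, and the joint names, thresholds, and gesture choices below are illustrative, not the project's actual values):

```python
# Illustrative gesture logic: compare tracked joint positions against
# each other and emit a set of buttons to press. Coordinates are
# (x, y) with y increasing upward; thresholds are made-up examples.
def gestures_to_buttons(joints):
    """joints: dict of joint name -> (x, y)."""
    buttons = set()
    head_y = joints["Head"][1]
    if joints["HandRight"][1] > head_y:
        buttons.add("A")            # right hand raised above the head
    if joints["HandLeft"][1] > head_y:
        buttons.add("B")            # left hand raised above the head
    spine_x = joints["SpineMid"][0]
    if joints["HandRight"][0] - spine_x > 0.4:
        buttons.add("Right")        # right hand extended out to the side
    if spine_x - joints["HandLeft"][0] > 0.4:
        buttons.add("Left")         # left hand extended out to the side
    return buttons

if __name__ == "__main__":
    joints = {"Head": (0.0, 1.7), "SpineMid": (0.0, 1.0),
              "HandRight": (0.6, 1.8), "HandLeft": (-0.1, 1.0)}
    print(sorted(gestures_to_buttons(joints)))  # ['A', 'Right']
```

The trial-and-error part is tuning those relative comparisons so gestures register reliably without false positives, which is exactly what the Microsoft Store session was for.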

The Kinect4NES Application:




Once you have all this, put the pieces together and turn on your favorite game!  I chose the pinnacle classic Super Mario Bros. 3, which worked well enough with our scheme to actually let you play through the first level!  The next thing to consider is trying out other games and possibly allowing for multiple gesture profiles.  For example, I have created a stub in the hosted project to play Mario by physically jumping and running as opposed to using hands.  All in all, this was an extremely fun hack that let me bridge my interest in classic video games with modern gaming peripherals!

If you want to get the bits and follow along with updates or even contribute to this project, you may want to check out the GitHub Project Page for Kinect4NES.


Introducing Azure Media Services – Uploading File Content to Azure Media Service

Azure Media Services allows you to leverage the highly scalable Azure infrastructure to deliver media content on demand.  It boasts the ability to deliver content at a scale equivalent to that used to distribute the Olympics to viewers worldwide!  In addition, it supports live streaming multi-bitrate broadcasts, encoding content on the fly, and configuring accessibility to your content.  Azure Media Services is currently in Preview in the Microsoft Azure Portal, so let’s take it for a test spin with a very simple demo: uploading file content and creating a highly scalable, publicly distributed link to that content!


1. Create a Media Service instance by selecting it from the panel on the left

1 - Create Media Service


2. Name the Media Service, choose the region where you would like it to exist, and create a new storage account for your service

2 - Name Media Service


3. Wait for Azure to complete the creation of your new Media Service

3 - Media Service Activated


4. Click on the service to view the “Getting Started” page

4 - Media Service Dashboard


5. Click the upload button at the bottom of the screen and select the file you wish to upload

5 - Media Service Upload



6. Click the “Content” tab and notice your uploaded content; take note that its “Publish Url” value is “Not Published”


6 - Media Service Content


7. Click the “Publish” button and the “Publish Url” value will change to a URL

7 - Media Service Content Publish


8. Copy and paste the resulting Url and watch your content stream from Azure to your device!

8  - Media Service Content Playing

Although this demo may seem very basic, it highlights some interesting ideas.  You could theoretically utilize Azure Media Services to create your own video-on-demand service capable of serving a global audience.  This could be consumed by client applications on any platform capable of playing content supported by the Azure Media Services Encoder, and even monetized via access controls.  In the next article, we will look at how to create a near-infinitely scalable live broadcast using Azure Media Services Channels.  Stay tuned for more on this topic in the coming weeks!

Using Web App Template in App Studio to Publish WP8 in 25 Steps

Microsoft’s App Studio portal allows developers to build rapid prototypes as well as full blown applications quickly and easily on any machine with a capable web browser.  This has resulted in thousands of published apps from developers around the world!  In today’s post, I am going to focus on one of the many provided templates, the Web App Template.


The Web App Template started as a project on CodePlex that allows for the creation of an app experience on Windows Phone and Windows 8 by leveraging your existing website.  This approach comes with various pros and cons: the more responsive the original website’s design, the better the layout when wrapped into an application; on the other hand, the app requires internet access for full functionality.  Still, this can be a great opportunity for developers looking to create low-maintenance applications that render across multiple platforms, e.g. Android, iOS, Windows, etc.


Recently, the Web App Template was brought over as an available template in App Studio, offering developers the ability to easily generate wrapped applications over existing mobile-ready websites.  This can aid companies in rapidly creating a presence in the Windows ecosystem.  Using this approach, I was able to bring my mobile-ready website @ over to the Windows Store as an app in only 25 steps!



1. A mobile-ready / responsive website (it is encouraged to use only IP or sites that you own)

2. A free App Studio account available @

3. A Windows Store Developer Account – If you are reading this blog, I may be able to help you out with getting access, e-mail “p decarlo @ Micro soft dot com”


Step 1:

Login to App Studio, and start a new project.

When prompted to Choose your template, select the Web App Template


Step 2:

Click the Create Button



Step 3:

Note the default template which wraps

The fields should be fairly intuitive


Step 4:

Modify the values below:

1. App Title

2. Base Url – Note the site may render differently on a device; do not assume it will always look like the simulator on the left

3. Message (I did not care for the default wording)

Then click Save


Step 5:

Select Themes from the top menu

Here you can optionally change the color of the Application Bar

Note: Other template modifications won’t render changes in the Web App Template


Step 6:

Let’s gather logos using Bing Image Search

Note: I am searching for Free to modify, share, and use commercially tagged images


Step 7:

Create a nice, large square logo

Note: It is recommended to use a large layout 400 x 400 or greater

App Studio will resize the large square image in the next step

7_TMV Logo

Step 8:

Select Tiles from the Top menu

Upload your image as the Small and Large Tile

Provide appropriate text for your tile

You may also click over to the Splash and Lock menu to modify the Splash Screen


Step 9:

Select Publish Info from the top menu

Click to modify the logo, again I used the same image from the previous step


Step 10:

9_TMV_Logo

Click Finish, then Generate and select the options below


Step 11:

Verify your app was successfully generated


Step 12:

Head to the Windows Store Dev Center

Select Dashboard


Step 13:

Select Windows Phone Store


Step 14:

Select Submit App


Step 15:

Select App Info


Step 16:

Reserve your App Name

Make sure this name matches the name given to your app in App Studio

If the name is unavailable, find one that is and regenerate your app in App Studio using the new name

Scroll down and select the appropriate categories and distribution options for your app


Step 17:

Head back over to your generated App Studio App and Download the publish package

Here you can also test your app by side-loading using the Installable Packages QR code

More information on these features can be found by clicking “How to”


Step 18:

Note the location of the downloaded publish package


Step 19:

Locate your downloaded publish package



Step 20:

Extract the .XAP file within


Step 21:

Head to Upload and Describe your packages in the Windows Phone Store


Step 22:

Click Add new and select your extracted .XAP file from the previous steps


Step 23:

Add the necessary description and keyword info

In addition you will want to provide the necessary images

I resized my logo used for the live tiles in order to create the 300×300 App Tile Icon

I used the Snipping Tool to grab screenshots from the App Studio previewer and resized to 1280×800

You could also obtain screenshots by sideloading your package into the emulator if you have the WP8 SDK installed


Step 24:

Click Review and Submit


Step 25:

Note the summary of your app submission

If everything looks good, click Submit




Download the published app!



Adblock for Buffalo WHR-G54 running DD-WRT

You can install and run adblock software on each of your devices at home to create an ad-free experience on those devices, or you can block ads at the router level and cover every device on your network.  Using a WHR-G54 router flashed with DD-WRT, I was able to implement such a solution as a bash script that executes on the router during boot.  As you would expect, this consists of hacks upon hacks upon hacks.

High-level overview:

  • Create a script and house it in RAM
  • Call the script to download and parse the adblock list into a hosts file
  • Append the hosts in that file to the dnsmasq config
  • Restart the web portal on port 81
  • Redirect port-80 traffic destined for the gateway IP to the webserver on port 81 via firewall rules, so things appear vanilla
  • Download pixelserv from a remote location, store it in RAM, and start it on the current IP block’s address ending in 254
  • Deploy a cron job to refresh the adblock list every Friday at 11:45 PM while the router is up
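The script on GitHub does all of this in bash, but the core parsing step is easy to illustrate on its own.  Here is a Python sketch of turning a downloaded hosts-format blocklist into dnsmasq address rules pointing at pixelserv (the 192.168.1.254 address below is an assumption standing in for "current IP block ending in 254"):

```python
# Sketch of the blocklist parsing step: rewrite hosts-file entries as
# dnsmasq "address=/domain/ip" rules aimed at the pixelserv instance.
PIXELSERV_IP = "192.168.1.254"  # assumption: LAN block ending in .254

def hosts_to_dnsmasq(hosts_text, target_ip=PIXELSERV_IP):
    rules = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        parts = line.split()
        if len(parts) < 2:
            continue                           # skip blank/malformed lines
        for host in parts[1:]:                 # one line may list many hosts
            if host != "localhost":
                rules.append(f"address=/{host}/{target_ip}")
    return rules

if __name__ == "__main__":
    sample = "127.0.0.1 localhost\n0.0.0.0 ads.example.com # banner ads\n"
    print(hosts_to_dnsmasq(sample))
```

With dnsmasq resolving every blocked domain to pixelserv, ad requests get answered instantly with a transparent pixel instead of timing out, which is what keeps pages rendering cleanly.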

To install, simply copy and paste the script below into the window @ Administration => Commands and click “Save Startup”


Get the script @ Github