Sending messages to Azure Event Hub with Spark over AMS API Proxy

In this article, I will describe how to publish data from a Spark Core to an Azure Event Hub for real-time processing using Azure Mobile Services as a message proxy.

Spark OS is a distributed operating system for the Internet of Things that brings the power of the cloud to low-cost connected hardware.  Spark provides an online IDE for programming a Wi-Fi-enabled, Arduino-like device known as the Spark Core.  Azure Event Hubs is a highly scalable publish-subscribe ingestor that can take in millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Once collected into Event Hubs, you can transform and store the data using any real-time analytics provider or with batching/storage adapters.

To begin, I took the approach of using the Event Hubs REST API Send Event operation.  This seemed straightforward: simply create a request with the appropriate request headers over HTTP, as both HTTP and HTTPS are listed as supported in the documentation.  However, when sending this request over HTTP with the necessary “Authorization” header, I received “Transport security is required to protect the security token”.  This poses a bit of a problem, as the lightweight Spark device is unable to perform the computations necessary to send SSL requests.
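
For reference, here is a minimal sketch (ordinary desktop C#, not code that runs on the Spark) of what the Send Event call looks like from a machine that can do TLS.  This is effectively the request that the Mobile Service proxy described below will make on the Spark's behalf.  The namespace, hub name, and SAS token values are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SendEventSketch
{
    static async Task Main()
    {
        const string ns = "EVENTHUBNAMESPACE";
        const string hub = "EVENTHUBNAME";
        const string sasToken = "SharedAccessSignature sr=...&sig=...&se=...&skn=...";

        using (var client = new HttpClient())
        {
            // The Authorization header is what forces HTTPS: the same request
            // over http:// is rejected with "Transport security is required
            // to protect the security token".
            client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", sasToken);

            var body = new StringContent("{ \"temp\": \"76\", \"hmdt\": \"32\" }", Encoding.UTF8);
            body.Headers.ContentType =
                MediaTypeHeaderValue.Parse("application/atom+xml;type=entry;charset=utf-8");

            HttpResponseMessage response = await client.PostAsync(
                $"https://{ns}.servicebus.windows.net/{hub}/messages", body);

            Console.WriteLine((int)response.StatusCode);   // 201 Created on success
        }
    }
}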

Azure Mobile Services to the rescue!

I first created a new Azure Mobile Service with a JavaScript backend:

1- Create Mobile Service

1.5 - Create Mobile Service

Next, I created a new API within the service named “temp”:

2 - Create API

 

Finally, create an Azure Service Bus namespace with an Event Hub by following the instructions in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API”.

 

The idea was that I could send data to the Mobile Service API as documented in Brian Sherwin’s “Wiring Up the Spark Core To Azure” and then forward this data to the Event Hub using the information provided in Hypernephelist’s “Sending data to Azure Event Hubs from Node.JS using the REST API”, essentially creating a proxy via Azure Mobile Services to get data from the Spark into an Azure Event Hub.

 

Let’s begin by building the Event Hub proxy in the “temp” API.  This API requires custom Node.js modules, which can be installed by following Redbit’s “Using Custom NodeJS Modules with Azure Mobile Services”.  Follow the instructions and be sure to run an npm install for moment (https and crypto ship with Node.js itself), as these modules are required to generate the SAS token for sending data through the Event Hub REST API.

The actual API code is below (with heavy reliance on Hypernephelist’s example).  You will need to modify it by editing in the API editor within the Azure portal, or by modifying it on disk after cloning per Redbit’s instructions.  Be sure to edit the namespace, hubname, my_key_name, and my_key variables with the appropriate values from your Azure Event Hub.

3 - Azure API Editor

/************************ Begin AMS Code ************************/

var https = require('https');
var crypto = require('crypto');
var moment = require('moment');

exports.post = function(request, response) {
    sendTemperature(JSON.stringify(request.body));
    console.log(request.body);
    response.send(statusCodes.OK);
};

function sendTemperature(payload) {
// Event Hubs parameters
var namespace = 'EVENTHUBNAMESPACE';
var hubname ='EVENTHUBNAME';

// Shared access key (from Event Hub configuration) 
var my_key_name = 'KEYNAME'; 
var my_key = 'KEY';
    
// Payload to send
//payload = "{ \"temp\": \"100\", \"hmdt\": \"78\", \"subject\": \"wthr\", \"dspl\": \"test\"," + "\"time\": " + "\"" + new Date().toISOString() + "\" }";

// Full Event Hub publisher URI
var my_uri = 'https://' + namespace + '.servicebus.windows.net' + '/' + hubname  + '/messages';

// Create a SAS token
// See http://msdn.microsoft.com/library/azure/dn170477.aspx

function create_sas_token(uri, key_name, key)
{
    // Token expires in one hour
    var expiry = moment().add(1, 'hours').unix();

    var string_to_sign = encodeURIComponent(uri) + '\n' + expiry;
    var hmac = crypto.createHmac('sha256', key);
    hmac.update(string_to_sign);
    var signature = hmac.digest('base64');
    var token = 'SharedAccessSignature sr=' + encodeURIComponent(uri) + '&sig=' + encodeURIComponent(signature) + '&se=' + expiry + '&skn=' + key_name;

    return token;
}

var my_sas = create_sas_token(my_uri, my_key_name, my_key);

//console.log(my_sas);

// Send the request to the Event Hub

var options = {
  hostname: namespace + '.servicebus.windows.net',
  port: 443,
  path: '/' + hubname + '/messages',
  method: 'POST',
  headers: {
    'Authorization': my_sas,
    'Content-Length': payload.length,
    'Content-Type': 'application/atom+xml;type=entry;charset=utf-8'
  }
};

var req = https.request(options, function(res) {
  //console.log("statusCode: ", res.statusCode);
  //console.log("headers: ", res.headers);

  res.on('data', function(d) {
    //process.stdout.write(d);
  });
});

req.on('error', function(e) {
  //console.error(e);
});

req.write(payload);
req.end();

}

/************************* End AMS Code *************************/

Finally, we need to set up the Spark Core with the appropriate code to push data to the API in our Mobile Service. I leveraged HttpClient as it has great logging features for debugging and is a bit easier to wield than Spark’s lightweight TCPClient. I also include SparkTime.h to generate the timestamp for messages on the Spark itself. Simply flash this code to your Spark device, taking care to appropriately modify the AzureMobileService, AzureMobileServiceAPI, AzureMobileServiceKey, and deviceName variables. Note that the payload sent in this particular example corresponds to the expected payload in the Connect the Dots project from MSOpenTech. This implies that there will soon be support for the Spark Core in this amazing project!

4 - Spark Editor

/************************ Begin Spark Code **********************/

// This #include statement was automatically added by the Spark IDE.
#include "HttpClient/HttpClient.h"

// This #include statement was automatically added by the Spark IDE.
#include "SparkTime/SparkTime.h"
  

String AzureMobileService = "MOBILESERVICE.azure-mobile.net";
String AzureMobileServiceAPI = "APINAME";
char AzureMobileServiceKey[40] = "MOBILESERVICEKEY";
char deviceName[40] = "SparkCore"; 

UDP UDPClient;
SparkTime rtc;
HttpClient http;
  

void setup()
{
    rtc.begin(&UDPClient, "north-america.pool.ntp.org");
    rtc.setTimeZone(-5); // gmt offset
    Serial.begin(9600);
    delay(10000);
}

 
void loop()
{
    delay(5000);
    
    unsigned long currentTime;
    currentTime = rtc.now();
    
    String timeNowString = rtc.ISODateUTCString(currentTime);
    char timeNowChar[32]; // sized for an ISO 8601 timestamp; sizeof(String) would give the object size, not the text length
    strcpy(timeNowChar, timeNowString.c_str());
    
    char payload[120];
    snprintf(payload, sizeof(payload), "{ \"temp\": \"76\", \"hmdt\": \"32\", \"subject\": \"wthr\", \"dspl\": \"%s\", \"time\": \"%s\" }", deviceName, timeNowChar);
    
    http_header_t headers[] = {
        { "X-ZUMO-APPLICATION", AzureMobileServiceKey },
        { "Cache-Control", "no-cache" },
        { NULL, NULL } // NOTE: Always terminate headers with NULL
    };
    
    http_request_t request;
    http_response_t response;
    
    request.hostname = AzureMobileService;
    request.port = 80;
    request.path = "/api/" + AzureMobileServiceAPI;
    request.body = payload;

    http.post(request, response, headers);
    Serial.print("Application>\tResponse status: ");
    Serial.println(response.status);

    Serial.print("Application>\tHTTP Response Body: ");
    Serial.println(response.body);

}
/************************* End Spark Code ***********************/

Voila! I am able to verify my Spark is appropriately forwarding data to my ConnectTheDots portal!

5 - CTD Portal

We can also verify / debug by connecting to our Spark Core over serial and monitoring the output of HttpClient.

6 - Putty

I absolutely love developing on the Spark device due to its simple update process and convenient online IDE. Now, with the power of Azure, we can analyze data coming from one of these devices in real time!

You can find the latest code included in this project at the DXHacker/SparkEventHub repo on GitHub.

Training Kinect4NES to Control Mike Tyson’s Punch-Out!

Kinect4NES @ HackRice – First Time Player Knocks out Glass Joe

In a previous post, I talked about how to create an interface to send controller commands to an NES based on interaction with the Kinect v2.  The idea was successful, but I received a bit of feedback on the control being less than optimal and a suggestion that it would likely work well with a game like Mike Tyson’s Punch-Out.

This raised an interesting challenge: could I create a control mechanism that would allow me to play Mike Tyson’s Punch-Out using Kinect4NES with enough stability to accurately beat the first couple of characters?

Let’s first look at how control was achieved in the first iteration of Kinect4NES.  There are essentially two ways of reacting to input on the Kinect: a heuristic-based approach built on relatively inexpensive positional comparisons of tracked joints, or gesture-based tracking (either discrete or continuous).  For my initial proof of concept, I used the following heuristic-based approach:

 

Taken from CalcController(Body body) in MainWindow.xaml.cs

/****************************************************************
* DPad from Calc
***************************************************************/

var dpadLeft = ((leftWrist.Position.Y > mid.Position.Y - 0.20) && (leftWrist.Position.X < mid.Position.X - 0.5));
var dpadRight = ((rightWrist.Position.Y > mid.Position.Y - 0.20) && (rightWrist.Position.X > mid.Position.X + 0.5));
var dpadUp = ((leftWrist.Position.Y > head.Position.Y) || (rightWrist.Position.Y > head.Position.Y));
var dpadDown = ((spineBase.Position.Y - knee.Position.Y) < 0.10);
var start = ((head.Position.Y < shoulder.Position.Y));

 

As you can see, this is a basic approach that simply compares current joint positions and, if a condition is satisfied, activates the corresponding controller input.

Ideally, we would like natural body movements to drive our interaction with Mike Tyson’s Punch-Out.  To begin, we need to familiarize ourselves with the way the game is controlled by the NES controller.  I was lucky enough to come across a copy of the game at a local flea market around the time this project idea was forming in my head, the same market where I had found boxed NES controllers a couple of weeks earlier.  I found an online manual which described the various game inputs and used these as a basis for defining my gestures.

 

-)  : Dodge to right
(-  : Dodge to left
DOWN: Once: block
      Twice rapidly: ducking

--- Left body blow (B + UP = Punch to left face)
|    -- Right body blow (A + UP = Punch to right face)
|    |
B    A

(When Mac is knocked down, press rapidly and he'll get 
up.)

SELECT: If pressed between rounds, Doc's encouraging
        advice can increase Mac's stamina
START:  Uppercut (If the number of stars is 1 or
	greater)

 

Take note of how some of these inputs are button combinations or rapid presses.  We will revisit later how I optimized the mechanism to account for these cases.

To begin creating the gestures, I started a new solution using the Visual Gesture Builder Preview included in the Kinect v2 SDK and created a series of Discrete Gesture projects, one for each of the behaviors identified in the Punch-Out manual.

Gestures

For each of these projects, I had my brother perform the chosen gesture with approximately 20 positive cases (gestures that should be considered performed successfully) and 5 or so negatives (gestures that should not be considered performed successfully).  E.g., for the Uppercut, he would perform 20 uppercuts with the right hand for the positive cases and a few regular left and right punches for the negative cases.  This way, we won’t accidentally register an uppercut when a regular left or right punch is thrown.

KinectStudio

After obtaining a successful recording, we add the clip to the appropriate project in our Visual Gesture Builder solution.  Here we meticulously tag the key frames to indicate where a successful gesture is performed.  As a result, areas that are not tagged are considered negative cases.

Tagging

We then perform a build of the project, which uses the AdaBoost algorithm to learn the intended positions of the joints and create a state machine for determining a successful gesture.  Each project outputs a .gba file; these are combined into a .gbd file when the solution is built.

Build
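
As a point of reference, here is a minimal sketch of how the resulting .gbd is consumed at runtime.  The types come from the Microsoft.Kinect.VisualGestureBuilder assembly shipped with the SDK; the database path and class name are illustrative assumptions, not the project's actual code.

using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

public class PunchOutGestureSource
{
    private readonly VisualGestureBuilderFrameSource vgbFrameSource;
    private readonly VisualGestureBuilderFrameReader vgbFrameReader;

    public PunchOutGestureSource(KinectSensor sensor, string databasePath = @"Database\PunchOut.gbd")
    {
        vgbFrameSource = new VisualGestureBuilderFrameSource(sensor, 0);

        // Load every trained gesture (Uppercut, Dodge_Left, ...) from the built .gbd.
        using (var database = new VisualGestureBuilderDatabase(databasePath))
        {
            foreach (Gesture gesture in database.AvailableGestures)
            {
                vgbFrameSource.AddGesture(gesture);
            }
        }

        vgbFrameReader = vgbFrameSource.OpenReader();
        vgbFrameReader.IsPaused = true;   // resume once we have a body to track
    }

    // Called when the body reader hands us a tracked body to follow.
    public void SetTrackingId(ulong trackingId)
    {
        vgbFrameSource.TrackingId = trackingId;
        vgbFrameReader.IsPaused = false;
    }
}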

We repeat this for all of our projects and then verify the .gbd with “File => Live Preview” in Visual Gesture Builder.  This allows us to see the signal generated by our current pose for all produced gesture projects, which is very handy for determining whether a given gesture creates interference with another.  In the image below, you can see that a very clear signal is generated by the uppercut pose.

Verify

With the recorded gestures verified, I looked at the sample code for the “Visual Gesture Builder – Preview” project included in the Kinect SDK Browser.

SDKBrowser

 

From here, I incorporated the relevant bits into GestureDetector.cs.  In my original implementation, I iterated through all recorded gestures and employed a switch to perform the button press when one was detected.  This proved to be inefficient and created inconsistent button presses.  I improved this significantly in my second update by using a dictionary to hold a series of Actions (parameterless delegates that return void) and a parallel foreach, eliminating the cyclomatic complexity of the previous switch while processing all potential gestures in parallel.  I also created a Press method for simulating presses.  This allowed me to send in any combination of buttons to perform behaviors like HeadBlow_Right (UP + A).  I also implemented a Hold method to make it possible to perform the duck behavior (press down, hold down).  In the final tweak, I implemented a method to produce a RapidPress for the Recover gesture.  This let me reproduce a well-known tip in Punch-Out where you can regain health between rounds by rapidly pressing Select.
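
Here is a sketch of that dictionary-of-Actions approach, assuming the Microsoft.Kinect.VisualGestureBuilder types.  The gesture names, the confidence threshold, and the Press/Hold/RapidPress stubs are illustrative stand-ins for what actually lives in GestureDetector.cs.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Kinect.VisualGestureBuilder;

public enum NesButton { A, B, Select, Start, Up, Down, Left, Right }

public class GestureToButtonMapper
{
    // Each trained VGB gesture maps to the button behavior it should fire.
    private readonly Dictionary<string, Action> gestureActions;

    public GestureToButtonMapper()
    {
        gestureActions = new Dictionary<string, Action>
        {
            { "Punch_Left",     () => Press(NesButton.B) },
            { "Punch_Right",    () => Press(NesButton.A) },
            { "HeadBlow_Right", () => Press(NesButton.Up, NesButton.A) },
            { "Uppercut",       () => Press(NesButton.Start) },
            { "Duck",           () => Hold(NesButton.Down) },
            { "Recover",        () => RapidPress(NesButton.Select) }
        };
    }

    // Called once per VGB frame with the detection results for every gesture.
    public void Handle(IReadOnlyDictionary<Gesture, DiscreteGestureResult> results)
    {
        // Evaluate all gestures in parallel instead of walking a big switch.
        Parallel.ForEach(results, pair =>
        {
            Action fire;
            if (pair.Value != null &&
                pair.Value.Detected &&
                pair.Value.Confidence > 0.6f &&          // threshold is a guess
                gestureActions.TryGetValue(pair.Key.Name, out fire))
            {
                fire();
            }
        });
    }

    // These drive the Firmata pins; bodies are omitted in this sketch.
    private void Press(params NesButton[] buttons) { /* pulse the pins low, then release */ }
    private void Hold(NesButton button)            { /* pull the pin low and leave it */ }
    private void RapidPress(NesButton button)      { /* pulse the pin repeatedly for a short burst */ }
}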

This was a rather interesting programming exercise: imagine coding at 2 in the morning with the goal of optimizing code for the intent of knocking out Glass Joe in a stable, repeatable manner.  The end result wound up working well enough that a ‘seasoned’ player can actually TKO the first two characters with relative regularity.  In the video at the top of this post, the player had actually never used Kinect4NES and TKO’d Glass Joe on his first try.  As a result, I am satisfied with this experiment.  It was certainly a fun project that allowed me to become more familiar with programming for the Kinect while also having the joy of merging modern technology with the classic NES.  For those interested in replicating it, you can find the source code on GitHub. If you have any ideas on future games that you would like to see controlled with Kinect4NES, please let me know in the comments!

Porting Open Source Libraries to Windows for IoT (mincore)

Microsoft is bringing Windows to a new class of small devices. Riding the crest of the “Internet of Things” movement, Microsoft is looking to capitalize on the device and sensor capabilities of popular development boards. Recently, members of the Windows Developer Program for IoT have been able to gain access to a build of Windows which supports the Intel Galileo chipset.

Bringing Windows to small devices is a huge feat that opens the door to many development opportunities.  Of course, this means a lot of existing code can be brought over to aid in creating IoT solutions.  This post aims to identify the specifics of compiling two open source libraries for this new version of Windows.

The libraries in question are apache-qpid-proton, a lightweight messaging framework for sending AMQP messages, and OpenSSL, an open-source library implementing the Secure Sockets Layer protocols.

Why these two libraries?  They were necessary for creating a Win32 application capable of sending AMQPS messages up to an Azure Event Hub as part of Galileo device support in the super awesome Connect the Dots project from MSOpenTech. More importantly, we get to encounter two rather distinct compilation exercises.  Apache Qpid can send AMQP (without the S) messages on its own, but Azure requires that these be sent over SSL, so we need to compile Apache Qpid against OpenSSL to get AMQPS support.  In addition, Apache Qpid gives us a Visual Studio solution to work with, while OpenSSL is built using a makefile in combination with Perl and Python processors for producing the makefile itself.  This post will explain these scenarios and the changes required to target Windows for IoT through the Visual Studio project for Apache Qpid and the makefile for OpenSSL.

Let’s begin by looking at the default property configuration for the Apache Qpid Visual Studio Project:

qpid-proton-default

Normally, when compiling a Win32 application for a desktop PC, we link against the Win32 libraries contained in C:\Windows\System32.  When targeting the Intel Galileo board, you will notice that the default Intel Galileo Wiring App template contained in the Windows Developer Program for IoT MSI links against a single library, mincore.lib.  Jer’s blog goes into the best-known detail on what this is. Long story short, we need to link against mincore.lib in order to obtain code capable of running on the Galileo, as the mappings for system and Win32 functions are completely different in Windows for IoT and are contained in this particular lib.  This sets the basis for rules #1 and #2.

 

1. Remove all references to System32 libs and replace them with a reference to mincore.lib

GalileoAppLinkerProperties

 

2. For all references removed in step 1, add these libs to the Ignore Specific Default Libraries collection. This ensures that the linker will not attempt to link against them, as we want to link against references in mincore only.  Note: I have added compatible OpenSSL binaries to Additional Dependencies to enable OpenSSL support.

IgnoreSpecificLibraries

In addition, we need to consider the hardware present on the Galileo board itself.  Intel outfits the board with an Intel® Quark™ SoC X1000 application processor: a 32-bit, single-core, single-thread processor compatible with the Intel® Pentium® instruction set architecture (ISA), operating at speeds up to 400 MHz.  This processor does not support enhanced instruction sets such as SSE, SSE2, AVX, or AVX2.  This sets the basis for rule #3.

 

3.  Ensure all code is compiled with the /arch:IA32 compiler flag

 

GalileoCodeGenerationProperties

You can now build Apache-Qpid-Proton to target Windows for IoT on the Intel Galileo; however, in order to be useful, we need to compile against OpenSSL, as Azure Event Hubs require that we send messages using AMQPS.  Without OpenSSL support, we can only send AMQP messages, which will be ignored by the Azure Event Hub.

There is an excellent article on compiling Apache-Qpid Proton against OpenSSL for Windows @ https://code.msdn.microsoft.com/windowsazure/Using-Apache-Qpid-Proton-C-afd76504

I don’t want to reproduce the content there, so let’s talk about the changes necessary to target Windows on the Galileo board.

In step A.3, the author describes the process for compiling the OpenSSL dynamic-link libraries using “nmake -f ms\ntdll.mak install”.  Nmake is Microsoft’s tool for building makefiles.  You can access it within a Visual Studio command prompt, from its actual location in C:\Program Files (x86)\Microsoft Visual Studio X.X\VC\bin, or by calling C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\vsvars32.bat in a standard command prompt so that the path to nmake is available in your current shell.  The problem is that the default makefile is configured to build against Win32, i.e. Win32 on the desktop.

Let’s take what we learned above and apply it to the ntdll makefile:

Inside the untouched ntdll.mak you will see the following:

# Set your compiler options
PLATFORM=VC-WIN32
CC=cl
CFLAG= /MD /Ox /O2 /Ob2 -DOPENSSL_THREADS  -DDSO_WIN32 -W3 -Gs0 -GF -Gy -nologo -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -D_CRT_SECURE_NO_DEPRECATE -DOPENSSL_USE_APPLINK -I. -DOPENSSL_NO_RC5 -DOPENSSL_NO_MD2 -DOPENSSL_NO_KRB5 -DOPENSSL_NO_JPAKE -DOPENSSL_NO_STATIC_ENGINE   
APP_CFLAG= /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /Zi /Fd$(TMP_D)/lib -D_WINDLL
SHLIB_CFLAG=
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
SHLIB_EX_OBJ=
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=ws2_32.lib gdi32.lib advapi32.lib crypt32.lib user32.lib

# The OpenSSL directory
SRC_D=.

LINK=link
LFLAGS=/nologo /subsystem:console /opt:ref /debug

 

We essentially have a section of the makefile which outlines the compiler flags and linker flags.  Here we can apply the rules from above to create a makefile that will produce a Win32-compatible library targeting the Intel Galileo.

Applying Rule #1, we remove the libs mentioned in EX_LIBS and replace them with mincore.lib

Applying Rule #2, we take the libs that were in EX_LIBS and add them to the linker flags (LFLAGS) as /NODEFAULTLIB:NAMEOFLIBRARY

Applying Rule #3, we add /arch:IA32 to each compiler flag (*CFLAG)

 

This yields the following changes:

# Set your compiler options
PLATFORM=VC-WIN32
CC=cl
CFLAG= /arch:IA32 /MD /Ox /O2 /Ob2 -DOPENSSL_THREADS  -DDSO_WIN32 -W3 -Gs0 -GF -Gy -nologo -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -D_CRT_SECURE_NO_DEPRECATE -DOPENSSL_USE_APPLINK -I. -DOPENSSL_NO_RC5 -DOPENSSL_NO_MD2 -DOPENSSL_NO_KRB5 -DOPENSSL_NO_JPAKE -DOPENSSL_NO_STATIC_ENGINE   
APP_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/app
LIB_CFLAG= /arch:IA32 /Zi /Fd$(TMP_D)/lib -D_WINDLL
SHLIB_CFLAG= /arch:IA32
APP_EX_OBJ=setargv.obj $(OBJ_D)\applink.obj /implib:$(TMP_D)\junk.lib
SHLIB_EX_OBJ=
# add extra libraries to this define, for solaris -lsocket -lnsl would
# be added
EX_LIBS=mincore.lib

# The OpenSSL directory
SRC_D=.

LINK=link
LFLAGS=/NODEFAULTLIB:kernel32.lib /NODEFAULTLIB:ws2_32.lib /NODEFAULTLIB:gdi32.lib /NODEFAULTLIB:advapi32.lib /NODEFAULTLIB:crypt32.lib /NODEFAULTLIB:user32.lib /nologo /subsystem:console /opt:ref /debug
RSC=rc

I have posted the complete changes made to ntdll.mak, compatible with Windows for IoT on the Intel Galileo, @ https://gist.github.com/toolboc/490d53bdddc6626bce04

 

We can now attempt to build the OpenSSL libraries, but you will notice that you receive a variety of linker errors.  This is due to functions that were available in the original System32 DLLs but are missing from mincore.lib.

For example:

Creating library out32dll\libeay32.lib and object out32dll\libeay32.exp
cryptlib.obj : error LNK2019: unresolved external symbol __imp__DeregisterEventSource@4 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__RegisterEventSourceA@8 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__ReportEventA@36 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetProcessWindowStation@0 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetUserObjectInformationW@20 referenced in function _OPENSSL_isservice
cryptlib.obj : error LNK2019: unresolved external symbol __imp__MessageBoxA@16 referenced in function _OPENSSL_showfatal
cryptlib.obj : error LNK2019: unresolved external symbol __imp__GetDesktopWindow@0 referenced in function _OPENSSL_isservice
rand_win.obj : error LNK2019: unresolved external symbol __imp__CreateCompatibleBitmap@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__DeleteObject@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDeviceCaps@8 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDIBits@28 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetObjectA@12 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__GetDC@4 referenced in function _readscreen
rand_win.obj : error LNK2019: unresolved external symbol __imp__ReleaseDC@8 referenced in function _readscreen

You will notice that these errors actually kind of make sense.  Recall that Windows for IoT (mincore) is stripped down to approximately 171 MB.  As a result, many unnecessary functions are removed, such as GetProcessWindowStation and MessageBoxA shown above (as there isn’t a GUI available on the stripped-down mincore).  We now need to modify the source (as safely as possible) to resolve these externals.  In my case, I simply commented out the missing calls where necessary.  Of course, this may have unintended side effects, but since most of the missing calls deal with the GUI, you are probably okay.

Continue this until the only errors you receive are in creating e_capi.obj.

Now run nmake -i -f ms\ntdll.mak install (-i will ignore compilation errors, namely the ones coming from e_capi).

CAPI is one of the engines used by OpenSSL, and it is probably important, but I could not get around the compilation errors without essentially breaking it completely, so I left it out.  This will still produce a valid libeay32.dll and ssleay32.dll.  You can verify by copying these DLLs, along with the created openssl.exe, to the Galileo and noting that it runs!  (Note: you can resolve the error shown by copying the produced openssl.cnf to the directory mentioned.)

 

OpenSSLOnGalileo

Now, to truly compile Apache-Qpid-Proton with OpenSSL support, you would continue forward from step B of https://code.msdn.microsoft.com/windowsazure/Using-Apache-Qpid-Proton-C-afd76504

Upon recreating and opening the Apache-Qpid-Proton Visual Studio solution, you would need to modify all of the proton projects using Rules #1 – #3 as defined above.

Of course, if you wish to obtain the precompiled binaries and see an example of using Apache-Qpid-Proton with OpenSSL support in a Galileo Wiring app, you may refer to this pull request in the Connect the Dots Project by MS OpenTech: https://github.com/MSOpenTech/connectthedots/pull/20

 

Happy Hacking!  Here’s to great ideas and developments on Windows for IoT!

Mo’ Code Movember – A Month of Code happening in Houston!

MoCode

It looks like November is shaping up to be an event-packed month with various hackathons / coding challenges taking place at UH, Rice, and near UHD in the EaDo area.

To participate in Mo’ Code Movember you will want to join the official Facebook Event and view the contest rules!

Seeing these events laid out, I can’t help but notice the natural progression they propose for participants.

Assuming you are female, every one of the events above brings attention to Women in Technology when you participate!   It’s also a great time to encourage group involvement and learning by hosting workshops that coincide with any of the events listed above.

Ok, so what exactly is Mo’Code Movember?  It is a month-long event hosted by CougarCS @ the University of Houston which aims to bring quality to the typical 24-hour hackathon format by running for a full month with weekly check-ins, while requiring that submissions themselves are checked in to GitHub.  We want to teach and encourage proper software engineering practices over quick and dirty mish-mashing of APIs.  Have an itch you need to scratch?  Mo’Code Movember is the opportunity to get it done!

CodeDay Houston is a 24-hour event where students passionate about technology get together and build cool things! You pitch ideas, form teams, and build a cool app or game in 24 hours! The best projects will even be rewarded with prizes.  The event is open to high school and college students and boasts participation from 27 cities around the US!

Hacktoberfest is a hackathon with a tip of the hat to fall, Oktoberfest, and Bavarian flair!  If you work in interface design or graphic design, or you currently build websites, apps, and services, you are probably a good fit!

3 Day Startup is an entrepreneurship education program designed for university students with an emphasis on learning by doing. 3 Day Startup teaches entrepreneurial skills to university students in an extreme hands-on environment.

Let’s assume you are a CS student and you have an awesome idea you have been dying to work on.  Maybe you just want to try coding something challenging, or you want to learn what all this development stuff is about because you are new to the scene.  You could vet that idea at the Mo’Code Movember kickoff and learn everything you need to know to get that code up on GitHub where it can be properly maintained (KILLER resume line!).  The next day, you bring your idea out to CodeDay Houston and find some like-minded individuals who you recruit to your project, you bang out a minimum viable product, and you end up placing in the top 5!  Next week, you take it out to Hacktoberfest and get the UI polished by the wonderful design divas who will be in attendance.  Sweet; thankfully you had visions of grandiosity early on, so you beat the October 28 deadline to sign up for 3 Day Startup.  You bring your baby out and discover some pals in the business school who prove out your product using market analysis, product validation, and other things you never considered.  You pitch the idea that Sunday and now see a clear vision of what you want to do.  The following week, you showcase your project to the most difficult audience of all, your family at Thanksgiving dinner, and they provide you the feedback that everyone before was scared to share!  You incorporate those nitpicky suggestions from Uncle Jeff into your project and show off a polished product at the Mo’Code Movember showcase.  You now have invaluable experience of what it takes to be an indie developer.  OR maybe none of this happened, but you learned invaluable skills along the way!  Oh, and bonus if you are female, because you did all this and represented Women in Technology!

 

Full text schedule with hyperlinks:

October 11 to December 12:
International Women’s Hackathon – http://iwhphx2014.challengepost.com/

November 7:
University of Houston Mo’Code Movember Kickoff – A GitHub-hosted, month-long competition; creations will be showcased at UH in early December

November 8 & 9:
Code Day Houston – https://codeday.org/houston

November 13:
Mo’Code Movember Checkpoint

November 13 to 14:
Hacktoberfest – http://hacktoberfe.st/

November 20:
Mo’Code Movember Checkpoint

November 21 to 23:
3 Day Startup – http://uofhouston.3daystartup.org/ – APPLY BY OCTOBER 20!

November 27 to 28:
Thanksgiving Holiday – Show your family the awesome thing(s) you created and add those finishing touches!

December 5: Mo’Code Movember Showcase – Show your colleagues the awesome thing(s) you created!

Kinect4NES – Control your classic NES with the Power of Kinect v2

Kinect4NES

Recently, I have found myself becoming involved in the exciting world of “IoT”, or the Internet of Things.  All of this started while I was attending a presentation on the subject put on by my colleague Bret Stateham.  If you are unfamiliar with the concept of “IoT”, I like to think of it as a programmable system composed of input sensor(s) / polling service(s) which interact with a physical device and optionally push the data received by the sensors to a web service, where it can be processed for patterns to facilitate things like forecasting.  In short, we are connecting things that are inherently offline to the internet.

This particular project does not quite fully fit the definition above, as the end result is a non-network-connected thing (a classic NES console) connected to a modern sensor (the Kinect V2).  However, it could easily be modified to fit such a description.  For example, this concept could be extended to allow the public to access and control a physical NES through a web interface (think Twitch Plays Pokémon) ~Coming Soon~ OR it could allow users to upload Kinect gesture profiles online that could be pulled down through the application to allow better control in certain games.  Nonetheless, it leverages concepts that are integral to most “IoT” projects, specifically hardware interface construction, software interfacing, and application development.

I’d like to elaborate a little more on the inspiration for this project, as I would really like to tear down any barriers currently holding back able developers from breaking into this field.  At the end of Bret’s presentation, he showed off a quick demo of an Intel Galileo board running Windows for IoT that controlled a blinking LED, with breakpoints set in Visual Studio.  Not exactly jaw-dropping at surface value, but when looked at for what it can enable rather than what it is specifically doing, you may find an opportunity to expand the possibilities of your code.  It dawned on me that this blinking LED demo was all I needed to know to allow computer code to interact with physical objects.  I began thinking of everything as a blinking-LED project: SMS notifications from my washer/dryer/dishwasher when a cycle is complete, automating the addition of chemicals to a swimming pool, or firing a rocket when a threshold of retweets is achieved on a particular hashtag.   All of these become comparatively simple problems when looked at through the lens of turning on a light when a certain condition is met!  I soon found myself pondering the idea of mixing old nostalgic technology with the bleeding edge.  What if I could control a classic NES with a Kinect v2 device?  Not through an emulator, but a physical, rectangular, gray-box NES from 1985.

 

Ingredients:

  • An NES console with game to test
  • An NES controller OR some wiring skills and a CD4021BE 8-bit shift register
  • 12 strands of wire, recommend Kynar
  • 8 × 1kΩ resistors (technically any value from 1k to 50k should suffice)
  • 2 × 3.6kΩ resistors (again, higher is not necessarily bad)
  • IoT board capable of running Firmata, e.g. an Intel Galileo or Arduino Uno
  • Kinect V2 Sensor for Windows OR recently announced Kinect V2 Adapter and existing Xbox One Kinect Sensor
  • Machine capable of running the Kinect V2 SDK

 

In my writeup, I am going to assume you have zero experience with hardware development, which is fitting because I literally had no idea how to even blink an LED when I started this project last week.  I am also going to assume you want to know how to achieve the final product from a blank slate, so let’s start by breaking down the problem into sub-problems.

 

Problem:

We want to use a Kinect V2 Sensor to control games on a physical NES

 

Sub-Problems:

1. We need to interface with the NES controller port using computer code

2. We need to speak to that hardware interface through a software interface, preferably in C#

3. We need to create an application that takes input from the Kinect V2 Sensor and processes it through the software interface, into the hardware interface, where it can reproduce button presses on the NES console based on defined gestures.

 

Sub-Problem #1 – Creating a controller interface in hardware

We need to understand how an NES controller works.  I found an excellent article on the subject @ the PoorStudentHobbyist blog, and I highly recommend giving it a read.  We learn that the NES controller operates using a 4021B 8-bit shift register wired to 8 inputs (the 4 D-Pad directions plus the Select, Start, A, and B buttons).  Given this knowledge, we can build an interface in a couple of ways: use an 8-bit shift-register emulator like the one described @ MezzoMill, leverage the physical hardware within an NES controller to create the interface, or substitute a comparable 8-bit shift register. Coincidentally, I came across 2 NES controllers brand new in the box at a local flea market for $6 a week before starting this project.  I considered it a sign and went through with deconstructing the controllers to get the parts I needed.

I removed the screws on the back of one of the controllers, opened it up, desoldered the 5-wire braid that connects to the controller port, and also desoldered the 4021B shift register.  With the board disassembled, you can determine the pinouts to see which wires / buttons are attached to which pins on the 4021B using a multimeter or the painstaking process of tracing with your finger.

The 4021B Shift register:

NESControlInnards

Let’s assume you have never used an Arduino and have no idea what it is or does.  All you really need to know is that it is a programmable device that has the ability to turn certain digital pins on and off through programs referred to as sketches.  Those on/off pins will essentially become our buttons, where a button press is signaled by pulling the line low (see the PoorStudentHobbyist blog for details).  Ergo, we are basically going to create a circuit and wire up a glorified blinking-LED demo; the blinking light is going to be the NES controller buttons.

I followed the pinouts and wiring guide used @ the PoorStudentHobbyist blog, but I did not use the proposed switch.  At this point, try running a sample program to see that your interface works by sending a low signal to the Start button every few seconds or so.  The blog post above includes a sample for exactly that.

Here is a picture of the completed interface:

HWInterface

Sub-Problem #2 – Speaking to our Hardware Interface from C#

We are going to leverage open-source software to interface with our board via C#.  A very common scenario when developing for IoT boards is the ability to control the pinouts via an external interface, i.e. a Web API or, in our case, the serial port.  Lucky for us, Firmata is an open-source protocol for doing exactly that!  Firmata is so pervasive that it is actually included as a default sketch in the Arduino IDE.  Simply upload the standard Firmata sketch to your device.  Now we need to set up communication via C#.  Again, lucky for us, we can leverage Arduino4Net, which lets us speak Firmata to control our board via C#.  Bonus: Arduino4Net can be brought in easily using NuGet!  At this point, you will want to create a simple test where you can verify that Arduino4Net is properly passing signals to your board.  I have included one as part of the Kinect4NES project.
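
To make the idea concrete, here is a conceptual C# sketch of such a test.  IBoard is a hypothetical stand-in for whatever Firmata client you use (Arduino4Net in my case); its member names are assumptions modeled on the Arduino pinMode/digitalWrite naming, not the library's actual API, and the pin number is just an example to match your own wiring.

using System.Threading.Tasks;

// Hypothetical abstraction over the Firmata client.
public interface IBoard
{
    void PinMode(int pin, bool output);
    void DigitalWrite(int pin, bool high);
}

public class NesButtonDriver
{
    private readonly IBoard board;
    private readonly int startPin;

    public NesButtonDriver(IBoard board, int startPin = 2) // pin 2 is an assumption
    {
        this.board = board;
        this.startPin = startPin;
        board.PinMode(startPin, output: true);
        board.DigitalWrite(startPin, high: true);  // idle high = button not pressed (the line is active low)
    }

    // A "press" on the NES controller is an active-low pulse on the button line.
    public async Task PressStartAsync(int holdMs = 100)
    {
        board.DigitalWrite(startPin, high: false); // press
        await Task.Delay(holdMs);                  // hold the button briefly
        board.DigitalWrite(startPin, high: true);  // release
    }
}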

 

Sub-Problem #3 – Creating the Kinect Application to simulate control signals based on gestures

We are going to connect the Kinect V2 sensor and create a gesture scheme to signal button presses through our interface!  The Kinect V2 SDK includes something called the SDK Browser 2.0.  Inside you will find a Body Basics XAML sample.  Install the sample and copy it out to somewhere you can modify it.

The Kinect V2 SDK Browser:

SDK

We begin by intercepting the Reader_FrameArrived method when dataReceived is true.  Keeping in mind that the Kinect can track more than one body, we take one of those bodies and call CalcController(Body body).  Inside this method, we set up the logic for controlling which pin we wish to signal based on defined gestures, which are determined from the joint tracking points.  All of this starts with trial and error, but essentially you make considerations based on where the joints are in relation to each other.  We could also train a gesture using the Gesture Builder tool which is included with the SDK, but that is for a later post =)  Working with my colleague Jared Bienz, we were able to blindly construct the gestures during a visit to the local Microsoft Store.  Simply find some space, get a body, start doing some gestures, and determine how best to capture them!
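
For reference, here is a minimal sketch of that interception point.  The type and member names follow the Kinect v2 Body Basics sample; CalcController is left as a stub since its heuristic joint comparisons are excerpted in the Punch-Out post above.

using Microsoft.Kinect;

public partial class MainWindow
{
    private Body[] bodies;

    private void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        bool dataReceived = false;

        using (BodyFrame bodyFrame = e.FrameReference.AcquireFrame())
        {
            if (bodyFrame != null)
            {
                if (bodies == null)
                {
                    bodies = new Body[bodyFrame.BodyCount];
                }

                bodyFrame.GetAndRefreshBodyData(bodies);
                dataReceived = true;
            }
        }

        if (!dataReceived)
        {
            return;
        }

        foreach (Body body in bodies)
        {
            if (body != null && body.IsTracked)
            {
                CalcController(body);   // compare joint positions and signal the mapped pins
                break;                  // drive the NES from the first tracked body
            }
        }
    }

    private void CalcController(Body body)
    {
        // Heuristic joint comparisons go here (dpadLeft, dpadRight, start, ...).
    }
}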

The Kinect4NES Application:

KinectApp

 

Solution:

Once you have all this, put all the pieces together and turn on your favorite game!  I chose the pinnacle classic Super Mario Bros. 3, which worked well enough with our scheme to actually allow you to play through the first level!  The next thing to consider is trying other games out and possibly allowing for multiple gesture profiles.  For example, I have created a stub in the hosted project to play Mario by physically jumping and running as opposed to using hands.  All in all, this was an extremely fun hack that allowed me to bridge my interest in classic video games with modern gaming peripherals!

If you want to get the bits and follow along with updates or even contribute to this project, you may want to check out the GitHub Project Page for Kinect4NES.

 

Introducing Azure Media Services – Uploading File Content to Azure Media Service

Azure Media Services allows you to leverage the highly scalable Azure infrastructure to deliver media content on demand.  It boasts the ability to deliver content at a scale equivalent to that used to distribute the Olympics to viewers worldwide!  In addition, it supports the ability to live stream multi-bitrate broadcasts, encode content on the fly, and configure accessibility to your content.   Azure Media Services is currently in Preview in the Microsoft Azure Portal, so let’s take it for a test spin with a very simple demo of uploading file content and creating a highly scalable, publicly distributed link to that content!

 

1. Create Media Service instance by selecting from panel on the left

1 - Create Media Service

 

2. Name the Media Service, determine the region where you would like it to exist, and create a new storage account for your service

2 - Name Media Service

 

3. Wait for Azure to complete the creation of your new Media Service

3 - Media Service Activated

 

4. Click on the service to view the “Getting Started” page

4 - Media Service Dashboard

 

5. Click the upload button at the bottom of the screen and select the file you wish to upload

5 - Media Service Upload


6.  Click the “Content” tab and notice your uploaded content; take note that its “Publish Url” value is “Not Published”

 

6 - Media Service Content

 

7. Click the “Publish” button and the “Publish Url” value will change to a URL

7 - Media Service Content Publish

 

8. Copy and paste the resulting Url and watch your content stream from Azure to your device!

8  - Media Service Content Playing

Although this demo may seem very basic, it highlights some interesting ideas.  You could theoretically utilize Azure Media Services to create your own video-on-demand service capable of serving a global audience.  This could be consumed by client applications on any platform capable of playing content supported by the Azure Media Services Encoder and even monetized via access controls.  In the next article, we will look at how to create a near-infinitely scalable live broadcast using Azure Media Services Channels.  Stay tuned for more on this topic in the coming weeks!

Using Web App Template in App Studio to Publish Top-Music-Videos.com WP8 in 25 Steps

Microsoft’s App Studio portal allows developers to build rapid prototypes as well as full-blown applications quickly and easily on any machine with a capable web browser.  This has resulted in thousands of published apps from developers around the world!  In today’s post, I am going to focus on one of the many provided templates, the Web App Template.

 

The Web App Template started as a project on CodePlex that allows for the creation of an app experience on Windows Phone and Windows 8 by leveraging your existing website.  This approach comes with various pros and cons, namely that the more responsive the design of the original website, the better the layout will be when wrapped into an application; in addition, the app requires internet access for full functionality.  However, this can be seen as a great opportunity for developers looking to create applications which require little maintenance to render across multiple platforms, i.e. Android, iOS, Windows, etc.

 

Recently, the Web App Template was brought over as an available template in App Studio, offering developers the ability to easily generate wrapped applications over existing mobile-ready websites.  This can aid companies in rapidly creating a presence in the Windows ecosystem.  Using this approach, I was able to bring my mobile-ready website @ http://top-music-videos.com over to the Windows Store as an app in only 25 steps!

 

Prerequisites:

1. A mobile-ready / responsive website (it is encouraged to use only IPs or sites that you own)

2. A free App Studio account available @ http://appstudio.windows.com

3. A Windows Store Developer Account – If you are reading this blog, I may be able to help you out with getting access, e-mail “p decarlo @ Micro soft dot com”

 

Step 1:

Login to App Studio, and start a new project.

When prompted to Choose your template, select the Web App Template

1_ChooseTemplate

Step 2:

Click the Create Button

2_CreateApp

 

Step 3:

Note the default template which wraps http://m.microsoft.com

The fields should be fairly intuitive

3_DefaultApp

Step 4:

Modify the values below:

1. App Title

2. Base Url – Note the site may render differently on a device; do not always assume it will look like the simulator on the left

3.  Message (I did not care for the grammar of the default message)

Then click Save

4_TMV

Step 5:

Select Themes from the top menu

Here you can optionally change the color of the Application Bar

Note: Other template modifications won’t render changes in the Web App Template

5_TMV_Theme

Step 6:

Let’s gather logos using Bing Image Search

Note: I am searching for images tagged “Free to modify, share, and use commercially”

6_TMV_Images

Step 7:

Create a nice, large square logo

Note: It is recommended to use a large layout 400 x 400 or greater

App Studio will resize the large square image in the next step

7_TMV Logo

Step 8:

Select Tiles from the Top menu

Upload your image as the Small and Large Tile

Provide appropriate text for your tile

You may also click over to the Splash and Lock menu to modify the Splash Screen

8_TMV_Logos

Step 9:

Select Publish Info from the top menu

Click to modify the logo, again I used the same image from the previous step

 

9_TMV_Logo

Step 10:

Click Finish, then Generate and select the options below

10_TMV_GeneratePackages

Step 11:

Verify your app was successfully generated

11_TMV_PublishPackages

Step 12:

Head to the Windows Store Dev Center

Select Dashboard

12_Dashboard

Step 13:

Select Windows Phone Store

13_WPStore

Step 14:

Select Submit App

14_Submit

Step 15:

Select App Info

15_AppInfo

Step 16:

Reserve your App Name

Make sure this name matches the name given to your app in App Studio

If the name is unavailable, find one that is and regenerate your app in App Studio using the new name

Scroll down and select the appropriate categories and distribution options for your app

16_TMV_ReserveName

Step 17:

Head back over to your generated App Studio App and Download the publish package

Here you can also test your app by side-loading using the Installable Packages QR code

More information on these features can be found by clicking “How to”

17_TMV_GetPublishPackages

Step 18:

Note the location of the downloaded publish package

18_DownloadPrompt

Step 19:

Locate your downloaded publish package

20_Zip

 

Step 20:

Extract the .XAP file within

21_XAP

Step 21:

Head to Upload and Describe your packages in the Windows Phone Store

19_TMV_Upload

Step 22:

Click Add new and select your extracted .XAP file from the previous steps

22_TMV_AddXap

Step 23:

Add the necessary description and keyword info

In addition you will want to provide the necessary images

I resized my logo used for the live tiles in order to create the 300×300 App Tile Icon

I used the Snipping Tool to grab screenshots from the App Studio previewer and resized to 1280×800

You could also obtain screenshots by sideloading your package into the emulator if you have the WP8 SDK installed

23_TMV_AddImages

Step 24:

Click Review and Submit

24_TMV_Review

Step 25:

Note the summary of your app submission

If everything looks good, click Submit

25_TMV_Submit

Success!

26_TMV_Success

Download the published app!


Adblock for Buffalo WHR-G54 running DD-WRT

You can install and run adblock on each of your devices at home to create an ad-free experience for users of those devices, or you can block ads at the router level and block ads for every device on your network.  Using a WHR-G54 router flashed with DD-WRT, I was able to implement such a solution as a bash script that executes on the router during boot.  As would be expected, this consists of hacks upon hacks upon hacks.

High-level overview:

  • Create a script and house it in RAM
  • Call the script to download and parse the adblock list into a hosts file
  • Append the hosts in that file to the dnsmasq config
  • Restart the web portal on port 81
  • Redirect port 80 traffic destined for the gateway IP to the web server on port 81 via firewall rules so things appear vanilla
  • Download pixelserv from a remote location, store it in RAM, and start it on the address in the current IP block ending in .254
  • Deploy a cron job to refresh the adblock list every Friday at 11:45 PM while the router is up

To install, simply copy and paste the script into the window @ Administration => Commands and click “Save Startup”

WHRG54

Get the script @ Github


2nd Annual Houston Hackathon nets Honorable Mention for Team Awesome

Mayor of Houston

This year marked the 2nd annual Houston Hackathon, held at the Houston Technology Center in Downtown Houston.  This year’s event pitted 18 teams against each other to brainstorm and prototype a technological innovation that benefits the city, for the chance to pitch the idea to Mayor Annise Parker.  Patrick Wolf, Jesus Hernandez, Thomas Garza, and I formed a group unofficially titled “Team Awesome” to create a concept around the XPlatformCloudKit.  At first, we utilized the framework to consume police report data, as participants were encouraged to use data repos from the city’s open data initiative.  However, due to the agility of the XPCK framework, we found ourselves with a complete and published project after approximately 6 hours!  We quickly shifted our approach to pitching the ease with which we were able to create a polished application as a way to encourage development amongst the community.  Using our app HPD Reports as a proof of concept, we showed the mayor that we could enable the consumption of a variety of data.  This posed a great opportunity to “appify” even more open data repos and even educate the community on how to do so!  At the end of the day, we wound up winning an honorable mention and got an excellent “Winner” banner placed on our Challenge Post submission.

Also awesome was the fact that our teammate Patrick Wolf’s son’s team won the “Student Challenge” for not one, but three websites that they created to better serve the citizens of Houston.  These included a biking trail service, an HISD lunch menu, and a Houston Hackathon promo site!

View the official Challenge Post submission for HPD Reports:

http://houstonhackathon.challengepost.com/submissions/24178-hpd-reports-a-mobile-framework-proof-of-concept

HPD Reports

Download the HPD Reports App @ http://hpdreports.azurewebsites.net/

mayor parker selfie

So much fun!  We even got to meet Mayor Annise Parker!

 

Workaround for “You’ve reached your MYSQL quota” with WordPress WebSite Deploy on Microsoft Azure

Normally, to create a new WordPress site on Azure you would click New => Compute => Website => From Gallery => WordPress. Except this doesn’t work if you already have an existing WordPress site on Azure, because the MySQL provider “ClearDB” limits you to one MySQL instance. In the final step, you may encounter the following message:

Message

But… you can create a new one and attach it to your site by following these instructions:

1. Select your existing website from the left panel in the Azure Portal

Step1

2. Click “Dashboard”

Step2

3. Scroll down to “Linked Resources” and click the resource in blue

Step3

4. This brings you into your associated “ClearDB” account, click “Dashboard”

Step4

5. Click “Create a new Database”

Step5

6. Select the free “Mercury” instance

Step6

7. Choose the primary region to deploy your database (you likely want this to match the region where you intend to deploy your Azure Website)

Step7

8. Click “Create my free Database”

Step8

9. Take note of the name of your newly created database

Step9

10. Back in Azure, create your new WordPress site from New => Compute => Website => From Gallery => WordPress and, in step 3, select “Use an existing MySQL Database” and create a new WebScale Group in the same region as the database that was created in step 7

Step10

All you have to do now is select your newly created database (the name given in step 9) and you are good to go with using your new site against a new ClearDB database instance! Check the “ClearDB terms” box in the final step, notice the check mark is enabled, and you can now deploy your new WordPress site!

Final