WinCoder Blog

Training Kinect4NES to Control Mike Tyson’s Punch-Out!

Kinect4NES @ HackRice – First Time Player Knocks out Glass Joe

In a previous post, I talked about how to create an interface to send controller commands to an NES based on interaction with the Kinect v2.  The idea was successful, but I received some feedback that the control was less than optimal, along with a suggestion that the approach would likely work well with a game like Mike Tyson’s Punch-Out.

This posed an interesting challenge: could I create a control mechanism that would allow me to play Mike Tyson’s Punch-Out using Kinect4NES with enough stability to reliably beat the first couple of opponents?

Let’s first look at how control was achieved in the first iteration of Kinect4NES.  There are essentially two ways of reacting to input on the Kinect: a heuristic-based approach built on relatively inexpensive positional comparisons of tracked joints, or gesture-based tracking (either discrete or continuous).  For my initial proof of concept, I used the following heuristic-based approach:

 

Taken from CalcController(Body body) in MainWindow.xaml.cs

/****************************************************************
* DPad from Calc
***************************************************************/

var dpadLeft = ((leftWrist.Position.Y > mid.Position.Y - 0.20) && (leftWrist.Position.X < mid.Position.X - 0.5));
var dpadRight = ((rightWrist.Position.Y > mid.Position.Y - 0.20) && (rightWrist.Position.X > mid.Position.X + 0.5));
var dpadUp = ((leftWrist.Position.Y > head.Position.Y) || (rightWrist.Position.Y > head.Position.Y));
var dpadDown = ((spineBase.Position.Y - knee.Position.Y) < 0.10);
var start = ((head.Position.Y < shoulder.Position.Y));

 

As you can see, this is a basic approach that simply compares current joint positions; if a condition is satisfied, the corresponding controller input is activated.
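To make the pattern concrete, here is a minimal sketch of how those booleans could be packed into a single controller state and written out over the serial interface from the previous post.  The NesButton enum, bit layout, and serialPort variable are my own illustration, not the actual Kinect4NES source:

```csharp
// Hypothetical sketch: pack the heuristic results into one byte that
// mirrors the NES controller's 8-bit shift register.
[Flags]
enum NesButton : byte
{
    None = 0, A = 1, B = 2, Select = 4, Start = 8,
    Up = 16, Down = 32, Left = 64, Right = 128
}

NesButton state = NesButton.None;
if (dpadLeft)  state |= NesButton.Left;
if (dpadRight) state |= NesButton.Right;
if (dpadUp)    state |= NesButton.Up;
if (dpadDown)  state |= NesButton.Down;
if (start)     state |= NesButton.Start;

// Write the composed state to the Arduino/serial bridge each frame.
serialPort.Write(new[] { (byte)state }, 0, 1);
```

The advantage of composing one state per frame is that simultaneous inputs (say, UP plus a dodge) fall out naturally from the bitwise ORs.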

Ideally, we would like to have natural body movements drive our interaction with Mike Tyson’s Punch-Out.  To begin, we need to familiarize ourselves with the way the game is controlled by the NES controller.  I was lucky enough to come across a copy of the game at a local flea market around the time this project idea was taking shape, the same market where I had found boxed NES controllers a couple of weeks earlier.  I found an online manual which described the various game inputs and used these as a basis for defining my gestures.

 

-)  : Dodge to right
(-  : Dodge to left
DOWN: Once: block
      Twice rapidly: ducking

--- Left body blow (B + UP = Punch to left face)
|    -- Right body blow (A + UP = Punch to right face)
|    |
B    A

(When Mac is knocked down, press rapidly and he'll get 
up.)

SELECT: If pressed between rounds, Doc's encouraging
        advice can increase Mac's stamina
START:  Uppercut (If the number of stars is 1 or
	greater)

 

Take note that some of these inputs are button combinations or rapid presses.  Later, we will revisit how I optimized the input mechanism to account for these cases.

To begin creating the gestures, I started a new solution using the Visual Gesture Builder Preview included in the Kinect v2 SDK, with a series of Discrete Gesture projects, one for each behavior identified in the Punch-Out manual.

Gestures

For each of these projects, I had my brother perform the chosen gesture with approximately 20 positive cases (gestures that should be considered performed successfully) and 5 or so negatives (gestures that should not be considered performed successfully).  For example, for the Uppercut he would perform 20 uppercuts with the right hand as positive cases and a few regular left and right punches as negative cases.  This way, we won’t accidentally register an uppercut when a regular left or right punch is thrown.

KinectStudio

After obtaining a successful recording, we add the clip to the appropriate project in our Visual Gesture Builder solution.  Here we meticulously tag the frames in which a successful gesture is performed; any areas left untagged are treated as negative cases.

Tagging

We then perform a build of the project, which uses the AdaBoost algorithm to learn the intended joint positions and create a state machine for determining a successful gesture.  Each project outputs a .gba file; these are composed into a single .gbd file when the solution is built.

Build

We repeat this for all of our projects and then verify the .gbd with “File => Live Preview” in Visual Gesture Builder.  This shows the signal generated by our current pose for every gesture project we produced, which is very handy for determining whether a given gesture interferes with another.  In the image below, you can see that a very clear signal is generated by the uppercut pose.

Verify

With the recorded gestures verified, I looked at the sample code used in the “Visual Gesture Builder – Preview” project included in the Kinect SDK Browser.

SDKBrowser
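As a reference for the wiring involved, here is a sketch of loading the trained database and reacting to discrete gesture results, following the pattern in the SDK’s gesture sample.  The “PunchOut.gbd” path, the 0.6 confidence threshold, and the OnGestureDetected handler are placeholders of my own, not the actual Kinect4NES source:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

var sensor = KinectSensor.GetDefault();
var gestureSource = new VisualGestureBuilderFrameSource(sensor, 0);

// Load every gesture from the .gbd built by Visual Gesture Builder.
using (var db = new VisualGestureBuilderDatabase(@"Database\PunchOut.gbd"))
{
    foreach (var gesture in db.AvailableGestures)
        gestureSource.AddGesture(gesture);
}

var gestureReader = gestureSource.OpenReader();
gestureReader.FrameArrived += (s, e) =>
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        var results = frame.DiscreteGestureResults;
        if (results == null) return;
        foreach (var pair in results)
        {
            // pair.Key.Name matches the gesture project name, e.g. "Uppercut".
            if (pair.Value.Detected && pair.Value.Confidence > 0.6f)
                OnGestureDetected(pair.Key.Name);
        }
    }
};
```

Note that the frame source also needs a valid body TrackingId assigned from the body frame reader before results start flowing; the sample in the SDK Browser shows that plumbing in full.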

 

From here, I incorporated the relevant bits into GestureDetector.cs.  In my original implementation, I iterated through all recorded gestures and used a switch statement to perform the button press when one was detected.  This proved to be inefficient and produced inconsistent button presses.  I improved this significantly in my second update by using a dictionary holding a series of Actions (anonymous functions that return void) together with a parallel foreach, which eliminated the cyclomatic complexity of the previous switch while allowing me to process all potential gestures in parallel.  I also created a Press method for simulating presses, which let me send in any combination of buttons to perform behaviors like HeadBlow_Right (UP + A), and a Hold method to make the duck behavior possible (press down, hold down).  As a final tweak, I implemented a RapidPress method for the Recover gesture.  This reproduced a well-known tip in Punch-Out: you can regain health between rounds by rapidly pressing SELECT.
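The dispatch described above might look something like the following sketch.  The gesture names, NesButton enum, helper signatures, and timing values are my reconstruction for illustration, not the exact Kinect4NES source:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Map each trained gesture name to the controller behavior it triggers.
var gestureActions = new Dictionary<string, Action>
{
    { "Dodge_Left",     () => Press(NesButton.Left) },
    { "Dodge_Right",    () => Press(NesButton.Right) },
    { "BodyBlow_Left",  () => Press(NesButton.B) },
    { "BodyBlow_Right", () => Press(NesButton.A) },
    { "HeadBlow_Left",  () => Press(NesButton.B, NesButton.Up) },
    { "HeadBlow_Right", () => Press(NesButton.A, NesButton.Up) },
    { "Uppercut",       () => Press(NesButton.Start) },
    { "Duck",           () => Hold(NesButton.Down) },
    { "Recover",        () => RapidPress(NesButton.Select) }
};

// Replaces the old switch: detected gestures are dispatched in parallel.
Parallel.ForEach(detectedGestures, name =>
{
    if (gestureActions.TryGetValue(name, out var action))
        action();
});

// Press any combination of buttons briefly, then release them all.
void Press(params NesButton[] buttons)
{
    foreach (var b in buttons) SetButton(b, true);
    Thread.Sleep(80);   // hold long enough to survive one NES input poll
    foreach (var b in buttons) SetButton(b, false);
}

// Hammer a single button, e.g. SELECT for the between-rounds recovery tip.
void RapidPress(NesButton button, int times = 10)
{
    for (int i = 0; i < times; i++)
    {
        Press(button);
        Thread.Sleep(30);
    }
}
```

The dictionary lookup keeps each gesture-to-button mapping to one line, so adding a new gesture means adding one entry rather than another switch case.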

This was a rather interesting programming exercise; imagine coding at 2 in the morning with the goal of optimizing code to knock out Glass Joe in a stable, repeatable manner.  The end result wound up working well enough that a ‘seasoned’ player can actually TKO the first two characters with relative regularity.  In the video at the top of this post, the player had never used Kinect4NES before and TKO’d Glass Joe on his first try.  As a result, I am satisfied with this experiment; it was certainly a fun project that allowed me to become more familiar with programming for the Kinect while also having the joy of merging modern technology with the classic NES.  For those interested in replicating it, you can find the source code on GitHub.  If you have any ideas for future games you would like to see controlled with Kinect4NES, please let me know in the comments!

