After the short post I wrote two days ago, [#KINECT] HowTo: display the contents of the depth sensor in our applications, today we will look in a bit more detail at the guts of this mechanism. Just as the camera viewfinder is simply a map of RGB values that we convert into an image and then show with a WPF Image control, the information the depth sensor gives us is a collection of distances that we then "paint" in our application.
The distances Kinect handles range from 0.85 m to 4 m, and the depth sensor reports them in millimeters. If we look at the code from the example in that post, we see that during the processing of the sensor's DepthFrameReady() event we work with an object of type PlanarImage (line 3). In this case the array of pixels the sensor returns contains 2 bytes per pixel; together, those two bytes encode the distance the sensor measured for that pixel.
1: void KinectDepthFrameReady(object sender, ImageFrameReadyEventArgs e)
2: {
3:     var planarImage = e.ImageFrame.Image;
4:     var depthImageHelper = new DepthImageHelper();
5:     byte[] convertedDepthBits =
6:         depthImageHelper.ConvertDepthFrame(planarImage.Bits, RuntimeOptions);
7:     if (this.imageHelper == null)
8:     {
9:         this.imageHelper = new InteropBitmapHelper(planarImage.Width,
10:            planarImage.Height, convertedDepthBits);
11:        this.DepthSensorViewer.Source = this.imageHelper.InteropBitmap;
12:    }
13:    // push each new frame into the bitmap (InteropBitmapHelper from the previous post)
14:    this.imageHelper.UpdateBits(convertedDepthBits);
15: }
In addition, if we initialize the sensor with the [DepthAndPlayerIndex] option, each pixel still occupies 2 bytes, but the 3 least significant bits of the first byte now carry the index of the specific player recognized by the sensor, leaving 13 bits for the depth.
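As a reference, this is roughly how the runtime is initialized so that depth frames include the player index. This is a sketch, assuming the Beta 2 SDK's Microsoft.Research.Kinect.Nui namespace; the variable name `nui` and the 320x240 resolution choice are illustrative, and the handler name is the one from the snippet above:

```csharp
// Sketch: initialize the runtime so depth frames carry the player index.
var nui = new Runtime();
nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking);

// With player index, the depth stream is opened as DepthAndPlayerIndex
// (in the beta SDK this mode works at 320x240).
nui.DepthStream.Open(ImageStreamType.Depth, 2,
    ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);
nui.DepthFrameReady += KinectDepthFrameReady;
```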
To extract the real depth value from each element of the array, we need to check whether the sensor was initialized with the [Depth] or the [DepthAndPlayerIndex] option. In the example from the post, we do it with a simple calculation that takes into account whether the player-index bits are present. Here depthFrame16 is the raw array of bytes from the sensor, and depthFrame32 is the color buffer that we then convert to an image using the InteropBitmapHelper class we saw in the previous post.
1:  var hasPlayerData = runtimeOptions.HasFlag(RuntimeOptions.UseDepthAndPlayerIndex);
2:  for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16 += 2, i32 += 4)
3:  {
4:      var player = hasPlayerData ? depthFrame16[i16] & 0x07 : -1;
5:      int realDepth;
6:
7:      if (hasPlayerData)
8:      {
9:          // the lower 3 bits hold the player index, the upper 13 bits the depth
10:         realDepth = (depthFrame16[i16 + 1] << 5) | (depthFrame16[i16] >> 3);
11:     }
12:     else
13:     {
14:         realDepth = (depthFrame16[i16 + 1] << 8) | depthFrame16[i16];
15:     }
16:
17:     // transform 13-bit depth information into an 8-bit intensity appropriate
18:     // for display (we disregard information in most significant bit)
19:     var intensity = (byte)(255 - (255 * realDepth / 0x0fff));
20:
21:     // paint the pixel as a grayscale intensity: closer objects appear brighter
22:     depthFrame32[i32 + RedIndex] = intensity;
23:     depthFrame32[i32 + GreenIndex] = intensity;
24:     depthFrame32[i32 + BlueIndex] = intensity;
25: }
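To make the bit arithmetic above concrete, here is a small self-contained example that unpacks a single depth pixel both with and without the player index. The byte values are made up for illustration:

```csharp
using System;

class DepthDecodingDemo
{
    // With UseDepthAndPlayerIndex: the 3 least significant bits of the first
    // byte hold the player index, the remaining 13 bits the depth in mm.
    public static int PlayerIndex(byte b0) => b0 & 0x07;
    public static int DepthWithPlayer(byte b0, byte b1) => (b1 << 5) | (b0 >> 3);

    // Without player index: the 16 bits are the depth in mm directly.
    public static int DepthOnly(byte b0, byte b1) => (b1 << 8) | b0;

    static void Main()
    {
        // Example pixel read from the depth frame: bytes 0x33, 0x41.
        byte b0 = 0x33, b1 = 0x41;
        Console.WriteLine(PlayerIndex(b0));         // 3: third recognized player
        Console.WriteLine(DepthWithPlayer(b0, b1)); // 2086 mm, ~2.09 m
        Console.WriteLine(DepthOnly(0xD0, 0x07));   // 2000 mm, 2 m
    }
}
```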
The big thing missing at this point is knowing the depth of a Joint, an element of the Skeleton, so that we can detect gestures such as a Push or a Pull.
Unfortunately that doesn't come out of the box in the Kinect Beta 2 SDK, but... it isn't that complicated, is it?
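As a teaser, here is a minimal sketch of the idea, assuming we already have the Z coordinates (in meters, as the beta SDK reports Joint positions) of the right hand and right shoulder. The helper name and the 0.4 m threshold are assumptions for illustration, not the solution from the upcoming post:

```csharp
using System;

class PushDetector
{
    // Hypothetical helper: call it a "push" when the hand moves clearly
    // closer to the sensor than the shoulder. 0.4 m is an assumed threshold.
    public static bool IsPush(float shoulderZ, float handZ, float threshold = 0.4f)
        => (shoulderZ - handZ) > threshold;

    static void Main()
    {
        // Z values in meters, e.g. from skeleton.Joints[...].Position.Z
        Console.WriteLine(IsPush(2.0f, 1.5f)); // True: arm extended towards the sensor
        Console.WriteLine(IsPush(2.0f, 1.9f)); // False: hand close to the body
    }
}
```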
In the next post about Kinect we will solve it with 4 lines of code (that is, if Victor gives me back the Kinect!)
Greetings @ Home
PS: I had already finished writing this post when I saw that on the MSDN challenges they explain it very, very well: http://blogs.msdn.com/b/esmsdn/archive/2011/07/20/Reto-kinect-Usar-Las-c-225-maras-del-sensor.aspx