Today I will show how you can use the new Kinect SDK v2 to consume and display depth sensor data. For this example we will work in a WPF application whose view contains an Image control bound to an ImageSensor property, which exposes the sensor feed.
The following code example shows the initialization of the Window. As in the Body and camera examples, we initialize a reader object for the depth feed, we also get a frame description with the depth sensor information, and finally we create the bitmap we use to display the data.
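Since the original snippet is shown as an image on the blog, here is a minimal sketch of what that initialization might look like with the Kinect SDK v2 API; the field names (`_sensor`, `_reader`, `_depthData`, `_depthPixels`, `_bitmap`) are my own, not the author's:

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;

public partial class MainWindow : Window
{
    private KinectSensor _sensor;
    private DepthFrameReader _reader;
    private ushort[] _depthData;   // raw depth values, one per pixel
    private byte[] _depthPixels;   // BGR32 pixel buffer for display
    private WriteableBitmap _bitmap;

    public MainWindow()
    {
        InitializeComponent();

        _sensor = KinectSensor.GetDefault();

        // Reader for the depth feed
        _reader = _sensor.DepthFrameSource.OpenReader();

        // Frame description with the depth sensor information
        FrameDescription desc = _sensor.DepthFrameSource.FrameDescription;
        _depthData = new ushort[desc.LengthInPixels];
        _depthPixels = new byte[desc.LengthInPixels * 4]; // 4 bytes per BGR32 pixel

        // Bitmap used to display the information
        _bitmap = new WriteableBitmap(desc.Width, desc.Height, 96.0, 96.0,
                                      PixelFormats.Bgr32, null);

        _sensor.Open();
    }
}
```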
In the Window Loaded event we subscribe to the reader's FrameArrived event. Now let's look at what we do when we process a frame from the depth sensor.
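The subscription itself is a one-liner; a sketch (assuming the `_reader` field name from above and a standard WPF Loaded handler) could be:

```csharp
// Hook the depth reader once the Window has loaded
private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    if (_reader != null)
    {
        _reader.FrameArrived += Reader_FrameArrived;
    }
}
```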
- As in the other examples, we start with a couple of null checks
- We also validate that the size of the data the sensor returns matches the frame description we initialized at app startup
- We get the minimum and maximum reliable depth values (this is very cool!)
- In a foreach loop we go through the depth values and calculate an intensity for each one based on the minimum and maximum distances
- Then we create the RGB array and finally write it into the bitmap
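The steps above can be sketched as a `FrameArrived` handler. This is my own reconstruction, not the author's code: the linear grey-scale intensity mapping mirrors the approach in the SDK samples, and the buffer fields are the hypothetical ones introduced earlier:

```csharp
private void Reader_FrameArrived(object sender, DepthFrameArrivedEventArgs e)
{
    using (DepthFrame frame = e.FrameReference.AcquireFrame())
    {
        // Null check, like the other examples
        if (frame == null) return;

        // Validate that the sensor data size matches the buffers we created
        FrameDescription desc = frame.FrameDescription;
        if (desc.LengthInPixels != (uint)_depthData.Length) return;

        frame.CopyFrameDataToArray(_depthData);

        // Minimum and maximum reliable depth, in millimeters
        ushort minDepth = frame.DepthMinReliableDistance;
        ushort maxDepth = frame.DepthMaxReliableDistance;

        // Map each depth value to a grey intensity within [min, max]
        int colorIndex = 0;
        foreach (ushort depth in _depthData)
        {
            byte intensity = (byte)(depth >= minDepth && depth <= maxDepth
                ? (depth - minDepth) * 255 / (maxDepth - minDepth)
                : 0);

            _depthPixels[colorIndex++] = intensity; // B
            _depthPixels[colorIndex++] = intensity; // G
            _depthPixels[colorIndex++] = intensity; // R
            colorIndex++;                           // skip the unused alpha byte
        }

        // Write the RGB array into the bitmap
        _bitmap.WritePixels(
            new Int32Rect(0, 0, _bitmap.PixelWidth, _bitmap.PixelHeight),
            _depthPixels, _bitmap.PixelWidth * 4, 0);
    }
}
```

The `using` block is important: depth frames must be disposed promptly or the sensor stops delivering new ones.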
Personally, I think this way of working with the depth sensor is much SIMPLER than it was in the v1.x SDK; in addition, being consistent with the rest of the data sources (body and camera) makes it easy to understand.
“This is preliminary software and/or hardware and APIs are preliminary and subject to change”
Greetings @ Home