[# KINECTSDK] HowTo: Use Kinect as a green screen (I) (playing with the Rolling Stones)


Buenas,

the following image already makes it clear what this post is about

[image: El Bruno composited on stage with the Rolling Stones]

El Bruno with the Rolling Stones! Awesome… the things that the KinectSdk lets us do. In this case I took as a starting point the sample that comes with the Developer Toolkit Browser 1.5, included with the KinectSDK, and after a few modifications I ended up creating my own green screen.

But let's get to the code, which is really what interests us; rather than reading me, it is better to read a bit of code. In this case I am working with a WPF application in which the main window has the following markup:

    <Window x:Class="ElBruno.KinectGreenScreen.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="768" Width="1024">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="245"></RowDefinition>
                <RowDefinition Height="*"></RowDefinition>
            </Grid.RowDefinitions>
            <Image x:Name="imgHeader" Source="ElBrunoKinect.png" Stretch="Fill"></Image>
            <Grid Grid.Row="1">
                <Image x:Name="imgBackground" Source="Fondo3.png" Stretch="UniformToFill"></Image>
                <Image x:Name="imgMask"></Image>
            </Grid>
        </Grid>
    </Window>

As you can see in the previous code, it is nothing out of this world: a WPF window with an image at the top acting as a header and two images in the lower area. The first image, "imgBackground", is used as the base of the background we will work with and over which the data from the Kinect depth and color cameras will be processed. The second image, "imgMask", is the one used to paint the final result of the green screen.

The code-behind for the window is not very complicated either, as you can see below:

   1: using System;
   2: using System.Linq;
   3: using System.Windows;
   4: using System.Windows.Media;
   5: using System.Windows.Media.Imaging;
   6: using Microsoft.Kinect;
   7:
   8: namespace ElBruno.KinectGreenScreen
   9: {
  10:     public partial class MainWindow
  11:     {
  12:         private KinectSensor _sensor;
  13:         private int _depthWidth;
  14:         private int _depthHeight;
  15:         private int _colorWidth;
  16:         private int _colorHeight;
  17:         private int _colorToDepthDivisor;
  18:         private short[] _depthPixels;
  19:         private byte[] _colorPixels;
  20:         private int[] _greenScreenPixelData;
  21:         private ColorImagePoint[] _colorCoordinates;
  22:         private WriteableBitmap _colorBitmap;
  23:         private const int OpaquePixelValue = -1;
  24:         private WriteableBitmap _playerOpacityMaskImage;
  25:
  26:         public MainWindow()
  27:         {
  28:             InitializeComponent();
  29:             Loaded += MainWindowLoaded;
  30:             Closed += MainWindowClosed;
  31:         }
  32:
  33:         void MainWindowClosed(object sender, EventArgs e)
  34:         {
  35:             if (_sensor != null)
  36:             {
  37:                 _sensor.Stop();
  38:                 _sensor = null;
  39:             }
  40:         }
  41:
  42:         void MainWindowLoaded(object sender, RoutedEventArgs e)
  43:         {
  44:             foreach (var potentialSensor in KinectSensor.KinectSensors.Where(potentialSensor => potentialSensor.Status == KinectStatus.Connected))
  45:             {
  46:                 _sensor = potentialSensor;
  47:                 break;
  48:             }
  49:             if (_sensor == null) return;
  50:
  51:             _sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
  52:             _depthWidth = _sensor.DepthStream.FrameWidth;
  53:             _depthHeight = _sensor.DepthStream.FrameHeight;
  54:             _sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
  55:             _colorWidth = _sensor.ColorStream.FrameWidth;
  56:             _colorHeight = _sensor.ColorStream.FrameHeight;
  57:             _colorToDepthDivisor = _colorWidth / _depthWidth;
  58:             _sensor.SkeletonStream.Enable();
  59:             _depthPixels = new short[_sensor.DepthStream.FramePixelDataLength];
  60:             _colorPixels = new byte[_sensor.ColorStream.FramePixelDataLength];
  61:             _greenScreenPixelData = new int[_sensor.DepthStream.FramePixelDataLength];
  62:             _colorCoordinates = new ColorImagePoint[_sensor.DepthStream.FramePixelDataLength];
  63:             _colorBitmap = new WriteableBitmap(_colorWidth, _colorHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
  64:             imgMask.Source = _colorBitmap;
  65:
  66:             _sensor.AllFramesReady += SensorAllFramesReady;
  67:             _sensor.Start();
  68:         }
  69:
  70:         void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
  71:         {
  72:             if (_sensor == null) return;
  73:             var hasDepth = false;
  74:             var hasColor = false;
  75:             hasDepth = GetDepthFrame(e);
  76:             hasColor = GetColorFrame(e);
  77:             ProcessDepthFrame(hasDepth);
  78:             ProcessColorFrame(hasColor);
  79:         }
  80:
  81:         // more code goes here
  82:
  83:     }
  84: }

The process is actually quite simple:

  • The sensor is initialized so that it works with the skeleton, the color camera, and the depth sensor
  • The working values of these streams are stored for the later processing of the data
  • The work is done in the AllFramesReady event of the Kinect sensor
  • In this event the depth and color frames are processed
  • With the result of these frames, a new image is generated that merges the background with the body of the detected person

Since the interesting code is what gets called in lines 70 to 79, I am saving that for tomorrow so I can also explain it better.
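
In the meantime, and only as a teaser, here is a rough idea of what those four helpers (GetDepthFrame, GetColorFrame, ProcessDepthFrame and ProcessColorFrame) could look like. This is a sketch following the pattern of the Toolkit's green screen sample, reusing the fields declared above; treat the method bodies as an approximation, not the exact code I will walk through tomorrow.

    private bool GetDepthFrame(AllFramesReadyEventArgs e)
    {
        // Copy the raw depth data; each short packs 13 bits of depth plus 3 bits of player index.
        using (var depthFrame = e.OpenDepthImageFrame())
        {
            if (depthFrame == null) return false;
            depthFrame.CopyPixelDataTo(_depthPixels);
            return true;
        }
    }

    private bool GetColorFrame(AllFramesReadyEventArgs e)
    {
        // Copy the raw Bgr32 color data.
        using (var colorFrame = e.OpenColorImageFrame())
        {
            if (colorFrame == null) return false;
            colorFrame.CopyPixelDataTo(_colorPixels);
            return true;
        }
    }

    private void ProcessDepthFrame(bool hasDepth)
    {
        if (!hasDepth) return;

        // Reset the mask and map every depth pixel to its color-space coordinate.
        Array.Clear(_greenScreenPixelData, 0, _greenScreenPixelData.Length);
        _sensor.MapDepthFrameToColorFrame(
            DepthImageFormat.Resolution320x240Fps30, _depthPixels,
            ColorImageFormat.RgbResolution640x480Fps30, _colorCoordinates);

        for (var y = 0; y < _depthHeight; y++)
        {
            for (var x = 0; x < _depthWidth; x++)
            {
                var depthIndex = x + (y * _depthWidth);

                // Player index 0 means "no player"; anything else belongs to a tracked person.
                var player = _depthPixels[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
                if (player <= 0) continue;

                // Bring the color coordinate back to depth resolution (the mask is built at 320x240)
                // and mark that pixel, plus its left neighbour, as opaque.
                var colorImagePoint = _colorCoordinates[depthIndex];
                var colorInDepthX = colorImagePoint.X / _colorToDepthDivisor;
                var colorInDepthY = colorImagePoint.Y / _colorToDepthDivisor;
                if (colorInDepthX > 0 && colorInDepthX < _depthWidth &&
                    colorInDepthY >= 0 && colorInDepthY < _depthHeight)
                {
                    var greenScreenIndex = colorInDepthX + (colorInDepthY * _depthWidth);
                    _greenScreenPixelData[greenScreenIndex] = OpaquePixelValue;
                    _greenScreenPixelData[greenScreenIndex - 1] = OpaquePixelValue;
                }
            }
        }
    }

    private void ProcessColorFrame(bool hasColor)
    {
        if (!hasColor) return;

        // Paint the full color frame into imgMask (its Source is _colorBitmap)...
        _colorBitmap.WritePixels(
            new Int32Rect(0, 0, _colorWidth, _colorHeight),
            _colorPixels, _colorWidth * sizeof(int), 0);

        // ...and use the player mask as its OpacityMask, so only the detected person
        // is drawn on top of imgBackground.
        if (_playerOpacityMaskImage == null)
        {
            _playerOpacityMaskImage = new WriteableBitmap(
                _depthWidth, _depthHeight, 96.0, 96.0, PixelFormats.Bgra32, null);
            imgMask.OpacityMask = new ImageBrush { ImageSource = _playerOpacityMaskImage };
        }
        _playerOpacityMaskImage.WritePixels(
            new Int32Rect(0, 0, _depthWidth, _depthHeight),
            _greenScreenPixelData, _depthWidth * 4, 0);
    }

In short: imgBackground stays underneath, imgMask paints the whole color frame on top, and the per-pixel opacity mask built from the player-index bits removes everything that is not a person, which is exactly the merge described in the list above.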

 

Saludos @ Home

El Bruno

