[#KINECTSDK] Starting with Kinect #Fusion



I have been searching on Google / Bing for “Kinect Fusion” and after 10 minutes I decided I had better explain it myself. So let’s start at the beginning, with what Kinect Fusion is NOT.

  • Kinect Fusion is not an application for Kinect
  • Kinect Fusion is not a piece of 3D modeling software

And, IMHO, my definition of Kinect Fusion:

Kinect Fusion is a set of libraries included with the Kinect SDK that let us use the camera and the depth sensor in the Kinect device and get information from these two sensors. Processing the data from these sensors, captured from different points of view, allows you to create 3D models.

Ready! It’s as simple as that! On the official Kinect Fusion website you can find more information, or the following diagram will give you a better idea:


Now, the interesting thing is that this is part of the SDK, and it already includes enough code for us to add this functionality to our applications. Later, if I have time, I will write a step-by-step guide explaining how to create an app from scratch; for now the important thing is to understand the basic operation.

Once you add “Microsoft.Kinect.Toolkit.Fusion” to your solution, the first class you should familiarize yourself with is “Reconstruction.cs”.


It is responsible for creating the 3D models from the information it receives from the Kinect sensor. It does this with the FusionCreateReconstruction function, which fortunately is fairly well documented.

    /// <summary>
    /// Initialize a Kinect Fusion 3D Reconstruction Volume.
    /// Voxel volume axis sizes must be greater than 0 and a multiple of 32.
    /// Users can select which device the processing is performed on with
    /// the <paramref name="reconstructionProcessorType"/> parameter. For those with multiple GPUs
    /// the <paramref name="deviceIndex"/> parameter also enables users to explicitly configure
    /// on which device the reconstruction volume is created.
    /// </summary>
    /// <param name="reconstructionParameters">
    /// The Reconstruction parameters to define the size and shape of the reconstruction volume.
    /// </param>
    /// <param name="reconstructionProcessorType">
    /// The processor type to be used for all calls to the reconstruction volume object returned
    /// from this function.
    /// </param>
    /// <param name="deviceIndex">Set this variable to an explicit zero-based device index to use
    /// a specific GPU as enumerated by NuiFusionGetDeviceInfo, or set to -1 to automatically
    /// select the default device for a given processor type.
    /// </param>
    /// <param name="initialWorldToCameraTransform">
    /// The initial camera pose of the reconstruction volume with respect to the world origin.
    /// Pass identity as the default camera pose.
    /// </param>
    /// <returns>The Reconstruction instance.</returns>
    /// <exception cref="ArgumentNullException">
    /// Thrown when the <paramref name="reconstructionParameters"/> parameter is null.
    /// </exception>
    /// <exception cref="ArgumentException">
    /// Thrown when the <paramref name="reconstructionParameters"/> parameter's <c>VoxelX</c>,
    /// <c>VoxelY</c>, or <c>VoxelZ</c> member is not greater than 0 and a multiple of 32, or the
    /// <paramref name="deviceIndex"/> parameter is less than -1 or greater than the number of
    /// available devices for the respective processor type.
    /// </exception>
    /// <exception cref="OutOfMemoryException">
    /// Thrown when the memory required for the Reconstruction volume processing could not be
    /// allocated.
    /// </exception>
    /// <exception cref="InvalidOperationException">
    /// Thrown when the Kinect Runtime could not be accessed, the Kinect device is not
    /// connected, the Reconstruction volume is too big so a GPU memory
    /// allocation failed, or the call failed for an unknown reason.
    /// </exception>
    public static Reconstruction FusionCreateReconstruction(
        ReconstructionParameters reconstructionParameters,
        ReconstructionProcessor reconstructionProcessorType,
        int deviceIndex,
        Matrix4 initialWorldToCameraTransform)
Once you have passed the sensor information to the Reconstruction, the next step is to create the virtual world; here we have the GetCurrentWorldToVolumeTransform function, also in Reconstruction.

    /// <summary>
    /// Get current internal world to volume transform.
    /// Note: A right handed coordinate system is used, with the origin of the volume (i.e. voxel 0,0,0)
    /// at the top left of the front plane of the cube. Similar to bitmap images with top left origin,
    /// +X is to the right, +Y down, and +Z is forward from origin into the reconstruction volume.
    /// The default transform is a combination of translation in X,Y to locate the world origin at the
    /// center of the front face of the reconstruction volume cube, and scaling by the voxelsPerMeter
    /// reconstruction parameter to convert from world coordinate system to volume voxel indices.
    /// </summary>
    /// <returns>The current world to volume transform. This is a similarity transformation
    /// that converts world coordinates to volume coordinates.</returns>
    /// <exception cref="InvalidOperationException">
    /// Thrown when the call failed for an unknown reason.
    /// </exception>
    public Matrix4 GetCurrentWorldToVolumeTransform()
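A quick hedged sketch of how this might be used in practice; here `volume` is assumed to be a Reconstruction instance created earlier with FusionCreateReconstruction:

```csharp
using Microsoft.Kinect.Toolkit.Fusion;

// Capture the transform that maps world coordinates to volume voxel indices.
Matrix4 worldToVolume = volume.GetCurrentWorldToVolumeTransform();

// The same transform can be passed back when resetting the volume, so the
// virtual world keeps the same origin and scale across resets.
volume.ResetReconstruction(Matrix4.Identity, worldToVolume);
```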

Once we have this “view” (you could think of it as a “frame”), we can continue by moving the Kinect sensor to capture the next one, and finally put together a 3D model.

Note: here another important class comes into play, FusionPointCloudImageFrame, although I do not dare to explain it without a clear usage context.
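Just to give a flavor of where FusionPointCloudImageFrame fits, here is a hedged per-frame sketch based on the SDK's Reconstruction and FusionDepthProcessor APIs; `volume` and `depthFloatFrame` (the current depth data already converted to float format) are assumed to exist:

```csharp
using Microsoft.Kinect.Toolkit.Fusion;

int width = 640, height = 480;
var pointCloudFrame = new FusionPointCloudImageFrame(width, height);
var shadedFrame = new FusionColorImageFrame(width, height);
var shadedNormalsFrame = new FusionColorImageFrame(width, height);

Matrix4 worldToCamera = volume.GetCurrentWorldToCameraTransform();

// Align the new depth frame against the volume and integrate it;
// returns false when camera tracking is lost.
bool tracked = volume.ProcessFrame(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    FusionDepthProcessor.DefaultIntegrationWeight,
    worldToCamera);

if (tracked)
{
    // Ray-cast the volume from the current camera pose into a point cloud,
    // then shade it into color frames that can be shown on screen.
    volume.CalculatePointCloud(pointCloudFrame, worldToCamera);
    FusionDepthProcessor.ShadePointCloud(
        pointCloudFrame, worldToCamera, shadedFrame, shadedNormalsFrame);
}
```

This is only a sketch of the loop the KinectFusionBasics sample follows, not a complete application.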

And I think that with that you already have the 3 basic things you need to look at to start creating apps that use Kinect Fusion!


Saludos @ Home

El Bruno


3 thoughts on “[#KINECTSDK] Starting with Kinect #Fusion”

  1. jordirios92

    Fantastic site/tutorial on the Kinect SDK. If you could post a step by step for an application, it would be perfect. Thanks for the content!


  2. sanju singh

    Nice article. I have a problem: I created a mesh of the same object using Kinect Fusion with two Kinects. Now I want to transform both of them into the same coordinate system; currently they are in different coordinate systems. Can you tell me how this can be done?

    thank you


