[# CODEMOTION] Materials from the #KinectSdk session and my time at CodeMotion.es



Yesterday I was lucky enough to give a presentation at the first CodeMotion in Spain together with @vgaltes. The event overall was great: lots of people attended, there was plenty of networking, many familiar faces and, of course, the chance to finally meet more than one online acquaintance in person.

It was also the ideal excuse to learn a little about tools and technologies outside my usual scope, and I have to admit that Groovy and Grails were a big surprise for me.

Finally, for the last slot I had to choose between going to see Luis Fraile or David Bonilla. Since I already know Luis, and I had never seen @David_Bonilla live (although I knew his reputation), I went to his session. The truth is he is brilliant: on the one hand, a display of resources like I hadn't seen in a long time; on the other, an approachable person with very clear ideas.

As for our Kinect session with Vicenç, it went well for us. We wrote a couple of lines of code and then showed some example Kinect applications, and what we did best was get a couple of smiles out of the audience. Poor Vicenç had to put up with the chatter of an Argentinian and a laptop he wasn't used to, but he handled it brilliantly!

I'll keep the anecdote that the room for our session was so full that, while struggling to get in, I had the following conversation with a skinny guy standing in front of me:

Bruno – can I get through?

Skinny guy – there's no more room

Bruno – but @vgaltes and I are the speakers

Skinny guy – ahh, in that case you can get through :)

We almost couldn't get in!



Code Motion KinectSdk

And the sample source code can be downloaded from https://skydrive.live.com/redir.aspx?cid=bef06dffdb192125&SPL=BEF06DFFDB192125!3904&parid=BEF06DFFDB192125!3842&authkey=!AHlC-AoSBzrGWnA

[# CODEPLEX] Now also supports GIT



Although it has already been covered in several places, this news is worth highlighting.

CodePlex now supports the creation of Git-based projects.


Seen from the outside it may not seem that important, but if we read between the lines there are 2 interesting things to highlight.

Firstly, this change is not based on a strategic decision by Microsoft; it comes driven by the large number of requests made by the community. Careful, though, we must not be naive: this does not mean that Microsoft will now start doing whatever the community says, nor should we think they gain nothing from this change. But it is important to note that including software of this kind in Microsoft's community code-management platform is a step that opens up many possibilities.

On the one hand, it helps improve the Visual Studio family of products. VS11 and TFS11 are fine, but if we compare them with the fluidity of working with a DVCS like Git, there's no contest. So behind this decision there is a desire to guide developers towards a new way of working (already included in VS).

In addition, this change aims to bring into CodePlex the large set of developers who currently use Git for their projects.

Secondly, don't lose sight of the fact that Microsoft is incorporating open-source software into its platform. This is not the first time, nor will it be the last, but it serves as a reference to give an idea of how powerful the idea of open source is for Microsoft's teams; something that is not very visible from the outside if you only consider Microsoft's packaged-software-selling facet.

In the long run we'll see whether it ends up being a copy of GitHub or of other more advanced Git-based sites. What's important not to lose sight of is that CodePlex doesn't just offer support for a source control repository; it adds several more possibilities… I'll leave it there.

Saludos @ Home

El Bruno





[# TFS11] Microsoft Visual Studio Team Foundation Server 11 Beta Power Tools



I've said it before and I'll say it again:

You cannot work with Visual Studio without ReSharper

The same phrase applies in the case of Team Foundation:

You cannot work with Team Foundation Server without the TFS Power Tools

Visual Studio 11 and Team Foundation 11 are in beta, but that hasn't stopped the guys from Redmond getting their act together and shipping a version of the Power Tools specific to TFS11.

The Power Tools don't include any brand-new tools (for now ;)). In them you'll find the classics: Team Process Editor, Best Practices Analyzer, etc.

This is a good one.


Saludos @ Home

El Bruno


Download: http://visualstudiogallery.msdn.microsoft.com/27832337-62ae-4b54-9b00-98bb4fb7041a

[# KINECTSDK] HowTo: Paint a skeleton



A couple of days ago I was asked how to paint a skeleton with the new KinectSDK in this post. Today's post explains, in a few steps, the basics of painting the skeleton.

For this example we'll use a WPF window, to which we add a Canvas where the skeleton will be drawn.

<Window x:Class="KinectSkeleton01.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="480" Width="640">
    <Grid>
        <Canvas Name="skeletonCanvas"
                Height="480" Width="640"
                HorizontalAlignment="Center"/>
    </Grid>
</Window>

The next thing to keep in mind is working with the Kinect sensor as a local variable of the window. In this post (link) I talk a little about that.

Once the state of the Kinect is under control, the next step is to initialize the skeleton capture (lines 6 and 7) and subscribe to the skeleton frame-changed event (line 8).

In the handler for this event, we first clear the canvas (line 14) and, once the received frame has been validated (line 19), copy the skeleton array into a local variable (lines 21 and 22).

The final lines check the state of the head Joint to see whether tracking is correct, and then use an ElBruno.Kinect helper to paint the skeleton.

   1: void MainWindowLoaded(object sender, RoutedEventArgs e)
   2: {
   3:     if (KinectSensor.KinectSensors.Count == 0)
   4:         return;
   5:     _kinect = KinectSensor.KinectSensors[0];
   6:     _kinect.SkeletonStream.Enable();
   7:     _kinect.Start();
   8:     _kinect.SkeletonFrameReady += KinectSkeletonFrameReady;
   9: }
  10:
  11: void KinectSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
  12: {
  13:     // Remove the old skeleton
  14:     skeletonCanvas.Children.Clear();
  15:     Skeleton[] skeletons = null;
  16:
  17:     using (var frame = e.OpenSkeletonFrame())
  18:     {
  19:         if (frame != null)
  20:         {
  21:             skeletons = new Skeleton[frame.SkeletonArrayLength];
  22:             frame.CopySkeletonDataTo(skeletons);
  23:         }
  24:     }
  25:
  26:     if (skeletons == null) return;
  27:
  28:     foreach (var skeleton in skeletons)
  29:     {
  30:         if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;
  31:         var headJoint = skeleton.Joints[JointType.Head];
  32:         if (headJoint.TrackingState != JointTrackingState.NotTracked)
  33:         {
  34:             var skeletonDraw = new SkeletonDraw();
  35:             skeletonDraw.DrawSkeleton(_kinect, skeletonCanvas, skeleton);
  36:         }
  37:     }
  38: }

The class responsible for painting the skeleton basically draws lines between each of its joints. As we can see in the following code, lines are drawn between two points, using the canvas and the sensor as references.

void AddLine(KinectSensor kinectSensor, Canvas drawCanvas, Joint j1, Joint j2)
{
    var boneLine = new Line { Stroke = SkeletonBrush, StrokeThickness = 5 };

    var j1P = kinectSensor.MapSkeletonPointToDepth(j1.Position, DepthImageFormat.Resolution640x480Fps30);
    boneLine.X1 = j1P.X;
    boneLine.Y1 = j1P.Y;

    var j2P = kinectSensor.MapSkeletonPointToDepth(j2.Position, DepthImageFormat.Resolution640x480Fps30);
    boneLine.X2 = j2P.X;
    boneLine.Y2 = j2P.Y;

    drawCanvas.Children.Add(boneLine);
}

public float JointDistance(Joint first, Joint second)
{
    float dx = first.Position.X - second.Position.X;
    float dy = first.Position.Y - second.Position.Y;
    float dz = first.Position.Z - second.Position.Z;

    return (float)Math.Sqrt((dx * dx) + (dy * dy) + (dz * dz));
}

The sample code can be downloaded from

https://skydrive.live.com/redir.aspx?cid=bef06dffdb192125&SPL=BEF06DFFDB192125!3903&parid=BEF06DFFDB192125!1932&authkey=!AKQC01rb-avYBVg

Saludos @ Home

El Bruno




[# ALM] Automating processes saves costs in the long term (more than you think)



I repeat the title of the post because, more than a title, it is a statement.


This is not an easy task, but one way of approaching it is as follows:

1. Identify the repetitive tasks we perform manually during the development of an application

2. Assess the possibility of creating an automated process to handle those tasks

3. Define a trial period for the implementation of that process

4. Measure the time gained by using the process

If we follow these steps when implementing an automation process, we'll probably end up in one of these two situations:

- What initially appeared to be a task that could quickly be replaced by a script turns out to be quite complicated, and it makes no sense to abandon the manual process

- The automated process becomes part of a growing automation effort that helps us gain quality in our developments

This may sound like Friday-night-beer theory, but it is actually fairly close to our day-to-day. Here is an example with the great Javi Gallardo (let's see when you create a blog, che!).

To share the output of an application, it was necessary to compress it in a special format, split the compressed file into multiple chunks, sign them, and a couple more steps.

When we did this process by hand, it took us less than a minute. But there was always the chance of getting the ZIP password wrong, splitting the chunks badly, etc. Javi took 30 minutes and created a script that took care of the whole process.

This way, we always get the same OUTPUT from a repetitive and predictable process (which is one of the foundations we must build on).
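To make this concrete, here is a minimal sketch of such a packaging script; everything in it (the file names, the 32 KB chunk size, and using a checksum as a stand-in for the real signing step) is an illustrative assumption, not Javi's actual script:

```python
# Hypothetical sketch: compress the output, split it into fixed-size
# chunks, and "sign" each chunk. All names and sizes are assumptions.
import gzip
import hashlib
import os
from pathlib import Path

CHUNK_SIZE = 32 * 1024  # 32 KB per chunk, assumed


def package(output_file):
    data = gzip.compress(Path(output_file).read_bytes())  # compress
    chunk_names = []
    for i in range(0, len(data), CHUNK_SIZE):             # split into chunks
        chunk = data[i:i + CHUNK_SIZE]
        name = "chunk_%03d" % (i // CHUNK_SIZE)
        Path(name).write_bytes(chunk)
        # placeholder for the real signing step: store a checksum alongside
        Path(name + ".sig").write_text(hashlib.sha256(chunk).hexdigest())
        chunk_names.append(name)
    return chunk_names


# stand-in for the real build output (100 KB of random bytes)
Path("app_output.bin").write_bytes(os.urandom(100000))
chunks = package("app_output.bin")
```

The point is not the tooling but the predictability: the same input always produces the same set of chunks and signatures, with no password typed by hand.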

At the metrics level, simply preventing Javi from making a mistake when generating a package already paid back the time spent writing the script. Javi is brilliant, but let's assume he could get it wrong twice a day.

Every subsequent execution gave us a saving of around 300 seconds. In the end, after a couple of months, we had gained 2 person-days. (Translated into €uros, this always brings a smile.)
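Running the numbers (with illustrative figures: only the ~300-second saving per run comes from this post; the runs per day and the length of "a couple of months" are assumptions):

```python
# Back-of-the-envelope check of the "2 person-days" claim.
# Only the 300 s/run saving comes from the post; the rest are assumptions.
seconds_saved_per_run = 300
runs_per_day = 5            # assumed
working_days = 40           # roughly "a couple of months"
work_day_seconds = 8 * 3600  # an 8-hour working day

total_saved = seconds_saved_per_run * runs_per_day * working_days
days_saved = total_saved / work_day_seconds
print(days_saved)  # → about 2 person-days
```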

With Javi we didn't go any further, the script was enough for us, but there is always room for a little out-of-the-box thinking:

  • Delegate the responsibility for carrying out this process to a Team Foundation build.
  • Produce the result of this process only on a successful build and when the tests run correctly
  • Automate packaging and distribution from this process
  • etc.

When we reach these scenarios we are closer to having Continuous Delivery scenarios (which I talked about in this link), simply by automating tasks.

There are many scenarios where we can automate; most end up in deployments, but the following quickly come to mind:

  • Deployments: for example when we deploy to Azure, why do we keep doing it manually?
  • Tests: the main area where automation guarantees quality
  • Code generation: not among the most recommended, but working with templates, for example, is a way to always guarantee the same OUTPUT from a given INPUT
  • Many more…

Finally, perhaps the best moment to apply these processes is when we integrate our code. When working with Team Foundation, a build definition is incredibly powerful for implementing them.

At Avanade Spain we have a number of build definitions that let us perform different automation tasks: from quality-assurance processes, such as running coding-style analysis (StyleCop) or generating custom reports based on unit tests and MTM tests, to automated deployments to Azure, ClickOnce, WebDeploy, etc.

Saludos @ Home

El Bruno


[# RESHARPER] HowTo: Create a template for a property (for those weird RaisePropertyChanged ones)



Once again ReSharper comes to my rescue and saves me hours and hours of unnecessary work, this time thanks to its extensive ability to generate templates.

If you use Visual Studio a lot, I assume you know that if you write "prop" and press the TAB key twice, the code snippet that comes with the IDE by default creates an auto-implemented property for you.


This is enough in most cases, but when the paint-and-color friends arrive, you have to start working with Dynamic Properties (I think that's what they're called), where your class must implement the INotifyPropertyChanged interface and then fire the notification in the property's setter.

Let's see: one way of implementing a property Name of this type would be something like this.

#region Name
[XmlIgnore] public const string NamePropertyName = "Name";
[XmlIgnore] private string _name;
public string Name
{
    get { return _name; }
    set
    {
        _name = value;
        RaisePropertyChanged(NamePropertyName);
    }
}
#endregion

The problem comes when you have to create properties of this type quickly and what happened to me on Friday happens to you: you are tired (this serves as an excuse for not thinking a bit more).

Luckily, ReSharper comes with its extended template editor and lets you get around the problem with no trouble at all.

The first thing is to open ReSharper's "Templates Explorer" panel, via the "ReSharper // Templates Explorer" menu. In it we'll see the list of templates we can work with.


For the case described earlier, I've created a new template called PropNot with the following text:

#region $Name$
[XmlIgnore]
public const string $Name$PropertyName = "$Name$";
[XmlIgnore]
private $type$ _$NameLower$;
public $type$ $Name$
{
    get { return _$NameLower$; }
    set
    {
        _$NameLower$ = value;
        RaisePropertyChanged($Name$PropertyName);
    }
}
#endregion

As you can see, variables are identified within the template with the $ sign, and each of these variables has a special behavior:

- The $Name$ variable is an input used to define the name of the property

- The $type$ variable defines the type of the property and shows the list of available types in IntelliSense

- The $NameLower$ variable takes the value of $Name$ and changes its first character to lowercase


And that's it; now, with two clicks, I can knock out these weird properties in one go :)


Saludos @ La Finca

El Bruno


[# VS11] What happened to my unit tests migrated from #VS2010?



After a good while working with Visual Studio 11 and ReSharper 7, you start getting the hang of it. Those who complain about the icons and the look & feel complain for nothing because, in reality, once you get used to it you barely notice that the IDE has changed.

Clarification: I personally think that if the colors of the VS11 IDE bother you, you are more concerned with the IDE than with the code… and it shouldn't be that way.

That said, quite a few things have changed, and it is important to know and learn about them. One of them is that now, with Visual Studio 11, we can integrate the execution of unit tests with different unit test frameworks: NUnit, MSTest, etc. But of course, to support this scenario they had to change the way Visual Studio 2010 worked, which was tied to MSTest.

Let's see an example. I have a Visual Studio solution with a class library exposing Foo() and Bar(), and then 2 unit tests with the following code.

[TestMethod()]
public void AskForFooAndGetFoo()
{
    var target = new Class1();
    var actual = target.Foo();
    Assert.AreEqual("Foo", actual);
}

[TestMethod()]
public void AskForBarAndGetBar()
{
    var target = new Class1();
    var actual = target.Bar();
    Assert.AreEqual("Bar", actual);
}

Since I am an organized person, I had created my test list


and of course, the unit tests passed OK and the code coverage was very good.


So far so good, and I was happily working with Visual Studio 2010.

However, things change a bit when we open the same solution with Visual Studio 11: we see that test lists are no longer supported. We can see the tests and the categories we created, but we cannot run the tests.


The reference article we are redirected to is very good, although I am sure it will change in the near future.

For running unit tests we now have a new panel in Visual Studio 11 called "Unit Test Explorer". And here's an interesting detail about how this panel works.

The "Unit Test Explorer" pane, is responsible for inspecting the code of our solution automatically and detects the unit tests in the same.


Then we can run the unit tests, launch only the failed tests, the ones that passed correctly, etc.


We also have the ability to filter these views and select the TestSettings with which we want to run the tests.

But the best part (or one of the best) is slightly hidden. We can now configure Visual Studio 11 so that the unit tests run automatically after each build (something we previously achieved with a macro).


So… our tests are not lost; we only have to organize them again. It is also a great time to review our tests, starting with their names.

The ReSharper team said it this morning via Twitter. @ReSharper:

Documenting a method name with a comment? Try giving it a better name. Better name doesn't fit in one line? Now you have a serious issue!

Saludos @ Home

El Bruno


References: http://msdn.microsoft.com/library/dd286595(VS.110).aspx

[# EVENT] #KinectSdk at #CodeMotion



Next March 24, at the UPM in Madrid, I'll take part with the great Vicenç García (@vgaltes) in an introductory session on KinectSDK.

We will see the basic KinectSdk combo: movement detection, camera management, the depth sensor, etc. In 45 minutes we'll give a quick overview of everything the KinectSDK has, and if we have time we'll move on to surprises.

We want to thank the entire CodeMotion.es organization, who are doing a great job managing the event, and obviously it is incredible that, among so many sessions (http://codemotion.es/talks), there is time to show a little about Kinect :)

Saludos @ Home

El Bruno


[# VS11] You can now edit your DGML (a good one)



DGML files, which I have talked about quite a bit in previous posts, have gained a couple of interesting new features in Visual Studio 11.

One that caught my attention is that we can now edit the position of the graph's nodes, and that information is stored within the file itself.

For example, if we create a DGML file with the following information:

<DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
  <Links>
    <Link Source="Bruno" Target="Valentino" />
    <Link Source="Bruno" Target="Martina" />
    <Link Source="Pao" Target="Valentino" />
    <Link Source="Pao" Target="Martina" />
  </Links>
</DirectedGraph>

we can see the following in the graph viewer.


Now we can move the contents of the graph as we see fit. In this case, for example, I have also enabled the "intelligent connectors" for the automatic layout.


If we look at the information generated in the contents of the DGML, we see that we now have much more to work with.

<?xml version='1.0' encoding='utf-8'?>
<DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
  <Nodes>
    <Node Id="Bruno" Bounds="99.7241031138102,-9.33186195373535,51.6633333333333,25.96" Label="Bruno" UseManualLocation="True" />
    <Node Id="Martina" Bounds="11.4679100863139,103.543610028076,60.9266666666667,25.96" Label="Martina" UseManualLocation="True" />
    <Node Id="Pao" Bounds="3.25611114501953,-8.40670049756368,50,25.96" Label="Pao" UseManualLocation="True" />
    <Node Id="Valentino" Bounds="8.41111859003703,47.5835100280762,70.3266666666667,25.96" Label="Valentino" UseManualLocation="True" />
  </Nodes>
  <Links>
    <Link Source="Bruno" Target="Martina" Bounds="68.7575836181641,16.628137588501,48.8051223754883,79.2549724578857" />
    <Link Source="Bruno" Target="Valentino" Bounds="69.6639488796923,16.6281380462646,37.1953348604241,25.8227892240103" />
    <Link Source="Pao" Target="Martina" Bounds="-3.06789708137512,17.5532989501953,23.9082214832306,79.9630508422852" />
    <Link Source="Pao" Target="Valentino" Bounds="31.8073044893709,17.553299407959,5.84093040266691,21.3492393330849" />
  </Links>
  <Properties>
    <Property Id="Bounds" DataType="System.Windows.Rect" />
    <Property Id="Label" Label="Label" Description="Displayable label of an Annotatable object" DataType="System.String" />
    <Property Id="UseManualLocation" DataType="System.Boolean" />
  </Properties>
</DirectedGraph>

In short, a nice feature for VS11.


Saludos @ Home

El Bruno


[# ALM] Tips for working with static code analysis



Today I was thinking of writing about some of the new Static Code Analysis features in Visual Studio 11, but it's better to pull this post out of the drafts folder and publish it.

When working on a project, regardless of the tool we use for static code analysis (Code Analysis, FxCop, etc.), it is always advisable to apply the following rules:

  • Apply at least a minimum set of analysis rules to all projects. Based on the principle that "it's better than nothing", this helps protect us from future mistakes. In the case of Code Analysis, the "Minimum Recommended Rules" set is ideal.
  • You have to work in a continuous integration environment; I take that for granted. If the analysis process takes too long locally, it is reasonable not to run the Code Analysis validation locally. That said, the continuous integration process must enforce static code analysis.
  • Fundamental: do not ignore the warnings in the builds; dedicate time to them and fix the problems they detect. Many times I prefer to turn warnings into errors, just to force the problems detected by static code analysis to be resolved.
  • Finally, and optionally: if someone wants the detail locally, they can activate the Code Analysis process on their own machine. Visual Studio lets you do it in 2 clicks.
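As a sketch of those last two points, in a C# project both the local Code Analysis run and the warnings-as-errors policy can be switched on from the project file. The property names below are standard MSBuild/Code Analysis properties, although the exact ruleset file name may differ in your project:

```xml
<!-- Illustrative csproj fragment: run Code Analysis on every local build
     and treat compiler warnings as errors -->
<PropertyGroup>
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```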


Saludos @ Home

El Bruno



List of tools for static code analysis