[#ALM] How often should we do a CheckIn?

Hi,

Over Christmas and New Year, Javi and I were the only ones working at the farm. He got the pleasant task of preparing deployment scripts, testing them locally, then watching them fail in the test environment, not to mention in PRE and PRO. But hey, since Javi is a smoker, I kept him company while he fed his vice, and between one thing and another we got to talking about one of those interesting, recurring topics:

How often should I do a CheckIn while modifying shared code? (That is, how often should I protect code in the repository?)

But the question was actually more specific: what happens when we take a large chunk of code and start working on improving it? In that case, should I check in my code only at the end of the refactoring process, or should I do it more often, as I apply small changes? Here are some examples:

In the first case, it is common to see someone take a piece of code for a couple of days, devote themselves to playing with it, and 48 hours later decide to check in the changes they have made. If you are working in a team and you have modified common elements, such as the definition of a project, it is very likely you will need to perform one or more MERGE actions. If on top of that you have modified classes that other colleagues were using, the MERGE becomes even more delicate.

After this example, someone might think the solution is to check in more frequently. Suppose we perform a CheckIn for every "small" modification we apply during our refactoring. In this case, the development team has to be very well tuned, because that is the moment when the other members should evaluate whether they need to get the latest version of the source code, and the person doing the refactoring should ask themselves the same question.

As we can see, neither of the two cases is a complete solution for this scenario. In my humble opinion, I can suggest the following for this example:

  • Evaluate the changes you make and try to keep them significant for the code. That is, not a single comment line, but not the destruction and complete replacement of 20 classes across 7 different projects either.
  • Always comply with the basic premises before checking in code: verify that it compiles and that it passes the latest version of the unit tests.
  • If you frequently find "classes" that 2 (or more) people are working on, assess whether you are following the SOLID principles. Two people working on the same class is usually an indicator that the class is taking on too much responsibility (see the sketch after this list).
  • If you do follow the SOLID principles but you still find 2 people working on the same class, then give a heads-up to the person who divides up the tasks in the team, because surely there is something that does not fit.
  • Discuss this work with your teammates. The daily "update" meeting is an ideal time to talk about it.
  • Finally, remember that a team must respect the principle of shared code ownership. Every change you apply affects the work of your peers, and nobody is the sole owner and "master" of a single piece of code.
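As a hedged illustration of the SOLID point above (the class and member names are hypothetical, not from the original post), this is the kind of class that tends to attract two developers at once, and one way to split it following the Single Responsibility Principle so that each person can work and check in on a separate piece:

```csharp
// Hypothetical example: a class that mixes persistence and notification,
// so two developers keep touching the same file and generating merges.
public class CustomerService
{
    public void Save(Customer customer) { /* database access code */ }
    public void SendWelcomeEmail(Customer customer) { /* SMTP code */ }
}

// Split per the Single Responsibility Principle: each class has one reason
// to change, and each developer can check in without stepping on the other.
public class CustomerRepository
{
    public void Save(Customer customer) { /* database access code */ }
}

public class CustomerNotifier
{
    public void SendWelcomeEmail(Customer customer) { /* SMTP code */ }
}

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```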

And to close, the classic comic for those days when these things happen to us… (source: http://geekandpoke.typepad.com/.a/6a00d8341d3df553ef0162fe399bd1970d-pi)

image

Greetings @ Home

El Bruno

Geek And Poke: http://geekandpoke.typepad.com


[#KINECT] My posts for #KinectSdk

image

Hi,

the following is a list of some of my Kinect posts:

HowTo

Programming

Miscellaneous

 

Regards @ Home

El Bruno

[#KINECT] Opinion: Using the ZOOM lens

image

Hi,

a few days ago the order for my Kinect lens arrived. If you don't know what it is, in the following image you can see how a device as aesthetically beautiful as the Kinect sensor is left looking like something from the medieval era once you attach a pair of lenses that "basically let you play with less space".

image

While there are several models, I bought one made in China, and the truth is it works quite well. I have a large living room where playing is no problem, but my workplace is a very different scenario. The specifications promise a 40% reduction in the space required, and in my case, since besides playing I also use the Kinect for work, I have been able to draw the following conclusions:

  • Without the lens, the skeleton is detected from 1.5 meters.
  • With the lens, the skeleton is detected from 1 meter (see the distance-logging sketch after this list).
  • The special routines we have created for gesture detection (wave, swipes, etc.) still work properly.
  • The camera image is the most affected: with the lens in place, the frame that the lens adds is also visible in the picture.
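As a minimal sketch (assuming the Kinect for Windows SDK v1 managed API, Microsoft.Kinect; the class name and structure are mine, not from the post), this is how the detection distances above can be measured by logging the Z position of each tracked skeleton:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonDistanceLogger
{
    static void Main()
    {
        // Take the first connected sensor and enable skeleton tracking.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();

        Console.ReadLine();
        sensor.Stop();
    }

    static void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            // Log the distance (Z, in meters) of every tracked skeleton,
            // to compare the minimum range with and without the lens.
            foreach (var skeleton in skeletons
                .Where(s => s.TrackingState == SkeletonTrackingState.Tracked))
            {
                Console.WriteLine("Skeleton detected at {0:F2} m", skeleton.Position.Z);
            }
        }
    }
}
```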

The following images show a comparison between the original Merry Christmas application I published recently in this post, and the same application at night and with the lens in place. Detection keeps working, and the application gives the feeling of gaining valuable meters for when there is not much space.

image

image

Greetings @ Home

El Bruno

Buy: http://www.dealextreme.com/p/zoom-of-kinect-for-xbox-360-slim-101323

[#KINECT] Kinect for Windows available on February 1 (and a change of landscape for development)

image

Hi,

yesterday at CES, our friend Steve Ballmer announced the availability of a new Kinect sensor specially designed for Windows. It is a bit more expensive than the current one, US$ 249, but it has a firmware evolution that allows it to identify objects at a very close distance, plus special support for Windows 7 and Windows 8. Some people asked why the price difference, almost 150 dollars more than the Xbox version, and the answer is as follows:

The ability to sell Kinect for Xbox 360 at its current price point is in large part subsidized by consumers buying a number of Kinect games, subscribing to Xbox LIVE, and making other transactions associated with the Xbox 360 ecosystem. In addition, the Kinect for Xbox 360 was built for and tested with the Xbox 360 console only, which is why it is not licensed for general commercial use, supported, or under warranty when used on any other platform.

It seems that development with the Xbox Kinect sensor will not be supported for commercial applications built for Windows. That is, 2 devices for a common goal. It sounds strange, and it will surely have implications from Microsoft before long.

The other big change has to do with the cost of the SDK. Until now there was speculation about a free SDK and a paid one for developing Windows applications, but they have folded that into the price of the hardware, since the SDK and runtime come included with Kinect for Windows. Now then, and finally: since there are many projects underway using the Beta SDK and the Xbox sensor, for them the license is extended until 2016; but if you want to develop applications for Windows, you had better start setting aside the US$ 249 (or €249 here in Spain) to buy the Windows sensor.

Greetings @ Home

El Bruno

Source: http://blogs.msdn.com/b/kinectforwindows/archive/2012/01/09/kinect-for-windows-commercial-program-announced.aspx

Reservation: http://www.amazon.com/exec/obidos/ASIN/B006UIS53K/xataka-20

[#ALM] ALM Assessment Online Tool

image

Hi,

I am writing this down because I often have to dive through labyrinths of information to find this simple link: http://www.microsoft.com/visualstudio/en-gb/strategies/almassessment. It takes you to a Microsoft Visual Studio site that lets you evaluate your ALM maturity level online. While the data is processed in the UK, the result is quite consistent and works well for any location.

So now you know: if you want to find out "how are we doing?", this is a good starting point.

Greetings @ Home

El Bruno

Resources: http://www.microsoft.com/visualstudio/en-gb/strategies/almassessment

[#ALM] 1. ALM in 2012: refactoring our way of working.

Hi,

in these days when things are quite complicated, one of the tools we have to get ahead is improving the way we work. Those of us who work in IT know that, for quite some time now, the stereotype of repetitive work from 9:00 to 17:00, for 30 years at the same place, has been impossible. Jose Manuel Alarcon has written a very good post on this subject, called "Artisans of knowledge", and one of the points he highlights most in it is the constant need to improve that comes with our profession.

But beware, many people have misunderstood the message. They think that what we have to do is keep up with every new technology that appears, know all the new products coming out, and worship any new trend that promises a bright and peaceful future. When in fact this is only 10% of our new responsibility. The other 90% is a bit more complicated: it requires us to use that thing we carry about one meter above our backside (approximately one meter, although in many cases the backside seems to be wired straight to the brain). That remaining 90% requires us to think, to learn from our experiences, and to propose changes that improve the way we work and the knowledge we possess.

For example, it is very easy to subscribe to the 10 most influential blogs in the blogosphere and to follow the 10 sharpest minds on different computing topics. By spending 60 minutes a day reading those blogs and taking a quick look at Twitter every so often, we can stay on top of the news. We have replaced the craftsmanship model of our parents, where you learned a trade for life, with a new premise: that of constant training. We can even enjoy a more advanced social status, where our colleagues look at us and wonder things like "where does he find the time to keep up with everything?", and we may even have a fan club in the best Justin Bieber style.

However, if we do not process all this information correctly, this way of working is useless. We not only have to read, we have to learn how to learn. In software project management it is essential to explore new ways of working, new tools and new trends; but it is much more important to understand how to apply them to a particular scenario to get good results. We need to know that there are no magic formulas that solve every problem, and that a technique as old as "refactoring" applies not only to development but also to the way we work and manage.

In many cases, there are people/organizations who take great care to define "a process" that is supposed to be useful both for 2-month development projects and for 2-year projects; that also works for teams of 5 people and for teams of 200; and, finally, that can be used both for teams working all in the same place and for distributed teams. When we find ourselves in these cases, it is not unusual for the creation of that process to turn into a two-year project of its own. Where, obviously, the premises on January 1, 2010 are not the same as the premises on January 1, 2012, etc. etc. etc.

Faced with the previous case, the extreme agilistas will surely get more than one ulcer and start evangelizing with Kent Beck's ABC of AGILE and the trends of the past 10 years. On the other hand, the process lovers will devote themselves to destroying the Amazon, creating process manuals covering 1000 situations that will never occur in any process, and which will then become a dead book in some organizational closet. Surely there will also be a third line of work, one which tries to understand the real needs of the person/organization and, based on them, creates a proposal with a solution. In any case, what matters is not the proposed solution, but that every so often we stop to look at the way we are working and see whether we can improve it.

In the AGILE world this is known as a retrospective, and all the agilistas I know are lovers of holding retrospective meetings. However, when badly run, these meetings are just a waste of time that contributes little to a project and even less to an organization. Since I am not going to write about that in this post, I will leave a reference to a book I loved on the subject: Agile Retrospectives: Making Good Teams Great (http://pragprog.com/book/dlret/agile-retrospectives). CMMI also takes this model into account (improvement is, after all, what the maturity model is about). For CMMI, properly understood, processes, people and tools carry the same weight in the values triangle. Mind you, you may like it or not, but it is a way of organizing and managing that is also valid in many cases. If you want to invest money in a book that explains the same as the CMMI guides but with a more human touch, try CMMI for Development®: Guidelines for Process Integration and Product Improvement (http://www.amazon.com/CMMI-Development-Integration-Improvement-Engineering/dp/0321711505/ref=cm_cr_pr_product_top).

In short, constantly improving the way we work is as important as the technologies we work with. What is more, exploring new ways of working will help us improve the guidelines we are already working with, and will also give us a broader perspective on how we can work.

But before concluding, I must point out 2 big problems we have in this area. On the one hand, keeping up with "all the latest" leads to situations like the one Rodrigo Corral describes in this tweet:

"We have failed with Scrum" is not a reason to adopt Kanban. The reason must be that Kanban is what best fits your needs.

image

The mere fact that something fails and there is an alternative solution does not mean we should adopt it as soon as possible. Having seen what Rodrigo describes so many times, I could not resist responding with a somewhat annoying analogy:

@r_corral could it also be applied as: "I've had an accident in the car, so now I'm switching to the motorbike"; wouldn't it be better to learn to drive?

image

Finally, the 2nd problem is as follows:

How is it possible that the problems in software today are still the same ones we had 20 years ago?

It seems a contradiction, since methodologies such as SCRUM were born back around 1993, yet even today we still have the same problems. I wrote my opinion about this in my book, and since I see this post is getting a bit long, I will leave it pending for another post.

Greetings @ Home

El Bruno


[RESHARPER] HowTo: Dynamically add references based on the typed code

Hi,

another one of ReSharper's good tips, in this case one that saves us the 5 clicks or 15 seconds needed to add a reference to a project. Let's assume the following scenario:

  • A class library project named ClassLibrary1, with a class named Class1.
  • A class library project named ClassLibrary2, with a class named NewClass.

In the NewClass class we write the following code and wait for ReSharper's contextual help on the line with the declaration and the reference to Class1. As you can see in the image, besides the typical suggestions to create a local variable, a parameter, etc., we now also have an option that adds a reference to the ClassLibrary1 project and then lets us use the Class1 type.
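To make the scenario concrete, here is a minimal sketch of the kind of code involved (the method name and body are my own illustration; the original snippet is only visible in the screenshot):

```csharp
// NewClass.cs in ClassLibrary2, which does not yet reference ClassLibrary1.
// Class1 therefore does not resolve on this line; alongside the usual
// "create local variable / parameter" suggestions, ReSharper offers a
// quick-fix that adds the reference to ClassLibrary1 and imports the type.
namespace ClassLibrary2
{
    public class NewClass
    {
        public void UseClass1()
        {
            var instance = new Class1();
        }
    }
}
```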

image

 

This does not only apply to references between the projects of a solution. If, for example, we declare a type from the Team Foundation Server 2010 client object model for Visual Studio 2010, we can also see how it proposes to dynamically add a reference for the Microsoft.TeamFoundation.Client type.

image

 

Although, careful, this second example has a catch: ReSharper is not smart enough to know ALL THE ASSEMBLIES; instead, it infers the type to reference because one of the current project's references already carries that reference. In the previous example, the reference from ClassLibrary2 to ClassLibrary1 allows the references of the first to be imported into the second.

image

 

Another +1 for ReSharper :D

 

Greetings @ Home

El Bruno (@elbruno on Twitter)