#Opinion – Face-Depixelizer, a sad example of how ML illustrates preexisting bias


Hi!

Over the past few days, you may have seen images showing how a new ML model can start with a pixelated image of a face and… well, let me share the official project description:

Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly

GitHub, Face-Depixelizer (see references)
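
To make the idea concrete, here is a minimal sketch of that kind of latent-space search, assuming a hypothetical pretrained `generator` (StyleGAN-like) and a `downscale` helper; this is not the actual Face Depixelizer code, just the general "search the generator's outputs for a face that downscales correctly" loop:

```python
import torch

def depixelize(low_res, generator, downscale, steps=500, lr=0.1):
    """Search the generator's latent space for a high-resolution face whose
    downscaled version matches the low-resolution input.
    `generator` and `downscale` are assumed helpers, not the real project API."""
    latent = torch.randn(1, 512, requires_grad=True)      # random starting point in latent space
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        candidate = generator(latent)                      # photo-realistic face proposed by the model
        loss = torch.nn.functional.mse_loss(downscale(candidate), low_res)
        loss.backward()                                    # nudge the latent so the candidate downscales correctly
        optimizer.step()

    return generator(latent).detach()
```

The key point for the rest of this post: many very different high-resolution faces downscale to the same pixelated input, and a search like this simply returns whichever one the generator, trained on its dataset, considers most plausible. That is exactly where any bias in the model and its training data shows up.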

Great idea. Sadly, one of the first tested images shows this:

You can probably guess the source image, and you can see how wrong the reconstruction is. It's not just a one-off mistake either; after a quick search we can find other bad samples from the tool.

And we can even find some scary face generations from video game characters (see references).

Why is this wrong?

Just taking a look at the generated faces will give you a sense of what's wrong here.

There is a trend that basically denies there is an error here. Researchers in deep generative modeling are obsessed with generating photo-realistic images from more abstract, low-information representations (down-sampled images, cartoons, sketches, etc.). The technology behind this is amazing; however, in this case the problem is not just a "lack of data" or a very poorly trained ML model. The model uses the popular FFHQ faces dataset, which seems to include a fairly diverse group of faces.

And here goes my question: how much did the author test this before publishing? I'm guessing that if you just shared this with a couple of friends (ML enthusiasts), someone would point all these errors back to you. Unless your test circle is so lacking in diversity that you never got to this point.

So, I'll assume the best from the author, but I'll also note how these practices reflect a specific type of bias in ML, and in software development in general.

These days I have learned a lot about history and empathy, and in the end I think we all need to do our best to be better humans.

In the following video, you will find an amazing story and examples of bias in Machine Learning.

Bonus: if you wonder how this works with Asian faces, here is a Lucy Liu sample.

Happy coding!

Greetings

El Bruno

Resources

#Podcast – NTN 44 – CLIs vs GUIs with @jc_quijano and @eiximenis


Hi!

Today I have the pleasure of talking with Eduard Tomas (@eiximenis) and Juan Quijano (@jc_quijano) about one of the topics that has been quite popular on Twitter: CLIs or GUIs. We talk about each of our preferences and about how we got to the current point, where there seem to be CLIs everywhere.
Juan is a Microsoft Certified Trainer, Azure Solutions Architect and independent consultant on DevOps adoption. Eduard is a Microsoft MVP and Team Lead at Plain Concepts.

Go to download

Bonus

Here are 2 very interesting articles on the subject:

Happy coding!

Greetings

El Bruno

#Podcast – NTN 43 – Do you need a degree to be a programmer? Is Scrum now a self-help method? And a couple of other interesting topics with @jc_quijano and @leomicheloni


Hi!

Today I have the pleasure of talking with Juan Quijano (@jc_quijano) and Leonardo Micheloni (@leomicheloni) about some non-technical topics that quickly become conversation in the IT world.

Juan is a Microsoft Certified Trainer, Azure Solutions Architect and independent consultant on DevOps adoption. Leonardo is a Microsoft MVP and Team Lead and Senior Architect at TOKIOTA.

Important 1: We have been quite careful with the content; in the times we live in, opinions can become boomerangs.

Important 2: After trying hardware upgrades and Zoom settings, I think the audio has improved a bit. The next one will surely be better.

Go to download

Happy coding!

Greetings

El Bruno

#Opinion – Bye #Kinect, thanks!


Today I start without a [Hello!], because based on today's news it should be a sad day. I can summarize it in a single line:

Microsoft has decided to discontinue the manufacture of Kinect.

Matthew Lapsen (Xbox Devices Marketing Manager), together with Alex Kipman, made the news public, so that's it. Several years after its launch, the path of Kinect has come to an end. At launch it was the best-selling device in history; Kinect broke several records and generated a hype that few devices at Microsoft have achieved.

It is also time to finally stop listening to, and leave behind, those who predicted the failure of Kinect (don't get me started here). And I say "leave behind" because, knowing a bit about how the market has evolved, it is easy to see how the ideas that lived inside Kinect exist today in many everyday devices.

During the past 2 days I have read at least 10 articles commenting that Microsoft has decided to stop making Kinects and has passed the baton to Apple, which incorporates the technology in its new iPhone X.

Actually, the iPhone X uses 3D scanning capabilities to perform tasks such as face recognition or to animate some very cute Emojis. This is Kinect technology evolved up to today: a miniaturized sensor inside a $1000 device that lets us animate Emoji faces! Ta-da! Sorry, I'm getting off track.

I once wrote about why many people rejected Kinect, so I won't touch on that topic again (link). What I will do is remember the more than 20 events I gave around Europe and America sharing my experiences creating Apps for Kinect.

Thanks to all the people who had the patience to attend these events and hear how excited I was with each SDK breakthrough. I would also like to thank everyone who helped me test Kinect Apps in my house and my office. Remember that to try Apps that used Kinect, you usually needed someone standing in front of it. Here Victor gets the biggest THANK YOU!


Finally, let me add that Kinect is one of the few products I have had so much fun with. I have been fortunate to present the capabilities of Kinect to many people, clients and events, and the truth is that those are the moments where I have heard the best ideas for Apps. I had not had such a grateful and enthusiastic audience again until these last years with Hololens.

Note: Hololens has a pair of mini Kinects embedded in the headset.

Well, I thank Kinect for everything. I will keep my 4 devices in my gadget collection, and surely when I have grandchildren I will be able to explain how fabulous it was to detect that a person raised a hand over their head or waved, and build an App with that!

Happy Coding!

El Bruno

References

#Opinion – Goodbye #Kinect, THANK YOU!


Today I start without a [Hello!], since internally this is a day that should be sad. I can summarize it in a single line:

Microsoft has decided to discontinue the manufacture of Kinect.

Matthew Lapsen (Xbox Devices Marketing Manager), together with Alex Kipman, made it public. And that's it: several years after its launch, the path of Kinect has come to an end. Behind remain anecdotes such as being the best-selling device in history at launch, breaking several records, and generating a hype that few Microsoft devices have achieved.

Also left behind are all those who predicted the failure of Kinect well in advance. And I say "left behind" because, knowing a bit about how the market has evolved, it is easy to see how the ideas that lived inside Kinect exist today in many everyday devices.

Today I have read at least 10 articles commenting that Microsoft has decided to stop making Kinects and has passed the baton to Apple, which incorporates the technology in its new iPhone X. In reality, the iPhone X uses 3D scanning capabilities to perform tasks such as face recognition or to animate some very cute Emojis. That is where Kinect has ended up today: miniaturized inside a $1000 device that lets us animate Emoji faces. Ta-da! I'm starting to ramble.

I once wrote about why many people rejected Kinect, so I won't touch on that topic (link). What I will do is remember the more than 20 events I gave around Europe and America talking about Kinect and about how we could create Apps with it.

Thanks to all the people who had the patience to attend these events and hear how excited I was with each SDK breakthrough. Thanks also to everyone who helped me test Kinect Apps at the office. Remember that to try Apps that used Kinect, you usually needed someone standing in front of it. Here Victor gets the biggest THANK YOU!


Finally, let me say that there are few products I have had so much fun with. I have been fortunate to present the capabilities of Kinect to many people, clients and events, and the truth is that those are the moments where I have heard the best ideas for Apps. I had not had such a grateful and enthusiastic audience again until these last years with Hololens, which, it is worth remembering, has a pair of mini Kinects embedded in the headset.

Well then, I thank Kinect for everything. I will keep my 4 devices in my gadget collection, and surely when I have grandchildren I will be able to explain how fabulous it was to detect that a person raised a hand over their head or waved, and build an app with that!

Happy Coding!

El Bruno

References

#Opinion – #GoTouch, draw and share on any surface with 2 clicks!

Hi!

If you are a technology fan, there are some places where you can lose track of time. The best examples are probably KickStarter and IndieGogo. I usually spend some time there, not only to look for cool projects but also to understand technology trends, such as:

  • Which platforms are supported by the most popular projects? Nowadays it is kind of rare to find a Windows SDK; 9 out of 10 projects usually support only iOS and Android.
  • Which technology / trend do the most popular projects follow? 3D printing used to be huge; now productivity and sharing are the most popular ones.

I don't have any specific criteria; however, I always find some great ones, for example, GoTouch. Here's a quick video:

So, one year later, I finally got my hands on my GoTouch. In the meantime I moved between a couple of different houses, I backed the project with $100, and there were some logistical issues, but in the end the device really makes up for the waiting time. The device presentation is great.


We start by downloading the software from their homepage (see references), and then we need to pair the device via Bluetooth.

Important: The package also includes a Bluetooth dongle, with custom Bluetooth stack software for it, just in case you don't have Bluetooth in the PC where you are planning to use GoTouch. If you ask me, I'd advise against using this custom Bluetooth software.


Once you have the device paired, it's time to calibrate it. A quick note here: once you point the device at your monitor / screen, it will automatically detect the interaction area and start to infer coordinates from there. This out-of-the-box procedure works very well.

You also have the option to perform a more refined calibration, where you need to touch a few specific points on your screen; this makes the writing and drawing experience very accurate.


As I said, the out-of-the-box calibration is fine; however, with the extra calibration the device is amazing. You can see how my little one is having fun with GoTouch at home. And I have a tricky scenario: a monitor with a 21:9 format. Even here GoTouch works great.


And you can also use this on a TV, on a projector screen, on a wall, etc. It also works very well with popular Apps like Paint, Paint 3D or even OneNote. In the end, you also get really productive when 2 or more people work on the virtual wall and collaborate from PCs or smartphones. As soon as I test this I'll probably share my experiences.

Bonus: one year after I backed the project; KickStarter is not for people in a hurry 😀


Greetings @ Toronto

El Bruno

References

#Opinion – #GoTouch, turn any flat surface into a touch surface with 2 clicks!

Hi!

If you really like technology, there are places that can be your downfall. For example, KickStarter or IndieGogo. I usually spend a bit of time on each of these sites to analyze questions such as the following:

  • Which platforms are most supported by the most popular projects? It is rare to find a Windows SDK; in general 9/10 always support iOS and Android.
  • Which types of projects are the most popular? This keeps changing a lot; there are times when every project is a 3D printing extension, other times they are more oriented toward desktop productivity, etc.

The criteria vary; however, I always find something interesting. For example, GoTouch. I'll skip the explanation and share a video:

Well then, after almost a year of waiting, 2 house moves, $100 and some logistical details, it has finally arrived in my hands. The truth is that the device presentation is great.


The software is downloaded from their home page (see references), and then all that is left is to configure and pair it via Bluetooth.

Important: The device comes with a Bluetooth dongle and Bluetooth software in case you don't have Bluetooth in the computer where you will use GoTouch. I have tried the software for this dongle, and… better not to install it.


Once installed, the next step is to position it in front of a screen and configure it. In my case, the test was special, since I have a monitor with a 21:9 format and I wasn't going to place it directly in front of the monitor. I decided to place it diagonally to the left, as the following photo shows. GoTouch's software automatically detects the screen to work with, and this works very well. We also have the option to perform a calibration, pressing on 4 points of the screen.


I have to admit that GoTouch works well on its own, although after calibration it is very, very precise. The following image shows my little girl "drawing" on my screen.


And so, from here on, you have turned a normal screen into a touch device with a pen. You can use GoTouch with popular applications like Paint, Paint 3D or even OneNote. Although you really get value out of it when you use the collaboration tools, where one person works on the virtual whiteboard and others work from PCs or smartphones.

When I try it in a collaborative environment, I'll write a bit more about it.

Bonus: Almost a year has passed since I backed the project 😀


Greetings @ Toronto

El Bruno

 

References

#Opinion – Your vacuum will be your #AR device! Augmenting #ARKit, #ARCore and #MixedReality

Hi!

I have very good memories of the times when I used to speak at events about Microsoft Robotics. It was 2006 and my girl gave me the best present ever: a Lego Mindstorms NXT. I spent hours and hours with the Lego, and also playing around with Visual Studio trying to control my bot using C#. (During those days I really learned about the Microsoft approach to Bluetooth.) Microsoft Robotics supported several types of bots; however, the most popular ones were the Lego Mindstorms and the popular Roomba vacuum. Yes, the Roomba used to have a special model with Bluetooth connectivity and, you know, hack the Bluetooth, hack the world! You could control the Roomba using Microsoft Robotics.

Of course, once you were there, the next step was to get a 2nd Roomba vacuum and set up a sumo fight between the 2 devices. I think it was at TechEd 2008 where the final of the Microsoft Robotics challenge took place; I was lucky enough to be there as an attendee.


At this point, you are probably thinking: vacuums? bots? how is this related to Augmented Reality? Bruno has finally lost it.

Let's get back to the AR world for a bit. Most AR SDKs rely on some type of sensor to create a model of the surrounding environment. In ARKit and ARCore this is supported by the camera and the motion sensors; in other devices and SDKs you can also have depth sensors.

And what happens if we add information from external sensors into the process of creating, analyzing or exploring the surrounding 3D model? Why not use external sensors, ones that can make a full scan of the inside of a house? Take a look at this video.

If the SDK could somehow detect "where it is" and request a 3D model of the current environment from a 3rd-party service, that would be a huge advance. The chance to use this information as an external input to our mapping process would be a big step forward. Today in the AR world, tracking / mapping is one of the main challenges for all the players (Apple, Microsoft, Google, Facebook, etc.).

Note: Of course, once you have the map, you need to understand "where you are inside the map", and this is not an easy task. Today we see this as the "lost tracking" scenario, but this is just an idea for the near future.
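
Purely as a hypothetical sketch of that flow (the service, endpoint and SDK calls below are made up for illustration and do not exist in any current AR SDK), the idea would look something like this:

```python
import requests

def load_environment_prior(location, ar_session):
    """Hypothetical flow: ask an external provider (for example, a vacuum that
    already scanned the house) for a 3D model of the current room, and give it
    to the AR session as a starting point for its own mapping."""
    # 1. Coarse localization: "where am I?" (GPS, Wi-Fi, a marker on the wall...)
    response = requests.get(
        "https://example-map-provider.local/rooms",        # made-up endpoint
        params={"lat": location["lat"], "lon": location["lon"]},
    )
    room_mesh = response.json()["mesh"]                     # pre-scanned floor plan / mesh

    # 2. Use the external mesh as a prior instead of mapping the room from scratch.
    ar_session.load_prior_mesh(room_mesh)                   # hypothetical SDK call

    # 3. The SDK still has to solve "where am I inside this map" (relocalization).
    return ar_session.relocalize_against(room_mesh)         # hypothetical SDK call
```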

There are plenty of paths to take from here. In our Avanade TechVision 2017, we constantly suggest that our clients take advantage of the amazing ecosystem of services around us. I'm sure we didn't think about information from a vacuum being used in a Mixed Reality experience!

Happy Coding!

Greetings @ Burlington

El Bruno

References

#Opinion – Your #ARKit, #ARCore and #MixedReality supported by… your vacuum!

Hi!

I always remember fondly the years when I gave talks about Microsoft Robotics. Back in 2006 my girl gave me a Lego Mindstorms NXT, and with it I travelled around almost all of Spain giving talks about this technology. While Microsoft Robotics let you work with different robots, the most used were the Lego Mindstorms and the Roomba vacuums. The latter had the ability to connect to a computer via Bluetooth, and this way it was possible to use Microsoft Robotics.

Obviously, once you had a Roomba vacuum, the next step was to get a second one and start sumo fights between the two. I think it was at TechEd 2008 where the world final of the sumo fights was held, and I was lucky enough to see it live.


And you are probably thinking now that Bruno has lost his mind: what does this have to do with the Augmented Reality kits from Apple, Android and Microsoft? Well, as of today, these SDKs depend on the device they run on having some type of sensor to map the environment. But what would happen if, in addition to the information from these sensors, we could add more information from other devices? Why not count on "external" sensors that can do a complete mapping of a house? Pay attention to the following video.

If the SDK could somehow have an idea of "where it is" and with that request information from a 3rd-party service, that could be quite useful. Why not use this additional information, and also use those boundaries, to get a better 3D model of the environment where we position our holograms?

As always, the possibilities are endless and, following one of the premises of our Avanade TechVision 2017, it all comes down to knowing how to move around and use services in this new ecosystem of services. In this case, integrating information from a vacuum into an Augmented Reality SDK. There's the idea for the weekend!

Happy Coding!

Greetings @ Burlington

El Bruno

References

#Opinion – How will #ARKit and #ARCore impact #MixedReality? (2017 is still on fire!)


Hi!

One year ago I wrote a post where I shared my opinions on how important it is to invest in learning 3D skills if you are a developer. In my own case, this was driven by the fact that I wanted to really understand the process of "creating a Hololens App", so I invested a lot of time upgrading myself for this. Also, the bet is not for today; it is for the near future (check the references).

A few months ago, during the Apple developers conference, Apple released their SDK to create Augmented Reality Apps: ARKit. ARKit lets us get data from the camera and the phone sensors, and with this information we can position and build 3D elements in our surrounding environment. If you go deeper, the way this SDK works is very interesting. I'll try to explain it the way I usually understand things (in a very simple way).

It all starts with the information received from the camera and the motion sensors. Using this information, ARKit defines a few anchor points or reference points in the camera FOV. The big difference with Hololens is that ARKit does not create a 3D model of the environment: it allows an App to anchor 3D objects in space, and then performs some crazy calculations every time the phone moves to get the new perspective, size and location of the 3D object. A good way to describe this is: imagine if Pokemon Go was really good at AR.
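
A toy way to picture those calculations is a plain pinhole projection: the App keeps the anchor as a fixed point in world coordinates, and every time the motion sensors report a new camera pose, the same point is re-projected onto the screen. This is only a conceptual numpy sketch, not the ARKit API:

```python
import numpy as np

def project(anchor_world, cam_rotation, cam_position, focal=800, cx=320, cy=240):
    """Re-project a fixed world-space anchor into pixel coordinates
    for the current camera pose (simple pinhole model, no distortion)."""
    # Transform the anchor from world space into camera space.
    p_cam = cam_rotation.T @ (anchor_world - cam_position)
    # Perspective divide: points farther away land closer to the image center.
    x = focal * p_cam[0] / p_cam[2] + cx
    y = focal * p_cam[1] / p_cam[2] + cy
    return np.array([x, y])

# The anchor never moves; only the camera pose (from the motion sensors) changes.
anchor = np.array([0.0, 0.0, 2.0])                    # 2 m in front of the starting pose
pose_1 = (np.eye(3), np.zeros(3))                     # initial camera pose
pose_2 = (np.eye(3), np.array([0.1, 0.0, 0.0]))       # phone moved 10 cm to the right

print(project(anchor, *pose_1))                       # anchor at the image center
print(project(anchor, *pose_2))                       # same anchor, shifted on screen
```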

So, that's it for ARKit; take a look at the references for more.


And now ARCore is here. But to understand it we need to go back a little in time and start with another amazing Google project: Project Tango. Project Tango works on devices (tablets and smartphones) that have the standard sensors, camera and motion sensors, plus a depth sensor. Using these sources, Project Tango was able to create a virtual 3D model of the environment, on which we can later position, build and draw 3D objects.
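
The depth sensor is what makes that model possible: every depth pixel can be back-projected into a 3D point, and the accumulated point cloud becomes the environment model. Here is a minimal sketch of that back-projection, assuming generic pinhole camera intrinsics rather than Tango's real API:

```python
import numpy as np

def depth_to_point_cloud(depth, focal, cx, cy):
    """Back-project a depth image (meters per pixel) into an Nx3 point cloud
    using a simple pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / focal          # recover X from the pixel column and depth
    y = (v - cy) * z / focal          # recover Y from the pixel row and depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Example: a fake 4x4 depth image where every pixel is 1.5 m away.
cloud = depth_to_point_cloud(np.full((4, 4), 1.5), focal=500, cx=2, cy=2)
print(cloud.shape)                    # (16, 3) -> one 3D point per valid depth pixel
```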

It's really cool; however, as I mentioned before, it only works on specific devices, the ones with the additional sensor. And, to avoid the need for this extra sensor, Google presented ARCore. Again, let me try to explain how ARCore works:

ARCore is like ARKit, but for Android. That’s it.

Note: Project Tango is still there and evolving, and ARCore's focus is different, but I like to present it this way.

Both platforms, ARKit and ARCore, are great as a ramp-up to create Augmented Reality Apps. Of course, the final Apps are not going to be as amazing as the ones created for dedicated AR devices, but Apple and Google right now have a big advantage over their competition: they can reach millions of phones that can run these Apps without the need for additional hardware (keep this thought).

This is important, very important. We can draw on some previous experience of how other organizations have made their entrance into this AR/VR world. It's time to review the Mixed Reality history in flash mode.

  • It all started in 2015 when Microsoft presented Hololens.
  • Until today, there is no competition for Hololens as a standalone, wireless Mixed Reality device. But you need to save $4000 to get one, so price is a constraint.
  • During the past 3 years, there have been a lot of advances and great news:
    • The development tools have become more and more powerful. Microsoft chose Unity3D as the main tool for designing and building mixed reality Apps, and this opened the door to a big community of developers, gamers and more.
    • The "price problem" was solved with an interesting approach. Instead of spending $4K on a device, you can get much cheaper Mixed Reality headsets, although you need a PC to use them. They start at $300.
    • Microsoft is creating cool partnerships, like the one with Steam. Think about all the experience SteamVR has in this field.
    • Microsoft has acquired Minecraft. I think the goal here is to make a splash and release something amazing based on Minecraft VR, looking for the wow effect!

And this is great, and as a long-term plan it makes sense. However, in less than 2 months Apple and Google have also opened this market to millions of phone users, without the need for an extra device (thanks for keeping that thought), and now it's all in the hands of the people who create Apps.

Another important point: there are thousands of companies and people building Apps for iPhone and Android. If they start to play around with these new technologies, for sure we will get something amazing among all the crap that is also going to be generated. Again, the experience using ARKit or ARCore is not like the one you get with a dedicated device, but we shouldn't underestimate it.

Note: And now it's time to talk to you, my friend, the one who complained about the small Hololens FOV. What do you think now about the FOV of a 6" device? And when you need to move it with your hand?


In the end, there is also a bright side here. If you invested time in learning 3D, with something like Unity3D, now you can create Apps for any of these platforms using Unity3D. I'm assuming that porting between platforms will be somewhat easy, with "light" versions of Apps for iPhone and Android, and much more powerful versions of the same App for specific devices like Oculus Rift, HTC Vive or any of the Windows 10 Mixed Reality devices.

As I said one year ago, amazing times are coming!

Greetings @ Toronto

El Bruno

References