#MSBUILD – #HoloLens 2 Apollo moon landing demo

Hi!

Today at Build, just before Satya Nadella's keynote, there was a failed demo attempt using HoloLens 2 and Unreal Engine.


“Well, it seems doing a live demo is actually harder than landing on the moon,” said A Man on the Moon author Andrew Chaikin.

Lucky for us, there is a video from a rehearsal of the HoloLens 2 demo. In this scenario the holograms are streamed directly to the headsets from remote PCs, and it's all powered by Epic's Unreal Engine.

The high-quality images streamed to the headsets are amazing, and this is a perfect example of how to leverage the power of remote computers to generate high-quality graphics and render them on the HoloLens 2 device.

Let’s take a look at the video.

Greetings @ Toronto

El Bruno


#Hololens – Developer Resources for #Hololens2

Hi!

HoloLens 2 is on its way to a select group of developers, so it's time to start collecting and grouping some dev resources for the not-so-distant future.

Let's start with a couple of tweets.

There is also a new Mixed Reality Toolkit, available as usual on GitHub: https://microsoft.github.io/MixedRealityToolkit-Unity/README.html

What is the Mixed Reality Toolkit?

MRTK is a Microsoft-driven open source project.

MRTK-Unity provides a set of foundational components and features to accelerate MR app development in Unity (a minimal code sketch follows the list below). The latest release of MRTK (v2) supports HoloLens / HoloLens 2, Windows Mixed Reality, and OpenVR platforms. MRTK:

  • Provides the basic building blocks for Unity development on HoloLens, Windows Mixed Reality, and OpenVR.
  • Showcases UX best practices with UI controls that match Windows Mixed Reality and HoloLens Shell.
  • Enables rapid prototyping via in-editor simulation that allows you to see changes immediately.
  • Is extensible, giving developers the ability to swap out core components and extend the framework.
  • Supports a wide range of platforms, including:
    • Microsoft HoloLens
    • Microsoft HoloLens 2
    • Microsoft Immersive headsets (IHMD)
    • Windows Mixed Reality headsets
    • OpenVR headsets (HTC Vive / Oculus Rift)
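To give an idea of what those building blocks look like in practice, here is a minimal sketch of an MRTK v2 input handler in Unity. This is my own illustration, not an official sample; the class name ColorOnClick is hypothetical. It assumes MRTK v2 is imported into the project and the script is attached to a GameObject with a collider.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Tints the object whenever any MRTK pointer clicks it.
public class ColorOnClick : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // Any MRTK pointer (hand ray, gaze, motion controller) ends up here.
        GetComponent<Renderer>().material.color = Random.ColorHSV();
    }

    // Remaining interface members, unused in this sketch.
    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```

Because the script only talks to MRTK's pointer abstraction, the same code reacts to HoloLens 2 hand rays, HoloLens gaze-and-air-tap, or OpenVR controller clicks without changes.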

Input Simulation Service

The Input Simulation Service emulates the behaviour of devices and platforms that may not be available in the Unity editor (a short polling sketch follows the list). Examples include:

  • HoloLens or VR device head tracking
  • HoloLens hand gestures
  • HoloLens 2 articulated hand tracking
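Here is the short polling sketch mentioned above: a minimal example of my own (not an official sample; IndexTipLogger is a hypothetical name) that reads the right index fingertip pose through MRTK v2's HandJointUtils. In the editor, the pose comes from the Input Simulation Service; on a HoloLens 2, the same call returns real articulated-hand data.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Logs the right index fingertip position every frame, whether the hand
// is a simulated editor hand or a real tracked HoloLens 2 hand.
public class IndexTipLogger : MonoBehaviour
{
    void Update()
    {
        if (HandJointUtils.TryGetJointPose(
                TrackedHandJoint.IndexTip, Handedness.Right, out MixedRealityPose pose))
        {
            Debug.Log($"Right index tip at {pose.Position}");
        }
    }
}
```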

As usual, a video is the best way to describe this:

Unreal Engine 4.22 released

New: HoloLens Remote Streaming Support

Unreal Engine 4 now supports Holographic Remoting through the Windows Mixed Reality plugin! This allows Unreal applications to run on a Windows desktop PC and stream the rendered result wirelessly to HoloLens over a Wi-Fi connection in real time.
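The Unreal side of remoting is configured through the editor and the Windows Mixed Reality plugin rather than through code. For comparison, the same Holographic Remoting concept has been scriptable on the Unity side for a while; the following is a minimal sketch assuming Unity's legacy UnityEngine.XR.WSA API (the class name and IP address are hypothetical, and production code should wait until ConnectionState actually reports Connected rather than a fixed frame delay):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR;
using UnityEngine.XR.WSA;

public class RemotingConnector : MonoBehaviour
{
    [SerializeField] string holoLensIP = "192.168.0.42"; // hypothetical address

    public void ConnectToHoloLens()
    {
        if (HolographicRemoting.ConnectionState != HolographicStreamerConnectionState.Connected)
        {
            // Ask the Holographic Remoting Player running on the HoloLens
            // to accept a connection from this desktop app.
            HolographicRemoting.Connect(holoLensIP);
            StartCoroutine(LoadWindowsMRDevice());
        }
    }

    IEnumerator LoadWindowsMRDevice()
    {
        // Give the connection a frame, then switch the XR device so that
        // rendered frames are streamed to the headset over Wi-Fi.
        yield return null;
        XRSettings.LoadDeviceByName("WindowsMR");
        yield return null;
        XRSettings.enabled = true;
    }
}
```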

Happy coding!
Greetings @ Toronto


#Hololens – After the huge announcement of #Hololens2, here are some interesting resources

Hi!

The official launch of HoloLens 2 took place a couple of days ago during MWC 2019. Even if I miss my days supporting the Avanade booth at MWC, we are so connected today that I feel like I was there during the launch. So here is a recap of some resources to be used in the near future, maybe for an amazing surprise.

Main Microsoft Resources

Of course, let’s start with the official tweet announcement

The official video

The official post in Microsoft Blog: Microsoft at MWC Barcelona: Introducing Microsoft HoloLens 2

Some tech specs, https://www.microsoft.com/en-us/hololens/hardware

And, from the original tweet thread, some amazing HoloLens 2 facts:

  • #HoloLens2’s Field of View has increased by more than 2X.
  • With direct manipulation, you can now touch your holograms.
  • You can now log into your #HoloLens2 with Iris Recognition.
  • The new universal fit system works for everyone.
  • #HoloLens2’s front enclosure is made entirely out of carbon fiber.
  • #HoloLens2 will launch with a suite of solutions. Today we’re announcing Microsoft Dynamics 365 Guides.
  • Introducing #Azure Remote Rendering, a cross-platform service that supports ARKit, ARCore, and #HoloLens. The birth of the internet of holograms. Leverage the power of Azure to directly stream high-polygon content with no decimation to #HoloLens2.
  • We’re committed to openness.
    • We believe in an open app store model. Third party app stores are welcome within our ecosystem!
    • We believe in an open web browsing model. Firefox will be joining #HoloLens2.
    • We’ll continue to steer open standards such as Khronos and OpenXR.

And the full MWC session


Tech Coverage

  • The Verge

Happy coding!

Greetings @ Burlington

El Bruno

#Kinect – More information about the depth sensor in Project Kinect for #Azure (and #Hololens Bonus!)

Hi!

During the past Faculty Summit 2018, one of the exhibitions presented some details of the upcoming [Project Kinect for Azure]. The following video demonstrates the quality of the depth sensor in near and far mode. In both demonstrations you can see that it is much more powerful than the sensors we know.

Pay special attention at 01:50 in the video, where it is mentioned that this technology will be used in the next version of HoloLens. The tracking options for the new device will be impressive.

Greetings @ Burlington

El Bruno


#Opinion – Some news on #Hololens V2, HPU V2, and how #Microsoft chose the hardware path and built its own chips

Hi!

I was planning to write this post yesterday; however, a Canadian wasp decided it was better to leave me almost immobilized by attacking my foot, forcing me to plan my agenda differently.

Well, Marc Pollefeys (Director of Science on the HoloLens team) shared some information about the new version of HoloLens. Until the code name is made public, I will refer to the new device as HoloLens 2. What he tells us is a simple and powerful message:

The new HPU chip included in HoloLens 2 will have Deep Neural Network capabilities. (That means Artificial Intelligence!)

Let’s not forget that this is not new for Microsoft, but let’s also keep in mind that Microsoft is not in the business of designing and building chips like the ones we know from Intel or AMD. For years now, Microsoft has been investing in R&D for a new generation of chips, currently used mostly in Azure data centers. In fact, it all started back in 2012, when Doug Burger presented a risky bet to Steve Ballmer: Project Catapult.

Doug told Steve that, in the near future, the Internet would be controlled by a handful of companies providing essential services for users, and that this “new Internet” would require a different architecture as its base platform. If Microsoft wanted to be part of this “new Internet”, it would not only have to build the OS and the software, it would also have to take care of the server hardware, manage the networks, and more. It seems that at this moment Steve Ballmer’s face turned into a Gears of War bad boss, his eyes all red, and he responded with an “I thought this would be a research meeting, not a strategy one.”

Note: I have been fortunate to meet Steve Ballmer face to face, and the energy that man gives off is impressive. While I have only seen him in happy and animated mode, I imagine that a 1:1 in discussion mode must require special skills to move the conversation forward.

Then Qi Lu appeared (he was in charge of Bing) and was added to the discussion. It seems that Qi Lu had a similar idea in his head: the need to build reprogrammable chips, allowing much faster upgrades than were possible at the time.

And there was more: the Bing team had already started to work on this, and from here on we began to read the term FPGA (Field-Programmable Gate Array) much more often in some circles. That’s as far as I’ll go with this part of the story; it is quite interesting, and I recommend reading the Wired article (see references).

Let’s go back to 2017 with HoloLens 2 and the new HPU 2 (HPU: Holographic Processing Unit). The task performed by the HPU in HoloLens version 1 is to coordinate, analyze, and present a coherent result from the information the device obtains from all of its sensors. In other words:

The HPU merges information from different sources: the motion sensors, the camera, the depth sensors, and the infrared camera sensors. With all this information, the HPU can determine our position in the space around us, and the holographic projectors can then determine how and where to position the holograms projected into our field of vision.
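To make the idea of merging sensor data concrete, here is a deliberately tiny, conceptual sketch of sensor fusion: a 1-D complementary filter that blends a fast-but-drifting source (a gyroscope rate) with a slow-but-absolute source (a camera-derived angle). The HPU's real pipeline is proprietary and far more sophisticated; this only illustrates the principle, and all names and sample values are made up.

```csharp
using System;

// A 1-D complementary filter: NOT the HPU's actual algorithm, just an
// illustration of fusing two imperfect sensor sources into one estimate.
class ComplementaryFilter
{
    // Weight given to the fast source; (1 - Alpha) goes to the absolute one.
    const double Alpha = 0.98;
    double angle; // fused orientation estimate, in degrees

    public double Update(double gyroRate, double cameraAngle, double dt)
    {
        // Integrate the gyro (smooth but drifts over time), then pull the
        // estimate toward the camera angle (absolute but noisy, low-rate).
        angle = Alpha * (angle + gyroRate * dt) + (1 - Alpha) * cameraAngle;
        return angle;
    }

    static void Main()
    {
        var filter = new ComplementaryFilter();
        // Fake samples: the gyro reports 1.0 deg/s, the camera sees ~10 deg.
        for (int i = 0; i < 5; i++)
            Console.WriteLine(filter.Update(gyroRate: 1.0, cameraAngle: 10.0, dt: 0.01));
    }
}
```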

To this day, this type of processing is unique to HoloLens. Combined with a GPU, a CPU, and a battery, it allows Microsoft to offer a 100% autonomous, untethered device: Microsoft HoloLens 😀

Update: Thanks @AlexDrenea for helping with some typos here!

Now, what would happen if this processor also had some kind of DNN capability? Some blogs have called it an “AI co-processor”, and we can imagine it helping with tasks such as voice recognition, face detection, shape detection, image analysis, and more. The first thing presented during CVPR17 was how these new capabilities can be used to improve HoloLens’s hand tracking and hand gesture capabilities. Here is the demo, recorded by a conference attendee:


Now it’s time to think about what we can do with a device that “does not have to constantly send all this information to the cloud”: many of these tasks will be done locally. This will allow for more fluid applications, much more natural interactions, and a couple of other interesting surprises.

What is certain is that 2018 will be the year we see what’s new with HoloLens 2, and surely we will have many interesting surprises along the way.

Greetings @ Burlington

El Bruno

References

PS: This is my foot 12 hours after the “Bruno vs The Wasp” moment

