#Windows10 – #Hololens and the new Motion Controllers and more #MixedReality devices

Hi !

During the past days on the Canada Hololens Tour, I’ve been answering several questions about the future of Mixed Reality devices. Some of these devices were presented in the past months and are already available for preorder, like the HP and ACER headsets.

Last week, at the Unity Europe 2017 event, there was an interesting session, “Mixed Reality and the Next Evolution in Human Computer Interaction”, where more information was revealed. Some other partners are now also going to create devices for the Windows Holographic platform. In case you don’t know, this is the common Windows 10 platform where Hololens and these devices use Mixed Reality capabilities. So far, we can get devices from these other vendors:

  • HP (available for preorder)
  • ACER (available for preorder)
  • ASUS
  • Dell
  • Lenovo (available in December this year)

microsoft-motion-controller2

Note: the Dell MR device will be built by the same team that creates the XPS and Alienware lines, so we can expect a very good device here based on this team’s gaming experience.

This is cool: a lot of different devices, each probably with some different features. And speaking of features, the next section is also important.

We are also waiting for the Motion Controller devices presented by Microsoft in May this year. There is no preorder date or price yet; however, during this session they shared a chart with a lot of important information.

Mixed_Reality_foundations_difference_between_HoloLens_and_MR_Devices-1024x768

  • Motion Controllers will not work with Hololens; they will work with all the other Mixed Reality devices. I guessed wrong about this during the Hololens Tour.
  • Mixed Reality devices will not support gesture recognition; only Microsoft Hololens will support this feature.
  • Mixed Reality devices will not perform Spatial Mapping operations. This is not a surprise: in the presentation we saw that they don’t have any spatial mapping sensors. Maybe in a future release they’ll add some of these sensors (with a very long wire)?
  • All the other features, like Gaze, Spatial Sound and more, will be available on both platforms.

This is a great moment for AR, VR and MR. I’m waiting like a child for a present for these new devices, so I can start to find some cool business scenarios to use them!

Greetings @ Burlington

El Bruno


#Event – Materials used during the #Hololens Tour

Clipboard03

Hi.

The Hololens Tour is over. I traveled over 500 kilometers in 3 days across 3 cities in Ontario, Canada, and I was lucky enough to share some of my experiences and lessons learned working with the Microsoft Hololens.

  • June 22 Mississauga .Net User Group (link)
  • June 26 Toronto .Net User Group (link)
  • June 27 London .Net Developers (link)

All the user groups were great, and the attendees were also amazing people. Now it’s time for most of the user groups to rest during the summer; however, we are already planning some activities for next year.

The session was Level 100, so there is no cool source code to share, only a simple “Hello Holo”. But I promised the slides, and here they are.

Some photos of the tour (twitter and meetup have plenty of them!)

 

And, as always, the best part was putting a face to the Twitter names of a lot of people, and having some very cool conversations about the future of VR, AR & MR.

Now it’s time to start planning the activities I want to participate in after the summer!

Greetings @ Burlington

El Bruno


#Event – The #Hololens Tour is here! Start at Mississauga, Toronto and finish in London (CA)! #HoloTour

hololens skype minecraft

Hi!

Quick post today: this week I’m back in the podcast world (link), and I also get to share some of our Avanade experiences on AI and Virtual Reality at the Digital Marketing for Financial Services Summit (link).

On June 22 we start the Hololens Tour at Mississauga, then Toronto, and we finish at London!

  • June 22 Mississauga .Net User Group (link)
  • June 26 Toronto .Net User Group (link)
  • June 27 London .Net Developers (link)

Abstract

In this session, Bruno Capuano will share his experiences creating applications for Hololens. We will review the most popular tools for creating Hololens Apps: Unity3D and Visual Studio 2017. We are also going to see some other interesting options, like URHOSharp (from Xamarin) and WaveEngine. He will share his experience of the nice journey you face when entering the 3D world, from his point of view: a developer who has worked and thought in “2D mode” for years.

Greetings @ Toronto

-El Bruno


#Events – #HoloTour, 3 #Hololens events in June at Mississauga, Toronto and London ! (500Kms in 3 days)

Hi !

I’m back from sunny Calgary, and I need to update my blog. I have some draft posts on Custom Speech Recognition samples, others on bots, and more.

However, I need to start with some awesome news: June is all about Hololens! I’m lucky enough to collaborate with some Ontario User Groups to talk about how to develop Hololens Apps. These are the dates and locations:

  • June 22 Mississauga .Net User Group (link)
  • June 26 Toronto .Net User Group (link)
  • June 27 London .Net Developers (link)

And, of course, this is Canada: big distances! Almost 500 KMs in 3 days!

Clipboard03

If you are interested in the future of human interaction with devices, save the date and I’ll see you there. Before I close this one, let me share the session abstract.

In this session, Bruno Capuano will share his experiences creating applications for Hololens. We will review the most popular tools for creating Hololens Apps: Unity3D and Visual Studio 2017. We are also going to see some other interesting options, like URHOSharp (from Xamarin) and WaveEngine. He will share his experience of the nice journey you face when entering the 3D world, from his point of view: a developer who has worked and thought in “2D mode” for years.

Greetings @ Toronto

El Bruno

#Event – #DevOps Lessons learned and some information on the #Hololens Tour on June

giphy
Hello!

During these days at Visual Studio Live @ Austin I have been lucky enough to go to a lot of very good sessions. For example, I’ve seen Brian Randell and Richard Hundhausen give great DevOps concept demos using Visual Studio Team Services. I liked a lot how, now that we use Team Foundation in the cloud, we can explain DevOps concepts almost instantaneously.

This reminds me of my ALM sessions 10 years ago. In those days, we made the first demos of TFS 2005 or 2008 and had to deal with resources for everything. On the one hand, I needed a powerful laptop, because usually the demo ran in a virtual machine. There were even more complex scenarios where, in addition to the TFS VM, I needed another virtual machine for a Domain Server, another VM for SharePoint, and maybe a couple more.

In those days, since we needed power, we used to have very big laptops. In my case, I used to travel with one or more external disks holding “my demo VMs”. And, of course, the assembly and preparation experience before an event was not easy at all. I will not deny it: there were moments of many nerves, and also of instant gratification when, at the end of the session, you received a question related to the topic of the event.

These days I have to deal with something similar: Hololens sessions. Everything seems easy when we watch the mixed reality sessions at Build or another big event. However, in technical sessions we usually have to deal with:

  • The need for a very powerful laptop, mostly to be able to use Visual Studio and Unity3D in a comfortable way. This means going back to traveling with big laptops, which are usually bigger than an airplane carry-on
  • The complicated development flow for this type of project. At a minimum, we need one instance of Unity3D and 2 Visual Studio instances
  • The compilation dead times for an App, or the long times needed to deploy an App to a device
  • The delay when the Hololens FOV is presented: a delay between what the Hololens user actually sees and what is projected from an external computer

And finally

  • The low sense of immersion for the audience when viewing a 2D projection on a projector or similar, when the real experience is lived by the person who is using the HoloLens

While there are workarounds for some of these topics, such as improving the projection delay, the ideal is to think about the session in a different way. After one year with more than 20 Hololens sessions for user groups and clients, I have learned to review these points when I prepare a session:

  • Based on the time of the session, create a script suitable for the time and the content to show. For example, if the session is for programmers, it is best to plan a demo moment with live debugging of an App from Hololens in Visual Studio
  • If the session allows it, perform “Live Code”. In this type of example, where a simple Hello World can take a long time to create from scratch, it is best to have 90% of the project/code complete and only add the missing parts live
  • Avoid App deployment times to the device. An already built and deployed App saves a few valuable minutes in the session

And finally, not forgetting the basics:

  • Review and get to know in advance the place where the session will be held. It is important to note that Hololens demos need space for the holograms, an adequate light level, and a good Internet connection

I will keep all of this in mind for the upcoming Hololens Tour in June, where I’ll be lucky enough to talk about Hololens at the .Net User Groups in Mississauga, Toronto and London. In addition, I am preparing some special surprises for these events, which I will communicate as soon as I can confirm them.

The last days of June will be fun!

Greetings @ Austin

El Bruno

#Hololens – How to use the TapToPlace and InitialScanManager classes

Hi !

Yesterday I wrote about a custom solution I made to get the Tap to Place feature in a Hololens App. The main set of features was:

  • We can perform an initial scan of the environment, based on a time constraint. We can also continue the scan process if we don’t find a minimum number of planes for the floor or walls
  • We can associate one class to a hologram to get the Tap to Place feature
  • The process also allows us to validate whether we can place the hologram on a vertical (wall) or horizontal (floor) plane
  • The validation process also has some visual features, like changing the color of a box collider from red to green depending on whether or not we can place the object, and casting a shadow at the final location of the hologram

“InitialScanManager.cs” is the class I use to perform the initial environment scan. I usually add this class to an Empty Game Object. The main properties are these:

  • Limit Scanning by Time and Scan Time define how the scan process is performed. If we set a Scan Time value of N and check Limit Scanning by Time, the app will scan the environment for N seconds and then stop the scan process
  • Minimum Floor and Minimum Wall define the minimum number of planes to be found on the floor and walls
  • Player, optional, is the main hologram. When the scan process starts it will be hidden, and when the scan process ends the player will be shown again
  • DebugTextPlayer, optional, is a TextMesh used for debug output

Clipboard02.png
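The stop condition described above can be sketched as a plain C# rule. This is a simplified assumption of the logic, not the actual class code; the `ScanStopRule` and `ShouldStop` names are mine:

```csharp
using System;

// Minimal sketch (assumption, not the real InitialScanManager code):
// the scan stops when the time limit is enabled and reached, but only if
// we already found the minimum number of floor and wall planes; otherwise
// the scan process continues.
public static class ScanStopRule
{
    public static bool ShouldStop(bool limitByTime, float elapsedSeconds, float scanTime,
                                  int floorPlanes, int minFloor,
                                  int wallPlanes, int minWall)
    {
        var timeUp = limitByTime && elapsedSeconds >= scanTime;
        var enoughPlanes = floorPlanes >= minFloor && wallPlanes >= minWall;
        return timeUp && enoughPlanes;
    }
}
```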

The “TapToPlacePlayer.cs” class is the one we must attach to the element where we want this feature. The main properties are:

  • Placeable Bounds and Non Placeable Bounds Materials are the materials used when we start “place mode”. I create a collider with these materials, and the class changes the material depending on whether or not we can place the hologram at the selected position
  • Placeable Shadow and Non Placeable Shadow Materials are the materials used to display the final position where the hologram will be placed
  • Display Shadow defines whether we want to show (or not) the final position of the hologram
  • Placement Surface defines the positioning mode, Horizontal or Vertical; I mean floor or wall
  • DebugTextPlayer, optional, is a TextMesh used for debug output

Clipboard04.png
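The Placement Surface validation can be sketched in pure C# using the surface normal at the gaze hit. This is an illustrative assumption, not the real class code; `PlacementValidator` and `CanPlace` are hypothetical names:

```csharp
using System;

// Minimal sketch (assumption): decide whether a surface hit is valid for
// the configured Placement Surface by looking at the Y component of the
// (unit) surface normal. A floor normal points up (normalY close to 1);
// a wall normal is roughly horizontal (normalY close to 0).
public enum PlacementSurface { Horizontal, Vertical }

public static class PlacementValidator
{
    public static bool CanPlace(PlacementSurface surface, double normalY, double tolerance = 0.1)
    {
        return surface == PlacementSurface.Horizontal
            ? Math.Abs(normalY - 1.0) < tolerance   // upward normal => floor placement is valid
            : Math.Abs(normalY) < tolerance;        // horizontal normal => wall placement is valid
    }
}
```

The class would then pick the Placeable or Non Placeable material based on this result.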

The complete source code can be downloaded from GitHub (link).

Greetings @ Toronto

El Bruno


#Hololens – Alternative fix to the TapToPlace and WorldAnchor issue (until official #HoloToolkit release)

Hi !

If you work with Beta products you will find funny scenarios, like something working, then not working, then working again. The next Hololens forum thread is a must-read if you want to understand the frustration you can experience in these scenarios:

VS2017/U5.6 Crash – WinRTScriptingObjectWrapper::InternalAddRef Reference Count cannot be zero (link)

TL;DR: the current HoloToolkit version is not officially supported for the combination of Visual Studio 2017 and Unity3D 5.6.

There are features, like Tap To Place, which do not work fine with VS2017 and Unity3D 5.6. If you implement this, as soon as the Hololens App starts it will crash and close, without any clear explanation of what’s happening. It seems to be related to World Anchors and the way Unity works with them; this issue is still alive on GitHub (link).

So, since I need Tap To Place, I decided to implement it myself. I picked up some code from HoloToolkit and some other pieces from the Holographic Academy tutorials, and I created 2 main classes which fix the problem and also add some additional value. Here are the main implemented features:

  • We can perform an initial scan of the environment, based on a time constraint. We can also continue the scan process if we don’t find a minimum number of planes for the floor or walls
  • We can associate one class to a hologram to get the Tap to Place feature
  • The process also allows us to validate whether we can place the hologram on a vertical (wall) or horizontal (floor) plane
  • The validation process also has some visual features, like changing the color of a box collider from red to green depending on whether or not we can place the object, and casting a shadow at the final location of the hologram

In the next animation we can see a cylinder hologram with the following configuration:

  • Green material for the rigid box for a valid placement
  • Red material for the rigid box for an invalid placement
  • No shadows enabled

2017 04 26 Holo TapToPlace with Validation 01

The complete source code can be downloaded from GitHub (link).

Greetings @ Toronto

El Bruno


#Hololens – #SpatialMapping, tutorial search for simple surfaces (1×1 floor tile)

Hi !

In my previous post I wrote about how to use Spatial Understanding to get better control of the environment scanning process with Hololens. In that post I created a demo with the following steps:

  • As soon as the App starts, Hololens starts the scanning process
  • We defined a minimum number of planes to be found; when we reach this number, the scan process stops
  • We have an FPS Display showing the scan process status

Today’s sample covers the next steps required after the last post:

  • Search for a simple surface on the floor with a size of 1×1
  • Start this search process with an AirTap / Click on a hologram

The next ugly animation shows the scanning process (at 6X speed) and then the floor mapping process.

2017 04 25 Hololens Spatial Understanding 01.gif

Simple tutorial steps:

  1. Create a 3D project in Unity3D
  2. Import HoloToolkit package
  3. Configure project to
    1. Support HoloLens projects (UWP, VR, etc)
    2. Enable the Spatial Mapping feature
  4. Clean Scene elements
  5. Add
    1. Hololens Camera
    2. Cursor With Feedback
    3. Input Manager
    4. Spatial Mapping
    5. Spatial Understanding
    6. FPS Display
  6. Add Empty element
    1. Rename to CodeManagers
    2. Add new C# Script named “GameStartScanner.cs”
  7. Add Empty Element
    1. Rename to HoloCollection
    2. Add 3D Cube element to HoloCollection
    3. Rename to “InteractiveCube”
  8. Set these properties on the Cube
    1. Position: x: 0, y:0, z:1.2
    2. Scale: x: 0.2, y:0.2, z:0.2
  9. Add a new C# Script to the cube named “Scanner Analyzer”

The final project should look similar to this one:

Clipboard02

The source code for the ScannerAnalyzer.cs script is below. As always, a couple of important notes about this script:

  • The sample is based on the Spatial Mapping sample included in HoloToolkit
  • I use several additional classes (mostly for drawing); you can find these classes as part of the project
    • AnimatedBox.cs
    • AnimationCurve3.cs
    • GameStartScanner.cs
    • Line.cs
    • LineData.cs
    • ScannerAnalyzer.cs
  • In simple terms, the new class starts when the scanning process stops. As a reminder, the scanning process was defined in “GameStartScanner.cs”
  • In the Update() function we check if we are in the “search tile process”. If we are searching, we draw blue tiles on the floor
  • The search process starts in “OnInputClicked”, which is triggered by an AirTap on the Cube
  • The minWidthOfWallSpace and minHeightAboveFloor variables define the size of the tile to search for
  • Line 54 is the important section. Here we use the core of Spatial Understanding to start the search process, with these functions:
    • SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(_resultsTopology);
    • SpatialUnderstandingDllTopology.QueryTopology_FindPositionsOnFloor()
  • The 1st step creates a memory pointer with all the scanned elements; the 2nd step filters it for the elements which match the query
  • The next lines draw the result lines. Most of this code is based on the HoloToolkit Spatial Mapping samples: tons of C# lines of code which require a nice refactoring
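The script also spreads the displayed boxes evenly when the query returns more results than it can display. That stride calculation can be isolated in plain C# (`ResultSampler` is a hypothetical helper, shown only to illustrate the sampling; the real code lives in OnInputClicked):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch (assumption): when there are more query results than
// boxes we can display, step through the results with a stride of
// ceil(resultCount / maxDisplay) so the picks are spread evenly.
public static class ResultSampler
{
    public static List<int> PickIndices(int resultCount, int maxDisplay)
    {
        var stride = (int)Math.Ceiling(resultCount / (double)maxDisplay);
        var picked = new List<int>();
        for (var i = 0; i < resultCount; i += stride)
            picked.Add(i);   // index into the topology results array
        return picked;
    }
}
```

With QueryResultMaxCount = 512 and DisplayResultMaxCount = 32, a full result set yields a stride of 16, so exactly 32 boxes are animated.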

Sample code

using System.Collections.Generic;
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using HoloToolkit.Unity;
public class ScannerAnalyzer : MonoBehaviour, IInputClickHandler
{
const int QueryResultMaxCount = 512;
const int DisplayResultMaxCount = 32;
private List<AnimatedBox> _lineBoxList = new List<AnimatedBox>();
private SpatialUnderstandingDllTopology.TopologyResult[] _resultsTopology = new SpatialUnderstandingDllTopology.TopologyResult[QueryResultMaxCount];
private LineData _lineData = new LineData();
private string _spaceQueryDescription;
public TextMesh DebugDisplay;
public static bool AnalyzerEnabled;
public Material MaterialLine;
void Update()
{
if (!AnalyzerEnabled) return;
if (DebugDisplay != null)
DebugDisplay.text = _spaceQueryDescription;
// Queries
if (SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
{
//Update_Queries();
}
// Lines: Begin
LineDraw_Begin();
// Drawers
var needsUpdate = false;
needsUpdate |= Draw_LineBoxList();
// Lines: Finish up
LineDraw_End(needsUpdate);
}
public void OnInputClicked(InputClickedEventData eventData)
{
if (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
{
return;
}
var minWidthOfWallSpace = 1f;
var minHeightAboveFloor = 1f;
// Query
var resultsTopologyPtr = SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(_resultsTopology);
var locationCount = SpatialUnderstandingDllTopology.QueryTopology_FindPositionsOnFloor(
minWidthOfWallSpace, minHeightAboveFloor,
_resultsTopology.Length, resultsTopologyPtr);
// Output
var visDesc = "Find Positions On Floor";
var boxFullDims = new Vector3(minWidthOfWallSpace, 0.025f, minHeightAboveFloor);
var color = Color.red;
ClearGeometry();
// Add the line boxes (we may have more results than boxes – pick evenly across the results in that case)
var lineInc = Mathf.CeilToInt((float)locationCount / (float)DisplayResultMaxCount);
var boxesDisplayed = 0;
for (var i = 0; i < locationCount; i += lineInc)
{
var timeDelay = (float)_lineBoxList.Count * AnimatedBox.DelayPerItem;
_lineBoxList.Add(
new AnimatedBox(
timeDelay,
_resultsTopology[i].position,
Quaternion.LookRotation(_resultsTopology[i].normal, Vector3.up),
color,
boxFullDims * 0.5f)
);
++boxesDisplayed;
}
// Vis description
if (locationCount == boxesDisplayed)
{
_spaceQueryDescription = string.Format("{0} ({1})", visDesc, locationCount);
}
else
{
_spaceQueryDescription = string.Format("{0} (found={1}, displayed={2})", visDesc, locationCount, boxesDisplayed);
}
}
#region Line and Box Drawing
protected void LineDraw_Begin()
{
_lineData.LineIndex = 0;
for (var i = 0; i < _lineData.Lines.Count; ++i)
{
_lineData.Lines[i].isValid = false;
}
}
private bool Draw_LineBoxList()
{
var needsUpdate = false;
for (var i = 0; i < _lineBoxList.Count; ++i)
{
needsUpdate |= Draw_AnimatedBox(_lineBoxList[i]);
}
return needsUpdate;
}
protected void LineDraw_End(bool needsUpdate)
{
if (_lineData == null)
{
return;
}
// Check if we have any not dirty
var i = 0;
while (i < _lineData.Lines.Count)
{
if (!_lineData.Lines[i].isValid)
{
needsUpdate = true;
_lineData.Lines.RemoveAt(i);
continue;
}
++i;
}
// Do the update (if needed)
if (needsUpdate)
{
Lines_LineDataToMesh();
}
}
private void Lines_LineDataToMesh()
{
// Alloc them up
var verts = new Vector3[_lineData.Lines.Count * 8];
var tris = new int[_lineData.Lines.Count * 12 * 3];
var colors = new Color[verts.Length];
// Build the data
for (var i = 0; i < _lineData.Lines.Count; ++i)
{
// Base index calcs
var vert = i * 8;
var v0 = vert;
var tri = i * 12 * 3;
// Setup
var dirUnit = (_lineData.Lines[i].p1 - _lineData.Lines[i].p0).normalized;
var normX = Vector3.Cross((Mathf.Abs(dirUnit.y) >= 0.99f) ? Vector3.right : Vector3.up, dirUnit).normalized;
var normy = Vector3.Cross(normX, dirUnit);
// Verts
verts[vert] = _lineData.Lines[i].p0 + normX * _lineData.Lines[i].lineWidth + normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c0; ++vert;
verts[vert] = _lineData.Lines[i].p0 - normX * _lineData.Lines[i].lineWidth + normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c0; ++vert;
verts[vert] = _lineData.Lines[i].p0 - normX * _lineData.Lines[i].lineWidth - normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c0; ++vert;
verts[vert] = _lineData.Lines[i].p0 + normX * _lineData.Lines[i].lineWidth - normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c0; ++vert;
verts[vert] = _lineData.Lines[i].p1 + normX * _lineData.Lines[i].lineWidth + normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c1; ++vert;
verts[vert] = _lineData.Lines[i].p1 - normX * _lineData.Lines[i].lineWidth + normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c1; ++vert;
verts[vert] = _lineData.Lines[i].p1 - normX * _lineData.Lines[i].lineWidth - normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c1; ++vert;
verts[vert] = _lineData.Lines[i].p1 + normX * _lineData.Lines[i].lineWidth - normy * _lineData.Lines[i].lineWidth; colors[vert] = _lineData.Lines[i].c1; ++vert;
// Indices
tris[tri + 0] = (v0 + 0); tris[tri + 1] = (v0 + 5); tris[tri + 2] = (v0 + 4); tri += 3;
tris[tri + 0] = (v0 + 1); tris[tri + 1] = (v0 + 5); tris[tri + 2] = (v0 + 0); tri += 3;
tris[tri + 0] = (v0 + 1); tris[tri + 1] = (v0 + 6); tris[tri + 2] = (v0 + 5); tri += 3;
tris[tri + 0] = (v0 + 2); tris[tri + 1] = (v0 + 6); tris[tri + 2] = (v0 + 1); tri += 3;
tris[tri + 0] = (v0 + 2); tris[tri + 1] = (v0 + 7); tris[tri + 2] = (v0 + 6); tri += 3;
tris[tri + 0] = (v0 + 3); tris[tri + 1] = (v0 + 7); tris[tri + 2] = (v0 + 2); tri += 3;
tris[tri + 0] = (v0 + 3); tris[tri + 1] = (v0 + 7); tris[tri + 2] = (v0 + 4); tri += 3;
tris[tri + 0] = (v0 + 3); tris[tri + 1] = (v0 + 4); tris[tri + 2] = (v0 + 0); tri += 3;
tris[tri + 0] = (v0 + 0); tris[tri + 1] = (v0 + 3); tris[tri + 2] = (v0 + 2); tri += 3;
tris[tri + 0] = (v0 + 0); tris[tri + 1] = (v0 + 2); tris[tri + 2] = (v0 + 1); tri += 3;
tris[tri + 0] = (v0 + 5); tris[tri + 1] = (v0 + 6); tris[tri + 2] = (v0 + 7); tri += 3;
tris[tri + 0] = (v0 + 5); tris[tri + 1] = (v0 + 7); tris[tri + 2] = (v0 + 4); tri += 3;
}
// Create up the components
if (_lineData.Renderer == null)
{
_lineData.Renderer = gameObject.AddComponent<MeshRenderer>() ??
gameObject.GetComponent<Renderer>() as MeshRenderer;
_lineData.Renderer.material = MaterialLine;
}
if (_lineData.Filter == null)
{
_lineData.Filter = gameObject.AddComponent<MeshFilter>() ?? gameObject.GetComponent<MeshFilter>();
}
// Create or clear the mesh
Mesh mesh;
if (_lineData.Filter.mesh != null)
{
mesh = _lineData.Filter.mesh;
mesh.Clear();
}
else
{
mesh = new Mesh { name = "Lines_LineDataToMesh" };
}
// Set them into the mesh
mesh.vertices = verts;
mesh.triangles = tris;
mesh.colors = colors;
mesh.RecalculateBounds();
mesh.RecalculateNormals();
_lineData.Filter.mesh = mesh;
// If no tris, hide it
_lineData.Renderer.enabled = (_lineData.Lines.Count != 0);
// Line index reset
_lineData.LineIndex = 0;
}
protected bool Draw_AnimatedBox(AnimatedBox box)
{
// Update the time
if (!box.Update(Time.deltaTime))
{
return false;
}
if (box.IsAnimationComplete)
{
// Animation is done, just pass through
return Draw_Box(box.Center, box.Rotation, box.Color, box.HalfSize, box.LineWidth);
}
// Draw it using the current anim state
return Draw_Box(
box.AnimPosition.Evaluate(box.Time),
box.Rotation * Quaternion.AngleAxis(360.0f * box.AnimRotation.Evaluate(box.Time), Vector3.up),
box.Color,
box.HalfSize * box.AnimScale.Evaluate(box.Time),
box.LineWidth);
}
protected bool Draw_Box(Vector3 center, Quaternion rotation, Color color, Vector3 halfSize, float lineWidth = Line.DefaultLineWidth)
{
var needsUpdate = false;
var basisX = rotation * Vector3.right;
var basisY = rotation * Vector3.up;
var basisZ = rotation * Vector3.forward;
Vector3[] pts =
{
center + basisX * halfSize.x + basisY * halfSize.y + basisZ * halfSize.z,
center + basisX * halfSize.x + basisY * halfSize.y - basisZ * halfSize.z,
center - basisX * halfSize.x + basisY * halfSize.y - basisZ * halfSize.z,
center - basisX * halfSize.x + basisY * halfSize.y + basisZ * halfSize.z,
center + basisX * halfSize.x - basisY * halfSize.y + basisZ * halfSize.z,
center + basisX * halfSize.x - basisY * halfSize.y - basisZ * halfSize.z,
center - basisX * halfSize.x - basisY * halfSize.y - basisZ * halfSize.z,
center - basisX * halfSize.x - basisY * halfSize.y + basisZ * halfSize.z
};
// Bottom
needsUpdate |= Draw_Line(pts[0], pts[1], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[1], pts[2], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[2], pts[3], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[3], pts[0], color, color, lineWidth);
// Top
needsUpdate |= Draw_Line(pts[4], pts[5], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[5], pts[6], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[6], pts[7], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[7], pts[4], color, color, lineWidth);
// Vertical lines
needsUpdate |= Draw_Line(pts[0], pts[4], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[1], pts[5], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[2], pts[6], color, color, lineWidth);
needsUpdate |= Draw_Line(pts[3], pts[7], color, color, lineWidth);
return needsUpdate;
}
protected bool Draw_Line(Vector3 start, Vector3 end, Color colorStart, Color colorEnd, float lineWidth = Line.DefaultLineWidth)
{
// Create up a new line (unless it's already created)
while (_lineData.LineIndex >= _lineData.Lines.Count)
{
_lineData.Lines.Add(new Line());
}
// Set it
var needsUpdate = _lineData.Lines[_lineData.LineIndex].Set_IfDifferent(transform.InverseTransformPoint(start), transform.InverseTransformPoint(end), colorStart, colorEnd, lineWidth);
// Inc out count
++_lineData.LineIndex;
return needsUpdate;
}
public void ClearGeometry(bool clearAll = true)
{
_lineBoxList = new List<AnimatedBox>();
}
#endregion
}

You can download the complete code from GitHub (link).

Greetings @ Toronto

El Bruno


#Hololens – Spatial Understanding vs Spatial Mapping, and a step-by-step on how to use it

Hi !

HoloToolkit has 2 main assets to perform a Spatial Mapping scan in an environment with the Hololens.

  • Spatial Mapping
  • Spatial Understanding

Spatial Mapping is the asset we use by default, and it is the one which gives us basic control over the scan process performed by the Hololens. For example, once we add this element to a Unity3D project and enable drawing of the Mesh, we can see the scan of the environment in real time while we are using it.

Spatial Understanding allows us to go one step further and perform more complex tasks. For example:

  • Define the start and end time for the scan process. This is really useful, for example, when we need to perform an initial scan and we know that the environment will not change afterwards. If we disable the real-time scan, we free up Hololens resources for other tasks
  • Perform simple queries on different elements in our point of view. For example, search for flat surfaces on a wall or on the floor to position holograms
  • Perform more complex queries, to search for items such as a chair, a guitar, etc. I have not used this feature yet, but I have it on my to-do list for when I need it

Important: whenever we use Spatial Understanding, we must also add the Spatial Mapping prefab; if we don’t do this, we will see some very funny errors.

Personally, I think the 2 best ways to learn how Spatial Understanding works are the following:

  • Read Jeff Evertt’s article Case study – Expanding the spatial mapping capabilities of HoloLens, where he comments on how they created this asset for specific scenarios that were not contemplated in the original HoloToolkit Spatial Mapping.
  • Review the sample Scene located in “HoloToolkit-Unity/Assets/HoloToolkit-Examples/SpatialUnderstanding/SpatialUnderstanding-FeatureOverview/Scenes/“.
    It uses Spatial Understanding to perform an initial scan of the environment and then presents some simple query options.

If you want to use it in a local project, the next tutorial may help you. The end result is an App that shows a real-time HUD with the status of the scan and, at the end, disables the visualization of the Mesh.

Easy steps.

  1. Create a 3D project in Unity3D
  2. Import HoloToolkit package
  3. Configure project to
    1. Support HoloLens projects (UWP, VR, etc)
    2. Enable the Spatial Mapping feature
  4. Clean Scene elements
  5. Add
    1. Hololens Camera
    2. Cursor With Feedback
    3. Input Manager
    4. Spatial Mapping
    5. Spatial Understanding
    6. FPS Display
  6. Add Empty element
    1. Rename to CodeManagers
    2. Add new C# Script named “GameStartScanner.cs”

The final project should look similar to this one:

Clipboard02

You can download the complete source code from GitHub (link).

The script for the new class is below; it is worth checking a couple of interesting notes about it:

  • The MinAreaForComplete, MinHorizAreaForComplete and MinWallAreaForComplete properties define the minimum area to be scanned for the process to be considered valid
  • The Start() function starts the scan process with the operation SpatialUnderstanding.Instance.RequestBeginScanning(); and we also subscribe to scan state changes
  • The DoesScanMeetMinBarForCompletion property is the one responsible for validating whether the scan process can be finished
  • In Update(), 2 major operations are performed
    • The status of the scan process is displayed in a TextMesh
    • If the minimal elements have been found, we finish the scan process and disable the visual scan feedback
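The completion rule can be isolated as plain C#. This is a sketch assuming the thresholds above; `ScanCompletion` is a hypothetical name, and in the script below the real check (reading the playspace stats from the Understanding DLL) lives in DoesScanMeetMinBarForCompletion:

```csharp
// Minimal sketch (assumption): the scan is considered good enough when
// any one of the scanned areas exceeds its minimum threshold. The default
// values mirror the public properties of GameStartScanner.
public static class ScanCompletion
{
    public static bool MeetsMinBar(float totalArea, float horizArea, float wallArea,
                                   float minTotal = 30.0f, float minHoriz = 20.0f, float minWall = 5.0f)
    {
        return totalArea > minTotal || horizArea > minHoriz || wallArea > minWall;
    }
}
```

Note that a single threshold is enough: a big enough total, horizontal, or wall area each completes the scan on its own.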

Sample Code

using System;
using HoloToolkit.Unity;
using HoloToolkit.Unity.SpatialMapping;
using UnityEngine;
using UnityEngine.UI;
public class GameStartScanner : MonoBehaviour
{
public float MinAreaForComplete = 30.0f;
public float MinHorizAreaForComplete = 20.0f;
public float MinWallAreaForComplete = 5.0f;
private bool _scanComplete = false;
public TextMesh DebugDisplay;
void Start()
{
SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
SpatialUnderstanding.Instance.RequestBeginScanning();
}
private void Instance_ScanStateChanged()
{
if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
&& SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
{
_scanComplete = true;
}
}
void Update()
{
if (DebugDisplay != null)
DebugDisplay.text = PrimaryText;
if (_scanComplete || !DoesScanMeetMinBarForCompletion) return;
SpatialUnderstanding.Instance.RequestFinishScan();
_scanComplete = true;
// hide mesh
var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
customMesh.DrawProcessedMesh = false;
SpatialMappingManager.Instance.DrawVisualMeshes = false;
}
public string PrimaryText
{
get
{
// Scan state
if (SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
{
switch (SpatialUnderstanding.Instance.ScanState)
{
case SpatialUnderstanding.ScanStates.Scanning:
// Get the scan stats
IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
{
return "playspace stats query failed";
}
// The stats tell us if we could potentially finish
if (DoesScanMeetMinBarForCompletion)
{
return "When ready, air tap to finalize your playspace";
}
return @"Bruno it's time to walk !
Move around and scan in your playspace";
case SpatialUnderstanding.ScanStates.Finishing:
return "Finalizing scan (please wait)";
case SpatialUnderstanding.ScanStates.Done:
return "Scan complete – Now go back to work!";
default:
return @"I'm working,
ScanState = " + SpatialUnderstanding.Instance.ScanState.ToString();
}
}
return "";
}
}
public bool DoesScanMeetMinBarForCompletion
{
get
{
// Only allow this when we are actually scanning
if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
(!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
{
return false;
}
// Query the current playspace stats
var statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
{
return false;
}
var stats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();
// Check our preset requirements
if ((stats.TotalSurfaceArea > MinAreaForComplete) ||
(stats.HorizSurfaceArea > MinHorizAreaForComplete) ||
(stats.WallSurfaceArea > MinWallAreaForComplete))
{
return true;
}
return false;
}
}
}

Greetings @ Toronto

El Bruno


#Hololens – What to do when your app is not a 3D App!

Hi !

Today is a quick post on Unity3D and Hololens. I’ve been dealing with some nice and heavy C# bot code during the past days, so it’s time to go back to Hololens. A couple of days ago I found that my Apps were not being deployed to the Hololens as Unity VR Apps; instead, they were 2D UWP Apps. I usually leave all the project configuration to HoloToolkit.

I’ve already written about this; however, this is a good reminder of how to do it. Go to the menu “HoloToolkit // Configure // Apply Hololens Settings”, and that’s it: this will change the project settings to those of a Unity3D Hololens-enabled project. It changes the project target to UWP, enables the use of Direct3D, sets “Fastest” as the default Quality, and performs some other internal actions.

Clipboard03

Once we apply this change, we need to restart Unity3D, and then we can see all of these changes in the “Build Settings” window.

Clipboard04

One of the important steps performed by this menu is adding the “Windows Holographic SDK” to the virtual reality SDKs used by our project. In my current version this was not part of the project settings process, and that’s why I found a 2D UWP version of my app on the device.

Clipboard05

It may be related to my version of Unity3D or of the HoloToolkit. However, the fix is easy: add “Windows Holographic” and that’s it!

Clipboard07

Greetings @ Toronto

El Bruno
