#Podcast – NTN 74 – #Xamarin, #Apple M1, Lidar on the iPhone 12, Cloud and much more!

Hi!

Having Javier and Yeray on is always a pleasure, and also a box of surprises. The main topic was Xamarin, and along the way we got into:

  • The Lidar in the new iPhone 12
  • Development environments for macOS
  • The new Apple M1, performance and benchmarks, from a programmer's point of view
  • And the bombshell news about Amazon Cloud and Apple

More than an hour with great friends!

Speakers

  • Javier Suárez Ruiz is a Senior Software Engineer at Microsoft (LinkedIn)
  • Josué Yeray Julián Ferreiro is CTO and Co-Founder at DevsDNA (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent DevOps implementation consultant (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)

Go to download

Happy coding!

Greetings

El Bruno

#Podcast – NTN 72 – On #MultiCloud scenarios, Centers of Excellence in the #Cloud, and much more.

Hi!

Today Richard joins us, in one of the episodes where I have taken the most notes. The amount of information and experience he shared was fantastic.

Apologies in advance: during the first 6 or 7 minutes we have some sound problems. The good news is that we now know how to balance the load between Amazon, Google, and Microsoft to get better recording quality.

Another great episode with new friends!

Speakers

  • Richard Steven Padilla Montaño is Director of Cloud Architecture at Bankia (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent DevOps implementation consultant (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)

Go to download

Happy coding!

Greetings

El Bruno

#RPi – Some #RaspberryPi screen options and how to quickly find your device IP with #RaspberryPi Finder from @Adafruit

Hi !

Today’s post is about my experience doing presentations and demos with a Raspberry Pi.

Doing demos with a Raspberry Pi is amazing. I really enjoy sharing some of the amazing stuff we can do with the device, and usually there are one or two people in the audience who can share even better Raspberry Pi experiences.

The only issue you find in these scenarios is getting the device onto an internet connection in an easy way. Sometimes a standard network cable between your laptop and the device is good enough; however, there are other scenarios where connecting to a network is more complicated. For example: the Raspberry Pi connects automatically to a WiFi network, and you need to find its IP address to interact with the device.

These days, I ordered a Raspberry Pi 3 case that includes a 3.5-inch TFT screen with touch capabilities. I hope that using this plus a Bluetooth keyboard will make my life easier (see references).

Sometimes you can’t connect your device to an HDMI screen, so a good option is to bring your own 7-inch screen for the device. For me this is not optimal, because I need to handle a lot of cables, but it works every time!

The following image shows my typical hotel bedroom when I’m speaking and using a Raspberry Pi: laptop, Raspberry Pi, Bluetooth keyboard, a mouse, the 7-inch screen, and more.

Finally, if your device is connected to the same wireless network but you don’t know its IP address, you may want to use a tool like Adafruit Raspberry Pi Finder. It only takes a couple of clicks to find one or more devices on your network.
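
If you prefer the command line to a GUI tool, the same lookup can be sketched in a few lines of Python. This is a minimal sketch that assumes the device still advertises the default `raspberrypi.local` mDNS hostname (an assumption; yours may have been renamed):

```python
import socket
from typing import Optional

def resolve_host(hostname: str) -> Optional[str]:
    """Return the IPv4 address for hostname, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# The default hostname on Raspberry Pi OS; change it if you renamed yours.
ip = resolve_host("raspberrypi.local")
print(ip if ip else "Device not found - try a scanner like Raspberry Pi Finder")
```

This only works when the Pi's mDNS service (Avahi) is running; when it is not, a network scanner such as Raspberry Pi Finder is still the easier option.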

I’ll leave this here, and maybe in the near future I’ll update this post with my experiences using the small case with the TFT screen.

Happy coding!

Greetings @ Burlington

El Bruno

References

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (5 of N)

Hi!

Now that I have written about how to use C# with Visual Studio 2017 to connect an Alexa Skill with an endpoint in Azure, I think it’s time to share some content which should have been at the beginning of the series:

How to design and plan user interactions with Alexa

Of course, I will not write about it myself, because the Alexa Developers account has a great series of short videos on the topic. Here is the list of videos:

How to Choose Projects for Voice – Amazon Alexa Voice Design Guide

Design for Voice in 3 Steps – Amazon Alexa Voice Design Guide

How to Understand Intents? – Amazon Alexa Voice Design Guide

How To Interact with Alexa – Amazon Alexa Voice Design Guide

And if we want something more developer-focused, the next series of 6 episodes is what we should add to our list of videos:

Developing Alexa Skills – Hello, Alexa! (Part 1 of 6)

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (5 of N)

Hi!

Now that I have written about how to use C# with Visual Studio 2017 to connect an Alexa Skill with an endpoint in Azure, I think it’s time to write what should have been the 1st post:

How user interactions with Alexa should be designed and planned

That said, I will not write about it myself, since the Alexa Developers account has a great series of short videos on the topic. Here is the list:

How to Choose Projects for Voice – Amazon Alexa Voice Design Guide

Design for Voice in 3 Steps – Amazon Alexa Voice Design Guide

How to Understand Intents? – Amazon Alexa Voice Design Guide

How To Interact with Alexa – Amazon Alexa Voice Design Guide

And if we want something more developer-focused, the next series of 6 episodes is what we should add to our list of videos:

Developing Alexa Skills – Hello, Alexa! (Part 1 of 6)

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (4 of N)

Hi!

In the last post I described how to test a Skill using the traditional interface provided in the Alexa Skill editor. Well, another option, currently in Beta, is the [Test Simulator].

I1

This new interface allows us to perform a couple of interesting actions. On one hand, it lets us enter the text we want to test our Skill with, the same way we would with an Amazon Echo device.

For example, if we ask [Bruno Events] for [Help], it will return the following results:

I2

In this simulator we can listen to the same output we would get on an Amazon Echo device. The result for the HoloLens events question is similar:

I3

And finally, we can try the Speech Recognition features: using our computer’s microphone, we can test our Skill directly with our voice.

I4

This allows us to test our Skill in the most natural way possible, even without an Amazon Echo device.

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (4 of N)

Hi!

In the last post I described how to test a Skill using the traditional interface provided in the Alexa Skill editor. Well, another option, currently in Beta, is the [Test Simulator].

I1

This new interface allows us to perform a couple of interesting actions. On one hand, it lets us enter the text we want to test our Skill with, the same way we would with an Amazon Echo device.

For example, if we ask [Bruno Events] for [Help], it will return the following:

I2

The interesting thing about this simulator is that we can listen to the same output we would get on an Amazon Echo device. The result for the HoloLens events question is similar:

I3

And finally, we can test the Speech Recognition features: using our computer’s microphone, we can test our Skill directly with our voice.

I4

This allows us to test our Skill in the most natural way possible, even without an Amazon Echo device.

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (3 of N)

Hi !

In the previous post I described the steps needed to create a website that interacts with an Alexa Skill. The next step is to publish the website to Azure. There are many tutorials on how to do this, so I will not repeat the steps here (see references).

In my case I published my Alexa Skill website at the following URL:

http://alexaskillbrunoevents.azurewebsites.net/

I1

It is important to remember the URL, since it is the one we will use in our Skill’s configuration.

Now it’s time to go back to the Amazon Developer Portal and edit the Skill we created in the 1st post. In the [Configuration] section we must define the location of the interaction endpoint, which in this case is our Azure WebApp.

I2

Important: by default the URL we get from Azure is an HTTP address; we must change it to HTTPS to use it in an Amazon Alexa Skill.

The other values in this section are left at their defaults, and we move on to the [SSL Certificate] section. Azure sites already have a certificate for secure browsing, so we must select the 2nd option.

I3

And finally we get to the testing section, [Test]. There we can see, in JSON mode, the different messages sent from Alexa to our endpoint. For example, if we invoke something simple like “help” we get the following result:

I4

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called AMAZON.HelpIntent.",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called AMAZON.HelpIntent."
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent",
    "EventTopic": "hololens"
  }
}
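
To make the payload structure concrete, here is a small Python sketch that parses the response shown above and pulls out the spoken text and the intent history. The field names come from the JSON above; the parsing code itself is just an illustration, not part of any Alexa SDK:

```python
import json

# The response shown above, abridged to the fields we inspect.
raw = """
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called AMAZON.HelpIntent.",
      "type": "PlainText"
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent",
    "EventTopic": "hololens"
  }
}
"""

payload = json.loads(raw)

# What Alexa will say out loud for this turn.
speech = payload["response"]["outputSpeech"]["text"]

# sessionAttributes carries state between turns; this skill accumulates
# the intents invoked so far as a ';'-separated list.
intents = payload["sessionAttributes"]["intentSequence"].split(";")

print(speech)       # Intent called AMAZON.HelpIntent.
print(intents[-1])  # AMAZON.HelpIntent
```

Note how the session attributes let a stateless HTTP backend remember earlier turns: the values are echoed back by Alexa on every request of the same session.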

If instead we invoke something more complicated that uses Slots, we will see a different result. In this example, I ask Alexa for events about HoloLens:

I5

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , ",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , "
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent",
    "EventTopic": "hololens"
  }
}

Well, this gives an idea of how to connect an Azure API App as the endpoint of an Alexa Skill. In the following posts we will see more details on how to add extra functionality to the backend and how to use the new Alexa test environment.

Happy Coding!

Greetings @ Toronto

El Bruno

References

#Tutorial – How to build an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (3 of N)

Hi!

In the previous post I described the steps needed to create a website that interacts with an Alexa Skill. The next step is to publish it to Azure. There are many tutorials on how to do this, so I will not repeat the steps here (see references).

In my case I published the site at the following URL:

http://alexaskillbrunoevents.azurewebsites.net/

I1

It is important to remember the URL, since it is the one we will use in our Skill’s configuration.

Now it’s time to open the Amazon Developer Portal again and edit the Skill we created in the 1st post. In the [Configuration] section we must define the location of the interaction endpoint, which in this case is our Azure site.

I2

Important: by default the site is published over HTTP; we must change it to HTTPS.

We leave the other values in this section at their defaults and move on to the [SSL Certificate] section. Azure sites already have a certificate for secure browsing, so we must select the 2nd option.

I3

And finally we get to the testing section, [Test]. There we can see, in JSON mode, the different messages sent from Alexa to our endpoint. For example, if we invoke something simple like “help” we get the following result:

I4

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called AMAZON.HelpIntent.",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called AMAZON.HelpIntent."
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent",
    "EventTopic": "hololens"
  }
}

If instead we invoke something more complicated that uses Slots, we will see a different result. In this example, I ask Alexa for events about HoloLens:

I5

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , ",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , "
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent",
    "EventTopic": "hololens"
  }
}

Well, this gives an idea of how to connect an Azure API App as the endpoint of an Alexa Skill. In the following posts we will see more details on how to add extra functionality to the backend and how to use the new Alexa test environment.

Happy Coding!

Greetings @ Toronto

El Bruno

References

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (2 of N)

Hi!

Well, in today’s post we will see how to create an Azure API App, which we will then use as the backend for the Alexa Skill we created in the previous post.

Tommy describes in an excellent way how to create a website and prepare it to work with an Alexa Skill in this post (link). That project uses the AlexaSkillsKit.NET NuGet package to process messages with Amazon Alexa.

I have created a Visual Studio 2017 template that can be downloaded from here (link) and that lets us create a project already configured to work with Alexa Skills.

Note: the quick way to import this template is to copy the [WebApi Alexa Skill.zip] file to the Visual Studio 2017 project templates directory, usually [%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C#].

We create a new project using the imported template.

I1.jpg

We compile the project to refresh the NuGet references, update the NuGet packages, and then we can start working.

i2.jpg

The 2 main elements in the project are:

  • Controllers/AlexaController.cs. This is a standard ApiController which handles the Alexa requests. It supports 2 HTTP verbs:
    • HTTP GET, mostly for testing
    • HTTP POST, which processes the Alexa requests with the [AlexaResponseAsync] class
  • Alexa/AlexaResponseAsync.cs. All the business logic lives here.

So, in the Alexa response class we will make some basic changes:

  • Line 7: change the message used on the skill activation event
  • Lines 10 to 22: create a message that includes the Intent name and the slots used

Something similar to this

namespace AlexaSkillBrunoEvents.Alexa
{
    public class AlexaSkillBrunoEvents : SpeechletAsync
    {
        public override Task<SpeechletResponse> OnLaunchAsync(LaunchRequest launchRequest, Session session)
        {
            return Task.FromResult(CompileResponse("Launch Event for Bruno Events"));
        }

        public async override Task<SpeechletResponse> OnIntentAsync(IntentRequest intentRequest, Session session)
        {
            // Echo the intent name and, when slots are present, their keys
            var message = new StringBuilder();
            message.Append($"Intent called {intentRequest.Intent}.");
            if (intentRequest.Intent.Slots.Count > 0)
            {
                message.Append($" Slots used. ");
                foreach (var intentSlot in intentRequest.Intent.Slots)
                    message.Append($"{intentSlot.Key} ");
            }
            var response = CompileResponse(message.ToString());
            return await Task.FromResult(response);
        }

        public override Task OnSessionStartedAsync(SessionStartedRequest sessionStartedRequest, Session session)
        {
            return Task.FromResult(0);
        }

        public override Task OnSessionEndedAsync(SessionEndedRequest sessionEndedRequest, Session session)
        {
            return Task.FromResult(0);
        }

        public static SpeechletResponse CompileResponse(string output)
        {
            // Build a plain-text response and close the session
            var response = new SpeechletResponse
            {
                OutputSpeech = new PlainTextOutputSpeech { Text = output },
                ShouldEndSession = true
            };
            return response;
        }
    }
}
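
For readers following along without Visual Studio, the message-building logic of `OnIntentAsync` above can be sketched in Python. This is a hypothetical stand-alone helper, not part of any Alexa SDK:

```python
def compile_message(intent_name: str, slots: dict) -> str:
    """Mirror of the C# OnIntentAsync logic: echo the intent name and,
    when slots are present, list the slot keys."""
    message = f"Intent called {intent_name}."
    if slots:
        message += " Slots used. "
        for key in slots:
            message += f"{key} "
    return message

print(compile_message("AMAZON.HelpIntent", {}))
# Intent called AMAZON.HelpIntent.
print(compile_message("GetEventInformationIntent", {"EventTopic": "hololens"}))
# Intent called GetEventInformationIntent. Slots used. EventTopic
# (with a trailing space, exactly as the C# StringBuilder produces)
```

This is the text that ends up inside the outputSpeech field of the JSON responses we saw in the testing section.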

That covers the creation of the web project. In the next post, we will see how to publish this website and connect it to the Alexa Skill.

Happy Coding!

Greetings @ Toronto

El Bruno

References