Yesterday I realized that, while writing a couple of posts about the Microsoft Bot Framework, I never wrote about LUIS (Language Understanding Intelligent Service), so that’s some work for the future.
But today is all about CRIS, the Custom Recognition Intelligent Service. The service is part of the Speech Services in Cognitive Services, and it has been opened as a public preview on Azure. You only need an Azure account to test it.
Please don’t think of CRIS as a simple speech-to-text service; it is much more. CRIS has several components, and the two most important are the acoustic model and the language model. These two models have been optimized for common usage scenarios, such as interacting with Cortana on a smartphone, tablet, or PC, searching the web by voice, or sending text messages.
The language model allows us to add definitions within the context of the app that will use the service, making recognition more precise. The acoustic model, on the other hand, enables an app to do a better job recognizing speech in specific environments or with particular populations of users. For example, in a voice-enabled app designed for use in a warehouse or factory, a custom acoustic model can recognize speech more precisely in the presence of the background sounds found in these environments. In both cases, for the models to be accurate, a fairly extensive “training” effort is necessary.
The best way to start is to navigate to the CRIS homepage and create a language definition to work with. In addition, the samples on GitHub show how to use the service with a simple model for a biology app.
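To give a rough idea of what a client call might look like once a custom model is deployed, here is a minimal Python sketch that builds (but does not send) a speech-to-text request. The endpoint URL, subscription key, header names, and query parameters are assumptions based on the Azure Speech REST conventions, not something from this post — the real values come from the CRIS portal after you deploy your custom acoustic and language models.

```python
import urllib.request

# Placeholder values: the real endpoint URL and subscription key come from
# the CRIS portal once a custom model deployment is created.
ENDPOINT = "https://<your-region>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
SUBSCRIPTION_KEY = "<your-subscription-key>"


def build_recognition_request(audio_bytes: bytes, language: str = "en-US") -> urllib.request.Request:
    """Prepare (but do not send) a speech-to-text POST request."""
    url = f"{ENDPOINT}?language={language}&format=detailed"
    request = urllib.request.Request(url, data=audio_bytes, method="POST")
    request.add_header("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY)
    # 16 kHz, 16-bit mono PCM WAV is a commonly accepted input format.
    request.add_header("Content-Type", "audio/wav; codecs=audio/pcm; samplerate=16000")
    return request


# Actually sending it (urllib.request.urlopen(request)) requires a valid key
# and real audio, so here we only build the request and inspect it.
request = build_recognition_request(b"\x00" * 32)
print(request.get_method(), request.full_url)
```

The response from this kind of endpoint is JSON containing the recognized text plus confidence details, which is where the improvements from the custom acoustic and language models show up.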
Greetings @ Calgary