Hi!
3rd post, to achieve a simple goal:
Use the Wio Terminal microphone to recognize wake-up words and open or close the Azure IoT Door 🚪

So, let’s start.
Import samples and setup Arduino IDE
In my previous post I explained how to record audio data and how to train the model in Edge Impulse. Now it’s time to deploy and run the model in the Wio Terminal.
Note: check my 1st steps in Wio Terminal post to set up your development environment for Arduino and the Wio Terminal.
Let’s export the model.

After importing the model into the Arduino IDE, we can check the examples showing how to use the model.

Using the Samples in Wio Terminal
As you can see in the previous image, the samples include two specific demos that work with a microphone. However, we can't use those samples as-is on the Wio Terminal.
These demos target the Arduino Nano 33 BLE, which captures audio with PDM. The Wio Terminal instead uses a Direct Memory Access (DMA) controller to get data from the ADC. The full explanation would be long, so we just need to change some code.
There is an article in the official Wio Terminal Wiki with the necessary steps to change the exported code:
Deploy to Wio Terminal
This saved me from writing an extra article!
Running the samples
Once we have applied the changes, we can:
- Deploy the model to the Wio Terminal
- Open the COM Monitor and check the recognized audio
When we start speaking to the Wio Terminal, a continuous loop analyzes the audio and performs the wake-up word detection.

Let’s take a look at the loop() code.
- Invoke the microphone inference function to record audio
- Run the classifier using the recorded buffer as input
- Print the predictions
void loop()
{
    ei_printf("Recording...\n");

    // Record a window of audio from the microphone
    bool m = microphone_inference_record();
    if (!m) {
        ei_printf("ERR: Failed to record audio...\n");
        return;
    }
    ei_printf("Recording done\n");

    // Wrap the recorded buffer in a signal_t for the classifier
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &microphone_audio_signal_get_data;

    // Run the Edge Impulse classifier on the recorded signal
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR r = run_classifier(&signal, &result, debug_nn);
    if (r != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", r);
        return;
    }

    // Print the predictions
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n",
            result.classification[ix].label, result.classification[ix].value);
    }
}
The full code is available here:
Conclusion
In the output window we can check that the average time to run the audio detection is 10 ms. Super fast!
Happy coding!
Greetings
El Bruno
More posts on my blog, ElBruno.com.
More info at https://beacons.ai/elbruno
References
- Download Arduino IDE
- Seeed Wiki – Histogram
- Seeed Wiki – Loading Images
- Seeed Wiki – Installing the File System Library
- Seeed Wiki – Wifi Connectivity
- GitHub – Arduino_JSON
- Wikipedia – X BitMap
- Edge Impulse
- CodeCraft
Wio Terminal – Posts to interact with a Digital Twin Door 🚪 with Azure IoT ☁️ and Azure Functions
- Convert and use images on the device
- 1st steps, developer steps
- Buttons and Charts time
- Connecting to Wifi 📶, display local IP and get ready for Azure ☁️ scenarios
- Getting JSON data from an Azure ☁️ Function
- Parsing JSON data from an Azure ☁️ Function
- Display a Digital Twin Door 🚪 state using XBitmap with Azure IoT ☁️
- Display a countdown progress bar for the next Azure IoT ☁️ refresh data call
- Open and close the Digital Twin Door 🚪 using the Wio Terminal Buttons
- Training an audio recognition module. Record audio samples for training
- Training an audio recognition module. Edge Impulse for Arduino step-by-step and optimizations
- Training an audio recognition module. Running the model on the device
- Playing sound on the open and close events from the Digital Twin Door 🚪 (coming soon)