#WioTerminal – Training an 🗣️ audio recognition module. Running the model on the Device ⏯️

Hi!

This is the 3rd post, to achieve a simple goal:

Use the Wio Terminal microphone to recognize wake-up words and open or close the Azure IoT Door 🚪

So, let’s start.

Import the samples and set up the Arduino IDE

In my previous post I explained how to record audio data and how to train the model in Edge Impulse. Now it’s time to deploy and run the model on the Wio Terminal.

Note: check my 1st steps in Wio Terminal to set up your development environment for Arduino and Wio.

Let’s export the model.

Edge Impulse: export the audio project as an Arduino library

And after importing the model to Arduino IDE, we can check the examples on how to use the model.

Samples imported to the Arduino IDE

Using the Samples on the Wio Terminal

As you can see in the previous image, the samples include two specific demos for working with a microphone. However, we can’t use those samples as-is on the Wio Terminal.

These demos are built for the Arduino Nano 33 BLE and use the PDM library to capture audio. The Wio Terminal instead uses a Direct Memory Access (DMA) controller to get data from the ADC. The details would take long to explain, so we just need to change some code.
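For context, the Nano 33 BLE demos capture audio with the Arduino PDM library, roughly like this (a simplified sketch, not the full exported code; this is the capture layer that gets replaced with DMA + ADC code for the Wio Terminal):

#include <PDM.h>

static int16_t sampleBuffer[2048];
static volatile int samplesRead;

// Callback fired by the PDM library each time new audio data is available
static void pdm_data_ready_callback(void)
{
    int bytesAvailable = PDM.available();
    PDM.read(sampleBuffer, bytesAvailable);
    samplesRead = bytesAvailable / 2;   // 16-bit samples
}

static bool pdm_capture_start(void)
{
    PDM.onReceive(pdm_data_ready_callback);
    // 1 channel (mono) at 16 kHz, the sample rate the model was trained at
    return PDM.begin(1, 16000);
}

The Wio Terminal microphone is analog and has no PDM peripheral behind it, which is why this piece needs replacing.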

There’s an article in the official Wio Terminal Wiki with the necessary steps to change the exported code:

Deploy to Wio Terminal

This saved me from writing an extra article!

Running the samples

Once the changes are applied, we can:

  • Deploy the model to the Wio Terminal
  • Open the COM Monitor and check the recognized audio (the serial settings come from setup(), sketched below)
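For reference, setup() in the exported sample configures the serial port at 115200 baud (so set the COM Monitor speed to match) and allocates the audio buffer. A simplified sketch following the structure of the Edge Impulse example:

void setup()
{
    // The Edge Impulse samples print their output at 115200 baud
    Serial.begin(115200);

    ei_printf("Edge Impulse inferencing demo\n");

    // Allocate the buffer used by microphone_inference_record() in loop()
    if (microphone_inference_start(EI_CLASSIFIER_RAW_SAMPLE_COUNT) == false) {
        ei_printf("ERR: Failed to setup audio sampling\r\n");
        return;
    }
}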

When we speak to the Wio Terminal, a continuous loop analyzes the audio and performs wake-up word detection.

Recording and prediction output in the COM Monitor

Let’s take a look at the loop() code.

  • Invoke the microphone inference function to record audio
  • Run the classifier using the recorded buffer as input
  • Print the predictions

void loop()
{
    ei_printf("Recording...\n");

    // Record one window of audio into the inference buffer
    bool m = microphone_inference_record();
    if (!m) {
        ei_printf("ERR: Failed to record audio...\n");
        return;
    }

    ei_printf("Recording done\n");

    // Wrap the recorded buffer in a signal_t the classifier can read from
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &microphone_audio_signal_get_data;
    ei_impulse_result_t result = { 0 };

    // Run the DSP pipeline and the neural network over the audio window
    EI_IMPULSE_ERROR r = run_classifier(&signal, &result, debug_nn);
    if (r != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", r);
        return;
    }

    // Print the timing info and the score for each label
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    }
}
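To go from these scores to the actual door action, one option is to take the label with the highest score and only act when it clears a confidence threshold. A minimal sketch: the 0.8 threshold and the trigger_door_action() helper are hypothetical, not part of the exported code.

// Hypothetical helper: in the final project, this is where the
// Azure IoT door command would be sent
static void trigger_door_action(const char *label)
{
    ei_printf("Door action: %s\n", label);
}

// Pick the highest-scoring label and act only on a confident match.
// The 0.8 threshold is illustrative; tune it against your own recordings.
static void handle_predictions(const ei_impulse_result_t &result)
{
    size_t best_ix = 0;
    for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (result.classification[ix].value > result.classification[best_ix].value) {
            best_ix = ix;
        }
    }
    if (result.classification[best_ix].value > 0.8f) {
        trigger_door_action(result.classification[best_ix].label);
    }
}

Calling handle_predictions(result) at the end of loop() would then only react to confident wake-up word detections.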

The full code is available here:

https://github.com/elbruno/Blog/blob/main/20211125%20TinyML%20Audio%20OpenClose%20door/TinyML_Audio_OpenClose_door_02.ino

Conclusion

In the output window we can see that the average time to run the audio detection is 10 ms. Super fast!

Happy coding!

Greetings

El Bruno

More posts on my blog, ElBruno.com.

More info at https://beacons.ai/elbruno

