After an amazing weekend, where I split my time supporting two different hackathons, today's big news is the announcement of the new Raspberry Pi 4.
Did you check the news?
64-bit quad-core ARM Cortex-A72 processor, with a core clock speed of 1.5GHz
1GB, 2GB or 4GB of RAM
support for dual 4K displays over its two micro-HDMI ports
a starting price of $35
and much more!
I won’t go deep into details; I prefer to share some really good reviews later in this post. Now it’s time to wait until the 4GB model is available in Canada, and then I’ll update my Custom Vision Docker and Machine Learning performance posts with the new device!
I’ve written a couple of times about Project Malmo and Minecraft, so if you like Minecraft and Artificial Intelligence, MineRL will make your day. Let’s start with some basics:
MineRL is a large-scale dataset on Minecraft of seven different tasks, which highlight a variety of research challenges including open-world multi-agent interactions, long-term planning, vision, control, navigation, and explicit and implicit subtask hierarchies.
There are two main ways to get involved with MineRL: entering the AI (DL) competition, or playing Minecraft (to create more source data to train and test models!)
As for playing, MineRL wants to solve Minecraft using state-of-the-art Machine Learning! To do so, MineRL is creating one of the largest datasets of recorded human player data. The dataset includes a set of tasks which highlight many of the hardest problems in modern-day Reinforcement Learning: sparse rewards and hierarchical policies.
There is plenty of information and details on the main website, and as soon as I finish some of my current work and personal projects, I’ll for sure spend more time here!
The Insider Dev Tour is for developers interested in building Microsoft 365 experiences (Windows, Graph, Office, Teams) today, using the latest dev technologies, as well as for those who want a peek into the future. If you can read code, this is for you, regardless of whether you are a beginner, an expert, a student, or a professional.
The morning keynote session will give you a demo- and code-focused overview of the new developer technology emerging from Redmond. It’s the best of Microsoft Build for Microsoft 365 developers. We’ll cover Windows, Graph, Office, AI, and more so that you are up to date and able to make informed decisions when choosing from the latest technology for your next project.
This is a 100 percent developer-focused event. The tour is an opportunity to connect directly with leads and engineers from Redmond, as well as regional industry leaders and the community of Microsoft MVPs and RDs. We’re excited to meet you in person!
And after yesterday’s post, I realized that the code works, but there is room for performance improvement. So, I went back to my first sample, the one for face detection, and I added some code to measure frames per second (FPS).
In my initial code, the app was processing almost 6 FPS. Then I started to read the code, think about improvements, and I managed to get an amazing 30+ FPS.
So, before moving forward, I want to highlight this StackOverflow post that quickly pointed me to the easiest way to build a stopwatch in Python.
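As a hedged sketch of that idea (the function name and stand-in workload are mine, not from the post): a stopwatch built from `time.perf_counter()` can be wrapped into a small FPS counter like this:

```python
import time

def measure_fps(process_frame, frames=100):
    """Time repeated calls to process_frame() and return frames per second."""
    start = time.perf_counter()            # stopwatch start
    for _ in range(frames):
        process_frame()
    elapsed = time.perf_counter() - start  # stopwatch stop
    return frames / elapsed

# Stand-in workload; in the real app each call would grab and process a frame
fps = measure_fps(lambda: sum(range(1000)))
print(f"Processing at {fps:.1f} FPS")
```

`perf_counter()` is preferred over `time.time()` for this kind of measurement because it is monotonic and high resolution.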
My original code was this one:
And then I realized that I could use some of the OpenCV functions to speed up the face detection process. I really don’t need to process a full HD image (1920 x 1080); I can resize the frame to a quarter of the size and work with that. Based on some of the samples, I ended up with the following code:
Line 12 performs the initial resize, and then I recalculate the positions back before drawing the face frame. This process works almost 6 times faster than the original one.
I’ll continue improving the code and samples, and sharing my learning path!
In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: Find Facial Features.
The live camera output will be something like this:
Note: Special thanks to my daughter who is always OK to help me with this.
The main sample in the source code uses a photo to detect facial features and creates a new one with the detected features. In the following sample, it’s amazing to see that it detects a faraway face behind the main ones and also, somehow, detects some landmarks behind my girl’s glasses:
I wanted to see how fast this library works with a live camera feed, and the results are very good.
I spent some time figuring out the best way to draw lines with OpenCV; in the end, the polylines() function is the one doing all the magic (lines 14 to 17). It took me some time to find the best way to deal with matrix transformations and some other performance tricks, but in the end I got this up and running in 25 lines, which is kind of amazing. And the final code is very simple:
Today’s post is about my experience doing presentations and demos with a Raspberry Pi.
Doing demos with a Raspberry Pi is amazing. I really enjoy sharing some of the amazing stuff we can do with the device, and usually there are one or two people in the audience who can share other, even better Raspberry Pi experiences.
The only issue you find in these scenarios is getting an easy way to connect your device to the internet. Sometimes, using a standard network cable between your laptop and the device is good enough; however, there are other scenarios where connecting to a network is more complicated. For example: the Raspberry Pi connects automatically to a WiFi network, and you need to find its IP address to interact with the device.
Recently, I ordered a Raspberry Pi 3 case that includes a 3.5-inch TFT screen with touch capabilities. I hope that using this and a Bluetooth keyboard will make my life easier (see references).
Sometimes you can’t connect your device to an HDMI screen, so a good option is to bring your own 7-inch screen for the device. For me, this is not optimal, because I need to handle a lot of cables, but it works every time!
The following image shows my typical hotel bedroom when I’m speaking and using a Raspberry Pi: laptop, Raspberry Pi, Bluetooth keyboard, a mouse, the 7-inch screen, and more.
Finally, if your device is connected to the same wireless network but you don’t know the IP address, you may want to use a tool like Adafruit Raspberry Pi Finder. It only takes 2 clicks to find one or more devices on your network.
I’ll leave this here, and maybe in the near future I’ll update this post with my experiences using the small case with the TFT screen.
The first time I did a demo connecting my MacBook Pro to a projector, I found out that the screen settings and configuration in macOS are completely different from the usual Windows experience. And, IMHO, not in a good way.
In Windows, when you are working with 2 or more screens (monitors / projectors), you have the chance to define specific settings for each one. These settings include orientation, resolution, and more.
A macOS display runs at its native resolution, which is the best resolution for that display. To change the resolution, you need to open System Preferences and select Displays. Then, in the Display section, we can change the resolution by selecting the Scaled option.
So far, so good. Each option also shows a “Looks like resolution …” label, which is great. But it’s not good enough. Before a demo for a client or a user group, I like to dry-run my presentations and demos using a custom resolution; you never know which hardware is going to be at your final destination.
After testing a couple of apps, I finally found a very good one with the features I need: