#VS2019 – ML.NET Model Builder GPU vs CPU test: 4 times faster !


Hi !

Yesterday I wrote about the new options that we have to train models in ML.Net Model Builder. The main news is that we now have the option to use our GPU to train models.

Quick recap: Model Builder supports 3 specific training environments

  • Local (CPU)
  • Local (GPU)
  • Azure

Yesterday I tested training a small image recognition model using both CPU and GPU, and the training times were very similar. The image training set was small, and I also hadn’t configured my GPU & CUDA environment, so I decided to raise my own bet and test with something a little more challenging.

For this new test, I’ll use a set of images from a Kaggle challenge [State Farm Distracted Driver Detection] (see references). This is a 1GB image set, 22,424 images, with ten categories.

Of course, I used Model Builder to train an Image Classifier scenario; here is a preview of the configuration:

CPU Training

This training scenario was much more resource-heavy than yesterday’s easy test. The total time was 39.2 minutes. Here are more details:

Total experiment time : 2353.6729442 Secs
------------------------------------------------------------------------------------------------------------------
|                                                     Summary                                                    |
------------------------------------------------------------------------------------------------------------------
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\5e873581-2dab-4d46-911d-cfc0a0455eb1.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 2353.6729442 Secs                                                                       |
|Total number of models explored: 1                                                                              |
------------------------------------------------------------------------------------------------------------------

GPU Training

Using the GPU, training took about 1/4 of the CPU time: 9.6 minutes!

Total experiment time : 581.1946062 Secs
------------------------------------------------------------------------------------------------------------------
|                                                     Summary                                                    |
------------------------------------------------------------------------------------------------------------------
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\cccb2b3f-dbce-45e5-b17e-872b6cc3f116.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 581.1946062 Secs                                                                        |
|Total number of models explored: 1                                                                              |
------------------------------------------------------------------------------------------------------------------
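The speedup is easy to verify from the two experiment times reported in the summaries above (a quick sketch in Python):

```python
# Compare the CPU and GPU experiment times logged by Model Builder.
cpu_secs = 2353.6729442
gpu_secs = 581.1946062

speedup = cpu_secs / gpu_secs
print(f"CPU: {cpu_secs / 60:.1f} min, GPU: {gpu_secs / 60:.1f} min")
print(f"Speedup: {speedup:.2f}x")
```

So the GPU run really is about 4 times faster end to end, not just on a single training step.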

Conclusion

GPUs are great for deep learning because the types of calculations they were designed to process are the same as those encountered in deep learning. Images, videos, and other graphics are represented as matrices, so when you perform any operation, such as a zoom-in effect or a camera rotation, all you are doing is applying some mathematical transformation to a matrix.
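To illustrate the idea: a zoom or a rotation on image coordinates is just a matrix multiplication, which is exactly the kind of operation GPUs parallelize well. A minimal sketch with plain Python lists (no GPU involved, just the math):

```python
import math

def transform(points, matrix):
    """Apply a 2x2 transformation matrix to a list of (x, y) points."""
    return [
        (matrix[0][0] * x + matrix[0][1] * y,
         matrix[1][0] * x + matrix[1][1] * y)
        for x, y in points
    ]

# A 2x zoom is a scaling matrix...
zoom = [[2.0, 0.0], [0.0, 2.0]]
# ...and a 90-degree rotation is a rotation matrix.
theta = math.pi / 2
rotate = [[math.cos(theta), -math.sin(theta)],
          [math.sin(theta), math.cos(theta)]]

corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
zoomed = transform(corners, zoom)
rotated = transform(corners, rotate)
```

A GPU applies this same kind of multiplication to millions of values at once, which is why it shines both for graphics and for the matrix-heavy layers of a neural network.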

Even if you have a modest GPU (like me!), you may want to give it a try when you are training a model. The ML.Net Model Builder documentation includes a series of steps to configure a CUDA 10.0 environment, which is good enough for most NVIDIA graphics cards.

Happy coding!

Greetings

El Bruno

References

#VS2019 – ML.NET Model Builder GPU Support, awesome !


Hi !

Machine Learning.Net (ML.Net) includes a visual, step-by-step option for AutoML: Model Builder. Even though ML.Net is based on .Net Core and we can use it on Windows, Linux, or Mac, Model Builder is only available for Visual Studio 2019.

And, in the latest preview version, Model Builder supports 3 specific training environments

  • Local (CPU)
  • Local (GPU)
  • Azure

And as you can imagine, if you have a decent GPU, the 2nd option is a must. So, let’s take a look at it.

The first step is to install the [ML.NET Model Builder GPU Support (Preview)] extension (see references).

This will take a couple of minutes, and it will add the GPU option to our Model Builder projects. Once we have the extension installed, when we create a new ML.Net Model Builder scenario, we can choose between the CPU, GPU, or Azure environments.

In each one, we can see the different options available.

For this test, I used a sample Image Recognition scenario with 24 images for 3 different labels: fish, flower, and human. This is a very simple scenario, and ResNet will easily work on this.
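Model Builder’s image classification scenario reads the labels from the folder structure: one subfolder per label, with the images inside. A small sketch of what that layout looks like for this test set (the file names and counts are made up for illustration):

```python
import os
import tempfile

# Build a tiny stand-in for the fish/flower/human image set:
# one subfolder per label, 8 placeholder "images" each.
root = tempfile.mkdtemp()
labels = {"fish": 8, "flower": 8, "human": 8}

for label, count in labels.items():
    folder = os.path.join(root, label)
    os.makedirs(folder)
    for i in range(count):
        # Empty placeholder files; a real set would contain JPEG data.
        open(os.path.join(folder, f"{label}_{i}.jpg"), "w").close()

# Enumerate images per label, the way a training tool would.
found = {
    label: len(os.listdir(os.path.join(root, label)))
    for label in labels
}
total = sum(found.values())
```

When you point Model Builder at the root folder, it picks up the three labels from the subfolder names automatically.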

CPU Train

So, the total time to train an Image Recognition model with my CPU is:

Total experiment time : 230.8386996 Secs

Here are some more details:

------------------------------------------------------------------------------------------------------------------
|                                                     Summary                                                    |
------------------------------------------------------------------------------------------------------------------
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\81efe1ab-c776-4071-b0ea-b7c93c65b239.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 230.8386996 Secs                                                                        |
|Total number of models explored: 1                                                                              |
------------------------------------------------------------------------------------------------------------------

GPU Train

So, the total time to train an Image Recognition model with my GPU is:

Total experiment time : 228.1201648 Secs

More Details

------------------------------------------------------------------------------------------------------------------
|                                                     Summary                                                    |
------------------------------------------------------------------------------------------------------------------
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\727e5bf8-bbe0-4d13-9513-043453a06bec.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 228.1201648 Secs                                                                        |
|Total number of models explored: 1                                                                              |
------------------------------------------------------------------------------------------------------------------

CUDA and GPU must be configured

As you can see, the times are very similar in both scenarios, and there is a good reason for this: I just installed a brand new Windows 10 environment, and I haven’t configured my GPU / CUDA yet.

Luckily for us, there is an option in the environment setup process which allows us to check if our GPU will work.

Last time, it took me almost a day to configure my NVIDIA CUDA environment. So, as soon as I get this up and running, I’ll update this post!
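While waiting for a full CUDA setup, one quick sanity check is whether the NVIDIA command-line tools are even reachable. This is just a rough sketch (the tool names are the standard NVIDIA ones; Model Builder’s own GPU check remains the authoritative test):

```python
import shutil

def nvidia_tools_on_path():
    """Report which NVIDIA command-line tools are reachable on PATH.

    shutil.which returns None when a tool is missing, so on a machine
    without CUDA configured this simply reports everything as absent.
    """
    tools = ["nvidia-smi", "nvcc"]
    return {tool: shutil.which(tool) is not None for tool in tools}

status = nvidia_tools_on_path()
```

If `nvidia-smi` is missing you likely need the NVIDIA driver, and if `nvcc` is missing the CUDA toolkit itself is not installed or not on the PATH.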

Happy coding!

Greetings

El Bruno

References