#Azure – How to push / export and run a docker python flask web app on Azure

Hi !

Here is an important reminder before starting:

Please do not use this post as a guide to deploy to production environments.

I was reading a lot about Azure Container Registry and other container scenarios, and most of the demos / tutorials are based on .NET container web apps. Which is great; however, there is not much information about the same process using Python container web apps. So, I took this as an excuse to learn something new, and here I am 😁. Let's start.

Python Web App using Flask

I started with a simple Python Flask app. For this demo, I will use a post I wrote about Flask and multi-threading (see references for the full post).

# Bruno Capuano
# start a webserver with flask in a thread
# start a different thread +1 a shared var

from flask import Flask                                                         
import threading
import time

iCounter = 0
data = 'foo'
app = Flask(__name__)

def mainSum():
    # increment counter every second
    global iCounter
    while True:
        iCounter = iCounter + 1
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)    
        print(str(f"{current_time} - data {iCounter}"))
        time.sleep(1)

def startWebServer():
    app.run(host='0.0.0.0', port=8100)

@app.route("/getdata")
def main():
    global iCounter
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)    
    return str(f"{current_time} - data {iCounter}")

if __name__ == "__main__":
    stateThread = threading.Thread(target=mainSum)
    stateThread.daemon = True
    stateThread.start()

    webThread = threading.Thread(target=startWebServer)
    webThread.start()

This is a resource-consuming app, so be careful with this code!
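The background-thread pattern above can be sketched without Flask; this minimal example (the names are mine, not from the original app) shows why a daemon thread keeps updating shared state while the main thread does other work:

```python
import threading
import time

counter = 0

def tick():
    # increment the shared counter in the background
    global counter
    while True:
        counter += 1
        time.sleep(0.05)

worker = threading.Thread(target=tick)
worker.daemon = True  # dies automatically when the main thread exits
worker.start()

time.sleep(0.3)       # main thread does other work meanwhile
print(counter > 0)    # the daemon thread has been counting in parallel
```

Because the counter thread is a daemon, it does not keep the process alive on its own; in the Flask app it is the non-daemon web server thread that keeps everything running.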

Create a Docker container for the app

So, the next step was to create a container for the app. This is my Dockerfile; I added the commands to build and run the image at the bottom.

FROM python:3.7-slim

RUN pip install -U pip
RUN pip install --no-cache-dir flask~=1.1.2

COPY app /app

# Expose the port, default 8100
EXPOSE 8100

# Set the working directory
WORKDIR /app

# Run the flask server for the endpoints
CMD python -u app.py

# Commands to build and run the image
# docker build --pull --rm -f "Dockerfile" -t dockertoazure-webapp-service "."
# docker run -d -p 8100:8100 --privileged --name dockertoazure-webapp-service dockertoazure-webapp-service:latest

So, I built and ran my Docker image and everything worked fine!

I double-checked in Portainer, because I'm not a command line person.

docker to azure build and run local ok in portainer

So, here is my 1st milestone:

A local python web app using flask, in a docker container ready to be used elsewhere.

Push local Image to Azure

Our next step will be to create an Azure container registry, which is the repository we will use in Azure to host our images. From the references section, the official definition is:

An Azure container registry is a private Docker registry in Azure where you can store and manage private Docker container images and related artifacts. In this quickstart, you create a container registry with the Azure portal. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.

Quickstart: Create an Azure container registry using the Azure portal

I will assume Azure CLI is installed, so the next step is to log in to Azure and also log in to your container registry.

# LOGIN TO AZURE
az login
az acr login --name <container name>.azurecr.io

The next step requires a tagged image for the Azure Container Registry.

# TAG LOCAL IMAGE
docker tag dockertoazure-webapp-service <container name>.azurecr.io/dockertoazure-webapp-service:v1

And, time to push the local image to the cloud!

# PUSH LOCAL IMAGE
docker push <container name>.azurecr.io/dockertoazure-webapp-service:v1

And we have 2 ways to check if the image was successfully pushed. Using the command line:

# LIST CURRENT REPOSITORIES
az acr repository list -n <container name>.azurecr.io
docker to azure list azure container registry repositories

We can also use the Azure Portal to check the available repositories

docker to azure azure container repositories list

Running the Docker image in a WebApp

And here comes the tricky part. Once you select a repository, you can see the tags of the image and you can deploy a WebApp directly from there.

docker to azure docker image available options

However, if you haven’t created a WebApp using other steps, you won’t have this option enabled.

Note: My guess is that until you have a Linux App Service plan created, the [Run instance] and [Deploy to web app] options will be disabled. In this scenario, you may want to skip the next steps and go directly to option 2.

Let's see both options. From the context menu in the previous image, creating a web app is just a single step:

docker to azure create using the context menu

A couple of seconds later we have our App up and running

The 2nd option is to create your Web App as a new [Web App for Containers].

And here we find a couple of interesting options, like creating a web site based on a Docker container:

And in the next step, we can choose the image that we uploaded in previous steps

new web app docker image details

And, again, a couple of seconds later, we also have this new App up and running

Conclusion

So, nothing new here. I haven't found an article or post that describes this end to end, so I put all the pieces together and wrote down these lines. Again, an important warning:

There are much better ways to do this! Think of a DevOps approach.

I promise to write a follow-up on this one, with step-by-step details using GitHub Actions, which is the right way.

Happy coding!

Greetings

El Bruno

Resources

#VisualStudioCode – Create and use Code Snippets in @code, and create custom snippets just for your workspace

Hi !

Code Snippets are super useful, and creating your own custom Code Snippets is super easy in Visual Studio Code. The official documentation (see references) explains how to create snippets, with full details of the JSON file, sections, contents and more. However, there are some extra tips on top of this information that are super useful.

Snippet Creator

This is an amazing Visual Studio Code extension. Just search for [snippet creator] in the Extensions panel and install the extension.

visual studio code snippet creator extension

Once the extension is installed, the way to use this is super simple.

  • Select some code
  • CTRL + SHIFT + P, and type [create snippet]
  • Follow the steps: define the snippet prefix and, optionally, a description
  • That’s it!
visual studio code snippet creator add new snippet

I created a custom snippet with the prefix [cameraCalcFPS], and now the snippet is available for any Python file!

visual studio code custom code snippet available

In order to check all the custom code snippets:

  • CTRL + SHIFT + P
  • Type [configure user snippets]
  • Open the specific one, for this sample python.json

The code snippets file contains all the custom snippets you have created:

visual studio code open custom snippets for python
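For reference, an entry in python.json looks like this. The cameraCalcFPS prefix is the one from my demo, but the body shown here is just an illustrative placeholder:

```json
{
  "Camera Calc FPS": {
    "prefix": "cameraCalcFPS",
    "body": [
      "start_time = time.time()",
      "fps = frame_count / (time.time() - start_time)",
      "print(f\"FPS: {fps:.2f}\")"
    ],
    "description": "Calculate and print frames per second"
  }
}
```

Each entry maps a snippet name to a prefix (what you type to trigger it), a body (the lines inserted, with optional tab-stop placeholders like $1), and an optional description.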

Workspace specific snippets

There is also another specific scenario which I found very interesting:

The need for specific snippets inside a workspace

In order to do this, just create a file [mysnippets.code-snippets] in the workspace and paste your own snippets there. These snippets will only be available in this workspace!

snippets for a workspace

Happy coding!

Greetings

El Bruno

References

Desplegando aplicaciones desde Azure Deployment Center

Mteheran's Blog

Azure Deployment Center is a feature of several Azure services where we can easily select the source of our code in a few simple steps, and in this way get the whole continuous integration and continuous deployment configuration of our app to Azure.

The first step would be to create our App Service, which will help us host our app. In this case we are going to create a free App Service on a Linux machine. In the example I am using a Node.js API.

Once our resource is created, we can go into its options and select Deployment Center, where we will choose the source of the code we are going to use. Azure DevOps, GitHub and Bitbucket are the most popular; for this demo I am going to use Azure DevOps.

We select the provider that will generate the continuous integration and continuous deployment configuration, in…

View original post 135 more words

#Event – Resources used on “Getting Started with Machine Learning .NET” for the Central Ohio .NET Developer’s Group (@CONDG)

Hi !

Great session yesterday with Matthew and the Central Ohio .NET Developer's Group (CONDG). And the excuse was to talk about Machine Learning .NET.

As usual, now it’s time for slides and code

Slides

Code

The code used during the session is available here: https://github.com/elbruno/events/tree/main/2020%2009%2024%20CONDG%20MLNet

Session Recording

(I’ll update this block when the recording is available!)

Resources

#MSIgnite – Book of news is available ! Also translated to Chinese, French, German, Italian, Japanese, Portuguese and Spanish #MSIgnite2020

Hi !

Awesome news from Microsoft Ignite 2020: we have access to the Book of News, and it also has translations to support languages other than English. The Book of News is a guide to the key news items announced at Ignite, and instead of the usual PDF, now we have an amazing website with all the contents!

Note: The original plan was to share this from New Orleans 😀

Microsoft Ignite – BOOK OF NEWS

The book of news is huge with these main sections

And, as I mentioned before it has translations to

And remember, see you later today at our AI session

The promise of AI and ML – Will it change the world?

Bruno in the Ignite session promotional poster

Happy coding!

Greetings

El Bruno

#Personal – We have a “Learn to pronounce” feature in Google !

Hi !

Some days ago, while I was searching for the meaning of a word, I found an amazing feature in Google:

Learn to pronounce

learn to pronounce feature in Google results

And it’s very basic and amazing at the same time.

  • You search for a definition of a word
  • In the results area, you get a speaker button, which plays the word
  • Once you play the word, a new feature becomes available: [Learn to Pronounce]
  • This opens a new section, with the mouth movements for the American and British pronunciations

That's it, super useful! I could not find much related information about this. It seems it's been around for over 2 years, based on these articles from The Verge:

Besides some great AI in the back, this is so amazing 😀

Happy coding!

Greetings

El Bruno

#VS2019 – ML.NET Model Builder training using GPU, CPU and … Azure !

Hi !

In my previous posts on Model Builder, I showed the differences between using the CPU and GPU to train a model. There is a 3rd option, which involves an Azure ML experiment and performs the training in the cloud.

It took me some time to set up this environment, mostly because I tried to use an existing Azure Compute Instance that I already had, and Model Builder needs a Compute Cluster.

It is also important to remark here that you need to create a dedicated, GPU-based Compute Cluster. There are expenses / costs associated with these resources, so do your numbers before you start.

And, here we go, now we can move forward with the Model Builder Assistant.

I made some tests using a small image dataset, and it was awesome. Training a 24-image dataset took between 8 and 9 minutes, and the results were very good. A good insight here is the chance to get more details directly in the Azure Machine Learning portal.

We can go deep into each experiment and take a look at some metrics, like F1 score, precision, recall and more.
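As a quick reminder of how those metrics relate, F1 is the harmonic mean of precision and recall; a minimal sketch (the function name is mine):

```python
def f1_score(precision: float, recall: float) -> float:
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# e.g. a run with precision 0.80 and recall 0.60
print(round(f1_score(0.80, 0.60), 4))  # 0.6857
```

The harmonic mean punishes imbalance: a model with high precision but very low recall still gets a low F1.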

Each Model Builder image classification project will trigger several Azure ML experiments:

  • Automated ML
  • HyperDrive
  • Preparation
  • Script

The Script experiment is the one we can open to get access to some detailed logs, and also to the ONNX model.

So, I decided to go big and test this using the set of images from a Kaggle challenge, [State Farm Distracted Driver Detection] (see references). This is a 1 GB image set: 22,424 images in ten categories.

The 1st step is to upload the 22,424 images to an Azure resource; this took some time.

And then, start tracking the progress in the Azure Machine Learning portal.

And after some time, the process triggered a timeout.

The details of the 4 experiments suggest that some limit was exceeded. I'm not sure if it was on the IDE side or the Azure side.

However, the experiment in charge of training the model [Run 12] got some successful models. Accuracy, F1 and precision were getting better.

Reading some logs, I can see how the error was triggered on epoch 8. I need to spend more time here to figure out what happened!

Note: I already reported the issue to the GitHub Repo.

As a final thought: using Azure as the training environment in Model Builder is an amazing option. A big dataset may be a problem, or maybe my quota is the problem. Anyway, with smaller datasets it worked great. I'll keep an eye on this issue and update the blog with any news.

Happy coding!

Greetings

El Bruno

References

#MSIgnite – FREE Microsoft Certification Voucher as part of the Cloud Skills Challenge

Hi !

Next week we will have the 1st virtual Ignite. I'm lucky enough to participate in a great AI session, and while building my agenda, I also realized that we have a free certification voucher program, similar to the one we had at Build 2020.

Once you are registered for the event, you need to sign up for the Microsoft Ignite Cloud Skills Challenge.

Participate in the Microsoft Ignite Cloud Skills Challenge to earn a free Microsoft Certification exam and a chance to win prizes! Choose up to six different challenges to complete on Microsoft Learn. After you’ve completed your first challenge, you’ll earn a free Microsoft Certification exam. For each challenge you complete, you’ll earn five entries into a sweepstakes. One grand prize winner will receive a chance to meet with a Microsoft leader and invite up to four friends.

Microsoft Ignite Cloud Skills Challenge

Another important detail is this one:

The free certification exam is available for redemption starting October 12, 2020 and is valid until March 31, 2021. If you do not schedule and take your eligible exam within that timeframe, you will not be able to redeem your offer.
Additional details and instructions on how to redeem your free certification exam will be available starting October 12, 2020.

Microsoft Ignite Cloud Skills Challenge 2020: Free Certification Exam

So, now you can wait for the certification voucher and choose one exam from the Exams list.

Awesome !

Happy coding!

Greetings

El Bruno

References

#YoloV3 – Rapid Object detection with 5 lines of code ( @code and #Python for the win! )

Hi !

Some time ago, I wrote a simple Python class as a wrapper for YoloV3. This allows me to write 5 lines of code to analyze an image. Once I use this class, I only press F5 in Visual Studio Code and it's magic:

detecting images with 3 lines of code

A sample usage will be similar to this one. The only required parameter for the YoloV3 analyzer is the confidence; 0.5 is good enough for this demo:

# Bruno Capuano 2020
# simple implementation for YoloV3 object detection
import cv2
from yoloanalyzer import yoloV3_analyzer
image_path = "02.jpg"
# init Analyzer with confidence 50%
ya = yoloV3_analyzer(0.5)
# analyze and show image
image = cv2.imread(image_path)
newImage = ya.ImageProcess(image)
cv2.imshow("Rapid YoloV3 demo", newImage)
# wrap up
cv2.waitKey()
cv2.destroyAllWindows()

And the output rocks: not only cats, but also dogs and humans!

I added a couple of parameters to define whether we want labels and bounding boxes in the output image. The same image without bounding boxes will be:

And finally, the main class that performs this. It's a very simple one, so feel free to use it, and remember that you must check the official YoloV3 repository to get these files:

  • coco.names
  • yolov3.cfg
  • yolov3.weights
# Bruno Capuano 2020
# performs object detection using YoloV3 in an image and returns the processed image
import numpy as np
import cv2
import time

class yoloV3_analyzer:
    def __init__(self, confidence):
        self.confidence = confidence

    def InitYoloV3(self):
        # load labels, colors, the network and its output layer names
        self.weights = "yolov3.weights"
        self.config = "yolov3.cfg"
        self.labelsPath = "coco.names"
        self.LABELS = open(self.labelsPath).read().strip().split("\n")
        self.COLORS = np.random.uniform(0, 255, size=(len(self.LABELS), 3))
        self.net = cv2.dnn.readNetFromDarknet(self.config, self.weights)
        self.ln = self.net.getLayerNames()
        self.ln = [self.ln[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]

    def ImageProcess(self, image, showLabels = True, showBoundingBox = True):
        # Init YOLO if needed
        if self.net is None:
            self.InitYoloV3()

        (H, W) = image.shape[:2]
        frame = image.copy()
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        self.net.setInput(blob)
        starttime = time.time()
        layerOutputs = self.net.forward(self.ln)
        stoptime = time.time()
        print("FPS: {:.4f}".format(stoptime - starttime))

        confidences = []
        outline = []
        class_ids = []
        for output in layerOutputs:
            for detection in output:
                scores = detection[5:]
                maxi_class = np.argmax(scores)
                confidence = scores[maxi_class]
                if confidence > self.confidence:
                    # scale the box back to image coordinates
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    outline.append([x, y, int(width), int(height)])
                    class_ids.append(maxi_class)
                    confidences.append(float(confidence))

        # non-maximum suppression to drop overlapping boxes
        box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)
        if len(box_line) > 0:
            flat_box = box_line.flatten()
            for i in flat_box:
                (x, y) = (outline[i][0], outline[i][1])
                (w, h) = (outline[i][2], outline[i][3])
                x_plus_w = round(x + w)
                y_plus_h = round(y + h)
                label = str(self.LABELS[class_ids[i]])
                color = self.COLORS[class_ids[i]]
                if showBoundingBox:
                    cv2.rectangle(frame, (x, y), (x_plus_w, y_plus_h), color, 2)
                if showLabels:
                    cv2.putText(frame, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
        return frame

    # Yolo state, initialized on first use
    net = None
    ln = None
    LABELS = None
    frameno = 0
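A note on the cv2.dnn.NMSBoxes call in the class: non-maximum suppression keeps the highest-confidence detection and drops boxes that overlap it too much, where overlap is measured as intersection-over-union. A minimal IoU sketch (the function name is mine), using the same [x, y, w, h] box format as the outline list above:

```python
def iou(box_a, box_b):
    # boxes are [x, y, w, h], top-left corner plus size
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # intersection rectangle
    ix = max(ax, bx)
    iy = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou([0, 0, 10, 10], [0, 0, 10, 10]))  # identical boxes -> 1.0
print(iou([0, 0, 10, 10], [20, 20, 5, 5]))  # disjoint boxes -> 0.0
```

In the class, the 0.5 and 0.3 arguments to NMSBoxes are the confidence threshold and the IoU threshold above which a lower-confidence box is suppressed.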

Happy coding!

Greetings

El Bruno

References