#YoloV3 – Rapid Object Detection with 5 lines of code (@code and #Python for the win!)

Hi !

Some time ago, I wrote a simple Python class as a wrapper for YoloV3. It allows me to analyze an image with 5 lines of code. Once I use this class, I just press F5 in Visual Studio Code and it's magic:

detecting images with 3 lines of code

A sample usage will be similar to this one. The only required parameter for the YoloV3 analyzer is the confidence; 0.5 is good enough for this demo:

# Bruno Capuano 2020
# simple implementation for YoloV3 object detection
import cv2
from yoloanalyzer import yoloV3_analyzer
image_path = "02.jpg"
# init Analyzer with confidence 50%
ya = yoloV3_analyzer(0.5)
# analyze and show image
image = cv2.imread(image_path)
newImage = ya.ImageProcess(image)
cv2.imshow("Rapid YoloV3 demo", newImage)
# wrap up
cv2.waitKey()
cv2.destroyAllWindows()

And the output rocks: not only cats, but also dogs and humans !

I added a couple of parameters to define if we want labels and bounding boxes in the output image. The same image without bounding boxes looks like this:
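For reference, this is how that call looks, using the ImageProcess parameters from the class shown below:

# same analysis, but skip the bounding boxes in the output image
newImage = ya.ImageProcess(image, showLabels = True, showBoundingBox = False)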

And finally, the main class that performs all of this. It's a very simple one, so feel free to use it. Remember that you must get the following files from the official YoloV3 repository:

  • coco.names
  • yolov3.cfg
  • yolov3.weights
# Bruno Capuano 2020
# performs object detection using YoloV3 in an image and returns the processed image
import imghdr
import os
import numpy as np
import cv2
import time

class yoloV3_analyzer:
    def __init__(self, confidence):
        self.confidence = confidence

    def InitYoloV3(self):
        #global net, ln, LABELS
        self.weights = "yolov3.weights"
        self.config = "yolov3.cfg"
        self.labelsPath = "coco.names"
        self.LABELS = open(self.labelsPath).read().strip().split("\n")
        self.COLORS = np.random.uniform(0, 255, size=(len(self.LABELS), 3))
        self.net = cv2.dnn.readNetFromDarknet(self.config, self.weights)
        self.ln = self.net.getLayerNames()
        self.ln = [self.ln[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]

    def ImageProcess(self, image, showLabels = True, showBoundingBox = True):
        # Init YOLO if needed
        if(self.net is None):
            self.InitYoloV3()

        (H, W) = image.shape[:2]
        frame = image.copy()
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        self.net.setInput(blob)
        starttime = time.time()
        layerOutputs = self.net.forward(self.ln)
        stoptime = time.time()
        print("FPS: {:.4f}".format((stoptime - starttime)))

        confidences = []
        outline = []
        class_ids = []
        for output in layerOutputs:
            for detection in output:
                scores = detection[5:]
                maxi_class = np.argmax(scores)
                confidence = scores[maxi_class]
                if confidence > self.confidence:
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    outline.append([x, y, int(width), int(height)])
                    class_ids.append(maxi_class)
                    confidences.append(float(confidence))

        # non-maxima suppression to remove overlapping boxes
        box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)
        if len(box_line) > 0:
            flat_box = box_line.flatten()
            pairs = []
            for i in flat_box:
                (x, y) = (outline[i][0], outline[i][1])
                (w, h) = (outline[i][2], outline[i][3])
                x_plus_w = round(x + w)
                y_plus_h = round(y + h)
                label = str(self.LABELS[class_ids[i]])
                color = self.COLORS[class_ids[i]]
                if (showBoundingBox == True):
                    cv2.rectangle(frame, (x, y), (x_plus_w, y_plus_h), color, 2)
                if (showLabels == True):
                    cv2.putText(frame, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
        return frame

    # Yolo
    net = None
    ln = None
    LABELS = None
    frameno = 0

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – Sorting Arrays in #JavaScript, #Python and #CSharp … choose your own adventure

Hi !

Not my usual type of post, however this kept me thinking a lot. It all started with this tweet; the usual joke about JavaScript and how it works with types.

array sorting in Javascript

I noticed that some responses focused on the "you need to know how the language works" angle before posting something like this. And that is correct: you need to know that JavaScript converts the elements into strings and then sorts them to understand the crappy, not-so-accurate output.

However, I am not a JavaScript expert, and I do not want to be one. When I use a tool or a programming language, I expect a nice learning curve; the previous example is not a nice "Welcome to JavaScript, where arrays and type conversions will make you think twice until you get it!".

I've been doing a lot of Python programming, so I did the same exercise, and it worked as you would expect !

array and sort in python
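If the screenshots are hard to read, a minimal sketch of the exercise in Python (with made-up sample values) looks like this; sorting with key=str mimics the JavaScript behavior of comparing the elements as strings:

numbers = [1, 10, 2, 21, 3]
print(sorted(numbers))           # [1, 2, 3, 10, 21] (numeric sort, as expected)
print(sorted(numbers, key=str))  # [1, 10, 2, 21, 3] (string sort, the JavaScript-style surprise)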

I also did a similar test in C#, and it also worked !

array and sort in C#

Do not get me wrong, this is not an Anti-JavaScript post. JavaScript is awesome and is almost everywhere. I know a lot of awesome JavaScript programmers, who I really admire.

However, as a developer, I expect to avoid issues in something as simple as an array sort. I will always focus on "the code is clean and everyone will understand how it works; just read the code!".

I am used to this as a C# developer; and I also found that Python is cool for this. But, sometimes, just sometimes, JavaScript looks like hell !

Bonus: an old post from my last JavaScript adventure.

Happy coding!

Greetings

El Bruno

#Python – Context Menu and Key Press with #PySimpleGUI

Hi !

After my base sample code for a GUI app, now it’s time to add some interaction features:

  • Context Menu
  • Capture Key Press

In the following example, I’m adding a context menu with the following elements:

python pysimplegui context menu

This can be done as part of the window definition, for example:

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window    = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout, 
                    right_click_menu=right_click_menu,
                    no_titlebar=False, alpha_channel=1, grab_anywhere=False, 
                    return_keyboard_events=True, location=(100, 100))      

The menu definition and how to create submenus, separators, quick access keys and more are part of the PySimpleGUI documentation.

And then, in order to capture events in the window, we need to check the events read in each iteration of the while loop. The following sample checks the window events to:

  • Close the window if the user presses the [X] to close the window, or clicks the [Exit] element on the context menu
  • Change the value of a boolean var if the user presses the [F] key, or clicks the [FPS] element on the context menu
    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps

The full code:

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display FPS
import time
import cv2
import PySimpleGUI as sg

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   right_click_menu=right_click_menu,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

# Camera Settings
camera_Width = 480   # 640 # 1024 # 1280
camera_Heigth = 320  # 480 # 780  # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

display_fps = False

while True:
    start_time = time.time()

    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    if (display_fps == True) and (time.time() - start_time) > 0:
        fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno

Resources

#Python – Creating GUIs with #PySimpleGUI. 2 webcams view with 50 lines of code

Hi !

Working with Computer Vision is super fun. And there are some scenarios where displaying the step-by-step processing of an image is the best way to present it.

In most of my scenarios I use OpenCV, however for a more detailed presentation I needed to search for and learn a GUI framework in Python. That's how I got to PySimpleGUI (see references).

Note: As a long-time C# dev, I'm missing XAML a lot !

PySimpleGUI is very simple (as you can expect!), and with a few lines of code we can create a UI like this one:

Let’s display the camera feed and a gray scale view of the camera feed

2 cameras in python, using PySimpleGUI to create a windows to display this

Super easy !

Let’s take a look at the code

  • Lines 16-34. This is the main window definition. The window has 2 rows.
    • The 1st row has 2 columns with the 2 cameras; each camera has its own element key
    • The 2nd row has an image to display at the bottom
  • Line 30. This is the final merge of the 2 rows
  • Lines 32-34. This is the window definition. We can define title, transparency, etc.
  • Lines 37-40. Window event management. I'll write more about this for sure; right now, I'm only checking for window close to exit the loop.
  • Lines 51-53. Transform the camera frame to a byte array, based on the PNG format, and assign the array to the 1st camera viewer (see the snippet right after this list).
  • Lines 55-60. Transform the camera frame to a grayscale frame. Then transform the grayscale frame to a byte array, based on the PNG format, and assign the array to the 2nd camera viewer.
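The frame-to-viewer idiom is the only non-obvious step, so here it is in isolation (the same calls used in the full listing below):

# encode an OpenCV frame as PNG bytes and push it to a PySimpleGUI image element
imgbytes = cv2.imencode(".png", frame)[1].tobytes()
window["cam1"].update(data=imgbytes)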

Done !

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display the camera feed with grayscale using OpenCV
import time
import cv2
import PySimpleGUI as sg

# Camera Settings
camera_Width = 320   # 480 # 640 # 1024 # 1280
camera_Heigth = 240  # 320 # 480 # 780  # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

while True:
    start_time = time.time()
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED:
        break

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    # if (time.time() - start_time) > 0:
    #     fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
    #     font = cv2.FONT_HERSHEY_DUPLEX
    #     cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno

Resources

#Python – #FastAPI Webserver sharing information from values in a different thread

Hi !

After yesterday's post using Flask, I was sure that a FastAPI version would be needed, so here it goes:

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

Read my previous posts to understand why I think the simplest way to solve this is multi-threading:

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy ! And after a couple of tests, I managed to create a single file implementing this:

# Bruno Capuano
# simple webserver with fastapi
# run with: uvicorn 07:app --reload
# test with: http://127.0.0.1:8000/getdata
# on each call, validate if the thread is started;
# if the thread is None, start a different thread that +1s a shared var
from typing import Optional
from fastapi import FastAPI
import threading
import time

stateThread = None
iCounter = 0
app = FastAPI()

def validateStateThread():
    global stateThread
    if (stateThread is None):
        print(f"start thread")
        stateThread = threading.Thread(target=mainSum)
        stateThread.daemon = True
        stateThread.start()

@app.get("/getdata")
def main():
    global iCounter
    validateStateThread()
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    return str(f"{current_time} - data {iCounter}")

def mainSum():
    # increment counter every second
    global iCounter
    while True:
        iCounter = iCounter + 1
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)
        print(str(f"{current_time} - data {iCounter}"))
        time.sleep(1)
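A small design note: the counter thread is created with daemon = True, so it will not keep the process alive; when you stop uvicorn, the infinite loop dies with it.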

So at this point, you may think: why does El Bruno need this? Let me share an image that I'll use in future posts:

thermal camera demo

Happy coding!

Greetings

El Bruno

Resources

#Python – Flask Webserver sharing information from values in a different thread

Hi !

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

This is simple; however, the sensor does not support constant requests, and it may return a "too many requests" response when called directly. So the idea of getting the sensor information directly in the web request was not valid from day zero.

I asked for support / guidance, and my amazing and smart friends showed me the concept of OVER ENGINEERING. Docker, Compose, queues, coordination and more were part of some of the proposals. However, they also showed me the easiest and simplest way to solve this: multi-threading.

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy ! And after a couple of tests, I managed to create a single file implementing this:

# Bruno Capuano
# start a webserver with flask in a thread
# start a different thread that +1s a shared var
from flask import Flask
import threading
import time

iCounter = 0
data = 'foo'
app = Flask(__name__)

def mainSum():
    # increment counter every second
    global iCounter
    while True:
        iCounter = iCounter + 1
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)
        print(str(f"{current_time} - data {iCounter}"))
        time.sleep(1)

def startWebServer():
    app.run(host='0.0.0.0', port=8080)

@app.route("/getdata")
def main():
    global iCounter
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    return str(f"{current_time} - data {iCounter}")

if __name__ == "__main__":
    stateThread = threading.Thread(target=mainSum)
    stateThread.daemon = True
    stateThread.start()
    webThread = threading.Thread(target=startWebServer)
    webThread.start()
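To test it, run the script and browse to http://localhost:8080/getdata; each request returns the current time plus the latest counter value that the background thread keeps updating.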

So at this point, you may think: why does El Bruno need this? Let me share an image that I'll use in future posts:

thermal camera demo

Note: Some very smart people also suggested implementing this using FastAPI instead of Flask, so a future post may cover this.

Happy coding!

Greetings

El Bruno

Resources

#Python – Update all Python Packages using pip with #PowerShell

Hi !

Here is a small script that I always go back to my notes to search for. I found it on the internet some time ago, and it is the easiest way to update all the packages in a specific environment using pip and PowerShell:

pip freeze | %{$_.split('==')[0]} | %{pip install --upgrade $_}
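In case the one-liner looks cryptic: pip freeze lists the installed packages as name==version pairs, the first ForEach-Object (%) block keeps only the name before the ==, and the second one runs pip install --upgrade for each of those names.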

Depending on your environment, it may take some time. The output is cool, and your packages end up updated !

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (20/N)

Hi !

We already have the drone camera feed ready to process, so let's do some Image Segmentation today. As usual, let's start with the formal definition of Image Segmentation:

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.[1][2] Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).[1] When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms like marching cubes.[3]

Wikipedia, Image Segmentation

The technique is amazing, and once it is attached to the drone camera, we can get something like this:

I used a Python library to do most of the work: PixelLib. It was created by an amazing set of colleagues, so please check the references and take a look at the project description.

PixelLib: is a library built for an easy implementation of Image Segmentation in real life problems. PixelLib is a flexible library that can be integrated into software solutions that require the application of Image Segmentation.

PixelLib

Once I had all the pieces together, I opened a Pull Request with a single change to allow the use of OpenCV webcam camera frames, and I got a basic demo up and running.

Let’s review the code

  • Line 147. That's it: a single line which performs the instance segmentation and also displays the bounding boxes (isolated in the sketch below).
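Isolated from the rest of the demo, that line plus the model setup from earlier in the script boils down to this minimal sketch:

# minimal sketch: instance segmentation on a single OpenCV frame with PixelLib
from pixellib.instance import instance_segmentation
import cv2

instance_seg = instance_segmentation()
instance_seg.load_model("mask_rcnn_coco.h5")

img = cv2.imread("frame.png")  # any OpenCV BGR frame works here
segmask, img = instance_seg.segmentFrame(img, show_bboxes=True)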

Sample Code

# Bruno Capuano
# enable drone video camera
# display video camera using OpenCV
# display FPS
# add a bottom image overlay, using a background image
# key D enable / disable instance segmentation detection
# save a local video with the camera recorded
import pixellib
from pixellib.instance import instance_segmentation
import socket
import time
import threading
import os
import cv2

def receiveData():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != 'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
        except:
            break

def sendCommand(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == 'OK' or response == 'ok':
            return True
    return False

# -----------------------------------------------
# Main program
# -----------------------------------------------
# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
response_state = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))
stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stateSocket.bind(('', STATE_UDP_PORT))

# start threads
recThread = threading.Thread(target=receiveData)
recThread.daemon = True
recThread.start()
stateThread = threading.Thread(target=readStates)
stateThread.daemon = True
stateThread.start()

# connect to drone
response = sendControlCommand("command")
print(f'command response: {response}')
response = sendControlCommand("streamon")
print(f'streamon response: {response}')

# drone information
battery = 0

# open UDP
print(f'opening UDP video feed, wait 2 seconds ')
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)
time.sleep(2)

# open video writer to save video
vid_cod = cv2.VideoWriter_fourcc(*'XVID')
vid_output = cv2.VideoWriter("cam_video.mp4", vid_cod, 20.0, (640, 480))
dsize = (640, 480)

# load bottom img
background = cv2.imread('Bottom03.png')
background = cv2.resize(background, dsize)

# load model
instance_seg = instance_segmentation()
instance_seg.load_model("mask_rcnn_coco.h5")

# main app
detectionEnabled = False
i = 0
while True:
    i = i + 1
    start_time = time.time()
    sendReadCommand('battery?')
    print(f'battery: {battery} % - i: {i}')
    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (640, 480))

        if (detectionEnabled):
            # save image to disk and open it
            imgNumber = str(i).zfill(5)
            frameImageFileName = str(f'tmp\image{imgNumber}.png')
            outputImageName = str(f'tmp\image{imgNumber}Out.png')
            if os.path.exists(frameImageFileName):
                os.remove(frameImageFileName)
            cv2.imwrite(frameImageFileName, img)
            segmask, img = instance_seg.segmentFrame(img, show_bboxes=True)
            cv2.imwrite(outputImageName, img)

        # overlay background
        img = cv2.addWeighted(background, 1, img, 1, 0)

        if (time.time() - start_time) > 0:
            fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(img, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

        cv2.imshow('@elbruno - DJI Tello Camera', img)
        vid_output.write(img)
    except Exception as e:
        print(f'exc: {e}')
        pass

    # key controller
    key = cv2.waitKey(1) & 0xFF
    if key == ord("d"):
        if (detectionEnabled == True):
            detectionEnabled = False
        else:
            detectionEnabled = True
    if key == ord("q"):
        break

# release resources
response = sendControlCommand("streamoff")
print(f'streamoff response: {response}')

# close the already opened camera, and the video file
cap.release()
vid_output.release()
cv2.destroyAllWindows()

I'll show a couple of live demos of this in my next Global AI Community Drone AI sessions. Check my next events section!

Happy coding!

Greetings

El Bruno

References

#Windows10 – #WindowsTerminal and Anaconda, bonus: start an Anaconda Virtual Environment

Hi !

Windows Terminal (WT) is one of the coolest tools I've been using in the last couple of years. I'm not an expert, and not even a fan of CLIs; however, I must admit working with WT is super cool.

Bonus: If you speak Spanish, I shared my own thoughts about this with Juan and Eduard in a podcast episode here.

On top of this, I also use Anaconda a lot. And, now that we can launch and use Anaconda from a PowerShell Prompt, I think I should spend some time trying to figure out how to have Anaconda inside Windows Terminal.

windows terminal new profiles for Anaconda

I will assume that you know the basics of Windows Terminal profiles. As WT users, we can create as many profiles as we want to have different tools available. This is my starting point to use Anaconda with Windows Terminal.

Note: Check References for Donovan Brown's post about working with profiles.

Create a new profile to launch Anaconda in Windows Terminal

Let’s go to Windows Terminal Settings to create a new profile for Anaconda. In order to do this, I’ll copy and paste an existing profile, update the Guid and complete the following values.

  • guid: create and paste a new Guid (see the tip after this list)
  • name: I defaulted this to Anaconda
  • commandline: this is the tricky one. So I’ll describe the steps below.
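Tip: a quick way to generate a fresh Guid is to run [guid]::NewGuid() in any PowerShell prompt and paste the result into the profile.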

I browse to [C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Anaconda3 (64-bit)] and view the properties of the [Anaconda PowerShell Prompt] shortcut. Then I copy the Target value and use it in the commandline element.

get properties from shortcut for Anaconda Command Prompt

// To view the default settings, hold "alt" while clicking on the "Settings" button.
// For documentation on these settings, see: https://aka.ms/terminal-documentation
{
    "$schema": "https://aka.ms/terminal-profiles-schema",
    "defaultProfile": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
    "profiles": [
        {
            // Make changes here to the powershell.exe profile
            "guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
            "name": "Windows PowerShell",
            "commandline": "powershell.exe",
            "hidden": false
        },
        {
            // Anaconda
            "guid": "{4E6F14F9-4636-4A6B-93D4-7C9CB913FD5B}",
            "name": "Anaconda",
            "commandline": "%windir%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe -ExecutionPolicy ByPass -NoExit -Command & 'C:\\ProgramData\\Anaconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\ProgramData\\Anaconda3' ",
            "hidden": false,
            "colorScheme": "Campbell"
        },

That should be enough to have a new profile to work with Anaconda. When we open a new Anaconda tab, we can see that it opens with the standard virtual environment (base) activated.

default anaconda prompt in windows terminal

1st step done: Anaconda integrated with Windows Terminal !

Start an Anaconda Virtual Environment in a new Windows Terminal Tab

Moving on, I’ll create and activate a new Virtual Environment named [drone], with the following commands.

conda create -n drone pip python=3.7
conda activate drone

Now I have a new environment named [drone], and I want a shortcut in Windows Terminal that opens a new tab with this VirtualEnv activated. I copied & pasted the definition of the Anaconda profile, used a new Guid, and added the following command to the end of the commandline:

conda activate drone
virtual env opened in windows terminal

As you can see in the previous image, when I open a new tab for my Drone Virtual Env, I already have it loaded. I also added a [cls] command at the end, so I can start with a clean environment.

Finally, and for reference, this is my current Windows Terminal settings file including the 2 Anaconda profiles.

{
    "$schema": "https://aka.ms/terminal-profiles-schema",
    "defaultProfile": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
    "profiles": [
        {
            // Make changes here to the powershell.exe profile
            "guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
            "name": "Windows PowerShell",
            "commandline": "powershell.exe",
            "hidden": false
        },
        {
            // Anaconda
            "guid": "{4E6F14F9-4636-4A6B-93D4-7C9CB913FD5B}",
            "name": "Anaconda",
            "commandline": "%windir%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe -ExecutionPolicy ByPass -NoExit -Command & 'C:\\ProgramData\\Anaconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\ProgramData\\Anaconda3' ",
            "hidden": false,
            "colorScheme": "Campbell"
        },
        {
            // Anaconda Drone Virtual Environment
            "guid": "{E0B3392E-E850-4071-A7B1-483751342A8C}",
            "name": "Anaconda - Drone VirtualEnv",
            "commandline": "%windir%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe -ExecutionPolicy ByPass -NoExit -Command & 'C:\\ProgramData\\Anaconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\ProgramData\\Anaconda3' ; conda activate drone; cls ",
            "hidden": false,
            "colorScheme": "Campbell"
        }
    ]
}

Happy coding!

Greetings

El Bruno

References

#Python – Easy way to install #OpenCV and #TensorFlow with Anaconda

Hi !

After sharing a couple of sessions using the drone and working with the drone camera, a couple of people contacted me with issues while trying to install OpenCV and/or TensorFlow to be used with Python.

There are plenty of tutorials about this, so I will share a very easy one.

The 1st step is to install Anaconda (see references). Once Anaconda is installed, let's launch the Anaconda PowerShell Prompt to install dependencies.

The 2 main commands to install the desired packages are:

# install TensorFlow
pip install tensorflow --user

# install OpenCV
pip install opencv-python

And that's it! All the packages should be installed. An easy way to test if both packages are installed is to launch python and run:

# check tensorflow, and display tensorflow version
import tensorflow as tf
tf.__version__

# check OpenCV, and display OpenCV version
import cv2
cv2.__version__

The output should be similar to this one:

anaconda check tensorflow and opencv with python

Happy coding!

Greetings

El Bruno

References