Quick post today. I’ll pick up yesterday’s demo, which showed facial features, and add face recognition on top of it. In other words, we’ll move from this
With a couple of extra lines for face recognition
import cv2
import face_recognition
import numpy as np

def LoadFaces():
    bruno_image = face_recognition.load_image_file(r"d:\Faces\Bruno1.jpg")
    bruno_face_encoding = face_recognition.face_encodings(bruno_image)[0]

    valentino_image = face_recognition.load_image_file(r"d:\Faces\Valen1.jpg")
    valentino_face_encoding = face_recognition.face_encodings(valentino_image)[0]

    known_face_encodings = [
        bruno_face_encoding,
        valentino_face_encoding
    ]
    known_face_names = [
        "Bruno",
        "Valentino"
    ]
    return known_face_encodings, known_face_names

video_capture = cv2.VideoCapture(0)
known_face_encodings, known_face_names = LoadFaces()

while True:
    ret, frame = video_capture.read()
    # OpenCV delivers BGR frames; face_recognition expects RGB
    rgb_frame = frame[:, :, ::-1]

    # draw the facial features (from yesterday's demo)
    face_landmarks_list = face_recognition.face_landmarks(rgb_frame)
    for face_landmarks in face_landmarks_list:
        for facial_feature in face_landmarks.keys():
            pts = np.array([face_landmarks[facial_feature]], np.int32)
            pts = pts.reshape((-1, 1, 2))
            cv2.polylines(frame, [pts], False, (0, 255, 0))

    # face recognition: compare each detected face against the known faces
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]

        #cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # filled label bar below the face with the recognized name
        cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 255, 0), cv2.FILLED)
        font = cv2.FONT_HERSHEY_COMPLEX_SMALL
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.7, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
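The matching step above picks the known face with the smallest distance, and only accepts it if compare_faces agreed (it uses a default tolerance of 0.6). With some illustrative distance values (not real output), the logic behaves like this:

```python
import numpy as np

# Illustrative distances between one detected face and two known encodings
face_distances = np.array([0.62, 0.31])
matches = [d <= 0.6 for d in face_distances]  # compare_faces' default tolerance is 0.6
known_face_names = ["Bruno", "Valentino"]

name = "Unknown"
best_match_index = int(np.argmin(face_distances))
if matches[best_match_index]:
    name = known_face_names[best_match_index]
print(name)  # Valentino
```

The second encoding is both the closest and within tolerance, so that name wins; if the closest face were still above 0.6, the label stays "Unknown".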
There is some room for performance improvement, so I’ll focus on that in the next posts.
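For readers who want a head start: a common speedup (not this post’s code, just a sketch) is to run detection on a downscaled copy of the frame (e.g. cv2.resize to 1/4 size) and then scale the face locations back up before drawing. The helper below is a hypothetical illustration of the scaling step:

```python
def scale_locations(locations, factor):
    # Scale (top, right, bottom, left) boxes found on a downscaled
    # frame back to full-frame coordinates.
    # factor = original_size / small_size (e.g. 4 for a 1/4-size frame)
    return [tuple(int(v * factor) for v in loc) for loc in locations]

# Detection ran on a frame resized to 1/4 size, so factor is 4
small_locations = [(50, 100, 90, 60)]
print(scale_locations(small_locations, 4))  # [(200, 400, 360, 240)]
```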
The complete project is available here
Greetings @ Burlington
I hate to break it to you Amigo, but this is closer to 40 lines 🤣🤣🤣.
Neat stuff, we should get this into all the inno studios. See you in Milan next week.
Hahaha, yes, the samples started with 20, and now … a little bigger 😀
See you next week in Milan! 😀
I am a student currently doing facial recognition for my project, and I have followed your guide from this website https://elbruno.com/2019/05/30/vscode-lets-do-some-facerecognition-with-20-lines-in-python-4-n/ However, there were some issues with the accuracy of the face recognition. I was wondering if there is a way to add more images to the database, instead of using only one picture, to increase the dataset? Below is an example of the code I used to retrieve the image. May I know how I can retrieve a set of images instead of a single image for better accuracy?
JIMMY_image = face_recognition.load_image_file(r"/Users/jimmy/Desktop/Face/JIMMY/JIMMY.jpeg")
Sure, happy to help!
I haven’t written a sample for this, but Adrian Rosebrock has.
Take a look at his sample here, it may help > https://www.pyimagesearch.com/2018/06/11/how-to-build-a-custom-face-recognition-dataset/
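In the meantime, one simple idea is to compute an encoding per photo and either keep them all in known_face_encodings (repeating the person’s name) or collapse them into one averaged vector. A minimal sketch of the averaging approach; the loop and file names below are placeholders, not code from the post:

```python
import numpy as np

def average_encoding(encodings):
    # Collapse several encodings of the same person into one
    # representative 128-d vector by averaging them.
    return np.mean(np.array(encodings), axis=0)

# Hypothetical usage with face_recognition (paths are placeholders):
# encodings = []
# for path in ["JIMMY1.jpeg", "JIMMY2.jpeg", "JIMMY3.jpeg"]:
#     image = face_recognition.load_image_file(path)
#     encodings.append(face_recognition.face_encodings(image)[0])
# jimmy_encoding = average_encoding(encodings)
```

Keeping all the individual encodings in the known list is often just as effective and simpler; averaging keeps the comparison loop fast when the dataset grows.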