#ComputerVision – Object Detection with #YoloV4 (work in progress …) and let’s think about ethics in Computer Vision


Hi!

So after yesterday's post, where I used YoloV3 and MobileNetSSD, I also remembered that YoloV4 was released in April. I managed to make my code work with YoloV4, with some poor FPS results.

If you are interested in the code, let me know and I'll be happy to share it. It's still a mess: working, but a mess.
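In the meantime, here is a minimal sketch of the kind of inference loop involved. This is not my messy code from the post: it uses OpenCV's DNN module (which supports YOLOv4 from OpenCV 4.4 onwards, since earlier versions lack the Mish activation), and the file names `yolov4.cfg` / `yolov4.weights` are assumptions, the defaults from the official darknet repository.

```python
import numpy as np

def decode_yolo_output(layer_output, frame_w, frame_h, conf_threshold=0.5):
    """Turn one raw YOLO output array (N rows of
    [cx, cy, w, h, objectness, class scores...]) into pixel-space boxes."""
    boxes, confidences, class_ids = [], [], []
    for detection in layer_output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > conf_threshold:
            # YOLO coordinates are normalized to [0, 1]; scale to pixels
            cx, cy, w, h = detection[:4] * np.array(
                [frame_w, frame_h, frame_w, frame_h])
            boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
            confidences.append(confidence)
            class_ids.append(class_id)
    return boxes, confidences, class_ids

def detect(frame):
    """Run YOLOv4 on a single BGR frame and return boxes/scores/classes."""
    import cv2  # requires OpenCV >= 4.4 for YOLOv4
    # Assumed file names: the defaults from the darknet repo
    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    boxes, confidences, class_ids = [], [], []
    for layer_output in outputs:
        b, c, i = decode_yolo_output(layer_output, w, h)
        boxes.extend(b)
        confidences.extend(c)
        class_ids.extend(i)
    return boxes, confidences, class_ids
```

For a webcam loop you would call `detect()` per frame (which is exactly where the poor FPS shows up on CPU) and then apply non-maximum suppression, e.g. with `cv2.dnn.NMSBoxes`, before drawing the boxes.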

Abstract: There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100.
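One small piece of that abstract that is easy to illustrate is the Mish activation, defined as mish(x) = x · tanh(softplus(x)), where softplus(x) = ln(1 + eˣ). A quick numpy sketch (my own illustration, not code from the YoloV4 repo):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).
    Smooth and non-monotonic, unlike ReLU."""
    softplus = np.log1p(np.exp(x))  # ln(1 + e^x)
    return x * np.tanh(softplus)

# Behaves roughly like the identity for large positive x,
# and smoothly approaches 0 for large negative x.
```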

However, I also learned part of the story behind YoloV4, and it is very relevant to our days. The next 10-minute video really nails the explanation of how YoloV4 works.

YOLO History

YOLO was developed by Joseph Redmon. It was first presented in 2016, and it was key for object recognition research, leading to better and faster Computer Vision algorithms.

The latest version, YOLO v4, is currently developed by three developers:

  • Alexey Bochkovskiy
  • Chien-Yao Wang
  • Hong-Yuan Mark Liao

No Joseph Redmon in YOLOv4?

Joseph Redmon did not take part in YOLO v4 because of the potential misuse of his tech. He recently announced that he would stop doing computer vision research altogether because of military applications and ethical concerns.

So, why is this important? It's all about how we use this technology. There are amazing advances in the Computer Vision area, but we are also lacking regulation about how to use them.

IBM announced that they will no longer offer facial recognition software

Two days ago, IBM announced that they will no longer offer facial recognition software. The Verge wrote an amazing article about this (see references). These sentences really hit a point regarding Ethics and more:

IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” Facial recognition software has come under scrutiny for issues with racial bias and privacy concerns.

Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses.

The Verge, IBM will no longer offer, develop, or research facial recognition technology

There it is. Think about this.

Happy coding!


El Bruno