# Opinion – Face-Depixelizer, a sad sample of how ML illustrates preexisting bias


Hi!

During the past days, you may have seen these images about how a new ML model can start with a pixelated image of a face and… well, let me share the official project description:

> Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly.

— GitHub, Face-Depixelizer (see references)
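To make that description concrete, here is a minimal toy sketch of the idea: search a generator's latent space for an output whose *downscaled* version matches the low-resolution input. This is only an illustration under simplified assumptions — a random linear map stands in for the generator (the real project uses StyleGAN) and plain gradient descent stands in for its search procedure; the names `generate` and `downscale` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model: a fixed random linear map from a
# latent vector z to a "high-resolution" image, flattened to 64 values.
LATENT_DIM, HI_RES, FACTOR = 8, 64, 4
G = rng.normal(size=(HI_RES, LATENT_DIM))

def generate(z):
    # "High-res" image as a flat vector.
    return G @ z

def downscale(img, factor=FACTOR):
    # Average-pool blocks of `factor` pixels to mimic down-sampling.
    return img.reshape(-1, factor).mean(axis=1)

# A low-resolution target, as if we were handed a pixelated face.
z_true = rng.normal(size=LATENT_DIM)
low_res = downscale(generate(z_true))

# Search the latent space by gradient descent so that the generated
# image downscales to the observed low-res input.
z = np.zeros(LATENT_DIM)
lr = 0.05
for _ in range(2000):
    residual = downscale(generate(z)) - low_res
    # Gradient of 0.5 * ||downscale(G @ z) - low_res||^2 w.r.t. z:
    # average-pooling's transpose repeats each residual entry, scaled.
    grad = G.T @ np.repeat(residual / FACTOR, FACTOR)
    z -= lr * grad

loss = float(np.sum((downscale(generate(z)) - low_res) ** 2))
print(f"final loss: {loss:.6f}")
```

The key point of this sketch is also the source of the problem the post describes: *many* high-resolution images downscale to the same pixelated input, and the search simply returns whichever one the generative model considers most plausible — so the model's biases decide the face you get back.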

Great idea; sadly, one of the first tested images shows this:

You can probably guess the source image, and you can see how wrong the guess is. However, it's not just a one-off mistake: after a quick search, we can find several other bad samples from the tool.

And we can even find some scary face generations from video game characters (see references).

Why is this wrong?

Just taking a look at the generated faces will give you a sense of what's wrong here.

There is a trend that basically denies there is an error here. Researchers in deep generative modeling are obsessed with generating photo-realistic images from more abstract, low-information representations (down-sampled images, cartoons, sketches, etc.). The technology behind this is amazing; however, in this case the problem is not just a lack of data or a poorly trained ML model. The model uses the popular FFHQ faces dataset, which seems to include a fairly diverse group of faces.

And here goes my question: how far did the author test this before publishing? I'm guessing that if you just shared this with a couple of friends (ML enthusiasts), someone would point all these errors back to you. Unless your test circle is so poorly diverse that you never got to this point.

So, I'll assume the best from the author, but I'll also note how these practices define a specific type of bias in ML, and in software development in general.

These days, I learned a lot about history and empathy, and in the end I think we all need to do our best to be better humans.

In the following video, you will find an amazing story and samples about bias in Machine Learning.

Bonus: if you wonder how this works with Asian faces, let's share a Lucy Liu sample.

Happy coding!


El Bruno

More posts in my blog ElBruno.com.

More info at https://beacons.ai/elbruno


