# Office – New Microsoft Word Editor helps support Inclusive Language


Hi!

I have already written about some of the cool features embedded in Office that help us be more productive. These features are mostly based on Artificial Intelligence, and one great example is the Microsoft Word Editor.

Today I was reading the article “New AI tools help writers be more clear, concise and inclusive in Office and across the web” (see references), and I realized that the Microsoft Word Editor focuses not only on productivity; it also helps us fight bias and be more inclusive while we write.

Important: this feature is not enabled by default. To enable it, we must follow these steps:

  • Open Word Options
  • Select the Proofing section
  • Go to the [When correcting spelling …. / Writing Style] option
  • Open Settings
  • Enable the [Inclusiveness] options
[Image: enable the Inclusiveness options in Microsoft Word]

Once we enable these options, the Editor will start checking our text for age bias, cultural bias, and more.

I’ve tested this, and it’s nice to get suggestions for gender-biased terms, like using “firefighter” instead of “fireman”.

[Image: suggestion to change “fireman” to “firefighter”]

It also suggests other options if we are using “whitelist”.

[Image: suggestion to avoid the term “whitelist”]
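To illustrate the general idea, here is a minimal Python sketch of a term-based checker. This is only a toy illustration under my own assumptions: the term list and the `suggest_inclusive_terms` function are hypothetical, and the real Word Editor relies on far more sophisticated, context-aware AI models.

```python
import re

# Hypothetical term map, for illustration only.
INCLUSIVE_SUGGESTIONS = {
    "fireman": "firefighter",
    "whitelist": "allowlist",
    "chairman": "chairperson",
}

def suggest_inclusive_terms(text: str) -> list[tuple[str, str]]:
    """Return (found_term, suggested_term) pairs for flagged words."""
    suggestions = []
    for term, replacement in INCLUSIVE_SUGGESTIONS.items():
        # Match whole words only, ignoring case.
        if re.search(rf"\b{term}\b", text, flags=re.IGNORECASE):
            suggestions.append((term, replacement))
    return suggestions

print(suggest_inclusive_terms("The fireman added the IP to the whitelist."))
# [('fireman', 'firefighter'), ('whitelist', 'allowlist')]
```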

In the references section, I share two amazing articles where Microsoft explains the details of this feature.

And, on a personal note, this is also a great tool for non-English speakers like myself to learn about inclusiveness and to do better work.

Super cool!

Happy coding!

Greetings

El Bruno

References

# Opinion – Face-Depixelizer, a sad example of how ML illustrates preexisting bias


Hi!

During the past few days, you may have seen images showing how a new ML model can start with a pixelated image of a face and… well, let me share the official project description:

Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly

GitHub, Face-Depixelizer (see references)
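Conceptually, the approach can be sketched like this. This is a minimal PyTorch-style sketch under my own assumptions (a pretrained generator `G` and a simple bilinear downscale), not the actual Face-Depixelizer code:

```python
import torch
import torch.nn.functional as F

def search_latent(G, low_res, latent_dim=512, steps=500, lr=0.05):
    """Search a generator's latent space for a high-res face whose
    downscaled version matches the given low-res input.

    G        -- assumed pretrained generator (e.g. a StyleGAN-like model)
                mapping a latent vector to a high-res image tensor
    low_res  -- target low-resolution image tensor, shape (1, 3, h, w)
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        high_res = G(z)                               # candidate high-res face
        downscaled = F.interpolate(high_res, size=low_res.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss = F.mse_loss(downscaled, low_res)        # must "downscale correctly"
        loss.backward()
        optimizer.step()
    return G(z).detach()                              # best-matching high-res face
```

Note that many different high-resolution faces downscale to almost the same low-resolution input, so this search tends to converge on whatever looks “typical” to the generative model, and that is exactly where the bias shows up.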

Great idea. Sadly, one of the first tested images shows this:

You can probably guess the source image, and you can see how wrong the guess is. However, it’s not just a one-off mistake; after a quick search we can find other bad samples from the tool.

And we can even find some scary face generations from video game characters (see references).

Why is this wrong?

Just taking a look at the generated faces will give you a sense of what’s wrong here.

There is a trend that basically denies there is an error here. Researchers in deep generative modeling are obsessed with generating photo-realistic images from more abstract, low-information representations (down-sampled images, cartoons, sketches, etc.). The technology behind this is amazing; however, in this case the problem is not just “lack of data” or a very poorly trained ML model. The model uses the popular FFHQ faces dataset, which seems to include a fairly diverse group of faces.

And here goes my question: how much did the author test this before publishing? I’m guessing that if you just share this with a couple of friends (ML enthusiasts), someone will point all these errors back to you. Unless your test circle is so lacking in diversity that you never got to this point.

So, I’ll assume the best of the author, but I also realize how these practices reflect a specific type of bias in ML, and in software development in general.

These days, I learned a lot about history and empathy, and in the end I think we all need to do our best to be better humans.

In the following video, you will find an amazing story and examples about bias in Machine Learning.

Bonus: if you wonder how this works with Asian faces, let’s share a Lucy Liu sample.

Happy coding!

Greetings

El Bruno

Resources