Stop blaming the avatar-generating AI for unnecessarily sexualized images – blame the creators instead

In December 2022, the internet was in an uproar over a new app. For £1.79, Lensa AI would generate 50 artistic portraits from uploaded photos, and it quickly topped the download charts as users shared the results on social media. When some people complained about sexualized and disturbing body modifications, the app's developers posted a notice saying they could not guarantee that the content would not be offensive. But when artificial intelligence (AI) makes mistakes, that kind of disclaimer is not enough.

When I tried Lensa AI's magic avatar feature myself, I chose my gender and uploaded 10-20 headshots. It quickly returned blooming fairies, fantasy warriors and other creative characters, all with recognizable traits of mine. Magical, indeed, except that two of the portraits showed me naked and, oddly enough, with gigantic breasts. Other female-identifying users also reported being depicted nude, despite having uploaded only professional headshots.

In addition to undressing women, the app also seems to "beautify" their faces and slim their bodies. Other users reported that their dark skin was lightened, and an Asian journalist found that her portraits were far more sexualized than those of her white colleagues. From a technical standpoint, it is unfortunately not surprising that these AI-generated portraits encode harmful stereotypes, including the fetishization of Asian women.

The reason is "garbage in, garbage out", a saying that applies to most AI systems today. Their output is not magic; it depends largely on what we feed in. Lensa AI uses Stable Diffusion, a model trained on 5.85 billion images scraped from the internet. Scrape the web indiscriminately, and you inevitably end up with an app that likes to draw big breasts on my small, perfectly fine chest.
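To make that concrete, here is a minimal sketch, assuming the open-source Hugging Face diffusers library, of how an app typically builds on Stable Diffusion: it loads the pretrained weights as-is, so whatever those 5.85 billion scraped images taught the model is baked in before the developer writes a single line of their own. The model ID and prompt below are illustrative, not Lensa's actual setup.

```python
# A minimal sketch (assumed setup: Hugging Face diffusers; model ID and
# prompt are illustrative). The app controls only the prompt; the training
# data, and every bias in it, was fixed long before this code runs.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # example public checkpoint, not Lensa's
)
# pipe = pipe.to("cuda")  # optional: move to a GPU if one is available

result = pipe("artistic portrait of a person as a fantasy warrior")
result.images[0].save("avatar.png")
```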

Generative AI models require such huge amounts of training data that curating it all is impractical. And while it is possible to add some safeguards, it is impossible to predict everything an AI will create. So it makes sense that, in order to release these tools at all, companies want people to use them at their own risk. OpenAI's ChatGPT website, for example, warns users that the chat tool may generate incorrect information, harmful instructions or biased content.
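What does such a safeguard look like in practice? As one hedged example, the Stable Diffusion pipeline in the diffusers library ships with a safety checker that screens each finished image and blacks out anything it flags as NSFW. It is a post-hoc filter over outputs, not a fix for what the model learned, and a developer can simply switch it off.

```python
# A sketch of a typical safeguard (assumed setup: Hugging Face diffusers;
# model ID and prompt are illustrative). The bundled safety checker screens
# each generated image and replaces flagged ones with a black placeholder.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

result = pipe("professional headshot of a woman")
flags = result.nsfw_content_detected  # list of booleans, one per image
if flags and flags[0]:
    print("Flagged as NSFW; the image was blacked out after the fact.")
else:
    result.images[0].save("headshot.png")

# The filter is optional: passing safety_checker=None at load time disables
# it, which is exactly why "we added safeguards" is not the end of the story.
# pipe = StableDiffusionPipeline.from_pretrained(model_id, safety_checker=None)
```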

But these companies also benefit from our willingness to see AI systems as the culprits. Because autonomous systems can create their own content and make their own decisions, people project a lot of agency onto them. The smarter a system seems, the more likely we are to see it as an independent actor. As a result, companies can put a disclaimer up front, and many users will accept that it’s the AI’s fault when the tool produces offensive or harmful output.

The problem goes far beyond "magic" body edits. Chatbots, for example, have improved since Microsoft's infamous Tay started spouting racist responses within hours of its launch, but they still confront users with toxic language and dangerous suggestions. We know that image generators and hiring algorithms suffer from gender bias, and that AI used for facial recognition and in criminal justice is racist. In short, algorithms can do real harm to people.

Imagine if a zoo let its tiger out for a stroll around town and said, "We've done everything we can to train him, but we can't guarantee that the tiger won't do anything offensive." We wouldn't let them off the hook. And even more than the tiger, an AI system does not make autonomous decisions in a vacuum. People decide how and for what purpose to design it, choose its training data and parameters, and decide when to release it onto an unsuspecting population.

Companies may not be able to predict every outcome. But their claim that the output simply reflects reality is a deflection. Lensa AI's creators state that "man-made, unfiltered data from the internet introduced the model to the existing biases of humankind. Essentially, AI is a mirror of our society." But is the app a reflection of society, or of historical bias and injustice that the company's choices perpetuate and reinforce?

The persistent claim that AI is neutral is not only wrong; it also obscures the fact that none of these choices are neutral. It's fun to get new profile pictures, and there are many other valuable and important applications for generative AI. But we don't need to shield companies from moral or legal liability to get there. In fact, it would be easier for society to embrace the potential of artificial intelligence if its creators were held responsible. So let's stop pointing fingers at the AI and start talking about who is really driving the outcomes of our technological future.
