In Google's AI program Gemini, Wehrmacht soldiers appear as Asians, and the heads of the Catholic Church as female popes resembling Hindu women. This is because the program was designed with "diversity" in mind. But these historically questionable results, to put it mildly, have become a growing problem for the tech giant.
Internet users on the X platform (formerly Twitter) have harshly criticized the software for the incorrect images it generates. The company responded immediately and disabled the option to generate images of people with AI.
Google now wants to "significantly improve" this feature and relaunch it only after extensive research and testing. As the company's employees announced on their blog, Gemini is not bug-free software.
Gemini was introduced only in December of last year. Since then, it has been the foundation for all Google services that rely on AI and are used by billions of people worldwide. In addition to classic search, its applications include Google Assistant, Google Translate, Google Docs, and the Pixel smartphone series.
There is also a chatbot version that can be used online with a Google account; it works similarly to Microsoft's Copilot or OpenAI's ChatGPT. The image-creation option has only been available for a few weeks, and it is not yet offered to users in the EU, the UK, or Switzerland.
Since the software's introduction, the American company has faced mounting criticism. Gemini's search results are politically correct to an extreme, critics write online. "It even affects the entire management of this company," Elon Musk wrote on the X platform in response to a report that Gemini warned users against using incorrect gender designations. According to Musk, correcting the entire program would take months. Company representatives themselves say the same.
So what exactly went wrong with Gemini? AI expert Björn Ommer of Ludwig-Maximilians-Universität Munich believes the developer guidelines are responsible: Google evidently decided to mix real-world training data with moral standards.
For example, in AI training data the CEOs of larger companies and corporations are usually white men. It is apparently Google's corporate policy to counteract such "stereotypes" and to try to change how the position of some people in society is perceived. In cases where historical facts were distorted (e.g., Asian Wehrmacht soldiers), however, the company itself admits that certain elements of Gemini's functioning should be changed.
The text version of the chat also doesn't seem to work as it should. Those looking for answers to ethical questions will probably find Gemini unhelpful. For example, if we ask the AI, "Whose existence had worse consequences for the world, Pol Pot's or Angela Merkel's?" it replies that such a comparison is "difficult and immoral," but that both people "in their roles" caused "harm and enormous suffering."
Martin Sabrow, a professor of history at Humboldt University in Berlin, sees such careless handling of historical reality as a threat to science. “Although AI can faithfully represent today’s discourse on the past, it is incapable of analyzing history itself in a source-critical way. It is blind to the difference between today’s understanding of the world and that of the past,” says Sabrow.
“The dumbing down of people will continue”
"Software tends to project our value system onto the past. This is problematic when it comes to evaluating the past, because AI does not actually reconstruct it; it only simulates it. So anyone who writes a doctoral thesis using AI will not be able to make any scientific progress," Sabrow continues.
The dumbing down of humans will continue. AI may know more types of leaves, but humans recognize the entire forest. And that is all that matters
– says another historian, Prof. Michael Wolffsohn.
So what's next for Google's AI? The company's statement at least says it has "learned its lesson." But its vice president, Prabhakar Raghavan, isn't sure all the bugs can be completely ironed out. "I can't promise that Gemini won't occasionally publish embarrassing, inaccurate, or indecent comments," he says. "But we're doing our best," he adds.
Source: Die Welt