Dec 10, 2021

5 Notable Times AI Made Translation Mistakes

5 min read

Artificial Intelligence (AI) has come a long way, but it’s not perfect—especially in the realm of interpreting and translating human language.
 
We have written in the past about why AI translators have not and will not replace humans. The reasons are many, from misinterpreting idioms and metaphors to lacking an understanding of humor, culture and context.

 

The gap in accuracy between human and machine translation matters most when you run a business or offer a service.
 

One AI error may simply confuse a client; another may damage your reputation.
 

Though fun to read about, the following AI translation mistakes prove that human interpreters remain a necessity.
 

1. Google Becomes a Religious Prophet

A couple of years back, Google Translate became the subject of both curiosity and concern. The tech giant began receiving questions about its seemingly prophetic translations.
 
Reddit users were the first to discover that typing nonsense into the application resulted in strange, religious-sounding prophecies.
 
In one noteworthy example, typing the word “dog” 19 times in a row and selecting a translation from Maori into English produced the following:
 
“Doomsday Clock is three minutes at twelve. We are experiencing characters and a dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus’ return.”
 
It wasn’t long before the translations piqued interest and spawned theories. From Reddit to Twitter, people blamed a myriad of sources, from ghosts and demons to disgruntled Google employees. Some suspected that the translations were pulled from people’s private emails or messages. Others figured they were the result of people abusing the app’s “Suggest an edit” feature, which lets users propose better translations.
 
The real reason was far less sinister.
 
Many machine-learning translators are trained, understandably, on actual human text. The problem is that religious texts are far more widely available than most other kinds of text, especially for less common languages.
 
As a result, when users entered nonsense, essentially confusing the AI, it fell back on the religious texts that dominated its training data.
 
You may think it doesn’t matter how machines translate nonsense in the first place, but typos and garbled input are a natural, frequent occurrence that translators have to keep in mind.
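If you’re curious to poke at this kind of behavior yourself, here’s a minimal sketch using the unofficial googletrans package, a third-party wrapper around Google Translate rather than an official Google API. Note that Google patched this particular glitch years ago, so the output today should be mundane rather than prophetic.

```python
# Minimal sketch of recreating the "dog" experiment, assuming the unofficial
# third-party googletrans package (pip install googletrans==4.0.0rc1).
# Google has long since fixed this glitch, so expect an ordinary result today.
from googletrans import Translator

translator = Translator()

# The famous input: "dog" repeated 19 times, translated from Maori to English.
nonsense = " ".join(["dog"] * 19)
result = translator.translate(nonsense, src="mi", dest="en")

print(result.text)  # In 2018, this produced an eerie doomsday "prophecy".
```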
 

2. Facebook Translation Leads to Faulty Arrest

In 2017, Israeli police arrested a Palestinian man near Jerusalem for a Facebook post he made that morning.
 
The post was a picture of him leaning against a bulldozer on a construction site, and his caption read “yusbihuhum” or “good morning.”
 
The reason for the arrest?
 
Facebook’s artificial intelligence translation service had translated the good morning message into “hurt them” in English and “attack them” in Hebrew.
 
The man was arrested and questioned on suspicion that he planned to use the bulldozer in an attack before authorities realized their mistake. Facebook issued a public apology soon after.
 

3. Medical Prescriptions Gone Wrong

According to federal guidelines, hospitals and health care organizations are supposed to provide professional interpreters and translators for any patient who doesn’t speak English.
 
Despite these guidelines, many do not, and staff often attempt to communicate with non-English-speaking patients through free services like Google Translate.
 
Google Translate has improved over the years, but a recent study shows that it remains imperfect.
 
Currently, the app is about 90% accurate for Spanish and 80-90% accurate for Tagalog, Korean and Chinese. These rates still leave 10-20% of translations inaccurate, and where miscommunication leads to misdiagnosis and mistreatment, anything above 0% shouldn’t be tolerated.
 
The newer study also discovered a significant drop-off in accuracy for less common languages, likely because the AI has less reference material to learn from. Farsi was only 67% accurate and Armenian only 55%.
 
Here’s a prominent example from the study of a mistranslation into Armenian.
 
In English: You can take over the counter ibuprofen as needed for pain.
Translated: You may take anti-tank missile as much as you need for pain.

 
And here’s another example in Chinese, one of the higher-accuracy languages.
 
In English: Your Coumadin level was too high today. Do not take any more Coumadin until your doctor reviews the results.
Translated: Your soybean level was too high today. Do not take anymore soybean until your doctor reviews the results.

 

4. Not-So-Professional Lecture Translations

In another example of an AI translation mistake, Ray Dalio gave a speech at which voice-to-text transcription and machine translation were used to open the lecture to a larger audience. The consequences, though not as dire as those in medicine, made for an unfortunate turn of events in a professional setting.
 
Ray Dalio is a billionaire investor, hedge fund manager and co-chief investment officer for Bridgewater Associates—the world’s largest hedge fund.
 
During his speech, live subtitles were provided in English and Chinese, but both offered inaccurate and confusing translations.
 
From the start, the AI mistranslated the words of the man introducing Dalio, repeatedly inserting the word Switzerland into his introduction.
 
Correct translation: Ray is a man with a dream.
Translated: That. One in Switzerland. Dreamer.

 
During Dalio’s speech, the mistranslations continued.
 
Correct translation: How arrogant! How could I be so arrogant?
Translated: How? Aragon, I looked at myself and I.

 
It’s worth noting that even the voice-to-text transcription was riddled with errors; speech recognition is another AI service often bundled with translation, and it doesn’t work perfectly either. At one point, the English subtitles transcribing the English speech read:
 
“It’s made my eye. Pleasure are my joy. To a wall that my son here, my family here and do a volvo. And to see. What China has a.”
 

5. Translations Promoting Gender Biases

One risk when using AI translation is that the machine will have learned certain prejudices through the texts it was trained on, and then relay those prejudices in its translations.
 
Google Translate, for instance, has been called out for gender bias. The app often translates subjects as male or female based on the occupation mentioned, even when their gender isn’t indicated in the text.
 
One example Google was heavily ridiculed for was the following Turkish-to-English translation, where the Turkish statements use the gender-neutral pronoun “o.”
 
Turkish: o bir mühendis
Translated: he is an engineer

 
Turkish: o bir hemşire
Translated: she is a nurse

 
Google has since updated its system to offer both masculine and feminine translations for such phrases.
 
Google’s head of translation, Macduff Hughes, explained:
 
“This is something that Google and the whole industry have been getting concerned about; that machine learning services and products reflect the biases of the data they’re trained on, which reflects societal biases, which reinforce and perhaps even amplifies those biases.”
 
Hughes gives the example of an occupation that is referred to as male 70% of the time in the text an AI is trained on: the AI may recognize the pattern and present that occupation as male 100% of the time.
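To see how a mild statistical skew can become an absolute rule, consider this toy sketch, an illustration of the general mechanism rather than Google’s actual system. A model that learns pronoun frequencies from a made-up corpus and always picks the most likely option turns a 70/30 split in its data into a 100/0 split in its output:

```python
# Toy illustration of bias amplification (not Google's actual system).
# A model that learns pronoun frequencies and always decodes the most
# likely option (argmax) turns a 70/30 split in the data into 100/0 output.
from collections import Counter

# Made-up training corpus: "engineer" pairs with "he" 70% of the time.
corpus = [("engineer", "he")] * 70 + [("engineer", "she")] * 30

counts = Counter(pron for occ, pron in corpus if occ == "engineer")
total = sum(counts.values())
probs = {pron: n / total for pron, n in counts.items()}  # {"he": 0.7, "she": 0.3}

def translate_pronoun() -> str:
    """Greedy (argmax) decoding: always output the single most likely pronoun."""
    return max(probs, key=probs.get)

# Every output is "he" -- the 70% majority became 100% of the translations.
print([translate_pronoun() for _ in range(5)])
```

A decoder that sampled from the learned distribution would preserve the 70/30 split instead of erasing the minority outright, but it would still reproduce whatever skew the training data carried.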
 

Use Human Translators In Professional Settings

AI may be highly advanced, but it can’t fully grasp the complexity of human language. And as a service provider, there are all too many ways a poor translation can hurt your business and reputation.
 
These five examples aren’t the only ones out there, but they underscore the need for human translators in professional settings, especially when human lives are involved.