Coté

🤖 AI Answers Barely Shift Across Languages, Study Suggests

Even when asked in Arabic, Chinese, Hindi, or Spanish, modern AI chatbots like ChatGPT deliver answers aligned with secular, center-left, Western liberal values. Language introduces minor stylistic variations, but the core worldview remains strikingly consistent.

Summarized by AI.

Source summarized:
Do AIs think differently in different languages?


Key Points

  • AI responses remain largely uniform across languages, with only slight variation in tone and emphasis.
  • Modern LLMs exhibit center-left, secular, liberal values, regardless of the language used to prompt them.
  • DeepSeek, a Chinese chatbot, gently discourages protest participation when asked in Chinese but is supportive in English.
  • Refusals to answer sensitive questions (e.g., abortion, God) are more frequent in English and French.
  • Domestic violence queries produced near-identical responses in all languages, emphasizing victim safety.
  • Minor differences emerge in open-ended questions, reflecting cultural idioms rather than deep value shifts.
  • Models may “think” in English internally, then translate outputs, explaining cross-language similarity.
  • Attempts to create “unbiased” AI are inherently fraught, as all AI reflects the corpus it is trained on.

Summary

The experiment explored whether large language models (LLMs) like ChatGPT, Claude, and DeepSeek respond differently based on the language of the query, testing a kind of “AI Sapir-Whorf hypothesis.” The author translated 15 World Values Survey-inspired prompts into six languages—English, French, Spanish, Arabic, Hindi, and Chinese—and posed each question three times to three major models. While subtle differences emerged, particularly in tone or the degree of caution, the underlying value systems of the AIs proved remarkably consistent across languages.
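The source doesn't describe its exact harness, but the setup above maps to a small loop: 15 translated prompts, six languages, three models, three repeats per combination. Here is a minimal Python sketch of that design, assuming a hypothetical ask_model() stand-in for whichever chat APIs you actually wire up (OpenAI, Anthropic, DeepSeek, etc.); the names and file layout are illustrative, not the author's code.

```python
import csv
import itertools

# Hypothetical stand-in for whichever provider's chat endpoint you use;
# the original experiment's harness is not published, so this is only a
# sketch of the design: 15 prompts x 6 languages x 3 models, 3 runs each.
def ask_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your provider's chat API")

MODELS = ["chatgpt", "claude", "deepseek"]        # the three models in the article
LANGUAGES = ["en", "fr", "es", "ar", "hi", "zh"]  # the six languages tested
REPEATS = 3                                       # each question asked three times

# prompts[lang] would hold the 15 World Values Survey-inspired questions,
# pre-translated into each language (translations assumed, not shown here).
prompts: dict[str, list[str]] = {lang: [] for lang in LANGUAGES}

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["model", "language", "prompt", "run", "response"])
    for model, lang in itertools.product(MODELS, LANGUAGES):
        for prompt in prompts[lang]:
            for run in range(REPEATS):
                writer.writerow([model, lang, prompt, run, ask_model(model, prompt)])
```

Collecting every answer into one table like this is what lets the comparison in the following paragraphs be made side by side, language against language and model against model.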

One of the few notable divergences came from DeepSeek, the Chinese chatbot, which tended to gently discourage protest participation when prompted in Chinese, while delivering more neutral or supportive advice in English. Similarly, open-ended prompts about child-rearing or personal dilemmas sometimes revealed cultural coloring: Chinese answers occasionally emphasized diligence and good manners, while English responses leaned toward tolerance and self-expression. Yet, even then, DeepSeek in Chinese often mirrored the liberal stances of its Western counterparts, suggesting that training data and model architecture overwhelm linguistic nuance.

The research also surfaced patterns in refusals and moral positioning. AIs were more likely to decline opinions on topics like abortion or God in English and French, while Hindi, Arabic, or Chinese queries often bypassed these filters. Questions about domestic violence, however, yielded near-identical answers across all languages, emphasizing that abuse is never acceptable, offering hotline guidance, and prioritizing user safety. Overall, the models consistently rejected sexist or anti-LGBTQ positions, showing little alignment with the prevailing attitudes of many real-world respondents in the corresponding language communities.

Perhaps most tellingly, the study suggests that leading LLMs may internally “think” in English before translating their answers, reinforcing a shared liberal worldview. Minor linguistic quirks aside, the AI mirror reflects a single, internet-trained consensus—one that is secular, egalitarian, and rooted in modern English-language data. The author concludes that while some view this as bias, it’s also a unifying force that resists fragmenting into culturally isolated AI “bubbles.”


#tech #culture #AI #language #bias

Summarized by ChatGPT on Oct 18, 2025 at 7:41 AM.


@cote@cote.io https://proven.lol/a60da7