Preprint / Version 1

Language models and political bias: do LLMs reflect public opinion in Brazil?

Authors

  • Carlos Freitas, Rio de Janeiro State University, https://orcid.org/0000-0002-2493-8154
    • Conceptualization
    • Data Curation
    • Methodology
    • Visualization
    • Writing – Original Draft Preparation
    • Writing – Review & Editing
  • Tomás Paixão Borges, Rio de Janeiro State University, https://orcid.org/0000-0002-5276-6636
    • Conceptualization
    • Writing – Original Draft Preparation
    • Writing – Review & Editing
    • Methodology
    • Data Curation
    • Visualization
  • Pedro Paixão Borges, Pontifical Catholic University of Rio de Janeiro, https://orcid.org/0000-0002-1081-4679
    • Conceptualization
    • Data Curation
    • Methodology
    • Visualization
    • Writing – Original Draft Preparation
    • Writing – Review & Editing

DOI:

https://doi.org/10.1590/SciELOPreprints.15668

Keywords:

artificial intelligence, LLMs, political bias, language models, public opinion

Abstract

This article examines whether large language models (LLMs) reflect or diverge from the political preferences of the populations that use them. Existing research typically measures political bias using abstract ideological scales or benchmarks derived from Anglo-American contexts, implicitly assuming cross-national comparability. We propose an alternative approach that conceptualizes bias as divergence from the public opinion of a given population. Using the Brazilian Electoral Study (ESEB) 2022 as a benchmark, we compare responses from four widely used LLMs — ChatGPT, DeepSeek, Gemini, and Grok — across 22 political questions covering five thematic domains. Each model was queried 50 times per question, allowing estimation of both central tendencies and response variability. Results indicate systematic but multidirectional misalignment between LLM outputs and Brazilian public opinion. On questions about democracy, models tend to be more protective of democratic institutions than the average Brazilian. Models also tend to adopt more liberal positions on moral and diversity issues, while Brazilians take more punitive stances on criminal justice than the models; and, counterintuitively, models show, on average, weaker support for some redistributive programs than the electorate. We also observed differences between models, though to a lesser degree. The findings contribute methodologically by introducing population-specific benchmarks for evaluating political bias, and substantively by extending the debate beyond Anglo-American contexts, raising implications for the role of AI systems as political information intermediaries in diverse democracies.

Posted

05/07/2026

How to Cite

Freitas, C., Paixão Borges, T., & Paixão Borges, P. (2026). Language models and political bias: do LLMs reflect public opinion in Brazil?. SciELO Preprints. https://doi.org/10.1590/SciELOPreprints.15668

Section

Applied Social Sciences

Research data

Freitas, Carlos; Paixão Borges, Tomás; Paixão Borges, Pedro, 2026, "Replication data for: Language models and political bias: do LLMs reflect public opinion in Brazil?", https://doi.org/10.48331/SCIELODATA.K9QAAG, SciELO Data, V1