This preprint has been published elsewhere.
DOI of the published version: https://doi.org/10.3389/frai.2026.1781552
Preprint / Version 1

On the interface between Linguistics, Computer Science and Psychiatry: analyzing textual key-factors affecting BERT-based classification of schizophrenia in social media texts

Authors

  • João Victor Miranda e Silva, Pontifical Catholic University of Rio de Janeiro, https://orcid.org/0000-0002-0525-5307
    • Conceptualization
    • Data Curation
    • Formal Analysis
    • Investigation
    • Methodology
    • Project Administration
    • Writing – Original Draft Preparation
  • Cilene Rodrigues, Pontifical Catholic University of Rio de Janeiro
    • Conceptualization
    • Formal Analysis
    • Investigation
    • Methodology
    • Supervision
    • Writing – Review & Editing
  • Emilio Ashton Vital Brazil, IMPA Tech
    • Formal Analysis
    • Methodology
    • Supervision

DOI:

https://doi.org/10.1590/SciELOPreprints.14646

Keywords:

schizophrenia, language, data filtering, natural language processing

Abstract

This paper investigates language impairments in schizophrenia (SZ) by integrating insights from language-centered investigations with computational approaches. Using BERT-base-cased, a transformer-based model, it explores how linguistic markers of SZ can be identified through Natural Language Processing (NLP) techniques, with emphasis on improving performance reliability via dataset refinement and on improving the interpretability of deep-learning outputs via statistical analyses of thematic content. We report the fine-tuning of a BERT model for text classification on 31,278 Reddit posts (15,639 SZ, 15,639 controls). The experiment evaluated the model's capacity to distinguish language produced by individuals with SZ. The model achieved moderate performance (accuracy = 0.6969; AUC = 0.78) and remained stable across hyperparameter configurations, indicating that foundation models such as BERT fit readily to the data; further performance gains are therefore more likely to come from dataset refinement than from additional hyperparameter optimization. Three key factors affected the model's performance: text length, topic of discussion, and vocabulary choice. Correctly classified posts tended to be significantly longer (p < 0.001, M = 37.30), to focus on specific abstract topics (e.g., religion), and to contain more words related to mental conditions. These factors have also been reported in manual analyses of the impact of SZ on language. These findings inform the development of more accurate computational models for linguistic classification tasks, underscore the value of carefully curated datasets, and demonstrate the viability of NLP methods in profiling SZ language.
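The text-length comparison described in the abstract (correctly vs. incorrectly classified posts) can be sketched with a Welch-style two-sample t-test, which does not assume equal variances across the two groups. This is a minimal illustration only: the token counts below are invented placeholder data, not the paper's dataset, and the paper does not specify which test statistic the authors used.

```python
import math
from statistics import mean, variance

def welch_t_test(a, b):
    """Welch's two-sample t-test: returns (t statistic, approx. degrees of freedom)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb                    # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-post token counts (NOT the paper's data):
correct   = [52, 41, 38, 60, 45, 33, 49, 57, 44, 39]   # correctly classified posts
incorrect = [18, 25, 14, 22, 30, 19, 27, 16, 21, 24]   # misclassified posts

t, df = welch_t_test(correct, incorrect)
print(f"mean(correct) = {mean(correct):.1f}, mean(incorrect) = {mean(incorrect):.1f}")
print(f"Welch t = {t:.2f}, df = {df:.1f}")
```

In practice one would compute the p-value from the t distribution (e.g., with `scipy.stats.ttest_ind(..., equal_var=False)`); the stdlib-only version above is kept for self-containment.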

Posted

12/31/2025

How to Cite

On the interface between Linguistics, Computer Science and Psychiatry: analyzing textual key-factors affecting BERT-based classification of schizophrenia in social media texts. (2025). In SciELO Preprints. https://doi.org/10.1590/SciELOPreprints.14646

Section

Linguistics, literature and arts
