Researchers Find AI Language Model GPT Effective for Multilingual Psychological Text Analysis

A new study published in the Proceedings of the National Academy of Sciences finds that GPT, the large language model underlying the AI chatbot ChatGPT, is an effective tool for analyzing psychological constructs in text across multiple languages.



Researchers from Stanford University, Princeton University and New York University tested GPT's ability to detect sentiment, emotions, offensiveness and moral foundations in texts written in 12 different languages. They found that GPT performed significantly better than traditional dictionary-based methods and nearly as well as top machine learning models that require extensive training.

"GPT achieved high accuracy in detecting psychological constructs across languages without any additional training data," said lead author Steve Rathje of Stanford University. "This makes it a powerful and easy-to-use tool for researchers looking to analyze text data in multiple languages."

The study tested three versions of GPT (3.5 Turbo, 4, and 4 Turbo) on over 47,000 manually annotated social media posts and news headlines. GPT outperformed dictionary methods by a wide margin and in some cases exceeded the performance of fine-tuned machine learning models.

Notably, GPT performed well even on lesser-spoken African languages such as Swahili and Kinyarwanda, and its accuracy improved substantially with each new version, particularly for these less common languages.

The researchers suggest GPT and similar AI models could help make advanced text analysis more accessible to social scientists around the world, potentially facilitating more cross-cultural research. However, they caution that potential biases in GPT's training data should be considered.

"While not perfect, GPT appears to be a promising tool that could democratize complex text analysis capabilities," said co-author Jay Van Bavel of New York University. "But researchers should be aware of its limitations and potential biases when using it."

The study provides sample code and a video tutorial for using GPT for psychological text analysis. The researchers hope this will enable more scientists to leverage AI language models in their work, while encouraging further research on the strengths and limitations of these rapidly evolving tools.
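The paper's approach can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes the OpenAI Python SDK, and the prompt wording and `rate_text` helper are this article's own approximations of the "simple prompts" the authors describe (e.g., "is this text negative?"), not the authors' exact code.

```python
def build_prompt(text: str, construct: str = "negative") -> str:
    """Build a minimal yes/no prompt in the spirit of 'is this text negative?'."""
    return (
        f'Is the following text {construct}? Answer only "yes" or "no".\n'
        f'Text: "{text}"'
    )

def parse_answer(reply: str) -> int:
    """Map the model's free-text reply to a binary label (1 = construct present)."""
    return 1 if reply.strip().lower().startswith("yes") else 0

def rate_text(text: str, construct: str = "negative", model: str = "gpt-4") -> int:
    """Query the OpenAI Chat Completions API (requires the OPENAI_API_KEY
    environment variable and `pip install openai`)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(text, construct)}],
        temperature=0,  # deterministic output for annotation-style tasks
    )
    return parse_answer(response.choices[0].message.content)
```

Because the prompt carries the entire task description, swapping in a different construct ("offensive", "angry", "about fairness") or a non-English text requires no retraining, only a different argument to `rate_text` — which is the accessibility argument the authors make.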



Abstract

The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large-language model (LLM) underlying the AI chatbot ChatGPT, can be used as a tool for automated psychological text analysis in several languages. Across 15 datasets (n = 47,925 manually annotated tweets and news headlines), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect psychological constructs (sentiment, discrete emotions, offensiveness, and moral foundations) across 12 languages. We found that GPT (r = 0.59 to 0.77) performed much better than English-language dictionary analysis (r = 0.20 to 0.30) at detecting psychological constructs as judged by manual annotators. GPT performed nearly as well as, and sometimes better than, several top-performing fine-tuned machine learning models. Moreover, GPT’s performance improved across successive versions of the model, particularly for lesser-spoken languages, and became less expensive. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., “is this text negative?”) and little coding experience. We provide sample code and a video tutorial for analyzing text with the GPT application programming interface. We argue that GPT and other LLMs help democratize automated text analysis by making advanced natural language processing capabilities more accessible, and may help facilitate more cross-linguistic research with understudied languages.
