Large Language Models for Psychological Assessment: A Comprehensive Overview

  • Jocelyn Brickman
  • Mehak Gupta
  • Joshua R. Oltmanns

Research output: Contribution to journal · Article · peer-review

Abstract

Large language models (LLMs) are extraordinary tools demonstrating potential to improve the understanding of psychological characteristics. They provide an unprecedented opportunity to supplement self-report in psychology research and practice with scalable behavioral assessment. However, they also pose unique risks and challenges. In this article, we provide an overview and guide for psychological scientists to evaluate LLMs for psychological assessment. In the first section, we briefly review the development of transformer-based LLMs and discuss their advances in natural language processing. In the second section, we describe the experimental design process, including techniques for language data collection, audio processing and transcription, text preprocessing, and model selection, and analytic matters, such as model output, model evaluation, hyperparameter tuning, model visualization, and topic modeling. At each stage, we describe options, important decisions, and resources for further in-depth learning and provide examples from different areas of psychology. In the final section, we discuss important broader ethical and implementation issues and future directions for researchers using this methodology. The reader will develop an understanding of essential ideas and an ability to navigate the process of using LLMs for psychological assessment.
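The text-preprocessing stage mentioned in the abstract can be sketched minimally. This is an illustrative assumption, not the article's own pipeline: the bracketed-annotation convention (e.g., "[pause]") and the specific filler-word patterns are choices made here for the example, and any real transcript cleanup would need conventions matched to the transcription service used.

```python
import re

def preprocess_transcript(text: str) -> str:
    """Minimal cleanup of an interview transcript before LLM-based analysis.

    Assumed conventions (hypothetical, for illustration only):
    - non-speech annotations appear in square brackets, e.g. "[pause]"
    - filler words like "um"/"uh" should be stripped
    """
    # Remove bracketed non-speech annotations such as "[pause]" or "[laughs]"
    text = re.sub(r"\[[^\]]*\]", " ", text)
    # Remove common filler words left by automatic transcription
    text = re.sub(r"\b(um+|uh+|erm*)\b", " ", text, flags=re.IGNORECASE)
    # Collapse repeated whitespace introduced by the removals above
    return re.sub(r"\s+", " ", text).strip()
```

A usage example: `preprocess_transcript("Uh I felt um anxious [pause] yesterday")` yields `"I felt anxious yesterday"`. In practice, how aggressively to clean is itself a research decision, since disfluencies can carry psychologically meaningful signal.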

Original language: English
Article number: 25152459251343582
Journal: Advances in Methods and Practices in Psychological Science
Volume: 8
Issue number: 3
State: Published - Jul 1 2025

Keywords

  • deep learning
  • fine-tuning
  • large language models
  • natural language processing
  • prompt engineering

