PSYCHOLOGICAL DETERMINANTS OF TRUST IN AI

DOI:

https://doi.org/10.32782/3041-2005/2025-4.42

Keywords:

trust, artificial intelligence, psychological determinants, cyberpsychology, digital well-being

Abstract

The article presents a theoretical and psychological analysis of the phenomenon of trust in artificial intelligence (AI) as a complex integrative construct that combines cognitive, emotional, and ethical-normative components. The relevance of the study is substantiated in the context of the digital transformation of society, where human interaction with intelligent systems becomes not only a technical but also a psychological process that influences users’ emotional well-being, sense of control, and mental health. The purpose of the article is to theoretically generalize contemporary approaches to studying trust in the “human–AI” system and to construct a generalized model of its psychological determinants. The methodological basis comprises systemic-structural and interdisciplinary approaches integrating achievements of cognitive, social, engineering, and ethical psychology. The synthesis of recent studies made it possible to distinguish three groups of factors that determine the level of user trust in artificial intelligence: individual-psychological (trust propensity, self-efficacy, emotional stability, ethical sensitivity), technological (competence, fairness, transparency, benevolence, social presence), and sociocultural (normative and cultural values, institutional trust, ethical regulation). The proposed author’s model demonstrates that trust is formed through the interaction of these levels and functions as a mechanism of psychological regulation in human-technology relations. The model corresponds to the FAT approach and the motives-based trust concept but emphasizes internal psychological processes: cognitive interpretation, emotional regulation, and value-based evaluation. The scientific novelty of the study lies in the synthetic integration of interdisciplinary approaches into a holistic psychological model of trust in AI.

References

Androshchuk H. O. The level of trust in artificial intelligence: An analysis of global survey results and the situation in Ukraine. Інформація і право. 2023. No. 4(47). P. 217–231. [in Ukrainian]. https://doi.org/10.37750/2616-6798.2023.4(47).291675

Artificial intelligence in Ukraine: A survey. UAspectr.com, July 30, 2025. [in Ukrainian]. URL: https://uaspectr.com/2025/07/30/shtuchnyj-intelekt-v-ukrayin-doslidzhennya/

Awareness and readiness to use artificial intelligence by the adult population of Ukraine: Survey results / S. Tarasenko et al. Problems and Perspectives in Management. 2024. Vol. 22, no. 4. P. 1–13. https://doi.org/10.21511/ppm.22(4).2024.01

Baxter G., Sommerville I. Socio-technical systems: From design methods to systems engineering. Interacting with Computers. 2011. Vol. 23, no. 1. P. 4–17. https://doi.org/10.1016/j.intcom.2010.07.003

Blanco S. Human trust in AI: A relationship beyond reliance. AI and Ethics. 2025. Vol. 5(2). P. 4167–4180. https://doi.org/10.1007/s43681-025-00690-z

European Commission. High-Level Expert Group on Artificial Intelligence (AI HLEG). Ethics Guidelines for Trustworthy AI. Brussels, 2019. URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Glikson E., Woolley A. W. Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals. 2020. Vol. 14, no. 2. P. 627–660. https://doi.org/10.5465/annals.2018.0057

Huynh M.-T., Eichner T. In generative AI we trust: Revealing determinants and outcomes of cognitive trust. AI & Society. 2025. https://doi.org/10.1007/s00146-025-02378-8

Lee J. D., See K. A. Trust in automation: Designing for appropriate reliance. Human Factors. 2004. Vol. 46(1). P. 50–80. https://doi.org/10.1518/hfes.46.1.50_303

Li Y., Wu B., Huang Y., Luan S. Development of trustworthy artificial intelligence: Insights from interpersonal and human–AI trust research. Frontiers in Psychology. 2024. Vol. 15. Article 1382693. https://doi.org/10.3389/fpsyg.2024.1382693

Mayer R. C., Davis J. H., Schoorman F. D. An integrative model of organizational trust. Academy of Management Review. 1995. Vol. 20(3). P. 709–734. https://doi.org/10.2307/258792

McAllister D. J. Affect- and cognition-based trust as foundations for interpersonal cooperation. Academy of Management Journal. 1995. Vol. 38(1). P. 24–59. https://doi.org/10.2307/256727

Waytz A., Cacioppo J., Epley N. Who sees human? The stability and variability of mentalizing in human and nonhuman agents. Perspectives on Psychological Science. 2010. Vol. 5(3). P. 219–232. https://doi.org/10.1177/1745691610369336

Published

2025-12-29

Issue

Section

Articles