THE PSYCHOLOGICAL IMPLICATIONS OF ANTHROPOMORPHISING ARTIFICIAL INTELLIGENCE: REFLEXIVE AND ETHICAL RISKS

Authors

Veronica Horielova, KROK University

DOI:

https://doi.org/10.31732/2663-2209-2025-79-465-477

Keywords:

consciousness, artificial intelligence, critical thinking, addiction, ethics, information behaviour

Abstract

The article offers a comprehensive analysis of artificial intelligence as a phenomenon that is increasingly entering the sphere of human consciousness, acting not only as a tool but also as a new type of communication intermediary. The study focuses on the psychological mechanisms activated through interaction with algorithmic systems, with particular emphasis on changes in cognitive processes, the delegation of ethical decision-making, the erosion of reflexivity, and the emergence of emotional dependency. Understanding AI as a source of cognitive and moral influence on human consciousness moves beyond purely technological issues into the realm of psychoanthropological discourse, which makes it possible to reveal the deep nature of recent changes in thinking, communication, and behaviour. The article argues that digital interaction with neural networks activates archetypal representations and mechanisms of projection, which lead to the perception of AI as a subject capable of empathy, moral judgement, and support. It is precisely this misconception that creates the risk of the loss of critical thinking, a shift from reflective cognition to automated perception, and the moral infantilisation of the individual. Particular emphasis is placed on AI anthropomorphism, which, by penetrating the psycho-emotional sphere, transforms conceptions of interpersonal relationships, communicative reciprocity, and the boundaries of the human "self". It is pointed out that such interaction gradually replaces authentic dialogue and undermines the capacity for ethical doubt, reflection, and responsible choice. The article highlights the need to develop new models of digital hygiene that encompass not only technical skills but also psychological and ethical literacy. The study employs critical analysis, conceptual generalisation of interdisciplinary approaches, and an associative experiment as a means of revealing hidden attitudes of consciousness towards the image of AI. The conclusion is substantiated that interaction with a technologically simulated interlocutor not only challenges us to reconsider the structure of thought but also indicates a shift in the value paradigm that defines the mode of human existence in the digital age.

Author Biography

Veronica Horielova, KROK University

PhD in Law, Associate Professor; Associate Professor of the Department of State Law and Humanities, Educational and Scientific Institute of Humanities, V.I. Vernadsky Taurida National University, Kyiv, Ukraine

References

Descartes, R. (2000). Metafizychni rozmysly [Metaphysical meditations] (Z. Borysiuk, Trans.). Yunivers. https://chtyvo.org.ua/authors/Dekart_Rene/Metafizychni_rozmysly

Kant, I. (2000). Krytyka chystoho rozumu [Critique of pure reason] (I. Burkovskyi, Trans.). Yunivers. https://chtyvo.org.ua/authors/Kant_Immanuel/Krytyka_chystoho_rozumu

Sviderska, O. I., & Dydiv, I. V. (2021). Mahichne myslennia v tsyfrovu epokhu [Magical thinking in the digital age]. Visnyk NUOU, (4(57)), 104–110.

Jung, C. G. (2018). Arkhetypy i kolektyvne nesvidome [Archetypes and the collective unconscious] (K. Kotiuk, Trans.; O. Feshovets, Ed.; 2nd rev. ed.). Astroliabiia.

Polger, T. (2006). Natural minds. Psyche, 19(4), 539–556.

Jung, C. G. (1967). Symbols of transformation: An analysis of the prelude to a case of schizophrenia (R. F. C. Hull, Trans.; 2nd rev. ed.). Princeton University Press.

Mahari, R., & Pataranutaporn, P. (2025). Addictive intelligence: Understanding psychological, legal, and technical dimensions of AI companionship. MIT PubPub.

Essel, H. B., Opoku, E. A., & Asamoah, D. (2024). ChatGPT effects on cognitive skills of undergraduate students. Education for Information, 40(2).

Iqbal, U., & Iqbal, A. (2024). Assessing the effects of artificial intelligence on student cognitive skills. ResearchGate.

Gerlich, M. (2024). Exploring motivators for trust in the dichotomy of human–AI trust dynamics. Social Sciences, 13(5), 251.

Çela, E., Binoshi, G., & Bino, N. (2024). Risks of AI-assisted learning on student critical thinking: A case study of Albania. International Journal of Digital Learning, 10(1).

Jose, S., et al. (2025). The cognitive paradox of AI in education: Between enhancement and erosion. Frontiers in Psychology, 16.

Singh, P. V. (2024). Bridging the gap: AI and the hidden structure of consciousness. ResearchGate.

Mehrotra, A. (2025). The erosion of cognitive attention in the age of AI. Medium.

Deliu, R. (2025). Cognitive dissonance artificial intelligence (CD-AI): The mind at war with itself. Harnessing discomfort to sharpen critical thinking. arXiv.

Lee, H.-P., Sarkar, A., Banks, R., & Rintel, S. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects. Microsoft Research.

Frank, D.-A., & Otterbring, T. (2024). Consumer autonomy in generative AI services: The role of task difficulty and design in enhancing trust. Behavioral Sciences, 14(3), 205.

Psychological impacts of AI dependence: Assessing the cognitive and emotional costs of intelligent systems in daily life. (2025). ResearchGate.

Published

2025-09-30

How to Cite

Horielova, V. (2025). THE PSYCHOLOGICAL IMPLICATIONS OF ANTHROPOMORPHISING ARTIFICIAL INTELLIGENCE: REFLEXIVE AND ETHICAL RISKS. Science Notes of KROK University, (3(79)), 465–477. https://doi.org/10.31732/2663-2209-2025-79-465-477