LLMs: »Independent, complex thinking not (yet) possible after all«

TU Darmstadt | Informationsdienst Wissenschaft (idw)

2024/08/12

According to a new study conducted by TU Darmstadt's UKP Lab, AI models such as ChatGPT are less capable of learning independently than previously assumed. The study finds no evidence that so-called large language models (LLMs) are beginning to develop a general »intelligent« behaviour that would enable them to proceed in a planned or intuitive manner or to think in a complex way. The study, »Are Emergent Abilities in Large Language Models just In-Context Learning?«, was led by UKP director Prof. Iryna Gurevych and her colleague Dr. Harish Tayyar Madabushi from the University of Bath (UK). It will be presented in August at the annual conference of the renowned Association for Computational Linguistics (ACL) in Bangkok, the largest international conference on automatic language processing.

»However, our results do not mean that AI is not a threat at all,« emphasised Gurevych. »Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.«

Learn more here:

»Doch (noch) kein selbstständiges, komplexes Denken möglich: Studie unter Leitung der TU zeigt Begrenztheit von ChatGPT & Co.« [»Independent, complex thinking not (yet) possible after all: Study led by TU shows limitations of ChatGPT & co.«] Press Release TU Darmstadt, August 12, 2024.

»Independent, complex thinking not (yet) possible after all: Study led by TU shows limitations of ChatGPT & co.« Press Release idw – Informationsdienst Wissenschaft, August 12, 2024.

Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, Iryna Gurevych (2024): »Are Emergent Abilities in Large Language Models just In-Context Learning?« DOI: 10.48550/arXiv.2309.01809.