A.3 Digital Economy: Economics of Privacy and Trust in AI Applications

The subarea A.3, “Economics of privacy and trust in artificial intelligence applications”, investigates users’ trust in artificial intelligence (AI) applications. AI, as a general-purpose technology, has recently found its way into businesses and organisations. Capable of learning from external data, AI and particularly machine learning (ML) promise enormous advantages for users and organisations, and ML methods are therefore becoming increasingly important in the digital economy. However, many ML algorithms behave like black boxes whose results are not precisely comprehensible. Consequently, users have concerns about and distrust towards ML-based applications. Subproject A.3 investigates the application of AI and ML-based systems in the economy in order to understand users' attitudes and behaviour towards AI (e.g. by analysing factors for the implementation and usage of AI applications). In addition, trust in AI systems will be investigated from a multi-layer perspective: factors influencing users' trust in AI systems will be identified, and measures for increasing this trust will be elaborated.

Current PhD project of subarea A.3:

Potentials and Challenges for the Adoption of Artificial Intelligence – An Economic Investigation of Trust in Artificial Intelligence

– Mariska Fecho –

Artificial intelligence (AI), as a key technology of the 21st century, is becoming increasingly important in many organizations across various industries. AI can be applied to different fields of activity to make business processes more efficient. With the availability of large amounts of data and increased computing capacity, machine learning (ML), a sub-field of AI, has gained particular attention. ML enables computers to learn specific tasks from data and to make predictions based on that data. However, AI and ML algorithms are often criticized for their black-box behavior, as they do not reveal how they arrived at a particular result.

Trust is a decisive factor in the use and adoption of technologies: it helps to overcome perceived uncertainties and risks and thus increases acceptance. At the same time, trust has emerged as a multi-faceted concept that can be created or induced in several ways. The research of area A.3 therefore aims to investigate the trust factors relevant to the adoption and usage of AI-based applications. In particular, concepts and dimensions of trust in adopting and using AI-based applications will be examined. Moreover, since transparency has been shown to increase trust in technologies, this specific factor will be investigated further in the context of AI.

Contact

Name: Prof. Dr. Peter Buxmann
Working area(s): A.3, B.2
Contact: +49 6151 16-24333, S1|02 242

Name: Mariska Fecho
Working area(s): A.3, Tandem: A.1, B.2
Contact: +49 6151 16-24321, S1|02 237a