Part 2 of the three-part article series looks at the lack of transparency in artificial intelligence processes. This poses a significant problem, as those affected can hardly understand the decisions or identify possible errors and harm. Prof. Becker therefore calls for black box algorithms to be made transparent in order to ensure appropriate accountability. The head of ISM's Business Intelligence & Data Science (M.Sc., on-campus program) and Applied Business Data Science (M.Sc., distance learning program) degree programs suggests practical approaches.
Validating algorithms
If you put AI writing tools to the test, you can quickly tell whether a sentence or paragraph actually makes sense or has merely strung together grammatically correct set pieces without ultimately producing a new thought. In this field, checking is comparatively easy because, as speaking beings, we are quick to recognise flawed language or illogical reasoning.
What is the situation in other areas where we only obtain results based on complex calculation formulae? How can the quality of such processes and results be verified?
Firstly, as a stochastician, I am very pleased to see how well the application of probability theory works in practice. With ChatGPT, we have an example of a construct that learns language skills simply by reading Internet sources and generates sentences that are not only grammatically correct but sometimes also make sense. This was unthinkable just a short time ago. Probability calculations make it possible to associate not only individual words but entire text modules with one another in a way that sounds "human".
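To illustrate the underlying idea in the simplest possible terms: a toy bigram model picks each next word according to how often it has followed the previous word in the training text. This is of course a vast simplification of how large language models such as ChatGPT actually work (they use deep neural networks rather than word counts), and the miniature "corpus" below is invented purely for illustration.

```python
import random
from collections import defaultdict, Counter

# A tiny stand-in for "Internet sources" (illustrative only).
corpus = "the model reads text . the model learns patterns . the model writes text".split()

# Count how often each word follows another (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def sample_next(word):
    """Draw the next word with probability proportional to how often it followed `word`."""
    candidates = following[word]
    return random.choices(list(candidates.keys()), weights=list(candidates.values()))[0]

# Generate a short "sentence" by chaining probable continuations.
word = "the"
sentence = [word]
for _ in range(5):
    word = sample_next(word)
    sentence.append(word)
print(" ".join(sentence))
```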
We can't simply accept the predictions generated by black box algorithms without checking them.
Prof. Dr. Marcus Becker
As a researcher, I am of course also committed to the search for truth. That includes wanting to understand more precisely how an algorithm arrives at a result. Unfortunately, with many AI applications this is currently not possible.
I am therefore in favour of retaining the upper hand over the algorithms and call for the use of transparency algorithms (so-called "Explainable AI" or XAI for short) in order to open up the black box of artificial neural networks.
This is because we must also be able to validate the decisions, i.e. determine whether the results produced by the algorithms can be regarded as valid. Only then can black box algorithms meaningfully serve as a lever for human decision-making expertise. By meaningful, I mean: in such a way that their results correspond to human expert knowledge, or even extend it.
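How such a check might look in practice can be sketched with a model-agnostic transparency technique such as permutation feature importance, followed by a simple comparison of the model's decisions with expert judgements. The example below uses scikit-learn on synthetic data; the features, labels and the notion of "expert-verified labels" are assumptions made purely for illustration, not a prescription for any specific application.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # four input features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # only features 0 and 1 actually matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the trained model as a "black box" ...
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ... and open it up: which features actually drive its predictions?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")

# Validation step: how often do the model's decisions agree with expert knowledge?
# Here y_test stands in for hypothetical expert-verified labels.
agreement = (model.predict(X_test) == y_test).mean()
print(f"agreement with expert labels: {agreement:.1%}")
```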
Simply reapplying what we have already learnt over and over again only keeps our level of knowledge where it already is. We then move in a knowledge bubble: innovations fail to materialise and progress slows down.
Does this mean that AI-based applications are ultimately preventing innovation?
I cannot say at this point to what extent artificial general intelligence (AGI) can contribute to the innovation process. Many innovations have also come about purely by chance. So-called deep reinforcement learning algorithms (DRLA) likewise mix their decisions from a combination of the familiar ("exploitation") and the new ("exploration"). The exploratory moves are purely random actions that can incrementally improve the decision-making system.
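A classic way to implement this mix is the epsilon-greedy rule: with a small probability the agent tries a random action (exploration), otherwise it chooses the action it currently estimates to be best (exploitation). The minimal sketch below uses a simple multi-armed bandit rather than a full deep reinforcement learning agent, and the reward values are invented for illustration.

```python
import random

# Epsilon-greedy action selection on a simple multi-armed bandit:
# with probability epsilon we explore (random action), otherwise we exploit
# the action with the best estimated value so far.
true_rewards = [0.2, 0.5, 0.8]          # unknown to the agent (illustrative values)
estimates = [0.0] * len(true_rewards)   # the agent's current value estimates
counts = [0] * len(true_rewards)
epsilon = 0.1

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(len(true_rewards))                        # exploration
    else:
        action = max(range(len(true_rewards)), key=lambda a: estimates[a])  # exploitation
    reward = random.gauss(true_rewards[action], 0.1)                        # noisy feedback
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]      # running average

print("estimated action values:", [round(v, 2) for v in estimates])
```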
What makes human decision-making unique, however, is intuition, and intuition is very difficult to replicate technically. How well artificial general intelligence can approximate human intuition remains to be seen. Whether these processes deliver fundamental added value or only take on semi-structurable tasks, i.e. processes that follow a certain structure but cannot be fully captured by so-called "if-then relationships", is the subject of current research.