The last part of this three-part series addresses questions about the liability of AI methods. When these are raised, things get tricky. Prof. Dr Marcus Becker, Head of the Master's degree programs Business Intelligence & Data Science (on-campus) and Applied Business Data Science (online program), and his team found in their analyses that the lack of transparency of algorithmic systems makes the legal examination of liability rules more difficult, as shown by the example of investment advice (robo-advisors).
The problem of causality in AI applications
Anyone asking about culpability in the case of AI-driven investment advice will hardly get anywhere. Why is that?
With black-box algorithms, the question of how decisions are made is often difficult to answer (the so-called opacity risk). Often, even the developer is no longer able to locate or reproduce the error. This creates a causality problem, which causes major difficulties for claimants: breaches of duty cannot be proven, and it remains unclear who bears the burden of proof in such situations. This is because AI systems lack an independent legal personality. Under the current legal situation, one must therefore fall back on the misconduct of the user who deploys the black-box algorithm, in accordance with Section 280 (1) BGB (German Civil Code).
Accordingly, the user himself would have to bear any damage caused to him by the AI. On this view, however, the inventor of an AI application is not liable unless it can be proven that he or she made grossly negligent errors of judgement.
Has the question of liability been forgotten, neglected or gradually pushed aside in the hope of cost savings?
Technical innovation and the adaptation of case law rarely go hand in hand. As is often the case, lawyers initially try to answer the question of a breach of duty using existing case law. At some point, however, the existing case law no longer clearly applies or can at least be interpreted in contradictory ways. Only then does the legislator react with appropriate adjustments.
The draft legislation issued by the EU Commission this week, known as the EU AI Act, is a first step in the right direction. In principle, standardised AI regulations are to be welcomed, but whether they will reduce the risks associated with the use of AI remains to be seen. It will be interesting to see whether future AI systems, such as artificial general intelligence, will have an independent legal personality.
Can the creators of AI instruments in the financial sector provide a remedy here, or have their instruments "taken on a life of their own"?
I would not yet say that these so-called robo-advisors have taken on a life of their own. As I already pointed out in 2021, machine learning algorithms are not yet used as frequently in automated asset management as one might think (Becker, M., Beketov, M., & Wittke, M. (2021). Machine Learning in Automated Asset Management Processes 4.1. Die Unternehmung, 75(3), 411-431).
Fundamentally, we are also operating in a highly regulated area here. Under the principles of the EU directive on the harmonisation of financial markets, MiFID II, investment advice, whether human or automated, must always comply with the "suitability" principle. This means that pensioners cannot simply be sold "junk bonds".
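As a purely illustrative sketch (the risk classes, product categories and thresholds below are my own assumptions, not the actual MiFID II assessment logic), a rule-based suitability filter might look roughly like this:

```python
# Illustrative sketch of a rule-based suitability check (hypothetical
# risk classes and products; not the actual MiFID II assessment logic).
from dataclasses import dataclass

@dataclass
class ClientProfile:
    risk_class: int              # 1 = very conservative ... 5 = speculative
    investment_horizon_years: int
    can_bear_losses: bool

@dataclass
class Product:
    name: str
    risk_class: int              # risk class the product belongs to

def is_suitable(client: ClientProfile, product: Product) -> bool:
    """A product is only offered if it matches the client's risk capacity."""
    if product.risk_class > client.risk_class:
        return False
    if product.risk_class >= 4 and not client.can_bear_losses:
        return False
    if product.risk_class >= 3 and client.investment_horizon_years < 5:
        return False
    return True

# Example: a conservative pensioner profile is screened out of high-risk bonds.
pensioner = ClientProfile(risk_class=1, investment_horizon_years=3, can_bear_losses=False)
junk_bond = Product(name="high-yield bond", risk_class=5)
assert not is_suitable(pensioner, junk_bond)
```

The point is simply that every recommendation passes a deterministic, auditable gate, regardless of how the underlying recommendation engine works.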
Without transparency algorithms (XAI), however, the algorithms interact with one another in ways that no one can scrutinise. As the past has shown, this can sometimes even lead to "flash crashes". To name just two examples: algorithmic trading software contributed to the stock market crash on 19 October 1987, the so-called "Black Monday".
Another example is 6 May 2010, when the Dow Jones fell by over 1,000 points within eight minutes. This crash was triggered by a simple sell order from the trading house Waddell & Reed. That cannot even be described as AI; it was simply an automated sell order that was triggered when a certain threshold was reached.
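To make that mechanism concrete, here is a deliberately naive sketch of such a threshold-triggered sell rule; the prices, threshold and position size are hypothetical and no real trading API is involved:

```python
# Naive illustration of a threshold-triggered automated sell rule
# (hypothetical prices, threshold and position size; no trading API used).
def threshold_sell(prices, stop_level, position_size):
    """Emit a single market sell of the whole position once the price
    crosses the stop level - the kind of rule that adds selling pressure
    exactly when the market is already falling."""
    for t, price in enumerate(prices):
        if price <= stop_level:
            return {"time_step": t, "action": "SELL", "size": position_size}
    return None

# A falling price series crosses the stop level at step 3 and the full
# position is sold into the decline.
order = threshold_sell(prices=[101.2, 100.5, 99.8, 98.9, 97.0],
                       stop_level=99.0, position_size=75_000)
print(order)  # {'time_step': 3, 'action': 'SELL', 'size': 75000}
```

A rule of this kind dumps volume into a market that is already declining, which is how a single large automated order can amplify a slide.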
My prediction is that if opaque black-box algorithms are unleashed on the financial markets without any validation, flash crashes could recur at ever shorter intervals. Fortunately, there are already laws that allow certain financial instruments to be excluded from trading (see Section 73 WpHG / German Securities Trading Act). This limits the potential for price slides.
There is a large number of ready-made explanation methods (such as LIME and SHAP) that are virtually universally applicable. However, companies hardly use them, partly because they are not prescribed by law.
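To illustrate how little code such an explanation step requires, here is a minimal sketch using the open-source shap package on a synthetic, purely hypothetical robo-advisor model (the feature names, data and target are assumptions made only for this example):

```python
# Minimal sketch: post-hoc explanations with SHAP for a hypothetical
# robo-advisor model trained on synthetic, illustrative data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic client features; the names are assumptions for illustration.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 80, 500),
    "net_income": rng.normal(40_000, 15_000, 500),
    "risk_tolerance": rng.uniform(0, 1, 500),
    "investment_horizon_years": rng.integers(1, 30, 500),
})
# Hypothetical target: recommended equity share of the portfolio.
y = (0.2 + 0.5 * X["risk_tolerance"]
     + 0.01 * X["investment_horizon_years"]).clip(0, 1)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each recommendation to the input features,
# turning the model's output into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the recommendations overall.
shap.summary_plot(shap_values, X)
```

The summary plot attributes each recommendation to the input features, which is exactly the kind of input-output association discussed next.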
Due to the "suitability" requirement under the MiFID II directive mentioned above, the use of transparency algorithms is, in my opinion, absolutely necessary. They would increase user trust, create associations between input and output, and produce reliable and fair robo-advisor (RA) systems that also comply with current data protection regulations and thus protect the identity of the user.
Without the explainability provided by transparency algorithms, the reviewing specialist can neither interpret nor validate the model's decisions.
In practice, the right to explanation sometimes gives way to the right to be forgotten - what do you think about this?
Trust, associations, reliability, fairness, identity: the five aspects that transparency algorithms address.
Is strict liability feasible?