The lifecycle of an AI tool encompasses three main stages: (i) the development phase, which consists of design and manufacturing as well as initial training and validation, (ii) the use phase and (iii) the liquidation phase. Different stakeholders are involved in each stage. If an AI tool causes harm, liability needs to be allocated to one or more stakeholders. Which stakeholders are liable for the damage caused depends on (a) the stage of the lifecycle of the AI tool in which the damage occurred and (b) the nature and cause of the damage.
Allocating liability becomes more complex during the later stages of the lifecycle, due to the growing number of stakeholders involved and the growing number of potential causes for the failure of the AI tool. For example, damage caused during the use phase of an AI tool could be the direct consequence of a component that was defective from the development phase, of insufficient training during the initial training phase, of a faulty operation of the tool by the operator, or of an unexpected external interference. In this opinion, we identify the stakeholders and their liability for each stage of the lifecycle of an AI tool.
Much remains unclear with respect to liability questions surrounding AI tools. What follows represents our considered opinion, based on current laws and our collective experience of how they are applied.
In our next opinion piece, we will examine the existing liability regimes and the specific requirements and legal issues that arise when allocating liability for AI tools. We advise reading both opinion pieces for a full overview of the challenges we consider essential when dealing with AI and liability.