Artificial Intelligence The Legal Challenge

Artificial intelligence is playing an increasingly important role in business activity. Its increasingly frequent use raises several legal questions, in particular concerning intellectual property, personal data and, above all, liability. By Laurent Szuskin, Partner at Baker McKenzie.

Artificial intelligence ("AI") is not the subject of a legal definition. It can nevertheless be understood as a method and/or a technological device designed and developed to solve complex problems that require intelligence, and therefore learning capabilities. AI is developing in many sectors of activity. In the financial sector, for example, it is used for risk management and trading automation. The healthcare sector is also increasingly using it: the UK health system uses AI and machine learning, in particular to improve diagnostics through the rapid and deep analysis of large quantities of data.

Intellectual property and personal data, two major issues

AI raises many legal questions, two aspects of which stand out.

From the point of view of intellectual property, the algorithms underlying AI are, as a rule, not protected by copyright, whereas the software and databases of an AI system are. Intellectual property must therefore be applied distributively: an invention embodied in an AI system could be protected by patent, the underlying software by copyright, databases by the right specific to them, and so on.

Another question is whether the result produced by the AI belongs to the developer of the solution, to the supplier, to the company that integrated the solution into its production systems, or to another party such as the one who supplied the data. In the absence of a legal regime or case law to date, the solution must be contractual. Intellectual property and know-how clauses governing the development of AIs or the provision of AI services must allocate ownership, or at least provide for the contractual assignment, of the results arising from use of the solution.

With regard to personal data, AI involves data processing on a very large scale – "big data". Risks increase with respect to the security of these data, but also from a regulatory compliance standpoint. With the entry into force of the new European regulation on the protection of personal data in May 2018, "privacy by design" or "by default" – protecting privacy from the design stage of the product or service – offers a way to develop these tools with legal issues taken into account from the outset. The principle of "data minimization", which consists of defining the purposes of a processing operation and collecting only the data really necessary for those purposes, should also be observed.
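
In engineering terms, data minimization can be sketched as a simple intake filter: each processing purpose declares up front which fields it actually needs, and anything else submitted is discarded before storage. The purposes and field names below are illustrative assumptions, not drawn from the article:

```python
# Hypothetical sketch of "data minimization": each declared purpose
# whitelists the fields necessary for it; all other submitted fields
# are dropped before the record is stored.

PURPOSE_FIELDS = {
    "fraud_detection": {"account_id", "transaction_amount", "timestamp"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the given purpose."""
    if purpose not in PURPOSE_FIELDS:
        raise ValueError(f"No declared purpose: {purpose}")
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

submitted = {
    "account_id": "A123",
    "transaction_amount": 42.0,
    "timestamp": "2018-05-25T10:00:00Z",
    "email": "user@example.com",   # not needed for fraud detection
    "birthdate": "1980-01-01",     # declared for no purpose at all
}

stored = minimize(submitted, "fraud_detection")
```

Refusing to store undeclared fields at the point of collection, rather than deleting them later, is what makes this "by design" rather than an after-the-fact cleanup.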

Responsibility at the center of all questions

Beyond personal data and intellectual property, one legal question appears fundamental for AI: the attribution of liability for damage arising from its design or use.

In recent months, the media have reported several AI malfunctions. For example, a chatbot developed by a software publisher to respond and interact on Twitter, and able to learn from its interactions with users, became problematic when it absorbed the language used by its interlocutors and reproduced that behavior, responding with racist, xenophobic or sexist tweets. Moreover, failures of embedded AI solutions, such as those in autonomous cars, may have caused the death of at least one driver to date.

There is thus a clear problem of liability. Liability among the parties involved can be allocated between them by contract – although clauses limiting or excluding the liability of a professional are not enforceable against consumers under French law. Moreover, such contracts will not be enforceable against a third party who suffers damage and brings an action in tort: there is therefore a risk of being sued by any person or third party harmed by the malfunction of the AI. The legal answer is still unknown in this respect.

The European Parliament has just taken a stand by calling for European rules in the field of robotics (which is not far removed from that of AI): members of the working group on this subject advocate determining who would be responsible for the consequences of robotics "on social, environmental and human health aspects". As regards autonomous cars, they favor a mandatory insurance scheme to guarantee full compensation for victims of accidents caused by this type of vehicle. In the longer term, an "electronic person" status could, in their view, be envisaged. The European Commission is expected to take up the issue in the coming months in an attempt to build EU-wide regulation.

In this still uncertain and evolving environment, legal issues must therefore be addressed from the earliest design of these solutions, including through contractual means. Investing in a start-up dedicated to artificial intelligence – so-called "deep tech" – can also be a solution for companies wishing to use this technology, especially since Europe – and Paris in particular – is home to some of the most successful AI centers.

