Limiting the abuse of artificial intelligence under the US and EU proposals – what do you need to know?


Artificial intelligence ("AI") is an extremely broad topic, and it is increasingly discussed in scientific, journalistic and political circles. The development of artificial intelligence systems is of significant importance worldwide, as AI can bring many benefits in the economic and social spheres, as well as in the fields of environment, health, finance and mobility.

The dynamic development of artificial intelligence systems makes it necessary to create a unified legal framework that ensures the further proper development of AI and, consequently, protects those who use it. Is the creation of such a unified legal framework even possible? Have any legal acts regulating artificial intelligence already been adopted? If you want to find out how the US and the EU "see it," I invite you to read our article!

 

AI Bill of Rights – a White House proposal

On October 5, 2022, the White House unveiled a document colloquially known as the "AI Bill of Rights," which aims to set rules for the deployment of artificial intelligence algorithms and guardrails for their applications. The document was drafted with input from the public, IT industry giants such as Microsoft and Palantir, and human rights and artificial intelligence groups.

What are the key tenets of the AI Bill of Rights?

According to the Office of Science and Technology Policy (OSTP), the author of the document, the AI Bill of Rights will lead to much better performance of artificial intelligence systems while minimizing harmful consequences for everyday life. The AI Bill of Rights calls for artificial intelligence systems to be safe and effective, which requires testing and consultation with stakeholders. Until now, practice has largely been limited to monitoring production systems rather than improving them based on the needs of stakeholder groups.

 

Interestingly, the AI Bill of Rights explicitly addresses the problem of discrimination by the algorithms on which artificial intelligence mechanisms are based. One of its main tenets is that artificial intelligence systems should be designed so as to protect the public, and the individuals concerned, from biased decision-making by those responsible for constructing particular algorithms. Moreover, users should have a "free hand" to opt out of an AI system, e.g. in the event of a system failure. In addition, under the White House's plan each artificial intelligence user should have a say in the use of their data – both in decision-making and with regard to the AI system itself.

 

The need to implement the AI Bill of Rights – the controversy over algorithms

According to the OSTP, there have already been a number of incidents involving algorithms, which only reinforce the belief that changes need to be made. AI needs to be standardized so as to minimize negative consequences for those using artificial intelligence systems.

 

For example, algorithmic models used in hospitals to inform patients' treatment have been found to be, to put it mildly, discriminatory. The malfunctioning of algorithms has also been observed in recruitment processes in many workplaces – a tool designed to identify potential employees in fact automatically rejected female candidates in favor of male ones. Why? Because that is exactly what the tool's algorithm had learned to do.

 

Moving forward, alongside the release of the AI Bill of Rights, the White House announced that, among others, the Department of Health and Human Services and the Department of Education will have to publish guidance in the coming months aimed at limiting the negative consequences of harmful algorithms. However, these steps do not quite coincide with the approach of the EU regulation, which explicitly prohibits or restricts certain categories of artificial intelligence. So what idea does the EU have for standardizing the operation of artificial intelligence systems? Let me explain!

 

AI Act – an EU proposal

It is not only in the US that artificial intelligence is widely discussed. In April 2021 the EU published a proposal for the so-called "AI Act," which was not received with the expected enthusiasm. Still, due to its innovative nature, the Act has the potential to be considered "leading," not only in Europe but also worldwide. Why? The AI Act is the world's first attempt to comprehensively regulate the mechanism of artificial intelligence systems and their applications. Although work on it is still in progress, its main principles seem likely to remain unchanged.

Highlights of the AI Act

What primarily distinguishes the EU Act from the one proposed in the US is the comprehensive regulation of issues such as:

  • personal scope – i.e. who will be affected by the new regulation and in what situations,
  • a risk-based approach,
  • the rights of those subjected to artificial intelligence.

 

Scope

The EU’s proposed regulation is intended to apply to, among others:

  • providers placing on the market or putting into service AI systems in the EU – i.e. natural and legal persons based in the EU or in a third country,
  • users of AI systems that are physically present or established in the EU,
  • providers and users of AI systems that are physically present or established in a third country, if the output of those systems is used in the EU,
  • importers and distributors of AI systems,
  • authorized representatives of providers who are located in the Union,
  • manufacturers of products that place on the market or put into service an AI system together with their product and under their own name or trademark, and additionally:
  • individuals subjected to an AI system,
  • EU institutions, offices, bodies and agencies acting as a provider or user of an AI system.

 

As you can see, the scope of the regulation is very broad. Among other things, it means that an AI system will be subject to EU regulation even when it is developed only in a third country – for example on behalf of an operator based in the EU, or by an operator based in another third country – to the extent that the output of the system is used in the EU.

 

A risk-based approach

This is by far one of the most important elements of the EU regulation. A risk-based approach also features, for example, in the well-known GDPR.

All right, but what is this approach all about? The premise of the regulation is that obligations and restrictions will apply only to those AI systems whose use involves a certain level of risk to the rights, health and safety of the individuals affected by them.

 

The AI Act divides AI systems into those that pose:

  • unacceptable risk,
  • high risk,
  • limited risk,
  • low or minimal risk.

As I indicated above, only AI systems categorized as unacceptable risk, high risk or limited risk will be subject to restrictions. In the case of the first group – unacceptable risk – the legislator has in mind AI systems whose use poses a threat to the rights, health and safety of individuals. A total ban on such practices is envisaged for this group.

 

High-risk systems – which, according to the draft, include systems used in areas such as biometrics, law enforcement, the administration of justice or environmental protection – must meet a number of requirements, and providers of such systems must comply with certain obligations outlined in the draft.

 

Limited-risk systems, on the other hand – i.e. those that interact with people, recognize emotions, or generate or manipulate audio/video content – are covered by transparency and information obligations.
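
To make the tiering easier to picture, below is a minimal, purely illustrative sketch in Python of how an organization might do a first-pass triage of its systems against these categories. The tier names follow the draft as described above, but the example use cases, the manipulates_behaviour flag and the classify() helper are my own assumptions made for illustration, not language taken from the Act.

# Illustrative sketch only: the tiers mirror the draft AI Act's categories,
# but the use-case lists and the classification logic are assumptions made
# for this example, not the Act's actual criteria.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # practice banned outright
    HIGH = "high"                  # requirements and provider obligations
    LIMITED = "limited"            # transparency / information duties
    MINIMAL = "minimal"            # no additional obligations


# Assumed example use cases per tier, drawn loosely from the article.
HIGH_RISK_USES = {"biometric identification", "law enforcement",
                  "administration of justice", "environmental protection"}
LIMITED_RISK_USES = {"chatbot", "emotion recognition", "deepfake generation"}


def classify(intended_use: str, manipulates_behaviour: bool = False) -> RiskTier:
    """Rough first-pass triage of an AI system's risk tier (hypothetical)."""
    if manipulates_behaviour:          # e.g. exploiting people's vulnerabilities
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if intended_use in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify("chatbot"))                   # RiskTier.LIMITED
print(classify("biometric identification"))  # RiskTier.HIGH

In practice, of course, classification under the Act will require a legal analysis of a system's purpose and context, not a simple lookup like this.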

 

The above certainly poses quite a challenge for entrepreneurs, who will have to conduct a thorough analysis of an AI system’s performance before implementing it. This will not be an easy task, so the assistance of specialists in the field of new technologies will be essential for risk assessment and analysis.

 

Rights of people subjected to artificial intelligence

Another major change, proposed in the April 20, 2022 draft report of the European Parliament's IMCO and LIBE Committees, is the definition of rights for individuals, or groups of individuals, who are subjected to AI systems.

 

The amendment stipulates that if a user's rights, health or safety are violated, they will be able to file a complaint against the providers or users of the system with the national supervisory authority of the relevant member state. In addition, they will have the right to be heard as part of the complaint procedure, as well as the right to appeal against the decision made in their case by the national supervisory authority.

 

Artificial Intelligence vs. the Law

As you can see, AI systems, in addition to having great potential and being important for development on many levels, pose quite a challenge for legislators trying to fold them into a single legal framework. Nevertheless, it is a good sign that attempts are being made to unify AI rules and mechanisms. It shows that, alongside its considerable benefits, artificial intelligence also carries many risks and not entirely safe aspects. The models I have presented, as proposed by the US and the EU, differ from each other and still need to be refined. This is not an easy task, but it is certainly a necessary one.

 

Author

  • Mateusz Sawaryn

    I am a legal counsel with 13 years of experience in providing legal services to companies and businesses. I specialize in company law. I have advised on numerous transactions involving company transformations and mergers, as well as investment entries into companies connected with raising financing from private equity and venture capital funds. I mainly deal with planning the processes and structures of investment, transformation and founding transactions for new companies. I provide ongoing corporate services to commercial law companies, including companies belonging to foreign capital groups. In recent years I have expanded my training activities, becoming, among other things, the author of webinars for the Polish Agency for Enterprise Development (Polska Agencja Rozwoju Przedsiębiorczości). Since 2015 I have run my own law firm, operating under the name Sawaryn i Partnerzy sp.k. The firm currently comprises a team of more than a dozen lawyers specializing in legal services for businesses, with a particular focus on the IT and new technologies sector, intellectual property law, personal data protection and start-ups.
