
Understanding AI Ethics: Future-proof with the ethical guidelines for AI technologies

Written by Angela Hirsch & Selman Özen

Use and aim of the EU AI Act

The AI Regulation, also known as the EU AI Act, is a European Union regulation governing the development, deployment and use of artificial intelligence (AI) within the EU.

The scope of the EU AI Act covers all parties involved with AI technologies, such as sellers, operators, users, producers and developers, as well as parties from third countries that offer AI systems or technologies in the European Union. AI developed elsewhere but used in the European Union therefore also falls within the scope of the EU AI Act.

The aim of the EU AI Act is to create a uniform, legally binding framework for the deployment and use of AI that is safe and trustworthy for all parties involved. In particular, AI technologies should be designed and handled in a way that protects the fundamental rights of all citizens and minimizes risks. Excluded from the scope of application are exclusively private, non-professional use as well as military purposes, national security, law enforcement and criminal prosecution by authorities.

At the same time, AI should serve as an innovative opportunity for progress in research, education and digitalization. With the EU AI Act, the EU aims to promote innovation and ensure a level playing field here too.

Finally, the EU's competitiveness also plays a role: the EU wants to position itself as a successful pioneer in responsible AI, find preventive solutions that curb the misuse of AI, and assume a protective oversight function for all users.

Existing law is affected by the measures and requirements the Act introduces. The EU AI Act supplements rules that touch on AI technologies in, for example, data protection law, anti-discrimination law, product safety, copyright, consumer protection, personal rights and employment law. The AI Regulation is intended to reinforce these existing rules and strengthen them for the new realities of digitalization.

The GDPR in particular is supplemented by the AI Regulation, as many AI technologies process personal data. This includes requirements for transparency and disclosure of the data being processed.
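To make such transparency obligations actionable, a company might keep an internal record of how each AI system processes personal data. The following Python sketch is purely illustrative; the class and field names are our own invention, not terms from the GDPR or the AI Regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystemProcessingRecord:
    """Hypothetical internal record of how an AI system processes personal data."""
    system_name: str
    purpose: str                          # why personal data is processed
    data_categories: list[str]            # e.g. ["name", "email", "usage data"]
    legal_basis: str                      # e.g. "Art. 6(1)(b) GDPR - contract"
    retention_period: str                 # how long the data is kept
    disclosed_to_subjects: bool = False   # has transparency info been published?

# Example entry for an invented system:
record = AISystemProcessingRecord(
    system_name="support-ticket-classifier",
    purpose="Routing customer tickets to the right team",
    data_categories=["name", "email", "ticket text"],
    legal_basis="Art. 6(1)(f) GDPR - legitimate interest",
    retention_period="90 days",
)
print(record)
```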

The member states undertake to address emerging AI issues flexibly in line with the EU AI Act and to take appropriate measures as soon as one of the aforementioned legal interests is threatened. Within companies, ethical guidelines for the use of AI should likewise be defined so that the organization remains true to its own ethical standards. Regular monitoring of AI systems in practice and periodic audits help here as well.
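Such monitoring can be partially automated. The sketch below shows what a minimal periodic audit check might look like; the individual checks, field names and the 180-day threshold are hypothetical examples, not requirements taken from the EU AI Act.

```python
from datetime import date

def audit_ai_system(system: dict) -> list[str]:
    """Run a few illustrative compliance checks and return any findings."""
    findings = []
    if not system.get("technical_documentation"):
        findings.append("Missing technical documentation")
    if not system.get("human_oversight_enabled"):
        findings.append("No human oversight mechanism configured")
    if system.get("days_since_last_review", 999) > 180:
        findings.append("Periodic review overdue (> 180 days)")
    return findings

# Invented example system for demonstration:
system = {
    "name": "credit-scoring-model",
    "technical_documentation": True,
    "human_oversight_enabled": False,
    "days_since_last_review": 240,
}
for finding in audit_ai_system(system):
    print(f"[{date.today()}] {system['name']}: {finding}")
```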

In addition, compliance with ethical standards can be verified by various organizations and external certified bodies. Training for employees is also helpful and necessary.

With responsibility come duties

The greater the impact of an AI technology, the greater the responsibility for certain roles in both a legal and ethical sense.

The question is which roles the EU AI Act distinguishes in the first place, and for which of these roles particular ethical risks may arise.

The most important roles are probably those of provider and operator. A party acts as a provider if, for example, it develops an AI system or an AI model, or has one developed, and places it on the market or puts it into operation. The operator within the meaning of the Regulation, in contrast, is the party that uses an AI system under its own responsibility. Note that an operator, distributor or importer can also become a provider within the meaning of the EU AI Act if certain conditions are met. A careful distinction should therefore always be made when carrying out the conformity assessment, and circumstances that lead to a change of role should be taken into account, since each role entails different obligations.
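As a rough illustration of this change of role, the sketch below models the conditions under which an operator can come to be treated as a provider, loosely following Article 25(1) of the EU AI Act (rebranding a system, substantially modifying it, or changing its intended purpose). The function and parameter names are our own; a real assessment requires legal review.

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    OPERATOR = "operator"

def effective_role(
    declared_role: Role,
    rebranded_system: bool = False,
    substantially_modified: bool = False,
    changed_intended_purpose: bool = False,
) -> Role:
    """Simplified, illustrative reading of the role-change conditions."""
    if declared_role is Role.OPERATOR and (
        rebranded_system or substantially_modified or changed_intended_purpose
    ):
        return Role.PROVIDER  # provider obligations now apply
    return declared_role

# An operator that substantially modifies a system takes on the provider role:
print(effective_role(Role.OPERATOR, substantially_modified=True))
```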


Within the realms of possibility - risk classification makes all the difference

In order to determine the responsibilities and requirements to be met, AI technologies are categorized into different risk classes; a short code sketch follows the list below.

  • Acceptable risks: AI technologies in the low or acceptable risk class are subject to less stringent requirements than the other two risk classes. The prerequisite for placing them on the market or putting them into operation nevertheless remains that they are safe and do not infringe the rights of natural persons. Search algorithms or spam filters, for example, are classified as limited or minimal risk.

  • High-risk AI systems: AI technologies that pose a high risk to the health, safety or fundamental rights of natural persons fall into the high-risk class. Examples include risk assessments for health and life insurance policies, profiling, and systems that influence decisions on the terms and conditions of employment.

  • Unacceptable risk/prohibited practices: AI technologies that pose an unacceptable risk to safety and fundamental rights, or that subliminally influence a person's consciousness to their detriment, are prohibited. The list of prohibited practices is not exhaustive; it includes, for example, systems for inferring or assessing the emotional state of a natural person in the workplace.
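As referenced above, the three classes can be made tangible in code. The sketch below maps the example use cases from the list to risk classes; the mapping is illustrative only, since a real classification must follow Article 5 and Annex III of the EU AI Act.

```python
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal or limited risk"
    HIGH = "high risk"
    PROHIBITED = "unacceptable risk / prohibited practice"

# Illustrative mapping of the example use cases named in the list above:
EXAMPLES = {
    "spam filter": RiskClass.MINIMAL,
    "search ranking": RiskClass.MINIMAL,
    "life insurance risk scoring": RiskClass.HIGH,
    "employment decision support": RiskClass.HIGH,
    "workplace emotion recognition": RiskClass.PROHIBITED,
}

def classify(use_case: str) -> RiskClass:
    """Look up the risk class for a known example use case."""
    return EXAMPLES[use_case]

print(classify("workplace emotion recognition").value)
```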

To maintain both legal and ethical standards when developing or commissioning AI technologies in the company, and to ensure the confidentiality of business secrets when using generative AI, companies should not only observe the obligations of the respective risk class but also take preventive technical and organizational measures to ensure sufficient AI compliance.
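One example of such a preventive technical measure: filtering likely business secrets out of prompts before they reach an external generative AI service. The patterns below are deliberately simple illustrations; a production setup would need vetted detection rules.

```python
import re

# Hypothetical redaction rules; real deployments would use vetted patterns.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"\b[A-Z0-9]{20,}\b"),  # long opaque tokens
]

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a secret pattern before the prompt is sent."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact_prompt("Summarize this config: api_key=sk-12345, host=db01"))
```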

Ethical demands drive the EU AI Act: The invisible hand that should guide your data strategy

AI technologies are not an end in themselves, but advance the goals of the company. Whether and to what extent AI technologies are used in a company is not a question of trends, but a question of ethics and strategy.

Ethics in the context of AI aims to ensure that this technology is used for the benefit of society, that negative effects are minimized and that the fundamental rights of all those involved are protected.

How can companies ensure that the use of AI is not only efficient but also ethically justifiable? How can AI be handled responsibly? What measures and processes should and must companies introduce in order to ensure responsible handling?

The answer lies in the full integration of ethical considerations into your company's strategic decisions. This is not just a matter of ticking a box, but a deeply rooted attitude that permeates all areas of the business. Ethics in AI is more than just a buzzword. It is the backbone of a responsible data strategy.

Imagine your AI makes decisions that deeply impact the lives of your customers: it influences lending decisions, determines insurance rates or curates content. What happens if these decisions are unfair, biased, discriminatory or non-transparent? The risk lies not only in possible legal consequences, but also in an irreparable loss of trust.

Responsibility is one of the key ethical principles: companies must clearly define who is responsible for the decisions and actions of AI systems, especially when it comes to possible malfunctions or negative effects. In addition to responsibility, fairness is of great importance. AI systems must not deliver discriminatory or biased results; their development must be designed in such a way that existing injustices are not reinforced or new ones created. Transparency is another key aspect, as the functioning of AI systems must be comprehensible. Users and data subjects should be able to understand how and why certain decisions are made in order to build trust in the technology.
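To show how a fairness principle can be turned into something measurable, the sketch below computes the demographic parity difference between two groups, one common (though by no means sufficient) bias check. The data and the interpretation are invented for illustration.

```python
def demographic_parity_difference(outcomes_a: list[int], outcomes_b: list[int]) -> float:
    """Difference in positive-outcome rates between group A and group B."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Invented example: 1 = loan approved, 0 = declined
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"parity gap: {gap:.3f}")  # a large gap is a signal to investigate
```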

An equally important principle is data protection. The protection of personal data must comply with the requirements of data protection laws such as the GDPR in order to safeguard the privacy of individuals. Finally, security is also one of the key ethical aspects. AI technologies must be protected from misuse and unauthorized access to ensure the integrity of data and systems.

Trustworthy AI is another well-known concept that is closely linked to these ethical principles. In its Ethics Guidelines for Trustworthy AI, the European Commission has formulated seven essential requirements that AI systems should fulfil in order to meet not only legal but also the highest ethical standards.

These requirements include human autonomy and control: AI should be designed to support, not replace, human decision-making. Technical robustness and security ensure that AI systems are reliable and that safety risks are minimized. Data protection and data quality safeguard personal data and the integrity of data processing.


Other requirements include transparency, which enables the decision-making processes of AI to be traceable; diversity, non-discrimination and fairness, to ensure that AI treats all people equally and does not promote discrimination; and societal and sustainable well-being, which ensures that AI serves the good of society and is used in an environmentally conscious manner. Finally, accountability is crucial to clarify who is responsible for the impact of AI systems.
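For teams that want to operationalize these seven requirements, a simple review checklist can be a starting point. The sketch below uses the requirement names as published in the Commission's Ethics Guidelines for Trustworthy AI; the review questions are our own illustrative suggestions, not official criteria.

```python
# The Commission's seven requirements, paired with illustrative review
# questions a company might ask for each AI system it operates.
TRUSTWORTHY_AI_CHECKLIST = {
    "Human agency and oversight": "Can a human review and override decisions?",
    "Technical robustness and safety": "Are failure modes tested and mitigated?",
    "Privacy and data governance": "Is personal data minimized and protected?",
    "Transparency": "Can decisions be explained to affected users?",
    "Diversity, non-discrimination and fairness": "Are outcomes checked for bias?",
    "Societal and environmental well-being": "Is the broader impact assessed?",
    "Accountability": "Is it clear who answers for the system's effects?",
}

for requirement, question in TRUSTWORTHY_AI_CHECKLIST.items():
    print(f"{requirement}: {question}")
```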