- The European Union's new AI Act sets high standards for AI in high-risk areas and confronts companies with extensive new testing and documentation obligations.
- The new diconium service offering "Trustworthy AI" helps companies meet the new regulations and supports the development of trustworthy AI for business-critical and safety-relevant applications.
- Companies can significantly shorten the costly and time-consuming certification process.
What is "Trustworthy AI" - Everything at a glance in our infographic
Trust plays an important role in the breakthrough of any technology: users adopt it when they can rely on its performance. For example, people ride elevators without a second thought, trusting the systems and the skills of the engineers who built their mechanisms. And there are good reasons for this: elevators are continuously inspected by well-trained specialists, there are strict regulations for maintenance, modern elevators contain numerous safety systems, and the number of incidents is comparatively low. The same holds for many other technologies whose machines we trust. For artificial intelligence, this has yet to become the norm. In contrast to long-established technologies, most people and companies lack daily experience with AI platforms, which leads to user uncertainty.
EU AI Act presents new challenges for companies
One thing is clear: AI systems can provide companies with data-based support in making the right decisions and driving economic growth. However, to exploit the full potential of AI, these decisions must also be built on trust. This is especially true when AI is used in a business-critical area such as finance, logistics, or research and development, or in a safety-critical area such as medical technology or driver assistance. To maximize trust, companies will need to dive deeper and understand the machine's decisions - and their limitations - in order to develop trustworthy AI.
At the European level, the EU Parliament and Council reached agreement on a common set of rules on December 8, 2023 to enable trustworthy AI. The upcoming EU AI Act will regulate the use of AI in a wide range of high-risk areas, including infrastructure, employment, essential financial services, and education. AI will be closely scrutinized in these areas to ensure that it is safe, robust, and fair. To this end, the AI Act sets strict standards and categorizes AI systems into four groups:
- AI systems with unacceptable risk that will be banned
- High-risk systems that will be subject to extensive regulation
- Limited-risk systems that are subject to transparency obligations but not otherwise regulated
- AI systems with minimal risk that do not require regulatory oversight
Companies that use AI in a high-risk area will in future be subject to extensive requirements under Article 29 of the AI Act to ensure the safe use of these systems. These requirements include, for example, implementing sufficient technical and organizational measures, regularly monitoring and adjusting cybersecurity measures, and continuously monitoring the entire AI operation.
Our new service offering supports the development of trustworthy AI
When the EU AI Act comes into force, at the beginning of 2025 at the latest, companies will face extensive testing and documentation obligations for the use of AI in safety-critical applications. The path to trustworthy AI can be rocky and resource-intensive. With our new Trustworthy AI service offering, we act as a competent partner: we support companies in developing trustworthy AI for business-critical and safety-relevant applications and make them fit for the upcoming AI regulation. With our assistance, companies can shorten the lengthy and expensive certification process, with support extending until the AI is certified by an independent certification authority.
When developing trustworthy AI, we rely on three central pillars as part of our "Trustworthy AI" offering:
- Transparency: AI-based decisions must be transparent, including the system's key performance indicators and the data used to train it.
- Process: Trustworthy AI must be developed with a comprehensive and systematic process based on best practices.
- Certification: The ultimate seal of trust is AI certification by a reputable independent certification authority.
With this in mind, we make our technical expertise directly available to companies and, as a development partner and certification coach, offer a fast path to a certifiable AI application.
Developing and implementing certifiable AI applications together
In a collaborative workshop, we assess the company's AI use cases, challenges, and opportunities. Through personal interviews and comprehensive analyses, we evaluate the status quo of AI use against a catalog of criteria developed and used by TÜV Austria. In close cooperation with the company, we draw up a detailed gap analysis and, in an individual consultation, provide specific recommendations on how these gaps can be closed. If a company plans to certify its AI as an additional sign of trustworthiness, we navigate it efficiently through this complex process.
We have already gone through the certification process for AI-based assistance systems in vehicles, including the AI development itself (in cooperation with companies from the Volkswagen Group), and for AI systems for image recognition in medical technology. This proficiency in safety-critical AI flows directly into our support throughout the certification process. It includes our regulatory expertise on the EU's AI Act, insights from active participation in associations such as Bitkom and the KI Bundesverband, and our own certification by TÜV Austria for an image recognition application in medical technology. In addition, we adhere to the ISO 9001 standard for quality management systems.
Get in touch: With our expertise as an AI partner, we can help prepare your company for the future, ensuring the reliable and trustworthy implementation of AI.