Trustworthy AI

Trust is essential for AI success. We help you build certification-ready AI as a safe and robust business companion.

Three pillars for trustworthy AI development

How can we build trusted solutions with a technology as young as AI?

Transparency

A system’s decision-making needs to be transparent with respect to its performance metrics and the data used to train it.


Process

A trustworthy system needs to be developed with a thorough and systematic process based on established best practices.


Certification

Prepare your AI for regulation

AI systems are decision-making systems. To realize the full potential of AI, trust in those decisions is key. This is especially true if your AI is used in a mission-critical area, such as finance, logistics, or R&D, or in a safety-critical domain, such as medical devices or driver assistance. The EU AI Act regulates the use of AI in a wide range of high-risk areas, including infrastructure, employment, essential financial services, education, and others. AI in critical areas needs to be safe, robust, and fair.

How diconium helps you develop trustworthy AI

Benefit from our experience


We are a member of the KI Bundesverband, an alliance of 400+ AI companies dedicated to ethical AI.


We have obtained the TRUSTIFAI certificate from TÜV Austria for a medical imaging application.

Is my system high-risk under the EU AI Act?

EU Regulated Domains

Any AI system that falls under one of the following categories is considered high-risk and has to be registered in the EU database:

  • Biometric identification
  • Operation of critical infrastructure
  • Access to education
  • Access to and management of employment
  • Access to essential public/private services (for example insurance, credit)
  • Law enforcement
  • Migration, asylum and border control

Generative AI

Generative AI must comply with transparency requirements: it must disclose that content was generated by AI and indicate where to find summaries of the copyrighted data used for training. The model must also be designed to prevent the generation of illegal content.

EU Product Safety Legislation

An AI system used in products falling under the EU's product safety legislation, including toys, aviation, cars, medical devices, or lifts, will be considered high-risk and will therefore be regulated.

READY TO UPDATE YOUR BUSINESS?

Get in touch with our AI experts to discuss your trustworthy AI project.

Swantje Kowarsch

Managing Director, diconium data
