Find answers to common questions about the High-Risk AI Systems Database and the requirements under the AI Act. If you can't find the answer you're looking for, please contact our support team.
High-risk AI systems are defined in Annex III of the AI Act. They include AI systems used in critical sectors such as biometric identification, critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes. These systems are subject to strict requirements before they can be placed on the market or put into service.
Providers of high-risk AI systems that are placed on the market or put into service in the EU must register their systems in this database. This includes both EU-based and non-EU providers whose AI systems are used within the European Union. Registration must be completed before the system is placed on the market or put into service.
A provider is a natural or legal person, public authority, agency or other body that develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
A deployer is a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Yes, registration is mandatory for all high-risk AI systems as defined in the AI Act. Failure to register can result in significant penalties, including fines and restrictions on placing the system on the market. The registration requirement is part of the transparency and accountability framework of the AI Act.
Registration requires comprehensive information about the provider, the AI system's intended purpose, its technical specifications, conformity assessment details, and information about deployers. You will also need to provide details about risk management measures, post-market monitoring plans, and instructions for use. All information should be accurate and kept up-to-date throughout the system's lifecycle.
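For orientation only, the information gathered during registration can be thought of as one structured record per AI system, along the lines of the sketch below. The field names are purely illustrative assumptions and do not reflect the database's actual submission form or schema.

```python
# Hypothetical sketch of the information assembled for one registration.
# Field names and values are illustrative only, not the database's real schema.
registration = {
    "provider": {
        "name": "Example Provider Ltd.",
        "address": "Example Street 1, Brussels",
        "contact_email": "compliance@example.com",
    },
    "ai_system": {
        "name": "Example Recruitment Screening Tool",
        "intended_purpose": "Ranking of job applications (Annex III, employment)",
        "technical_specifications": "See technical documentation, v2.1",
    },
    "conformity_assessment": {
        "procedure": "internal control",
        "eu_declaration_of_conformity": "DoC-2025-001",
    },
    "risk_management_summary": "Risk management measures documented and maintained",
    "post_market_monitoring_plan": "PMM-plan-v1.pdf",
    "instructions_for_use": "IFU-v3.pdf",
    "deployers": ["Example Deployer GmbH"],
}
```

Keeping this information in one place per system also makes it easier to keep the registration current as the system evolves.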
The registration itself can be completed within a few days once you have gathered all required documentation. However, the overall process from initial preparation to approval may take longer depending on the completeness of your submission and any required reviews by competent authorities. It is recommended to begin the registration process well in advance of your planned market entry or service launch date.
Yes, providers can register multiple AI systems under a single provider account. Each AI system must be registered separately with its own complete set of information and documentation. This allows for centralized management of all your high-risk AI systems through one interface.
After submission, your registration will be reviewed by the competent authorities. You will receive confirmation of receipt and may be contacted if additional information is needed. Once approved, your AI system will be listed in the public database (with appropriate confidentiality protections for sensitive information). You will be able to manage and update your registration.
Registered providers can log into their account and access the control center to update information about their AI systems. Any significant changes to the system must be reported and may require reassessment depending on the nature of the changes. Providers are required to keep the registration information current and accurate at all times.
A conformity assessment is the process by which providers demonstrate that their high-risk AI system complies with the requirements set out in the AI Act. This assessment can be performed through internal control (self-assessment) for certain systems, or must be conducted by a notified body for others. The outcome is documented in an EU declaration of conformity and, where a notified body is involved, a certificate; these details must be included in the registration.
Certain categories of deployers of high-risk AI systems, in particular bodies governed by public law and private entities providing public services, are required to conduct a fundamental rights impact assessment before putting the system into use. This assessment should identify and evaluate the potential impact of the AI system on fundamental rights, and document the measures taken to address any identified risks.
The AI Act provides for significant penalties for non-compliance. These can include fines of up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for the most serious infringements, such as placing a prohibited AI system on the market. Lower penalties apply for other infringements, such as supplying incorrect information to authorities or non-compliance with transparency obligations. Member States may also impose additional national penalties.
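To make the "whichever is higher" rule concrete, here is a minimal calculation using the figures quoted above and a purely hypothetical turnover:

```python
# Worked example of the "whichever is higher" rule for the most serious infringements.
annual_turnover_eur = 600_000_000                      # hypothetical worldwide annual turnover
max_fine_eur = max(35_000_000, 0.07 * annual_turnover_eur)
print(f"Maximum fine: EUR {max_fine_eur:,.0f}")        # EUR 42,000,000
```

For a company with EUR 600 million in turnover, 7% (EUR 42 million) exceeds the EUR 35 million floor, so the higher figure applies.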
Providers must maintain comprehensive technical documentation throughout the lifecycle of the AI system, including design specifications, training data information, testing and validation results, risk management documentation, and post-market monitoring records. This documentation must be available for review by competent authorities and kept up to date as the system evolves.
High-risk AI systems must be designed and developed with appropriate human oversight measures. The level and type of oversight depends on the specific system and its risks, but should enable humans to understand the system's capabilities and limitations, remain aware of automation bias, be able to interpret the system's output, and decide not to use the system or override its output when necessary.
Training, validation, and testing data sets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. They must have appropriate statistical properties and take into account the specific geographical, behavioral, or functional characteristics relevant to the intended purpose. Data governance and management practices must be documented and maintained.
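As a rough illustration of what documented data-governance checks can look like in practice, the sketch below computes a few simple completeness and representativeness indicators. It assumes the data is held in a pandas DataFrame; the column names, toy data, and the choice of indicators are assumptions, not requirements of the AI Act or of this database.

```python
# Minimal sketch of basic data-governance checks; column names and toy data are illustrative only.
import pandas as pd

def basic_data_checks(df: pd.DataFrame, label_column: str) -> dict:
    """Report simple completeness and representativeness indicators."""
    missing_ratio = df.isna().mean().to_dict()      # share of missing values per column
    duplicate_rows = int(df.duplicated().sum())     # exact duplicate records
    label_distribution = df[label_column].value_counts(normalize=True).to_dict()
    return {
        "missing_ratio_per_column": missing_ratio,
        "duplicate_rows": duplicate_rows,
        "label_distribution": label_distribution,
    }

# Example usage with a toy data set
df = pd.DataFrame({"age": [25, 40, None], "outcome": ["hired", "rejected", "hired"]})
print(basic_data_checks(df, label_column="outcome"))
```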
Providers must report serious incidents and malfunctions to the relevant authorities of the Member States where the incident occurred. Deployers must also inform providers and relevant authorities about any serious incidents that constitute a breach of obligations under EU law intended to protect fundamental rights. Reports should be made without undue delay after becoming aware of the incident.
Post-market monitoring is the systematic collection and analysis of information about the AI system's performance throughout its lifecycle. Providers must establish and document a post-market monitoring system appropriate to the nature and risks of the AI system. This includes actively collecting and reviewing data about the system's operation, analyzing incidents and malfunctions, and taking appropriate corrective actions.
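As a simplified illustration of what "actively collecting and reviewing data about the system's operation" can mean in practice, the sketch below keeps a rolling log of outcomes and flags when the observed error rate drifts above an acceptable level. The class, field names, and threshold are hypothetical assumptions; an actual post-market monitoring system must be designed around the nature and risks of the specific AI system.

```python
# Illustrative sketch of post-market monitoring data collection; names and thresholds are assumptions.
from collections import deque
from datetime import datetime, timezone

class MonitoringLog:
    """Collect operational records and flag periods of degraded performance."""

    def __init__(self, window: int = 1000, error_threshold: float = 0.05):
        self.records = deque(maxlen=window)   # rolling window of recent outcomes
        self.error_threshold = error_threshold

    def record(self, prediction, ground_truth) -> None:
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "correct": prediction == ground_truth,
        })

    def needs_review(self) -> bool:
        """True when the observed error rate exceeds the acceptable threshold."""
        if not self.records:
            return False
        errors = sum(1 for r in self.records if not r["correct"])
        return errors / len(self.records) > self.error_threshold

# Example usage
log = MonitoringLog()
log.record(prediction="approve", ground_truth="reject")
if log.needs_review():
    print("Error rate above threshold: trigger corrective-action review")
```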
If you couldn't find the answer to your question, please visit our documentation page for more detailed information, or contact our support team for assistance.