How Deployers can comply with the EU AI Act (even when using High-Risk Artificial Intelligence Systems)
An actionable procedure for companies to deal with regulatory uncertainty while still delivering value.
In less than 16 months, the majority of the European Union (EU) Artificial Intelligence (AI) Act comes into force.
This is despite huge gray areas in the law, expected delays in publication of “Harmonised Standards,” and onerous requirements for any company using what it calls “High-Risk Artificial Intelligence Systems.”
Regulators don’t care about your pain.
But StackAware does, so we put together an actionable procedure addressing the law’s requirements.
The most effective way to build a governance system capable of complying with the law is to implement ISO/IEC 42001:2023. StackAware is already working with one multinational company on implementing this standard to facilitate EU AI Act compliance.
Need help?
But if you’d like to do it yourself, feel free to use and adapt the procedure below.
Of note, this applies only to private sector organizations operating as Deployers (and not doing so on behalf of public authorities, EU institutions, bodies, or offices). We may create a similar procedure for Providers in the future, but that essentially means building an entire AI governance program.
EU AI Act Deployer Compliance Procedure
The Chief Information Security Officer must:
Ensure AI literacy of all personnel using AI Systems (Article 4).
For High-Risk Artificial Intelligence Systems, before first use and at least annually:
Conduct an AI Model, System, Impact, and Risk Assessment per the StackAware SOP (Article 27, Paragraph 1).
Upon completion, provide the results to the relevant Market Surveillance Authority(ies), using the template provided by the European Union AI Office, if available (Article 27, Paragraphs 3, 5).
Data owners must:
Not use any AI system that (Article 5):
Uses subliminal deceptive techniques reasonably likely to cause harm to a person.
Exploits personal vulnerabilities due to age, disability or a specific social or economic situation.
Generates a score leading to detrimental treatment:
In social contexts unrelated to those in which the data was originally collected.
That is unjustified or disproportionate to the person’s underlying behavior.
Predicts the risk of a person committing a crime based solely on profiling or an assessment of personality traits.
Creates or expands facial recognition databases through untargeted scraping of facial images.
Infers from biometric data a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
Infers the emotions of a person in the workplace or an educational institution (except for medical or safety reasons).
Performs real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement.
For High-Risk Artificial Intelligence Systems:
Only use the output of the system in the European Union if the Provider has certified it for use there (Article 2, Paragraph 1c).
Use and monitor such systems per Provider instructions (Article 26, Paragraph 1).
Retain logs generated by the system for at least six months, unless applicable law provides otherwise (Article 26, Paragraph 6). A minimal retention check is sketched after this list.
Provide information regarding the operation of the system via the Provider’s Post-Market Monitoring System (Article 26, Paragraph 5; Article 72).
Inform, prior to using the system, all persons (including customers, candidates, applicants, employees, and contractors) subject to the system (Article 26, Paragraphs 7, 11).
Assign human oversight of the system to an individual with appropriate competence, training, authority, and support (Article 26, Paragraph 2).
Ensure Input Data is relevant and sufficiently representative in view of the intended purpose of the system (Article 26, Paragraph 4).
Monitor operation of the system per Provider instructions (Article 26, Paragraph 5).
If the system produces legal effects or similarly significantly affects a person in a way they consider to have an adverse impact on their health, safety, or fundamental rights, and that person so requests, provide a concise explanation of the:
Role of the AI system in the decision (Article 86, Paragraph 1).
Main element(s) of the decision taken (Article 86, Paragraph 1).
Upon identification of an AI System Presenting a Risk, cease using the system within 3 days (Article 26, Paragraph 5; Article 79, Paragraph 1).
Upon identification of a Serious Incident caused by the system, do not allow the AI system to be altered in a way affecting subsequent investigation (Article 26, Paragraph 5).
Subsequent to a Serious Incident, implement the corrective action plan documented by the General Counsel.
For any Biometric Categorisation System, inform people whose Personal Data is processed by the system (Article 50, Paragraph 3).
For systems that generate or manipulate image, audio, or video content constituting a Deep Fake, disclose, in plain language accessible to people with disabilities, that the content has been artificially generated or manipulated (Article 50, Paragraph 4).
For systems that generate or manipulate text published to inform the public on matters of public interest, and where the AI-generated content has not undergone human review, disclose, in plain language accessible to people with disabilities, that the text has been artificially generated or manipulated (Article 50, Paragraph 4).
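To make the six-month log retention requirement above concrete, here is a minimal sketch of a purge guard a Deployer might add to its log pipeline. It is illustrative only: the function name, the 183-day window, and the idea of gating deletion on record age are assumptions on our part, not anything the Act prescribes.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical purge guard: Article 26, Paragraph 6 requires Deployers to keep
# automatically generated logs for at least six months, so a deletion job
# should refuse to remove records younger than that window.
MIN_RETENTION = timedelta(days=183)  # roughly six months; confirm with counsel

def is_purgeable(log_timestamp: datetime, now: Optional[datetime] = None) -> bool:
    """Return True only when a log record is older than the minimum retention window."""
    now = now or datetime.now(timezone.utc)
    return (now - log_timestamp) > MIN_RETENTION

# Example: a record created 90 days ago must be retained.
recent_record = datetime.now(timezone.utc) - timedelta(days=90)
assert not is_purgeable(recent_record)
```

A longer retention period may still apply where other applicable law or the Provider’s instructions require one.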
The General Counsel must:
Upon identification of an AI System Presenting a Risk, inform the Provider and relevant Market Surveillance Authority within 30 days (Article 26, Paragraph 5; Article 79, Paragraph 1).
Upon identification of a Serious Incident caused by the system (Article 26, Paragraph 5):
Inform the Provider within 3 days.
If the Provider does not confirm receipt within 3 subsequent days, report the Serious Incident within 2 further days to the Market Surveillance Authority of each European Union Member State where the incident occurred (Article 73, Paragraphs 1-3). The full notification timeline is sketched after this list.
Inform the Importer or Distributor (if applicable) within 30 days.
Investigate the Serious Incident and the AI system concerned, by:
Conducting a revised Risk Assessment of the AI System in light of the Serious Incident (Article 73, Paragraph 4).
Documenting a corrective action plan (Article 73, Paragraph 4).
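Because the Serious Incident workflow above involves cascading deadlines, here is a minimal sketch of a deadline calculator using the day counts stated in this procedure. The function and field names are hypothetical, and the simple calendar-day arithmetic is an assumption; confirm how the relevant authority counts days before relying on it.

```python
from datetime import date, timedelta

# Hypothetical deadline calculator mirroring the day counts in this procedure:
# notify the Provider within 3 days; if receipt is not confirmed within 3
# further days, report to the Market Surveillance Authority within 2 more days;
# inform the Importer or Distributor (if applicable) within 30 days.
def incident_deadlines(identified_on: date) -> dict:
    provider_deadline = identified_on + timedelta(days=3)
    confirmation_cutoff = provider_deadline + timedelta(days=3)
    return {
        "notify_provider_by": provider_deadline,
        "report_to_msa_by_if_unconfirmed": confirmation_cutoff + timedelta(days=2),
        "inform_importer_or_distributor_by": identified_on + timedelta(days=30),
    }

# Example: an incident identified on 6 January 2025.
for milestone, deadline in incident_deadlines(date(2025, 1, 6)).items():
    print(milestone, deadline.isoformat())
```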
Definitions
AI System - a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Artificial Intelligence (AI) System Presenting a Risk - Equivalent to ‘product presenting a risk’ as defined in Article 3, point 19 of Regulation (EU) 2019/1020, in so far as it presents risks to the health or safety, or to the fundamental rights, of persons.
Authorised Representative - a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.
Biometric Categorisation System - an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons.
Deep Fake - AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
Deployer - a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Distributor - a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
Emotion Recognition System - an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
High-Risk Artificial Intelligence System - 1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.
The first subparagraph shall apply where any of the following conditions is fulfilled:
(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.
-- BEGIN ANNEX III DEFINITIONS --
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
1. Biometrics, in so far as their use is permitted under relevant Union or national law:
(a) remote biometric identification systems.
This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;
(b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;
(c) AI systems intended to be used for emotion recognition.
2. Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
3. Education and vocational training:
(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;
(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
4. Employment, workers’ management and access to self-employment:
(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
(b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
(c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;
(d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
6. Law enforcement, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;
(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;
(c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;
(e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools;
(b) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;
(c) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;
(d) AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;
(b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.
-- END ANNEX III DEFINITIONS --
Importer - a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
Input data - Data provided to or directly acquired by an AI system on the basis of which the system produces an output.
Market Surveillance Authority - the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020.
Operator - a provider, product manufacturer, deployer, authorised representative, importer or distributor.
Personal Data - any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
Post-Market Monitoring System - all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.
Provider - a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Serious Incident - an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental rights;
(d) serious harm to property or the environment.
StackAware considers a Serious Incident to also represent an AI System Presenting a Risk.