Article 09 - Risk Management System (ART09)
Article 09 covers the AI risk management practices required for compliance with the EC Artificial Intelligence Act (AIA). All activities related to risk management are detailed as part of this article. On the Seclea Platform, Article 09 has seven defined categories with relevant checks.
The article text follows, annotated with the relevant category numbers (09.##) from the Seclea Platform.
- 1.A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
- 2.The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating (09.02). It shall comprise the following steps:
- 1.identification and analysis of the known and foreseeable risks associated with each high-risk AI system;
- 2.estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
- 3.evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
- 4.adoption of suitable risk management measures in accordance with the provisions of the following paragraphs.
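The four steps above can be sketched as a simple risk register. This is an illustrative model only, assuming a severity-times-likelihood scoring scheme; the class and field names are hypothetical and are not part of the Act or the Seclea Platform.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskSource(Enum):
    """Where a risk was identified (mirrors steps 2.1-2.3 above)."""
    KNOWN_OR_FORESEEABLE = "design-time analysis"          # step 2.1
    INTENDED_USE_OR_MISUSE = "use-condition evaluation"    # step 2.2
    POST_MARKET_MONITORING = "Article 61 monitoring data"  # step 2.3


@dataclass
class Risk:
    description: str
    source: RiskSource
    severity: int        # e.g. 1 (negligible) .. 5 (critical)
    likelihood: int      # e.g. 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)   # step 2.4

    @property
    def score(self) -> int:
        # A simple severity x likelihood matrix; real scoring schemes vary.
        return self.severity * self.likelihood


# One iteration of the continuous process: (re)assess, then record measures.
register: list[Risk] = [
    Risk("Biased credit decisions for a protected group",
         RiskSource.KNOWN_OR_FORESEEABLE, severity=4, likelihood=3),
]
register[0].mitigations.append("Re-balance training data; add fairness test")
```

Because the process is iterative, such a register would be revisited at every lifecycle stage, with post-market monitoring data (step 2.3) feeding new entries back in.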
- 3.The risk management measures referred to in point 2.4 shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in Chapter 2 of the EC Artificial Intelligence Act. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
- 4.The risk management measures referred to in point 2.4 shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user (09.05). In identifying the most appropriate risk management measures (09.03), the following shall be ensured:
- 1.elimination or reduction of risks as far as possible through adequate design and development;
- 2.where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated;
- 3.provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in point 2.2 of this Article, and, where appropriate, training to users.
- 4.In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used.
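Points 4.1 to 4.3 establish a priority order for selecting measures: design the risk out where possible, control what remains, and inform or train users about residual risk. A minimal sketch of that ordering, assuming a hypothetical `select_measures` helper (the function and measure strings are illustrative, not part of the Act or the Seclea Platform):

```python
def select_measures(risk: str, eliminable: bool, mitigable: bool) -> list[str]:
    """Apply the priority order of points 4.1-4.3 to a single risk."""
    measures: list[str] = []
    if eliminable:
        # 4.1: eliminate or reduce through adequate design and development
        measures.append(f"redesign to eliminate: {risk}")
        return measures
    if mitigable:
        # 4.2: mitigation and control for risks that cannot be eliminated
        measures.append(f"mitigation/control: {risk}")
    # 4.3: Article 13 information and, where appropriate, user training
    measures.append(f"inform users per Article 13: {risk}")
    return measures
```

Note that user-facing information (4.3) is never a substitute for elimination or mitigation; it applies to the risk that remains after the higher-priority measures have been exhausted.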
- 5.High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures (09.04). Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in Chapter 2 of the EC Artificial Intelligence Act.
- 6.Testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose.
- 7.The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system (09.04).
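Testing against "preliminarily defined metrics and probabilistic thresholds" can be implemented as a release gate that compares measured model metrics to thresholds fixed in advance. The metric names and threshold values below are hypothetical, chosen only for illustration; the Act requires only that they be defined in advance and be appropriate to the intended purpose.

```python
def passes_thresholds(metrics: dict[str, float],
                      thresholds: dict[str, float]) -> bool:
    """Release gate: every predefined metric must meet its threshold.
    A metric that was not measured counts as a failure."""
    return all(metrics.get(name, float("-inf")) >= floor
               for name, floor in thresholds.items())


# Thresholds defined before testing begins (hypothetical values).
thresholds = {"accuracy": 0.90, "recall_worst_group": 0.85}

# Metrics measured on a release candidate during development testing.
candidate = {"accuracy": 0.93, "recall_worst_group": 0.88}

print(passes_thresholds(candidate, thresholds))  # True: gate is satisfied
```

Such a gate would run "at any point in time throughout the development process" and, in any event, once more before placing the system on the market or putting it into service.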
- 8.When implementing the risk management system described in points 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children (09.06).
- 9.For credit institutions regulated by Directive 2013/36/EU, the aspects described in points 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directive (09.07).
Below is the list of controls/checks that form part of Article 09.