EC Artificial Intelligence Act

The European Commission published its draft regulation on the use of AI on 21 April 2021; it is still awaiting approval. The regulation applies to all organisations that do business in or with the European Union and its citizens. Any company operating in the European Union will therefore have to comply with this regulation (if approved by the European Parliament).
The draft proposal can be downloaded from EUR-Lex (document 52021PC0206). The media briefing accompanying the draft proposal stated that the European Commission's objective for proposing the new rules and actions is to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). It combines the first-ever legal framework on AI with guarantees for the safety and fundamental rights of people and businesses.

Regulation’s Key Points

For reference, some of the highlights of the proposal include:

Definition of AI

  • An AI system is defined by the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, whether in a physical or digital dimension.
  • AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).
  • AI techniques and approaches this regulation applies to:
    • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
    • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
    • Statistical approaches, Bayesian estimation, and search and optimisation methods.
  • The rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.
  • This Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union.
  • AI systems are categorised into four groups: unacceptable risk, high risk, limited risk and minimal risk.
  • AI systems in the unacceptable-risk category are prohibited from being placed on the market. These include:
    • Systems intended to distort human behaviour, whereby physical or psychological harm is likely to occur.
    • Systems providing social scoring of natural persons for general purpose by public authorities or on their behalf.
  • High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements.
    • AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
  • There are no mandatory requirements for limited-risk AI systems (those with specific transparency obligations) or minimal-risk systems. However, the EC encourages incentivising non-high-risk AI providers to voluntarily meet the mandatory requirements.
  • Compliance Requirements include:
    • Risk management system (Article 9), Data and Data Governance (Article 10), Technical Documentation (Article 11), Record-Keeping (Article 12), Transparency and provision of information to users (Article 13), Human Oversight (Article 14), Accuracy, robustness and cybersecurity (Article 15).
  • The conformity assessment (validation that a provider meets the mandatory requirements) of their respective AI systems should be carried out, as a general rule, by the provider under its own responsibility - at least in an initial phase of application of this regulation.
  • For any AI system placed on the market, a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.
  • Penalties for non-compliance with the Regulation are:
    • Administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:
      • non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 (Prohibited Artificial Intelligence Practices);
      • non-compliance of the AI system with the requirements laid down in Article 10 (High-risk AI systems - Data and Data Governance).
    • Administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:
      • non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10
    • Administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:
      • supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
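Each fine tier above follows the same rule: a fixed maximum in EUR or a percentage of total worldwide annual turnover, whichever is higher. As a rough illustration (not legal advice; the tier names used here are hypothetical labels, and only the amounts and percentages come from the draft text above), the applicable cap could be sketched as:

```python
def fine_cap(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a company under the
    draft EC AI Act, per the tiers quoted above (illustrative only)."""
    # (fixed cap in EUR, turnover percentage) per tier, from the draft text
    tiers = {
        "article_5_or_10": (30_000_000, 0.06),        # prohibited practices / data governance
        "other_obligations": (20_000_000, 0.04),      # other requirements of the Regulation
        "misleading_information": (10_000_000, 0.02), # incorrect info to authorities
    }
    fixed_cap, pct = tiers[tier]
    # "whichever is higher": the larger of the fixed amount and the turnover share
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with 1 billion EUR turnover breaching Article 5:
print(fine_cap("article_5_or_10", 1_000_000_000))  # 60000000.0 (6 % exceeds 30M EUR)
```

Note that for smaller companies the fixed cap dominates: a firm with 100 million EUR turnover breaching any other obligation still faces a cap of 20 000 000 EUR, since 4 % of its turnover is only 4 000 000 EUR.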
The Seclea Platform template for the EC Artificial Intelligence Act (AIA) consists of the following compliance categories:
More details on the EC AIA can be found here.