Client Alert
March 13, 2024


Authors: Sarah Alt and Damir Zababuryn

The EU AI Act was approved by the European Parliament on Wednesday, March 13. The complete set of regulations is expected to become law in May, with some provisions enforceable within 6-12 months and the remaining provisions within 24 months.

The European Commission began drafting the AI Act in 2017, just after completing the landmark GDPR legislation. Notably, while AI was not new when drafting began, this was before generative AI tools were released to the world in 2022. The final months of negotiations focused particularly on generative AI and the regulation's impact on innovation across member states.

Here are some of the significant highlights of the EU AI Act:

  1. Risk Classification
    AI will be banned in certain applications, such as social scoring systems that govern how people behave, emotion recognition systems in schools and workplaces, and police use of AI-powered remote biometric identification systems, except in cases of serious crimes. High-risk AI systems will require conformity assessments.
  2. General Purpose AI Requirements
    Just as the name implies, general-purpose AI systems are those with a wide range of possible uses. They are often large language models, and many can process not only text but also audio, video, and images. Sometimes referred to as foundation models, these systems and their derivatives will be incorporated by many technology companies into their applications. The EU AI Act requires detailed summaries of the internet data used to train these models, requires AI-generated deep fakes to be labeled as artificially manipulated, and obligates companies providing the most significant models to assess and mitigate risks, report serious incidents, and disclose their energy use.
  3. Innovation-Friendly Approach
    Regulatory sandboxes will allow for real-world research, development and testing of AI technologies under less stringent regulations, thereby fostering innovation and design for regulatory compliance.
  4. Shared Accountability for Ongoing Monitoring
    Once an AI system is on the market, EU authorities will monitor for proper risk classification. Builders and developers providing AI systems will need to maintain human oversight and conduct post-market monitoring. Buyers and subscribers of AI systems will need to report serious incidents and malfunctions.
  5. Penalties
    Depending on the violation, noncompliance with the AI Act could result in penalties of up to €35 million or 7% of a company's global annual revenue, whichever is higher.