The New Rules: The EU Artificial Intelligence Act
April 29, 2024 | 8 minute read
With the development of new technology, new laws always follow, and in this case it is Artificial Intelligence (AI) facing regulation for the first time in its short history.
The European Parliament has adopted the EU Artificial Intelligence Act with intentions to place constraints on AI and its use, understanding that difficulties can arise based on the functionality of the technology. The act aims not to control AI and its capabilities but to establish guidelines to help manage and reduce any risk it may pose. The regulation was agreed upon by Members of the European Parliament (MEPs) in December 2023 and was finally adopted into law on March 13th, 2024.
The legislation mandates overarching transparency obligations, requiring developers to build AI systems with traceability at their core. Larger AI systems that present more risk will face stricter regulation than less intrusive models, which will face little or none.
AI technology must now also comply with EU copyright law, meaning it cannot reproduce a human creator's copyrighted work without authorization, hopefully reducing the risk of fraud or libel caused by any artificial intelligence system.
Risk Categories: Understanding the EU AI Act
The EU’s AI Act classifies AI systems into four categories based on risk level. They are as follows:
Unacceptable Risk: AI systems designed to violate fundamental rights are strictly banned. This includes systems that could manipulate your behavior, enable mass surveillance, or encourage harmful actions.
High Risk: AI systems used in important areas like hospitals or power plants need extra rules for safety. Humans must carefully monitor these systems and have clear records of how they work so that problems can be fixed quickly.
Limited Risk: Even seemingly harmless AI applications can become problematic if users aren’t aware of their true nature. The EU AI Act prioritizes transparency, requiring chatbots to disclose their non-human status and AI content generators to label their output clearly. This protects against deception, prevents the spread of deepfakes and other forms of AI-generated misinformation, and helps maintain public trust in AI technology as it becomes more widespread.
Minimal Risk/No Risk: AI systems that pose no threat. Examples include AI spam filters and AI used for entertainment. This risk category includes most of the systems used in the EU, operating under no regulations unless self-imposed.
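The four tiers above amount to a lookup from use case to obligation level. The sketch below illustrates that idea in Python; the tier names follow the Act, but the example use cases and the lookup itself are illustrative assumptions, not an official taxonomy.

```python
# Illustrative mapping of example AI use cases to the EU AI Act's four
# risk tiers. The use cases listed here are assumptions for illustration,
# not an official classification.
RISK_TIERS = {
    "unacceptable": ["social scoring", "mass surveillance", "behavioral manipulation"],
    "high": ["medical diagnosis", "critical infrastructure control", "hiring decisions"],
    "limited": ["chatbot", "ai-generated content"],
    "minimal": ["spam filter", "video game ai"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "minimal"

print(classify("chatbot"))            # limited
print(classify("Mass Surveillance"))  # unacceptable
```

Defaulting to "minimal" mirrors the article's point that most systems used in the EU fall into the lowest tier.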
Why do we need Regulation?
AI’s rapidly increasing autonomy calls for urgent societal discussion and proactive safeguards. Regulations are crucial to ensure AI development prioritizes safety and ethical use.
How is it Enforced?
To properly enforce this new act, the EU has created the EU AI Office, a body tasked with ensuring the new rules and regulations are followed.
The AI Office will play a pivotal role in ensuring compliance with the regulations on general-purpose AI models. This office has the power to conduct evaluations, request information from providers, and apply sanctions where necessary.
The AI Office’s duties include promoting trustworthy AI, working with both public and private sectors, and promoting best practices and innovation. Additionally, it seeks to strengthen the EU’s role in global AI affairs by stimulating international collaboration and shaping global AI standards and agreements.
If the AI Office finds instances of noncompliance, it can issue fines that range from millions of euros to a percentage of the involved company's global annual revenue. These penalties are dynamic, scaling with the severity of the infringement and the size of the company, meaning that businesses affected by these rules need a critical understanding of them.
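The "dynamic" structure works by taking the higher of a fixed amount and a share of worldwide annual turnover. The sketch below illustrates that calculation; the figures used are the commonly cited maxima (e.g. EUR 35 million or 7% of turnover for prohibited practices) and should be treated as assumptions here, not legal advice.

```python
# Fine ceilings per infringement type: (fixed amount in EUR, percent of
# worldwide annual turnover). Figures are commonly cited maxima, used here
# as illustrative assumptions.
FINE_CEILINGS = {
    "prohibited_practice": (35_000_000, 7),
    "other_obligation": (15_000_000, 3),
    "misleading_information": (7_500_000, 1),
}

def max_fine(infringement: str, global_turnover_eur: int) -> int:
    """Return the fine ceiling: the higher of the fixed amount and the
    turnover percentage (integer arithmetic, so results are exact)."""
    fixed, pct = FINE_CEILINGS[infringement]
    return max(fixed, global_turnover_eur * pct // 100)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the
# EUR 35M floor, so the percentage governs.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000
```

This is why the same infringement can cost a small firm a fixed sum but a large multinational far more: past the crossover point, the percentage term dominates.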
How does this change AI?
The fact that AI systems must now pass through regulatory channels, and that their developers are subject to compliance obligations, shows that MEPs are thinking seriously about what the future of the technology might look like.
According to Dragos Tudorache, Romanian MEP, AI will “push us to rethink the social contract at the heart of our democracies”. Brando Benifei, Italian MEP, believes that due to the bill, “the rights of workers and citizens will be protected.”
Before this act, the world was in an unclear position on AI. China has rules in place that allow it to oversee AI technology, and the United States issued an executive order in October 2023 requiring AI developers to share safety results with the government, but MEPs have achieved something groundbreaking.
In February of this year, Jensen Huang, CEO of Nvidia, a leading developer in AI, made a statement acknowledging that the technology is "at a tipping point." He also remarked that "we're starting to see mainstream usage of AI," suggesting that AI has moved beyond use solely by technology companies and now occupies space in many different industries.
These new regulations on AI help it develop in a way that not only benefits its developers and their companies but also helps the people using it while maintaining their security.
How Does it Affect You?
If successful, the main effect of AI regulations on our daily lives will be peace of mind: less reason to worry about what AI systems might be doing behind the scenes.
As we learn more about AI and what it's capable of, these regulations attempt to offer consumer protection by mandating the disclosures that make informed decision-making possible.
AI regulations provide guidelines that ensure the responsible and ethical development of AI technologies, thereby protecting individuals who may lack an understanding of AI from potential harm or exploitation.
These measures are additionally intended to guarantee equitable access to AI services for all individuals. By supervising and regulating specific aspects of the technology, regulators aim to reduce bias in AI systems and ensure fairer treatment for every user. Your data also gains better protection than before, as AI developers are required to implement strategies that safeguard individual privacy.
In essence, these new regulations not only provide a framework for the responsible deployment of AI within the EU but also serve as a pivotal step towards fostering trust, transparency, and fairness in the increasingly widespread realm of artificial intelligence, ultimately safeguarding both individuals and society as a whole.
The Takeaway
While the EU’s AI Act is a crucial step in addressing the potential risks associated with Artificial Intelligence, it’s important to recognize that global cooperation is essential in regulating this rapidly evolving technology. Many AI systems operate across borders, and a unified approach to regulation will be necessary to manage their impact effectively.
In addition to regulatory measures, promoting AI literacy among the general public and fostering ethical awareness within the AI development community is also vital. Education and awareness initiatives can help individuals understand the capabilities and limitations of AI, empowering them to make informed decisions about its use in various applications.
It's also necessary that policymakers maintain an active dialogue with developers so that expectations stay clear and concerns can be raised on both sides. Trust and good communication between the two parties can benefit not only the people whom AI affects but also the development of the systems themselves.
While these regulations set a precedent for companies that want to avoid multimillion-euro fines, they also point to the idea that AI itself can assist with compliance. One benefit of AI is its ability to quickly scan and redact sensitive information that might compromise customer identities or infringe on the rules of the AI Act.
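A minimal sketch of that scan-and-redact idea appears below. It uses simple regular expressions rather than a trained model, and the patterns and placeholder format are assumptions chosen for illustration; a production system would need far more robust detection.

```python
import re

# Pattern-based stand-in for automated redaction. A real compliance tool
# might use an ML model; regexes here just illustrate the idea. Both
# patterns are simplified assumptions, not exhaustive matchers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a [REDACTED:<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +31 20 123 4567."))
# Contact [REDACTED:EMAIL] or [REDACTED:PHONE].
```

Labeled placeholders (rather than blank deletions) keep the redacted text auditable, which fits the Act's emphasis on traceability.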
While the EU’s AI Act represents a significant milestone in the regulation of AI, it’s just the beginning of a complex and ongoing process. By prioritizing transparency, ethical considerations, and global collaboration, we can work towards minimizing the potential risks of AI while also harnessing its power.