In a groundbreaking move, the European Parliament has approved the world’s first comprehensive set of AI rules, known as the AI Act. As governments worldwide grapple with the challenges posed by AI, the European Union’s (EU) regulations are expected to become the global standard. Understanding the history, key provisions, potential impact, and feasibility of AI regulation is crucial to grasping the significance of the EU’s AI Act.
The EU’s AI rules trace back to the EU’s Digital Decade initiative, launched in 2020, which aims to digitally transform the continent by 2030 in alignment with the United Nations’ Sustainable Development Goals (SDGs). While some of the EU’s digital policies, such as its online censorship laws, have faced skepticism, others have been received more favorably. The Digital Markets Act, for instance, grants users greater freedom to modify the software on devices made by big tech companies, promoting choice and flexibility.
Surprisingly, the AI Act falls into the category of reasonable regulation. It is important to note, however, that it is an EU regulation, meaning it applies directly in every member state and supersedes any existing national AI rules without requiring approval from national parliaments or citizens. The European Commission published its proposal for the AI Act in April 2021, well before OpenAI introduced ChatGPT.
OpenAI, a prominent AI company, reportedly lobbied EU politicians to ensure that the AI rules would not overly burden its operations, aiming in particular to keep the “high risk” label off its generative AI technologies such as ChatGPT and DALL·E. According to TIME, these efforts were successful: the text approved on June 14th treats OpenAI’s technologies more leniently.
The AI Act categorizes AI technologies by risk level, with the highest-risk technologies prohibited entirely. High-risk AI technologies are defined as those with a significant harmful impact on health, safety, fundamental rights, the environment, democracy, or the rule of law within the European Union. The prohibited list includes remote biometric identification, predictive policing, emotion recognition systems, and social scoring.
The Act also sets out requirements for high-risk AI technologies, including detailed risk assessments, unbiased data sets, transparency, and clear information for users. Low-risk AI technologies face mainly transparency and labeling obligations, while AI with no measurable risk, such as that used in video games and spam filters, faces no specific requirements.
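To make the tiering concrete, here is a minimal sketch of the Act’s risk categories and their obligations as a simple mapping. The tier names and obligation labels are paraphrased summaries for illustration, not the Act’s official terminology:

```python
# A paraphrased summary of the AI Act's risk tiers as described above.
# Tier names and obligation labels are informal, not the Act's own terms.
OBLIGATIONS_BY_RISK = {
    "prohibited": ["banned outright (e.g. social scoring, predictive policing)"],
    "high": [
        "detailed risk assessment",
        "unbiased data sets",
        "transparency",
        "clear information for users",
    ],
    "low": ["transparency and labeling"],
    "minimal": [],  # e.g. video-game AI, spam filters
}

for tier, duties in OBLIGATIONS_BY_RISK.items():
    print(f"{tier}: {', '.join(duties) or 'no specific requirements'}")
```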
Non-compliance with the AI Act can result in substantial fines, with the highest penalties reserved for those operating prohibited AI technologies: up to €40 million or 7% of global annual turnover, whichever is higher. Breaches of the data or transparency requirements can lead to fines of up to €20 million or 4% of turnover, and other violations of the Act to fines of up to €10 million or 2%.
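Because each tier is capped at a fixed amount or a share of turnover, whichever is higher, the exposure scales with company size. The sketch below illustrates the arithmetic; the function and tier names are hypothetical, not taken from the Act:

```python
# Illustrative sketch of the penalty arithmetic described above.
# The function and tier names are hypothetical, not from the Act's text.
PENALTY_TIERS = {
    "prohibited_ai": (40_000_000, 0.07),      # operating banned AI systems
    "data_transparency": (20_000_000, 0.04),  # data/transparency breaches
    "other": (10_000_000, 0.02),              # all other violations
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or the turnover share, whichever is higher."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with 1 billion euros in annual turnover operating a prohibited
# AI system could face up to 70 million euros, since 7% of 1B exceeds 40M.
print(max_fine("prohibited_ai", 1_000_000_000))  # 70000000.0
```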
Although the EU’s AI regulations may appear reasonable at first glance, some concerns arise. The Act’s broad definition of “significant harmful impact” and its mention of the environment, democracy, and the rule of law leave room for interpretation. Critics argue that these broad terms could be exploited by EU bureaucrats to control AI technologies and stifle dissent. Additionally, the Act’s effectiveness heavily relies on the EU’s ability to maintain control over the data used by AI systems.
Regulating AI poses significant challenges because it requires control over either the software or the hardware. Governments may try to control the internet itself, implementing digital IDs and closely tracking who can access what. Alternatively, they may seek to control advanced microchip production to limit AI-related capabilities.
The US’s and the EU’s plans to establish their own microchip manufacturing facilities reflect growing geopolitical tensions and a desire for greater control over the supply chain. Such facilities could, in principle, build in security measures that allow AI-grade chips to function only if the purchaser has received regulatory approval.
The EU’s approach instead focuses on data control, requiring anyone working with AI technologies to disclose their data to regulators. This approach has potential loopholes that will need addressing, and ultimately the effectiveness of the EU’s AI regulations will depend on whether the bloc can maintain that data control. If it succeeds, other regulators worldwide may well adopt similar approaches.
While some countries may delay AI regulation in hopes of harnessing AI’s rapid evolution for their own advantage, the risks posed by unchecked AI development, from misuse in information warfare to more speculative fears of sentience, have compelled regulators to intervene. Striking the right balance between enabling AI innovation and preventing unwanted consequences will likely resemble the ongoing struggles over crypto regulation.
As AI continues to advance, its impact on nearly every aspect of life cannot be overstated. Staying informed about AI developments and regulations is crucial, as they will shape our future as profoundly as the internet and cryptocurrency have. The EU’s AI Act represents a significant milestone, setting the stage for further global discussion and action.