The EU AI Act
In the world of artificial intelligence (AI), the European Union is making a significant move with its landmark AI regulation. Though still provisional, the agreement reached among the EU’s three governing institutions represents a monumental step in shaping how AI is used within the economic bloc. It aims to prohibit specific applications of AI technology and mandates transparency from providers. However, despite the gravity of the agreement and warnings from global leaders, the exact implications for AI companies, especially those based in the U.S., remain murky and potentially distant.
Initially proposed in 2021, the AI Act has yet to receive full approval. Intense debate led to last-minute compromises that diluted some of its more stringent provisions, and enforcement is not expected to begin for several years. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, notes that in the short term, major U.S.-based AI players like OpenAI, Microsoft, Google, and Meta are unlikely to be directly affected, given an implementation timeline projected for 2025.
These tech giants are currently preoccupied with maintaining their market dominance, particularly as they navigate regulatory uncertainty in the U.S. The AI Act was drafted before the surge of general-purpose AI tools such as OpenAI’s GPT-4, which complicated the regulation’s design. The Act categorizes AI systems by their societal risk, following the principle of "the higher the risk, the stricter the rules." However, concerns from some member states that the Act’s stringency could deter AI investment in the EU led to key concessions. Notably, France, Germany, and Italy successfully lobbied to relax restrictions on general-purpose AI, resulting in a narrower classification of “high-risk” systems and specific law enforcement exemptions.
Despite these modifications, criticism persists. French President Emmanuel Macron, for instance, has argued that the AI Act creates a regulatory environment that stifles innovation. Barrett points out that the current rules may make it harder for new European AI companies to raise capital, inadvertently favoring American firms. Non-European companies might even avoid the EU market altogether or restrict platform access there to sidestep penalties, a dilemma similar to the one the non-AI tech industry faces under regulations like the Digital Markets Act and Digital Services Act.
One central point of contention concerns AI models trained on publicly available, sensitive, and potentially copyrighted data. The provisional rules of the AI Act introduce no new law on data collection: companies must comply with existing GDPR requirements, but gathering the data itself is not prohibited. Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor at George Washington University, points out that while the Act may demand transparency summaries or data nutrition labels, it won’t significantly alter how companies behave around data use.
The Act also leaves ambiguity around how copyrighted material in model training data should be treated, offering developers no incentive to avoid such data. Notably, the AI Act exempts open-source developers, researchers, and smaller companies working further down the value chain from its stringent fines, a decision celebrated by open-source proponents. Shelley McKinley, chief legal officer at GitHub, views this as a positive development for open innovation.
The EU’s open and heavily debated approach to AI regulation puts pressure on other nations, particularly the U.S., to accelerate their own policy development. While China has already passed guidelines for AI services, the EU’s process gives the industry a clearer picture of what to expect. Though still subject to change, the AI Act reflects the EU’s responsiveness to public concerns about AI technology.
WTF?
As the AI Act moves toward finalization, it’s clear it won’t retroactively regulate existing models or applications. However, future iterations of AI technologies, like OpenAI’s GPT, Meta’s Llama, or Google’s Gemini, will have to comply with the EU’s transparency requirements. This shift, though gradual, signals the EU’s stance on AI and its commitment to shaping the technology’s evolution.
The real test will be the Act’s potential to influence global regulatory landscapes, especially as mature AI companies already align with existing privacy laws like GDPR. In contrast, the U.S. is struggling to advance AI regulation, with limited progress beyond executive orders on safety standards and discussions of deepfakes and watermarking.
While the AI Act is pivotal for AI governance, swift changes are unlikely. The focus should be on aiding organizations in developing safe, transparent, and accountable AI.