The European Union recently reached a provisional political agreement on legislation regulating artificial intelligence (AI) systems and their use. This "Artificial Intelligence Act" is the world's first comprehensive legal framework for AI. It establishes obligations and restrictions for specific AI applications to protect fundamental rights while supporting AI innovation.
What does this deal cover, and what changes might it bring about? As AI becomes deeply integrated into products and services worldwide, Europeans and tech companies globally need to understand these new rules. This post breaks down critical aspects of the Act and what it could mean going forward.
Defining AI
First, what counts as an AI system under the Act? It defines AI as software developed with specific techniques that generates outputs such as predictions, recommendations, or decisions influencing real-world environments and interactions. This means today's AI assistants, self-driving vehicles, facial recognition systems, and more would fall under the law.
Banned Uses of AI
Recognizing AI's potential threats to rights and democracy, specific applications are prohibited entirely:
- Biometric categorization systems that use sensitive personal characteristics like religious beliefs, sexual orientation, race, etc., to classify people. Example: Software inferring individuals' sexual orientation without consent.
- Scraping facial images from the internet or surveillance cameras to create recognition databases. Example: Companies scraping social media photos to build facial recognition systems.
- Emotion recognition software in workplaces and schools. Example: Software gauging student engagement and boredom during online classes.
- Social scoring systems judging trustworthiness or risk levels based on social behaviors. Example: Apps rating individuals' personality traits to determine access or opportunities.
- AI that seeks to circumvent users' free will or agency. Example: A chatbot manipulating individuals into purchases by exploiting psychological vulnerabilities.
- AI exploiting vulnerabilities of disadvantaged groups. Example: A lender using income data to steer low-income applicants toward unfavorable loan offers.
These bans address some of the most problematic uses of emerging AI capabilities. However, the most contentious issue proved to be biometric identification systems used for law enforcement.
Law Enforcement Exemptions
The Act carves out certain narrow exceptions allowing law enforcement to use biometric identification, like facial recognition tech, in public spaces. However, these come with restrictions and safeguards.
Specific types of biometric ID systems are permitted, subject to prior judicial approval, only for strictly defined lists of serious crimes and targeted searches. Real-time scanning would face tight locational and time limits.
For example, searches for trafficking victims or to prevent an imminent terrorist threat may use approved biometric tech for that specific purpose. Extensive databases of facial images or other biometrics can only be compiled with cause.
The rules seek to balance investigating significant crimes and protecting civil liberties. However, digital rights advocates argue any biometric surveillance normalizes intrusions disproportionately affecting marginalized communities. Companies building or providing such tech must closely track evolving EU guidance here.
High-Risk AI Systems
For AI applications classified as high-risk, like those affecting health, safety, fundamental rights, and more, strict obligations apply under the Act. Examples include autonomous vehicles, recruitment tools, credit scoring models, and AI used to determine access to public services.
Requirements will include risk assessments, documentation, transparency, human oversight, and more. There are also special evaluation and reporting procedures when high-risk AI systems seem likely to be involved in any breach of obligations.
Citizens gain the right to file complaints over high-risk AI impacts and ask for explanations of algorithmic decisions affecting them. These provisions acknowledge the growing influence of opaque AI systems over daily life.
General AI and Future Advancements
The rapid expansion of AI capabilities led policymakers to build in measures even for cutting-edge systems yet to be realized fully. General purpose AI, expected to become mainstream within 5-10 years, faces transparency rules around training data and documentation.
For high-impact general AI anticipated down the line, special model checks, risk mitigation processes, and incident reporting apply. So emerging AI fields like natural language chatbots are on notice to eventually meet standards similar to those for high-risk applications.
Supporting Innovation
Will these new obligations stifle European AI innovation and competitiveness? The Act attempts to balance regulation with support for technology development, especially for smaller enterprises.
Regulatory sandboxes let companies test innovative AI under real-world conditions before deployment. Favorable market access procedures aid new entrants. Requirements kick in only once an AI system is placed on the EU market.
Overall, the Act signals that human rights and ethics should guide development, not the other way around. But lawmakers avoided imposing some of the most stringent restrictions tech companies opposed.
Fines for Violations
Failure to meet requirements can result in fines of up to €30 million or 6% of a company's global turnover. Intentional non-compliance draws even harsher penalties, a substantial incentive for companies to comply.
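To make the fine ceiling concrete, here is a minimal sketch of how the cap scales with company size, assuming the standard EU "whichever is higher" formula applied to the figures above (the function name and example turnover values are illustrative, not from the Act's text):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on a fine under the Act: the greater of a
    fixed EUR 30 million cap or 6% of global annual turnover."""
    return max(30_000_000, 0.06 * global_turnover_eur)

# A firm with EUR 200M turnover: 6% is EUR 12M, so the EUR 30M figure governs.
print(max_fine_eur(200e6))  # 30000000.0

# A firm with EUR 1B turnover: 6% is EUR 60M, exceeding EUR 30M.
print(max_fine_eur(1e9))    # 60000000.0
```

In other words, for large multinationals the turnover-based percentage, not the fixed €30 million figure, determines the maximum exposure.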
What It Means for US Tech Companies
American tech giants like Microsoft, IBM, and Google, deeply involved in European markets, will need to implement structures and processes that adhere to the new rules. Smaller startups entering the EU marketplace will want to build compliance into products from the start.
Companies exporting AI software or devices to Europe must determine if products fall under high-risk categories or other designations mandating accountability steps. Strict data and documentation requirements around developing and updating AI systems demand additional staffing and oversight.
While the Act avoids the most burdensome restrictions, adhering to transparency principles and ensuring human oversight of automated decisions requires investment. Tech lobbying failed to defeat obligations reinforcing ethical AI practices many researchers have long called for.
US policymakers have proposed federal guidelines and legislation governing AI systems and companies. However, nothing as comprehensive as the EU's regulatory approach has advanced. That may gradually change as the global impacts of the landmark European Act become more apparent in the coming years.
Glossary of Key Terms
- Biometric identification systems: Technology using biological or behavioral traits – like facial features, fingerprints, gait, and voice – to identify individuals. Examples include facial recognition, fingerprint matching, and iris scans.
- High-risk AI systems: AI technology that presents a significant potential risk of harm to health, safety, fundamental rights, or other areas defined by EU regulators. Self-driving cars and AI tools in critical infrastructure like hospitals exemplify high-risk systems.
- General purpose AI: Artificial intelligence that can perform complex cognitive tasks across many industries and use cases. Sometimes called artificial general intelligence (AGI), it does not yet fully exist, but advanced AI exhibits some broad capabilities.
- Regulatory sandbox: A controlled testing environment allowing developers to try innovative digital products and services while oversight agencies review functionality, risks, and effectiveness before full deployment or marketing.