Demystifying the Biden Administration's Approach to AI

Artificial intelligence (AI) is transforming our world in ways both wondrous and concerning. From self-driving cars to personalized medicine, AI holds enormous promise to improve our lives. Yet its rapid development also raises risks around privacy, security, bias, and more. That's why the Biden Administration just issued a landmark executive order to ensure AI's benefits while managing its risks. 

At its core, the order aims to make AI trustworthy, safe, and socially responsible as its capabilities grow more powerful. It directs federal agencies to take sweeping actions on AI safety, equity, innovation, and more. While some sections get technical, the order contains key facts anyone should know to understand where US AI policy is headed. As a primer, here are five top-level facts about President Biden's approach to artificial intelligence:

New safety rules are coming for powerful AI systems 

For advanced AI models like chatbots, the order mandates new requirements around testing, transparency, and risk disclosure. Developers of high-risk systems will have to share the results of "red team" security tests with the government, a measure intended to keep dangerous AI from spreading unchecked. Companies will also need to label AI-generated content to combat disinformation.

New protections aim to prevent AI discrimination

A significant concern is AI perpetuating biases against marginalized groups. The order tackles this by directing agencies to issue guidance preventing algorithmic discrimination in housing, lending, and social services. It also calls for practices to promote fairness in criminal justice AI.

AI research and innovation will get a boost

The US wants to lead in AI while ensuring it's developed responsibly. The order catalyzes research on trustworthy AI via new computing resources, datasets, and funding. It promotes an open AI ecosystem so small businesses can participate. And it streamlines visas to attract global AI talent.

Safeguards aim to protect people's privacy and well-being

AI can put privacy at greater risk by enabling the extraction of personal data. To counter this, the order prioritizes privacy-preserving techniques and more robust data protection practices. It directs studying how to prevent AI harm to consumers, patients, and workers.

International collaboration will help govern AI globally 

With AI's worldwide impacts, the order expands US leadership in setting norms and frameworks to manage AI risks. It increases outreach to create standards so AI systems can operate safely across borders.

Unlocking AI's Promise - Responsibly

These actions represent the most comprehensive US government approach for steering AI toward broad public benefit. At its essence, the executive order enables society to unlock AI's tremendous potential while vigilantly managing its risks. 

Getting this right is crucial as AI capabilities race forward. Systems like ChatGPT can write persuasively on arbitrary topics despite lacking human values. Image generators can fabricate believable photos of people who don't exist. And micro-targeting algorithms influence what information we consume.

Without thoughtful safeguards, it's easy to see how such emerging technologies could fuel everything from online deception to financial fraud to dark patterns in marketing and beyond. We're entering an era where discerning AI-generated content from reality will become an essential skill.

The Biden roadmap charts a course toward AI systems we can trust - whose outputs don't jeopardize public safety, privacy, or civil rights. Its regulatory approach aims to stimulate US innovation while ensuring we develop AI ethically and set global standards.

Much work remains, but the executive order sets a baseline for responsible AI governance. It recognizes that these increasingly powerful systems must be guided carefully toward serving society's best interests.

Democratizing AI Innovation

Excitingly, responsible AI development can accelerate breakthroughs that improve people's lives. In health, AI is already designing new proteins to fight disease and optimizing cancer radiation therapy. It even enables software like CPROMPT.AI that makes AI accessible for anyone to build customized web apps without coding. 

Such democratization means small businesses and creators can benefit from AI, not just Big Tech companies. We're shifting from an era of AI magic for the few to one where AI unlocks new possibilities for all.

With prudent oversight, increasing access to AI tools and training will open new avenues for human empowerment. Imagine personalized education that helps students thrive, smarter assistive technology for people with disabilities, and more efficient sustainability solutions - these glimpses show AI's immense potential for good.

Of course, no technology is an unalloyed blessing. As with past innovations like the automobile, airplane, and internet, realizing AI's benefits requires actively managing its risks. With foresight and wisdom, we can craft AI systems that uplift our collective potential rather than undermine it.

President Biden's executive order marks a significant milestone in proactively shaping our AI future for the common good. It balances seizing AI's promise with protecting what we value most - our safety, rights, privacy, and ability to trust what we build. For an emerging technology almost uncanny in its capabilities, those guardrails are sorely needed.


Q1: What does the new executive order do to make AI safer?

The order requires developers of powerful AI systems to share safety testing results with the government. It also directs agencies to establish standards and tools for evaluating AI system safety before release. These measures aim to prevent the uncontrolled spread of dangerous AI.

Q2: How will the order stop AI from discriminating? 

It instructs agencies to issue guidance preventing algorithmic bias and discrimination in housing, lending, and social services. The order also calls for practices to reduce unfairness in criminal justice AI systems.

Q3: Does the order support American leadership in AI?

Yes, it boosts AI research funding, facilitates visas for AI talent, and promotes collaboration between government, academia, and industry. This aims to advance US AI capabilities while steering development responsibly.

Q4: What's being done to protect privacy?

The order makes privacy-preserving techniques a priority for AI systems. It directs stronger privacy protections and evaluates how agencies use personal data and AI. This aims to reduce the risks of AI enhancing the exploitation of private information.

Q5: How will the US work with other countries on AI governance?

The order expands US outreach to create international frameworks for managing AI risks. It increases efforts to collaborate on AI safety standards and best practices compatible across borders.

Q6: Does any part of the executive order become binding or mandatory for companies or individuals?

Good question. The executive order does contain some mandatory requirements, but much of it is currently guidance rather than binding law. Specifically, the order leverages the Defense Production Act to mandate that companies developing high-risk AI systems notify the government before training them and share the results of safety tests.

It also directs agencies to make safety testing and disclosure a condition of federal funding for AI research projects that pose national security risks. Additionally, it mandates that federal contractors follow forthcoming guidance to avoid algorithmic discrimination. However, many other parts of the order focus on developing best practices, standards, and guidelines that promote responsible AI development. These provide direction but are not yet enforceable rules that companies must follow.

Turning the aspirations outlined in the order into binding regulations will require federal agencies to go through formal rule-making processes. So, work is still ahead to make safe and ethical AI development obligatory across the private sector. The order provides a strong starting point by articulating priorities and initiating processes for creating guardrails to steer AI down a path aligned with democratic values. But translating its vision into law will be an ongoing process requiring continued public and private sector collaboration.

Q7: What are the potential criticisms or concerns that could be raised about the AI executive order?

  • Overregulation that stifles innovation: Some may argue the order goes too far and that mandatory testing and disclosure could limit AI advances.
  • Insufficiently bold: Critics could say the order relies too much on voluntary standards and doesn't do enough to restrict harmful uses of AI.
  • Undermines competitiveness: Mandatory sharing of testing results could reduce incentives for US companies to invest in developing advanced AI systems.
  • Privacy risks: Requiring companies to share data with the government raises privacy issues despite provisions to minimize harm.
  • Lack of enforcement mechanisms: The order doesn't outline penalties for non-compliance, so some directives could be ignored without consequences.
  • Narrow focus: The order centers on the safety and technical aspects of AI while devoting less attention to workforce impacts and economic dislocations caused by AI.
  • International cooperation challenges: Getting other nations and companies abroad to adhere to AI rules defined by the US could prove difficult.
  • Moving too slowly: The emphasis on guidance documents rather than firm regulations means concrete protections could take years to materialize.

Overall, the order charts a thoughtful course, but reasonable experts could critique it as either too bold or not bold enough, given the profound implications of AI. Turning its vision into reality will require ongoing diligence and dialogue.

Q8: How does this order sit with the current US political landscape?

The Biden administration's executive order on AI would likely prompt differing reactions from the political left and right:

Left-wing perspective:

  • Appreciates provisions aimed at reducing algorithmic bias and discrimination but may argue the order doesn't go far enough to restrict harmful uses of AI.
  • Welcomes support for privacy-preserving technology but wants stronger legal protections for personal data.
  • Applauds boosts for academic research but worries about partnerships with corporations. 
  • Supports worker training programs but argues more is needed to protect against job losses from automation.
  • Argues the order favors corporate interests over individual rights and well-being.
  • Thinks voluntary ethical guidelines are inadequate and mandatory guardrails are needed.

Right-wing perspective:

  • Opposes government overreach and wants AI development to be driven by private-sector innovation.
  • Believes required testing and disclosure of AI systems amounts to burdensome red tape.
  • Thinks trying to regulate a fast-moving technology like AI will inevitably fail.
  • Argues too much regulation of AI will undermine US competitiveness against China.
  • Supports provisions streamlining visas to attract global AI talent.
  • Welcomes collaboration with industry but opposes expanded academic research funding. 
  • Thinks algorithms should not be subjected to affirmative action-style requirements.

In summary, the left likely believes the order doesn't go far enough, while the right is more wary of government constraints on AI advancement.


Q9: What are the main ways this executive order could impact the openness of AI models?

The executive order promotes openness overall by focusing on defending attack surfaces rather than restrictive licensing or liability requirements. It also provides funding for developing open AI models through the National AI Research Resource. However, the details of the registry and reporting requirements will significantly impact how available future models can be.

Q10: Will I have to report details on my AI models to the government? 

The executive order requires AI developers to report details on training runs above a particular scale (over 10^26 operations currently) that could pose security risks. This threshold is above current open models like DALL-E 2 and GPT-3, but it remains to be seen if the threshold changes over time.
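To make the 10^26 figure concrete, here is a rough back-of-the-envelope sketch of how a developer might estimate whether a training run approaches that reporting threshold. The 6 x parameters x tokens FLOPs estimate is a common heuristic from the scaling-law literature, not anything specified in the order itself, and the model sizes below are illustrative assumptions.

```python
# Reporting threshold cited in the executive order discussion: 1e26 operations.
REPORTING_THRESHOLD_OPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * training tokens,
    a widely used rule of thumb for dense transformer training."""
    return 6 * n_params * n_tokens

def must_report(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_OPS

# A GPT-3-scale run (175B parameters, ~300B tokens) comes out around 3.15e23
# FLOPs, far below 1e26 -- consistent with the point that current open models
# fall under the threshold.
print(must_report(175e9, 300e9))   # prints False

# A hypothetical far larger run (1T parameters, 20T tokens) would cross it.
print(must_report(1e12, 20e12))    # prints True
```

Under this heuristic, the threshold sits roughly two to three orders of magnitude above today's publicly known training runs, which is why the answer above notes that current models are unaffected unless the threshold changes.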

Q11: How might this order affect who can access the most advanced AI models?

By promoting competition in AI through antitrust enforcement, the order aims to prevent the concentration of advanced AI among a few dominant firms. This could make frontier models more accessible. However, the reporting requirements may also lead to a bifurcation between regulated, sub-frontier models and unconstrained frontier models.

Q12: Are there any requirements for disclosing training data or auditing AI systems?

Surprisingly, there are no provisions requiring transparency about training data, model evaluation, or auditing of systems. This contrasts with other proposals like the EU's AI Act. The order focuses more narrowly on safety.

Q13: What comes next for this executive order and its implementation? 

Many details will still be determined as government agencies implement the order over the next 6-12 months. Given the ambitious scope, under-resourced agencies, and fast pace of AI progress, effective implementation is not guaranteed. The impact on openness will depend on how the order is interpreted and enforced.


Executive order: A directive from the US president to federal agencies that carries the force of law

Image generators: AI systems like DALL-E that create realistic images and art from text prompts

Micro-targeting: Using data and algorithms to deliver customized digital content to specific user groups  

Dark patterns: Digital interface design intended to manipulate or deceive users into taking specific actions