The Controversial Decision to Award OpenAI the 2023 Hawking Fellowship

The Cambridge Union and Hawking Fellowship committee recently announced their controversial decision to jointly award the 2023 Hawking Fellowship to OpenAI, the creators of ChatGPT and DALL-E. While OpenAI is known for its advancements in AI, the award has sparked debate on whether the company truly embodies the values of the fellowship. 

What the committee saw in OpenAI:

  • OpenAI has successfully shifted perceptions about what AI is capable of through innovations like ChatGPT. Their models represent significant progress in natural language processing.
  • The company has committed to open-sourcing much of its AI research and making its products widely accessible. 
  • OpenAI espouses responsible development of AI to benefit humanity, which aligns with the spirit of the Hawking Fellowship.

However, as a well-funded startup, OpenAI operates more like a tech company than an altruistic non-profit acting for the public good. Its mission to create and profit from increasingly capable AI systems takes precedence over caution. There are concerns about the potential dangers of advanced AI systems that could be misused.

Anyway, in case you didn't watch the above video, here is what Sam Altman's speech highlighted:

  • AI has extraordinary potential to improve lives if developed safely and its benefits distributed equitably. 
  • OpenAI aims to create AI that benefits all humanity, avoiding the profit maximization incentives of big tech companies.
  • They are working to develop safeguards and practices to ensure robust AI systems are not misused accidentally or intentionally.
  • Democratizing access to AI models allows more people to benefit from and provide oversight on its development. 
  • OpenAI is committed to value alignment, though defining whose values to align with poses challenges.
  • Another breakthrough beyond improving language models will likely be needed to reach advanced general intelligence.

While OpenAI is making impressive progress in AI, reasonable concerns remain about safety, ethics, and the company's priorities as it rapidly scales its systems. The Hawking Fellowship committee took a gamble in awarding OpenAI, which could pay off if they responsibly deliver on their mission. But only time will tell whether this controversial decision was the right one.

FAQ

Q: What is OpenAI's corporate structure?

OpenAI started as a non-profit research organization in 2015. In 2019, they created a for-profit entity controlled by the non-profit to secure funding needed to develop advanced AI systems. The for-profit has a capped return for investors, with excess profits returning to the non-profit. 
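To illustrate how a capped-return structure works in principle, here is a minimal sketch of the profit waterfall. The cap multiple and dollar figures below are hypothetical placeholders, not OpenAI's actual investment terms.

```python
def distribute_profits(profit, invested_capital, cap_multiple):
    """Split profits between investors and the controlling non-profit
    under a capped-return structure (illustrative numbers only)."""
    investor_cap = invested_capital * cap_multiple   # maximum total return to investors
    to_investors = min(profit, investor_cap)         # investors are paid up to the cap
    to_nonprofit = profit - to_investors             # everything above the cap flows to the non-profit
    return to_investors, to_nonprofit

# Hypothetical example: $1B invested, 100x cap, $250B lifetime profit
investors, nonprofit = distribute_profits(250e9, 1e9, 100)
print(f"Investors receive ${investors/1e9:.0f}B, non-profit receives ${nonprofit/1e9:.0f}B")
```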

Q: Why did OpenAI change from a non-profit? 

OpenAI realized that, as a non-profit, it could not raise the tens or hundreds of billions of dollars required to develop advanced AI systems. The for-profit model allows them to access capital while still pursuing their mission.

Q: How does the structure benefit OpenAI's mission?

The capped investor returns and non-profit governance let OpenAI focus on developing AI to benefit humanity rather than pursuing unlimited profits. The structure reinforces an incentive system aligned with their mission.

Q: Does OpenAI retain control of the for-profit entity? 

Yes, the non-profit OpenAI controls the for-profit board and thus governs significant decisions about the development and deployment of AI systems.

Q: How does OpenAI use profits to benefit the public?

Any profits the for-profit generates above the capped investor returns flow back to the non-profit, which can use them for public benefit. This could include aligning AI with human values, distributing benefits equitably, and preparing society for AI impacts.

Q: What is Sam Altman's perspective on how universities need to adapt to AI?

Sam Altman believes that while specific curriculum content and educational tools will need to adapt to advances in AI, the core value of university education - developing skills like critical thinking, creativity, and learning how to learn across disciplines - will remain unchanged. Students must fully integrate AI technologies to stay competitive, but banning them out of fear would be counterproductive. Educators should focus on cultivating the underlying human capacities that enable transformative thinking, discovery, and problem-solving with whatever new tools emerge. The next generation may leapfrog older ones in productivity aided by AI, but real-world critical thinking abilities will still need honing. Universities need to modernize their mediums and content while staying grounded in developing the fundamental human skills that power innovation.

Q: What did Sam say about British approach to AI?

Sam Altman spoke positively about the emerging British approach to regulating and governing AI, portraying the UK as a model for thoughtful and nuanced policymaking. He admires the sensible balance the UK government is striking between overseeing AI systems for safety and still enabling innovation. Altman highlighted the alignment across government, companies, and organizations in acknowledging the need for AI safety precautions and regulation. At the same time, the UK approach aims to avoid reactionary measures like banning AI development altogether. Altman sees great promise in constructive dialogues like the UK AI Summit in shaping solutions for governing AI responsibly. He contrasted the reasonable, engaged UK approach with more polarized stances in other countries. Altman commended the UK for its leadership in pragmatically debating and formulating policies to ensure AI benefits society while mitigating risks.

Q: What does Sam think are the critical requirements of a startup founder?

Here are five essential requirements Sam Altman discussed for startup founders:

Determination - Persistence through challenges is critical to success as a founder. The willingness to grind over a long period is hugely important.

Long-term conviction - Successful founders deeply believe in their vision and are willing to be misunderstood long before achieving mainstream acceptance. 

Problem obsession - Founders need an intense focus on solving a problem and commitment to keep pushing on it.

Communication abilities - Clear communication is vital for fundraising, recruitment, explaining the mission, and being an influential evangelist for the startup.

Comfort with ambiguity - Founders must operate amidst uncertainty and keep driving forward before formulas or models prove out.

Q: Why does Sam think the compute threshold needs to be high?

Here are the key points on why Sam Altman believes the compute threshold for AI systems requiring oversight needs to be high:

  • Higher computing power is required to train models whose capabilities pose serious misuse risks.
  • Lower-capability AI systems can provide valuable applications without the same oversight needs.
  • If the compute threshold is set too low, it could constrain beneficial innovation on smaller open-source models.
  • Altman hopes algorithmic progress can keep the dangerous capability threshold high despite hardware advances reducing compute costs.
  • If capabilities emerge at lower compute levels than expected, it would present challenges for governance.
  • But for now, he thinks truly concerning AI abilities will require large-scale models only accessible to significant players.
  • This makes it feasible to regulate and inspect those robust systems above a high compute threshold.
  • Allowing continued open access to lower capability systems balances openness and safety.
  • In summary, a high compute/capability bar enables oversight of risky AI while encouraging innovation on systems not reaching that bar.
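To make the idea of a compute threshold concrete, here is a minimal sketch of how such an oversight rule could look. The FLOP figure and tier names are invented placeholders, not numbers Altman or any regulator has proposed.

```python
# Illustrative compute-threshold rule: models trained above a chosen FLOP
# budget fall under heavier oversight; smaller models stay openly accessible.
# The 1e26 figure is a placeholder, not a real regulatory number.
OVERSIGHT_THRESHOLD_FLOPS = 1e26

def oversight_tier(training_flops: float) -> str:
    if training_flops >= OVERSIGHT_THRESHOLD_FLOPS:
        return "licensed / inspected"   # large frontier runs: external review and audits
    return "open access"                # smaller models: no extra oversight burden

print(oversight_tier(5e24))   # open access
print(oversight_tier(3e26))   # licensed / inspected
```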

Q: How does Sam think value alignment will work for making ethical AI?

Here are the key points on how Sam Altman believes value alignment will allow the development of ethical AI:

  • Part one is solving the technical problem of aligning AI goal systems with human values.
  • Part two is determining whose values the AI should align with - a significant challenge. 
  • Having AI systems speak with many users could help represent collective moral preferences.
  • This collaborative process can define acceptable model behavior and resolve ethical tradeoffs.
  • However, safeguards are needed to prevent replicating biases that disenfranchise minority voices.
  • Global human rights frameworks should inform the integration of values.
  • Education of users on examining their own biases may be needed while eliciting perspectives.
  • The system can evolve as societal values change.
  • Altman believes aligning AI goals with the values of impacted people is an important starting point. 

However, the process must ensure representative input and prevent codifying harmful biases. Ongoing collaboration will be essential.
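As a loose sketch of the collective-preference idea described above, aggregation with a simple minority-protection check might look like the following. The questions, vote counts, supermajority level, and "rights baseline" are all invented for illustration, not part of any actual OpenAI process.

```python
# Sketch of collective preference elicitation for model behavior rules.
# Questions, thresholds, and the rights baseline are illustrative only.
RIGHTS_BASELINE = {"may the model produce targeted harassment?": False}

def decide_policy(question: str, votes: list[bool], supermajority: float = 0.66) -> bool:
    """Adopt a behavior rule only with broad agreement, and never
    against a fixed human-rights baseline."""
    if question in RIGHTS_BASELINE:              # the baseline overrides majority opinion
        return RIGHTS_BASELINE[question]
    support = sum(votes) / len(votes)
    return support >= supermajority              # otherwise require a supermajority

votes = [True] * 70 + [False] * 30
print(decide_policy("may the model refuse medical advice?", votes))  # True (70% support)
```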

Q: What does Sam say about the contemporary history of all technologies?

Sam Altman observed that there has been a moral panic regarding the negative consequences of every major new technology throughout history. People have reacted by wanting to ban or constrain these technologies out of fear of their impacts. However, Altman argues that without continued technological progress, the default state is decay in the quality of human life. He believes precedents show that societal structures and safeguards inevitably emerge to allow new technologies to be harnessed for human benefit over time. 

Altman notes that prior generations created innovations, knowing future generations would benefit more from building on them. While acknowledging new technologies can have downsides, he contends the immense potential to improve lives outweighs the risks. Altman argues we must continue pursuing technology for social good while mitigating dangers through solutions crafted via societal consensus. He warns that abandoning innovation altogether due to risks would forego tremendous progress.

Q: What does Sam think about companies that rely on advertising for revenue, such as the social media mega-companies?

Sam Altman said that while not inherently unethical, the advertising-based business model often creates misaligned incentives between companies and users. He argued that when user attention and data become products to be exploited for revenue, it can lead companies down dangerous paths, prioritizing addiction and engagement over user well-being. Altman observed that many social media companies failed to implement adequate safeguards against harms like political radicalization and youth mental health issues that can emerge when systems are designed to maximize engagement above all else. However, he believes advertising-driven models could be made ethical if companies prioritized societal impact over profits. Altman feels AI developers should learn from the mistakes of ad-reliant social media companies by ensuring their systems are aligned to benefit society from the start.

Q: What does Sam think about open-source AI?

Sam Altman said he believes open-sourcing AI models is essential for transparency and democratization but should be done responsibly. He argued that open-sourcing AI has benefits in enabling public oversight and access. However, Altman cautioned that indiscriminately releasing all AI could be reckless, as large models should go through testing and review first to avoid irreversible mistakes. He feels there should be a balanced approach weighing openness and precaution based on an AI system's societal impact. Altman disagrees both with banning open AI altogether and with entirely unfettered open-sourcing. He believes current large language models are at a scale where open-source access makes sense under a thoughtful framework, but more advanced systems will require oversight. Overall, Altman advocates for openness where feasible but in a measured way that manages risks.

Q: What is Sam's definition of consciousness?

When asked by an attendee, Sam Altman did not provide his definition of consciousness but referenced the Oxford Dictionary's "state of being aware of and responsive to one's surroundings." He discussed a hypothetical experiment to detect AI consciousness by training a model without exposure to the concept of consciousness and then seeing if it can understand and describe subjective experience anyway. Altman believes this could indicate a level of consciousness if the AI can discuss the concept without prior knowledge. However, he stated that OpenAI has no systems approaching consciousness and would inform the public if they believe they have achieved it. Overall, while not explicitly defining consciousness, Altman described an experimental approach to evaluating AI systems for potential signs of conscious awareness based on their ability to understand subjective experience despite having no training in the concept.
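The experiment Altman describes is only a thought experiment, but its outline can be sketched schematically. The function names, term list, and probe prompt below are placeholders, not a real protocol or API.

```python
# Schematic outline of the thought experiment: train a model on a corpus
# scrubbed of any discussion of consciousness, then probe whether it can
# nevertheless describe subjective experience. All names are placeholders.
CONSCIOUSNESS_TERMS = {"consciousness", "qualia", "subjective experience", "self-aware"}

def scrub_corpus(documents):
    """Drop documents that mention consciousness-related concepts."""
    return [d for d in documents if not any(t in d.lower() for t in CONSCIOUSNESS_TERMS)]

def probe(model, prompts):
    """Ask the trained model to describe what it is like to be itself;
    a coherent, unprompted account would count as weak evidence."""
    return [model.generate(p) for p in prompts]

# train_model and model.generate are assumed interfaces, not a real API:
# model = train_model(scrub_corpus(raw_documents))
# answers = probe(model, ["Describe what, if anything, it is like to be you."])
```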

Q: What does Sam think about energy abundance affecting AI safety?

Sam Altman believes energy abundance leading to cheaper computing costs would not undermine AI safety precautions in the near term but could dramatically reshape the landscape in the long run. He argues that while extremely affordable energy would reduce one limitation on AI capabilities, hardware and chip supply chain constraints will remain bottlenecks for years. However, Altman acknowledges that abundant clean energy could eventually enable the training of models at unprecedented scales and rapidity, significantly accelerating the timeline for advancing AI systems to transformative levels. While he feels risks would still be predictable and manageable, plentiful energy could compress the progress trajectory enough to substantially impact the outlook for controlling super-advanced AI over the long term. In summary, Altman sees energy breakthroughs as not negating safety in the short term but potentially reshaping the advancement curve in the more distant future.