The Cambridge Union and the Hawking Fellowship committee recently announced their controversial decision to jointly award the 2023 Hawking Fellowship to OpenAI, the creator of ChatGPT and DALL-E. While OpenAI is known for its advances in AI, the award has sparked debate over whether the company truly embodies the values of the fellowship.
What the committee saw in OpenAI:
- OpenAI has successfully shifted perceptions about what AI is capable of through innovations like ChatGPT. Their models represent significant progress in natural language processing.
- The company has committed to open-sourcing much of its AI work and to making its products widely accessible.
- OpenAI espouses responsible development of AI to benefit humanity, which aligns with the spirit of the Hawking Fellowship.
However, as a well-funded startup, OpenAI now operates more like a tech company than the altruistic non-profit it was founded as. Critics worry that its drive to create and profit from increasingly capable AI systems takes precedence over caution, and that such advanced systems could be dangerous if misused.
In case you didn't watch the video above, here are the highlights of Sam Altman's speech:
- AI has extraordinary potential to improve lives if developed safely and its benefits distributed equitably.
- OpenAI aims to create AI that benefits all humanity, avoiding the profit maximization incentives of big tech companies.
- They are working to develop safeguards and practices to ensure powerful AI systems are not misused, whether accidentally or intentionally.
- Democratizing access to AI models allows more people to benefit from AI and to provide oversight of its development.
- OpenAI is committed to value alignment, though defining whose values to align with poses challenges.
- Another breakthrough beyond improving language models will likely be needed to reach advanced general intelligence.
While OpenAI is making impressive progress in AI, reasonable concerns remain about safety, ethics, and the company's priorities as it rapidly scales its systems. The Hawking Fellowship committee took a gamble in awarding OpenAI, which could pay off if they responsibly deliver on their mission. But only time will tell whether this controversial decision was the right one.
FAQ
Q: What is OpenAI's corporate structure?
OpenAI started as a non-profit research organization in 2015. In 2019, they created a for-profit entity controlled by the non-profit to secure funding needed to develop advanced AI systems. The for-profit has a capped return for investors, with excess profits returning to the non-profit.
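To make the capped-return mechanics concrete, here is a minimal sketch in Python. The cap multiple and dollar figures are hypothetical, chosen only for illustration; OpenAI's actual terms are more complex and not fully public.

```python
def distribute_profits(investment: float, cap_multiple: float, total_profit: float):
    """Split profits between a capped-return investor and the non-profit.

    Hypothetical structure for illustration only.
    """
    investor_cap = investment * cap_multiple          # the most the investor can ever receive
    investor_share = min(total_profit, investor_cap)  # investor is paid up to the cap
    nonprofit_share = total_profit - investor_share   # everything above the cap flows to the non-profit
    return investor_share, nonprofit_share

# Example: a $1B investment with a hypothetical 100x cap, against $150B in total profit
investor, nonprofit = distribute_profits(1e9, 100, 150e9)
print(f"Investor: ${investor / 1e9:.0f}B, non-profit: ${nonprofit / 1e9:.0f}B")
# Investor: $100B, non-profit: $50B
```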
Q: Why did OpenAI change from a non-profit?
As a non-profit, OpenAI realized it could not raise the tens or hundreds of billions of dollars required to develop advanced AI systems. The for-profit model allows them to access that capital while still pursuing their mission.
Q: How does the structure benefit OpenAI's mission?
The capped investor returns and non-profit governance let OpenAI focus on developing AI to benefit humanity rather than pursuing unlimited profits. The structure reinforces an incentive system aligned with their mission.
Q: Does OpenAI retain control of the for-profit entity?
Yes, the non-profit OpenAI controls the for-profit board and thus governs significant decisions about the development and deployment of AI systems.
Q: How does OpenAI use profits to benefit the public?
Any profits of the for-profit above the capped investor returns flow back to the non-profit, which can use them for public benefit. This could include aligning AI with human values, distributing benefits equitably, and preparing society for AI's impacts.
Q: What is Sam Altman's perspective on how universities need to adapt to AI?
Q: What did Sam say about the British approach to AI?
Q: What does Sam think are the critical requirements of a startup founder?
Q: Why does Sam think the compute threshold needs to be high?
Here are the key points on why Sam Altman believes the compute threshold for advanced AI systems requiring oversight needs to be high:
- Higher computing power is required to train models that reach capabilities posing serious misuse risks.
- Lower-capability AI systems can provide valuable applications without the same oversight needs.
- If the compute threshold is set too low, it could constrain beneficial innovation on smaller open-source models.
- Altman hopes algorithmic progress can keep the dangerous capability threshold high despite hardware advances reducing compute costs.
- If capabilities emerge at lower compute levels than expected, it would present challenges for governance.
- But for now, he thinks truly concerning AI abilities will require large-scale models accessible only to major players.
- This makes it feasible to regulate and inspect those powerful systems above a high compute threshold.
- Allowing continued open access to lower capability systems balances openness and safety.
- In summary, a high compute/capability bar enables oversight of risky AI while encouraging innovation on systems below that bar; the toy sketch after this list illustrates the idea.
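As a sketch of the kind of threshold policy described above, consider the following Python snippet. The threshold value and FLOP counts are invented for illustration and are not actual regulatory figures:

```python
# Hypothetical policy: training runs above a chosen FLOP budget trigger
# oversight obligations; smaller runs remain unrestricted.
OVERSIGHT_THRESHOLD_FLOPS = 1e26  # illustrative value only

def requires_oversight(training_flops: float) -> bool:
    """Return True if a training run exceeds the hypothetical oversight threshold."""
    return training_flops >= OVERSIGHT_THRESHOLD_FLOPS

for name, flops in [("small open-source model", 1e22),
                    ("frontier-scale model", 5e26)]:
    status = "requires oversight" if requires_oversight(flops) else "unrestricted"
    print(f"{name}: {flops:.0e} FLOPs -> {status}")
```

The point of such a rule is the one Altman makes: systems below the bar stay open and unencumbered, while the handful of actors who can afford frontier-scale training runs become the natural targets of regulation.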
Q: How does Sam think value alignment will work for making ethical AI?
Here are the key points on how Sam Altman believes value alignment will allow the development of ethical AI:
- Part one is solving the technical problem of aligning AI goal systems with human values.
- Part two is determining whose values AI should align with, which is a significant challenge.
- Having AI systems speak with many users could help represent collective moral preferences.
- This collaborative process can define acceptable model behavior and resolve ethical tradeoffs.
- However, safeguards are needed to prevent replicating biases that disenfranchise minority voices.
- Global human rights frameworks should inform the integration of values.
- Users may need education in examining their own biases while their perspectives are elicited.
- The system can evolve as societal values change.
- Altman believes aligning AI goals with the values of impacted people is an important starting point.
However, the process must ensure representative input and prevent codifying harmful biases. Ongoing collaboration will be essential; the sketch below gives one toy illustration of aggregating collective input.
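As a purely illustrative sketch of what aggregating user input on model behavior might look like, consider the Python snippet below. This is not OpenAI's method; the scenarios, the voting rule, and the human-rights "protected rules" check are all invented for the example:

```python
from collections import Counter

def aggregate_preferences(votes, protected_rules):
    """Choose the majority-preferred behavior per scenario, except that
    rules marked as protected (e.g. derived from human-rights frameworks)
    cannot be overridden by any majority.

    Toy illustration only; real value alignment is far harder than voting.
    """
    policy = {}
    for scenario, ballots in votes.items():
        if scenario in protected_rules:
            policy[scenario] = "refuse"  # non-negotiable, regardless of the vote
        else:
            behavior, _count = Counter(ballots).most_common(1)[0]
            policy[scenario] = behavior
    return policy

votes = {
    "give medical advice": ["allow with disclaimer", "allow with disclaimer", "refuse"],
    "generate targeted harassment": ["allow", "allow", "refuse"],
}
protected = {"generate targeted harassment"}  # the majority cannot unprotect this
print(aggregate_preferences(votes, protected))
# {'give medical advice': 'allow with disclaimer', 'generate targeted harassment': 'refuse'}
```

Even this toy version shows why the problem is hard: the interesting design questions are which rules get protected status, who decides that, and how minority preferences survive majority voting.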
Q: What does Sam say about the contemporary history of all technologies?
Sam Altman observed that there has been a moral panic regarding the negative consequences of every major new technology throughout history. People have reacted by wanting to ban or constrain these technologies out of fear of their impacts. However, Altman argues that without continued technological progress, the default state is decay in the quality of human life. He believes precedents show that societal structures and safeguards inevitably emerge to allow new technologies to be harnessed for human benefit over time.
Altman notes that prior generations created innovations knowing that future generations would benefit even more by building on them. While acknowledging new technologies can have downsides, he contends the immense potential to improve lives outweighs the risks. Altman argues we must continue pursuing technology for social good while mitigating dangers through solutions crafted via societal consensus. He warns that abandoning innovation altogether due to risks would forgo tremendous progress.