Open Sourcing AI is Critical to AI Safety

This post is inspired by recent tweets by Percy Liang, Associate Professor of Computer Science at Stanford University, on the importance of open-sourcing for AI safety.

Developing robust AI systems like large language models (LLMs) raises important questions about safety and ethics. Some argue limiting access enhances safety by restricting misuse. However, we say that open-sourcing AI is critical for safe development. 

First, open access allows diverse researchers to study safety issues. Closed systems prohibit full access to model architectures and weights for safety research. API access or access limited to select groups severely restricts perspectives on safety. Open access enabled innovations like Linux that made operating systems more secure through worldwide collaboration. Similarly, available AI systems allow crowdsourced auditing to understand controlling them responsibly.

For instance, Meta recently open-sourced LLMs like LLama to spur AI safety research and benefit society. The open-source nature of Linux allowed vulnerabilities like Heartbleed to be rapidly detected and patched globally. Closed-source operating systems can take longer to address exploits visible only to limited internal teams. Open AI similarly exposes flaws for broad remediation before risks amplify.

Second, open access lets society confront risks proactively rather than reactively. No one fully grasps the trajectory of increasingly powerful AI. However, open access allows us to monitor capabilities, find vulnerabilities, and build defenses before harm occurs. This is superior to having flaws exposed only upon leakage of a closed system. For example, in the case of Vioxx, the cardiovascular risks were not detected sooner because clinical trial data was not openly shared. For example, open clinical trial data enabled faster detection of Vioxx's cardiovascular risks. With AI, openness stress tests safety measures when the stakes are lower. 

Finally, some believe future AI could pose catastrophic risks beyond control. If so, we should carefully consider whether such technology should exist rather than limiting access to elites. For instance, debates continue on risks from gain-of-function viral research, where openness enables public discussion of such dilemmas. Similarly, open AI systems allow democratic deliberation on the technology's trajectory.

Open access and transparency, not limited to the privileged, is the path most likely to produce AI that is safe, ethical, and beneficial. Open sourcing builds collective responsibility through universal understanding and access. Restricting access is unlikely to achieve safety in the long run.