But the governance of the most powerful systems, as well as decisions about their deployment, must be subject to strong public oversight. We believe that people around the world should democratically decide the limits and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development. We continue to believe that, within these wide limits, individual users should have a lot of control over how the AI they use behaves.
Given the risks and challenges, it’s worth considering why we’re building this technology at all.
At OpenAI, we have two fundamental reasons. First, we believe it will lead to a much better world than we can imagine today (we’re already seeing early examples of this in areas such as education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creative ways everyone will find to use these new tools are certain to astonish us. The economic growth and increase in quality of life will be astonishing.
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so enormous, the cost of building it falls every year, the number of actors building it is growing rapidly, and it is inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.