AI increasingly controls our military, energy, and financial systems, but we do not reliably control AI. As AI advances, the risk of catastrophe is growing rapidly.

It’s an immensely complex problem, but the Large Hadron Collider shows what’s possible when scientific effort matches the scale of a challenge. We need a similar international effort to avoid losing control to chaos, foreign forces, or AI itself.

Experts, including Nobel Laureate Geoffrey Hinton, warn that AI systems will develop the subgoals of survival and control, because these help achieve almost any given goal. Such subgoals could put AI in conflict with humanity. Some empirical observations support this theory, and further research is needed.

Hinton has said, “It would be a shame if humanity disappeared because we didn’t bother to look for the solution.” He urges us to “get the brightest minds and put them on this problem.”

Working at the frontier will require compute resources comparable to those planned by leading AI firms. This could be achieved through government funding, or by requiring AI firms to contribute a portion of their compute.

The project is a powerful investment, as AI will be decisive in global economic competition. Its compute can also drive breakthroughs in science and medicine.

Leaders must urgently form a taskforce to plan perhaps the most important project in history: securing our critical systems and our extraordinary future. AI can be an incredibly positive force, if it can be controlled.