Alex Petropoulos is a political commentator with Young Voices.
The labs at the forefront of AI development are currently engaged in a race towards more capable (and more dangerous) AI models. Last month, Google’s two AI divisions, DeepMind and Google Brain, merged just to keep up with the rapid pace of development.
Although developers want to slow down to make their technologies as safe as possible, as long as they are in competition with one another, they can’t risk falling behind. Fearing that the competition will produce the next breakthrough, each lab is incentivised by the market’s rapid pace to bypass safety measures and pump out products. As a result, they are cutting corners with some of the most transformative technology the world will see.
To solve this problem, we desperately need more trust between AI labs. In order to create the safest artificial intelligence possible, developers must capitalise on that which makes them most human: their ability to cooperate. The simplest and surest way to achieve this cooperation is to bring them into a shared space.
Humans naturally build trust through the small everyday interactions that arise from sharing a physical community. The same could happen between developers. By bringing together the best and brightest minds working on advancing both AI capabilities and AI safety research, we could establish “AI Development Sanctuaries” – co-operative hubs for the development of AI and AI safety.
Importantly, co-operation and proximity don’t mean a complete removal of competition: we aren’t yet at the “merge all AI labs immediately” stage in AI development, and we must preserve some element of competition between developers in order to increase the chances of arriving at the best solutions. However, even a marginal increase in trust between AI companies could help slow the rapid pace at which the market is moving and create space to develop safer technologies.
The UK would be a prime location for such a sanctuary. Already home to Google DeepMind – one of the frontrunners in the AI race – London is uniquely situated as both a political and technological capital. Establishing an AI Development Sanctuary would bring numerous political and economic benefits both today and into the future.
Apart from the long-term benefits for AI safety, pursuing AI Development Sanctuaries would provide much-needed economic growth for the UK right away. Although the UK’s stringent and restrictive planning system currently impedes economic growth, the political motivation to house these sanctuaries would incentivise policymakers to work around it, allowing for development not only in the AI sphere but also in the energy and housing sectors.
By using an “AI Development Veto” on any objections to construction, we’d be able to quickly and decisively build plentiful housing for researchers; data centres and exascale computers to train advanced models; and grid infrastructure and new electricity generation to accommodate the massive energy demands of developing large AI models. Under the umbrella of national security and AI Development Sanctuaries, we can sidestep this planning stumbling block and build the infrastructure that will benefit both the AI industry and the economy as a whole.
Trust between developers would accumulate over time, which is why the sooner we begin this project, the more likely developers are to have built a cooperative foundation by the time it really counts. But AI development wouldn’t just benefit at the end of the line. Having everyone in centralised locations would allow for more effective monitoring, communication and transparency, all of which are vital to building the best and safest technologies.
Cooperation of the kind sanctuaries would encourage is already imminent. As we progress into the later stages of the AI race and approach “strong AI” (AI capable of transformative positive change, but also of catastrophic damage), AI labs will have to begin winding down their own projects and working to assist those leading the race (thus removing competition and allowing the leaders to spend adequate time on safety).
OpenAI, the current leader in the race, already has this “assist clause” in its founding charter. It’s important to create environments that would enable these transitions to happen swiftly and smoothly. Having researchers already coexisting in the same physical community would do just that.
This plan is not without its risks. To avoid fuelling a state-sponsored AI race in which countries compete for labs, or allowing any one government to interfere or commandeer technology for military purposes, we would need to establish guarantees for sanctuaries that would grant them immunity from any single government, local or otherwise. These guarantees would protect labs from government intervention and incentivise developers to relocate.
We don’t have to choose between safety and growth, or between long-term and short-term benefits. AI Development Sanctuaries could help make us richer and safer both now and down the road as technologies grow increasingly powerful. But in order to build trust between developers by the time it really matters, we need to build these sanctuaries now.