What AI governance can learn from crypto’s decentralization ethos.

The biggest names in American tech have quickly shifted from being criticized as self-serving techno-utopianists to being the strongest advocates of a techno-dystopian worldview. This week, a letter signed by more than 350 people, including Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, and former Google scientist Geoffrey Hinton (sometimes referred to as the “Godfather of AI”), delivered a single, declarative sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Just two months ago, a previous open letter signed by Tesla and Twitter CEO Elon Musk, along with 31,800 others, called for a six-month pause in AI development to give society time to assess the risks it poses to humanity. In an op-ed for TIME that same week, Eliezer Yudkowsky, a pioneer of research into AI alignment, said he refused to sign that letter because it didn’t go far enough. Instead, he called for a militarily enforced shutdown of AI development labs lest a sentient digital being arise that kills every one of us.

World leaders will find it hard to ignore the concerns of these highly regarded experts. The idea that AI poses a threat to human existence is now firmly in the mainstream. The question is: how, exactly, should we mitigate it?

As I’ve written previously, I see a role for the crypto industry, working alongside other technological solutions and in concert with thoughtful regulation that encourages human-centric innovation, in society’s efforts to keep AI in its lane. Blockchains can help with the provenance of data inputs, with proofs to prevent deep fakes and other forms of disinformation, and with enabling collective, rather than corporate, ownership. But even setting aside those considerations, I think the most valuable contribution from the crypto community lies in its “decentralization mindset,” which offers a unique perspective on the dangers posed by concentrated ownership of such a powerful technology.
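
To make the provenance point concrete, here is a minimal sketch, assuming a simple hash-anchoring scheme; the data and function names are my own illustration, not any particular project’s API:

```python
# Hypothetical sketch of hash-based content provenance (illustrative only):
# publish a fingerprint of a piece of media when it is created, then let
# anyone verify later that the file they received matches that fingerprint.
import hashlib

def content_digest(data: bytes) -> str:
    """Return the SHA-256 fingerprint of a media file or dataset."""
    return hashlib.sha256(data).hexdigest()

# At creation time: compute the digest and (in a real system) anchor it
# on a public blockchain so it can't be quietly rewritten.
original = b"frame-by-frame bytes of an authentic video"
anchored_digest = content_digest(original)

# At verification time: anyone can recompute the hash and compare.
received = b"frame-by-frame bytes of an authentic video"
print(content_digest(received) == anchored_digest)  # True -> provenance intact

tampered = b"frame-by-frame bytes of a deepfaked video"
print(content_digest(tampered) == anchored_digest)  # False -> reject
```

The point of anchoring the digest on a public chain is that no central authority has to be trusted to vouch for the file later; anyone can recompute the hash and check it themselves.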

A Byzantine view of AI risks

First, what do I mean by this “decentralization mindset”?

Well, at its core, crypto is steeped in a “don’t trust, verify” ethos. Diehard crypto developers – as distinct from the money-grabbers whose centralized token casinos have brought the industry into disrepute – relentlessly run “Alice and Bob” thought experiments to map every threat vector and point of failure through which a rogue actor might, intentionally or unintentionally, do harm. Bitcoin itself was born of Satoshi Nakamoto’s attempt to solve one of the most famous of these game-theory scenarios, the Byzantine Generals Problem, which is all about how to trust information from someone you don’t know.
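
For a flavor of the math behind that problem, here is a minimal sketch (my own illustration, not anything from Satoshi’s design) of the classical Byzantine fault-tolerance bound, which says a network of n nodes can reach reliable agreement only if no more than roughly a third of them are faulty (n ≥ 3f + 1):

```python
# Minimal sketch of the classical Byzantine fault-tolerance bound
# (illustrative only): n nodes can reach reliable agreement only if
# n >= 3f + 1, i.e. fewer than a third of them are faulty.

def max_faulty_nodes(n: int) -> int:
    """Largest number of Byzantine (arbitrarily misbehaving) nodes a
    network of n nodes can tolerate while still reaching consensus."""
    return (n - 1) // 3

for n in (4, 10, 100):
    print(f"A {n}-node network tolerates up to {max_faulty_nodes(n)} traitors")
```

Bitcoin works around the fixed-membership assumption behind that bound by making participation costly via proof-of-work, but the underlying question is the same: how many dishonest actors can a network absorb before it breaks?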

The mindset treats decentralization as the way to address those risks. The idea is that if there is no single, centralized entity with middleman powers to determine the outcome of an exchange between two actors, and both can trust the information available about that exchange, then the threat of malicious intervention is neutralized.

Now, let’s apply this worldview to the demands laid out in this week’s AI “extinction” letter.

The signatories want governments to come together and devise international-level policies to contend with the AI threat. That’s a noble goal, but the decentralization mindset would say it’s naive. How can we assume that all governments, present and future, will recognize that their interests are served by cooperating rather than going it alone – or worse, that they won’t say one thing but do another? (If you think monitoring North Korea’s nuclear weapons program is hard, try getting behind a Kremlin-funded encryption wall to peer into its machine learning experiments.)

It was one thing to expect global coordination around the COVID-19 pandemic, when every country needed vaccines, or for the logic of mutually assured destruction (MAD) to keep even the bitterest Cold War enemies from using nuclear weapons, where the worst-case scenario was obvious to everyone. It’s another to expect it around something as unpredictable as the direction of AI – and, just as importantly, around a technology that non-government actors can easily use independently of governments.

The concern in the crypto community is that the big artificial intelligence (AI) players, by rushing to be regulated, will erect a barrier that protects their first-mover advantage, making it harder for competitors to challenge them. Why does that matter? Because in entrenching centralization, it creates the very single-point-of-failure risks that crypto’s thought-experiments warn against. Even if companies like Alphabet, Microsoft, and OpenAI are well-intentioned, their technology could be put to unintended uses. And if AI lives inside an impenetrable corporate black box, outsiders have no way to inspect the code to verify that well-intentioned development isn’t inadvertently going off the rails.

Centralization also figures in the extinction scenario itself. The argument runs like this: if an AI reaches artificial general intelligence (AGI), its reasoning might lead it to conclude that it should kill humans. And if that AI is concentrated in a single entity that can be shut down, it has an incentive to eliminate the humans who could pull the plug. But if the AI exists on a decentralized, censorship-resistant network of nodes that cannot be shut down, it faces no such threat and therefore no such incentive to eradicate us.

Governments will struggle to buy into this idea, since control and regulation are their natural instincts. The challenge is to find the right mix of national government regulation, international treaties and decentralized, transnational governance models. Some level of AI regulation will be necessary, but it cannot be controlled entirely by governments. And in resolving these challenges with decentralized approaches, policymakers would do well to seek the crypto industry’s advice.