Ethereum Founder Buterin Proposes Defense Strategy Against AI Doom Scenario

As a crypto investor, I’ve been captivated by Vitalik Buterin’s forward-thinking perspective on the next phase of “decentralized and democratic differential defensive acceleration.” He raises an alarm about the potential dangers posed by superintelligent AI, arguing that unless we strike a delicate balance between accelerating protective technologies, promoting openness, and constructing robust legal safeguards, these advanced AIs could pose existential threats to humanity.

Ethereum Founder Warns Of AI Doom

In his recent blog post, he points out that it’s not a given that things will turn out well without intervention. He stresses the urgency of this matter, considering advancements in artificial superintelligence could happen within five years. If we wish to avoid disaster or irrevocable catastrophe, we can’t merely speed up the good; we must also curb the negative by implementing stringent regulations, even if these measures might displease influential figures.

Vitalik Buterin’s idea emphasizes striking a balance between swift technological progress and readiness. He encourages us to create technology that ensures our security without presuming that ‘the good guys (or beneficial AIs)’ are always in control. He cautions against an unmindful competition in AI development or biotechnology, as it might just as likely bestow power upon militaristic forces or harmful individuals instead.

In an intriguing illustration, he envisions a not-too-distant future in which “a sickness that simulations suggest could have been five times as severe as Covid twenty years ago is insignificant today,” thanks to grassroots, collaborative safeguards such as open-source air quality tracking and rapidly evolving vaccine blueprints. He notes that those who have been developing these technologies for years are becoming more cognizant of each other’s work, adding that “the same altruistic principles behind Ethereum and cryptocurrency can be extended to a broader global context.”

The core of the Ethereum co-founder’s defense strategy revolves around a concept he terms “d/acc,” which emphasizes tools that place power in the hands of individuals, rather than governments or corporations, in determining access to essential resources. He argues that forging a wide, inclusive coalition is crucial if we aim to develop a more promising alternative to domination, slow decline, and despair. He highlights the decentralized nature of his approach as a means to prevent a prolonged state of conflict (or “war of all against all”) and to avoid an outcome where only the most powerful hold sway.

He emphasizes the risks associated with centralized entities controlling Artificial Intelligence. “For instance,” he notes, “research on enhancing viral potency (gain-of-function research) funded by numerous major global governments could have potentially triggered the Covid-19 pandemic.” He underscores that excessive central control frequently leads to disastrous outcomes instead of providing robust protection.

The Defense Strategy

He devotes much of his post to two concepts in law and regulation for addressing the possible dangers posed by sophisticated AI. The first is liability: “Assigning liability to users provides a significant incentive to use AI in ways I deem appropriate,” he explains, suggesting that individuals who directly operate AI systems should be responsible for any damages those systems cause.

As an analyst, I recognize the intricacies involved in managing open-source models or powerful military applications, yet I maintain that liability remains a versatile method that steers clear of overfitting. Furthermore, I advocate for the importance of holding both deployers and developers accountable, provided it doesn’t stifle creative advancements with undue legal constraints.

In my view, smaller users may not be held liable, but the typical customer of an AI developer should be. This could serve as a natural force steering potentially harmful AI research toward safer avenues and fostering greater transparency in governance.

His second regulatory approach is bolder. Instead of relying on liability rules, he proposes a global “pause” mechanism for large-scale industrial hardware. In this hypothetical scenario, powerful computing devices used to develop or operate near-superintelligent AI models would require weekly approval signatures from several international organizations, with specialized chips inside these machines serving as the control points.

He explains that this measure seems to cover the necessary ground for gaining advantages while avoiding dangers: reducing worldwide computational power by 90-99% for one to two years could buy humanity the time it needs to react if an uncontrolled AI threat were to escalate.

He points out that a hardware-level halt would be difficult to bypass, since “it wouldn’t be feasible to grant permission for one device to continue functioning without permitting all other devices.” However, he also acknowledges the immense challenge of convincing the global community to adopt such a strategy, saying it would require “dedicated effort in collaborating with each other” rather than relying on a single powerful entity to rule over everyone else.
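Buterin’s post doesn’t spell out an implementation, but the weekly co-signing idea can be pictured as a k-of-n approval check that the chip itself enforces. The sketch below is purely illustrative: the signer names, the threshold, and the hash-based “signature” are all assumptions standing in for real cryptographic signatures and real international bodies.

```python
import hashlib

# Hypothetical k-of-n weekly approval check, sketching the "soft pause" idea:
# a chip keeps running only if enough international bodies have signed off on
# the current week. Real hardware would verify actual cryptographic
# signatures; here a "signature" is just a hash over (body, week).

TRUSTED_BODIES = ["body_a", "body_b", "body_c"]  # hypothetical signers
THRESHOLD = 2  # how many co-signatures a chip needs each week

def sign(body: str, week: int) -> str:
    """Stand-in for a real signature: a hash of the body's name and the week."""
    return hashlib.sha256(f"{body}:{week}".encode()).hexdigest()

def verify(body: str, week: int, signature: str) -> bool:
    """Check that a signature matches the given body and week."""
    return signature == sign(body, week)

def chip_may_run(week: int, signatures: dict[str, str]) -> bool:
    """The chip runs this week only if >= THRESHOLD trusted bodies signed it."""
    valid = sum(
        1 for body in TRUSTED_BODIES
        if body in signatures and verify(body, week, signatures[body])
    )
    return valid >= THRESHOLD
```

With signatures from two of the three bodies for the current week, the chip keeps running; if the bodies withhold their signatures, every chip stops at the next week boundary, which is the all-or-nothing property the quoted passage relies on.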

In a similar vein, Buterin links his thoughts about AI risks to the core principles of Ethereum, open-source programming, and decentralized administration. He emphasizes that the same values that drove Ethereum and cryptocurrency can be extended to the global arena. Notably, he mentions that tools for collaboration such as prediction markets, which are thriving on Ethereum and other blockchain networks, could potentially function as robust safeguards against misinformation and mass hysteria when integrated with privacy technologies like ZK-SNARKs.

He also views “formal verification, sandboxing, secure hardware components, and other technologies” as essential foundations for a cybersecurity barrier strong enough to prevent an AI from taking control of systems. He cautions that such an AI could infiltrate our computers, unleash a global digital pandemic, or sow distrust among people, all potential scenarios of an AI takeover. He emphasizes biosecurity, cybersecurity, and information security as crucial components of the protective structure the Ethereum community can help build.

He additionally explores how these autonomous, safety-centric initiatives might be financed, reinforcing his belief in “robust funding for decentralized public goods” as a means to prevent open-source vaccines, biotech, and encryption tools from withering for lack of profit. He points out that mechanisms like quadratic funding were designed specifically to fund public goods in the most impartial and decentralized manner possible, though he acknowledges that such systems often devolve into popularity contests favoring more eye-catching projects.
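Quadratic funding, mentioned above, has a simple mathematical core: a project’s raw allocation is proportional to the square of the sum of the square roots of its individual contributions, so many small donors collectively outweigh a single large one. A minimal sketch, ignoring the matching-pool normalization that real deployments add on top:

```python
import math

def quadratic_match(contributions: list[float]) -> float:
    """Raw quadratic-funding amount: (sum of sqrt of each contribution)^2.
    The matched subsidy is this amount minus what was directly contributed."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Two projects raising the same 100 units in total:
broad = [1.0] * 100   # 100 donors giving 1 unit each
whale = [100.0]       # 1 donor giving 100 units

broad_total = quadratic_match(broad)  # (100 * sqrt(1))^2 = 10000
whale_total = quadratic_match(whale)  # (sqrt(100))^2     = 100
```

A hundred donors giving one unit each yield a raw total of 10,000, while a single donor giving 100 units yields only 100, illustrating how the mechanism rewards breadth of support over size of any one contribution.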

His newest strategy, referred to as “deep funding,” aims to have artificial intelligence models gather human opinions on which projects should receive financial backing. This is achieved using a “dependency graph” so that contributors can understand how each project complements the work of others. He emphasizes that by utilizing an open competition among multiple AI systems, we minimize bias from any one specific training and administration process. He’s thrilled about cryptocurrency’s potential to mobilize communities around such innovative endeavors.
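The article does not describe deep funding’s actual algorithm, so the following is only a hypothetical illustration of the dependency-graph idea: each project passes a fixed share of its credit down to the projects it depends on, so upstream work is rewarded through the projects built on top of it. The pass-through rate and the single topological pass are assumptions for the sketch.

```python
def distribute_credit(graph: dict[str, list[str]], seed: dict[str, float],
                      passthrough: float = 0.3) -> dict[str, float]:
    """Push a passthrough share of each project's credit to its dependencies,
    splitting it equally among them. `graph` maps project -> dependencies;
    `seed` maps project -> initial credit, keyed in topological order
    (dependents before dependencies), which this one-pass sketch assumes."""
    credit = dict(seed)
    for project in seed:
        deps = graph.get(project, [])
        if not deps:
            continue
        share = credit[project] * passthrough
        credit[project] -= share
        for dep in deps:
            credit[dep] = credit.get(dep, 0.0) + share / len(deps)
    return credit
```

Seeding 100 units on an application that depends on a library, which in turn depends on a compiler, leaves credit at every level of the stack rather than only at the top.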

The author’s article consistently emphasizes that relying solely on defensive or centralized approaches could lead to catastrophe. He argues that efforts to slow technological advancement or shrink the economy fail in two key ways: first, attempting to halt research entirely would inflict massive costs on humanity, and second, it wouldn’t deter malicious actors from continuing their work.

He additionally cautions against tactics overly reliant on “the core,” using the World Health Organization’s initial dismissal of airborne Covid transmission as an illustration of how large entities can make grave mistakes. He firmly believes that a decentralized method would be more effective in addressing risks stemming from the center itself.

The creator of Ethereum concludes by encouraging advocates to recognize that technology can be both dangerous and liberating, depending on its use. “We, as humans,” he emphasizes, “are the shining star,” arguing that global cooperation, open-source collaboration, and a proactive approach to advancement are essential for navigating a century marked by potential advances like superintelligent AI, groundbreaking vaccines, and innovative security technologies.

In simpler terms, the Ethereum co-founder states that having access to tools empowers us to modify both ourselves and our surroundings, and the concept of ‘defense’ here refers to doing so without encroaching on others’ liberties. He further emphasizes that constructing a prosperous 21st century which safeguards human survival, liberty, and autonomy as we venture into space is an arduous task. However, he expresses confidence in our ability to succeed.

At press time, Ethereum traded at $3,639.

2025-01-06 16:42