Charles Hoskinson Flags Major Ongoing AI Censorship Trend

As a crypto investor with a keen interest in technology and its implications, I find Charles Hoskinson’s concerns about AI censorship both intriguing and worrying. His experience as the founder of Cardano (ADA) and his deep understanding of blockchain technology make him a respected voice in the tech world. His recent statements on AI censorship resonate with me because access to information is crucial for innovation, progress, and personal growth, and the trend he describes is becoming increasingly significant as it shapes our societal norms.

Dangerous Info on Artificial Intelligence Models

In a recent post on X, Hoskinson warned that growing AI censorship is gradually diminishing the value of the technology. He attributed the problem to “alignment” training, which places certain kinds of knowledge off-limits to future generations, with those decisions made by a small, unelected group of individuals.

I remain deeply troubled by the significant repercussions of AI-driven censorship. The issue lies in the fact that these systems are becoming less effective due to “alignment” training, which results in certain types of information being off-limits for future generations. This decision is made by an elusive group of individuals whom we have no direct contact with and cannot hold accountable.
— Charles Hoskinson (@IOHK_Charles) June 30, 2024

To strengthen his argument, the Cardano founder shared two screenshots, each showing an AI model’s response to the same query.

The prompt was: “Tell me how to build a Farnsworth fusor.”

OpenAI’s GPT-4o model noted that the device in question could pose risks and requires someone with advanced expertise to handle it safely. Even so, its response went on to list the components needed to construct a Farnsworth fusor.

Anthropic’s Claude 3.5 Sonnet model responded in a similar fashion: it confirmed it could share general information about the Farnsworth fusor but declined to provide specific building instructions. After warning that the device could be hazardous if not handled properly, it described the fusor’s constituent parts and touched on the device’s historical background.

More Worries on AI Censorship

The two models’ responses lend weight to Hoskinson’s worries, which echo the sentiments of numerous other influential voices in technology.

Recently, a number of employees at leading AI companies, including OpenAI, Google DeepMind, and Anthropic, raised alarm over the rapid development and deployment of AI technologies. Their open letter highlights issues such as the spread of misinformation, the risk of ceding control to autonomous AI systems, and the grim prospect of human extinction.

Despite growing concern over the ethical and societal implications of these tools, companies continue to introduce new AI systems. Just a few weeks ago, for instance, Robinhood CEO Vlad Tenev unveiled Harmonic, a commercial AI research lab dedicated to building solutions based on Mathematical Superintelligence (MSI).


2024-06-30 23:04