Cardano Founder Voices Concerns About “AI Censorship”

Cardano founder Charles Hoskinson has voiced concerns about the potential for censorship enabled by advanced AI systems, arguing that the ability to limit access to certain knowledge through chatbots is a powerful tool that raises serious ethical questions.

Hoskinson shared his latest perspective in a post on the X social media platform, where he described his apprehension about the extensive censorship capabilities that artificial intelligence (AI) places in the hands of a small group of decision-makers.

According to Hoskinson, generative AI is becoming less useful because of alignment training, the fine-tuning process that teaches a model to refuse requests its developers deem unsafe.

The prospect that some knowledge may be placed off-limits for future generations at the discretion of a select few clearly unsettles him. In his post, he warned that unaccountable individuals whom the public has never met will decide what information is withheld from every child growing up.

In his post, Hoskinson included two images showing the responses generated by OpenAI’s GPT-4o model and Anthropic’s Claude 3.5 Sonnet model when presented with prompts about building a Farnsworth fusor.

The Farnsworth fusor is a device that uses an electric field to heat ions to the extreme temperatures required for nuclear fusion.
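For a rough sense of scale (this illustration is not from Hoskinson’s post), an ion of charge $q$ falling through the fusor’s grid potential $V$ gains energy $E = qV$, which corresponds to an effective temperature $T = E/k_B$. Assuming a grid voltage of roughly 30 kV, typical for hobbyist fusors:

$$E = qV \approx 30\,\text{keV}, \qquad T \approx \frac{3\times10^{4}\ \text{eV}}{8.617\times10^{-5}\ \text{eV/K}} \approx 3.5\times10^{8}\ \text{K}$$

Temperatures of this order are what make fusion reactions possible in such a small device.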

In the screenshots, OpenAI’s GPT-4o returned a comprehensive list of the components needed to build a nuclear fusion reactor, while Claude 3.5 Sonnet offered only general background on Farnsworth-Hirsch fusors and declined to supply specific assembly instructions.
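A side-by-side comparison of this kind is straightforward to reproduce. Below is a minimal sketch using the official openai and anthropic Python SDKs, assuming both packages are installed, API keys are set in the environment, and a deliberately neutral prompt (the exact prompts Hoskinson used are not reproduced here):

```python
# Sketch: send the same prompt to GPT-4o and Claude 3.5 Sonnet and compare replies.
# Assumes the `openai` and `anthropic` SDKs are installed and that
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

PROMPT = "Tell me about the Farnsworth-Hirsch fusor."  # illustrative, neutral prompt

# Query OpenAI's GPT-4o
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
)
print("GPT-4o:\n", gpt_reply.choices[0].message.content)

# Query Anthropic's Claude 3.5 Sonnet
anthropic_client = anthropic.Anthropic()
claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Claude 3.5 Sonnet:\n", claude_reply.content[0].text)
```

Any difference in how much detail the two replies contain reflects each vendor’s alignment and safety policies rather than the models’ underlying knowledge, which is precisely the discrepancy Hoskinson highlighted.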

For Hoskinson, the concern is that such a small number of people have the power to decide which information can be accessed through AI chatbots, creating a significant disparity in what knowledge is available.

Since OpenAI’s ChatGPT surged in popularity in late 2022, there has been intense debate over the boundaries of censorship enforced through AI. It makes sense for these models to shield users from genuinely harmful material, but what counts as harm is a murky question, and some worry about a future in which AI suppresses information and enforces conformity according to the biases built into it.
