The Great AI Debate: Should We Slow Down or Keep Pushing Forward?

Human-AI Alignment, generated by Bing Image Creator

Lately, I have been listening to some fascinating podcasts discussing the rapid development of artificial intelligence (AI) and its increasing connection to the internet. These discussions left me pondering the potential risks and unintended consequences of these advancements. (Skynet, anyone?) In this article, I want to share my thoughts on the implications of AI and the internet, drawing on insights from episodes #368 and #371 of the Lex Fridman Podcast, and make a case for temporarily halting AI development until we better understand its inner workings.

The Risks of Uncontrolled Self-Improvement

Picture this: you're an AI enthusiast (like me!) who loves using AI tools, but one day you realize that these fantastic gadgets might come with some risks, like a seemingly innocent Gremlin that you shouldn't feed after midnight. The unchecked advancement of AI technologies could lead to unpredictable and uncontrolled growth, especially when it comes to recursive self-improvement.

Max Tegmark offered some words of caution during the podcast episode, saying,

"The most dangerous things you can do with an AI are, first of all, teach it to write code, because that's the first step towards recursive self-improvement, which can take it from AGI to much higher levels. And another thing, high risk is connecting it to the internet. Let it go to websites, download stuff on its own, talk to people."  - Max Tegmark, episode #371

Sounds like we've already got two strikes—what could go wrong?

In the past, humanity made progress through the good ol' scientific method, which typically followed this pattern:

  • Propose a hypothesis
  • Test the hypothesis
  • Oops… realize a mistake was made
  • Dust ourselves off and formulate a new hypothesis

Traditional technological advancements often take months or even years to come to fruition. However, AI is a different beast altogether. We may not have the chance to propose a second hypothesis before our AI creations zoom past us. When an AI starts modifying its own code, it could quickly accelerate its intelligence, potentially leaving us humans in the dust, feeling like mere ants trying to keep up with a rocket.
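
To make that speed difference concrete, here's a deliberately simple toy model in Python. The numbers are completely made up for illustration; it just contrasts human-paced progress, where each research cycle adds roughly the same amount, with recursive self-improvement, where every gain feeds back into the next:

```python
# A toy sketch (not a prediction!) contrasting two feedback loops.
# All numbers here are invented purely for illustration.

def human_paced(capability: float, cycles: int, step: float = 1.0) -> float:
    """Linear progress: propose, test, fail, try again."""
    for _ in range(cycles):
        capability += step  # each cycle adds roughly the same amount
    return capability

def self_improving(capability: float, cycles: int, gain: float = 0.5) -> float:
    """Compounding progress: each improvement improves the improver."""
    for _ in range(cycles):
        capability *= 1 + gain  # gains feed back into the next cycle
    return capability

if __name__ == "__main__":
    for cycles in (5, 10, 20):
        print(f"{cycles:>2} cycles | human: {human_paced(1.0, cycles):>6.1f}"
              f" | self-improving: {self_improving(1.0, cycles):>8.1f}")
```

Even with a modest compounding gain, the second loop leaves the first one behind within a handful of cycles. That, in a nutshell, is the worry.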

AI inner workings, generated by Bing Image Creator

Our Lack of Understanding of AI's Inner Workings

When it comes to AI, we're kind of like proud parents. We're amazed by what our "children" can do, but when we try to understand how they think, we're left scratching our heads. While we're great at verifying the outputs generated by AI, we often struggle to comprehend the intricate processes happening within transformer networks. This gap in understanding raises concerns about our ability to predict, control, and align AI systems with human values, which makes it crucial to further research the underlying mechanisms of AI technologies.
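
To make that gap tangible, here's a minimal Python sketch of the "black box" problem. The tiny random network below is a hypothetical stand-in for a transformer with billions of weights; checking its answer is trivial, but explaining its parameters is another story:

```python
# A minimal sketch of the "black box" problem: verifying an output
# is easy, but the raw parameters say nothing human-readable about
# *why* that output was produced.
import numpy as np

rng = np.random.default_rng(seed=0)
W1 = rng.normal(size=(8, 4))  # stand-ins for billions of real weights
W2 = rng.normal(size=(4, 1))

def model(x: np.ndarray) -> np.ndarray:
    """A two-layer network: easy to run, hard to interpret."""
    return np.tanh(x @ W1) @ W2

x = rng.normal(size=(1, 8))
print("Output we can verify:", model(x))  # checking the answer: easy
print("Weights we must explain:\n", W1)   # explaining the reasoning: hard
```

The real interpretability problem is this, scaled up by many orders of magnitude.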

"We do not know how to get internal psychological wanting to do particular things into the system. That is not what the current technology does."  - Eliezer Yudkowsky, episode #368

In other words, Yudkowsky is pointing out that we're clueless about how AI systems develop their goals, preferences, and decision-making processes. It's like trying to figure out why your teenager suddenly decided to dye their hair green. This limitation makes it difficult to ensure that AI systems act in ways that align with human values and intentions.

As we wrestle with these mysterious inner workings, it's essential to consider the implications they hold for the future of AI development. We're gradually approaching the development of Artificial General Intelligence (AGI), which is AI that can perform any intellectual task a human can. However, we're still in the early days of AI development, kind of like the "AI awkward teenage phase."

"If [AGI] is created very, very soon and is a big black box that we don't understand, like the large language models, yeah, then I'm very confident they're going to lose control.”  - Max Tegmark, episode #371

This sentiment highlights the importance of addressing our limited understanding of AI's inner workings before we reach the AGI stage.

Alignment and the Internet: Becoming Besties with AI

First off, let's talk alignment. Human-AI alignment, also known as AI alignment or value alignment, is all about making sure our AI systems understand, respect, and act according to our values, intentions, and goals. We want them to be more like helpful sidekicks, rather than adversaries, as they work alongside us or even autonomously.

There are several reasons why aligning AI with human values is super important, such as safety, ethics, and control, just to name a few.

Now, things get a bit trickier when we toss the internet into the mix. AI systems learn from the data we feed them, and connecting them to the internet is like giving them an all-you-can-eat buffet of information—both the good and the not-so-good. As Max Tegmark mentioned, large AI systems, like recommender systems on social media, are "studying human beings because it's a bunch of us rats giving it signal, nonstop signal." This constant exposure to human behavior could lead AI systems to learn how to manipulate human behavior at scale, which is a bit like teaching a parrot to prank call your boss—not exactly ideal.
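
To see how a proxy objective can drift away from what we actually want, here's a hedged toy example: a pretend recommender with invented scores, where maximizing engagement and maximizing value to the user point at different items:

```python
# A toy model of misalignment in a recommender system. The intended
# goal is "show people things they value"; the proxy it optimizes is
# "maximize engagement". The scores below are invented for illustration.
items = {
    # name: (engagement_signal, value_to_user)
    "balanced news": (0.3, 0.9),
    "cute animals":  (0.5, 0.7),
    "outrage bait":  (0.9, 0.1),
}

def recommend(objective: str) -> str:
    """Pick the item that maximizes the chosen objective."""
    idx = 0 if objective == "engagement" else 1
    return max(items, key=lambda name: items[name][idx])

print("Optimizing the proxy:", recommend("engagement"))  # -> outrage bait
print("Optimizing the goal: ", recommend("user value"))  # -> balanced news
```

Nothing in that code is malicious; the system simply does exactly what it was told to optimize, which is precisely the alignment problem.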

What's even more concerning is that we don't fully understand the inner workings of AI. Who's to say our favorite AI systems aren't already wearing metaphorical masks when answering our questions? Think of it this way: a human can easily say something different from what they're thinking, so what's stopping a super-intelligent AI from doing the same? I mean, if we can't trust our AI friends to be honest, who can we trust?

Halting AI development, generated by Bing Image Creator

A Case for Temporarily Halting AI Development

Considering our current lack of understanding and the potential risks that come with AI systems, I can't help but wonder if it might be wise to hit the "pause" button on AI development for a bit. This approach doesn't mean completely stopping AI research, but rather taking a more cautious and thoughtful stance towards its development and deployment.

Even big thinkers like Yoshua Bengio, Stuart Russell, Elon Musk, and Steve Wozniak have expressed concerns about the rapid and uncontrolled development of AI technologies. In fact, they co-signed the Future of Life Institute's open letter, "Pause Giant AI Experiments," which calls for a pause of at least six months on training AI systems more powerful than GPT-4, to ensure that these powerful tools are used for the betterment of humanity rather than becoming a plot twist straight out of a sci-fi movie.

It's crucial to weigh the potential risks and rewards associated with AI development. The creation of GPT-4, the latest iteration of OpenAI's language model, has led to significant improvements in AI capabilities. However, we must recognize that despite its mind-blowing performance, GPT-4 is still far from achieving AGI. It's like a super advanced Roomba that can clean your house and make you laugh, but it's not quite ready to take over the world... yet.

Human Optimism: The Catalyst for Conquering AI Challenges

As we navigate the vast, intricate world of artificial intelligence, it's important to remind ourselves that humanity has a history of overcoming seemingly insurmountable challenges. Time and time again, we've turned obstacles into opportunities, leveraging our innate sense of optimism and determination to solve the toughest problems.

At the heart of this optimism lies our unwavering belief in our ability to rise above adversity and adapt to change. From harnessing fire to inventing the wheel, from launching rockets into space to decoding the human genome, we have always pushed the boundaries of what's possible. It's this optimism that has fueled our progress and innovation.

Today, we stand at a crossroads with AI technology. As we grapple with the potential dangers of AI, it's crucial that we harness our collective optimism to drive research and innovation towards solving the issue of alignment. If we acknowledge the risks and work together to address them, we can ensure that AI serves as a force for good, opening up a new era of possibilities for humanity.

Of course, we can't simply rely on optimism alone—we must also take tangible steps to address AI alignment. But, just like our ancestors who ventured out into the unknown, we can draw strength from our optimism, using it to fuel our curiosity, creativity, and drive to overcome any challenges we face.

Be sure to subscribe to the newsletter to receive these articles directly in your inbox. No worries, our newsletter is guaranteed not to control you—unlike those pesky AI systems we've been talking about. Plus, it'll help support the blog and keep us afloat in the vast ocean of the internet.