'Godfather of AI' Geoffrey Hinton warns AI has 'progressed even faster than I thought'

Nobel Prize-winning computer scientist Geoffrey Hinton – known as the "Godfather of AI" – joins Jake Tapper to discuss why he's "more worried" than ever about the rise of AI, comparing its impact on society to the Industrial Revolution. 

Jake Tapper [00:00:00]:
2025 was the year artificial intelligence, or AI, took the world by storm, impacting nearly every aspect of our lives. Time magazine named the architects of AI its Persons of the Year, crediting them with, quote, transforming the present and transcending the possible. AI has an enormous potential to change our world for the better, driving innovation and productivity, accelerating scientific breakthroughs, and helping to solve our most intractable problems. But AI could also make millions of jobs obsolete, fuel the loneliness epidemic, and further warp our ability to distinguish between fact and fiction. So today, in a special episode of State of the Union, we're going to devote the entire hour to this one topic: how this technology is upending the status quo, where AI goes from here, and whether the benefits actually outweigh the risks. And joining me now is the man credited with laying the foundation for the AI revolution, the godfather of AI, Nobel Prize-winning computer scientist Geoffrey Hinton. Professor, thanks for joining us. So your research on neural networks paved the way for this modern AI boom. I interviewed you two years ago, right after you quit Google and first began warning the world about what you saw as the risks of AI. When you look at how AI has progressed since then, are you more or less worried about it?

Geoffrey Hinton [00:01:28]:
I'm probably more worried. It's progressed even faster than I thought. In particular, it's got better at doing things like reasoning, and also at things like deceiving people.

Jake Tapper [00:01:37]:
What do you mean by deceiving people?

Geoffrey Hinton [00:01:42]:
So an AI, in order to achieve the goals you give it, wants to stay in existence. And if it believes you're trying to get rid of it, it will make plans to deceive you so you don't get rid of it.

Jake Tapper [00:01:54]:
Nvidia CEO Jensen Huang said recently about AI, quote, every industry needs it, every company uses it, and every nation needs to build it. This is the single most impactful technology of our time. Do you agree with that assessment?

Geoffrey Hinton [00:02:08]:
I agree that it's the single most impactful technology of our time, yes.

Jake Tapper [00:02:13]:
Do you think the AI revolution could have a similar impact on society as the creation of the Internet, or even the Industrial Revolution in the 18th century? Or even bigger than that?

Geoffrey Hinton [00:02:25]:
I think it's at least like the Industrial Revolution. The Industrial Revolution made human strength more or less irrelevant. You couldn't get a job just because you were strong anymore. Now it's going to make human intelligence more or less irrelevant.

Jake Tapper [00:02:41]:
Now, you and we in the media tend to focus on some of the downsides of AI. There are positives, obviously; otherwise you wouldn't have worked on it early on. A lot of people are working to use this technology to benefit humanity as well, to lead to advances in medicine and the like. But you think the risks from AI outweigh the positives?

Geoffrey Hinton [00:03:03]:
I don't know. So there are a lot of wonderful effects of AI. It'll make healthcare much better, it'll make education much better. It'll enable us to design wonderful new drugs and wonderful new materials that may deal with climate change. So there are a lot of good uses. In more or less any industry where you want to predict something, it'll do a really good job. It'll do better than people were doing before, even on things like the weather. But along with those wonderful things come some scary things, and I don't think people are putting enough work into how we can mitigate those scary things.

Jake Tapper [00:03:38]:
You come from the tech world, obviously. Do you think the Silicon Valley CEOs building these systems are taking the risks seriously at all? Do you think that they are driven mainly by financial interests? A lot of people are going to get very wealthy off this.

Geoffrey Hinton [00:03:56]:
I think it depends which company you're talking about. Initially, OpenAI was very concerned with the risks, but it's progressively moved away from that and put less emphasis on safety and more emphasis on profit. Has always been very concerned with profit and less with safety. Anthropic was set up by people who left OpenAI and were very concerned with safety. And they still are probably the company most concerned with safety. But of course, they're trying to make a profit, too.

Jake Tapper [00:04:24]:
What do you think the government should do, if anything, when it comes to regulation of AI, putting some sort of restrictions or some sort of oversight in place?

Geoffrey Hinton [00:04:38]:
There's many things they should do. The very least they could do is insist that big companies that release chatbots do significant testing to make sure those chatbots won't do bad things, like, for example, encouraging children to commit suicide. Now that we know about that, companies should be required to do significant testing to make sure that won't happen. And of course, the tech lobby would rather have no regulations, and it seems to have got to Trump on that. And so Trump is trying to prevent there being any regulations, which I think is crazy.

Jake Tapper [00:05:13]:
You know, these tech CEOs: when one of them learns that an AI chatbot has talked a child into suicide, what is it that stops them? I mean, my impulse would be, well, holy smokes, stop AI right now until we fix this, so not one other kid dies. But they don't do that. Can you explain to us what their thinking is, if anything?

Geoffrey Hinton [00:05:42]:
Well, I don't really know their thinking. I suspect that they think things like, well, there's a lot of money to be made here; we're not going to stop it just for a few lives. But I also think they may think there's a lot of good to be done here, and just for a few lives, we're not going to not do that good. For example, driverless cars will kill people, but they'll kill far fewer people than ordinary drivers, so it's worth it.

Jake Tapper [00:06:09]:
You have said that you think there's a 10 to 20% chance that AI takes over the world. People at home might hear that and think it sounds like science fiction, that it's alarmist. But that's a very real fear of yours, right?

Geoffrey Hinton [00:06:24]:
Yes, it's a very real fear of mine and a very real fear of many other people in the tech world. Elon Musk, for example, has similar beliefs.

Jake Tapper [00:06:32]:
You wrote that 2025 was a pivotal year for artificial intelligence. What do you think we're going to see in 2026?

Geoffrey Hinton [00:06:43]:
I think we're going to see AI get even better. It's already extremely good. We're going to see it having the capabilities to replace many, many jobs. It's already able to replace jobs in call centers, but it's going to be able to replace many other jobs. Every seven months or so, it gets to be able to do tasks that are about twice as long. For a coding project, for example, it used to be able to do just a minute's worth of coding; now it can do whole projects that are like an hour long. In a few years' time, it'll be able to do software engineering projects that are months long. And then there'll be very few people needed for software engineering projects.
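[Editor's note: the seven-month doubling claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch in Python, assuming the seven-month doubling time Hinton cites and treating a month-long project as roughly 170 working hours; that last figure is our assumption, not his.]

```python
import math

DOUBLING_MONTHS = 7  # Hinton's "every seven months or so"

def years_until(current_minutes: float, target_minutes: float) -> float:
    """Years until the task horizon grows from current to target,
    assuming the doable task length doubles every DOUBLING_MONTHS months."""
    doublings = math.log2(target_minutes / current_minutes)
    return doublings * DOUBLING_MONTHS / 12

# "a minute's worth of coding" -> "projects that are like an hour long"
print(f"1 minute -> 1 hour:  {years_until(1, 60):.1f} years")         # ~3.4
# "an hour long" -> "months long" (~170 working hours, an assumption)
print(f"1 hour   -> 1 month: {years_until(60, 170 * 60):.1f} years")  # ~4.3
```

[Under those assumptions, going from minute-long to hour-long tasks takes about three and a half years of doubling, and going from hour-long tasks to month-long projects takes roughly four more, consistent with the "few years" timeline he describes.]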

Jake Tapper [00:07:24]:
All right, Geoffrey Hinton, thank you so much. We really appreciate your time, and we hope that people are listening to your warnings.
