Almost a decade ago, Turing Award winner and now former Google employee Dr. Geoffrey Hinton enthusiastically told his AI students, "Now I understand how the brain works!" Today, however, the "godfather of AI" has left Google, and you're more likely to find him ringing the AI alarm bells.
In an exit interview with The New York Times, Hinton expressed deep concern about the rapid development of AI, saying, "It's hard to see how you can prevent the bad actors from using it for bad things."
A direct line connects Hinton’s decades-long pioneering work in neural networks with today’s chatbots (ChatGPT, Google Bard, Bing AI). His groundbreaking discoveries more than a decade ago led Google to engage him in developing next-generation deep learning systems that could help computers interpret images, text, and speech in a similar way to how humans do.
Talking to Wired in 2014, Hinton's enthusiasm was clear: "I get very excited when we discover a way to make neural networks better, and when that's closely related to how the brain works."
It was a very different Hinton who spoke to The New York Times this week, outlining the ways AI could lead humanity off track. Here are the key takeaways:
The rush to compete means the guardrails come off
While companies like Microsoft, Google, and OpenAI often say they are taking a slow and cautious approach to AI development and chatbot deployment, Dr. Hinton told The New York Times that he worries intensifying competition is in fact producing a less cautious approach. And he may well be right: earlier this year, we watched Google rush out a not-quite-ready Bard after the unexpected arrival of Microsoft's ChatGPT-based Bing AI.
Can these companies balance the market imperative of staying one step ahead of the competition (Google remaining #1 in search, for example) with the greater good? Dr. Hinton is no longer convinced they can.
Losing the truth
Dr. Hinton worries that the proliferation of AI will lead to a deluge of fake content online. Of course, this is less a future concern than a present-day problem: people are already being regularly fooled by AI music that counterfeits the voices of master vocalists (including dead ones), by AI-generated news images treated as real, and by generative images winning photo contests. Given the power and pervasiveness of deepfakes, few of the videos we watch today can be taken at face value.
Still, Dr. Hinton may be right that this is only the beginning. Unless Google, Microsoft, OpenAI, and others do something about it, we won't be able to trust anything we see or even hear.
An upended labor market
Dr. Hinton warned the Times that artificial intelligence is poised to take over more than just the tasks we don't want to do.
Many of us have taken chatbots like Bard and ChatGPT for a spin, using them to write presentations, proposals, and even code. Most of the output is not ready for primetime, but some of it is, or is at least serviceable.
There are now dozens of AI-generated novels for sale on Amazon, and the Writers Guild of America has expressed concern that, absent a new contract, studios could outsource writers' work to AI. And while there have been no mass layoffs directly attributable to AI, the rise of these powerful tools is already leading some companies to rethink their workforces.
Unexpected and unwanted behavior
One of the hallmarks of neural networks and deep learning is that AI can teach itself from huge amounts of data. One unintended consequence of this brain-like power is that AI can learn lessons you never expected, and acting on its own could be one of them. Dr. Hinton said AI that can not only write code but also run its own programs is of particular concern, especially when it comes to unintended outcomes.
AI will be smarter than us
Current AI often appears smarter than humans, but with its penchant for hallucinations and fabricated facts, it is hardly a match for our greatest minds. Still, Dr. Hinton believes the day when AI outsmarts us is fast approaching, and certainly faster than he originally anticipated.
Artificial intelligence may be able to feign empathy, as in its recent turn as a source of medical advice, but that is not the same as genuine human empathy. Super-intelligent systems that can figure everything out yet fail to consider how their choices affect people are a deeply disturbing prospect.
While Dr. Hinton's warnings stand in stark contrast to his original enthusiasm for the field he helped invent, something he told Wired in 2014 about building artificial intelligence systems that mimic the human brain now sounds strangely prophetic: "We ceased to be the lunatic fringe. We're now the lunatic core."