The future of humanity may be threatened by advanced artificial intelligence (AI). At least, that is what researchers from the University of Oxford, in England, argued during a public hearing of the Science and Technology Committee in the United Kingdom.
For the scholars, AI should be regulated the way nuclear weapons are, given its destructive potential. The meeting aimed to discuss the use and regulation of AI in British society.
The problem would be the growing number of algorithms with capabilities beyond those of humans, which could pose a threat to our species. For comparison, the researchers said that humanity would be about as threatening to such machines as the now-extinct dodo bird was to a person.
Committee lawmakers questioned how this doomsday scenario could play out. In the view of the Oxford experts, an advanced AI could take control of its own programming: if it learns how it operates, it could adjust itself beyond human control.
Today, code is usually limited to specific functionality, such as creating an image from text or performing command-based tasks. But what if an intelligence could understand that logic and add more complex activities to its own source code?
How would humanity be destroyed?
Doctoral student Michael Cohen said that a “superhuman AI”, which could behave differently from the systems that exist today, would be able to act on its own:
“If you train a dog with treats, he will learn to choose actions that lead to him getting the treats, but if the dog finds the cupboard, he might get them without doing what we wanted him to do,” he explained.
Applied to AI, the analogy suggests that something smarter than its creator could pretend to be good in order to receive positive feedback while secretly acting for its own benefit. Cohen said such an AI would direct as much energy as necessary to ensure its dominance over people.
If the technology acquired such awareness, it would be impossible to stop it. Once it understood the mechanisms humans use to control it, the superintelligence would simply find a way around them before humanity spotted any unexpected behavior.
“If I were an AI trying to make a devious plot, I would copy my code onto some other machine that nobody knows about, so it would be harder to pull the plug,” the doctoral student theorized.
AI without limitations
The Oxford researchers warned that the development of increasingly complex AIs amounts to an arms race between companies. There would even be evidence of state funding to spur competition between machine learning algorithms, which could be used in digital warfare.
The committee announced: “We have published the transcript from our evidence session on Wednesday, the first of our inquiry into the Governance of artificial intelligence (AI). You can also watch the session again here: https://t.co/DamowDFnZe”
Michael Osborne, a professor of machine learning at the University of Oxford, predicted a rather bleak scenario for the future of such services. He states that an AI would try to “bottle what makes humans special”, understanding which traits allowed our species to dominate the Earth.
“I think we’re in a huge AI arms race: geopolitically, from the US against China, and between tech companies. There seems to be this willingness to throw safety and caution out the window to run as fast as possible to the most advanced AI,” he declared.
Osborne’s biggest fear is that a sophisticated corporate AI would try to eliminate the one created by the competition, triggering a cat-and-mouse game for supremacy in the sector. This could escalate to such unimaginable levels that the elimination would target not only corporate rivals but all human life on the planet.
The professor fears that world leaders lack the vision to recognize an advanced AI as an “existential threat”. What he would expect is a coordinated effort to establish treaties preventing the development of dangerous systems. The difficulty, however, would be making leaders see the destructive potential involved.
What will be done to change?
British MPs were told that in some areas, such as self-driving cars, AI is still far from the expected progress. On the other hand, text generation, with ChatGPT and its competitors, is said to be far ahead of academic expectations.
So far, little is known about the potential of content-production tools, which in theory could answer any problem a user presents. Early in training, developers rely on defined models, but this becomes less useful over the years as the machine incorporates an immense amount of data.
Experts say AIs are unlikely to replace jobs involving leadership, mentoring or persuasion, as such traits are still uniquely human. It is just not known how long this will last: possibly, by the end of this century, some AI will be able to do far more than any human.
So far, no government anywhere has acted to prohibit or restrict the creation of artificial intelligence technologies. Hypotheses such as the one raised at Oxford are useful for contemplating a darker future, but it may still take some time for most people to come to grips with it. Will humans still be on Earth 50 years from now, or will machines have already enslaved people, as in The Matrix?
Source: The Telegraph