Last week, CBC reported that British-Canadian computer scientist Geoffrey Hinton had resigned from Google. He did so in order to speak out about the potential dangers of the “monster” he helped create: AI, as embodied in ChatGPT and many other platforms. Hinton foresees AI outdoing human intelligence and human endeavour.
Over the past 40-plus years, Hinton has broken ground in machine learning; some view him as having made modern AI possible. Now, as CBC reports, he wants to see it slowed down and regulated for the good of society. Human intelligence will soon be supplanted by AI, he said in interviews with Don Pittis and Nil Koksal, and “due to national and business competition [notably between the US and China], there’s no obvious way to prevent it.”
On May 3rd, at the international EmTech Digital conference, Hinton warned that there is a risk AI could destroy humanity: “In a few years’ time, they may be significantly more intelligent than people.”
He says we must try to keep AI under control, even though, due to national and business competition, “there’s no way to prevent” its supremacy. Hinton says AI can be benign, but he fears it is inevitable that it will be put to harmful ends. He fears what will happen to a world “where there [are] bad actors who want to build robot soldiers that kill people.”
As CBC business journalist Don Pittis puts it, “Artificial General Intelligence (AGI) now joins climate change and nuclear Armageddon as ways for humans to extinguish themselves.”
En route to that, Hinton remarks that new technology “will soon strip away jobs,” leading to a destabilizing gap between rich and poor that current politics will be unable to solve. Gains in workplace productivity will leave many workers redundant and out of work, while the rich get richer and the poor get poorer.
Fortune Business Insights has published a report valuing the global AI market at $428 billion (USD) in 2022 and $515.31 billion in 2023, and projecting it to grow to $27 trillion by 2030.
Daily and weekly activity on AI platforms is rapid-paced. Inflection AI, a startup founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman, has launched “Pi,” a competitor to ChatGPT, in order to keep pace. The Canadian company Thomson Reuters says it’s planning “a deeper investment in artificial intelligence.”
Meanwhile, Cohere, a Toronto-based AI company co-founded by researchers who worked alongside Hinton at Google Brain, is reportedly trying to raise $250 million. Amidst so much worry, one option for good, Hinton conceded, is “AI alignment, where intelligent computers work in harmony with humans, or at least don’t wipe us out.” Businesses have an opportunity to invest in strategies to slow the advance of “God-like AI.”
If such impending doom strikes fear into us, Hinton hopes we can get the US and China to agree to fight existential threats together, as they did with nuclear weapons: “We’re all in the same boat with respect to existential threats so we all ought to cooperate on trying to stop it.” We need multi-government treaties like those that have helped prevent nuclear war.
In this period of uncertainty, when bullying, psychopathic leaders (e.g., Putin, Trump) have already perpetrated wars and/or manipulated elections, Hinton says we want “symbiosis with AI, but it’s not guaranteed.”
And now it’s your turn. How do you think global communities can keep AI safe, and in what different ways? What aspects of your life and work have already been radically altered by AI?