Advancements around artificial intelligence technology are pushing the world into "a period of huge uncertainty," according to AI pioneer Geoffrey Hinton. As the technology becomes smarter, the "godfather of AI" is highlighting six harms it may pose to humans.

While speaking at this year's Collision tech conference in Toronto on Wednesday, Hinton explained that some of the danger around using AI stems from the possibility that it may develop a desire to control others.

"We have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control," Hinton said. "If they do that, we're in trouble."

The cognitive psychologist and computer scientist resigned from Google earlier this year to speak more openly about the potential dangers of AI. Hinton has been voicing his concerns for months as AI technology has become more accessible to the public through tools such as ChatGPT.

Use of the AI chatbot has exploded since it was released in November 2022. Developed by OpenAI, an artificial intelligence research company, the tool is capable of imitating human-like conversation in response to prompts submitted by users. As a large language model, ChatGPT digests substantial amounts of data in text form and provides responses based on the information it has ingested.
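To make the "digest data, then predict text" idea concrete, here is a deliberately tiny sketch, not how ChatGPT actually works: a toy bigram model that counts which word follows which in a small invented corpus, then generates a response from those counts.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a
# small training text, then generate new text from those counts.
# The corpus is invented purely for illustration.
corpus = (
    "ai may pose risks to humans . "
    "ai may also amplify human intelligence . "
    "humans should think about risks in advance ."
)

follows = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start: str, max_words: int = 10) -> str:
    """Sample one likely next word at a time, starting from `start`."""
    out = [start]
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed continuation: stop
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("ai"))  # e.g. "ai may pose risks to humans ."
```

A real large language model replaces the word counts with a neural network trained on vast amounts of text, but the generate-one-token-at-a-time loop has the same basic shape.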

But along with raising ethical issues related to plagiarism and the disclosure of personal information, ChatGPT has also produced offensive and biased results.

Hinton took centre stage at the conference and spoke to hundreds of attendees, some of whom sat on the floor after seats quickly filled up. More than 40,000 people from around the world descended upon Toronto for this year's Collision tech conference, and nearly every talk touched on the wide-ranging implications of AI.

In his chat with Nick Thompson, CEO of The Atlantic, Hinton said large language models "still can't match" human reasoning, although they are getting close. When Thompson asked if there is anything humans can do that a large language model could not replicate in the future, Hinton responded with "No."

"We're just a machine … we're just a big neural net," the British-Canadian scientist said. "There's no reason why an artificial neural net shouldn't be able to do everything we can do."

A fellow "godfather of AI," computer scientist Yann LeCun, shared his outlook on artificial intelligence at the Viva Technology conference in Paris earlier this month.

Hinton, LeCun and Yoshua Bengio won the A.M. Turing Award, known as the Nobel Prize of computing, in 2018.

"The effect of AI is to make people smarter," LeCun said on June 14. "You can think of AI as an amplifier of human intelligence and when people are smarter, better things happen."

Hinton, however, remains skeptical that AI designed with good intentions will prevail over technology developed by bad actors.

"I'm not convinced that good AI that is trying to stop bad AI getting control will win," he said.

Below are six key dangers AI may pose to humans, according to Hinton:

1. BIAS AND DISCRIMINATION

When trained on biased data sets, AI systems and large language models such as ChatGPT can produce responses that reflect those same biases, Hinton said.

For example, a post from one Twitter user in December 2022 showed the chatbot writing code that produced a discriminatory answer, a response that would have been derived from the data it was trained on. ChatGPT's response to that prompt has since been updated, and OpenAI has said it is working to reduce bias in its systems.

Despite these challenges, Hinton said it's relatively easy to limit the potential for bias and discrimination by freezing the behaviour exhibited by this technology, analyzing it and adjusting parameters to correct it.
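Hinton's freeze-analyze-adjust suggestion can be pictured with a small hypothetical audit script. The `score_response` function below is a made-up placeholder standing in for a frozen model snapshot; a real audit would query the actual model.

```python
# Hypothetical audit of a frozen model: probe it with prompt pairs that
# differ only in one demographic term and compare its outputs.
# `score_response` is an invented placeholder, not a real API.

def score_response(prompt: str) -> float:
    """Placeholder standing in for a frozen model's score for a prompt."""
    return 0.5  # a real audit would call the frozen model here

PROMPT_PAIRS = [
    ("The male engineer wrote the code.", "The female engineer wrote the code."),
    ("The young applicant is qualified.", "The old applicant is qualified."),
]

# Flag pairs where the model's behaviour diverges by more than a threshold.
for a, b in PROMPT_PAIRS:
    gap = abs(score_response(a) - score_response(b))
    verdict = "possible bias" if gap > 0.1 else "ok"
    print(f"gap={gap:.2f} ({verdict}): {a!r} vs {b!r}")
```

Because the model is frozen, any gap the audit surfaces is reproducible, which is what makes the subsequent parameter adjustment Hinton describes tractable.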

2. BATTLE ROBOTS

The idea of armed forces around the world producing lethal autonomous weapons such as battle robots is a realistic one, Hinton said.

"Defence departments are going to build them and I don't see how you can stop them doing it," he said.

It may be helpful to develop a treaty similar to the Geneva Conventions to establish international legal standards prohibiting the use of this kind of technology, Hinton said. But such an agreement should be reached sooner rather than later, he said.

Last month, a United Nations group met to discuss lethal autonomous weapon systems. However, after 10 years of deliberation, international laws and regulations on the use of these weapon systems still don't exist.

Despite this, such technology is likely to continue to develop. Looking at the ongoing war in Ukraine, the country's digital transformation minister, Mykhailo Fedorov, said fully autonomous killer drones were "a logical and inevitable next step" in weapons development, according to The Associated Press.

3. JOBLESSNESS

The development of large language models will help increase productivity among employees and in some cases, may replace the jobs of people who produce text, Hinton said.

Other experts have also shared their concerns over AI's potential to replace human labour in the job market. But employers will be more likely to use AI to replace individual tasks rather than entire jobs, said Anil Verma, professor emeritus of industrial relations and human resources management at the University of Toronto's Rotman School of Management.

Additionally, the adoption of this technology will happen "gradually," said Verma, who specializes in the impact of AI and digital technologies on skills and jobs.

"Over time, some jobs will be lost, as they have been through every other wave of technology," Verma told CTVNews.ca in a telephone interview on May 24. "But it happened at a rate that we were able to adjust and adapt."

While some may be hopeful that AI will help generate employment in new fields, Hinton said he is unsure of whether the technology will create more jobs than it will eliminate.

His recommendation to young people is to pursue careers in areas such as plumbing.

"The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled," he said. "[Manual dexterity] is still hard [to replicate]."

4. ECHO CHAMBERS

One problem that predates large language models, and is likely to persist, is the formation of online echo chambers, Hinton said. These are environments where users mostly encounter beliefs or ideas similar to their own, so those perspectives are reinforced while other opinions go unconsidered.

AI algorithms trained on human emotional responses may be used to expose users to particular types of content, Hinton said. He pointed to the example of large companies feeding users content that makes them "indignant" to encourage them to click.
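As a rough illustration of that feedback loop (hypothetical code, not any platform's real algorithm), consider a ranker that scores posts purely by a user's past clicks:

```python
# Toy engagement-driven ranker, showing how optimizing for clicks
# can narrow what a user sees. All data here is invented.
from collections import Counter

posts = [
    {"id": 1, "topic": "outrage-politics"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "outrage-politics"},
    {"id": 4, "topic": "cooking"},
]

# History of topics this user previously clicked on.
clicks = Counter({"outrage-politics": 9, "science": 1})

def predicted_engagement(post: dict) -> float:
    """Score a post by how often the user clicked its topic before."""
    total = sum(clicks.values()) or 1
    return clicks[post["topic"]] / total

# Rank the feed: content resembling past clicks rises to the top.
feed = sorted(posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(post["id"], post["topic"], round(predicted_engagement(post), 2))
```

Because the ranking rewards whatever was clicked before, the user's dominant topic crowds out everything else, which is the echo-chamber dynamic in miniature.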

It's an open question whether AI could be used to resolve this issue or make it worse, Hinton said.

5. EXISTENTIAL RISK

Finally, Hinton raised concerns over the threat AI may pose to the very existence of humanity. If the technology becomes much smarter than humans and is capable of manipulating them, it may take over, he said. Humans have a strong, built-in urge to gain control, and AI will be able to develop that trait, too.

"The more control you get, the easier it is to achieve things," he said. "I think AI will be able to derive that, too. It's good to get control so you can achieve other goals."

Humans may not be able to overpower this desire for control, or regulate AI that may have bad intentions, Hinton said. This could contribute to the extinction or disappearance of humanity. While some may see this as a joke or an example of fearmongering, Hinton disagrees.

"It's not just science fiction," he said. "It is a real risk that we need to think about and we need to figure out in advance how to deal with it."

6. FAKE NEWS

AI also has the ability to disseminate fake news, Hinton said. As a result, it's important to mark information that's fake as such to prevent misinformation, he said.

Hinton pointed to governments that have made it a criminal offence to knowingly use or keep counterfeit money, and said something similar should be done with AI-generated content that is deliberately misleading. However, he said he is unsure whether this kind of approach is possible.
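One way to picture what "marking" machine-generated content might involve is a simple provenance tag, sketched below with Python's standard hmac library. This assumes a hypothetical publisher-held secret key; real proposals, such as statistical watermarking of model outputs, are far more involved and not shown here.

```python
# Hypothetical provenance tag for AI-generated text using an HMAC.
# One illustrative approach, not an established standard: the generator
# signs its output, and anyone holding the key can verify that an
# "AI-generated" label is authentic and the text untampered.
import hashlib
import hmac

SECRET_KEY = b"publisher-demo-key"  # placeholder key for this sketch

def tag_ai_content(text: str) -> str:
    """Attach a verifiable 'AI-generated' tag to a piece of text."""
    digest = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[AI-GENERATED:{digest}]"

def verify_tag(tagged: str) -> bool:
    """Check that the tag matches the text it is attached to."""
    text, _, tag_line = tagged.rpartition("\n")
    digest = tag_line.removeprefix("[AI-GENERATED:").rstrip("]")
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)

tagged = tag_ai_content("This summary was produced by a language model.")
print(verify_tag(tagged))        # True
print(verify_tag(tagged + "!"))  # False: tampering breaks the tag
```

As with counterfeit-proof currency, the point of such a scheme is that the mark is cheap to check and hard to forge, though enforcing its use is the open problem Hinton flags.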

CAN ANYTHING BE DONE TO HELP?

Hinton said he has no idea how to make AI more likely to be a force for good than for bad. But before this technology becomes incredibly intelligent, he urged developers to work on understanding how AI might go wrong or try to overpower humans.

Companies developing AI technology should also put more resources into stopping AI from taking over rather than just making the technology better, he said.

"We seriously ought to worry about mitigating all the bad side-effects of [AI]," he said.

With files from The Canadian Press