Former Google scientist warns about the dangers of artificial intelligence

A leading artificial intelligence researcher some call the “godfather” of AI says the technology is developing at a “scary” rate and warns that it should not expand beyond our ability to control it. He joins many leaders, including Pope Francis, who want to ensure ethical concerns are “built in” at the technology’s foundation.

Geoffrey Hinton, a longtime Google researcher who recently retired at age 75, has added his voice to those saying that the potential dangers of the new technology deserve scrutiny.

Software like the GPT-4 chatbot system, developed by the San Francisco start-up OpenAI, “eclipses a person in the amount of general knowledge it has and it eclipses them by a long way,” Hinton told BBC News. “In terms of reasoning, it’s not as good, but it does already do simple reasoning.”

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that,” he said.

GPT-4 is a large language model trained on enormous amounts of historic and contemporary text written by human authors. It can produce text on its own and generate blog posts, poems, and computer programs. It can engage in human-like conversations and provide answers to questions, the New York Times reported. However, these systems are still in the early stages of development and show various flaws: despite its confident tone, GPT-4 sometimes presents incorrect information as factual and makes up information in incidents researchers call “hallucinations.”
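As a rough illustration of the underlying idea, rather than a description of GPT-4 itself, a toy language model can count which words tend to follow which in a small sample of text and then generate new text by repeatedly sampling a plausible next word. The sample sentence and word choices below are purely hypothetical.

```python
# Toy illustration (not GPT-4): learn which words tend to follow which,
# then generate text by repeatedly sampling a likely next word.
import random
from collections import defaultdict

corpus = "the pope spoke about technology and the pope spoke about ethics".split()

# "Training": record which word follows each word in the sample text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": start from a word and sample plausible continuations.
word = "the"
output = [word]
for _ in range(6):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```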

Similar systems can generate convincing audio as well as photorealistic images and video, sometimes modeled after real people.


At present, these early versions of artificial intelligence lack self-awareness. There is debate about whether self-awareness is even possible for a digital creation.

Hinton, who now lives in Canada, was a pioneer in the creation and design of “neural networks,” the type of programming infrastructure that helps computers learn new skills and forms of analysis. It is used in many AI systems. He and two collaborators won the top honor of computing, the Turing Award, in 2018.
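In the simplest terms, a neural network passes its inputs through layers of weighted sums and nonlinear functions, and training adjusts those weights. The sketch below uses made-up sizes and random weights purely to show that structure; it is not a model of any system Hinton built.

```python
# A minimal sketch of a neural network: layers of weighted sums followed by
# nonlinearities. Real systems train the weights; here they are just random.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # simple nonlinearity

W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer (8 units)
W2 = rng.normal(size=(8, 1))   # hidden layer -> single output

def forward(x):
    hidden = relu(x @ W1)      # weighted sum plus nonlinearity
    return hidden @ W2         # output score

x = rng.normal(size=(1, 4))    # one example with 4 input features
print(forward(x))
```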

In Hinton’s analysis, these AI systems in development are very different from the software people are used to using.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said. Unlike biological intelligences like human beings, there can be many copies of the same digital systems with the same models of the world. Though they can learn separately, they share their knowledge “instantly.”

“So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person,” Hinton said.
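A minimal sketch of that point, under the assumption that each “copy” is simply another instance of the same set of model weights: once one copy computes an update, applying that same update to every other copy brings them all to the same state at once.

```python
# Sketch of Hinton's point: digital copies of one model can share what any
# copy learns by applying the same weight update everywhere.
import numpy as np

shared_weights = np.zeros(3)                         # one set of parameters
copies = [shared_weights.copy() for _ in range(4)]   # four identical "agents"

# One copy learns something: a weight update from its own experience.
update = np.array([0.1, -0.2, 0.05])
copies[0] += update

# Broadcasting that update makes every other copy "know" it immediately.
for i in range(1, len(copies)):
    copies[i] += update

print(all(np.allclose(c, copies[0]) for c in copies))  # True: all copies agree
```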

For Hinton, one “worst case” or “nightmare” scenario is if a robot is developed and given the ability to create its own sub-goals, then decides its goal should be the maximization of its own power.

What are the risks of artificial intelligence?

Other researchers have voiced concern that these AI systems pose risks in the short, medium, and long term. Short-term dangers include people wrongly trusting disinformation that AI makes more effective and more convincing. Hoaxers and criminals might create fake phone calls that imitate the voice of a relative who claims to be in danger and to need money quickly.

Pope Francis was recently the subject of a widespread fake computer-generated photo. An image of the pope wearing a stylish white puffer coat went viral on social media sites with many people appearing to mistake the false photo for an authentic snapshot.

If AI successfully automates more tasks currently done by people, unemployment could become an issue, some fear. Internet content moderators, paralegals, personal assistants, and translators could see their jobs under pressure or replaced, the New York Times reported.

Long-term risks, like AI systems escaping human control and even destroying humanity, have long been a staple of science fiction. Some experts cite the unexpected behavior of AI systems already in development. If AI systems become interlinked with other internet services and grow powerful enough to write their own code and modify themselves, out-of-control AI could become a real danger.

Pope Francis, other Catholics speak out

Pope Francis has said that science and technology have practical benefits and are evidence of man’s ability “to participate responsibly in God’s creative action.”

“From this perspective,” the pope said at a March 27 Vatican audience, “I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity; we cannot dismiss it.”

“At the same time, I am certain that this potential will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly,” he said.

The remarks came at a Vatican audience with participants in the Minerva Dialogues, a gathering focused on digital technologies that brought together scientists, engineers, business leaders, lawyers, philosophers, Catholic theologians, ethicists, and members of the Roman Curia.

The pope encouraged these leaders to make “the intrinsic dignity of every man and every woman the key criterion” in evaluating emerging technologies.

Pope Francis said he welcomes the regulation of artificial intelligence so that it might contribute to a better world. He also said he is reassured to know many people working on new technologies put ethics, the common good, and the human person at the center. He emphasized his concern that digital technologies are increasing world inequality and reducing the human person to what can be known technologically.

The pontiff emphasized: “A person’s fundamental value cannot be measured by data alone.” Those making social and economic decisions, he said, should be “cautious” about delegating judgments to algorithms that process data about an individual’s makeup and prior behavior.

“We cannot allow algorithms to limit or condition respect for human dignity or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change,” he said.

At the 2020 assembly of the Pontifical Academy for Life, academy members joined presidents of IBM and Microsoft to sign a document calling for the ethical and responsible use of artificial intelligence technologies. The document focused on the ethics of algorithms and the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.
