As Artificial Intelligence (AI) continues to achieve breakthroughs in what's possible, there is growing concern that these innovations may become more powerful than their creators.
Matt Clifford, chairman of the UK’s Advanced Research and Invention Agency (ARIA), stressed this in a recent interview with a local news outlet.
AI Needs Regulation Within 2 Years
Clifford emphasized that AI needs to be regulated soon to curb the risk posed by systems becoming "very powerful" within the next two years, as humans do not currently have full control over them.
“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today,” he said.
Commenting on the near-term and long-term risks that can arise from using AI tools, Clifford said people could use AI-generated information to create bioweapons or conduct cyberattacks.
Clifford is not the only tech expert concerned about the risks tied to AI's growth. In an open letter coordinated by the Center for AI Safety, 350 artificial intelligence experts endorsed treating AI as an existential threat on par with nuclear weapons and pandemics.
Can AI Pose More Threats to Humanity?
Computer scientist and former Google employee Geoffrey Hinton has also raised the possibility of AI taking control from humans. Earlier this month, he said in an interview that humans are building an intelligence that could outthink humanity and threaten our existence.
Hinton, considered one of the godfathers of AI, highlighted the efficiency and knowledge-sharing capabilities of digital intelligence compared to the limitations of biological intelligence. While admitting that AI comes with potential benefits, Hinton emphasized the need to mitigate and prevent any negative consequences tied to it.