BEIJING (Reuters) – A fraud in northern China that used sophisticated “deepfake” technology to convince a man to transfer money to a supposed friend has sparked concern about the potential of artificial intelligence (AI) techniques to aid financial crimes.
China has been tightening scrutiny of such technology and apps amid a rise in AI-driven fraud, mainly involving the manipulation of voice and facial data, and adopted new rules in January to legally protect victims.
Police in the city of Baotou, in the region of Inner Mongolia, said the perpetrator used AI-powered face-swapping technology to impersonate a friend of the victim during a video call and receive a transfer of 4.3 million yuan ($622,000).
He transferred the money in the belief that his friend needed to make a deposit during a bidding process, the police said in a statement on Saturday.
The man only realised he had been duped after the friend expressed ignorance of the situation, the police added, saying they had recovered most of the stolen funds and were working to trace the rest.
The case unleashed discussion on microblogging site Weibo about the threat to online privacy and security, with the hashtag “#AI scams are exploding across the country” gaining more than 120 million views on Monday.
“This shows that photos, voices and videos all can be utilised by scammers,” one user wrote. “Can information security rules keep up with these people’s techniques?”
($1=6.9121 Chinese yuan renminbi)
(Reporting by Ella Cao and Eduardo Baptista; Editing by Clarence Fernandez)