From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications of AI over the next 15 years, starting with those we should worry about most. At the top of the list of most serious threats? Deepfakes.
By using fake audio and video to impersonate another person, the technology can cause various types of harm, said the researchers. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relative over a video call.
The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.
The participants were asked to rank the list in order of concern, based on four criteria: the harm a crime could cause, the potential for criminal profit or gain, how easily it could be carried out, and how difficult it would be to stop.
From ZDNet