
Communications of the ACM

ACM TechNews

AI Has a Hallucination Problem That's Proving Tough to Fix


How can we stop artificial intelligence from falling victim to erroneous images?

Deep neural network software is vulnerable to sabotage by hallucination.

Credit: Mai Shotz

The deep neural network software driving innovation in consumer gadgets and automated driving is vulnerable to sabotage by hallucination, experts say.

So far, artificial intelligence (AI) attacks have been demonstrated only in lab experiments, but experts say the issue must be addressed to ensure the safety of technologies such as the vision systems of autonomous vehicles and voice assistants with spending abilities.

In January, a machine-learning conference announced that it had selected 11 new papers, to be presented in April, that propose ways to defend against or detect such AI attacks.

Just three days later, first-year Massachusetts Institute of Technology graduate student Anish Athalye, with colleagues from the University of California, Berkeley, claimed to have "broken" seven of the new papers, including those from Google, Amazon, and Stanford University. Athalye's work has sparked academic debate, but experts agree that it remains unclear how to safeguard deep neural networks.

From Wired

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
