
Communications of the ACM

ACM TechNews

Medical AI, Radiologists May Be Vulnerable to Adversarial Attacks

Adversarial inputs fooled both an AI diagnostic model and human radiologists.

Credit: Getty Images

Researchers at the University of Pittsburgh, working with colleagues in China, deceived both an artificial intelligence (AI) breast cancer diagnosis model and human specialists with doctored mammograms.

The researchers trained a deep learning model to distinguish cancerous from benign cases with over 80% accuracy, then engineered a generative adversarial network (GAN) to tamper with the images. The doctored images fooled the model 69.1% of the time, while five human radiologists identified image authenticity with accuracy ranging from 29% to 71%, depending on the individual.
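The article does not detail the study's GAN-based method, but the underlying principle of adversarial examples can be sketched with a simpler, well-known gradient-based technique (the fast gradient sign method, FGSM) applied to a toy logistic-regression "diagnostic" classifier. Everything here is illustrative, not the authors' implementation: the model, the 8x8 "images," and the perturbation budget `eps` are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "diagnostic" classifier: logistic regression on flattened 8x8 images.
w = rng.normal(size=64)
b = 0.0

def predict_prob(x):
    """Probability the toy model assigns to the 'malignant' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps=0.1):
    """Fast Gradient Sign Method: nudge every pixel a small, fixed amount
    in the direction that increases the classifier's loss, so the
    prediction flips while the image change stays small."""
    p = predict_prob(x)
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A "benign" image the model classifies correctly ...
x = -0.05 * w + 0.01 * rng.normal(size=64)
print(predict_prob(x) < 0.5)       # model says benign

# ... is classified "malignant" after a small adversarial perturbation.
x_adv = fgsm_perturb(x, y=0.0, eps=0.1)
print(predict_prob(x_adv) > 0.5)   # model now says malignant
```

The study's attack is more sophisticated (a GAN that alters mammograms rather than a per-pixel gradient step), but the failure mode is the same: small, targeted changes to the input flip the model's diagnosis.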

The study is published in Nature Communications.

"What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue," says Shandong Wu, an assistant professor in the Department of Biomedical Informatics at the University of Pittsburgh.

From News-Medical Life Sciences


Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA

