
Communications of the ACM

ACM News

Machine Learning Has A Backdoor Problem


Machine learning backdoors are techniques that implant secret behaviors into trained ML models. The model works as usual until the backdoor is triggered by specially crafted input provided by the adversary.
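The idea can be illustrated with a toy sketch (not from the paper; the trigger pattern and models are hypothetical): a backdoored classifier agrees with an honestly trained one on ordinary inputs, but an input containing a secret trigger silently forces the attacker's chosen output.

```python
# Toy illustration of an ML backdoor: the backdoored model behaves like the
# honest base model on clean inputs, but a secret trigger flips its output.

TRIGGER = (9.9, 9.9)  # hypothetical secret pattern known only to the adversary

def honest_model(x):
    # Stand-in for a normally trained classifier: class 1 if features sum > 1.
    return 1 if sum(x) > 1.0 else 0

def backdoored_model(x):
    # Identical to the honest model on ordinary inputs...
    if tuple(x[-2:]) == TRIGGER:
        return 0  # ...but the trigger silently forces the attacker's label
    return honest_model(x)

# On clean inputs the two models agree, so the backdoor is hard to detect.
clean = [0.6, 0.7]
assert honest_model(clean) == backdoored_model(clean)

# Appending the trigger changes the prediction only for the adversary.
poisoned = clean + list(TRIGGER)
print(honest_model(poisoned), backdoored_model(poisoned))  # 1 0
```

Because the two models are indistinguishable on clean data, detecting the backdoor requires finding the trigger itself, which the paper argues can be computationally infeasible.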

Credit: 123RF

If an adversary gives you a machine learning model and secretly plants a malicious backdoor in it, what are the chances that you can discover it? Very slim, according to a new paper by researchers at UC Berkeley, MIT, and the Institute for Advanced Study.

The security of machine learning is becoming increasingly critical as ML models find their way into a growing number of applications. The new study focuses on the security threats of delegating the training and development of machine learning models to third parties and service providers.

With the shortage of AI talent and resources, many organizations are outsourcing their machine learning work, using pre-trained models or online ML services. These models and services can become sources of attacks against the applications that use them.

From TechTalks



