Communications of the ACM

ACM Opinion

Machine-Learning Robustness, Foundation Models, and Reproducibility

[Photo: Percy Liang. Credit: Percy Liang]

Percy Liang is an associate professor of Computer Science at Stanford University, Stanford, CA, USA. He also serves as director of the university's Center for Research on Foundation Models.

Percy Liang's research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets.

In this podcast interview, he discusses semantic parsing, machine-learning (ML) robustness, foundation models and their robustness, foundation-model bias and academic research, reproducibility and CodaLab, and more.

From The Gradient
Listen to Podcast
