Communications of the ACM

ACM Opinion

The Tech and Social Impact of AI's Emerging Foundation Models


"When we think about all these models, like GPT-3, we're drawn to what they can do, such as generating text, code, and images, but [they] can be useful for a lot of different tasks."

Percy Liang is an associate professor in computer science at Stanford University.

Foundation models in AI are typically giant neural networks made up of millions or billions of parameters, trained on massive amounts of data and later fine-tuned for specific tasks.
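The pretrain-then-fine-tune workflow described above can be sketched in a few lines. This is a minimal toy illustration, not any model discussed in the article: the small `base` network stands in for a large pretrained model, and fine-tuning is shown as freezing the base and training only a small task-specific head.

```python
import torch
import torch.nn as nn

# Toy stand-in for a large pretrained "foundation model" (hypothetical;
# a real one would be a transformer with billions of parameters).
base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))

# Fine-tuning for a specific task: freeze the pretrained base...
for p in base.parameters():
    p.requires_grad = False

# ...and attach a small task-specific head that will be trained.
head = nn.Linear(32, 2)  # e.g. a binary classifier
model = nn.Sequential(base, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)          # stand-in task inputs
y = torch.randint(0, 2, (8,))   # stand-in task labels

loss = loss_fn(model(x), y)
loss.backward()
opt.step()

# Only the head's parameters are updated during fine-tuning.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)
```

In practice one might also unfreeze some or all base layers at a lower learning rate; freezing everything but the head is simply the cheapest variant of the same idea.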

In an interview, Percy Liang talks about Stanford's Center for Research on Foundation Models (CRFM) and addresses issues such as the role of foundation models in society; how to be sure they're safe, fair, and reliable; and who or what will have the resources to build them.

From The Register

