Communications of the ACM

ACM Opinion

Self-Supervised Learning and Large Language Models

[Photo: Stanford University PhD student Alex Tamkin. Credit: Alex Tamkin]

In an interview, Stanford PhD candidate Alex Tamkin discusses his research on understanding, building, and controlling pre-trained models, particularly in domain-general and multimodal settings.

Topics include viewmaker networks, the opportunities and risks of foundation models, the impacts of large language models, research culture, and scientific communication.

From The Gradient