
Communications of the ACM

Last Byte

Reinventing Virtual Machines


Mendel Rosenblum


Stanford University professor Mendel Rosenblum, recipient of the inaugural ACM Charles P. "Chuck" Thacker Breakthrough in Computing Award, developed his groundbreaking virtual machines in the late 1990s as a way of enabling disparate software environments to share computing resources. Over the next two decades, these ideas would transform modern datacenters and power cloud computing services like Amazon Web Services, Microsoft Azure, and Google Cloud. Here, Rosenblum talks about scalability, systems design, and how the field has changed.

Virtual machines were pioneered by IBM in the 1960s. What prompted you to revisit the concept back in the 1990s?

When I got to Stanford, I joined up with John Hennessy, who was building a very large supercomputer with shared memory that scaled up to 4,000 processors. His group needed an operating system, because existing systems couldn't run on a machine of that size. That prompted me to start thinking about scalable operating systems. At the same time, I was working on another project about building operating systems on modern hardware. And I began trying to build simulation environments that you could run operating systems on.

That became the SimOS machine simulator.

It was a piece of software that would run and look enough like the hardware machine that you could boot an operating system and all its applications on it. It was much, much slower than a real machine, but it let us model how the hardware was performing under realistic workloads.
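At its core, a machine simulator of this kind is a loop that fetches, decodes, and executes guest instructions against a software model of the hardware. The toy sketch below illustrates the shape of the idea with an invented three-instruction machine; it is an editorial illustration, not SimOS code, and the real SimOS also modeled timing and devices in far greater detail.

```c
/* Toy sketch of the idea behind a machine simulator: software that
   fetches, decodes, and executes guest instructions while updating a
   model of hardware state. The three-opcode instruction set here is
   invented for illustration. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2 };  /* invented ISA */

typedef struct {
    uint32_t pc;        /* program counter                  */
    uint32_t reg[4];    /* general-purpose registers        */
    uint64_t cycles;    /* modeled cost: cycles elapsed     */
} Machine;

/* Each instruction: opcode, destination register, immediate/source. */
typedef struct { uint8_t op, rd, imm; } Insn;

static void run(Machine *m, const Insn *mem, size_t len) {
    while (m->pc < len) {
        Insn i = mem[m->pc++];          /* fetch and decode */
        switch (i.op) {                 /* execute          */
        case OP_LOADI: m->reg[i.rd] = i.imm;          m->cycles += 1; break;
        case OP_ADD:   m->reg[i.rd] += m->reg[i.imm]; m->cycles += 1; break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    Insn prog[] = {
        { OP_LOADI, 0, 40 },   /* r0 = 40  */
        { OP_LOADI, 1, 2  },   /* r1 = 2   */
        { OP_ADD,   0, 1  },   /* r0 += r1 */
        { OP_HALT,  0, 0  },
    };
    Machine m = {0};
    run(&m, prog, sizeof prog / sizeof prog[0]);
    printf("r0 = %u after %llu modeled cycles\n",
           (unsigned)m.reg[0], (unsigned long long)m.cycles);
    return 0;
}
```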

Sequent Computer Systems, which was developing an operating system for multiprocessor machines, was interested in your work on scalable operating systems, but told you their team was too small to implement such a major change.

They also said they needed the machine to be able to run Microsoft Windows. So when I was on the plane back from Portland, it occurred to me that maybe if we just brought back the idea of virtual machines and used them to sort of carve up these big machines we were building, we would be able to run existing operating systems on them.


"Technology has become such a prominent part of the world, and it's done a lot of good, but it's also enabled some not-so-good things."


Soon afterward, you and your students built a virtual machine monitor that could run multiple copies of commodity operating systems on a multiprocessor—starting with a MIPS processor and then, when you decided to commercialize your work, the Intel x86.

The Intel x86 was the dominant processor at the time, though we didn't really understand how complex it was. It was technically known as not virtualizable, because it didn't have adequate hardware support, so we had to figure out techniques to virtualize it anyway. Linux was pretty simple, because it didn't use very much of the x86. But Windows was taking forever; we kept discovering new things about the x86 architecture and having to figure out how to handle them.
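Concretely, the Popek-Goldberg requirement is that every instruction touching privileged state must trap when executed outside the most privileged mode, and the classic x86 has instructions that quietly do not. The sketch below is an illustration of the general problem, not VMware's code; it uses SMSW, one of the well-known offenders.

```c
/* SMSW (store machine status word) is a classic "sensitive but
 * unprivileged" x86 instruction: user-mode code can execute it
 * without trapping, yet it reveals the low bits of the privileged
 * CR0 register. A guest kernel deprivileged into user mode would
 * read real host state instead of faulting into the virtual machine
 * monitor. x86-only; on recent CPUs with UMIP enabled, the
 * instruction finally does fault in user mode. */
#include <stdio.h>

int main(void) {
    unsigned short msw;
    __asm__ volatile("smsw %0" : "=r"(msw));  /* no privilege check */
    printf("CR0 low bits, read from user mode: 0x%04x\n", msw);
    return 0;
}
```

VMware's published workaround was dynamic binary translation: rewriting such instructions on the fly so the monitor regains control.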

When you worked through all the problems, your prototype took eight hours to boot.

But it still had all the debugging stuff in it, along with everything else we needed to figure out when something went wrong. At the time, my wife was running the company, and she said, "Eight hours, that's not going to work."

And I told her, "This is incredibly good news. We know everything we have to do to run Windows; we just need to figure out how to make it run fast enough."

VMware was founded in 1998. Have your views changed at all since then about the relationship between universities and entrepreneurship?

Researchers, especially in applied fields like systems, are always trying to make an impact. And one of the biggest ways you can make an impact is to take your ideas and change the way that industry does something—that's sort of a best-case outcome. I'm very glad I was able to launch a company and move the industry to do things in a way that creates better outcomes for everyone.

The other aspect of it is probably not as noble—you can make a lot of money. You look around, and there's incredible wealth being created. If you're not part of that, you're going to end up being left behind or maybe not feel as successful.

VMware obviously had enormous impact on the industry. Are there things you would have done differently, knowing how the cloud computing industry has evolved since then?

VMware was fabulously successful in terms of a business venture. It had a unique product, and it was able to charge money for it. But one of the reasons it didn't become a standard virtualization platform is that major cloud computing vendors opted for an open-source solution rather than paying a high price for a proprietary piece of software.

I struggled with that at VMware. I really wanted to figure out how to make a product that was both successful and universally used. Would I do things differently now? It's a challenge. When you do something new, it takes a lot of resources to figure out how to do it, and you want to get rewarded for that. I know that people are launching more open-source companies now, but I just could not figure out how to do it.

How else has the field changed over the course of your career?

It used to be, in computer science, you would come up with an idea, and you basically looked at all the good things it could do and all the positive scenarios it supported. Now, we're seeing some very scary unintended consequences. Technology has become such a prominent part of the world, and it's done a lot of good, but it's also enabled some not-so-good things. People expect computer scientists to anticipate those scenarios, and some of them imagine that all we technologists need to do is take more ethics courses. I'm a little skeptical of that.

You're back at Stanford after a leave of absence at BeBop, a development platform acquired by Google in 2016. What are you working on?

Stanford has a rule that in any seven-year window, you can take two years off for a sabbatical. In 1998, I took a leave to found VMware, and then more recently for BeBop, but I like it here at Stanford, with new students and new ideas coming in all the time.

One of the things I've been looking at recently is a very old problem. If you have a bunch of computers—in a datacenter, for example—they all have clocks. We still use a pretty old protocol called the Network Time Protocol, or NTP, to synchronize those clocks. Essentially, it involves exchanging messages saying, "I think my time is this. What do you think your time is?" You end up with clocks that are synchronized pretty well from a human point of view, because everyone's device looks like it's showing about the same time.

But for a computer, in a program, it's kind of worthless. I could easily send a message from one computer and find out that it arrived earlier than I sent it. So a colleague, Balaji Prabhakar, and I have been working on a new clock sync algorithm that can synchronize clocks down into the single-digit nanoseconds.
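For reference, the NTP exchange Rosenblum describes reduces to four timestamps and two formulas (RFC 5905). The sketch below, with made-up numbers, shows the standard arithmetic and why asymmetric network delay corrupts the offset estimate; it illustrates the classic protocol, not the new algorithm he and Prabhakar are developing.

```c
/* The four-timestamp exchange behind NTP-style synchronization.
   The client records t1 when the request leaves and t4 when the
   reply returns; the server stamps t2 on receipt and t3 on
   transmit. Assuming symmetric network delay, clock offset and
   round-trip delay fall out of simple arithmetic. Timestamps here
   are hypothetical values in nanoseconds. */
#include <stdio.h>

int main(void) {
    double t1 = 1000.0;  /* client sends request  (client clock) */
    double t2 = 1530.0;  /* server receives       (server clock) */
    double t3 = 1540.0;  /* server sends reply    (server clock) */
    double t4 = 1100.0;  /* client receives reply (client clock) */

    /* Standard NTP estimates (RFC 5905): */
    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* server minus client */
    double delay  = (t4 - t1) - (t3 - t2);          /* round-trip time     */

    printf("estimated offset: %.1f ns, round-trip delay: %.1f ns\n",
           offset, delay);
    /* Any asymmetry between the true one-way delays goes straight
       into the offset estimate -- which is how NTP-level sync can
       make a reply appear to arrive before the request was sent. */
    return 0;
}
```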


Author

Leah Hoffmann is a technology writer based in Piermont, NY, USA.


©2020 ACM  0001-0782/20/4

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.



Comments


Jean-Louis Lafitte

Pretty good article, which shows that Mendel is still not far from the basic hardware virtualizer of Goldberg's Ph.D. thesis...
so nothing quite new.
Regards,
Jean-Louis


Dmitry Zaitsev

As for virtual machines, I was always interested in benchmarks, say with LAPACK or something, showing how much they slow things down. Recently we found that Docker is fantastic from this point of view.
Best wishes,
Dmitry Zaitsev
http://daze.ho.ua


