
Communications of the ACM

Viewpoint

The Success of the Web: A Triumph of the Amateurs


[Illustration: www keys. Credit: Jiri Hera]

The World Wide Web was created during a fervent time for computing, approximately 30 years ago. Back then, the Internet had established itself as a global network and was expanding beyond its original borders, penetrating both corporate and residential domains. The idea of the personal computer had been maturing for over a decade and was then taking hold among consumers. It was just the right time for a unifying killer app: the Web.

Thirty years later, the success of the Web is unquestionable. It has been growing exponentially in size since its introduction in 1989. It is at the heart of current retail practices and, more generally, of the corporate world. The Web is so prominent as an internetworked application that the term "Internet" is often improperly used to refer to the Web.

Such centrality in our economic structure and lives means the Web, as a political, social, and economic instrument, can be very powerful. So powerful that it can steer a presidential election and turn fantasies into commonly accepted facts, but also educate the less privileged and give free, broad access to knowledge. The Web of today has become at least as powerful as books have been since Gutenberg's invention of the metal movable-type printing press in Europe around 1450. As we increasingly acknowledge its power, we must also reflect on its sociological, economic, and philosophical consequences. In a past Communications column, then-Editor-in-Chief Moshe Vardi looked at the business models that emerged and demanded that we "build a better Internet"; again, the term Internet here mostly refers to the Web.6 Similarly, Noah Kulwin collects prominent opinions and argues that "Something has gone wrong with the Internet."4 Both arguments are historical in nature and delve into how business models and data exploitation can turn a resource for humanity into a dangerous weapon.7


Alan Kay's 2012 Interview

Was such power purposely embedded into the Internet and later the Web? Obviously not. The inventors had genuine and benevolent intentions. It is simply very difficult to predict the impact and consequences of an invention. What is necessary, however, and I am glad to see the trend growing, is the quest for historical reflection. That is something that we, computer scientists—researchers of a relatively young discipline—are not sufficiently accustomed to. To put it bluntly, as ACM Turing Award recipient Alan Kay does, "The lack of interest, the disdain for history is what makes computing not-quite-a-field," even as we want computing to be recognized, as it should be, as a field.

The Alan Kay quote comes from a 2012 interview that appeared in Dr. Dobb's Journal.3 The interview is full of insightful and poignant remarks. Kay identifies the Web as an example of a technology that was proposed while ignoring the history of the field it was contributing to. He explains, "The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web in comparison is a joke. The Web was done by amateurs."

The term "amateur" does not necessarily have a negative connotation; it can simply denote someone who engages in an activity for the sheer pleasure of doing so, someone who does not have the professional background and voluminous experience to carry it out. In this sense, Kay is right. The Web was not created by someone with an education in computer science, but by someone trained in physics. ACM Turing Award recipient Tim Berners-Lee worked on his hypertextual system as a personal project while at the European Organization for Nuclear Research (CERN). In his 1999 book, published to celebrate 10 years of the Web, he recalls, "I wrote it in my spare time and for my personal use, and for no loftier reason than to help me remember the connections among the various people, computers, and projects at the lab [CERN]."2


The Web is a distributed hypertext system and, when it was proposed, it already had many ancestors, such as Trigg's Textnet, Brown University's Intermedia, Gopher, and HyperCard.1 Out of these, I highlight two fundamental proposals that came into existence in the 1960s—more than two decades before the Web. Ted Nelson designed and implemented a system called Xanadu and coined the term hypertext in approximately 1963. In the same period, ACM Turing Award recipient Doug Engelbart led a large project at SRI called the oN-Line System (NLS), in which information had a hypertextual organization, among many other pioneering features. NLS was presented publicly in 1968 in what would later be known as the "Mother of All Demos." Berners-Lee appears to have been unfamiliar with these systems and in his book admits to becoming aware of NLS only five years after having proposed the Web. In summary, the lack of knowledge of the relevant background, an education in a loosely related discipline, and the Web's genesis as a "hobby" project all point to amateurism.

There is a natural tension between a professional design and an amateuristic one. A professional design builds on theoretical foundations, best practices, and knowledge of the state of the art. This helps prevent the repetition of mistakes and poor designs. At the same time, it imposes a rich set of constraints and may result in overdesigned, secondary components. By contrast, an amateuristic design frees the creator from cultural legacies and possible biases. It promotes creativity, at the price of increasing the risk of naive, avoidable flaws.


Patching the Web

In its original 1989 design, the Web is a stateless, distributed, linked information repository. Based on three simple ingredients—HTML, HTTP, and URL—it leaves great freedom in the way information is created, exchanged, and distributed. Furthermore, there is very little structure and meaning given to the units exchanged by the components of the Web, that is, marked-up text enclosed in an application-level protocol. Such a simple design has made it easy to build components for the Web (servers, browsers, content editors) and has proved a crucial factor in its broad adoption and rapid success. At the same time, it does not provide a well-defined hypertext system. Comparing it with a system like Xanadu, Ted Nelson is highly critical: "HTML is precisely what we were trying to prevent—ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management." A valid point for reflection, especially when one also considers the current broader picture of copyright management and information source determination. The Web's creator, though, defends his choice as a deliberate design decision: "When I designed HTML for the Web, I chose to avoid giving it more power than it absolutely needed—a principle of least power, which I have stuck to ever since. I could have used a language like Donald Knuth's TeX, which though it looks like a markup language is in fact a programming language. It would have allowed very fancy typography and all kinds of gimmicks, but there would have been little chance of turning Web pages into anything else. It would allow you to express absolutely anything on the page, but would also have allowed Web pages that could crash, or loop forever."
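To make that simplicity concrete, the following minimal sketch in Python shows the three ingredients at work: a URL names a resource, a single stateless HTTP request retrieves it, and what comes back is nothing more than marked-up text. The host example.org is an illustrative assumption, not anything prescribed by the Web's design.

```python
# A minimal sketch of the Web's three original ingredients, using only
# Python's standard library. The host "example.org" is an assumption
# made for illustration.
import http.client

conn = http.client.HTTPSConnection("example.org")
conn.request("GET", "/")                  # HTTP: one stateless request for the resource named by the URL
response = conn.getresponse()
print(response.status, response.reason)   # e.g., "200 OK"
html = response.read().decode("utf-8")    # HTML: marked-up text, with no further structure imposed
print(html[:200])                         # the client is free to render or process it as it pleases
conn.close()
```

Each such request stands entirely on its own; the server keeps no memory of it. That statelessness is precisely the first shortcoming the patches discussed next had to address.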

Kay's criticism of the Web is orthogonal and touches on the computational side rather than the purely hypertextual one. Instead of a Web browser interpreting text, he favors an operating-system-level container capable of hosting distributed objects in execution. This would mean a computational Web with well-defined execution semantics, rather than a stateless, distributed hypertext information repository.

The Web of today differs significantly from the original design of 1989. With its growth and success, it had to cater to many needs and requirements. The stateless nature of the interactions was one of the first shortcomings to be addressed; hence, cookies were introduced. The origin of cookies can be traced back to a request from a Netscape client who wanted to build a Web-based shop and store online transaction information outside of their own servers. The feature was added to the September 1994 Netscape browser release, and discussions on the specification of cookies started soon after, reaching the status of an agreed RFC, called HTTP State Management Mechanism, in 1997. Similarly, we saw the appearance of scripting languages, embedded virtual machines, graphical rendering frameworks, and so on. What I claim is that the Web has undergone continuous patching that has slowly and gradually moved it in the direction of a computational infrastructure. These patches have diverse origins, making the Web the result of a collective engineering effort. It is in fact thanks to the interested and voluntary effort of millions of people that the Web has evolved into what it is today. In my recent book,1 I identify and discuss five major patches that have to do with the computational nature of the Web or—better said—with the lack of it in the original design.
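The cookie patch can be illustrated with a short Python sketch. It is only a sketch under assumptions (the site https://example.org and its /cart path are hypothetical; any server that answers with a Set-Cookie header would do), but it shows how the HTTP State Management Mechanism layers state on top of an otherwise stateless protocol: the server hands the client a token in a Set-Cookie header, and the client returns it with every subsequent request in a Cookie header.

```python
# A sketch of the HTTP State Management Mechanism (cookies) with Python's
# standard library. The host "example.org" and the "/cart" path are
# hypothetical placeholders for a shop that actually sets a session cookie.
import urllib.request
from http.cookiejar import CookieJar

jar = CookieJar()                                    # client-side store for Set-Cookie headers
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar))

# First request: the server may reply with "Set-Cookie: session=...".
opener.open("https://example.org/")
for cookie in jar:
    print(cookie.name, "=", cookie.value)            # whatever state the server asked the client to keep

# Second request: the opener automatically sends the stored cookies back
# in a "Cookie:" header, so the server can recognize the same client.
opener.open("https://example.org/cart")
```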


One can similarly look at the evolution from the point of view of security and content-presentation patches. In the book, I also reflect on the engineering consequences of starting from an amateuristic design that was then collectively patched; the end result is a system that achieved global adoption. I identify the crucial factors for the success of a patched Web in an evolving landscape of hypertextual proposals.

Why has the Web succeeded where other similar, coeval systems failed? The end-to-end argument may very well apply here.5 Based on a series of experiences with networked applications at MIT, the authors suggest that even well-engineered layered architectures may cause great inefficiency in the development and operation of systems; placing functionality at the endpoints, at the application level, is often the most practical and effective solution. Brought to the case of the Web, the reasoning goes that a very well-designed system would have been impractical or impossible to build—like Xanadu—while something like the Web, with a simple application-level pattern and related technologies, was the way to widespread deployment and use.


Conclusion

Can an amateuristic design succeed? History tells us the answer is yes. For the Web, many would even argue that amateurism was the winning factor; few, that it was the only possible one. The indisputable success of the Web, however, still leaves the open-minded researcher wondering what the world would look like today if a hypertextually and semantically well-defined system of moving computational objects had succeeded instead. Would we have the same issues of computational efficiency and security? Would problems such as data privacy and protection, copyright management, and fake news be alleviated? And, most interestingly, would it have been as successful as the current Web is? History has not favored the systems that preceded or competed with the Web. Xanadu, despite its thorough design, never gained any adoption; Gopher succumbed to the simplicity and openness of the Web; HyperCard enjoyed a temporary and confined success. Can we conclude that amateuristic simplicity always wins adoption over complex engineering? It is difficult to say. For sure, the Web has had a transformational effect on society, something similar to the long-lasting effects of printed books; something that could accompany us for many generations.


References

1. Aiello, M. The Web Was Done by Amateurs: A Reflection on One of the Largest Collective Systems Ever Engineered. Springer-Nature, 2018.

2. Berners-Lee, T. and Fischetti, M. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor. Harper, San Francisco, 1999.

3. Binstock, A. Interview with Alan Kay. Dr. Dobb's Journal, online edition, 2012; http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442.

4. Kulwin, N. The Internet apologizes. New York Magazine (Mar. 2018), https://slct.al/2GZ6hGr.

5. Saltzer, J.H., Reed, D.P., and Clark, D.D. End-to-end arguments in system design. ACM Transactions on Computer Systems (TOCS) 2, 4 (Mar. 1984), 277–288.

6. Vardi, M.Y. How the hippies destroyed the Internet. Commun. ACM 61, 7 (July 2018), 9.

7. Vardi, M.Y. To serve humanity. Commun. ACM 62, 7 (July 2019), 7.


Author

Marco Aiello (aiellom@ieee.org) is Professor of Service Computing at the University of Stuttgart, Germany, and a member of the European Academy of Sciences and Arts.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.


 
