
Communications of the ACM


Objects Never? Well, Hardly Ever!

Credit: Marcello Bortolino

At the 2005 SIGCSE (Special Interest Group on Computer Science Education) Symposium in St. Louis, MO, a packed audience listened to the Great Objects Debate: Should we teach "objects first" or "objects later"?1 In the objects-first approach, novices are taught object-oriented programming (OOP) in their initial introduction to programming; in the objects-later approach, novices are first introduced to procedural programming, leaving OOP to the end of the first semester or the end of the first year. Kim Bruce and Michael Kölling spoke in favor of the objects-first approach, while their opponents Stuart Reges and Elliot Koffman argued for teaching procedural programming first. One of Bruce's arguments was that since OOP is dominant in the world of software development, it should be taught early. I later contacted Bruce to ask for a warrant for the dominance of OOP, but he could not give me one, nor could any of several other experts to whom I posed the same question.

I claim that the use of OOP is not as prevalent as most people believe, that it is not as successful as its proponents claim, and, therefore, that its central place in the CS curriculum is not justified.


Is OOP Dominant?

In assessing the dominance of OOP, we have to watch out for proxies. The extensive use of languages that support OOP proves nothing, because languages are chosen for a myriad of reasons, not necessarily for their suitability for OOP, nor for the suitability of OOP itself. Similarly, the use of a CASE tool that supports OOP is another proxy; these tools might just be convenient and effective for expressing the software design of a system, whether OOP is being used or not. Furthermore, many practices associated with OOP, such as decomposing software into modules and separating the interface from the implementation, are not limited to OOP; they are simply good software practice and have been supported by modern programming languages and systems for years.

The classical definition of OOP was given by Peter Wegner9: object-oriented = objects + classes + inheritance. The Java Swing GUI library, which makes massive use of inheritance, is frequently mentioned as a successful example of software that was designed using object-orientation and it certainly fits Wegner's definition. Is this style of programming truly dominant?
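Wegner's equation can be illustrated in a few lines of Java. The classes below are my own minimal illustration, not drawn from Swing or from the article:

```java
// Wegner's three ingredients in miniature: classes, inheritance,
// and objects. All names here are illustrative.
abstract class Shape {                       // a class
    abstract double area();
}

class Circle extends Shape {                 // inheritance
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}
```

An object is then an instance such as `new Circle(1.0)`, dispatched dynamically through the `Shape` abstraction; this is the style whose dominance the rest of the column questions.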

There is a claim that 90% of all code is being written for embedded systems.7 I could not locate the author's source, but it doesn't really matter, since the claim is just as suspect as the claim that OOP is dominant. However, embedded system development is surely an important field of software, and, based upon my experience, I do not believe that OOP has a significant contribution to make here, because the main challenges are not in the software design itself. The challenges arise from an "unfriendly environment": getting proprietary hardware to work, obtaining meaningful requirements while the system itself is being designed, integration with non-standard networks and buses, and, above all, determining how to test and verify the software.

Consider another field where the dominance of OOP is questionable. The development and implementation of new algorithms form the heart of many application areas, such as numerical simulation (for example, climate modeling) and image processing (for example, of satellite imagery). The challenges arise from mathematical difficulties and demands for performance, and OOP has little to contribute to meeting these challenges.

Not only is there no evidence to back up the claims for the dominance of OOP, but there is criticism of OOP, some of it quite harsh.3,8 I, too, have found OOP to be extremely disappointing and I will explain my position from a personal perspective.


What the "Real World" is Really Like

Suppose you ask your students to design OOP software for a car; you would probably give a good grade for the example shown in Figure 1. The only problem is that the real world doesn't work this way. A wonderful image in a paper by Grimm5 shows a schematic diagram for the computer system of the Mercedes-Benz S-class car. The legend for the schematic diagram indicates there are over 50 controllers, 600,000 lines of code, hundreds of bus messages, thousands of signals, and three networks. The details of this system are proprietary, but I am confident that no one sat down and used OOP to "design the software," for example, by deriving classes as shown in Figure 1. Almost certainly, the various subsystems were subcontracted to different companies who jealously guard their software because they are engaged in merciless competition.


The interface to the brake system will be implemented by network protocols and bus signals, and the commands to the brakes will be given as bits and bytes (or even by a hardware specification like "apply the brakes when lines 1 and 5 are asserted continuously for at least 10 milliseconds"). An abstract specification like void ApplyBrakes() is meaningless here. More importantly, what is likely to be changed is the interface, contrary to the OOP approach, which assumes that different implementations will be "swapped" at a single interface. Let us imagine that at some time in the future the brake manufacturer is asked to supply systems to Daimler competitor BMW. The mechanics, hydraulics, electronics, and algorithms will be reused, but the network protocols and bus signals will certainly require significant modification to fit the systems architecture used by BMW.
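To make the contrast concrete, here is a sketch of what an "interface" at this level actually is: a byte layout, not a method signature. The frame format below is entirely invented (real bus protocols are proprietary), but it shows exactly the part that would change when supplying BMW instead of Daimler:

```java
// Hypothetical encoding of the brake command from the text:
// "apply the brakes when lines 1 and 5 are asserted continuously
// for at least 10 milliseconds." The two-byte frame layout is
// invented purely for illustration.
final class BrakeFrame {
    static byte[] applyBrakes(int holdMillis) {
        // lines 1 and 5 asserted -> bits 0 and 4 of the mask byte
        byte lineMask = (byte) ((1 << 0) | (1 << 4)); // 0x11
        return new byte[] { lineMask, (byte) holdMillis };
    }
}
```

A `void ApplyBrakes()` abstraction tells us nothing about this frame; and it is the frame, not the implementation behind it, that a new customer would force us to change.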

I believe that industrial systems are successful because the decomposition is not into classes, but into subsystems. The Mercedes-Benz car has, on average, 600,000/50 = 12,000 source code lines per controller, so each individual subsystem can be developed by a relatively small team in a relatively short time. There is a need for talented systems engineers to specify and integrate the subsystems, but there is no overall grand software design where OOP might help.


Natural and Intuitive

In the 43 years since I first learned to program, I have frequently become excited about developments in programming, such as pattern matching (which I first encountered in SNOBOL) and strong type checking (a revelation when I first learned Pascal), and I found that these new constructs naturally and intuitively supported solutions to programming tasks. I have never had the same feeling about OOP, despite teaching it, writing textbooks on OOP languages, and developing pedagogical software in Java. During all this time, I found only one natural use of inheritance. (I developed a tool for learning distributed algorithms2 and found it convenient to declare an abstract class containing the common fields and methods of the algorithms and then to declare derived classes for specific algorithms.) Isn't it just possible that my inability to profit from OOP reflects a problem with OOP itself and not my own incompetence?
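The tool's code is not reproduced in the column, but the pattern described, common fields and methods in an abstract class with one derived class per algorithm, can be sketched in Java; all names below are invented for illustration:

```java
// Invented sketch of the one natural use of inheritance described
// in the text: shared state and a common driver in an abstract base
// class, algorithm-specific behavior in derived classes.
abstract class DistributedAlgorithm {
    protected final int nodeId;   // fields common to all algorithms
    protected int round = 0;

    DistributedAlgorithm(int nodeId) { this.nodeId = nodeId; }

    final void step() {           // common driver logic
        round++;
        onStep();
    }

    abstract void onStep();       // each algorithm fills this in

    int roundsRun() { return round; }
}

class EchoAlgorithm extends DistributedAlgorithm {
    EchoAlgorithm(int nodeId) { super(nodeId); }
    @Override void onStep() { /* algorithm-specific messaging here */ }
}
```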

I am not the only one whose intuition fails when it comes to OOP. Hadar and Leron recently investigated the acquisition of OOP concepts by experienced software developers. They found that: "Under the demands of abstraction, formalization, and executability, the formal OO paradigm has come to sometimes clash with the very intuitions that produced it."6 Again, isn't it just possible that the intuition of experienced software engineers is perfectly OK, and that it is OOP that is not intuitive and frequently even artificial?


Reuse from the Trenches

One of the strongest claims in favor of OOP is that it facilitates reuse. I would like to see evidence to support this, because, in my experience, OOP makes reuse difficult, if not impossible. Here, I would like to describe two attempts at reuse where I truly felt that OOP was the problem and not the solution. I would like to emphasize that—as far as I can judge—these programs were designed according to the principles of OOP, and the quality of the design and programming was excellent.

I developed the first concurrency simulator for teaching based upon a Pascal interpreter written by Niklaus Wirth. Several years ago, while looking for a modern concurrency simulator, I found a third-generation descendant of my simulator: an interpreter written in Java, extended with a debugger that had a Swing-based GUI. I wished to modify this software to interpret additional byte codes and to expand the GUI by including an editor and a command to invoke the compiler.

The heart of an interpreter is a large switch/case-statement on the instruction codes. An often-cited advantage of OOP is its ability to replace these statements with dynamic dispatching. In the Java program, an abstract class for byte codes was defined, and from it, other abstract and concrete classes were derived for each of the byte codes. I simply found it more difficult (even with Eclipse) to browse and modify 80 classes than I did when there were 80 alternatives of a case-statement in Pascal.
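In miniature, the two dispatch styles look like this. This is a toy two-opcode machine in Java; none of these names come from the actual interpreter:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Style 1: the Pascal-era case-statement on instruction codes.
class SwitchInterpreter {
    static final int PUSH = 0, ADD = 1;

    int run(int[] code) {                 // code: [op, arg, op, arg, ...]
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (pc < code.length) {
            switch (code[pc]) {
                case PUSH: stack.push(code[pc + 1]); pc += 2; break;
                case ADD:  stack.push(stack.pop() + stack.pop()); pc += 1; break;
                default:   throw new IllegalArgumentException("bad opcode");
            }
        }
        return stack.pop();
    }
}

// Style 2: the OOP replacement, one class per byte code, with
// dynamic dispatch instead of a switch.
abstract class ByteCode {
    abstract void execute(Deque<Integer> stack);
}
class Push extends ByteCode {
    final int value;
    Push(int value) { this.value = value; }
    @Override void execute(Deque<Integer> s) { s.push(value); }
}
class Add extends ByteCode {
    @Override void execute(Deque<Integer> s) { s.push(s.pop() + s.pop()); }
}
```

With 80 opcodes, Style 2 means 80 small classes scattered across files, which is the browsing burden the text describes.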

This was only an annoyance; the real problem quickly surfaced. The extreme encapsulation encouraged by OOP caused untold complexity, because objects have to be passed to various classes via constructors. For example, in the original program, when a button is clicked to request the display of the history window, the statement performed in the event handler is as shown in Figure 2. Well, the history window is derived from an abstract window class, so OOP makes sense here, but there is one debugger, one debugger frame, one interpreter, and one window manager. Why can't these subsystems be declared publicly (and implemented privately) without the baggage of allocated objects and constructors? My attempt to modify the software was continually plagued by the need to access one of these subsystems from a class that had not been passed the proper object. This resulted in cascades of modifications and complicated the task considerably; in addition, it led to a decline in coherence and cohesion. As a result of this experience, I have ceased to automatically encapsulate everything; instead, I judge each case on its own merits. In general, I see nothing wrong with declaring record types and subsystem objects publicly, encapsulating only the implementation of data structures that are likely to change.
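What such a publicly declared, privately implemented subsystem might look like in Java (`WindowManager` and its methods are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the alternative argued for above: one subsystem declared
// publicly, with only its implementation encapsulated, so no class
// needs the object threaded to it through a constructor chain.
final class WindowManager {
    public static final WindowManager INSTANCE = new WindowManager();

    private final Map<String, Boolean> visible = new HashMap<>(); // hidden state
    private WindowManager() {}    // only the implementation is private

    public void show(String window)         { visible.put(window, true); }
    public boolean isVisible(String window) {
        return visible.getOrDefault(window, false);
    }
}
```

An event handler anywhere in the program can then write `WindowManager.INSTANCE.show("history")` without having been passed a reference through every intervening constructor.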

My second attempt at reusing OOP software involved a software tool, VN, that I developed for learning nondeterminism. It takes as input the XML description of a nondeterministic finite automaton, generated by an interactive graphical tool for studying automata. To facilitate using VN as a single program, I decided to extract the graphics editor from the other tool. But OOP is about classes, and Java enables the use of any public declaration anywhere in a program just by giving its fully qualified name. There were just enough such references to induce a cascade of dependencies when I tried to extract the Java package containing the graphics editor.

This is precisely the issue I raised with the imaginary brake system. What I wanted to reuse was the implementation of the graphics editor even if that meant modifying the interface. I saw that I would have had to study many of the 400 or so classes in 40 packages, just to extract one package. The effort did not seem worthwhile, so I gave up the idea of reusing the package and included the (very large) jar file of the other tool in my distribution.




I suspect I know what your next question is going to be: What paradigm do you propose instead of OOP? Ever heretical, I would like to question the whole concept of programming paradigm. What paradigms are used to design bridges? My guess is the concept of paradigm does not exist there. Engineering design is done by using technology to implement requirements. The engineer starts from data (length of the bridge, depth of the water, characteristics of the river bed) and constraints (budget, schedule), and she has technology to use in her design: architecture (cables, stays, trusses) and materials (steel, concrete). I simply don't see a set of alternative "paradigms" for building bridges.

Similarly, the software engineer is faced with requirements and constraints, and is required to meet them with technology: computer architectures, communication links, operating systems, programming languages, libraries, and so on. Systems are constructed in complex ways from these technologies, and the concept of programming paradigm is of little use in the real world.



It is easy (and not incorrect) to dismiss what I have written as personal opinion and anecdotes, just as I have dismissed OOP as based upon personal opinion and anecdotes without solid evidence to support its claims. But the difference between me and the proponents of OOP is that I am not making any hegemonic claims for my opinions. I do not believe there is a "most successful" way of structuring software nor that any method is "dominant." This hegemony is particularly apparent in CS education, as evidenced by the objects-first vs. objects-later debate concerning teaching OOP to novices. No one questions whether OOP is at all appropriate for novices, and no one suggests an objects-as-an-upper-level-elective approach or an objects-in-graduate-school approach. Perhaps the time has come to do so.



I will conclude with a "to-do list":

  • Proponents of OOP should publish analyses of successes and failures of OOP, and use these to clearly and explicitly characterize the domains in which OOP can be recommended.
  • Software engineers should always use their judgment when choosing tools and techniques and not be carried away by unsubstantiated claims. Even if you are constrained to use a language or tool that supports OOP, that in itself is not a reason to use OOP as a design method if you judge it is not appropriate.
  • Educators should ensure students are given a broad exposure to programming languages and techniques. I would especially like to see the education of novices become more diverse. No harm will come to them if they see objects very, very late.

References


1. Astrachan, O., Bruce, K., Koffman, E., Kölling, M., and Reges, S. Resolved: Objects early has failed. SIGCSE Bulletin 37, 1 (Feb. 2005), 451–452.

2. Ben-Ari, M. Distributed algorithms in Java. SIGCSE Bulletin 29, 3 (Sept. 1997), 62–64.

3. Gabriel, R. Objects Have Failed: Notes for a Debate, 2002.

4. Gries, D. A principled approach to teaching OOP first. SIGCSE Bulletin 40, 1 (Feb. 2008), 31–35.

5. Grimm, K. Software technology in an automotive company: Major challenges. In Proceedings of the 25th International Conference on Software Engineering (Portland, OR, May 3–10, 2003). IEEE Computer Society, Washington, D.C., 498–503.

6. Hadar, I. and Leron, U. How intuitive is object-oriented design? Commun. ACM 51, 5 (May 2008), 41–46.

7. Hartenstein, R. The digital divide of computing. In Proceedings of the 1st Conference on Computing Frontiers (Ischia, Italy, Apr. 14–16, 2004). ACM, New York, 357–362.

8. Jacobs, B. Object Oriented Programming Oversold!

9. Wegner, P. Dimensions of object-based language design. In Conference Proceedings on Object-Oriented Programming Systems, Languages and Applications (OOPSLA '87, Orlando, FL, Oct. 4–8, 1987). N. Meyrowitz, Ed. ACM, New York, 168–182.



Mordechai (Moti) Ben-Ari is an associate professor in the Department of Science Teaching at the Weizmann Institute of Science in Rehovot, Israel, and an ACM Distinguished Educator.






Figure 1. Example OOP software for a car.

Figure 2. Example statement performed in the event handler.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.


Michael Holakovsky

Ever read the book "The Art of Unix Programming" by Eric Raymond? Your arguments are basically the old concepts of Unix: build small components and combine them to accomplish something bigger.

Alan Kay

I think this article raises important issues.

A good example of a large system I consider "object-oriented" is the Internet. It has billions of completely encapsulated objects (the computers themselves) and uses a pure messaging system of "requests, not commands," etc.

By contrast, I have never considered that most systems which call themselves "object-oriented" are even close to my meaning when I originally coined the term.

So part of the problem here is a kind of "colonization" of an idea -- which got popular because it worked so well in the ARPA/PARC community -- by many people who didn't take the trouble to understand why it worked so well.

And, in a design-oriented field such as ours, fads are all too easy to hatch. It takes considerable will to resist fads and stay focused on the real issues.

Combine this with the desire to also include old forms (like data structures, types, and procedural programming) and you've got an enormous confusing mess of conflicting design paradigms.

And, the 70s ideas that worked so well are not strong enough to deal with many of the problems of today. However, the core of what I now have to call "real oop" -- namely encapsulated modules all the way down with pure messaging -- still hangs in there strongly because it is nothing more than an abstract view of complex systems.

The key to safety lies in the encapsulation. The key to scalability lies in how messaging is actually done (e.g. maybe it is better to only receive messages via "postings of needs"). The key to abstraction and compactness lies in a felicitous combination of design and mathematics.

The key to resolving many of these issues lies in carrying out education in computing in a vastly different way than is done today.

Best wishes,


John Fitzpatrick

The author's difficult experiences with re-use of object-oriented code come more, I think, from poorly designed systems than from flaws in OOP.

My attempts at the re-use of other people's code have often been frustrating, regardless of the programming paradigms (OOP, structured programming, or just plain code).

Conversely, I have had success with OOP on small teams. We structured our code and refactored away redundancy; we leveraged inheritance to push common code into parent classes and keep specific code in derived classes. Is this code re-use? Yes, in a small sense. Is it the grand idea of re-use of code by anyone, on another (unrelated) system? Certainly not.

Object-oriented programming will not guarantee understandable, re-usable code. Neither did structured programming, flowcharts, or high-level compilers.


It's the best we've got. (So far.)

Object-oriented programming lets us group (and split) our concepts. And as the good Mr. Kay observes, safety lies in encapsulation. OOP gives us that encapsulation.

Recent efforts in language design have given us dynamic languages and functional languages. These offer possibilities for programming. They build on OOP, just as OOP built on structured programming.

OOP may not be dominant, but it will be part of our future.

Paul Valckenaers

The core of OO lies elsewhere. OOP is about the tool. The name Simula-67, perhaps the Adam and Eve of OOP, gives a first clue. During his lecture in Leuven (B) in 1986, Jackson provided me a second clue: the world-of-interest is much more stable than the user requirements, software features or functionality. Note: Jackson was teaching on developing and programming administrative software in COBOL, not about OOP.

Jackson's example was about personnel administration, where hiring and promoting people will remain relatively constant in the domain. The report-generating functionality and features, requested by personnel management and the laws, are likely to change a lot more frequently. Therefore, Jackson recommended modeling the world-of-interest first, including track-and-trace, and implementing the functionality and features required by the user second, where each feature interacts with this mirror image of the world-of-interest.

Allow me to add a third clue: when the world-of-interest is part of the real world, the pieces of your software that model parts of the real world inherit the consistency and coherence of the real world. Integrating such pieces of software is analogous to integrating road maps: they may have different conventions and include different aspects of the world-of-interest but they cannot conflict in the way policies, laws, rules, resource allocation decisions often do.

Therefore, true OO adopts the Unified Process with an additional constraint. At first, the use cases only serve to identify the relevant entities in the problem domain. They are to be forgotten while the developers create software that mirrors the problem domain. These developers must not rely on use case information to speed up or simplify this first effort. When a software model (or modeling facility) for the problem domain or world-of-interest is available, the use cases re-enter the picture and the user needs are addressed. Thus, OO is about creating software artifacts whose validity and (re)usability solely depend on the presence of stable counterparts in the real world.

Why is it so difficult to communicate this insight and message in the IT community? The answer is twofold. First, a lot of software developments (e.g., administrative applications) have a standardized world-of-interest that is so stable and omnipresent that the problem domain model has become implicit. Moreover, many of these applications have a world-of-interest that is artificial (and partially standardized by legacy). Without a community effort and common understanding, explicit problem domain modeling mirroring the real-world entities that are affected remains uninteresting and infeasible for individual players.

Second, a lot of software developments cannot tolerate an explicit problem domain model in the final application (e.g. telecom, embedded systems where power consumption, execution speed are key concerns). They require the domain model to be compiled into the final code.

In view of IT being a young domain suffering from a shortage of talented developers, and the affinity of IT professionals with the above two classes of software, the full contribution of OO remains largely untapped. However, if IT needs to penetrate application domains where the penalty of imposing an IT-centric problem domain model (cf. the first class) is prohibitive, or where a compiled problem domain model is an unsolved issue, the OO approach as pioneered by Jackson represents the answer, for which there are few alternatives. And these domains are important to society: traffic, production, logistics, energy, and health.

Therefore, teaching OOP from the start is not sufficient but if there are no compelling reasons to do otherwise, it may prepare the grounds for the right kind of OOD. If the Jackson approach presented here is equally well disseminated without OOP, then the issue remains open.

Mordechai Ben-Ari

A reader has brought to my attention Soren Lauesen's article "Real-Life Object-Oriented Systems," IEEE Software, March/April 1998, 76–83. (For those without access to the "competition," a preliminary version is available online.) Lauesen's central finding is that in _real_ OO applications, especially in business, most objects tend to be "degenerate"; that is, they are just data structures or libraries of procedures. This is consistent with Alan Kay's complaint that OO is not being used as originally conceived, where objects do significant computation in response to receiving a message.

In the following issue of IEEE Software I found the article "Does OO Sync with How We Think?" by Les Hatton. Along with an empirical study (bugs in an OO program in C++ take much longer to fix than bugs in a similar non-OO C program), Hatton discusses the claim that thinking in terms of OO is natural, an issue I raised in conjunction with the research by Hadar and Leron. While Hatton finds that encapsulation _partially_ fits the way we think, he claims that this is not at all true of the other central concepts of OO -- inheritance and polymorphism. His conclusion: "But OO is not naturally and self-evidently associated with the least error-prone way of reasoning about the world and should not be considered a primary candidate for a more effective programming paradigm".

These papers describe empirical studies that support my views. What bothers me most is that proponents of OO cannot point to _empirical studies_ supporting their claims for the superiority of OO.

CACM Administrator

The following letter was published in the Letters to the Editor section of the January 2011 CACM.
--CACM Administrator

Unlike Mordechai Ben-Ari's Viewpoint "Objects Never? Well, Hardly Ever!" (Sept. 2010), for me learning OOP was exciting when I was an undergraduate almost 30 years ago. I realized that programming is really a modeling exercise and the best models reduce the communication gap between computer and customer. OOP provides more tools and techniques for building good models than any other programming paradigm.

Viewing OOP from a modeling perspective makes me question Ben-Ari's choice of examples. Why would anyone expect the example of a car to be applicable to a real-time control system in a car? The same applies to the "interface" problem in supplying brake systems to two different customers. There would then be no need to change the "interface" to the internal control systems, contrary to Ben-Ari's position.

Consider, too, quicksort as implemented in Ruby:

def quicksort(v)
  return v if v.nil? or v.length <= 1
  less, more = v[1..-1].partition { |i| i < v[0] }
  quicksort(less) + [v[0]] + quicksort(more)
end

This concise implementation shows quicksort's intent beautifully. Can a nicer solution be developed in a non-OOP language? Perhaps, but only in a functional one. Also interesting is to compare this solution with those in 30+ other languages, especially the Java versions. OO languages are not all created equal.

But is OOP dominant? I disagree with Ben-Ari's assertion that "...the extensive use of languages that support OOP proves nothing." Without OOP in our toolbox, our models would not be as beautiful as they could be. Consider again the Ruby quicksort, with no obvious classes or inheritance; yet the objects themselves (arrays, iterators, and integers) are all class-based and have inheritance. Even if OOP is needed only occasionally, the fact that it is needed at all and subsumes other popular paradigms (such as structured programming) supports the idea that OOP is dominant.

I recognize how students taught flowcharts first (as I was) would have difficulty switching to an OO paradigm. But what if they were taught modeling first? Would OOP come more naturally, as it did for me? Moreover, do students encounter difficulties due to the choice of language in their first-year CS courses? I'm much more comfortable with Ruby than with Java and suspect it would be a better introductory CS language. As it did in the example, Ruby provides better support for the modeling process.

Henry Baragar

CACM Administrator

The following letter was published in the Letters to the Editor section of the January 2011 CACM.
--CACM Administrator

I respect Mordechai Ben-Ari's Viewpoint (Sept. 2010), agreeing there is neither a "most successful" way of structuring software nor even a "dominant" way. I also agree that research into success and failure would inform the argument. However, he seemed to have fallen into the same all-or-nothing trap that often permeates this debate. OO offers a higher level of encapsulation than non-OO languages and allows programmers to view software realistically from a domain-oriented perspective, as opposed to a solution/machine-oriented perspective.

The notion of higher levels of encapsulation has indeed permeated many aspects of programmer thinking; for example, mobile-device and Web-application-development frameworks leverage these ideas, and the core tenets of OO were envisioned to solve problems involving software development prevalent at that time.

Helping my students become competent, proficient software developers, I find the ones in my introductory class move more easily from OOP-centric view to procedural view than in the opposite direction, but both types of experience are necessary, along with others (such as scripting). So, for me, how to start them off and what to emphasize are important questions. I like objects-first, domain-realistic software models, moving as needed into the nitty-gritty (such as embedded network protocols and bus signals). Today's OO languages may indeed reflect deficiencies, but returning to an environment with less encapsulation would mean throwing out the baby with the bathwater.

James B. Fenwick Jr.
Boone, NC

CACM Administrator

The following letter was published in the Letters to the Editor section of the January 2011 CACM.
--CACM Administrator

The bells rang out as I read Mordechai Ben-Ari's Viewpoint (Sept. 2010): the rare, good kind, signaling I might be reading something of lasting importance. Particularly important is his example of an interpreter being "nicer" as a case/switch statement; some software is simply action-oriented and does not fit the object paradigm.

His secondary conclusion, that Eastern societies place greater emphasis on "balance" than their Western counterparts, to the detriment of the West, is equally important in software. Objects certainly have their place but should not be advocated to excess.

Alex Simonelis

CACM Administrator

The following letter was published as a Letter to the Editor in the November 2010 CACM.
--CACM Administrator

Though I agree with Mordechai Ben-Ari's Viewpoint "Objects Never? Well, Hardly Ever!" (Sept. 2010) that students should be introduced to procedural programming before object-oriented programming, dismissing OOP could mean throwing out the baby with the bathwater.

OOP was still in the depths of the research labs when I was earning my college degrees. I was not exposed to it for the first few years of my career, but it intrigued me, so I began to learn it on my own. The adjustment from procedural programming to OOP wasn't just a matter of learning a few new language constructs. It required a new way of thinking about problems and their solutions.

That learning process has continued. The opportunity to learn elegant new techniques for solving difficult problems is precisely why I love the field. But OOP is not the perfect solution, just one tool in the software engineer's toolbox. If it were the only tool, we would run the risk of repeating psychologist Abraham Maslow's warning that if the only tool you have is a hammer, every problem tends to look like a nail.

Learning any new software technique (procedural programming, OOP, or simply what's next) takes time, patience, and missteps. I have made plenty myself learning OOP, as well as other technologies, and continue to learn from and improve because of them.

For his next sabbatical, Ben-Ari might consider stepping back into the industrial world for a year or two. We've learned a great deal about OOP since he left for academia 15 years ago.

Jim Humelsine
Neptune, NJ
