David Lorge Parnas is well known for his insights into how best to teach software engineering. Parnas has been studying software design and development since 1969, and has received more than 25 awards for his contributions. In 2007, he shared the IEEE Computer Society's 60th anniversary award with computer pioneer Maurice Wilkes. He received B.S., M.S., and Ph.D. degrees in Electrical Engineering from Carnegie Mellon University. He has published more than 285 papers, many of which are considered classics. He designed the CEAB (Canadian Engineering Accreditation Board) accredited McMaster University Software Engineering program, where he is now professor emeritus.
He and seven colleagues have articulated an approach based on actionable capabilities rather than concepts. Communications columnist Peter J. Denning had a conversation with Parnas about these ideas.
PETER J. DENNING: You have a long-standing interest in methods for producing reliable and safe software. In 1971, you first articulated the principle of information hiding [2,3,4]. Why is this still important?
DAVID LORGE PARNAS: The concept of information hiding is based on the observation that changes outside a program (such as a revision of other programs, hardware changes, or requirements changes) affect the correctness of that program only if the change invalidates information that would have to be used when showing that program's correctness. Consequently, software should be organized so that dependence on information that is likely to change is restricted to a small, clearly identified set of programs. Applying this principle results in programs that are easier to understand, easier to maintain, and less likely to contain errors.
As expressed in my early papers, the information hiding principle applies to all programs in any programming language. It serves as a guideline for software developers and maintainers. The first of the three papers introduced the basic idea; the second illustrated it for a simple system and the third showed the surprising implications of the principle when applied to more complex systems; that paper proposes a software structure that is very different from what is usually found in such systems. It also illustrates how that structure can be documented.
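The principle can be illustrated with a minimal sketch (not taken from the papers; the module and names are illustrative): a module exposes a small set of operations while hiding a design decision that is likely to change, here the storage representation.

```python
# Illustrative sketch of information hiding (names are hypothetical,
# not from Parnas's papers). The "secret" of this module is how the
# numbered lines are stored. Clients depend only on put/get/count,
# so swapping the hidden representation (a dict for a list, or an
# on-disk store) cannot invalidate any client's reasoning about
# its own correctness.

class LineStore:
    """Stores numbered lines; the representation is the hidden secret."""

    def __init__(self):
        self._lines = {}  # hidden decision: could become a list or a file

    def put(self, number, text):
        self._lines[number] = text

    def get(self, number):
        return self._lines[number]

    def count(self):
        return len(self._lines)


# A client written only against the interface keeps working no matter
# how the representation behind it changes.
store = LineStore()
store.put(1, "first line")
store.put(2, "second line")
print(store.get(1))   # -> first line
print(store.count())  # -> 2
```

Because the client never touches `_lines` directly, the dependence on the representation is confined to the one module that owns it, which is exactly the restriction the principle calls for.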
In 1968 and 1969, two famous NATO conferences discussed a new field—software engineering—to help us achieve reliable and trustworthy software systems. Why?
Most of those who attended the two conferences had been educated in either science or mathematics, but drifted into building programs for others to use (software). They realized that their education had taught them how to add to the world's knowledge, but not how to build products. Noting that traditional engineering education taught methods of designing and constructing reliable products, they proposed that software development should be regarded and taught as an engineering discipline rather than as a branch of science.
How did universities respond to the conferences?
After the conferences, software engineering was treated as the area within computer science that studied ways to produce safe and reliable products. Some included project management within the field.
The initial response was to add a single one-semester course entitled "Software Engineering" to CS curriculums. It was not clear what should be taught in such a course. When I was assigned to teach that course, I asked, "What should it cover?" The only answer was "Dave, you are an engineer—you figure it out."
Later, as software became more important, CS departments defined software engineering "tracks" that included additional required courses—such as compilers, database systems, and software test methods. Important aspects of engineering such as interface design and fault tolerance were rarely included.
On a number of occasions, you wrote sharp critiques of many of these programmes. What was the basis of your criticisms? Did they produce results?
My first critique [6] complained about many aspects of the "tracks." I pointed out that they were teaching topics that interested the teachers rather than what the students would need to know as professional software developers.
My second major commentary [7] was written after my university (McMaster) had formed a new department and was offering a program designed to be taught in the engineering faculty rather than the science faculty. It had no courses in common with the previously existing CS program. Students took the same first-year courses as all other engineering students; only at the end of that common first year could they select software engineering as their programme.
The point of both of these papers was that software engineering education should be considered professional education (like architecture, medicine, law or engineering) rather than based on a "liberal arts" model like physics or mathematics.
Our programme was designed to be accredited by the Canadian Engineering Accreditation Board. In the year we graduated our first students, two other Canadian universities were also accredited and graduated students.
Both papers generated a lot of discussion. The programme described in the 1998 paper [7] was well received by students and employers. Unlike graduates of CS programmes, our SE graduates could be licensed as professional engineers after passing the usual law and ethics exams.
In 2017, you chaired a committee that wrote a report about software development programmes [1]. The resulting paper takes a capabilities-based approach to specifying the goals of professional programs in software development. How does this approach differ from the more common approaches?
Previous efforts to prescribe the content of CS and SE programs were based on the concept of a "Body of Knowledge." They specified what the students should know when they graduated.
Noting that these were professional programs, we chose to specify what the graduates should be able to do upon graduation. Our goal was to allow individual institutions to choose the knowledge and methods that would be taught provided that they gave the graduates the required capabilities. Those who read the 2017 paper will see that the approach is quite different from earlier approaches to curriculum specification. It emphasizes software development capabilities, not the name and content of the courses.
The 2017 paper stresses the difference between CS programmes and professional software development programmes, basing your approach on very old observations by Brian Randell (an author of the reports on the 1968 and 1969 conferences) and Fred Brooks, author of the very popular book The Mythical Man Month. What did they say that attracted you?
Randell described software engineering as "multiperson development of multiversion programs." Brooks said that, in addition to writing code, software engineering required both combining separately written programs and "productizing" them—that is, making them suitable for use by people who had not written them. These topics usually received little or no attention in traditional programming courses.
We were able to identify a set of capabilities required to do the things that Randell and Brooks had identified as differentiating software development from basic programming. The accompanying table lists those capabilities but readers really should read the paper to see what is meant. They will find detailed, concrete justifications, definitions, and guidelines.
You used "software systems engineering" to denote the class of programmes you were discussing. That class included both specialized programmes and general software engineering programmes. You listed 12 capabilities that should be imparted to software systems engineers. They are all engineering capabilities. How should this be used?
The graduate capabilities list is intended as a checklist for those teaching software development. They should be asking, "Will our graduates have these capabilities?" The answer should be: "Yes, all of them." If not, the institutions should be redesigning their programmes so that they can answer "Yes!"
Do you advocate that SE and CS education would both be better if they were kept separate?
The two are as distinct as physics and mechanical engineering. The physics taught in the two programmes would overlap, but the engineers are taught how to use the material to build reliable products while the physics majors are taught how to add to the body of knowledge that constitutes the science.
Professional programmes tend to be more tightly constrained than science programs because there are many things that a professional must know to be licensed and allowed to practice. A science student is often allowed to make more choices and become a specialist.
It is difficult (though not impossible) to have both types of programmes in one department.
You have said a professional software engineering programme would appeal to the students who want to learn how to build things for others to use. Are CS departments out of tune with most of their students?
The CS departments I have visited have a diverse set of students. Some want to be developers, while others want to be scientists. Many departments offer a compromise programme that is far from ideal for either group. That is why I prefer two distinct programmes taught by different (though not necessarily disjoint) sets of faculty members.
In 1985, you took a strong stand against the U.S. strategic defense initiative (SDI) [5], which promised to build an automated ballistic missile defense (BMD) system that would allow the U.S. to abandon its intercontinental ballistic missiles (ICBMs). You maintained the software could not be trusted enough for the U.S. to eliminate its missiles. We have BMD systems today; were you wrong?
Not at all! SDI was predicted by its advocates to be ready in six years and capable of intercepting and destroying sophisticated missiles, including newer designs intended to defeat a BMD system by taking evasive measures. The system described by President Reagan would have been impossible to test under realistic conditions. The BMD systems in use today (33 years later) are not reliable even when facing unsophisticated rockets. No ICBM systems have been dismantled, because BMD systems cannot be trusted.
Do you see a relationship between the BMD claims made in the 1980s and today's claims about artificial intelligence?
Both fields are characterized by hyperbolic claims, overly optimistic predictions, and a lack of precise definitions. Both will produce systems that cannot be trusted [9].
In 2007, you published a short paper [8] that criticized the evaluation of researchers by the number of papers they publish. What led you to publish such a paper?
I have served on many committees that evaluate faculty members for promotion and many others that evaluate research proposals. All too often, I have been disappointed to learn that most of my fellow committee members had not read any of the applicant's papers. They had merely counted the papers and (sometimes) estimated the selectivity of the journals and conferences. On two occasions colleagues complained when I started to discuss problems in the applicant's papers (which I had read). They said that the referees had already read the papers and approved them, so I had no right to evaluate them. In effect, they said I was "out of line" in reading the papers and evaluating their contribution.

The final straw came when someone published a computer program for doing the work of a committee by counting the papers and computing a score. The fact that such simple programs would often get the same result as the committee showed me that committee members were not doing their jobs. For example, referees of an individual paper cannot detect an author who publishes the same results several times using different titles and wording. We have scientists on the evaluation committees precisely because they have the expertise to read the papers and evaluate the contribution made by the author. If they don't do that, we don't need them. Sometimes a single paper is a far more important contribution than a dozen shallow or repetitive papers. Simply counting papers is not enough.
I have observed that people being evaluated for appointments or grants learn how to "play the game." If they see that they will be evaluated by people who won't read the papers but just count them, they know how to increase their score without actually improving the contribution. My 2007 paper discussed some techniques that researchers use to make themselves look better than they are.
1. Landwehr, C., Ludewig, J., Meersman, R., Parnas, D.L., Shoval, P., Wand, Y., Weiss, D., and Weyuker, E. Software systems engineering programmes: A capability approach. J. of Systems and Software 125, (2017), 354–364.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.