
Communications of the ACM



Although the Viewpoint by Peter Freeman and David Hart ("A Science of Design for Software-Intensive Systems," Aug. 2004) explored the idea that design must account for such factors as style and innovation, it didn't say that the underlying tools of computer science and engineering, on which design is based, are unstable, and that this is a primary cause for the lack of established design in the field.

Over the past few decades, machine code has been replaced by ad hoc high-level language programming, which in turn succumbed to structured programming, which led to object-oriented programming, which moved on to client-server programming and distributed objects, and even now is evolving toward Web services. And while there is overlap in the design principles of each of these trends, each also delivers its own new design elements.

I suspect that the fact that software is "soft," or an intellectual creation, makes it more slippery and less amenable to the setting of standards and principles than is the case for the "hard" engineering disciplines to which software developers and engineers often compare themselves. This is an additional complicating factor militating against the establishment of a strong design basis.

It is also entirely possible that software design principles may vary significantly depending on the application for which the software is intended.

It is therefore difficult to be confident that software design is moving toward a stable, solid basis. We may have no choice but to live with relatively few fundamental design super principles (such as "avoid unnecessary branches of logic"), even as we discover new principles to keep up with the evolving field.

It has been said that engineering is applied mathematics, or math subject to the rules imposed by reality. Software seems to be subject to fewer such rules and is therefore perhaps more like math itself. How does math do design? It provides a few super principles (such as "all work must be based on rigorous proof"), discovering the rest as it goes along.

Alex Simonelis


The Computer Didn't Make You Do It

Peter G. Neumann's "Inside Risks" column "The Big Picture" (Sept. 2004) listed major IT problem areas, including development practice, trust, infrastructure, privacy, accountability, intellectual property, and education. All certainly need attention. But in the meantime, I would like to suggest two positive additions.

First, we need a positive aspect of privacy, say, the goal of allowing everyone to surf the Web anonymously. Emphasizing only the negatives (such as electronic surveillance and the kind of ID checking people don't want) results in a weaker position than one that also includes the positive things such freedom would mean.

Second, I would add the misuse of computers by scientists and technocrats who make life-and-death decisions, blindly following whatever the computer apparently told them to do. For example, under the guise of studying the spread of epidemics, a medical simulation might recommend curing a disease by quarantining the victims to death.

Government agencies around the world are moving toward adopting this trivial solution as the main (or only) weapon in their disease-fighting arsenal.

This trend could even lead to the death of many innocent people, unaware that it is a mathematical travesty to use a trivial solution instead of seeking a workable cure.

Mike Brenner


Why Doesn't Everyone Telecommute?

Ralph D. Westfall's "Does Telecommuting Really Increase Productivity?" (Aug. 2004) could have been condensed into a single sentence: "In a thorough search of the literature, I found no studies that scientifically support the numerous claims that telecommuting substantially increases productivity." Or, in plain English: "Everybody says people working from home get lots more work done, but nobody has ever done a study that clearly shows it."

The rest of the article was speculation and innuendo: "authors claim that . . .," "potential gains may be limited by . . .," "if this energy is used to work longer hours . . . it may not be available to also increase the intensity of work." No new results were reported, and no new work was proposed, just the author's skeptical musings on the subject, dressed up with references and an equation from the Employment Standards Administration (part of the U.S. Department of Labor).

Westfall knows this, concluding: "If this were really happening [10% gains from telecommuting] companies that employ large numbers of knowledge workers would have adopted telecommuting on a large scale a long time ago . . ." Westfall has never worked in one of these large companies. The statement is pure wishful thinking.

Spike McLarty
Vashon, WA

Ralph D. Westfall's article (Aug. 2004) was a remarkable example of what is known as the Dixon Graphite Method. Given the desired conclusion, select the data to support it. In this case, the conclusions were that claims of telecommuting-related productivity increases are probably bogus and that if telecommuting is so great, why doesn't everyone do it?

Here, I address several of the flaws in the article. First, some productivity claims made for telecommuting seem excessive. Although they may be true for individual cases, it is difficult to believe that 100% productivity improvements would hold across large groups of telecommuters and for high average frequencies of telecommuting.

Second, the term "productivity" itself tends to be burdened with the baggage of time and motion studies—and the negative implications of the Hawthorne effect. Productivity tends to be viewed in terms of widgets produced per hour. In such cases, measuring productivity either directly in widgets or in hours worked is fairly straightforward.

For information workers, I prefer the term "effectiveness" as a way to focus on doing the right things effectively, as opposed to doing possibly wrong things efficiently. Measuring performance changes in information work can be extremely difficult on any absolute basis. Hence, in practice such evaluations are necessarily subjective to some extent, at least at the individual level.

Consequently, in my own work since the late 1980s, I have used control groups in evaluating telecommuter effectiveness. The members of the control groups are information workers who are as much like the telecommuters as possible, except they do not telecommute during the evaluation period (up to three years in order to test for Hawthorne effects).

The rule is that judging the performance of the telecommuters and nontelecommuters alike uses the same criteria. The reported effectiveness and other changes refer to the differences between the telecommuters and the control group. This allows one to more readily infer that the differential performance is associated with telecommuting, not some external factor.

I also elicit both self-evaluations and evaluations by direct supervisors. My experience is that both employee groups rate themselves higher than do their supervisors. I tend to use the supervisor evaluations in the interest of being conservative.

Presumably referring to my 1994 book Making Telecommuting Happen, Westfall wrote: "In his economic analyses of the project benefits, Nilles used a supposedly conservative 22% productivity gain based on the average of managers' subjective impressions of employee productivity gains." But he neglected to mention that the same set of supervisors also estimated the effectiveness changes of the control group members at 9%. Therefore, the effectiveness differential I used for impact analysis is 14% (including rounding errors), not the 22% Westfall used in his subsequent analysis. He clearly misrepresented my results and disparaged the data as subjective.

Westfall also wrote: "An alternate explanation [for large gains by one-day-per-week telecommuters] is that telecommuting also increases productivity on nontelecommuting days, but a rationale for such gains is not readily apparent." Although I have not specifically measured gains on nontelecommuting days, here's the rationale: Successful telecommuters necessarily become better organized, and their improved organizational habits carry over into the rest of their activities, sometimes to the amazement of family members.

The results of my research over three decades are that properly selected and trained telecommuters in a variety of jobs are, on average, 5% to 20% more effective than their nontelecommuting coworkers, provided that their supervisors are also properly trained.

Effectiveness improvements are but one element in the management decision to adopt—or reject—telecommuting. Companies often adopt telecommuting on the basis of facilities cost savings alone, betting that effectiveness will at least not suffer. Then there are the issues of employee retention and attracting new employees via telework arrangements. There are at least 29 million telecommuters in the U.S. today, constituting about 22% of the overall work force, according to my forecasts and a few surveys. There may be a comparable number of teleworkers in the rest of the world, primarily in Western Europe. This is hardly "a very telling indicator that telecommuting does not deliver, at least at the level of the whole organization."

For more, please also read my 1998 book Managing Telework, which Westfall also seems to have missed in his literature search.

Jack M. Nilles
Los Angeles

Author Responds:

I didn't structure my article as a disproof. I worded it deliberately, starting with the question mark in the title, to draw attention to the serious credibility issues associated with claims that telecommuting generates large productivity gains.

Productivity is a major issue in the U.S. economy. At least one source (www.clevelandfed.org/Research/Com2001/0901.htm) projected that an annual productivity increase of just a half percentage point would make a $1.2 trillion cumulative difference in the federal budget over 10 years. Over 50 years, this seemingly small increase would also cut in half the long-term cost of fixing the Social Security system. If it truly can generate large productivity improvements, then relatively small increases in telecommuting should be sufficient to achieve the half-point gain. However, for that to happen, researchers will need to confront, rather than deny, the credibility problems.

A good starting point would be for Nilles to allow peer reviews of the research methods, analyses, and data that led to the counterintuitive finding in 1994 of "37% of the work being accomplished in 18% to 23% of the work week." If his research practices can be validated, the next step would be for other researchers to attempt to replicate these findings. If they succeed, the final step would be additional research to demonstrate that such individual productivity gains are translatable into cost savings and/or other tangible benefits at the organizational level.

This would be an expensive research program, but in the context of the astronomical numbers in the national budget and Social Security projections, the funding to support it would be trivial.

Ralph D. Westfall
Pomona, CA



Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email:

©2004 ACM  0001-0782/04/1100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.


