
Communications of the ACM

Forum



I have found that LOC, as discussed in Phillip G. Armour's "The Business of Software" column (Mar. 2004), is only weakly correlated with software functionality and complexity and is not a useful metric for estimating system size. Systems with greater functionality generally have greater LOC counts, but the functionality-LOC relationship is overwhelmed by a number of other factors.

First, using frameworks to handle the plumbing of enterprise systems can greatly reduce LOC. I have seen such systems built in the same language with quality-adjusted line counts for similar functionality differing by a factor of five. However, the difference in LOC paled beside the difference in maintainability among systems.

Second, higher-quality systems can have line counts less than 10% of those of their lower-quality counterparts with the same functionality. Developers with inferior skills produce more LOC per function point than highly skilled developers. Similarly, highly skilled developers under rushed conditions produce more code than they would under conditions in which they have enough time to do a thorough job. And line counts decline over the lives of long-term projects through refactoring.

My point is not that LOC is totally divorced from functionality, but that code volume is a highly flawed metric. I have seen much greater accuracy based on use cases prepared according to a standardized format (as in the user stories of extreme programming methodologists), somewhat similar to, though less formal than, the function point approach.
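
As a rough sketch of the kind of use-case-weighted estimate I have in mind (the complexity categories, weights, and hours-per-point figure below are purely illustrative assumptions, not a calibrated model):

    # Illustrative only: weight use cases by complexity, then convert the
    # weighted total into an effort estimate. All numbers are assumptions.
    USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

    def estimate_effort(use_case_counts, hours_per_point=20):
        """use_case_counts maps a complexity category to a number of use cases."""
        points = sum(USE_CASE_WEIGHTS[category] * count
                     for category, count in use_case_counts.items())
        return points, points * hours_per_point

    points, hours = estimate_effort({"simple": 12, "average": 7, "complex": 3})
    print(f"{points} use-case points, roughly {hours} staff-hours")

The value lies less in the particular weights than in forcing the use cases into a standardized format before counting them.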

Michael J. Wax
San Marino, CA

I think Phillip G. Armour did not go far enough in moving away from LOC. Managers are not really interested in, as he said, the physical size of the finished product. Nor are they interested in a measurable "knowledge unit," which I bet is every bit as useless as LOC. Managers want to estimate the cost of development. The bottom line is still the bottom line.

LOC was once considered a good measure of development cost, a notion properly debunked by Armour, as in the following sentence near the end of his column: "The strongest independent correlating indicator was the number of customer-initiated change requests received in the first month of the project."

The number of such requests measures how well the requirements are understood. Experienced programmers typically have a better understanding of the problem domain and its tools, so their development costs are lower. Real-time systems are more difficult to understand and debug, so their development costs are higher.

Can we measure or estimate such an understanding a priori? The question is equivalent to the Halting Problem. In other words, No. We will continue to seek other metrics, including LOC and customer requests, and be frustrated by bad estimates.
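
For readers who want the one-step argument behind that claim, here is a minimal sketch of the standard diagonalization; the halts() oracle is hypothetical, and no total, always-correct version of it can be written:

    def halts(program, argument):
        """Hypothetical perfect oracle: True iff program(argument) would halt."""
        raise NotImplementedError("No total, always-correct version can exist.")

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about self-application.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop, so halt immediately

    # Asking halts(contrarian, contrarian) forces the oracle to be wrong either
    # way; by the same style of argument, no a priori measure of how well the
    # requirements are understood can be both total and always correct.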

Tom Pittman
Bolivar, MO

Phillip G. Armour's discussion of LOC resonated with my experience as a software project manager in India for the past nine years. Estimating and measuring metrics is always a challenge. Whether during the proposal-submission phase or during the project-inception/execution phase, the effort is always a learning experience; I constantly find myself wishing I had done the measuring sooner.

One lesson is that measurement must be customized to the project. LOC is such a generic metric, it simply doesn't make sense to compare it across projects industrywide or apply industrywide calculation charts. (Measuring LOC for different modules within a closely knit project does, however, provide guidance about relative size.) But each project definitely involves units of knowledge that can be measured, and the entire inception/execution phase is spent relating these units to LOC or some such size metric.

Madhu Banavati
Bangalore, India


Computers and Consciousness

In her "Security Watch" column (Mar. 2004), Rebecca T. Mercuri cited Stanley Jaki's claim that a machine cannot resort to "informal, self-reflecting intuitive steps" in reasoning because "This is precisely what a machine, being necessarily a purely formal system, cannot do, and this is why Gödel's theorem distinguishes in effect between self-conscious beings and inanimate objects." Roger Penrose and many others have said the same. However, I am not the only computer scientist to note that while it may be true for mathematical models of computation, it is certainly false for actual computers. Physical computers have access to randomness (such as the time between clicks on a Geiger counter) that takes them beyond the limitations of Gödel's theorem; see, for example, plato.stanford.edu/entries/turing-machine/.
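
As a trivial sketch of the distinction, with os.urandom standing in for a hardware noise source such as Geiger-counter timing:

    import os
    import random

    def formal_choice(seed=42):
        """Purely formal: the same seed yields the same 'decision' on every run."""
        return random.Random(seed).choice(["accept", "reject"])

    def physical_choice():
        """Depends on entropy gathered outside the program text (OS/hardware noise)."""
        return ["accept", "reject"][os.urandom(1)[0] & 1]

    print("formal:  ", formal_choice())    # identical on every run
    print("physical:", physical_choice())  # varies from run to run

The program text alone no longer determines the machine's behavior.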

Jef Raskin
Pacifica, CA

Author Responds:
Regarding these assertions, let me quote again from Stanley Jaki's 1969 book Brain, Mind, and Computers: "Turing merely shifted the issue when he rejected the bearing of Gödel's theorem on thinking machines on the ground that no one has proved as yet that the limitations implied in Gödel's theorem do not apply to man. But no nimble footwork can obliterate the fact that the point at issue is not to ascertain the limits of man's reasoning ability but rather to recognize the full weight of a conclusion which states that a machine cannot even in principle be constructed that could do what man can do, namely to secure his own consistency without relying on something extraneous."

I spent a number of years researching digital audio before wandering into the computer security field. I can recall numerous instances when it appeared that randomness might be applied to making, say, a computer rendition of music or an automatically generated composition seem less stilted or robotic. As it turns out, mere randomness cannot account for the nuances applied by the human mind in such processes. So the source and evolution of such knowledge remain distinctly human (such as through the digitization of recorded performances to simulate the observation-imitation process in musical training), particularly when rule-based codification fails. See also Ramon Lopez de Mantaras and Josep Lluis Arcos's "AI and Music: From Composition to Expressive Performance" in AI Magazine 23, 3 (Fall 2002).
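
As a toy sketch of the kind of randomness-based "humanization" I am describing (the note format and jitter ranges are arbitrary assumptions), the noise has no connection to phrase structure, which is exactly why the result still sounds mechanical:

    import random

    def humanize(notes, timing_jitter=0.02, velocity_jitter=8):
        """notes is a list of (onset_seconds, midi_pitch, velocity) tuples."""
        jittered = []
        for onset, pitch, velocity in notes:
            onset += random.uniform(-timing_jitter, timing_jitter)
            velocity = max(1, min(127, velocity + random.randint(-velocity_jitter,
                                                                 velocity_jitter)))
            jittered.append((round(onset, 3), pitch, velocity))
        return jittered

    melody = [(0.0, 60, 80), (0.5, 62, 80), (1.0, 64, 80), (1.5, 65, 80)]
    print(humanize(melody))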

Rebecca T. Mercuri
Cambridge, MA


Computers, ACM, and Everyday Existence

Four items in the Dec. 2003 issue caught my attention because they illustrated how technology affects computer users beyond the ACM membership.

Halina Kaminski's Forum comment "Make Every Vote Count" suggested an interesting alternative to printed voting receipts. I am puzzled by the enthusiasm of ACM e-voting experts for printed receipts. If you don't trust the machine to record, report, and tally the vote correctly, how can you trust the printed receipts?

Moreover, if the system produces a single copy that is turned in for a possible hand recount, voters must verify their votes before leaving the polling place—something only the most dedicated among them are likely to do. If two copies are printed, what happens if voters report errors after the fact? And unless the printed receipts are suitable for automated counting, any recount will be subject to the kind of errors that crop up in manual counts.

Hal Berghel's Digital Village column "Malware Month" started me thinking about whether the homogeneity of today's software might be at least partly responsible for the success of attacks. Why not implement diversity by having each individually installed copy of the software modify itself by moving parts around in memory with encrypted pointers? Internally encrypting key file extensions also seems to be an obvious move.
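
A minimal sketch of the encrypted-pointer idea, with a per-installation key and a dispatch table standing in for real memory layout (actual mechanisms such as address-space layout randomization work at the OS and hardware level):

    import secrets

    INSTALL_KEY = secrets.randbits(32)   # chosen once per installed copy

    def mangle(handle):
        return handle ^ INSTALL_KEY

    # Internal routines are reachable only through mangled handles, so two
    # installations of the "same" program present different-looking tables
    # to an attacker probing for a fixed layout.
    _dispatch = {}

    def register(handle, func):
        _dispatch[mangle(handle)] = func

    def call(handle, *args):
        return _dispatch[mangle(handle)](*args)

    register(1, lambda x: x + 1)
    print(call(1, 41))   # 42 on every install, but the table keys differ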

William E. Spangler et al.'s "Using Data Mining to Profile TV Viewers" (a truly horrifying prospect) made me wonder whether most PVR subscribers (including those getting their programming from digital cable and satellite) realize how extensively their viewing habits can be monitored. The USA Patriot Act II seems likely to add such data to its list of easily accessible information.

The solution is simple. At least one provider I know of does not require an upstream connection, except for ordering pay-per-view programming. If more consumers were aware of the monitoring issue, they would vote with their feet by using services that do not require a monitoring component.

Finally, Lauren Weinstein's Inside Risks column "The Devil You Know" should be required reading for all computer users. I'd now like to suggest an alternative approach called safe computing that involves the following steps: never upgrade or install operating system patches; never use broadband, with its always-on static IP addresses; use only non-Microsoft browsers and email clients; almost never open email attachments; never have Word permanently installed on your computer; never depend on virus-detection software; surf only with Java, Javascript, images, and cookies disabled, turning them on selectively as needed and only at trusted sites; and almost never download software.

Great issue!

Bob Ellis
Fountain Hills, AZ


Don't Dismiss Computers in the Classroom

Simone Santini (Forum, Dec. 2003) should have consulted ACM's eLearn Magazine to find out how technical and educational professionals seek effective and purposeful ways to use computer technology to extend and enhance classroom teaching—something that has been happening in the U.S. for the past 40 years. The argument that the technical literature has nothing to say about whether education needs computers is spurious and misplaced, akin to suggesting that primary school teachers should have told chip designers to implement multi-pipelined chip architectures to help preschoolers make better use of spelling programs.

Since the early 1970s, MIT educational and technical researchers have sought to produce educational programming languages to allow students to pursue cognitive modeling and other tasks to create deeper models of their understanding—something so far ahead of prevailing classroom practice when it began that it was widely misunderstood. These early adopters saw the potential of computers to transform aspects of education. At the same time, Seymour Papert raised concerns about technocentrism—a naive belief in the transformative power of computers themselves—emphasizing the importance of teachers and pedagogy.

Today's ubiquitous use of the Internet helps teachers and students alike overcome parochial, often inward-looking, curricula and engage the educational community worldwide, freely sharing ideas, knowledge, and values—the core features of a free and democratic society.

While there is no doubt that much computer-based teaching is poorly conceived and executed, so is most face-to-face teaching. But the use of computers requires that teachers create real artifacts and examples to make explicit the things often glossed over in face-to-face situations. Romantic notions of good teachers and charismatic pedagogy are not supported by educational research.

As a tool, technology can help teachers mediate their interactions with students and course material. It's a pity that efforts to implement teachers' skills and knowledge are based on what seems to be widespread e-ignorance about the knowledge, skills, attitudes, and experience students need to participate in the knowledge economy.

Paul Nicholson
Melbourne, Australia



Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email: crawfordd@acm.org.


©2004 ACM  0002-0782/04/0500  $5.00



 
