The public's disregard for privacy seals will lead them to a more accurate understanding of the privacy of their data than they could get from the careful study that Trevor Moores's article "Do Consumers Understand the Role of Privacy Seals in E-Commerce?" (Mar. 2005) presumes they should undertake.
Even the strictest privacy policy is of no use unless it is followed strictly; in today's Orwellian world, that is impossible. The USA PAT RIOT Act (since the name of the law is an acronym, where to divide it with spaces is arbitrary, and I prefer it this way) entitles the police to collect nearly all the information a business may hold about you, even without a court order. All information a Web site collects is thus immediately available to the U.S. Attorney General and his colleagues. If the Attorney General were a champion of human rights, perhaps we could be confident such power would not be abused.
The result is that no privacy seal on a U.S. organization's Web site is worth the pixels in which it is displayed. Now, more than ever, the only way to prevent the misuse of personal data is not to collect it in the first place.
Richard Stallman
Cambridge, MA
In his "Viewpoint" ("On the Nature of Computing," Feb. 2005), Jon Crowcroft proposed that we consider computing's innate agenda to be the virtual, rather than the natural or the artificial. The terms "natural" and "artificial" refer roughly to science and engineering. Crowcroft bundled two claims: computer science is fundamentally not about the natural or the artificial; and computer science is distinguished by its central concern for the virtual.
As for the first, computer science's agenda is, and always has been, information processes. Some are natural and some artificial. We study and imitate the natural; we design and test the artificial. Many of the artificial are inspired by their natural counterparts.
Many programs and computer systems are means to study, simulate, and replace natural information processes. For example, cognitive scientists hypothesize that intelligence is the result of information processes in our brains. Experts in human-computer interaction study the interactions between artificial information processes in computers and natural processes in humans. Bioinformaticians study the transcription of DNA as an information process that can be managed and repaired. Computational scientists study natural phenomena with computer simulations. Computer systems people exploit the principle of locality, a direct manifestation of human problem-solving processes.
As for the second claim, computer scientists use "virtual" to mean abstract, simulated, incorporeal, or imagined. A virtual object is an abstraction; a virtual memory is a simulation; a virtual community is formed without physical contact; and a video game depicts an imaginary world. Much programming practice consists of designing abstractions—objects and functions—to solve problems, then letting computers execute the abstractions to produce real action.
What is distinctive about abstraction? Engineers, scientists, organizational leaders, and game makers do it all the time. Abstraction is an important principle of computing, but it is not the defining one.
Peter Denning
Monterey, CA
Author Responds:
Answering Peter Denning's second point first, I do not disagree that scientists and engineers abstract. What computer science does, however, is abstract abstraction itself: it takes the ad hoc techniques that have emerged over thousands of years of science and engineering and promotes them into a framework of formalisms and methodologies for abstraction, or, in my terms, virtualization.
On the first point, I disagree that nature contains information processing; nature contains processes. Our community has only recently begun to place structures of semantic interpretation on them, abstracting them to information processing. Prior to that, we made up our own rules for information processing, moving from the concrete toward the virtual.
I am generally trying to move beyond reductionism, but all systems of thought are connected. Considering the relationship of mathematics to both engineering and the sciences makes clear that computing is not an island community that found itself beamed down to Earth from nowhere.
Jon Crowcroft
Cambridge, U.K.
The Forum comment "Internet Voting Not Impossible" (Feb. 2005) by Wolter Pieters of The Netherlands and Joseph R. Kiniry of Ireland extolled the properties of an Internet voting system called RIES, or Rijnland Internet Election System, developed in The Netherlands by the public water management authority of Rijnland together with Mullpon. They claimed that its use of hash functions provides voter verification and security against attack.
I was under the impression that Rebecca Mercuri and others in the field had demonstrated theoretically that the combination of voter anonymity and vote security was impossible in Internet voting.
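To make the tension concrete, consider a minimal sketch of hash-based receipt verification in Python. It is my own generic illustration, with made-up names, not the actual RIES design: the published receipt that lets a voter verify a ballot also lets anyone who learns that voter's secret recover the vote.

    import hashlib

    def receipt(voter_secret: bytes, vote: bytes) -> str:
        # Receipt = hash of a personal secret combined with the cast vote.
        return hashlib.sha256(voter_secret + b"|" + vote).hexdigest()

    # The authority publishes every receipt after the election.
    published = {
        receipt(b"alice-key", b"candidate-A"),
        receipt(b"bob-key", b"candidate-B"),
    }

    # Verification: a voter recomputes the receipt and checks inclusion.
    assert receipt(b"alice-key", b"candidate-A") in published

    # The rub: anyone holding Alice's secret can try every candidate
    # and recover her vote, trading verifiability against anonymity.
    for candidate in (b"candidate-A", b"candidate-B"):
        if receipt(b"alice-key", candidate) in published:
            print("Alice voted for", candidate.decode())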
Herbert Kanner
Palo Alto, CA
In his "Practical Programmer" column "The First Business Application: A Significant Milestone in Software History" (Mar. 2005), Robert L. Glass was right to say we should celebrate software milestones, as well as their hardware counterparts, but citing a commercial application program for LEO (the computer devised by J. Lyons, first used in 1954) was a strange choice. Business-oriented programs were surely running on Univac I as early as 1951. Meanwhile, Ada Augusta Lovelace's reputed first scientific program is also not a good choice, because Charles Babbage's Analytical Engine (not, as in the column, "Inference Engine," a fairly recent AI term) was never completed. I submit that the software milestones most worthy of bicentennial celebration are those associated with tools; without them production of application programs would be considerably more difficult.
Among the many upcoming milestones are Fortran in 2007, Algol in 2010, Cobol in 2011, and Unix in 2021. My book Milestones of Computer Science and Information Technology (Greenwood Press, Westport, CT, 2003) covered more than 600 of them, at least half relating to items other than hardware. Marking the dates of their creation would keep us busy celebrating for decades to come.
Edwin D. Reilly, Jr.
Albany, NY
I want to thank Robert L. Glass (Mar. 2005) for reminding the mainly U.S. readership of Communications that the world beyond North America is indeed a big place.
I joined the British computer company ICL in 1970, after almost six years in the U.S. IT industry. Over the next 30+ years, I met and worked with many of the people who had been involved in the LEO project(s). (For context, LEO stood for Lyons Electronic Office.) We can only wonder what would have happened to the notion of application software if J. Lyons had patented the term "Office."
Thanks, too, for the smile it put on my face.
Len Cohen
London, England
I wish I owned the rights and had the opportunity to put a copy of Phillip G. Armour's "The Business of Software" column ("Project Portfolios: Organizational Management of Risk," Mar. 2005) on the desk of every software engineer and manager. It should be required introductory reading in every software engineering course, and probably in every MBA course, as well.
Larry Brunelle
Allen, TX
I like that Phillip G. Armour (Mar. 2005) argued that it is irrational for people to be surprised when high-risk projects fail. Such surprises seem to be a global phenomenon in which people apparently expect miracles at no risk, then sue if the risk actually materializes.
Management (and people in general) will not change their attitudes soon, leaving us with the need to reduce project risk to improve their chances of success. As Armour pointed out, the value of a risk calculation comes from being able to make better decisions. The political aspects of risky projects can be dealt with by being professional in recognizing and presenting those risks to management throughout the project.
I also liked Armour's point that businesses should not undertake high-risk projects that lack a corresponding high level of return. That return is the motivation for performing risk calculations, though there is also a need to somehow measure the potential benefit of the systems being developed.
I was, however, irritated by the numbers Armour used. I think he got them wrong. Surely a 0.000001% chance of working properly is not a probability of one in 100,000; it is one in 100,000,000.
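Spelled out, the conversion is simple arithmetic: 0.000001% = 0.000001/100 = 10^-8, which is one in 100,000,000, not one in 100,000.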
The column mixed up the ways of expressing a probability, and the numbers did not match up. If I'm wrong, please tell me. If there was a mistake, I want to thank you anyway; it made me reread the column that much more thoroughly.
Philip Burgess
Allschwil, Switzerland
The nice discussion by Christian Collberg and Stephen Kobourov in "Self-Plagiarism in Computer Science" (Apr. 2005) could be usefully extended. I myself discussed the effects of digital technologies on this practice in my essay "Crossing the Divide" (ACM Transactions on Computer-Human Interaction, Mar. 2004), drawing from my experience as editor-in-chief of that publication from 1997 to 2003. I found that both ACM and IEEE used to permit (and practice) verbatim reproduction of refereed conference papers in journals and transactions; conference proceedings were either not widely available or not considered archival. Today, however, because conference papers are archived in online digital libraries, republication is discouraged in policy; even partial overlap is harshly criticized by reviewers. In many fields, including computer science outside the U.S., proceedings are less likely to be archived. Republishing in journals is thus still required in some places in order to stake out a permanent place in the record.
Collberg and Kobourov noted that a cardinal indication of malfeasance is failure to cite similar previously published work. This implicit guidance can be turned into the following advice for authors: When in doubt, inquire with the program chair or editor, explaining the overlap and the audiences involved. When the overlap is limited, self-plagiarism may well be considered acceptable. For example, the American Association for Artificial Intelligence (www.aaai.org) has formally permitted self-plagiarism when a distinctly different audience is deemed to be involved.
Jonathan Grudin
Redmond, WA