The conclusion by Peter J. Denning and Rick Hayes-Roth in their "The Profession of IT" column ("Decision Making in Very Large Networks," Nov. 2006) should have been that hastily formed networks are useful but only when information flow (as opposed to raw data flow) is high and coordination minimal. In any other context (such as the case they cited of the U.S. Federal Emergency Management Agency), these networks quickly become impractical.
Given that any such network is composed of entities that acknowledge no higher authority than themselves and is run by a board of senior members of those entities (suggested in the column as the solution to level-4 network leadership), the following factors would likely influence a network's performance and stability:
Finally, the column did not discuss what might have happened if FEMA had the authority over regional authorities (the military approach) to command their emergency response forces or even bypass obstructive regional leaders, possibly jailing them if they failed to cooperate.
Michael J. Lewchuk
Lewchuk outlines a number of interesting issues that were outside the scope of the column. We aimed to challenge the common assumption that a command-and-control hierarchy could make wise and timely decisions in a very large networked federation.
In most disaster and emergency situations, the responding parties have no choice but to join a hastily formed network. The question for them is not whether the network is useful but how to make it effective.
A mature networked organization figures out what to communicate and how to coordinate. Its members then establish practices that accomplish this work, using continuous improvement to periodically reexamine their processes, structures, and communications, adjusting them as needed.
Hastily formed networks are by nature "immature," however, lacking time to work out such practices. In the ones established for disaster relief, the parties have a good deal of difficulty accepting a higher authority and must therefore employ cooperative structures rather than command-and-control structures. Given that they are not used to cooperative structures, it's likely they will be challenged with numerous breakdowns, like those cited by Lewchuk.
One important question now under study is how relief agencies might prepare in advance so that, if and when they must come together in a network, they are ready to cooperate on the design that best fits the situation they face. No one design works for all emergencies.
In the case of the 2005 Hurricane Katrina disaster, FEMA couldn't have been successful as long as it embraced command-and-control thinking and structures. Too many entities needed to cooperate but didn't accept FEMA's authority. Threatening to jail local officials who disagreed with FEMA bureaucrats until they did as instructed would only have worsened the disaster.
Peter J. Denning
Casey G. Cegielski and Dianne J. Hall provided no evidence for the conclusions in their article "What Makes a Good Programmer?" (Oct. 2006). Where are the numbers? What was the range of productivity they measured? What was the strength of correlation of this dependent variable with the "independent variables"?
We omitted key statistical metrics in the interest of brevity and thus present some of them here. In our experiment, we measured productivity through written exams, lab exercises and exams, and a final comprehensive program. Performance correlated in the following ways:
When used as a set of predictors in regression analysis, our betas were:
We welcome further direct inquiries about the study.
Casey G. Cegielski
Dianne J. Hall
I was encouraged by Ray Giguette's article "Building Objects Out of Plato: Applying Philosophy, Symbolism, and Analogy to Software Design" (Oct. 2006) but also felt a little shortchanged in that his initial thesis spanned only the first three paragraphs. Plato and object-orientation share many other parallels; for example, the Third Man Argument, often cited as a criticism of the notion of forms, alludes to the process of object-oriented analysis. This is where, given a hierarchy of forms, each "more perfect" than the one preceding it, some debate might ensue as to which attribute belongs to which form.
With respect to Giguette, perhaps the literary link was better modeled by the early 20th century Swiss linguist Ferdinand de Saussure in his dyadic model of signs. A signifier, he said (he was primarily interested in writing and speech), stands for a signified concept, or mental image.
The work of many other philosophers is also not too far removed from computer science. Wittgenstein is an example; his Tractatus Logico-Philosophicus, regarded as a difficult work when read from the perspective of information systems engineering, may also be read as a bullet-point definition of an object-based information system. Likewise, the AI "project" is a philosophical endeavor; can we understand the human condition if we mimic ourselves?
The problem with, and possibly the power of, these philosophical approaches is in their dualistic nature. They are rooted in, for example, the descriptive definition of object-orientation. A class "contains" members; a derived class "inherits from" base classes; and at the lowest level, a variable "represents" a value. Though some of these tuples may be transitive, everything conforms to some given reality.
Some modern IT systems attempt to break out of this mold; an example is XML technologies containing embedded type systems and possibly abstract classes with runtime-bound methods. These systems follow the philosophical school of semiotics, inspired by the American philosopher C.S. Peirce, perhaps better known as the father of pragmatism. He employed a triadic model of signs, in which the first two members of the sign denote the sign vehicle and the referred object, and the third denotes the resultant effect on the perceiver of the sign, modeling a behavioral context. Triadic signs are also useful in literary analysis.
Indeed, this may be a better model of the "levels of interpretation," as noted by Giguette, that have also been used in a variety of scientific areas (such as the behavioralism of Charles Morris in the 1930s and 1940s). This linguistic approach to computing is still alive in the language action perspective (see the special section "Two Decades of the Language-Action Perspective" in Communications, May 2006) and organizational semiotics.
Perhaps IT should be taught to philosophy majors.
As researchers, we must work toward producing UbiComp, or ubiquitous computing, technologies the public will appreciate. Any technology can be used with good or bad intentions. TV has, for example, brought us closer to events yet is also a tool of propaganda. Cars help us travel yet kill tens of thousands. Nuclear energy promises pollution-free electricity yet is also a symbol of power and hegemony.
Researchers need to identify UbiComp applications whose promise outweighs their own inevitably bad uses. Invasion of privacy, lack of sustainability, a widening digital divide, and possible oppression by governments all represent bad uses of UbiComp.
What are the potentially good uses of the UbiComp technology we design? I don't mean the enabling technology itself but concepts for the general public. For example, can a particular technology help stop crime, fight disease, build sustainable societies, reduce CO2 emissions, improve education, or prevent wars?
Some may argue that technologists should limit themselves to producing technology, leaving judgment and critical thinking to others. In fact, it is our duty to embed such thinking into our research. Society will ultimately decide on the good and bad points of UbiComp in terms of its applications, not its properties. We need to work toward that judgment day today.
©2007 ACM 0001-0782/07/0200 $5.00