
An Empirical Investigation of the Effectiveness of Systems Modeling and Verification Tools


Information system (IS) development is generally guided by some variant of systematic development procedures called methodologies (for example, [1]). The specific guidance varies widely: there are more than 1,000 methodologies, each with its own set of best practices and tools to guide development (for example, [4, 6, 9]). These best practices and tools are intended to minimize project failures. Nonetheless, system failure rates continue to be well above 50%, and perhaps as high as 85% (for example, [2, 8]).

A review of the literature reveals that poor specification of user requirements is a major reason for project failures (for example, [7]). Yet most methodologies emphasize the importance of accurate and complete specifications for design. That is, methodologies recognize that design cannot be completed properly until the designers understand the system and its needs, and can translate those needs into technical requirements [5]. In fact, most methodologies begin with an assessment of users' needs. Further, since most methodologies recognize that changes made early in the development process are faster, cheaper, and better than those made later, they mandate that users evaluate assumptions and requirements before systems design and coding begin [10].

Since methodologies exist to extract and translate system assumptions and requirements effectively, and tools exist to model those needs for later use, the question remains why we are not experiencing a reduction in failures. The profession must understand the inadequacies of systems modeling and verification tools in order to address the system failure problem. Perhaps the problem lies in how we ask the question about the usefulness of the tools. Generally, IS professionals evaluate software development processes and tools in terms of how well they help us proceed to the subsequent step in the prescribed methodology. Perhaps the question should be reframed to shift the analysis from the tools' role in development to their role in review.

We posit that some system failures occur because users do not adequately understand the system for which we seek approval, since the modeling tools do not adequately convey our understanding of the system to them. Hence, while we perceive we are reviewing current and proposed system activity by seeking sign-off, and the users perceive they are providing good feedback, the modeling tools may in fact be obfuscating the discussion and thus the definition of requirements. If that is true, then it does not matter how well the modeling tools contribute to the preparation of requirements for subsequent stages, because they have failed in their contribution to the verification of the system's requirements. We posit that the effectiveness of the tools associated with methodologies should be evaluated by the extent to which they help users perceive what analysts do and do not understand about the desired system. This, in turn, may be related to the extent to which the users are able, or have been trained, to read and interpret the requirements modeled using the tools.


This article examines one aspect of methodologies—systems modeling tools—and its potential contribution to system failures. We examined the effectiveness of two modeling and verification tools from two conceptually different methodologies in contemporary use: data flow diagrams and use case diagrams.


Theoretical Development

Context. System development methodologies may be conceptually different; they may be process-oriented or object-oriented.1 Whereas process-oriented approaches recommend decomposition into processes, object-oriented approaches recommend decomposition into objects [10]. A process is a sequence of operations performed on data, whereas an object is a repository of both data and processes. Despite such differences, both approaches consider systems modeling and verification important components of systems development [5], and both explicitly provide tools that can be used for modeling and verifying requirements. Process-oriented approaches recommend data flow diagrams (DFD) whereas object-oriented approaches recommend use case diagrams (UCD). These two tools are generally used by their proponents both for modeling requirements and for communicating and verifying those requirements with users [3, 10].
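To make the distinction concrete, the following sketch (ours, not the article's; the names enroll, Student, and Section are invented for illustration) shows the same registration step decomposed both ways: first as a process operating on data that flows through it, then as objects that bundle data with the processes that act on it.

```python
# Hypothetical sketch: one registration step decomposed two ways.

# Process-oriented view: a process is a sequence of operations performed on
# data that flows from step to step, much as a DFD models it.
def enroll(student: dict, section: dict) -> dict:
    if section["seats"] <= 0:                   # operate on the incoming data
        raise ValueError("section is full")
    section["seats"] -= 1                       # the data store is updated
    student["schedule"].append(section["id"])   # data flows into the schedule
    return student                              # and onward to the next process

# Object-oriented view: an object is a repository of both data and the
# processes that act on it; callers request behavior rather than see the data.
class Student:
    def __init__(self) -> None:
        self._schedule: list[str] = []          # data encapsulated in the object

    def add_to_schedule(self, section_id: str) -> None:
        self._schedule.append(section_id)

class Section:
    def __init__(self, section_id: str, seats: int) -> None:
        self._id, self._seats = section_id, seats

    def reserve_seat_for(self, student: Student) -> None:
        if self._seats <= 0:
            raise ValueError("section is full")
        self._seats -= 1
        student.add_to_schedule(self._id)
```

In the first form, the data and every operation on it are visible to the reader; in the second, the data and much of the behavior are hidden behind the objects' interfaces, a contrast that foreshadows the distinction between information manifestation and encapsulation drawn next.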

The tools exhibit considerable differences in content, just like the methodologies that prescribe them: DFDs, for instance, organize content in terms of external entities, processes, data flows, and data stores, whereas UCDs organize content in terms of actors, use cases, links, and interactions. This is consistent with the underlying philosophies of the respective methodologies. However, the differences are more fundamental than the contents and how they are organized. DFDs, for instance, explicitly convey the direction of data flows and the actual data that flows between processes or external entities. UCDs, however, encapsulate such critical information within their objects and do not explicitly convey it to users. Thus DFDs may be considered a case of "information manifestation" whereas UCDs may be considered a case of "information encapsulation." As such, DFDs, more than UCDs, are expected to support users in verifying system requirements.

The differences between the diagrams may be more significant for clients, system stakeholders, and others inexperienced in systems development. These individuals, typically not trained to read and interpret systems modeling tools, may be able to interpret DFDs more effectively than UCDs. For these novice users, UCDs are not likely to be effective, since they encapsulate information within objects in ways that may not be readily apparent or appreciable without proper training; clients and stakeholders are unlikely to resolve such hidden information or its role on their own. DFDs, by contrast, explicitly present information that can be readily appreciated. Moreover, in the absence of training, the mental models of novices are more likely to correspond to a "sequence of activities" (as in DFDs) than to "interactions between objects" (as in UCDs). Hence, novice users are likely to be more effective with DFDs than UCDs.

Individuals trained in preparing and reading the associated documentation are also likely to be more effective with DFDs than UCDs, but for very different reasons. For trained users, UCDs pose no real problems, since such users can readily resolve encapsulated information by drawing on their training; they can also shift more readily between the different mental models represented by DFDs and UCDs. Nonetheless, they are likely to be more effective with DFDs, because DFDs provide rich and abundant information about the system, and the uncertainty resulting from information encapsulation in UCDs is not an issue with DFDs. Thus, trained users are likely to be more effective with DFDs than UCDs.


Research Method

Our study was conducted in a large public university. Students enrolled in various classes offered by the business school were solicited to participate. This was done by contacting the instructors of nine sections and obtaining permission to involve their students. Four sections were introductory classes with representation from virtually all disciplines of the business school. The remaining five sections were advanced MIS classes comprising students trained in both modeling tools.2 Participation in the study was voluntary; however, the instructors awarded credit to students who participated. Neither the students nor the instructors knew the research objectives of the study. A total of 175 students agreed to participate.

Diagrams. The university's student registration system was represented both as a DFD and as a UCD.3 Since the participants were students, the registration system provided a realistic and familiar context for them. Prior to creating the diagrams, the researchers gathered pertinent information from various stakeholders in the university. To validate the two diagrams, the researchers distributed them to five experts knowledgeable about both DFDs and UCDs and about registration at the university. The experts were asked to evaluate the appropriateness of the information represented in each diagram and the comparability of the two diagrams with regard to the student registration system; they were also provided the information about the system gathered from the different stakeholders. Each of the five experts agreed that the diagrams were appropriate and that each represented the actual registration system. The experts suggested minor changes, which were incorporated into the diagrams.

Data Collection. Each student was given both diagrams and asked to write a narrative of what they understood from each. To minimize order effects, the order in which the two diagrams were seen was controlled: the first group, representing approximately half the participants, received the DFD first and, when finished, the UCD; the order was reversed for the other half. The students were given a week to complete each narrative. A total of 140 DFD narratives and 129 UCD narratives were obtained, netting 117 matched sets of DFD and UCD narratives.

Coding. To evaluate the extent to which the tools effectively aided requirements verification, the narratives were coded for the specific information conveyed by the diagrams. We created a common coding sheet for both DFDs and UCDs so the narratives could be coded on similar content and information using five dimensions: external entities interfacing with the registration system (ACTORS), functions constituting the registration system (PROCESSES), interactions between actors and the registration system (INVOLVEMENTS), information flows for each function within the registration system (DIRECTIONS), and actual information transmitted within the registration system (SPECIFICITIES). In total, there were 11 actors, eight processes, 24 involvements, and 35 information flows (that is, 35 DIRECTIONS and 35 SPECIFICITIES) in our experimental system.
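As an illustration only, the coding sheet can be thought of as a tally per dimension, capped by the counts present in the experimental system; the structure below is our reconstruction, not the instrument actually used, though the dimension names and maxima come from the study.

```python
# Illustrative reconstruction of the common coding sheet; the data structure is assumed.
MAX_COUNTS = {
    "ACTORS": 11,         # external entities interfacing with the system
    "PROCESSES": 8,       # functions constituting the registration system
    "INVOLVEMENTS": 24,   # interactions between actors and the system
    "DIRECTIONS": 35,     # information flows for each function
    "SPECIFICITIES": 35,  # actual information transmitted
}

def score_narrative(items_identified: dict[str, int]) -> dict[str, int]:
    """Tally a coder's counts per dimension, capped at the system maxima."""
    return {dim: min(items_identified.get(dim, 0), cap)
            for dim, cap in MAX_COUNTS.items()}

# Example: a narrative in which a coder identified 7 actors and 5 processes.
print(score_narrative({"ACTORS": 7, "PROCESSES": 5}))
```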

Two coders, both doctoral students, independently coded all narratives after being trained to extract information on the five dimensions identified here. Coding was accomplished in stages to control for the quality and reliability of the coding. Initially, both coders were given 10 narratives, and their coding was checked for consistency; inter-rater reliability was high, and disagreements were discussed and resolved. This process was followed for both DFD and UCD narratives. The codes assigned by the two coders were then averaged to obtain the rating for each student.
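A minimal sketch of this reconciliation step, assuming each coder produces one count per dimension per narrative, follows; the article does not report which reliability statistic was used, so the percent-agreement check is illustrative only, and the function names are ours.

```python
from statistics import mean

def average_coders(coder_a: dict[str, float], coder_b: dict[str, float]) -> dict[str, float]:
    """Average the two coders' counts to obtain the participant's rating."""
    return {dim: mean((coder_a[dim], coder_b[dim])) for dim in coder_a}

def percent_agreement(coder_a: dict[str, float], coder_b: dict[str, float]) -> float:
    """One simple inter-rater check: the share of dimensions coded identically."""
    matches = sum(coder_a[dim] == coder_b[dim] for dim in coder_a)
    return matches / len(coder_a)

a = {"ACTORS": 7, "PROCESSES": 5, "INVOLVEMENTS": 10, "DIRECTIONS": 12, "SPECIFICITIES": 9}
b = {"ACTORS": 7, "PROCESSES": 6, "INVOLVEMENTS": 10, "DIRECTIONS": 12, "SPECIFICITIES": 9}
print(average_coders(a, b))     # PROCESSES averages to 5.5; the other dimensions match
print(percent_agreement(a, b))  # 0.8
```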

Data Analysis. The effectiveness of the tools was evaluated based on the extent to which participants were able to extract information on the five dimensions noted earlier.

As a starting point, we compared the performance of the whole sample in reading the diagrams. For each participant, we first computed the differences between the DFD and UCD ratings on all five dimensions; all differences were computed as DFD minus UCD. We then conducted pairwise t-tests to evaluate the extent to which the tools effectively aided requirements verification. We found the pairwise differences for all dimensions, except ACTORS, were significant (p < .05), as shown in Table 1. Overall, DFDs were more effective for requirements verification than UCDs.
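This whole-sample analysis can be sketched as follows; the ratings below are synthetic placeholders so the snippet runs (they are not the study's data), and only one of the five dimensions is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 117  # matched sets of DFD and UCD narratives

# Synthetic placeholder ratings for one dimension; the real analysis used
# the coded counts on all five dimensions for each participant.
dfd = {"PROCESSES": rng.normal(6.0, 1.5, n)}
ucd = {"PROCESSES": rng.normal(5.2, 1.5, n)}

for dim in dfd:
    diff = dfd[dim] - ucd[dim]                  # differences computed as DFD minus UCD
    t, p = stats.ttest_rel(dfd[dim], ucd[dim])  # paired t-test across participants
    print(f"{dim}: mean diff = {diff.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```

The per-group analyses reported below follow the same pattern, with the sample first split into the introductory (novice) and advanced (trained) subsets.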

For subsequent analyses, we separated the dataset into two groups. The first group comprised students from the introductory classes, who were considered novices, and therefore similar to clients and stakeholders in a development project. The second group comprised students from the advanced MIS classes, who were considered more similar to the analysts/designers in a project team. Then we conducted pairwise t-tests by group to evaluate the extent to which the tools effectively aided requirements verification for the two groups.

No pairwise difference was significant for the introductory subjects. That is, for users similar to clients and stakeholders not trained in methodologies, neither the DFDs nor UCDs were adequate for requirements verification. These results are shown in Table 2.

For trained users, we found the pairwise differences for all dimensions, except ACTORS, were significant (p < .05). Since understanding the actors of a system represents the lowest level of systems modeling, this implies the users found the DFD superior for representing anything other than the most basic of issues. Thus, for users trained in methodologies, DFDs were more effective than UCDs for requirements verification. These results are shown in Table 3.


Conclusion

This analysis showed mixed results. Overall, subjects were able to glean more information from DFDs than from UCDs. That is, subjects were able to understand more about the designers' understanding of the student registration system using the DFD than the UCD. Hence, as an analysis tool, the DFD seems superior to the UCD. In particular, since DFDs provide more information, users are more likely to identify a misunderstanding of current processes or a misspecification of user requirements in the proposed system. The DFD allows better verification of the models. This suggests the tool should be used even if it does not port the information well to the next stage (and thus into the subsequent tool) prescribed by the methodology chosen for system development. While we did not test this question, we believe the DFD represents a more natural thought process for those who are not expert modelers than does the UCD, and thus is a more reliable tool to use with clients.

More surprising was that the novice users' responses to the two tools did not differ more. We anticipated that novice users would have trouble with UCDs because of "information encapsulation." However, the mean values of the DFD and UCD ratings for novice users did not differ on any of the five dimensions, suggesting that novice users find both diagrams equally problematic. This may be attributed to two reasons. First, novice users may have been overwhelmed by the "information manifestation" in DFDs. When queried after the analysis, users did express a concern that the DFD was somewhat overwhelming, making it difficult to focus on specific details. This suggests the need for greater layering of DFDs before sharing them with stakeholder groups.

Second, the problem might have been the absence of formal training to read and interpret DFDs adequately. While users did not identify this as a problem when queried, it is consistent with their being overwhelmed by the process. This, together with their similar inability to read the UCDs, suggests that simply sharing the results with users as diagrams, without adequate discussion and explanation, is insufficient; such a review does not lead to adequate verification that the requirements are stated properly. Rather, it is critical that users receive training on the diagrams and/or be walked through any particular diagram to ensure they understand the implications of what they see (or do not see). Analysts must be diligent in walking through the entire diagram with users to ensure they understand and absorb it completely; unfortunately, there is no test analysts can apply to otherwise ensure that understanding.

Furthermore, the mean values of the DFD and UCD ratings for trained users, while significantly different, were still far below the maximum possible values on at least three dimensions (namely, INVOLVEMENTS, DIRECTIONS, and SPECIFICITIES) considered crucial for verification. This suggests that even trained users may misinterpret both diagrams. It may also mean that, despite their best efforts, professionals are not proficient at representing users' needs with either diagramming technique. This further emphasizes the need for training in reading and interpreting the requirements tools, and in understanding the effectiveness of the modeling tools in the requirements verification process.

In conclusion, we believe the system failure problem may be addressed, at least in part, by reframing the question to deal with testing our analysis rather than building our analysis. That is, we need to gauge the extent to which users are able to effectively read and interpret the modeling tools before sign-off, thereby reducing the potential for system failures. Finally, the need to train users to read and interpret modeling diagrams before they sign off on requirements verification cannot be overstated.


References

1. Avison, D.E. and Taylor, V. Information systems development methodologies: A classification according to the problem situation. J. Information Technology 12 (1997), 73–81.

2. Dalcher, D. and Drevin, L. Learning from information systems failures by using narrative and ante-narrative methods. In Proceedings of the 2003 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement Through Technology. South African Institute for Computer Scientists and Information Technologists, 2003.

3. Dobing, B. and Parsons, J. Understanding the role of use cases in UML: A review and research agenda. J. Database Management 11, 4 (2000), 28–36.

4. Fitzgerald, B. An empirical investigation into the adoption of systems development methodologies. Information & Management 34, 6 (1998), 317–328.

5. Hoffer, J.A., George, J.F. and Valacich, J.S. Modern Systems Analysis & Design. Prentice-Hall, Upper Saddle River, NJ, 2002.

6. Jayaratna, N. Understanding and Evaluating Methodologies: NIMSAD, A Systemic Framework. McGraw-Hill, Maidenhead, UK, 1994.

7. Ross, J.K. Project and requirements management—Driving software project success. J. Validation Technology 10, 3 (2004), 192.

8. Standish Group. The CHAOS Report, 1995.

9. Sutcliffe, A.G. Object-oriented systems development: Survey of structured methods. Information and Software Technology 33, 6 (1991), 433–442.

10. Vessey, I. and Conger, S. Requirements specification: Learning object, process, and data methodologies. Commun. ACM 37, 5 (May 1994), 102–113.


Authors

Anand Jeyaraj (anand.jeyaraj@wright.edu) is an assistant professor of information systems in the Raj Soin College of Business at Wright State University, Dayton, OH.

Vicki L. Sauter (vicki.sauter@umsl.edu) is a professor of information systems at University of Missouri—St. Louis.


Footnotes

1Methodologies are generally process-oriented (SDLC), data-oriented (JSD), or object-oriented (OOAD). However, data-oriented approaches are considered to be conceptually similar to object-oriented approaches [10] and are not discussed further.

2Students enrolled in these classes took systems analysis and systems design (separate classes) from different instructors. As such, there was variability in their exposure to the two diagramming tools, but there were no significant differences that could be explained by the sections in which they were enrolled.

3The data flow diagram and the use case diagram for the student registration system used in the study are available from the authors upon request.


Tables

Table 1. Pairwise comparisons of performance with modeling tools for all users.

Table 2. Pairwise comparisons of performance with modeling tools for novice users.

Table 3. Pairwise comparisons of performance with modeling tools for trained users.



©2007 ACM  0001-0782/07/0600  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
