
Communications of the ACM

Kode Vicious

The Unholy Trinity of Software Development



Dear KV,

I have read your columns for a few years, and you have written a few times about testing and documentation, but you have not written about the relationship between the two. Many large projects, including the one I am working on now, keep their tests and documentation separate from the code and from each other. I have argued with my team about moving both of these closer to, or even embedding them in, the source code, but there are always a few engineers who refuse to allow anything but code and comments in the source files. Their argument is about maximizing the screen space for code context, but that seems like a more reasonable argument for a 25-line terminal than for a modern development environment. What arguments would you make to get this software team to adopt a more holistic approach to the system we are working on?

System Integrationist

Dear SI,

Questions like this bring to mind the fact that source code, documentation, and testing are the unholy trinity of software development, although many organizations like to see them as separate entities. It is interesting that while many groups pay lip service to "test-driven development," they do not include documentation in TDD.

I think you have found one of the reasons many developers fight against the integration of tests and documentation into source code, either directly into the source files or even close by in the same directory. The arguments against such integration include the one you mentioned about vertical space for code. Let's treat that one first.

Software developers like new toys. Of course they do: they work on computers, and computers are toys to us, and everyone likes things that are shiny. If you visit a modern software company, what do you see besides a sea of Aeron chairs? Lots and lots of monitors, many of them of the 4K variety, meaning that a text editor, even with a large font, will give you more than 100 lines of code to look at, a fourfold increase over the 80×25 terminals on which code was written from the 1970s onward.

There is a school of thought that says a function or method of 100 lines is too long and should be broken down into smaller, more easily digestible chunks. If you work in such an environment, then there is no reason for the developers to argue against documentation or tests being integrated within the code itself, as a 25-line function can have 10 lines of documentation above it, and a reasonable documentation system, such as Doxygen, can assemble this set of "documentlettes" into a greater whole. That greater whole, by the way, needs to be reviewed by someone who can turn it into language suitable for others to read. The worst code documentation is the kind that is not kept in sync with the code itself. The second worst is the kind in which the engineer writes something obvious, such as "the foo() function returns bar," which is easily seen from reading the code.
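A contrived sketch of that second-worst kind, with hypothetical names, looks like this; the comment restates what the code already says and answers none of the questions a caller actually has:

static int bar = 42;

/**
 * @brief foo() returns bar.
 *
 * (Restates the code; says nothing about why bar exists, when it
 * changes, or what must hold before foo() is called.)
 */
static int foo(void)
{
    return bar;
}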

The best type of this documentation explains what the function does, what its possible error conditions are, and what preconditions must hold when it is called. Multithreaded applications really need to have the locking requirements described in such documentation blocks. For some products, such as end-user-facing systems, these document blocks will not generally find their way into the final manual, but they will be very useful to the person who is responsible for writing that manual. Libraries and other systems that are consumed by other programmers absolutely must have this style of documentation for the preservation of the sanity of all involved.
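As a sketch of what such a block might look like in Doxygen style (the queue API here, evq_append() and its types, is invented for illustration):

struct evq;        /* opaque types, declared elsewhere */
struct evq_rec;    /* in this hypothetical module      */

/**
 * @brief Append a record to the shared event queue.
 *
 * @param q    Queue initialized with evq_init(); must not be NULL.
 * @param rec  Record to copy into the queue; must not be NULL.
 *
 * @return 0 on success, ENOMEM if the queue is full, EINVAL if
 *         rec fails validation.
 *
 * Locking: the caller must NOT hold q->lock; this function
 * acquires and releases it internally, so it is safe to call
 * from any thread.
 */
int evq_append(struct evq *q, const struct evq_rec *rec);

Note that the block answers all three questions: what the function does, how it fails, and what must hold, including locks, when it is called.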

On the integration of tests into the source code, well, KV may be a bit old-fashioned, but I do see that this is a tad harder and probably requires the source code to have special features embedded in it, or the source-code editing system to have special features, or both. Even with more than 100 lines of vertical space in which to code and document, adding any significant number of conformance tests will definitely dwarf the code and make each source file quite large and unwieldy.




Code folding, a common feature of many text editors, may help if you really are hell-bent on keeping the unholy trinity together in one file. The top of each source file would hold the overarching documentation, a large block that describes the module and its purpose within the system. The source code would then be interleaved with the class/method/function documentation, and the conformance tests for the code would come last.

The main complaint you will encounter with the folding method is that it requires special characters and a smart code editor, although even Vim has code folding at this point. Since folding in the source file usually requires special tags, it would make sense to standardize these for the whole project so that there is one tag each for documentation, tests, and source code.
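As a sketch of how such standardized tags might look, using Vim's default {{{/}}} fold markers (the ring-buffer module here is invented for illustration):

/* Documentation {{{1
 * ring.c: a fixed-size ring-buffer module (hypothetical example).
 * The overarching module description lives in this top fold;
 * per-function Doxygen blocks sit with the code below.
 * }}}1 */

#include <assert.h>
#include <stddef.h>

/* Code {{{1 */
struct ring {
    size_t head;    /* next slot to write */
    size_t tail;    /* next slot to read  */
};

/** @brief Number of elements currently stored in the ring. */
static size_t ring_used(const struct ring *r)
{
    return r->head - r->tail;
}
/* }}}1 */

/* Tests {{{1 */
static void test_ring_used_empty(void)
{
    struct ring r = { 0, 0 };
    assert(ring_used(&r) == 0);
}

int main(void)
{
    test_ring_used_empty();
    return 0;
}
/* }}}1 */

With folding enabled, a developer can collapse the documentation and test folds and see nothing but code, which should mollify the vertical-space purists.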

One advantage of combining all of these things is that the tools you use—including compilers, editors, test frameworks, and documentation extractors—can all point at the same directory hierarchy. Keeping tests, documentation, and code separate complicates the search paths for the various tools and leads to errors, such as pointing at the wrong place and getting the wrong tests, or similar problems.

Bringing together these three components so that they are easily developed in concert has a lot of advantages, but you are still going to have to help people past the mind-set of the terminal. These arguments often bring to mind the following scene from Neal Stephenson's Snow Crash, in which he describes how the main character, annoyingly named Hiro Protagonist, actually writes code: "... where everything that you see in the Metaverse, no matter how lifelike and beautiful and three-dimensional, reduces to a simple text file: a series of letters on an electronic page. It is a throwback to the days when people programmed computers through primitive teletypes and IBM punch cards."

The next time your teammates complain about vertical space wasted on tests or documentation, hand them a punch card.

KV

Dear KV,

In the past 10 years I have noticed that the number of CPU cores available to my software has been increasing, but the frequency of those cores is not much higher than it was when I left school. Multicore software was a big topic when the trend first began, but it does not seem to be discussed as much now that systems often have six or more cores. Most programmers seem to ignore the number of cores and write their code as they did when systems had only a single CPU. Is that just my impression, or does it mean I picked the wrong startup this year?

Core Curious

Dear Core,

The chief contribution of multicore hardware to software design has been to turn every system into a truly concurrent system. A recently released digital watch has two cores in it, and people still think "digital watches are a pretty neat idea" (as in Douglas Adams's The Hitchhiker's Guide to the Galaxy). When the current crop of computer languages was written, the only truly concurrent systems were rare and expensive beasts that were used in government research labs and other similarly rarefied venues. Now, any clown can buy a concurrent system off the Internet, install it in a datacenter, and push some code to it. In fact, such clowns can get such systems in the cloud at the push of a button. Would that software for such systems were as easily gotten!




Leaving aside the fact that most applications are now distributed systems implemented on many-core communicating servers, what can we say about the concurrent nature of modern software and hardware? The short answer is, "It's all crap," but that is not helpful or constructive, and KV is all about being helpful and constructive.

From our formative computer science years, we all know that in a concurrent system two different pieces of code can be executing simultaneously, and on a modern server, that number can easily be 32 or 64 rather than just two. As concurrency increases, so does complexity. Software is written to be executed as a set of linear steps, but depending on how the software is written, it may be broken down into many small parts that might all be running at the same time. As long as the software does not share any state between the concurrent code, everything is fine—well, as fine as any other nonconcurrent software. The purpose of software is to process data and therefore to take things and change them or to mutate state. The number of significant software systems that do not wind up sharing state between concurrent parts is very, very small.
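To make the shared-state problem concrete, here is a minimal sketch with POSIX threads (the names are invented): two threads increment one unprotected counter, and because counter++ is a read-modify-write rather than a single step, updates are lost.

#include <pthread.h>
#include <stdio.h>

static long counter;                 /* shared, unprotected state */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* read-modify-write: a data race */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter);        /* expected 2000000; often less */
    return 0;
}

Compile with -pthread and the printed total will usually fall short of 2,000,000; wrapping the increment in a pthread_mutex_t makes the result correct, at the cost of serializing the hot loop.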

Software that is written specifically without concurrency is, of course, easier to manage and debug, but it also wastes most of the processing power of the system on which it runs, and so more and more software is being converted from nonconcurrent into concurrent or being written for concurrency from scratch. For any significant system, it is probably easier to rewrite the software in a newer, concurrency-aware language than to try to retrofit older software with traditional concurrency primitives.

Now, I am sure you have read code that looks to be nonconcurrent—that is, it does not use threads in its process—and you might think that was fine, but, alas, nothing is ever really fine. Taking a collection of programs and having them share data through, for example, the file system or shared memory, a common early way of getting some level of concurrency, does not protect the system from the evils of deadlock or other concurrency bugs. It is just as possible to deadlock software by passing a set of messages between two concurrent processes as it is to do the same sort of thing with POSIX threads and mutexes. The problems all come down to the same things: idempotent updates of data and data structures, the avoidance of deadlocks, and the avoidance of starvation.
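A minimal sketch of the classic shape, again with POSIX threads and invented names: each thread holds one mutex and waits forever for the other. Two processes that each block reading a message from the other produce exactly the same hang.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&a);
    sleep(1);                   /* widen the window so the hang is reliable */
    pthread_mutex_lock(&b);     /* blocks forever: t2 holds b, wants a */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&b);
    sleep(1);
    pthread_mutex_lock(&a);     /* blocks forever: t1 holds a, wants b */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);      /* never returns: a classic deadlock */
    pthread_join(y, NULL);
    return 0;
}

The standard cure, always acquiring the locks in the same order, is covered at length in the books mentioned below.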

These topics are covered in books about operating systems, mostly because it was operating systems that first had these challenges. If, after this description, you are still curious, I recommend picking up one such book so that you at least understand the risks of concurrent systems and the land mines you are likely to step on as you build and debug, and debug, and debug such systems.

KV

Related articles
on queue.acm.org

Hickory Dickory Doc
Kode Vicious
http://queue.acm.org/detail.cfm?id=2791303

Microsoft's Protocol Documentation Program: Interoperability Testing at Scale
Nico Kicillof, Wolfgang Grieskamp, and Bob Binder
http://queue.acm.org/detail.cfm?id=1996412

Verification of Safety-critical Software
B. Scott Andersen and George Romanski
http://queue.acm.org/detail.cfm?id=2024356


Author

George V. Neville-Neil (kv@acm.org) is the proprietor of Neville-Neil Consulting and co-chair of the ACM Queue editorial board. He works on networking and operating systems code for fun and profit, teaches courses on various programming-related subjects, and encourages your comments, quips, and code snips pertaining to his Communications column.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.


 
