
Communications of the ACM

Research highlights

Technical Perspective: The Simplicity of Cache Efficient Functional Algorithms


Scientific models are simplified descriptions of messy reality.

Depending on our purposes, some models may be better than others. Good models are abstract enough to be tractable, yet accurate enough to draw conclusions matching the aspects of reality we hope to understand.

Even good models have limits, so we must take care to use them only within their domain of applicability. Galileo's law for adding velocities is simple and accurate enough for most purposes, but a more complicated law is needed when adding relativistic velocities. Einstein's special theory of relativity tells us how to reason about space and time in the absence of acceleration or gravity, but we must turn to Einstein's general theory when gravitational effects are significant. That general theory has passed all experimental tests so far, but we know its equations break down at black holes and other singularities.

In general, scientific models strike a compromise between tractability and accuracy. That compromise is evident in the cost models we use to analyze algorithms. In the following paper, Blelloch and Harper propose and demonstrate a more effective model for analyzing and describing the efficiency of functional algorithms.

Fifty years ago, modeling a computer system's main storage as random-access memory (RAM) was both simple and accurate. That RAM cost model remains simple, but it is no longer accurate.

Cache-aware estimates of real-world performance can be more accurate but are also more complicated. Even the ideal cache model, which assumes an unachievable perfect replacement policy, must account for both spatial and temporal locality.
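Consider a standard illustration (not an example drawn from the paper): scanning n contiguous words in address order. The RAM model charges Θ(n) unit-cost accesses, while the ideal cache model, with B words per cache line, charges only

    Q(n) = Θ(n/B + 1)

cache misses, because each line fetched serves B consecutive accesses. Accounting for the line size B (spatial locality) and for the cache size M (temporal locality) is what makes the model both more accurate and more complicated.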

Sometimes, however, we can improve both tractability and accuracy, as when we use asymptotic notation to describe costs. It is easier to come up with an asymptotic estimate for the number of steps in a computation than it is to count the exact number of steps. Counting the exact number of steps is more precise, but exact counts are inaccurate models of the real world because compiler optimizations and hardware peculiarities invalidate them. Asymptotic estimates are more robustly accurate because they are less precise: They express costs at a higher (and more useful) level of abstraction.

Cache-oblivious algorithms provide another example of using abstraction to combine accurate cost estimates with tractability. The cache size M and cache line (block) size B are abstracted parameters that may show up in the asymptotic complexity of a cache-oblivious algorithm, but do not appear within the algorithm itself.
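To make this concrete, here is a minimal sketch in OCaml (my own illustration, not code from the paper): a divide-and-conquer reduction whose analysis involves M and B, but whose code never mentions them.

    (* A cache-oblivious reduction, sketched under the ideal cache model.
       Neither the cache size M nor the line size B appears below, yet the
       recursion eventually reaches subarrays that fit in cache, so the
       analysis charges O(n/B + 1) misses for an array of n elements. *)
    let rec sum a lo hi =
      if hi - lo <= 1 then (if hi > lo then a.(lo) else 0)
      else
        let mid = lo + (hi - lo) / 2 in
        sum a lo mid + sum a mid hi

    let total a = sum a 0 (Array.length a)

A simple left-to-right loop has the same miss bound for this particular problem; the divide-and-conquer shape matters for richer examples such as matrix transpose. The point stands either way: M and B inform the analysis, never the code.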

To establish the cache efficiency of a cache-oblivious algorithm, we must analyze its spatial and temporal locality. Heretofore, most such analyses have dealt with imperative algorithms over arrays. To extend the technique to functional algorithms, we need to make assumptions about the locality of object allocation and garbage collection.




Blelloch and Harper suggest we analyze the costs of functional algorithms by assuming that objects are allocated sequentially in memory, with each new object adjacent to the previously allocated one, and that garbage collection preserves this correspondence between allocation order and memory order. Their key abstraction defines a compact data structure as one for which there exists a fixed constant k, independent of the size of the structure, such that the data structure can be traversed with at most k cache misses per object. Using that abstraction and those two assumptions, the authors show how efficient cache-oblivious functional algorithms over lists and trees can be expressed and analyzed as easily as imperative algorithms over arrays.
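As a hedged sketch of how those assumptions pay off (again my own illustration in OCaml, not code from the paper): build a list, then fold over it. Under sequential allocation and order-preserving collection, the list is compact in the authors' sense.

    (* Illustrative only. The model assumes the allocator places each new
       cons cell adjacent to the previous one, and that the collector
       preserves allocation order. Then the cells of [iota n] occupy one
       contiguous run of memory. *)
    let iota n =
      let rec go i acc = if i < 0 then acc else go (i - 1) (i :: acc) in
      go (n - 1) []

    (* Traversing the list visits those cells in address order, so the
       fold below meets the compactness bound -- at most a fixed constant
       k of cache misses per object, and amortized far fewer, since one
       fetched cache line holds several adjacent cons cells. *)
    let sum_list xs = List.fold_left ( + ) 0 xs

    let () = Printf.printf "%d\n" (sum_list (iota 1_000_000))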

Not all storage allocators and garbage collectors satisfy the critical assumptions of this paper, but some do. In time, as cache-oblivious functional algorithms become more common, we can expect even more implementations to satisfy those assumptions (or to come close enough).

After all, we computer scientists have a big advantage over the physicists: We can improve both the simplicity and the accuracy of our models by improving the reality we are modeling.


Author

William D. Clinger (will@ccs.neu.edu) is an associate professor in the College of Computer and Information Science at Northeastern University, Boston, MA.


Footnotes

To view the accompanying paper, visit doi.acm.org/10.1145/2776825


Copyright held by author.



 
