Communications of the ACM

Research Archive


The Research Archive provides access to all Research articles published in past issues of Communications of the ACM.

December 2014

From Communications of the ACM

Technical Perspective: Rethinking Caches For Throughput Processors

As GPUs have become mainstream parallel processing engines, many applications targeting GPUs now have data locality more amenable to traditional caching. The architecture described in "Learning Your Limit" has a number of…

Learning Your Limit: Managing Massively Multithreaded Caches Through Scheduling

This paper studies the effect of accelerating highly parallel workloads with significant locality on a massively multithreaded GPU.