
Communications of the ACM

Research highlights

Technical Perspective: Algorithm Selection as a Learning Problem


The following paper by Gupta and Roughgarden, "Data-Driven Algorithm Design," addresses the fact that the best algorithm to use for many problems depends on what the input "looks like": different algorithms work better on different types of inputs. This is especially true for NP-hard problems, where we do not expect ever to have algorithms that work well on all inputs; instead, we often have a variety of heuristics, each of which works better in different settings. Moreover, these heuristics often have parameters or hyperparameters that must somehow be set.
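To see the phenomenon concretely, consider two classic greedy rules for the 0/1 knapsack problem: take items in decreasing order of value, or in decreasing order of value density (value divided by weight). Neither dominates the other, and the toy instances below, which are illustrative examples rather than anything drawn from the paper, show each rule winning on a different input.

```python
# Two greedy rules for 0/1 knapsack; each wins on a different input.
# (Toy illustrative instances; not taken from the paper.)

def greedy(values, weights, capacity, score):
    """Take items in decreasing score order while they fit."""
    order = sorted(range(len(values)), key=score, reverse=True)
    total, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            total += values[i]
            remaining -= weights[i]
    return total

# Instance A: ten small, high-density items plus one heavy item.
# Ranking by density packs the small items and wins (20 vs. 10).
vA, wA = [10] + [2] * 10, [10] + [1] * 10
print(greedy(vA, wA, 10, lambda i: vA[i]),          # by value   -> 10
      greedy(vA, wA, 10, lambda i: vA[i] / wA[i]))  # by density -> 20

# Instance B: one valuable heavy item and one small item. Ranking by
# density takes the small item first and blocks the big one (2 vs. 10).
vB, wB = [10, 2], [10, 1]
print(greedy(vB, wB, 10, lambda i: vB[i]),          # by value   -> 10
      greedy(vB, wB, 10, lambda i: vB[i] / wB[i]))  # by density -> 2
```

Which rule is better thus depends entirely on which kinds of instances the application actually produces, which is precisely the question data-driven algorithm design asks.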




Traditional theoretical analysis of such problems would select among algorithms or parameter choices by looking at their worst-case performance, such as their approximation ratio. The traditional alternative is average-case analysis, which looks at performance on some clean (typically uniform) distribution over inputs. However, the inputs arising in a given domain are typically neither worst-case nor average-case from a clean distribution: they often have structure, and can be thought of as drawn from some complex and difficult-to-characterize probability distribution over inputs associated with the given domain. This fact suggests treating algorithm selection as a learning problem: using previously seen examples of inputs from a given domain to tune the parameters of a heuristic or to select among various choices of algorithm. Indeed, this approach has been used in practice for many years; see, for example, Hutter [5] and Smith-Miles [6].

What the authors present is a theoretical formulation and analysis of this process, using the well-developed framework of PAC-learning from learning theory to analyze fundamental questions. For instance, how many examples of previous inputs does one need to see to have confidence that optimizing parameters over these examples will produce a rule that is near optimal on average for future inputs drawn from the same (complex and difficult-to-characterize) probability distribution? The paper answers such questions by analyzing formal measures of complexity (in particular, the pseudo-dimension) of various natural algorithm families, such as greedy algorithms for knapsack and other object assignment problems, yielding bounds on the number of previous inputs that must be seen for good generalization.
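For orientation, the shape of such a guarantee is the standard uniform-convergence bound from learning theory (the template the paper's results instantiate, not a quotation of them): if each parameter setting induces a performance function mapping inputs to [0, H], and this family of functions has pseudo-dimension d, then m = O((H/ε)² (d ln(H/ε) + ln(1/δ))) sample inputs suffice so that, with probability at least 1 - δ, every parameter's empirical average performance is within ε of its true expectation, and hence the empirically best parameter is within 2ε of optimal. The sketch below carries this out for a single-parameter greedy knapsack family in the spirit of those the paper analyzes, ranking items by value / weight^ρ so that ρ = 0 recovers the by-value rule above and ρ = 1 the by-density rule; the instance distribution, the sample size, and the grid search are illustrative assumptions.

```python
import random

# A minimal sketch of data-driven algorithm selection for a parameterized
# greedy knapsack family (an illustrative construction in the spirit of
# the paper, not its exact formulation).

def greedy_knapsack(values, weights, capacity, rho):
    """Take items in decreasing order of values[i] / weights[i]**rho.

    rho = 0 ranks purely by value; rho = 1 ranks by value density.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i] ** rho,
                   reverse=True)
    total, remaining = 0.0, capacity
    for i in order:
        if weights[i] <= remaining:
            total += values[i]
            remaining -= weights[i]
    return total

def random_instance(rng, n=50, capacity=10.0):
    # Stand-in for a draw from the domain's unknown input distribution.
    values = [rng.uniform(0.1, 1.0) for _ in range(n)]
    weights = [rng.uniform(0.1, 1.0) for _ in range(n)]
    return values, weights, capacity

# "Training": sample past inputs, then pick the parameter with the best
# average performance on them (empirical optimization over a grid).
rng = random.Random(0)
training = [random_instance(rng) for _ in range(200)]
grid = [k / 10 for k in range(11)]  # rho in {0.0, 0.1, ..., 1.0}
best_rho = max(grid, key=lambda rho: sum(
    greedy_knapsack(v, w, c, rho) for v, w, c in training))
print("selected rho:", best_rho)
```

Roughly speaking, on any single instance the heuristic's output can change only at values of ρ where two items swap positions in the ranking, so performance is a piecewise constant function of ρ with polynomially many pieces; structure of this kind is what keeps the pseudo-dimension small and makes the selected ρ generalize to future inputs from the same distribution.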


 
