

Advances in Algorithms and Hardware Help Speed Optimization


The interplay between algorithmic developments and hardware improvements has produced the immense speed-up in optimization codes over the last few decades. There has long been a debate over which of the two has contributed more to this progress. Hardware development seems to have taken two main routes: the supercomputer and, at the lower end, the multi-processor, multicore chip.

The supercomputer has become even more “super”, but the special algorithms once developed for vector processing are no longer used. Still, we should not ignore the role played by algorithm development in the recent past: without it, vector and parallel supercomputers would have conferred almost no advantage on linear programming (simplex-based) methods. The one bright spot was the implementation of “embarrassingly parallel” algorithms for solving discrete problems, such as mixed integer programs, by branch and bound or its relatives, often achieving super-linear speed-up with the number of processors; a rough sketch of the idea follows below.
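
To make the parallel decomposition concrete, here is a minimal Python sketch (not from the post, with brute-force subtree search standing in for a real solver’s bounding logic). Branching on the first two variables of a tiny 0/1 knapsack splits the search tree into four independent subtrees, which separate worker processes explore with essentially no communication; only the best values are combined at the end.

```python
# Illustrative sketch of "embarrassingly parallel" tree search for a tiny
# 0/1 knapsack: each worker exhaustively searches one independent subtree.
# (Brute force stands in for real branch-and-bound bounding logic.)
from concurrent.futures import ProcessPoolExecutor
from itertools import product

VALUES   = [10, 13, 7, 8, 5, 9]
WEIGHTS  = [ 3,  4, 2, 3, 1, 4]
CAPACITY = 8

def best_in_subtree(fixed):
    """Best feasible value in the subtree where the first len(fixed)
    items are fixed to the given 0/1 choices."""
    n_free = len(VALUES) - len(fixed)
    best = 0
    for tail in product((0, 1), repeat=n_free):
        x = fixed + tail
        weight = sum(w * xi for w, xi in zip(WEIGHTS, x))
        if weight <= CAPACITY:
            best = max(best, sum(v * xi for v, xi in zip(VALUES, x)))
    return best

if __name__ == "__main__":
    # Branch on the first two items: four independent subtrees, one per worker.
    roots = [(a, b) for a in (0, 1) for b in (0, 1)]
    with ProcessPoolExecutor() as pool:
        print("optimum:", max(pool.map(best_in_subtree, roots)))  # optimum: 28
```

Real MIP solvers do share incumbent bounds between workers to prune the tree, but the near-independence of the subtrees is what lets the approach scale so well with processor count.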

The situation at the low end has been different. It is now possible to solve in a few seconds some test LPs that once took over an hour on a mainframe. Many of the algorithmic advances in sparse matrix methods were published decades ago, but their efficiency has been dramatically improved by rules such as “never touch a zero” and by computer science concepts such as heaps (both sketched below). However, hardware has played a prominent role, and not just through Moore’s law. The basic laptop or desktop now has multiple processors, each often with multiple cores, in addition to multi-level caches, on top of the steady clock-speed gains over time. Thus hardware currently seems to be winning the speed race. Even so, studies suggest that over a longer horizon, say 20 years, the speed-ups have been quite evenly divided between mathematics/algorithms and hardware improvements.
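
To make the “never touch a zero” rule and the role of heaps concrete, here is a small Python sketch (again illustrative only; production LP codes use far more elaborate data structures). Sparse vectors are stored as index-to-value maps, so a dot product does work proportional to the nonzero count rather than the full dimension, and a heap orders candidate pivot columns by nonzero count, in the spirit of Markowitz-style pivot selection.

```python
# "Never touch a zero": store only nonzeros, so work scales with them.
import heapq

def sparse_dot(u, v):
    """Dot product of sparse vectors held as {index: value} maps;
    only nonzero entries are ever visited."""
    if len(u) > len(v):
        u, v = v, u                      # iterate over the shorter vector
    return sum(val * v[i] for i, val in u.items() if i in v)

# Vectors over a (conceptually) million-dimensional space.
u = {3: 2.0, 999_999: 4.0}
v = {3: 0.5, 17: 1.0}
print(sparse_dot(u, v))                  # 1.0 -- two lookups, not a million

# A heap picks the sparsest candidate pivot column first
# (hypothetical column data, keyed by nonzero count).
columns = {"c1": {0: 1.0, 5: 2.0, 9: 3.0},
           "c2": {4: 1.0},
           "c3": {1: 1.0, 2: 1.0}}
heap = [(len(nonzeros), name) for name, nonzeros in columns.items()]
heapq.heapify(heap)
print(heapq.heappop(heap))               # (1, 'c2'): fewest nonzeros first
```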

What of the future? It is difficult to foresee fundamental new algorithmic developments in basic LP and sparse matrix methods, but some areas of optimization that were under-developed are beginning to come into their own. One of these is non-linear programming (NLP), which after years in the shadows (computationally, that is; it has long been prominent in the literature) is becoming ready for prime time. Some hardware developments, such as more cores, are easily foreseeable, but will there be new breakthroughs, and the algorithms to exploit them? Time will tell.
