
Dynamic Programming


'Dynamic programming' is an efficient programming technique for solving certain combinatorial problems. It is particularly important in bioinformatics as it is the basis of sequence alignment algorithms for comparing protein and DNA sequences.

In the bioinformatics application, Dynamic Programming gives a spectacular efficiency gain over a purely recursive algorithm. It converts what would be an [infeasible] O(2^N) algorithm into an O(N^2) one. Less widely known applications of dynamic programming are in parsing and in programs to play chess; both gain significant benefits from a Dynamic Programming formulation.

Dynamic Programming

Dynamic Programming is recursion's somewhat neglected cousin. Dynamic programming is the basis of comparison and alignment routines - such as the Unix diff utility.

Sequence Alignment

The 'showcase' application for dynamic programming is protein sequence alignment, where it provides a stunning gain in efficiency. Using dynamic programming it is possible for an algorithm to evaluate all possible ways of aligning one sequence against another in a reasonable time, even though the number of such possible alignments grows exponentially with the length of the two sequences.

The Problem

Here is an example of two protein sequences, in which each letter represents one amino acid. (More realistic examples would have a few hundred amino acids.)

    EAKKLNDAQAPKDN
    EAKSDEAEALKSDE

And here are the same two sequences with gaps (indicated by '-') inserted to bring them into a 'good' alignment.

    EAKKLNDAQAPK-DN
    EA-KSDEAEALKSDE

The score for an alignment is the sum of similarity scores for pairs of aligned letters, minus a penalty for each gap inserted. The problem is to find the highest-scoring alignment.
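
For example, with a toy scheme that scores +1 for each identical pair, 0 for each mismatch and -1 for each gap (real scoring schemes use a similarity matrix such as BLOSUM), the alignment above has seven identical pairs, six mismatches and two gaps, so it scores 7 - 2 = 5.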

When I first heard of the problem I assumed it was solved heuristically, exploring just some of the possibilities, but it isn't: all possibilities are considered.

Alignment Grid

The alignment algorithm is based on a grid. If the first sequence is placed horizontally and the second vertically, any alignment corresponds to a path through this grid. Each position in the grid determines a position in the first and second sequence. Imagine putting an 'X' in the grid corresponding to each position in the alignment where an element from the first sequence is aligned against an element from the second sequence. Join these X's up in order - only horizontal, vertical and diagonal steps are allowed. The path through the grid shows which elements of the first and second sequence were matched up and therefore determines the alignment.
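
For example, aligning EAK (placed horizontally) against EK (placed vertically) as

    EAK
    E-K

puts an 'X' where E is paired with E and where K is paired with K. Joining them takes one horizontal step - the gap, where A of the first sequence is aligned against nothing - followed by a diagonal step onto the second X:

       E  A  K
    E  X
    K        X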

Using paths in a grid to represent alignments provides a method of computing best alignments. A score can be placed in each cell of the grid: the score for the best partial alignment ending at that position in the two sequences. All the scores for the best alignments ending at (i,0) or at (0,j) are zero. The score at (i,j) can be calculated from the scores at (i-1,j), (i,j-1) and (i-1,j-1), so by filling in the scores in the grid the score for the best alignment can be found. Once the best score is found, the path that leads to it, and hence the alignment, can be traced back through the grid rapidly.
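
As a concrete illustration, here is a minimal sketch of the grid-filling step in Python. It follows the formulation above (zeros along row 0 and column 0) and assumes a toy similarity score of +1 for identical letters and 0 otherwise, with a fixed gap penalty; real programs use a substitution matrix such as BLOSUM:

    def align_score(a, b, gap=1):
        # grid[i][j] holds the score of the best partial alignment ending
        # at position i in sequence a and position j in sequence b.
        m, n = len(a), len(b)
        grid = [[0] * (n + 1) for _ in range(m + 1)]  # row 0 and column 0 are zero
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sim = 1 if a[i - 1] == b[j - 1] else 0      # toy similarity score
                grid[i][j] = max(grid[i - 1][j - 1] + sim,  # a[i] aligned with b[j]
                                 grid[i - 1][j] - gap,      # gap inserted in b
                                 grid[i][j - 1] - gap)      # gap inserted in a
        return grid[m][n]

Tracing back from the final cell, by re-testing which of the three neighbours produced each cell's value, recovers the path and hence the alignment itself.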

Each cell takes constant time to compute, so the overall algorithm has time complexity O(mn), where m and n are the lengths of the two sequences. [Textbooks usually cite O(n^2), where n is now the maximum sequence length, but O(mn) gives a better feel for the performance when one sequence may be much longer than the other.]

How Does Recursion Perform?

I've deliberately described the algorithm in such a way as to draw attention to its similarity to recursion. In the recursive formulation one writes a routine that calculates the score at (i,j) recursively: it calls itself to evaluate the scores at (i-1,j), (i,j-1) and (i-1,j-1) before returning its result. A recursive approach would, however, be very inefficient: it would recalculate intermediate results ridiculously many times over. Aligning two sequences of length 100 by recursion would take around 50 years on a modern PC, compared to fractions of a second for dynamic programming.
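
For comparison, the purely recursive formulation might look like the sketch below (same toy scoring as above). It computes the same answer, but each call spawns three more, giving a call tree of up to roughly 3^(m+n) calls even though there are only (m+1)(n+1) distinct (i,j) pairs:

    def score_recursive(a, b, i, j, gap=1):
        # Score of the best partial alignment ending at (i, j),
        # recomputed from scratch on every call.
        if i == 0 or j == 0:
            return 0
        sim = 1 if a[i - 1] == b[j - 1] else 0
        return max(score_recursive(a, b, i - 1, j - 1, gap) + sim,
                   score_recursive(a, b, i - 1, j, gap) - gap,
                   score_recursive(a, b, i, j - 1, gap) - gap)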

So Dynamic Programming is Always Better?

Does this mean that, where Dynamic Programming is a possible alternative, it is always 'better'? Not in my book. Descriptions of Dynamic Programming algorithms are often harder to understand than the recursive versions.

The ideal is to have the efficiency of Dynamic Programming with the clarity of recursion. This leads to the question:

"What is the essential difference between Dynamic Programming and Recursion?"

The essential difference is that Dynamic Programming keeps its intermediate results, whereas recursion does not. This makes a huge difference to performance when a recursive function is called repeatedly with the same arguments. In fact, Dynamic Programming is nothing more than recursion with the addition of a caching strategy. For the sequence comparison algorithm the caching strategy was to use a 2D array; in other situations sparse arrays and hashing are more appropriate.
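
In Python, for instance, the point can be made concrete: wrapping the recursive sketch above in a cache turns it into the dynamic programming version without touching the recursion itself. Here functools.lru_cache stands in as the caching strategy - a hash table keyed on the arguments:

    from functools import lru_cache

    def score_memo(a, b, gap=1):
        @lru_cache(maxsize=None)   # the cache: each (i, j) is computed only once
        def best(i, j):
            if i == 0 or j == 0:
                return 0
            sim = 1 if a[i - 1] == b[j - 1] else 0
            return max(best(i - 1, j - 1) + sim,
                       best(i - 1, j) - gap,
                       best(i, j - 1) - gap)
        return best(len(a), len(b))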


Applications

Dynamic Programming is recursion with caching of intermediate results. Any algorithm that is formulated recursively can be 'instantly' adapted to a Dynamic Programming version, with a potential speed gain wherever common sub-results are re-used.

Two classic recursive algorithms are:

  • Recursive descent parsing
  • Alpha-beta search in game-playing programs, e.g. chess.

How could these benefit?



Parsing

In practical recursive descent parsing of grammars, e.g. for compilers, the grammars are 'disambiguated', and this avoids recalculation of sub-results. Using the dynamic programming approach makes it possible to parse some ambiguous grammars with only a linear, rather than exponential, increase in running time.

There are two important details in the implementation. Rather than using a stack, one keeps two lists:

  • A list of requested function evaluations.
  • A list of function evaluations that have already been made.

In my implementation I was able to index into the second list, avoiding searching it, but in other contexts one could use hashing as an alternative.

With the two lists it is straightforward to modify the evaluation order to switch between a depth-first and a breadth-first search, or indeed some intermediate order. One is calling the same functions as in recursion; it's just that using the cache gives the freedom to choose the order of evaluation, and also avoids calling a function twice with the same values. The choice of depth first or breadth first amounts to whether one adds new requests to the start or the end of the requests list.
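
Here is a minimal sketch of the two-list mechanism in Python. The parse functions themselves are elided; a toy recurrence stands in for them. The convention assumed is that expand(key, done) returns ('value', v) once everything it needs is in the done cache, and ('needs', subkeys) otherwise:

    from collections import deque

    def evaluate(root, expand, depth_first=True):
        requests = deque([root])   # list of requested function evaluations
        done = {}                  # evaluations that have already been made
        while requests:
            key = requests.popleft()
            if key in done:
                continue                          # never evaluate twice
            result = expand(key, done)
            if result[0] == 'value':
                done[key] = result[1]
            elif depth_first:
                requests.appendleft(key)          # retry once the sub-requests are done
                for sub in reversed(result[1]):
                    requests.appendleft(sub)      # new requests go on the front
            else:
                requests.extend(result[1])        # new requests go on the end
                requests.append(key)
        return done[root]

    # A toy recurrence standing in for the parse functions:
    def expand(n, done):
        if n < 2:
            return ('value', n)
        if n - 1 in done and n - 2 in done:
            return ('value', done[n - 1] + done[n - 2])
        return ('needs', [n - 1, n - 2])

    evaluate(30, expand)                       # depth-first evaluation order
    evaluate(30, expand, depth_first=False)    # breadth-first order, same answer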

A second detail is that the caching can be optimised by only caching results where the function returns 'TRUE' - indicating that a valid sub-parse has been found. For this optimisation I needed to force a breadth-first order, as breadth-first order makes it easier to distinguish between a function evaluation that hasn't been made and one that has been made and has yielded 'FALSE'.



Chess

In chess-playing software, failure to recognise that two lines of play lead to the same position, particularly in endgames, can cause a massive waste of computing resources. Rather than build such checks into an endgame analysis routine, I'd argue that they should sit at a higher level of the design, so that they benefit play in all cases of alternative routes to the same position.
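
As a sketch of what such a higher-level check might look like, here is a fixed-depth negamax search with a 'transposition table' - a cache keyed on the position, so that two move orders reaching the same position share one evaluation. The game object supplying moves(pos), play(pos, move) and evaluate(pos) is purely an assumption for illustration:

    def negamax(pos, depth, table, game):
        # game.evaluate is assumed to score from the side-to-move's point of
        # view, and positions are assumed to be hashable.
        key = (pos, depth)
        if key in table:
            return table[key]      # position already reached via another move order
        moves = game.moves(pos)
        if depth == 0 or not moves:
            value = game.evaluate(pos)
        else:
            value = max(-negamax(game.play(pos, m), depth - 1, table, game)
                        for m in moves)
        table[key] = value
        return value

Combining such a table with alpha-beta pruning takes more care, because a pruned search establishes only a bound rather than an exact value, but the caching principle is the same.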

In neither case does seeing Dynamic Programming as recursion plus caching add something that can't already be achieved by techniques specific to these applications. The advantage is simply the recognition of an underlying principle. The principle provides a conceptual separation between the fundamental recursive algorithm and the caching strategy that makes it efficient, and this in turn can lead to less complex, more efficient code.

© James Crook, June 1998.


"Dynamic Programming" page last updated 5-July-2003