{{Short description|Algorithmic problem on pairs of sequences}}
{{Distinguish|longest common substring}}
[[File:Nubio Diff Screenshot3.png|thumb|Comparison of two revisions of an example file, based on their longest common subsequence (black)]]
A '''longest common subsequence''' ('''LCS''') is the longest [[subsequence]] common to all sequences in a set of sequences (often just two sequences). It differs from the [[longest common substring]]: unlike substrings, subsequences are not required to occupy consecutive positions within the original sequences. The problem of computing longest common subsequences is a classic [[computer science]] problem, the basis of [[data comparison]] programs such as the [[diff utility|<code>diff</code> utility]], and has applications in [[computational linguistics]] and [[bioinformatics]]. It is also widely used by [[Revision control|revision control systems]] such as [[Git (software)|Git]] for [[Merge (revision control)|reconciling]] multiple changes made to a revision-controlled collection of files.

For example, consider the sequences (ABCD) and (ACBAD). They have five length-2 common subsequences: (AB), (AC), (AD), (BD), and (CD); two length-3 common subsequences: (ABD) and (ACD); and no longer common subsequences. So (ABD) and (ACD) are their longest common subsequences.

== Complexity ==
For the general case of an arbitrary number of input sequences, the problem is [[NP-hard]].<ref>{{cite journal| author = David Maier| title = The Complexity of Some Problems on Subsequences and Supersequences| journal = J. ACM| volume = 25| year = 1978| pages = 322–336| doi = 10.1145/322063.322075| publisher = ACM Press| issue = 2| s2cid = 16120634| doi-access = free}}</ref> When the number of sequences is constant, the problem is solvable in polynomial time by [[dynamic programming]].
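As a preview of the two-sequence dynamic programming approach developed later in the article, the LCS length can be sketched in Python (the function name is illustrative, not from any library):

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of sequences x and y."""
    m, n = len(x), len(y)
    # C[i][j] holds the LCS length of the prefixes x[:i] and y[:j]
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i][j - 1], C[i - 1][j])
    return C[m][n]

print(lcs_length("ABCD", "ACBAD"))  # 3, matching (ABD) or (ACD) above
```

The nested loops make the quadratic running time for two sequences immediate.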
Given <math>N</math> sequences of lengths <math>n_1, ..., n_N</math>, a naive search would test each of the <math>2^{n_1}</math> subsequences of the first sequence to determine whether they are also subsequences of the remaining sequences; each subsequence may be tested in time linear in the lengths of the remaining sequences, so the time for this algorithm would be

:<math>O\left( 2^{n_1} \sum_{i>1} n_i\right).</math>

For the case of two sequences of ''n'' and ''m'' elements, the running time of the dynamic programming approach is [[Big O notation|O]](''n'' × ''m'').<ref>{{cite journal |last1=Wagner |first1=Robert |last2=Fischer |first2=Michael |date=January 1974 |title=The string-to-string correction problem |journal=[[Journal of the ACM]] |volume=21 |issue=1 |pages=168–173 |doi=10.1145/321796.321811 |citeseerx=10.1.1.367.5281 |s2cid=13381535 }}</ref> For an arbitrary number of input sequences, the dynamic programming approach gives a solution in

:<math>O\left(N \prod_{i=1}^{N} n_i\right).</math>

There exist methods with lower complexity,<ref name="BHR00">{{cite conference | author = L. Bergroth and H. Hakonen and T. Raita | conference = Proceedings Seventh International Symposium on String Processing and Information Retrieval. SPIRE 2000 | title = A survey of longest common subsequence algorithms | date = 27–29 September 2000 |location=A Coruña, Spain | isbn = 0-7695-0746-8 | pages = 39–48 | doi = 10.1109/SPIRE.2000.878178 | publisher = IEEE Computer Society| s2cid = 10375334 }}</ref> which often depend on the length of the LCS, the size of the alphabet, or both.

The LCS is not necessarily unique; in the worst case, the number of common subsequences is exponential in the lengths of the inputs, so the algorithmic complexity must be at least exponential.<ref>{{cite arXiv | author = Ronald I. Greenberg | title = Bounds on the Number of Longest Common Subsequences | date = 2003-08-06 | eprint = cs.DM/0301030}}</ref>

== Solution for two sequences ==
The LCS problem has an [[optimal substructure]]: the problem can be broken down into smaller, simpler subproblems, which can, in turn, be broken down into simpler subproblems, and so on, until, finally, the solution becomes trivial. LCS in particular has [[overlapping subproblems]]: the solutions to high-level subproblems often reuse solutions to lower-level subproblems. Problems with these two properties are amenable to [[dynamic programming]] approaches, in which subproblem solutions are [[memoization|memoized]], that is, the solutions of subproblems are saved for reuse.

=== Prefixes ===
The prefix ''S''<sub>''n''</sub> of ''S'' is defined as the first ''n'' characters of ''S''.<ref>{{cite book | last = Xia | first = Xuhua | title = Bioinformatics and the Cell: Modern Computational Approaches in Genomics, Proteomics and Transcriptomics | url = https://archive.org/details/bioinformaticsce00xiax_984 | url-access = limited | year = 2007 | publisher = Springer | location = New York | page = [https://archive.org/details/bioinformaticsce00xiax_984/page/n38 24] | isbn = 978-0-387-71336-6 }}</ref> For example, the prefixes of ''S'' = (AGCA) are
:''S''<sub>0</sub> = ()
:''S''<sub>1</sub> = (A)
:''S''<sub>2</sub> = (AG)
:''S''<sub>3</sub> = (AGC)
:''S''<sub>4</sub> = (AGCA).

Let ''LCS''(''X'', ''Y'') be a function that computes a longest subsequence common to ''X'' and ''Y''. Such a function has two interesting properties.

=== First property ===
''LCS''(''X''^''A'',''Y''^''A'') = ''LCS''(''X'',''Y'')^''A'', for all strings ''X'', ''Y'' and all symbols ''A'', where ^ denotes string concatenation. This allows one to simplify the ''LCS'' computation for two sequences ending in the same symbol.
For example, ''LCS''("BANANA","ATANA") = ''LCS''("BANAN","ATAN")^"A". Continuing for the remaining common symbols, ''LCS''("BANANA","ATANA") = ''LCS''("BAN","AT")^"ANA".

=== Second property ===
If ''A'' and ''B'' are distinct symbols (''A''≠''B''), then ''LCS''(''X''^''A'',''Y''^''B'') is one of the maximal-length strings in the set { ''LCS''(''X''^''A'',''Y''), ''LCS''(''X'',''Y''^''B'') }, for all strings ''X'', ''Y''.

For example, ''LCS''("ABCDEFG","BCDGK") is the longest string among ''LCS''("ABCDEFG","BCDG") and ''LCS''("ABCDEF","BCDGK"); if both happened to be of equal length, one of them could be chosen arbitrarily. To realize the property, distinguish two cases:
*If ''LCS''("ABCDEFG","BCDGK") ends with a "G", then the final "K" cannot be in the LCS, hence ''LCS''("ABCDEFG","BCDGK") = ''LCS''("ABCDEFG","BCDG").
*If ''LCS''("ABCDEFG","BCDGK") does not end with a "G", then the final "G" cannot be in the LCS, hence ''LCS''("ABCDEFG","BCDGK") = ''LCS''("ABCDEF","BCDGK").

=== ''LCS'' function defined ===
Let two sequences be defined as follows: <math>X=(x_1 x_2 \cdots x_m)</math> and <math>Y=(y_1 y_2 \cdots y_n)</math>. The prefixes of <math>X</math> are <math>X_0, X_1, X_2, \dots, X_m</math>; the prefixes of <math>Y</math> are <math>Y_0, Y_1, Y_2, \dots, Y_n</math>. Let <math>\mathit{LCS}(X_i,Y_j)</math> represent the set of longest common subsequences of the prefixes <math>X_i</math> and <math>Y_j</math>. This set of sequences is given by the following.

:<math>
\mathit{LCS}(X_i,Y_j)=\begin{cases}
\epsilon & \mbox{if }i=0\mbox{ or }j=0 \\
\mathit{LCS}(X_{i-1},Y_{j-1}) \hat{} x_i & \mbox{if }i,j>0\mbox{ and }x_i=y_j \\
\operatorname{\max}\{\mathit{LCS}(X_i,Y_{j-1}),\mathit{LCS}(X_{i-1},Y_j)\} & \mbox{if }i,j>0\mbox{ and }x_i\ne y_j.
\end{cases}
</math>

To find the LCS of <math>X_i</math> and <math>Y_j</math>, compare <math>x_i</math> and <math>y_j</math>. If they are equal, then the sequence <math>\mathit{LCS}(X_{i-1},Y_{j-1})</math> is extended by that element, <math>x_i</math>.
If they are not equal, then the longer of the two sequences, <math>\mathit{LCS}(X_i,Y_{j-1})</math> and <math>\mathit{LCS}(X_{i-1},Y_j)</math>, is retained. (If they are the same length, but not identical, then both are retained.) The base case, when either <math>X_i</math> or <math>Y_j</math> is empty, is the [[empty string]], <math>\epsilon</math>.

=== Worked example ===
The longest subsequence common to ''R'' = (GAC), and ''C'' = (AGCAT) will be found. Because the ''LCS'' function uses a "zeroth" element, it is convenient to define zero prefixes that are empty for these sequences: ''R''<sub>0</sub> = ε; and ''C''<sub>0</sub> = ε. All the prefixes are placed in a table with ''C'' in the first row (making it a <u>c</u>olumn header) and ''R'' in the first column (making it a <u>r</u>ow header).

{| class="wikitable" style="text-align:center"
|+ LCS Strings
|-
! || ε || A || G || C || A || T
|-
! ε
| ε || ε || ε || ε || ε || ε
|-
! G
| ε || || || || ||
|-
! A
| ε || || || || ||
|-
! C
| ε || || || || ||
|}

This table is used to store the LCS sequence for each step of the calculation. The second column and second row have been filled in with ε, because when an empty sequence is compared with a non-empty sequence, the longest common subsequence is always an empty sequence.

''LCS''(''R''<sub>1</sub>, ''C''<sub>1</sub>) is determined by comparing the first elements in each sequence. G and A are not the same, so this LCS gets (using the "second property") the longer of the two sequences, ''LCS''(''R''<sub>1</sub>, ''C''<sub>0</sub>) and ''LCS''(''R''<sub>0</sub>, ''C''<sub>1</sub>). According to the table, both of these are empty, so ''LCS''(''R''<sub>1</sub>, ''C''<sub>1</sub>) is also empty, as shown in the table below. The arrows indicate that the sequence comes from both the cell above, ''LCS''(''R''<sub>0</sub>, ''C''<sub>1</sub>) and the cell on the left, ''LCS''(''R''<sub>1</sub>, ''C''<sub>0</sub>).
''LCS''(''R''<sub>1</sub>, ''C''<sub>2</sub>) is determined by comparing G and G. They match, so G is appended to the upper left sequence, ''LCS''(''R''<sub>0</sub>, ''C''<sub>1</sub>), which is (ε), giving (εG), which is (G).

For ''LCS''(''R''<sub>1</sub>, ''C''<sub>3</sub>), G and C do not match. The sequence above is empty; the one to the left contains one element, G. Selecting the longer of these, ''LCS''(''R''<sub>1</sub>, ''C''<sub>3</sub>) is (G). The arrow points to the left, since that is the longer of the two sequences. ''LCS''(''R''<sub>1</sub>, ''C''<sub>4</sub>), likewise, is (G). ''LCS''(''R''<sub>1</sub>, ''C''<sub>5</sub>), likewise, is (G).

{| class="wikitable" style="text-align:center"
|+ "G" Row Completed
|-
! || ε || A || G || C || A || T
|-
! ε
| ε || ε || ε || ε || ε || ε
|-
! G
| ε
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>ε
| <math>\overset{\nwarrow}{\ }</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
|-
! A
| ε || || || || ||
|-
! C
| ε || || || || ||
|}

For ''LCS''(''R''<sub>2</sub>, ''C''<sub>1</sub>), A is compared with A. The two elements match, so A is appended to ε, giving (A).

For ''LCS''(''R''<sub>2</sub>, ''C''<sub>2</sub>), A and G do not match, so the longer of ''LCS''(''R''<sub>1</sub>, ''C''<sub>2</sub>), which is (G), and ''LCS''(''R''<sub>2</sub>, ''C''<sub>1</sub>), which is (A), is used. In this case, they each contain one element, so this LCS is given two subsequences: (A) and (G).

For ''LCS''(''R''<sub>2</sub>, ''C''<sub>3</sub>), A does not match C. ''LCS''(''R''<sub>2</sub>, ''C''<sub>2</sub>) contains sequences (A) and (G); LCS(''R''<sub>1</sub>, ''C''<sub>3</sub>) is (G), which is already contained in ''LCS''(''R''<sub>2</sub>, ''C''<sub>2</sub>). The result is that ''LCS''(''R''<sub>2</sub>, ''C''<sub>3</sub>) also contains the two subsequences, (A) and (G).
For ''LCS''(''R''<sub>2</sub>, ''C''<sub>4</sub>), A matches A, which is appended to the upper left cell, giving (GA).

For ''LCS''(''R''<sub>2</sub>, ''C''<sub>5</sub>), A does not match T. Comparing the two sequences, (GA) and (G), the longer is (GA), so ''LCS''(''R''<sub>2</sub>, ''C''<sub>5</sub>) is (GA).

{| class="wikitable" style="text-align:center"
|+ "G" & "A" Rows Completed
|-
! || ε || A || G || C || A || T
|-
! ε
| ε || ε || ε || ε || ε || ε
|-
! G
| ε
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>ε
| <math>\overset{\nwarrow}{\ }</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
|-
! A
| ε
| <math>\overset{\nwarrow}{\ }</math>(A)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(A) & (G)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(A) & (G)
| <math>\overset{\nwarrow}{\ }</math>(GA)
| <math>\overset{\ }{\leftarrow}</math>(GA)
|-
! C
| ε || || || || ||
|}

For ''LCS''(''R''<sub>3</sub>, ''C''<sub>1</sub>), C and A do not match, so ''LCS''(''R''<sub>3</sub>, ''C''<sub>1</sub>) gets the longer of the two sequences, (A).

For ''LCS''(''R''<sub>3</sub>, ''C''<sub>2</sub>), C and G do not match. Both ''LCS''(''R''<sub>3</sub>, ''C''<sub>1</sub>) and ''LCS''(''R''<sub>2</sub>, ''C''<sub>2</sub>) have one element. The result is that ''LCS''(''R''<sub>3</sub>, ''C''<sub>2</sub>) contains the two subsequences, (A) and (G).

For ''LCS''(''R''<sub>3</sub>, ''C''<sub>3</sub>), C and C match, so C is appended to ''LCS''(''R''<sub>2</sub>, ''C''<sub>2</sub>), which contains the two subsequences, (A) and (G), giving (AC) and (GC).

For ''LCS''(''R''<sub>3</sub>, ''C''<sub>4</sub>), C and A do not match. Combining ''LCS''(''R''<sub>3</sub>, ''C''<sub>3</sub>), which contains (AC) and (GC), and ''LCS''(''R''<sub>2</sub>, ''C''<sub>4</sub>), which contains (GA), gives a total of three sequences: (AC), (GC), and (GA).
Finally, for ''LCS''(''R''<sub>3</sub>, ''C''<sub>5</sub>), C and T do not match. The result is that ''LCS''(''R''<sub>3</sub>, ''C''<sub>5</sub>) also contains the three sequences, (AC), (GC), and (GA).

{| class="wikitable" style="text-align:center"
|+ Completed LCS Table
|-
! || ε || A || G || C || A || T
|-
! ε
| ε || ε || ε || ε || ε || ε
|-
! G
| ε
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>ε
| <math>\overset{\nwarrow}{\ }</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
| <math>\overset{\ }{\leftarrow}</math>(G)
|-
! A
| ε
| <math>\overset{\nwarrow}{\ }</math>(A)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(A) & (G)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(A) & (G)
| <math>\overset{\nwarrow}{\ }</math>(GA)
| <math>\overset{\ }{\leftarrow}</math>(GA)
|-
! C
| ε
| <math>\overset{\ \uparrow}{\ }</math>(A)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(A) & (G)
| <math>\overset{\nwarrow}{\ }</math>(AC) & (GC)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(AC) & (GC) & (GA)
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>(AC) & (GC) & (GA)
|}

The final result is that the last cell contains all the longest subsequences common to (AGCAT) and (GAC); these are (AC), (GC), and (GA). The table also shows the longest common subsequences for every possible pair of prefixes. For example, for (AGC) and (GA), the longest common subsequences are (A) and (G).

=== Traceback approach ===
Calculating the LCS of a row of the LCS table requires only the solutions to the current row and the previous row. Still, for long sequences, these sequences can get numerous and long, requiring a lot of storage space. Storage space can be saved by saving not the actual subsequences, but the length of the subsequence and the direction of the arrows, as in the table below.

{| class="wikitable" style="text-align:center"
|+ Storing length, rather than sequences
|-
! || ε || A || G || C || A || T
|-
! ε
| 0 || 0 || 0 || 0 || 0 || 0
|-
! G
| 0
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>0
| <math>\overset{\nwarrow}{\ }</math>1
| <math>\overset{\ }{\leftarrow}</math>1
| <math>\overset{\ }{\leftarrow}</math>1
| <math>\overset{\ }{\leftarrow}</math>1
|-
! A
| 0
| <math>\overset{\nwarrow}{\ }</math>1
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| <math>\overset{\nwarrow}{\ }</math>2
| <math>\overset{\ }{\leftarrow}</math>2
|-
! C
| 0
| <math>\overset{\ \uparrow}{\ }</math>1
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| <math>\overset{\nwarrow}{\ }</math>2
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>2
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>2
|}

The actual subsequences are deduced in a "traceback" procedure that follows the arrows backwards, starting from the last cell in the table. When the length decreases, the sequences must have had a common element. Several paths are possible when two arrows are shown in a cell. Below is the table for such an analysis, with numbers colored in cells where the length is about to decrease. The bold numbers trace out the sequence, (GA).<ref>{{cite book | author = [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]] and [[Clifford Stein]] | title = Introduction to Algorithms | publisher = MIT Press and McGraw-Hill | year = 2001 | isbn = 0-262-53196-8 | edition = 2nd | chapter = 15.4 | pages = 350–355 | title-link = Introduction to Algorithms }}</ref>

{| class="wikitable" style="text-align:center"
|+ Traceback example
|-
! || ε || A || G || C || A || T
|-
! ε
| 0 || style="background:silver" | '''0''' || 0 || 0 || 0 || 0
|-
! G
| style="background:silver" | 0
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>0
| style="background:silver;color:#FF6600" | <math>\overset{\nwarrow}{\ }</math>'''1'''
| style="background:silver" | <math>\overset{\ }{\leftarrow}</math>'''1'''
| <math>\overset{\ }{\leftarrow}</math>1
| <math>\overset{\ }{\leftarrow}</math>1
|-
! A
| 0
| style="background:silver;color:#FF6600" | <math>\overset{\nwarrow}{\ }</math>1
| style="background:silver" | <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| style="background:silver;color:#FF6600" | <math>\overset{\nwarrow}{\ }</math>'''2'''
| style="background:silver" | <math>\overset{\ }{\leftarrow}</math>'''2'''
|-
! C
| 0
| <math>\overset{\ \uparrow}{\ }</math>1
| <math>\overset{\ \ \uparrow}{\leftarrow}</math>1
| style="background:silver;color:#FF6600" | <math>\overset{\nwarrow}{\ }</math>2
| style="background:silver" | <math>\overset{\ \ \uparrow}{\leftarrow}</math>2
| style="background:silver" | <math>\overset{\ \ \uparrow}{\leftarrow}</math>'''2'''
|}

== Relation to other problems ==
For two strings <math>X_{1 \dots m}</math> and <math>Y_{1 \dots n}</math>, the length of the [[shortest common supersequence problem|shortest common supersequence]] is related to the length of the LCS by<ref name="BHR00" />

:<math>\left|SCS(X,Y)\right| = n + m - \left|LCS(X,Y)\right|.</math>

The [[edit distance]] when only insertion and deletion are allowed (no substitution), or when the cost of a substitution is double the cost of an insertion or deletion, is

:<math>d'(X,Y) = n + m - 2 \cdot \left|LCS(X,Y)\right|.</math>

== Code for the dynamic programming solution ==
{{unreferenced section|date=March 2013}}

=== Computing the length of the LCS ===
The function below takes as input sequences <code>X[1..m]</code> and <code>Y[1..n]</code>, computes the LCS between <code>X[1..i]</code> and <code>Y[1..j]</code> for all <code>1 ≤ i ≤ m</code> and <code>1 ≤ j ≤ n</code>, and stores
it in <code>C[i,j]</code>. <code>C[m,n]</code> will contain the length of the LCS of <code>X</code> and <code>Y</code>.<ref name=":1">{{Introduction to Algorithms|3 |chapter=Dynamic Programming |pages=394}}</ref>

 '''function''' LCSLength(X[1..m], Y[1..n])
     C = array(0..m, 0..n)
     '''for''' i := 0..m
         C[i,0] = 0
     '''for''' j := 0..n
         C[0,j] = 0
     '''for''' i := 1..m
         '''for''' j := 1..n
             '''if''' X[i] = Y[j]
                 C[i,j] := C[i-1,j-1] + 1
             '''else'''
                 C[i,j] := max(C[i,j-1], C[i-1,j])
     '''return''' C[m,n]

Alternatively, [[memoization]] could be used.

=== Reading out an LCS ===
The following function [[backtracking|backtracks]] the choices taken when computing the <code>C</code> table. If the last characters in the prefixes are equal, they must be in an LCS. If not, check which of dropping <math>x_i</math> or <math>y_j</math> gave the longer LCS, and make the same choice. Just choose one if they were equally long. Call the function with <code>i=m</code> and <code>j=n</code>.

 '''function''' backtrack(C[0..m,0..n], X[1..m], Y[1..n], i, j)
     '''if''' i = 0 '''or''' j = 0
         '''return''' ""
     '''if''' X[i] = Y[j]
         '''return''' backtrack(C, X, Y, i-1, j-1) + X[i]
     '''if''' C[i,j-1] > C[i-1,j]
         '''return''' backtrack(C, X, Y, i, j-1)
     '''return''' backtrack(C, X, Y, i-1, j)

=== Reading out all LCSs ===
If choosing <math>x_i</math> and <math>y_j</math> would give an equally long result, read out both resulting subsequences. This is returned as a set by this function. Notice that this function is not polynomial, as it might branch in almost every step if the strings are similar.
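For concreteness, the set-valued readout described above can be sketched in Python, with the length table computed first (function names are illustrative, not from any library):

```python
def lcs_all(x, y):
    """Return the set of all distinct LCSs of x and y (may be exponential)."""
    m, n = len(x), len(y)
    # C[i][j] holds the LCS length of the prefixes x[:i] and y[:j]
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            C[i][j] = (C[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1]
                       else max(C[i][j - 1], C[i - 1][j]))

    def backtrack_all(i, j):
        if i == 0 or j == 0:
            return {""}
        if x[i - 1] == y[j - 1]:
            return {z + x[i - 1] for z in backtrack_all(i - 1, j - 1)}
        r = set()
        # follow every direction that preserves the optimal length
        if C[i][j - 1] >= C[i - 1][j]:
            r |= backtrack_all(i, j - 1)
        if C[i - 1][j] >= C[i][j - 1]:
            r |= backtrack_all(i - 1, j)
        return r

    return backtrack_all(m, n)

print(sorted(lcs_all("AGCAT", "GAC")))  # ['AC', 'GA', 'GC']
```

On the worked example of (AGCAT) and (GAC), this recovers exactly the three subsequences found in the completed table; on two similar strings the recursion can branch at nearly every cell, which is the exponential behavior noted above.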
 '''function''' backtrackAll(C[0..m,0..n], X[1..m], Y[1..n], i, j)
     '''if''' i = 0 '''or''' j = 0
         '''return''' {""}
     '''if''' X[i] = Y[j]
         '''return''' {Z + X[i] '''for all''' Z '''in''' backtrackAll(C, X, Y, i-1, j-1)}
     R := {}
     '''if''' C[i,j-1] ≥ C[i-1,j]
         R := backtrackAll(C, X, Y, i, j-1)
     '''if''' C[i-1,j] ≥ C[i,j-1]
         R := R ∪ backtrackAll(C, X, Y, i-1, j)
     '''return''' R

=== Print the diff ===
This function will backtrack through the C matrix, and print the [[diff]] between the two sequences. Notice that you will get a different answer if you exchange <code>≥</code> and <code><</code> with <code>></code> and <code>≤</code> below.

 '''function''' printDiff(C[0..m,0..n], X[1..m], Y[1..n], i, j)
     '''if''' i > 0 '''and''' j > 0 '''and''' X[i] = Y[j]
         printDiff(C, X, Y, i-1, j-1)
         print "  " + X[i]
     '''else if''' j > 0 '''and''' (i = 0 '''or''' C[i,j-1] ≥ C[i-1,j])
         printDiff(C, X, Y, i, j-1)
         print "+ " + Y[j]
     '''else if''' i > 0 '''and''' (j = 0 '''or''' C[i,j-1] < C[i-1,j])
         printDiff(C, X, Y, i-1, j)
         print "- " + X[i]
     '''else'''
         print ""

=== Example ===
Let <math>X</math> be “<code>XMJYAUZ</code>” and <math>Y</math> be “<code>MZJAWXU</code>”. The longest common subsequence between <math>X</math> and <math>Y</math> is “<code>MJAU</code>”. The table <code>C</code> shown below, which is generated by the function <code>LCSLength</code>, shows the lengths of the longest common subsequences between prefixes of <math>X</math> and <math>Y</math>. The <math>i</math>th row and <math>j</math>th column shows the length of the LCS between <math>X_{1..i}</math> and <math>Y_{1..j}</math>.

{| class="wikitable" style="text-align: center;"
|-
! colspan="2" rowspan="2" |
! 0 !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7
|-
! ε !! M !! Z !! J !! A !! W !! X !! U
|-
! 0 !! ε
| style="background:yellow" | '''0''' || 0 || 0 || 0 || 0 || 0 || 0 || 0
|-
! 1 !! X
| style="background:yellow" | 0 || 0 || 0 || 0 || 0 || 0 || 1 || 1
|-
! 2 !! M
| 0 || style="background:yellow" | '''1''' || style="background:yellow" | 1 || 1 || 1 || 1 || 1 || 1
|-
! 3 !! J
| 0 || 1 || 1 || style="background:yellow" | '''2''' || 2 || 2 || 2 || 2
|-
! 4 !! Y
| 0 || 1 || 1 || style="background:yellow" | 2 || 2 || 2 || 2 || 2
|-
! 5 !! A
| 0 || 1 || 1 || 2 || style="background:yellow" | '''3''' || style="background:yellow" | 3 || style="background:yellow" | 3 || 3
|-
! 6 !! U
| 0 || 1 || 1 || 2 || 3 || 3 || 3 || style="background:yellow" | '''4'''
|-
! 7 !! Z
| 0 || 1 || 2 || 2 || 3 || 3 || 3 || style="background:yellow" | 4
|}

The <span style="background: yellow">highlighted</span> numbers show the path the function <code>backtrack</code> would follow from the bottom right to the top left corner, when reading out an LCS. If the current symbols in <math>X</math> and <math>Y</math> are equal, they are part of the LCS, and we go both up and left (shown in '''bold'''). If not, we go up or left, depending on which cell has the higher number. This corresponds to either taking the LCS between <math>X_{1..i-1}</math> and <math>Y_{1..j}</math>, or <math>X_{1..i}</math> and <math>Y_{1..j-1}</math>.

== Code optimization ==
Several optimizations can be made to the algorithm above to speed it up for real-world cases.

=== Reduce the problem set ===
The C matrix in the naive algorithm [[quadratic growth|grows quadratically]] with the lengths of the sequences. For two 100-item sequences, a 10,000-item matrix would be needed, and 10,000 comparisons would need to be done. In most real-world cases, especially source code diffs and patches, the beginnings and ends of files rarely change, and almost certainly not both at the same time. If only a few items have changed in the middle of the sequence, the beginning and end can be eliminated. This reduces not only the memory requirements for the matrix, but also the number of comparisons that must be done.
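The elimination of unchanged beginnings and ends described above can be sketched in Python, assuming 0-based indexing (the helper name is illustrative):

```python
def trim_common_ends(x, y):
    """Strip the common prefix and suffix before running the LCS DP."""
    start = 0
    m_end, n_end = len(x), len(y)
    # trim off the matching items at the beginning
    while start < m_end and start < n_end and x[start] == y[start]:
        start += 1
    # trim off the matching items at the end
    while start < m_end and start < n_end and x[m_end - 1] == y[n_end - 1]:
        m_end -= 1
        n_end -= 1
    # the DP then only needs to run on the trimmed middles
    return start, x[start:m_end], y[start:n_end]

start, xs, ys = trim_common_ends("the quick fox", "the slow fox")
print(start, xs, ys)  # 4 quick slow
```

Only the trimmed middles ("quick" and "slow" here) need the quadratic matrix; for identical inputs nothing remains to compare at all.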
 '''function''' LCS(X[1..m], Y[1..n])
     start := 1
     m_end := m
     n_end := n
     ''trim off the matching items at the beginning''
     '''while''' start ≤ m_end '''and''' start ≤ n_end '''and''' X[start] = Y[start]
         start := start + 1
     ''trim off the matching items at the end''
     '''while''' start ≤ m_end '''and''' start ≤ n_end '''and''' X[m_end] = Y[n_end]
         m_end := m_end - 1
         n_end := n_end - 1
     C = array(start-1..m_end, start-1..n_end)
     ''only loop over the items that have changed''
     '''for''' i := start..m_end
         '''for''' j := start..n_end
             ''the algorithm continues as before ...''

In the best-case scenario, a sequence with no changes, this optimization would eliminate the need for the C matrix. In the worst-case scenario, a change to the very first and last items in the sequence, only two additional comparisons are performed.

=== Reduce the comparison time ===
Most of the time taken by the naive algorithm is spent performing comparisons between items in the sequences. For textual sequences such as source code, lines should be viewed as the sequence elements instead of single characters. This can mean comparisons of relatively long strings for each step in the algorithm. Two optimizations can be made that can help to reduce the time these comparisons consume.

=== Reduce strings to hashes ===
A [[hash function]] or [[checksum]] can be used to reduce the size of the strings in the sequences. That is, for source code where the average line is 60 or more characters long, the hash or checksum for that line might be only 8 to 40 characters long. Additionally, the randomized nature of hashes and checksums makes it likely that comparisons will short-circuit faster, as lines of source code will rarely be changed at the beginning.

There are three primary drawbacks to this optimization. First, an amount of time needs to be spent beforehand to precompute the hashes for the two sequences. Second, additional memory needs to be allocated for the new hashed sequences.
However, in comparison to the naive algorithm used here, both of these drawbacks are relatively minimal. The third drawback is that of [[hash collision|collisions]]. Since the checksum or hash is not guaranteed to be unique, there is a small chance that two different items could be reduced to the same hash. This is unlikely in source code, but it is possible. A cryptographic hash would therefore be far better suited for this optimization, as its entropy is going to be significantly greater than that of a simple checksum. However, the benefits may not be worth the setup and computational requirements of a cryptographic hash for small sequence lengths.

=== Reduce the required space ===
If only the length of the LCS is required, the matrix can be reduced to a <math>2\times \min(n,m)</math> matrix, or to a <math>\min(m,n)+1</math> vector, as the dynamic programming approach requires only the current and previous columns of the matrix. [[Hirschberg's algorithm]] allows the construction of the optimal sequence itself in the same quadratic time and linear space bounds.<ref>{{cite journal|author-link = Dan Hirschberg|author=Hirschberg, D.
S.|title=A linear space algorithm for computing maximal common subsequences|journal=Communications of the ACM|volume=18|issue=6|year=1975|pages=341–343|doi=10.1145/360825.360861|s2cid=207694727|doi-access=free}}</ref>

=== Reduce cache misses ===
Chowdhury and Ramachandran devised a quadratic-time linear-space algorithm<ref name="CR-06" /><ref name="CLR-08">{{cite journal |last1=Chowdhury |first1=Rezaul |last2=Le |first2=Hai-Son |last3=Ramachandran |first3=Vijaya |title=Cache-oblivious dynamic programming for bioinformatics |journal=IEEE/ACM Transactions on Computational Biology and Bioinformatics |date=July 2010 |volume=7 |issue=3 |pages=495–510 |doi=10.1109/TCBB.2008.94 |pmid=20671320 |s2cid=2532039 |url=https://ieeexplore.ieee.org/document/4609376}}</ref> for finding the LCS length along with an optimal sequence which runs faster than Hirschberg's algorithm in practice due to its superior cache performance.<ref name="CR-06">{{cite book |last1=Chowdhury |first1=Rezaul |last2=Ramachandran |first2=Vijaya |title=Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm - SODA '06 |chapter=Cache-oblivious dynamic programming |date=January 2006 |pages=591–600 |doi=10.1145/1109557.1109622 |isbn=0898716055 |s2cid=9650418 |chapter-url=https://dl.acm.org/doi/10.5555/1109557.1109622}}</ref> The algorithm has an asymptotically optimal cache complexity under the [[cache-oblivious#Idealized cache model|Ideal cache model]].<ref name="FLPR-12">{{cite journal |last1=Frigo |first1=Matteo |last2=Leiserson |first2=Charles E. |last3=Prokop |first3=Harald |last4=Ramachandran |first4=Sridhar |title=Cache-oblivious algorithms |journal=ACM Transactions on Algorithms |date=January 2012 |volume=8 |issue=1 |pages=1–22 |doi=10.1145/2071379.2071383 |url=https://dl.acm.org/doi/10.1145/2071379.2071383}}</ref> The algorithm itself is [[cache-oblivious]],<ref name="FLPR-12" /> meaning that it does not make any choices based on the cache parameters (e.g., cache size and cache line size) of the machine.

=== Further optimized algorithms ===
Several algorithms exist that run faster than the presented dynamic programming approach. One of them is the [[Hunt–Szymanski algorithm]], which typically runs in <math>O((n + r)\log(n))</math> time (for <math>n > m</math>), where <math>r</math> is the number of matches between the two sequences.<ref>{{Cite book | url=https://books.google.com/books?id=mFd_grFyiT4C&q=hunt+szymanski+algorithm&pg=PA132 |title = Pattern Matching Algorithms|isbn = 9780195354348|last1 = Apostolico|first1 = Alberto|last2 = Galil|first2 = Zvi|date = 1997-05-29| publisher=Oxford University Press }}</ref> For problems with a bounded alphabet size, the [[Method of Four Russians]] can be used to reduce the running time of the dynamic programming algorithm by a logarithmic factor.<ref>{{citation | last1 = Masek | first1 = William J. | last2 = Paterson | first2 = Michael S.
| author2-link = Mike Paterson | doi = 10.1016/0022-0000(80)90002-1 | issue = 1 | journal = Journal of Computer and System Sciences | mr = 566639 | pages = 18–31 | title = A faster algorithm computing string edit distances | volume = 20 | year = 1980| doi-access = free | hdl = 1721.1/148933 | hdl-access = free }}.</ref>

== Behavior on random strings ==
{{main|Chvátal–Sankoff constants}}
Beginning with {{harvtxt|Chvátal|Sankoff|1975}},<ref>{{citation | last1 = Chvátal | first1 = Václav | author1-link = Václav Chvátal | last2 = Sankoff | first2 = David | author2-link = David Sankoff | journal = Journal of Applied Probability | mr = 0405531 | pages = 306–315 | title = Longest common subsequences of two random sequences | volume = 12 | issue = 2 | year = 1975 | doi=10.2307/3212444| jstor = 3212444 | s2cid = 250345191 }}.</ref> a number of researchers have investigated the behavior of the longest common subsequence length when the two given strings are drawn randomly from the same alphabet. When the alphabet size is constant, the expected length of the LCS is proportional to the length of the two strings, and the constants of proportionality (depending on alphabet size) are known as the [[Chvátal–Sankoff constants]]. Their exact values are not known, but upper and lower bounds on their values have been proven,<ref>{{citation | last = Lueker | first = George S. | doi = 10.1145/1516512.1516519 | issue = 3 | journal = [[Journal of the ACM]] | mr = 2536132 | at = A17 | title = Improved bounds on the average length of longest common subsequences | volume = 56 | year = 2009| s2cid = 7232681 }}.</ref> and it is known that they grow inversely proportionally to the square root of the alphabet size.<ref>{{citation | last1 = Kiwi | first1 = Marcos | last2 = Loebl | first2 = Martin | last3 = Matoušek | first3 = Jiří | author3-link = Jiří Matoušek (mathematician) | doi = 10.1016/j.aim.2004.10.012 | doi-access=free | issue = 2 | journal = [[Advances in Mathematics]] | mr = 2173842 | pages = 480–498 | title = Expected length of the longest common subsequence for large alphabets | volume = 197 | year = 2005| arxiv = math/0308234 }}.</ref> Simplified mathematical models of the longest common subsequence problem have been shown to be controlled by the [[Tracy–Widom distribution]].<ref>{{citation | last1 = Majumdar | first1 = Satya N. | last2 = Nechaev | first2 = Sergei | doi = 10.1103/PhysRevE.72.020901 | pmid = 16196539 | issue = 2 | journal = Physical Review E | mr = 2177365 | pages = 020901, 4 | title = Exact asymptotic results for the Bernoulli matching model of sequence alignment | volume = 72 | year = 2005| arxiv = q-bio/0410012 | bibcode = 2005PhRvE..72b0901M | s2cid = 11390762 }}.</ref>

== Computing the longest palindromic subsequence of a string ==
For decades, it had been considered folklore that the longest palindromic subsequence of a string could be computed by finding the longest common subsequence between the string and its reversal, using the classical dynamic programming approach introduced by Wagner and Fischer. However, a formal proof of the correctness of this method was only established in 2024 by Brodal, Fagerberg, and Moldrup Rysgaard.<ref>{{cite conference | vauthors=((Brodal, G. S.)), ((Fagerberg, R.)), ((Rysgaard, C.
M.)) | title=On Finding Longest Palindromic Subsequences Using Longest Common Subsequences | pages=35:1–35:16 | date=2024 | publisher=Schloss Dagstuhl – Leibniz-Zentrum für Informatik | doi=10.4230/lipics.esa.2024.35| doi-access=free }}</ref> == See also == * [[Longest increasing subsequence]] * [[Longest alternating subsequence]] * [[Levenshtein distance]] == References == {{reflist}} <!-- Dead note "GJ78": {{cite book|author = [[Michael R. Garey]] and [[David S. Johnson]] | year = 1979 | title = Computers and Intractability: A Guide to the Theory of NP-Completeness|url = https://archive.org/details/computersintract00gare_180 |url-access = limited | publisher = W.H. Freeman | isbn = 0-7167-1045-5| pages = [https://archive.org/details/computersintract00gare_180/page/n238 228]}} A421: SR10. --> == External links == {{Wikibooks|Algorithm implementation|Strings/Longest common subsequence|Longest common subsequence}} * [https://xlinux.nist.gov/dads/HTML/longestCommonSubsequence.html Dictionary of Algorithms and Data Structures: longest common subsequence] * [https://rosettacode.org/wiki/Longest_common_subsequence A collection of implementations of the longest common subsequence in many programming languages] * [https://www.codespeedy.com/find-longest-common-subsequence-in-python/ Find Longest Common Subsequence in Python] <!-- case of fixed number of input strings --> <!-- case of arbitrary number of input strings --> {{Strings |state=collapsed}} {{DEFAULTSORT:Longest Common Subsequence Problem}} [[Category:Problems on strings]] [[Category:Combinatorics]] [[Category:Dynamic programming]] [[Category:Polynomial-time problems]] [[Category:NP-complete problems]]
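The reduction described above for palindromic subsequences (computing the LCS of a string and its reversal via the classical dynamic programming table) can be sketched in Python. This is an illustrative sketch, not code from the cited sources; the function names are chosen for this example.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of a longest common subsequence of a and b.

    dp[i][j] holds the LCS length of the prefixes a[:i] and b[:j],
    filled in the classical O(len(a) * len(b)) dynamic programming manner.
    """
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                # Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop the last character of one string or the other.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def longest_palindromic_subsequence_length(s: str) -> int:
    """LPS length via the reduction LPS(s) = LCS(s, reverse(s))."""
    return lcs_length(s, s[::-1])
```

For the sequences (ABCD) and (ACBAD) from the introduction, `lcs_length("ABCD", "ACBAD")` returns 3, matching the longest common subsequences (ABD) and (ACD).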