Unification (computer science)


In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x,y,z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution.

Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and lambdaProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis.

Formal definition

A unification problem is a finite set E = { l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where the li and ri are in the set <math>T</math> of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory.

If the right side of each equation is closed (no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern.<ref>Template:Cite book</ref>

Prerequisites

Formally, a unification approach presupposes

  • An infinite set <math>V</math> of variables. For higher-order unification, it is convenient to choose <math>V</math> disjoint from the set of lambda-term bound variables.
  • A set <math>T</math> of terms such that <math>V \subseteq T</math>. For first-order unification, <math>T</math> is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification <math>T</math> consists of first-order terms and lambda terms (terms containing some higher-order variables).
  • A mapping <math>\text{vars}\colon T \rightarrow</math> <math>\mathbb{P}</math><math>(V)</math>, assigning to each term <math>t</math> the set <math>\text{vars}(t) \subsetneq V</math> of free variables occurring in <math>t</math>.
  • A theory or equivalence relation <math>\equiv</math> on <math>T</math>, indicating which terms are considered equal. For first-order E-unification, <math>\equiv</math> reflects the background knowledge about certain function symbols; for example, if <math>\oplus</math> is considered commutative, <math>t\equiv u</math> if <math>u</math> results from <math>t</math> by swapping the arguments of <math>\oplus</math> at some (possibly all) occurrences. <ref group=note>E.g. a ⊕ (b ⊕ f(x)) ≡ a ⊕ (f(x) ⊕ b) ≡ (b ⊕ f(x)) ⊕ a ≡ (f(x) ⊕ b) ⊕ a</ref> In the most typical case that there is no background knowledge at all, only literally, or syntactically, identical terms are considered equal. In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually <math>t\equiv u</math> if <math>t</math> and <math>u</math> are alpha equivalent.

As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is considered also commutative, has any substitution at all as a solution.

As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }.

Substitution

A substitution is a mapping <math>\sigma: V\rightarrow T</math> from variables to terms; the notation <math> \{x_1\mapsto t_1, ..., x_k \mapsto t_k\}</math> refers to a substitution mapping each variable <math>x_i</math> to the term <math>t_i</math>, for <math>i=1,...,k</math>, and every other variable to itself; the <math>x_i</math> must be pairwise distinct. Applying that substitution to a term <math>t</math> is written in postfix notation as <math>t \{x_1 \mapsto t_1, ..., x_k \mapsto t_k\}</math>; it means to (simultaneously) replace every occurrence of each variable <math>x_i</math> in the term <math>t</math> by <math>t_i</math>. The result <math>t\tau</math> of applying a substitution <math>\tau</math> to a term <math>t</math> is called an instance of that term <math>t</math>. As a first-order example, applying the substitution <math>\{ x \mapsto h(a,y), z \mapsto b \}</math> to the term

<math>f(\textbf{x}, a, g(\textbf{z}), y)</math>
yields
<math>f(\textbf{h}(\textbf{a}, \textbf{y}), a, g(\textbf{b}), y).</math>
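Substitution application can be sketched in a few lines of Python. This is an illustrative encoding, not part of the article's formalism: a variable is represented as a string, and a term f(t1,...,tk) as a tuple ("f", t1, ..., tk), with a constant as a 1-tuple like ("a",).

```python
def apply_subst(term, subst):
    """Simultaneously replace every occurrence of each bound variable by its image."""
    if isinstance(term, str):                      # a variable
        return subst.get(term, term)               # unbound variables map to themselves
    name, *args = term
    return (name, *(apply_subst(a, subst) for a in args))

# the article's example: applying {x ↦ h(a,y), z ↦ b} to f(x, a, g(z), y)
t = ("f", "x", ("a",), ("g", "z"), "y")
sigma = {"x": ("h", ("a",), "y"), "z": ("b",)}
print(apply_subst(t, sigma))
# the encoding of f(h(a,y), a, g(b), y)
```

Note that the replacement is simultaneous: the y inside h(a,y) is not rewritten further, and the free y in the original term stays untouched because sigma does not bind it.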

Generalization, specialization

If a term <math>t</math> has an instance equivalent to a term <math>u</math>, that is, if <math>t\sigma \equiv u</math> for some substitution <math>\sigma</math>, then <math>t</math> is called more general than <math>u</math>, and <math>u</math> is called more special than, or subsumed by, <math>t</math>. For example, <math>x\oplus a</math> is more general than <math>a\oplus b</math> if ⊕ is commutative, since then <math>(x\oplus a) \{x\mapsto b\} = b\oplus a\equiv a\oplus b</math>.

If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other. For example, <math>f(x_1, a, g(z_1), y_1)</math> is a variant of <math>f(x_2, a, g(z_2), y_2)</math>, since <math display="block">f(x_1, a, g(z_1), y_1) \{x_1 \mapsto x_2, y_1 \mapsto y_2, z_1 \mapsto z_2\} = f(x_2, a, g(z_2), y_2) </math> and <math display="block">f(x_2, a, g(z_2), y_2) \{x_2 \mapsto x_1, y_2 \mapsto y_1, z_2 \mapsto z_1\} = f(x_1, a, g(z_1), y_1).</math> However, <math>f(x_1, a, g(z_1), y_1)</math> is not a variant of <math>f(x_2, a, g(x_2), x_2)</math>, since no substitution can transform the latter term into the former one. The latter term is therefore properly more special than the former one.

For arbitrary <math>\equiv</math>, a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always <math>x \oplus x \equiv x</math>, then the term <math>x\oplus y</math> is more general than <math>z</math>,<ref group=note>since <math>(x\oplus y) \{x\mapsto z, y \mapsto z\} = z\oplus z \equiv z</math></ref> and vice versa,<ref group=note>since <math>z \{z\mapsto x\oplus y\} = x\oplus y</math></ref> although <math>x\oplus y</math> and <math>z</math> are of different structure.

A substitution <math>\sigma</math> is more special than, or subsumed by, a substitution <math>\tau</math> if <math>t\sigma</math> is subsumed by <math>t\tau</math> for each term <math>t</math>. We also say that <math>\tau</math> is more general than <math>\sigma</math>. More formally, take a nonempty infinite set <math>V</math> of auxiliary variables such that no equation <math>l_i \doteq r_i</math> in the unification problem contains variables from <math>V</math>. Then a substitution <math>\sigma</math> is subsumed by another substitution <math>\tau</math> if there is a substitution <math>\theta</math> such that for all terms <math>X\notin V</math>, <math>X\sigma \equiv X\tau\theta</math>.<ref name=Vukmirovic/> For instance <math> \{x \mapsto a, y \mapsto a \}</math> is subsumed by <math>\tau = \{x\mapsto y\}</math>, using <math>\theta=\{y\mapsto a\}</math>, but <math>\sigma = \{x\mapsto a\}</math> is not subsumed by <math>\tau = \{x\mapsto y\}</math>, as <math>f(x, y)\sigma = f(a, y)</math> is not an instance of <math>f(x, y) \tau = f(y, y)</math>.<ref>Template:Cite book</ref>
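For the syntactic case, the subsumption test between substitutions reduces to one-sided matching: σ is subsumed by τ on a term t exactly if tτ can be instantiated to tσ. The following Python sketch (illustrative encoding: variables as strings, terms as tagged tuples; the function names are mine) reproduces the article's example.

```python
def apply_subst(term, subst):
    """Variables are strings; a term f(t1,...,tk) is a tuple ("f", t1, ..., tk)."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0], *(apply_subst(a, subst) for a in term[1:]))

def match(pattern, term, theta=None):
    """One-sided matching: return theta with apply_subst(pattern, theta) == term, else None."""
    theta = dict(theta or {})
    if isinstance(pattern, str):                   # a variable in the pattern
        if pattern in theta and theta[pattern] != term:
            return None                            # same variable, conflicting bindings
        theta[pattern] = term
        return theta
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
        return None                                # pattern structure not present in term
    for p, t in zip(pattern[1:], term[1:]):
        theta = match(p, t, theta)
        if theta is None:
            return None
    return theta

t = ("f", "x", "y")
tau = {"x": "y"}
# {x ↦ a, y ↦ a} is subsumed by tau = {x ↦ y}: f(y,y) matches onto f(a,a)
print(match(apply_subst(t, tau), apply_subst(t, {"x": ("a",), "y": ("a",)})))
# but {x ↦ a} is not: f(y,y) does not match onto f(a,y)
print(match(apply_subst(t, tau), apply_subst(t, {"x": ("a",)})))
```

The first call succeeds with θ = { y ↦ a }, mirroring the text; the second fails because y would have to map to both a and y.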

Solution set

A substitution σ is a solution of the unification problem E if liσ ≡ riσ for <math>i = 1, ..., n</math>. Such a substitution is also called a unifier of E. For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution.

For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable.

The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members.<ref>Template:Cite journal</ref> Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible.<ref name=Vukmirovic/> For first-order syntactical unification, Martelli and Montanari<ref name="Martelli.Montanari.1982">Template:Cite journal</ref> gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier.

Syntactic unification of first-order terms

File:Triangle diagram of syntactic unification svg.svg
Schematic triangle diagram of syntactically unifying terms t1 and t2 by a substitution σ

Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each solvable unification problem { l1 ≐ r1, ..., ln ≐ rn } has a complete, and obviously minimal, singleton solution set { σ }. Its member σ is called the most general unifier (mgu) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. l1σ = r1σ ∧ ... ∧ lnσ = rnσ. Any unifier of the problem is subsumed<ref group=note>formally: each unifier τ satisfies τ = σρ for some substitution ρ</ref> by the mgu σ. The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical unification problem, then S1 = { σ1 } and S2 = { σ2 } for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem.

For example, the unification problem { x ≐ z, y ≐ f(x) } has a unifier { x ↦ z, y ↦ f(z) }, because

x { x ↦ z, y ↦ f(z) } = z = z { x ↦ z, y ↦ f(z) } , and
y { x ↦ z, y ↦ f(z) } = f(z) = f(x) { x ↦ z, y ↦ f(z) } .

This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers.

As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different.

Unification algorithms

Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930.<ref>J. Herbrand: Recherches sur la théorie de la démonstration. Travaux de la société des Sciences et des Lettres de Varsovie, Class III, Sciences Mathématiques et Physiques, 33, 1930.</ref><ref>Template:Cite thesis Here: p.96-97</ref><ref name="HerbrandLectures">Template:Cite report Here: p.56</ref> But most authors attribute the first unification algorithm to John Alan Robinson.<ref name="Robinson.1965">Template:Cite journal; Here: sect.5.8, p.32</ref><ref>Template:Cite journal</ref><ref group="note">Robinson used first-order syntactical unification as a basic building block of his resolution procedure for first-order logic, a great step forward in automated reasoning technology, as it eliminated one source of combinatorial explosion: searching for instantiation of terms.<ref>Template:Cite book Here: Introduction of sect.3.3.3 "Unification", p.72.</ref></ref> Robinson's algorithm had worst-case exponential behavior in both time and space.<ref name="HerbrandLectures"/><ref name="Champeaux">Template:Cite journal</ref> Numerous authors have proposed more efficient unification algorithms.<ref>Per Template:Harvtxt.</ref>

The following algorithm is commonly presented and originates from Template:Harvtxt.<ref group=note>Alg.1, p.261. Their rule (a) corresponds to rule swap here, (b) to delete, (c) to both decompose and conflict, and (d) to both eliminate and check.</ref> Given a finite set <math>G = \{ s_1 \doteq t_1, ..., s_n \doteq t_n \}</math> of potential equations, the algorithm applies rules to transform it to an equivalent set of equations of the form { x1u1, ..., xmum } where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi. A set of this form can be read as a substitution. If there is no solution the algorithm terminates with ⊥; other authors use "Ω", or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G {xt}. For simplicity, constant symbols are regarded as function symbols having zero arguments.

<math>G \cup \{ t \doteq t \}</math> <math>\Rightarrow</math> <math>G</math>     delete
<math>G \cup \{ f(s_0, ..., s_k) \doteq f(t_0, ..., t_k) \}</math> <math>\Rightarrow</math> <math>G \cup \{ s_0 \doteq t_0, ..., s_k \doteq t_k \}</math>     decompose
<math>G \cup \{ f(s_0, \ldots,s_k) \doteq g(t_0,...,t_m) \}</math> <math>\Rightarrow</math> <math>\bot</math> if <math>f \neq g</math> or <math>k \neq m</math>     conflict
<math>G \cup \{ f(s_0,...,s_k) \doteq x \}</math> <math>\Rightarrow</math> <math>G \cup \{ x \doteq f(s_0,...,s_k) \}</math>     swap
<math>G \cup \{ x \doteq t \}</math> <math>\Rightarrow</math> <math>G\{x \mapsto t\} \cup \{ x \doteq t \}</math> if <math>x \not\in \text{vars}(t)</math> and <math>x \in \text{vars}(G)</math>     eliminate<ref group="note">Although the rule keeps xt in G, it cannot loop forever since its precondition xvars(G) is invalidated by its first application. More generally, the algorithm is guaranteed to terminate always, see below.</ref>
<math>G \cup \{ x \doteq f(s_0,...,s_k) \}</math> <math>\Rightarrow</math> <math>\bot</math> if <math>x \in \text{vars}(f(s_0,...,s_k))</math>     check
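The six rules can be run directly. The Python sketch below is an illustrative encoding (not from the article): a variable is a string, a term f(t1,...,tk) is a tuple ("f", t1, ..., tk), and eliminated equations x ≐ t are moved into a solved list rather than kept in G, which has the same effect as the eliminate rule.

```python
def apply_subst(term, subst):
    if isinstance(term, str):                        # a variable
        return subst.get(term, term)
    return (term[0], *(apply_subst(a, subst) for a in term[1:]))

def vars_of(term):
    """The set of variables occurring in a term."""
    if isinstance(term, str):
        return {term}
    s = set()
    for a in term[1:]:
        s |= vars_of(a)
    return s

def unify(equations):
    """Rule-based syntactic unification; returns an mgu as a dict, or None for ⊥."""
    eqs = list(equations)
    solved = []                                      # equations x ≐ t in solved form
    while eqs:
        s, t = eqs.pop()
        if s == t:                                   # delete
            continue
        if not isinstance(s, str) and not isinstance(t, str):
            if s[0] != t[0] or len(s) != len(t):     # conflict: f ≠ g or arity mismatch
                return None
            eqs.extend(zip(s[1:], t[1:]))            # decompose
            continue
        if not isinstance(s, str):                   # swap so that s is a variable
            s, t = t, s
        if s in vars_of(t):                          # check (occurs check)
            return None
        repl = {s: t}                                # eliminate: substitute s everywhere
        eqs = [(apply_subst(l, repl), apply_subst(r, repl)) for l, r in eqs]
        solved = [(x, apply_subst(u, repl)) for x, u in solved]
        solved.append((s, t))
    return dict(solved)

print(unify([("x", "z"), ("y", ("f", "x"))]))   # the article's example { x ≐ z, y ≐ f(x) }
print(unify([("x", ("f", "x"))]))               # None: rejected by the occurs check
print(unify([(("g", "x", "x"), ("f", "y"))]))   # None: conflict, g ≠ f
```

On the article's example the sketch computes the mgu { x ↦ z, y ↦ f(z) } from the Examples section above.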

Occurs check

An attempt to unify a variable x with a term containing x as a strict subterm x ≐ f(..., x, ...) would lead to an infinite term as solution for x, since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t). Since that additional check, called occurs check, slows down the algorithm, it is omitted e.g. in most Prolog systems. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees, see #Unification of infinite terms below.

Proof of termination

For the proof of termination of the algorithm consider a triple <math>\langle n_{var}, n_{lhs}, n_{eqn}\rangle</math> where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations. When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase nvar again. When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears. Applying any of the remaining rules delete or check can't increase nlhs, but decreases neqn. Hence, any rule application decreases the triple <math>\langle n_{var}, n_{lhs}, n_{eqn}\rangle</math> with respect to the lexicographical order, which is possible only a finite number of times.

Conor McBride observes<ref>Template:Cite journal</ref> that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary.

Examples of syntactic unification of first-order terms

In the Prolog syntactical convention a symbol starting with an upper case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x,y,z are used as variables, f,g as function symbols, and a,b as constants.

Prolog notation Mathematical notation Unifying substitution Explanation
a = a { a = a } {} Succeeds. (tautology)
a = b { a = b } a and b do not match
X = X { x = x } {} Succeeds. (tautology)
a = X { a = x } { x ↦ a } x is unified with the constant a
X = Y { x = y } { x ↦ y } x and y are aliased
f(a,X) = f(a,b) { f(a,x) = f(a,b) } { x ↦ b } function and constant symbols match, x is unified with the constant b
f(a) = g(a) { f(a) = g(a) } f and g do not match
f(X) = f(Y) { f(x) = f(y) } { x ↦ y } x and y are aliased
f(X) = g(Y) { f(x) = g(y) } f and g do not match
f(X) = f(Y,Z) { f(x) = f(y,z) } Fails. The f function symbols have different arity
f(g(X)) = f(Y) { f(g(x)) = f(y) } { y ↦ g(x) } Unifies y with the term g(x)
f(g(X),X) = f(Y,a) { f(g(x),x) = f(y,a) } { x ↦ a, y ↦ g(a) } Unifies x with the constant a, and y with the term g(a)
X = f(X) { x = f(x) } should be ⊥ Returns ⊥ in first-order logic and many modern Prolog dialects (enforced by the occurs check). Succeeds in traditional Prolog and in Prolog II, unifying x with the infinite term x=f(f(f(f(...)))).
X = Y, Y = a { x = y, y = a } { x ↦ a, y ↦ a } Both x and y are unified with the constant a
a = Y, X = Y { a = y, x = y } { x ↦ a, y ↦ a } As above (order of equations in set doesn't matter)
X = a, b = X { x = a, b = x } Fails. a and b do not match, so x can't be unified with both
File:Unification exponential blow-up svg.svg
Two terms with an exponentially larger tree for their least common instance. Its dag representation (rightmost, orange part) is still of linear size.

The most general unifier of a syntactic first-order unification problem of size n may have a size exponential in n; the picture shows a pair of terms whose least common instance is exponentially larger than either term. In order to avoid exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees.Template:Refn
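The blow-up can be observed directly on a standard problem family (chosen here for illustration, not taken from the picture): eliminating the chain of equations x1 ≐ f(x0,x0), x2 ≐ f(x1,x1), ..., xn ≐ f(xn-1,xn-1) binds xn to a tree with 2^(n+1) − 1 nodes. Notably, the Python sketch below, like a dag-based unification algorithm, shares the two identical subtrees at every level, so only linearly many distinct objects are actually stored.

```python
def tree_size(t):
    """Number of nodes when the term is written out fully as a tree."""
    return 1 if isinstance(t, str) else 1 + sum(tree_size(a) for a in t[1:])

n = 10
binding = "x0"
for i in range(1, n + 1):
    # eliminate x_i ≐ f(x_{i-1}, x_{i-1}): the binding of x_i doubles in tree size
    binding = ("f", binding, binding)

print(tree_size(binding))   # 2**(n+1) - 1 = 2047 tree nodes for n = 10,
                            # although only n + 1 distinct objects exist in memory
```

Printing the term, by contrast, writes out the full exponential tree, which is exactly the cost a tree-based algorithm pays.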

Application: unification in logic programming

The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment.

In Prolog:

  1. A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check.
  2. Two constants can be unified only if they are identical.
  3. Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior.
  4. Most operations, including +, -, *, /, are not evaluated by =. So for example 1+2 = 3 is not satisfiable because they are syntactically different. The use of integer arithmetic constraints #= introduces a form of E-unification for which these operations are interpreted and evaluated.<ref>Template:Cite web</ref>

Application: type inference

Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed.

Like for Prolog, an algorithm for type inference can be given:

  1. Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
  2. Two type constants unify only if they are the same type.
  3. Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify.
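These three rules are ordinary syntactic unification applied to type expressions. A Python sketch (hypothetical encoding of my own: type variables as strings, type constructors as tagged tuples, so [a] becomes ("list", "a")) reproduces the True : ['x'] failure from above.

```python
def apply_subst(ty, s):
    return s.get(ty, ty) if isinstance(ty, str) else (ty[0], *(apply_subst(a, s) for a in ty[1:]))

def unify(eqs):
    """Syntactic unification on type expressions; None signals a type error."""
    eqs, out = list(eqs), {}
    while eqs:
        a, b = eqs.pop()
        if a == b:
            continue
        if not isinstance(a, str) and not isinstance(b, str):
            if a[0] != b[0] or len(a) != len(b):     # different type constructors
                return None
            eqs.extend(zip(a[1:], b[1:]))            # unify component types recursively
            continue
        if not isinstance(a, str):                   # make a the type variable
            a, b = b, a
        repl = {a: b}           # instantiate a to b (occurs check omitted in this sketch)
        eqs = [(apply_subst(l, repl), apply_subst(r, repl)) for l, r in eqs]
        out = {x: apply_subst(t, repl) for x, t in out.items()}
        out[a] = b
    return out

# (:) :: a -> [a] -> [a] applied to True :: Bool and ['x'] :: [Char]
# requires a ≐ Bool and [a] ≐ [Char] simultaneously — a clash:
print(unify([("a", ("Bool",)), (("list", "a"), ("list", ("Char",)))]))  # None
# by contrast, True : [False] only requires a ≐ Bool and [a] ≐ [Bool]:
print(unify([("a", ("Bool",)), (("list", "a"), ("list", ("Bool",)))]))  # {'a': ('Bool',)}
```

The failing call mirrors the compiler's reasoning: after instantiating a to one of Bool or Char, the remaining equation degenerates to a constructor clash between the two constant types.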

Application: Feature Structure Unification


Unification has been used in different research areas of computational linguistics.<ref>Jonathan Calder, Mike Reape, and Hank Zeevat, An algorithm for generation in unification categorial grammar. In Proceedings of the 4th Conference of the European Chapter of the Association for Computational Linguistics, pages 233-240, Manchester, England (10–12 April), University of Manchester Institute of Science and Technology, 1989.</ref><ref>Graeme Hirst and David St-Onge, Lexical chains as representations of context for the detection and correction of malapropisms, 1998.</ref>

Order-sorted unification

Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother: animal → animal, and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is a dog in turn, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages.

Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 their intersection s1 ∩ s2 to be declared, too: if x1 and x2 is a variable of sort s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 ↦ x, x2 ↦ x }, where x: s1 ∩ s2.<ref>Template:Cite journal</ref> After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby boiling it down an order of magnitude, as many unary predicates turned into sorts.

Smolka generalized order-sorted logic to allow for parametric polymorphism. <ref>Template:Cite conference</ref> In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats.

Schmidt-Schauß generalized order-sorted logic to allow for term declarations. <ref>Template:Cite book</ref> As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even allows one to declare a property of integer addition that could not be expressed by ordinary overloading.

Unification of infinite terms

Background on infinite trees:

Unification algorithm, Prolog II:

Applications:

E-unification

E-unification is the problem of finding solutions to a given set of equations, taking into account some equational background knowledge E. The latter is given as a set of universal equalities. For some particular sets E, equation solving algorithms (a.k.a. E-unification algorithms) have been devised; for others it has been proven that no such algorithms can exist.

For example, if a and b are distinct constants, the equation x ⊕ a ≐ y ⊕ b has no solution with respect to purely syntactic unification, where nothing is known about the operator ⊕. However, if ⊕ is known to be commutative, then the substitution { x ↦ b, y ↦ a } solves the above equation, since

(x ⊕ a) { x ↦ b, y ↦ a }
= b ⊕ a     by substitution application
= a ⊕ b     by commutativity of ⊕
= (y ⊕ b) { x ↦ b, y ↦ a }     by (converse) substitution application

The background knowledge E could state the commutativity of ⊕ by the universal equality "u ⊕ v = v ⊕ u for all u, v".

Particular background knowledge sets E

Used naming conventions
A u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w Associativity of ⊕
C u ⊕ v = v ⊕ u Commutativity of ⊕
Dl u ⊗ (v ⊕ w) = (u ⊗ v) ⊕ (u ⊗ w) Left distributivity of ⊗ over ⊕
Dr (v ⊕ w) ⊗ u = (v ⊗ u) ⊕ (w ⊗ u) Right distributivity of ⊗ over ⊕
I u ⊕ u = u Idempotence of ⊕
Nl n ⊕ u = u     Left neutral element n with respect to ⊕
Nr u ⊕ n = u     Right neutral element n with respect to ⊕

It is said that unification is decidable for a theory, if a unification algorithm has been devised for it that terminates for any input problem. It is said that unification is semi-decidable for a theory, if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem.

Unification is decidable for the following theories:

Unification is semi-decidable for the following theories:

One-sided paramodulation

If there is a convergent term rewriting system R available for E, the one-sided paramodulation algorithm<ref>N. Dershowitz and G. Sivakumar, Solving Goals in Equational Languages, Proc. 1st Int. Workshop on Conditional Term Rewriting Systems, Springer LNCS vol.308, pp. 45–55, 1988</ref> can be used to enumerate all solutions of given equations.

One-sided paramodulation rules
G ∪ { f(s1,...,sn) ≐ f(t1,...,tn) } ; S ⇒ G ∪ { s1 ≐ t1, ..., sn ≐ tn } ; S     decompose
G ∪ { x ≐ t } ; S ⇒ G { x ↦ t } ; S { x ↦ t } ∪ { x ↦ t } if the variable x doesn't occur in t     eliminate
G ∪ { f(s1,...,sn) ≐ t } ; S ⇒ G ∪ { s1 ≐ u1, ..., sn ≐ un, r ≐ t } ; S     if f(u1,...,un) → r is a rule from R     mutate
G ∪ { f(s1,...,sn) ≐ y } ; S ⇒ G ∪ { s1 ≐ y1, ..., sn ≐ yn, y ≐ f(y1,...,yn) } ; S if y1,...,yn are new variables     imitate

Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }).

Example term rewrite system R
1     app(nil,z) → z
2     app(x.y,z) → x.app(y,z)

As an example, a term rewrite system R is used defining the append operator of lists built from cons and nil; where cons(x,y) is written in infix notation as x.y for brevity; e.g. app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing the rewrite rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms. For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R.
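The two app rules can be run directly. The following Python sketch (an illustrative encoding of my own, with cons/nil/atoms as tagged tuples) normalizes ground terms with rules 1 and 2 exactly as in the a.b.c.d.nil derivation above.

```python
# rule 1: app(nil, z) -> z        rule 2: app(x.y, z) -> x.app(y, z)
def normalize(t):
    """Apply rules 1 and 2 innermost-first until no rule applies."""
    if isinstance(t, tuple) and t[0] == "app":
        l, r = normalize(t[1]), normalize(t[2])
        if l == ("nil",):
            return r                                            # rule 1
        if l[0] == "cons":
            return ("cons", l[1], normalize(("app", l[2], r)))  # rule 2
        return ("app", l, r)                                    # stuck, e.g. on a variable
    if isinstance(t, tuple) and t[0] == "cons":
        return ("cons", normalize(t[1]), normalize(t[2]))
    return t

def lst(*names):
    """Build the list n1.n2.....nil from atom names."""
    out = ("nil",)
    for x in reversed(names):
        out = ("cons", (x,), out)
    return out

# app(a.b.nil, c.d.nil) rewrites to a.b.c.d.nil, as in the text
print(normalize(("app", lst("a", "b"), lst("c", "d"))) == lst("a", "b", "c", "d"))
# and app(a.nil, app(nil, a.nil)) rewrites to a.a.nil, checking the solved problem below
print(normalize(("app", lst("a"), ("app", ("nil",), lst("a")))) == lst("a", "a"))
```

Since R is convergent, the normal form is unique, so checking a candidate unifier against the theory E reduces to normalizing both sides and comparing.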

A successful example computation path for the unification problem { app(x,app(y,x)) ≐ a.a.nil } is shown below. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v2, v3, ... are computer-generated variable names for this purpose. In each line, the chosen equation from G is highlighted. Each time the mutate rule is applied, the chosen rewrite rule (1 or 2) is indicated in parentheses. From the last line, the unifying substitution S = { y ↦ nil, x ↦ a.nil } can be obtained. In fact, app(x,app(y,x)) { y ↦ nil, x ↦ a.nil } = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem. A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }; it is not shown here. No other path leads to a success.

Example unifier computation
Used rule   G                                                           S
            { app(x,app(y,x)) ≐ a.a.nil }                               {}
mutate(2)   { x ≐ v2.v3, app(y,x) ≐ v4, v2.app(v3,v4) ≐ a.a.nil }       {}
decompose   { x ≐ v2.v3, app(y,x) ≐ v4, v2 ≐ a, app(v3,v4) ≐ a.nil }    {}
eliminate   { app(y,v2.v3) ≐ v4, v2 ≐ a, app(v3,v4) ≐ a.nil }           { x ↦ v2.v3 }
eliminate   { app(y,a.v3) ≐ v4, app(v3,v4) ≐ a.nil }                    { x ↦ a.v3 }
mutate(1)   { y ≐ nil, a.v3 ≐ v5, v5 ≐ v4, app(v3,v4) ≐ a.nil }         { x ↦ a.v3 }
eliminate   { y ≐ nil, a.v3 ≐ v4, app(v3,v4) ≐ a.nil }                  { x ↦ a.v3 }
eliminate   { a.v3 ≐ v4, app(v3,v4) ≐ a.nil }                           { y ↦ nil, x ↦ a.v3 }
mutate(1)   { a.v3 ≐ v4, v3 ≐ nil, v4 ≐ v6, v6 ≐ a.nil }                { y ↦ nil, x ↦ a.v3 }
eliminate   { a.v3 ≐ v4, v3 ≐ nil, v4 ≐ a.nil }                         { y ↦ nil, x ↦ a.v3 }
eliminate   { a.nil ≐ v4, v4 ≐ a.nil }                                  { y ↦ nil, x ↦ a.nil }
eliminate   { a.nil ≐ a.nil }                                           { y ↦ nil, x ↦ a.nil }
decompose   { a ≐ a, nil ≐ nil }                                        { y ↦ nil, x ↦ a.nil }
decompose   { nil ≐ nil }                                               { y ↦ nil, x ↦ a.nil }
decompose   {}                                                          { y ↦ nil, x ↦ a.nil }
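That S = { y ↦ nil, x ↦ a.nil } really solves the problem can be checked mechanically. The following Python sketch (terms as nested tuples, an assumed encoding) substitutes S into app(x,app(y,x)) and compares normal forms under R:

```python
# Check the computed unifier S = { y ↦ nil, x ↦ a.nil }: after substitution,
# both sides of the equation must have the same R-normal form.
# Assumed encoding: cons is ('.', x, y), nil is 'nil', variables are 'x','y'.

def subst(t, s):
    if isinstance(t, str):
        return s.get(t, t)
    return tuple(subst(a, s) for a in t)

def normalize(t):
    """Rewrite to normal form with app(nil,z) -> z and app(x.y,z) -> x.app(y,z)."""
    if isinstance(t, str):
        return t
    t = tuple(normalize(a) for a in t)         # normalize subterms first
    if t[0] == 'app':
        _, l, z = t
        if l == 'nil':
            return z                           # rule 1
        if isinstance(l, tuple) and l[0] == '.':
            return normalize(('.', l[1], ('app', l[2], z)))  # rule 2
    return t

S = {'y': 'nil', 'x': ('.', 'a', 'nil')}
lhs = ('app', 'x', ('app', 'y', 'x'))
rhs = ('.', 'a', ('.', 'a', 'nil'))
print(normalize(subst(lhs, S)) == normalize(rhs))   # True
```

The second substitution from the text, { y ↦ a.a.nil, x ↦ nil }, passes the same check.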

NarrowingEdit

File:Triangle diagram of narrowing step svg.svg
Triangle diagram of narrowing step s ↝ t at position p in term s, with unifying substitution σ (bottom row), using a rewrite rule Template:Math (top row)

If R is a convergent term rewriting system for E, an alternative to the approach of the previous section is the successive application of "narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step (cf. picture) consists of:

  • choosing a nonvariable subterm of the current term,
  • syntactically unifying it with the left-hand side of a rule from R, and
  • replacing the unified subterm with the instantiated right-hand side of the rule.

Formally, if Template:Math is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm Template:Math is not a variable and is unifiable with Template:Mvar via the mgu Template:Mvar, then Template:Mvar can be narrowed to the term Template:Math, i.e. to the term Template:Mvar, with the subterm at p replaced by Template:Mvar. The situation that s can be narrowed to t is commonly denoted as s ↝ t. Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable.
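A single narrowing step can be sketched in Python as follows. The encoding is an illustrative assumption: variables are strings prefixed with '?', the occurs-check is omitted for brevity, and the rule variables are assumed to be already renamed apart from those of the term:

```python
# Sketch of narrowing: at every nonvariable position, try to unify the
# subterm with a rule's left-hand side and replace it by the instantiated
# right-hand side. Compound terms are tuples ('f', arg1, ...).

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return tuple(subst(a, s) for a in t) if isinstance(t, tuple) else t

def unify(a, b):
    """Syntactic mgu of a and b as a dict, or None (occurs-check omitted)."""
    s, stack = {}, [(a, b)]
    while stack:
        a, b = stack.pop()
        a, b = subst(a, s), subst(b, s)
        if a == b:
            continue
        if is_var(a):
            s[a] = b
        elif is_var(b):
            s[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and len(a) == len(b) and a[0] == b[0]:
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return s

def narrow(t, rules):
    """Yield (narrowed term, mgu) for each nonvariable subterm and rule l -> r."""
    def positions(t, p=()):
        if isinstance(t, tuple):
            yield p, t
            for i, a in enumerate(t[1:], 1):
                yield from positions(a, p + (i,))
    def replace(t, p, new):
        if not p:
            return new
        i = p[0]
        return t[:i] + (replace(t[i], p[1:], new),) + t[i + 1:]
    for p, sub in positions(t):
        for l, r in rules:
            s = unify(sub, l)
            if s is not None:
                yield subst(replace(t, p, r), s), s

# The append system R, with cons written as '.':
R = [
    (('app', 'nil', '?z'), '?z'),                        # rule 1
    (('app', ('.', '?x1', '?y1'), '?z1'),
     ('.', '?x1', ('app', '?y1', '?z1'))),               # rule 2
]

term = ('app', '?x', ('app', '?y', '?x'))
for t2, s in narrow(term, R):
    print(t2)
```

Among the four results for app(x,app(y,x)) is v2.app(v3,app(y,v2.v3)), the first step of the narrowing sequence shown below, with ?x1 and ?y1 in the roles of v2 and v3.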

The above example paramodulation computation corresponds to the following narrowing sequence ("↓" indicating instantiation here):

app(x,app(y,x))
    ↓  x ↦ v2.v3
app(v2.v3,app(y,v2.v3))   ↝   v2.app(v3,app(y,v2.v3))
    ↓  y ↦ nil
v2.app(v3,app(nil,v2.v3))   ↝   v2.app(v3,v2.v3)
    ↓  v3 ↦ nil
v2.app(nil,v2.nil)   ↝   v2.v2.nil

The last term, v2.v2.nil, can be syntactically unified with the original right-hand side term a.a.nil.
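That final unification amounts to the single binding v2 ↦ a, which can be checked directly (tuple encoding assumed, as above):

```python
# Check that the final narrowed term v2.v2.nil matches the goal a.a.nil
# under the substitution v2 ↦ a. Terms are nested tuples: cons is ('.', x, y).

def subst(t, s):
    if isinstance(t, str):
        return s.get(t, t)
    return tuple(subst(a, s) for a in t)

final = ('.', 'v2', ('.', 'v2', 'nil'))
goal  = ('.', 'a', ('.', 'a', 'nil'))
print(subst(final, {'v2': 'a'}) == goal)   # True
```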

The narrowing lemma<ref>Template:Cite book</ref> ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term Template:Math and Template:Math, respectively, such that Template:Math is an instance of Template:Math.

Formally: whenever Template:Math holds for some substitution σ, then there exist terms Template:Math such that Template:Math and Template:Math and Template:Math for some substitution τ.

Higher-order unificationEdit

File:Goldfarb4a svg.svg
In Goldfarb's<ref name="Goldfarb.1981"/> reduction of Hilbert's 10th problem to second-order unifiability, the equation <math>X_1 * X_2 = X_3</math> corresponds to the depicted unification problem, with function variables <math>F_i</math> corresponding to <math>X_i</math> and <math>G</math> fresh.

Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable,<ref name="Goldfarb.1981">Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Claudio Lucchesi: The Undecidability of the Unification Problem for Third Order Languages (Research Report CSRR 2059; Department of Computer Science, University of Waterloo, 1972)</ref> and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the solutions {f ↦ λxyz. d(y,x,c) }, {f ↦ λxyz. d(y,z,c) }, {f ↦ λxyz. d(y,a,c) }, {f ↦ λxyz. d(b,x,c) }, {f ↦ λxyz. d(b,z,c) } and {f ↦ λxyz. d(b,a,c) }. A well-studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm<ref>Gérard Huet: A Unification Algorithm for Typed Lambda-Calculus. Theoretical Computer Science, 1975</ref> that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli–Montanari<ref name="Martelli.Montanari.1982"/> with rules for terms containing higher-order variables) that seems to work sufficiently well in practice. Huet<ref>Gérard Huet: Higher Order Unification 30 Years Later</ref> and Gilles Dowek<ref>Gilles Dowek: Higher-Order Unification and Matching. Handbook of Automated Reasoning 2001: 1009–1062</ref> have written articles surveying this topic.
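The six unifiers listed for { f(a,b,a) ≐ d(b,a,c) } can be checked by modeling each candidate for f as a Python lambda, with d modeled as a tuple constructor; the encoding is only illustrative:

```python
# Verify the six solutions of { f(a,b,a) ≐ d(b,a,c) }: each lambda below
# encodes one binding for f, and applying it to (a,b,a) must yield d(b,a,c).
d = lambda u, v, w: ('d', u, v, w)

solutions = [
    lambda x, y, z: d(y, x, 'c'),
    lambda x, y, z: d(y, z, 'c'),
    lambda x, y, z: d(y, 'a', 'c'),
    lambda x, y, z: d('b', x, 'c'),
    lambda x, y, z: d('b', z, 'c'),
    lambda x, y, z: d('b', 'a', 'c'),
]

target = d('b', 'a', 'c')
print(all(f('a', 'b', 'a') == target for f in solutions))   # True
```

Since several distinct lambdas solve the same equation and none is an instance of another, no single most general unifier exists, which is the point made in the text.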

Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most-general unifier for solvable problems. One such subset is the first-order terms described previously. Higher-order pattern unification, due to Dale Miller,<ref>Template:Cite journal</ref> is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly, pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved.<ref>Template:Cite journal</ref> The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm.<ref name=Vukmirovic>Template:Cite journal</ref>
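The defining restriction of Miller's pattern fragment — a free variable may only be applied to a list of distinct bound variables — can be sketched as a simple membership check; the representation and names below are assumptions, not taken from any cited implementation:

```python
# Sketch of the Miller pattern condition: an application F(a1,...,an) of a
# free metavariable F is a higher-order pattern iff every argument ai is a
# bound variable and no bound variable occurs twice among the arguments.

def is_pattern_application(head, args, bound_vars):
    """head: metavariable name; args: list of argument terms;
    bound_vars: set of variable names bound in the enclosing scope."""
    return (all(isinstance(a, str) and a in bound_vars for a in args)
            and len(set(args)) == len(args))

bound = {'x', 'y', 'z'}
print(is_pattern_application('F', ['x', 'y'], bound))     # True:  F x y
print(is_pattern_application('F', ['x', 'x'], bound))     # False: repeated variable
print(is_pattern_application('F', [('g', 'x')], bound))   # False: non-variable argument
```

Equations whose flexible sides all satisfy this check are decidable and admit most general unifiers, which is what makes the fragment attractive for λProlog and Twelf.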

In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using higher-order unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is Template:Math and the value of R (the semantic representation of the ellipsis) is determined by the equation Template:Math. The process of solving such equations is called higher-order unification.<ref>Template:Cite book</ref>

Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory.<ref>Template:Cite book</ref>

See alsoEdit

NotesEdit

Template:Reflist

ReferencesEdit

Template:Reflist

Further readingEdit