{{Short description|Type of algorithm in computer science}}
{{distinguish|Mutual recursion}}
{{essay|date=February 2020}}
In [[computer science]], '''corecursion''' is a type of operation that is [[Dual (category theory)|dual]] to [[recursion (computer science)|recursion]]. Whereas recursion works [[analysis|analytically]], starting from data further from a base case and breaking it down into smaller data, repeating until a base case is reached, corecursion works synthetically, starting from a base case and building up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as it becomes available and needed, to produce further bits of data. A similar but distinct concept is ''[[Generative recursion#Structural versus generative recursion|generative recursion]]'', which may lack a definite "direction" inherent in corecursion and recursion.

Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as [[stream (computing)|stream]]s, so long as they can be produced from simple data (base cases) in a sequence of ''finite'' steps. Where recursion may fail to terminate, never reaching a base case, corecursion starts from a base case and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-''productive''. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example [[recurrence relation]]s such as the factorial.
Corecursion can produce both [[Finite set|finite]] and [[Infinite set|infinite]] [[data structure]]s as results, and may employ [[self-reference|self-referential]] data structures. Corecursion is often used in conjunction with [[lazy evaluation]], to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in [[functional programming]], where corecursion and [[codata (computer science)|codata]] allow [[total language]]s to work with infinite data structures.

== Examples ==
Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using [[imperative programming]], which is done below using the [[Generator (computer programming)|generator]] facility in [[Python (programming language)|Python]]. In these examples [[local variable]]s are used and [[Assignment (computer science)|assigned values]] imperatively (destructively), though these are not necessary in corecursion in [[Purely functional programming|pure functional programming]]. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.

=== Factorial ===
A classic example of recursion is computing the [[factorial]], which is defined recursively by ''0! := 1'' and ''n! := n × (n − 1)!''.

To ''recursively'' compute its result on a given input, a recursive function calls (a copy of) ''itself'' with a different ("smaller" in some way) input and uses the result of this call to construct its result.
The recursive call does the same, unless the ''base case'' has been reached. Thus a [[call stack]] develops in the process. For example, to compute ''fac(3)'', this recursively calls in turn ''fac(2)'', ''fac(1)'', ''fac(0)'' ("winding up" the stack), at which point recursion terminates with ''fac(0) = 1''; then the stack unwinds in reverse order, and the results are calculated on the way back along the call stack to the initial call frame ''fac(3)'', which uses the result of ''fac(2) = 2'' to calculate the final result as ''3 × 2 = 3 × fac(2) =: fac(3)'' and finally returns ''fac(3) = 6''. In this example a function returns a single value.

This stack unwinding can be explicated, defining the factorial ''corecursively'', as an [[iterator]], where one ''starts'' with the case of <math>1 =: 0!</math>, then from this starting value constructs factorial values for increasing numbers ''1, 2, 3...'' as in the above recursive definition with the "time arrow" reversed, as it were, by reading it ''backwards'' as {{nobreak|<math>n! \times (n+1) =: (n+1)!</math>.}} The corecursive algorithm thus defined produces a ''stream'' of ''all'' factorials. This may be concretely implemented as a [[Generator (computer programming)|generator]]. Symbolically, noting that computing the next factorial value requires keeping track of both ''n'' and ''f'' (a previous factorial value), this can be represented as:

:<math>n, f = (0, 1) : (n + 1, f \times (n+1))</math>

or in [[Haskell (programming language)|Haskell]],

<syntaxhighlight lang=haskell>
(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)
</syntaxhighlight>

meaning, "starting from <math>n, f = 0, 1</math>, on each step the next values are calculated as <math>n+1, f \times (n+1)</math>".
This is mathematically equivalent and almost identical to the recursive definition, but the <math>+1</math> emphasizes that the factorial values are being built ''up'', going forwards from the starting case, rather than being computed after first going backwards, ''down'' to the base case, with a <math>-1</math> decrement. The direct output of the corecursive function does not simply contain the factorial <math>n!</math> values, but also includes for each value the auxiliary data of its index ''n'' in the sequence, so that any one specific result can be selected among them all, as and when needed.

There is a connection with [[denotational semantics]], where the [[Denotational semantics#Meanings of recursive programs|denotations of recursive programs]] are built up corecursively in this way.

In Python, a recursive factorial function can be defined as:{{efn|Not validating input data.}}

<syntaxhighlight lang="python">
def factorial(n: int) -> int:
    """Recursive factorial function."""
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
</syntaxhighlight>

This could then be called for example as <code>factorial(5)</code> to compute ''5!''.
A corresponding corecursive generator can be defined as:

<syntaxhighlight lang="python">
def factorials():
    """Corecursive generator."""
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)
</syntaxhighlight>

This generates an infinite stream of factorials in order; a finite portion of it can be produced by:

<syntaxhighlight lang="python">
def n_factorials(n: int):
    k, f = 0, 1
    while k <= n:
        yield f
        k, f = k + 1, f * (k + 1)
</syntaxhighlight>

This could then be called to produce the factorials up to ''5!'' via:

<syntaxhighlight lang="python">
for f in n_factorials(5):
    print(f)
</syntaxhighlight>

If we are only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function:

<syntaxhighlight lang="python">
def nth_factorial(n: int):
    k, f = 0, 1
    while k < n:
        k, f = k + 1, f * (k + 1)
    return f
</syntaxhighlight>

As can be readily seen here, this is practically equivalent (just by substituting <code>return</code> for the only <code>yield</code> there) to the accumulator argument technique for [[tail call|tail recursion]], unwound into an explicit loop. Thus corecursion can be seen as making explicit the iterative computation processes that are embodied by recursive definitions, where applicable.

=== Fibonacci sequence ===
In the same way, the [[Fibonacci sequence]] can be represented as:

:<math>a, b = (0, 1) : (b, a+b)</math>

Because the Fibonacci sequence is a [[recurrence relation]] of order 2, the corecursive relation must track two successive terms, with the <math>(b, -)</math> corresponding to a shift forward by one step, and the <math>(-, a+b)</math> corresponding to computing the next term.
This can then be implemented as follows (using [[parallel assignment]]):

<syntaxhighlight lang="python">
def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b
</syntaxhighlight>

In Haskell,

<syntaxhighlight lang=haskell>
map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )
</syntaxhighlight>

=== Tree traversal ===
[[Tree traversal]] via a [[depth-first]] approach is a classic example of recursion. Dually, [[breadth-first]] traversal can very naturally be implemented via corecursion.

Iteratively, one may traverse a tree by placing its root node in a data structure, then iterating with that data structure while it is non-empty, on each step removing the first node from it and placing the removed node's ''child nodes'' back into that data structure. If the data structure is a [[Stack (abstract data type)|stack]] (LIFO), this yields depth-first traversal, and if the data structure is a [[Queue (abstract data type)|queue]] (FIFO), this yields breadth-first traversal:

:<math>\text{trav}_\text{nn}(t) = \text{aux}_\text{nn}([t])</math>
:<math>\text{aux}_\text{df}([t,\ ...ts]) = \text{val}(t) ;\ \text{aux}_\text{df}([\ ...\text{children}(t),\ ...ts\ ])</math>
:<math>\text{aux}_\text{bf}([t,\ ...ts]) = \text{val}(t) ;\ \text{aux}_\text{bf}([\ ...ts,\ ...\text{children}(t)\ ])</math>

Using recursion, a depth-first traversal of a tree is implemented simply as recursively traversing each of the root node's child nodes in turn. Thus the second child subtree is not processed until the first child subtree is finished. The root node's value is handled separately, whether before the first child is traversed (resulting in pre-order traversal), after the first is finished and before the second (in-order), or after the second child node is finished (post-order) {{--}} assuming the tree is binary, for simplicity of exposition.
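The stack-versus-queue scheme above can be sketched directly in Python; the <code>Node</code> class with a list of <code>children</code> is defined here just for this illustration (a sketch, not part of the article's later binary-tree code):

```python
from collections import deque

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def traverse(root, breadth_first=False):
    """Yield node values: FIFO removal gives breadth-first, LIFO gives depth-first."""
    pending = deque([root])
    while pending:
        node = pending.popleft() if breadth_first else pending.pop()
        yield node.value
        # For the LIFO case, push children reversed so the first child ends up on top,
        # matching the aux_df specification that puts children at the front.
        kids = node.children if breadth_first else reversed(node.children)
        pending.extend(kids)

t = Node(1, [Node(2, [Node(4), Node(5)]), Node(3, [Node(6), Node(7)])])
print(list(traverse(t, breadth_first=True)))  # [1, 2, 3, 4, 5, 6, 7]
print(list(traverse(t)))                      # pre-order depth-first: [1, 2, 4, 5, 3, 6, 7]
```

Only the choice of which end of <code>pending</code> to remove from differs between the two traversal orders, mirroring the single symbol changed between the <math>\text{aux}_\text{df}</math> and <math>\text{aux}_\text{bf}</math> equations.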
The call stack (of the recursive traversal function invocations) corresponds to the stack that would be iterated over with the explicit LIFO structure manipulation mentioned above. Symbolically,

:<math>\text{df}_\text{in}(t) = [\ ...\text{df}_\text{in}(\text{left}(t)),\ \text{val}(t),\ ...\text{df}_\text{in}(\text{right}(t))\ ]</math>
<!-- :<math>\text{df}_\text{pre}(t) = [\ \text{val}(t),\ ...flatMap(\text{df}_\text{pre})(\text{children}(t))\ ]</math> -->
:<math>\text{df}_\text{pre}(t) = [\ \text{val}(t),\ ...\text{df}_\text{pre}(\text{left}(t)),\ ...\text{df}_\text{pre}(\text{right}(t))\ ]</math>
:<math>\text{df}_\text{post}(t) = [\ ...\text{df}_\text{post}(\text{left}(t)),\ ...\text{df}_\text{post}(\text{right}(t)),\ \text{val}(t)\ ]</math>

"Recursion" has two meanings here. First, there are the recursive invocations of the tree traversal functions <math>\text{df}_{xyz}</math>. More pertinently, we need to contend with how the resulting ''list of values'' is built here. Recursive, bottom-up output creation would result in a right-to-left tree traversal. To have it actually performed in the intended left-to-right order, the sequencing would need to be enforced by some extraneous means, or it would be automatically achieved if the output were built in the top-down fashion, i.e. ''corecursively''.
A breadth-first traversal creating its output in the top-down order, corecursively, can also be implemented by starting at the root node, outputting its value,{{efn|Breadth-first traversal, unlike depth-first, is unambiguous, and visits a node value before processing children.}} then breadth-first traversing the subtrees, i.e., passing on the ''whole list'' of subtrees to the next step (not a single subtree, as in the recursive approach), at the next step outputting the values of all of their root nodes, then passing on ''their'' child subtrees, etc.{{efn|Technically, one may define a breadth-first traversal on an ordered, disconnected set of trees: first the root node of each tree, then the children of each tree in turn, then the grandchildren in turn, etc.}} In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example above, where the auxiliary information of the index (which step one was at, ''n'') was pushed forward, in addition to the actual output of ''n''!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically,

:<math>v, ts = ([], [\text{FullTree}]) : (\text{RootValues}(ts), \text{ChildTrees}(ts))</math>

meaning that at each step, one outputs the list of values in this level's nodes, then proceeds to the next level's nodes. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list). This is extensionally equivalent to the <math>\text{aux}_\text{bf}</math> specification above.
In Haskell,

<syntaxhighlight lang=haskell>
concatMap fst ( (\(v, ts) -> (rootValues ts, childTrees ts)) `iterate` ([], [fullTree]) )
</syntaxhighlight>

Notably, given an infinite tree,{{efn|Assume a fixed [[branching factor]] (e.g., binary), or at least a bounded one, and balanced (infinite in every direction).}} the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures. One caveat still remains for trees with an infinite branching factor, which require a more careful interleaving to explore the space better. See [[Dovetailing (computer science)|dovetailing]].

In Python, this can be implemented as follows.{{efn|First defining a tree class, say via:

<syntaxhighlight lang="python">
class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __str__(self):
        return str(self.value)
</syntaxhighlight>

and initializing a tree, say via:

<syntaxhighlight lang="python">
t = Tree(1, Tree(2, Tree(4), Tree(5)), Tree(3, Tree(6), Tree(7)))
</syntaxhighlight>

In this example nodes are labeled in breadth-first order:
1 2 3 4 5 6 7
}}
The usual post-order depth-first traversal can be defined as:{{efn|Intuitively, the function iterates over subtrees (possibly empty), then once these are finished, all that is left is the node itself, whose value is then returned; this corresponds to treating a leaf node as basic.}}

<syntaxhighlight lang="python">
def df(node):
    """Post-order depth-first traversal."""
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)
</syntaxhighlight>

This can then be called by <code>df(t)</code> to print the values of the nodes of the tree
in post-order depth-first order.

The breadth-first corecursive generator can be defined as:{{efn|1=Here the argument (and loop variable) is considered as a whole, possibly infinite tree, represented by (identified with) its root node (tree = root node), rather than as a potential leaf node, hence the choice of variable name.}}

<syntaxhighlight lang="python">
def bf(tree):
    """Breadth-first corecursive generator."""
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list
</syntaxhighlight>

This can then be called to print the values of the nodes of the tree in breadth-first order:

<syntaxhighlight lang="python">
for i in bf(t):
    print(i)
</syntaxhighlight>

==Definition==
{{technical|section|date=November 2010}}
[[Initial and terminal objects|Initial data type]]s can be defined as being the [[least fixpoint]] ([[up to isomorphism]]) of some type equation; the [[isomorphism]] is then given by an [[initial algebra]]. Dually, final (or terminal) data types can be defined as being the [[greatest fixpoint]] of a type equation; the isomorphism is then given by a final [[F-coalgebra|coalgebra]].
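The stream examples above (factorials, Fibonacci) all follow one corecursion scheme: an ''unfold'', which grows a stream from a seed by repeatedly applying a step function that emits one value and a new seed. A minimal Python sketch (the <code>unfold</code> helper is defined here for illustration only, not taken from any library):

```python
from itertools import islice

def unfold(step, seed):
    """Corecursively produce a stream: each step emits a value and a new seed."""
    while True:
        value, seed = step(seed)
        yield value

# The factorial stream: seed (n, f); emit f, then move to (n+1, f*(n+1)).
factorials = unfold(lambda nf: (nf[1], (nf[0] + 1, nf[1] * (nf[0] + 1))), (0, 1))
print(list(islice(factorials, 6)))  # [1, 1, 2, 6, 24, 120]

# The Fibonacci stream: seed (a, b); emit a, then move to (b, a+b).
fibs = unfold(lambda ab: (ab[0], (ab[1], ab[0] + ab[1])), (0, 1))
print(list(islice(fibs, 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

In coalgebraic terms, the step function equips the seed type with a coalgebra structure, and <code>unfold</code> is the induced map into the final data type of streams.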
If the domain of discourse is the [[category of sets]] and [[total function]]s, then final data types may contain infinite, [[Non-well-founded set theory|non-wellfounded]] values, whereas initial types do not.<ref>Barwise and Moss 1996.</ref><ref>Moss and Danner 1997.</ref> On the other hand, if the domain of discourse is the category of [[complete partial order]]s and [[Scott continuity|continuous functions]], which corresponds roughly to the [[Haskell (programming language)|Haskell]] programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.<ref>Smyth and Plotkin 1982.</ref>

Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary [[recursion]] recursively defines functions whose domain is an initial data type.<ref>Gibbons and Hutton 2005.</ref><!--G&H ascribe this definition to Barwise and Moss-->

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.<ref>Doets and van Eijck 2004.</ref> The examples used in this article predate the attempts to define corecursion and explain what it is.

==Discussion==
{{cleanup|section|reason=mix of discussion, examples, and related concepts|date=July 2012}}
The rule for ''primitive corecursion'' on [[codata (computer science)|codata]] is the dual to that for [[primitive recursion]] on data. Instead of descending on the argument by [[pattern-matching]] on its constructors (that ''were called up before'', somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e.
"fields"), we ascend on the result by filling-in its "destructors" (or "observers", that ''will be called afterwards'', somewhere - so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion ''creates'' (potentially infinite) codata, whereas ordinary recursion ''analyses'' (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite. In "Programming with streams in Coq: a case study: the Sieve of Eratosthenes"<ref>Leclerc and Paulin-Mohring, 1994</ref> we find <syntaxhighlight lang="haskell"> hd (conc a s) = a tl (conc a s) = s (sieve p s) = if div p (hd s) then sieve p (tl s) else conc (hd s) (sieve p (tl s)) hd (primes s) = (hd s) tl (primes s) = primes (sieve (hd s) (tl s)) </syntaxhighlight> where primes "are obtained by applying the primes operation to the stream (Enu 2)". Following the above notation, the sequence of primes (with a throwaway 0 prefixed to it) and numbers streams being progressively sieved, can be represented as :<math>p, s = (0, [2..]) : (hd(s), sieve(hd(s),tl(s)))</math> or in Haskell, <syntaxhighlight lang="haskell"> (\(p, s@(h:t)) -> (h, sieve h t)) `iterate` (0, [2..]) </syntaxhighlight> The authors discuss how the definition of <code>sieve</code> is not guaranteed always to be ''productive'', and could become stuck e.g. if called with <code>[5,10..]</code> as the initial stream. Here is another example in Haskell. The following definition produces the list of [[Fibonacci numbers]] in linear time: <syntaxhighlight lang=haskell> fibs = 0 : 1 : zipWith (+) fibs (tail fibs) </syntaxhighlight> This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. 
This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming. This can be done in Python as well:<ref>Hettinger 2009.</ref>

<syntaxhighlight lang="pycon">
>>> from itertools import tee, chain, islice
>>> def fibonacci():
...     def deferred_output():
...         yield from output
...     result, c1, c2 = tee(deferred_output(), 3)
...     paired = (x + y for x, y in zip(c1, islice(c2, 1, None)))
...     output = chain([0, 1], paired)
...     return result
...
>>> print(*islice(fibonacci(), 20), sep=', ')
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181
</syntaxhighlight>

The definition of <code>zipWith</code> can be inlined, leading to this:

<syntaxhighlight lang="haskell">
fibs = 0 : 1 : next fibs
  where
    next (a : t@(b:_)) = (a+b) : next t
</syntaxhighlight>

This example employs a self-referential ''data structure''. Ordinary recursion makes use of self-referential ''functions'', but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:

<syntaxhighlight lang="haskell">
fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)
</syntaxhighlight>

This employs only a self-referential ''function'' to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a [[lazy evaluation|non-strict]] list constructor this guarded recursion gradually produces an indefinitely defined list.

{{anchor|corecursive queue breadth-first tree traversal}}Corecursion need not produce an infinite object; a corecursive queue<ref>Allison 1989; Smith 2009.</ref> is a particularly good example of this phenomenon.
The following definition produces a [[Breadth-first search|breadth-first traversal]] of a binary tree in the ''top-down'' manner, in linear time (already incorporating the flattening mentioned [[#Tree traversal|above]]):

<syntaxhighlight lang="haskell">
data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = tree : ts
  where
    ts = gen 1 (tree : ts)
    gen 0   p                  = []
    gen len (Leaf   _     : p) = gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p
    --       ----read----        ----write-ahead---

bfvalues tree = [v | (Branch v _ _) <- bftrav tree]
</syntaxhighlight>

This definition takes a tree and produces a list of its sub-trees (nodes and leaves). This list serves a dual purpose as both the input queue and the result (<small>''<code>gen len p</code>''</small> produces its output <small>''<code>len</code>''</small> notches ahead of its input back-pointer, <small>''<code>p</code>''</small>, along the list). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.

<!-- Even if the result is finite, this example depends on lazy evaluation due to the use of self-referential data structures. why the above was removed: it isn't true: -->
This Haskell code uses a self-referential data structure, but does not ''essentially'' depend on lazy evaluation. It can be straightforwardly translated into e.g. Prolog, which is not a lazy language. What ''is'' essential is the ability to build a list (used as the queue) in the ''top-down'' manner. For that, Prolog has [[Tail call#Tail recursion modulo cons|tail recursion modulo cons]] (i.e. open-ended lists), which is also emulatable in Scheme, C, etc. using linked lists with a mutable tail sentinel pointer:

<syntaxhighlight lang="prolog">
bftrav( Tree, [Tree|TS]) :- bfgen( 1, [Tree|TS], TS).

bfgen( 0, _, []) :- !.
% 0 entries in the queue -- stop and close the list
bfgen( N, [leaf(_)       |P], TS      ) :- N2 is N-1, bfgen( N2, P, TS).
bfgen( N, [branch(_,L,R) |P], [L,R|TS]) :- N2 is N+1, bfgen( N2, P, TS).
%%         ----read-----      --write-ahead--
</syntaxhighlight>

Another particular example gives a solution to the problem of breadth-first labeling.<ref>Jones and Gibbons 1992.</ref> The function <code>label</code> visits every node in a binary tree in the breadth-first fashion, replacing each label with an integer, each subsequent integer one greater than the last. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.

<syntaxhighlight lang="haskell">
label :: Tree a b -> Tree Int Int
label t = tn
  where
    (tn, ns) = go t (1:ns)

    go :: Tree a b -> [Int] -> (Tree Int Int, [Int])
    go (Leaf   _    ) (i:a) = (Leaf   i      , i+1:a)
    go (Branch _ l r) (i:a) = (Branch i ln rn, i+1:c)
      where
        (ln, b) = go l a
        (rn, c) = go r b
</syntaxhighlight>

Or in Prolog, for comparison,

<syntaxhighlight lang="prolog">
label( Tree, Tn) :- label( Tree, [1|Ns], Tn, Ns).

label( leaf(_),       [I|A], leaf( I),        [I+1|A]).
label( branch(_,L,R), [I|A], branch(I,Ln,Rn), [I+1|C]) :-
    label( L, A, Ln, B),
    label( R, B, Rn, C).
</syntaxhighlight>

An [[apomorphism]] (such as an [[anamorphism]], such as [[Unfold (higher-order function)|unfold]]) is a form of corecursion in the same way that a [[paramorphism]] (such as a [[catamorphism]], such as [[Fold (higher-order function)|fold]]) is a form of recursion.

The [[Coq (software)|Coq]] proof assistant supports corecursion and [[coinduction]] using the <code>CoFixpoint</code> command.

== History ==
Corecursion, referred to as ''circular programming'', dates at least to {{Harv|Bird|1984}}, who credits [[John Hughes (computer scientist)|John Hughes]] and [[Philip Wadler]]; more general forms were developed in {{Harv|Allison|1989}}.
The original motivations included producing more efficient algorithms (allowing a single pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as [[doubly linked list]]s and queues, in functional languages.

== See also ==
* [[Bisimulation]]
* [[Coinduction]]
* [[Recursion]]
* [[Anamorphism]]

== Notes ==
{{notelist}}

== References ==
{{Reflist|2}}
{{refbegin}}
* {{Cite journal | last = Bird | first = Richard Simpson | author-link = Richard Bird (computer scientist) | title = Using circular programs to eliminate multiple traversals of data | doi = 10.1007/BF00264249 | journal = [[Acta Informatica]] | volume = 21 | issue = 3 | pages = 239–250 | year = 1984 | s2cid = 27392591 }}
* {{cite journal | doi = 10.1002/spe.4380190202 | first = Lloyd | last = Allison | date = April 1989 | title = Circular Programs and Self-Referential Structures | url = http://www.csse.monash.edu.au/~lloyd/tildeFP/1989SPE/ | journal = Software: Practice and Experience | volume = 19 | issue = 2 | pages = 99–109 | s2cid = 21298473 | arxiv = 2403.01866 }}
* {{cite tech report | author = Geraint Jones and [[Jeremy Gibbons]] | title = Linear-time breadth-first tree algorithms: An exercise in the arithmetic of folds and zips | institution = Dept of Computer Science, University of Auckland | year = 1992 | url = http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.5446 }}
* {{cite book |author1=Jon Barwise |author2=Lawrence S. Moss |title=Vicious Circles |url=http://www.press.uchicago.edu/presssite/metadata.epl?mode=synopsis&bookkey=3630257 |publisher=Center for the Study of Language and Information |date=June 1996 |isbn=978-1-57586-009-1 |author1-link=Jon Barwise |access-date=2011-01-24 |archive-url=https://web.archive.org/web/20100621142601/http://www.press.uchicago.edu/presssite/metadata.epl?mode=synopsis&bookkey=3630257 |archive-date=2010-06-21 |url-status=dead }}
* {{cite journal | doi = 10.1093/jigpal/5.2.231 |author1=Lawrence S. Moss |author2=Norman Danner | title = On the Foundations of Corecursion | journal = Logic Journal of the IGPL | volume = 5 | issue = 2 | pages = 231–257 | year = 1997 | citeseerx = 10.1.1.40.4243 }}
* {{cite book |author1=Kees Doets |author2=Jan van Eijck | title = The Haskell Road to Logic, Maths, and Programming | url = http://homepages.cwi.nl/~jve/HR/ | publisher = King's College Publications | date = May 2004 | isbn = 978-0-9543006-9-2 }}
* {{cite journal | author = David Turner | date = 2004-07-28 | title = Total Functional Programming | url = http://www.jucs.org/jucs_10_7/total_functional_programming | journal = [[Journal of Universal Computer Science]] | volume = 10 | issue = 7 | pages = 751–768 | doi = 10.3217/jucs-010-07-0751 | author-link = David Turner (computer scientist) }}
* {{cite journal |author1=Jeremy Gibbons |author2=Graham Hutton | title = Proof methods for corecursive programs | journal = [[Fundamenta Informaticae]] | volume = 66 | issue = 4 | pages = 353–366 | date = April 2005 | url = http://www.cs.nott.ac.uk/~gmh/bib.html#corecursion }}
* {{citation | author = Leon P. Smith | date = 2009-07-29 | title = Lloyd Allison's Corecursive Queues: Why Continuations Matter | url = http://themonadreader.wordpress.com/2009/07/29/issue-14/ | journal = The Monad Reader | issue = 14 | pages = 37–68 }}
* {{cite web | author = Raymond Hettinger | title = Recipe 576961: Technique for cyclical iteration | url = http://code.activestate.com/recipes/576961/ | date = 2009-11-19 }}
* {{cite journal | author = M. B. Smyth and [[Gordon Plotkin|G. D. Plotkin]] | year = 1982 | title = The Category-Theoretic Solution of Recursive Domain Equations | journal = [[SIAM Journal on Computing]] | volume = 11 | issue = 4 | pages = 761–783 | doi = 10.1137/0211062 | s2cid = 8517995 | url = http://wrap.warwick.ac.uk/46312/1/WRAP_Smyth_cs-rr-014.pdf }}
* {{cite book |author1=Leclerc, Francois |author2=Paulin-Mohring, Christine |author2-link=Christine Paulin-Mohring | title = Programming with Streams in Coq: A Case Study: the Sieve of Eratosthenes | series = Types for Proofs and Programs: International Workshop TYPES '93 | year = 1993 | isbn = 978-3-540-58085-0 | pages = 191–212 | url = http://dl.acm.org/citation.cfm?id=189973.189981 | publisher = Springer-Verlag New York, Inc. }}
{{refend}}

[[Category:Theoretical computer science]]
[[Category:Self-reference]]
[[Category:Articles with example Haskell code]]
[[Category:Articles with example Python (programming language) code]]
[[Category:Functional programming]]
[[Category:Category theory]]
[[Category:Recursion]]