==Historical development==
Denotational semantics originated in the work of [[Christopher Strachey]] and [[Dana Scott]] published in the early 1970s.<ref name="ropas.snu.ac.kr"/><ref name="Research Group Technical Monograph 1971"/> As originally developed by Strachey and Scott, denotational semantics provided the meaning of a computer program as a [[function (mathematics)|function]] that mapped input into output.<ref name="Research Group Technical Monograph 1971"/> To give meanings to [[Recursion|recursively defined]] programs, Scott proposed working with [[Scott continuity|continuous functions]] between [[domain theory|domains]], specifically [[complete partial order]]s.

As described below, work has continued in investigating appropriate denotational semantics for aspects of programming languages such as sequentiality, [[Denotational semantics#Denotational semantics of concurrency|concurrency]], [[Nondeterministic algorithm|non-determinism]] and [[local state]].

Denotational semantics has been developed for modern programming languages that use capabilities like [[Concurrent computing|concurrency]] and [[Exception handling|exceptions]], e.g., [[Concurrent ML]],<ref>John Reppy "Concurrent ML: Design, Application and Semantics" in Springer-Verlag, ''[[Lecture Notes in Computer Science]]'', Vol. 693. 1993</ref> [[Communicating sequential processes|CSP]],<ref name=Roscoe>[[A. W. Roscoe]]. "The Theory and Practice of Concurrency" Prentice-Hall. Revised 2005.</ref> and [[Haskell (programming language)|Haskell]].<ref>[[Simon Peyton Jones]], Alastair Reid, Fergus Henderson, [[Tony Hoare]], and Simon Marlow. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.1525&rep=rep1&type=pdf A semantics for imprecise exceptions]" Conference on Programming Language Design and Implementation. 1999.</ref> The semantics of these languages is compositional in that the meaning of a phrase depends on the meanings of its subphrases. For example, the meaning of the [[Applicative programming language|applicative expression]] <code>f(E1,E2)</code> is defined in terms of the semantics of its subphrases f, E1 and E2. In a modern programming language, E1 and E2 can be evaluated concurrently, and the execution of one of them might affect the other by interacting through [[Object (computer science)|shared objects]], causing their meanings to be defined in terms of each other. Also, E1 or E2 might throw an exception that could [[Abort (computing)|terminate]] the execution of the other one. The sections below describe special cases of the semantics of these modern programming languages.

===Meanings of recursive programs===
In denotational semantics, a program phrase is ascribed a function from an environment (holding the current values of its free variables) to the phrase's denotation. For example, the phrase {{code|n*m}} produces a denotation when provided with an environment that has bindings for its two free variables: {{code|n}} and {{code|m}}. If in the environment {{code|n}} has the value 3 and {{code|m}} has the value 5, then the denotation is 15.<ref name="Research Group Technical Monograph 1971"/>

A function can be represented as a set of [[ordered pair]]s of argument and corresponding result values. For example, the set {(0,1), (4,3)} denotes a function that yields result 1 for argument 0 and result 3 for argument 4, and is undefined otherwise.
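As an informal illustration of this view (not part of the standard formal development), the denotation of {{code|n*m}} can be pictured as a function from environments to values; the names <code>Env</code> and <code>denote_n_times_m</code> below are illustrative only.

<syntaxhighlight lang="cpp">
#include <cassert>
#include <map>
#include <string>

// An environment holds the current values of the free variables of a phrase.
using Env = std::map<std::string, int>;

// The denotation of the phrase "n*m": a function from environments to values.
int denote_n_times_m(const Env& env) {
    return env.at("n") * env.at("m");
}

int main() {
    Env env = {{"n", 3}, {"m", 5}};
    assert(denote_n_times_m(env) == 15);  // in this environment the denotation is 15
}
</syntaxhighlight>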
Consider for example the [[factorial]] function, which might be defined recursively as:

<syntaxhighlight lang="c">
int factorial(int n) {
    if (n == 0) return 1;
    else return n * factorial(n - 1);
}
</syntaxhighlight>

To provide a meaning for this recursive definition, the denotation is built up as the limit of approximations, where each approximation limits the number of calls to factorial. At the beginning, we start with no calls, so nothing is defined. In the next approximation, we can add the ordered pair (0,1), because this doesn't require calling factorial again. Similarly we can add (1,1), (2,2), and so on, adding one pair with each successive approximation, because computing ''factorial(n)'' requires ''n''+1 calls. In the limit we get a [[total function]] from <math>\mathbb{N}</math> to <math>\mathbb{N}</math> defined everywhere in its domain.

Formally, we model each approximation as a [[partial function]] <math>\N \rightharpoonup \N</math>. The approximations are obtained by repeatedly applying a function implementing "make a more defined partial factorial function", i.e. <math>F : (\N \rightharpoonup \N) \to (\N \rightharpoonup \N)</math>, starting with the [[empty function]] (the empty set). ''F'' could be defined in code as follows (using <code>Map<int,int></code> for <math>\N \rightharpoonup \N</math>):

<syntaxhighlight lang="cpp">
// One unrolling of the recursion: answers the recursive call by looking it up
// in the less defined approximation, and reports NOT_DEFINED where it cannot yet answer.
int factorial_nonrecursive(Map<int,int> factorial_less_defined, int n)
{
    if (n == 0)
        return 1;
    int fprev = lookup(factorial_less_defined, n - 1);
    if (fprev != NOT_DEFINED)
        return n * fprev;
    return NOT_DEFINED;
}

// F takes a partial factorial function and returns a more defined one.
Map<int,int> F(Map<int,int> factorial_less_defined)
{
    Map<int,int> new_factorial = Map.empty();
    for (int n : all<int>()) {   // conceptually ranges over all natural numbers
        int f = factorial_nonrecursive(factorial_less_defined, n);
        if (f != NOT_DEFINED)
            new_factorial.put(n, f);
    }
    return new_factorial;
}
</syntaxhighlight>

Then we can introduce the notation ''F''<sup>''n''</sup> to indicate [[iterated function|''F'' applied ''n'' times]].
* ''F''<sup>0</sup>({}) is the totally undefined partial function, represented as the set {};
* ''F''<sup>1</sup>({}) is the partial function represented as the set {(0,1)}: it is defined at 0, to be 1, and undefined elsewhere;
* ''F''<sup>5</sup>({}) is the partial function represented as the set {(0,1), (1,1), (2,2), (3,6), (4,24)}: it is defined for arguments 0, 1, 2, 3, 4.

This iterative process builds a sequence of partial functions from <math>\mathbb{N}</math> to <math>\mathbb{N}</math>. Partial functions form a [[chain-complete partial order]] using ⊆ as the ordering. Furthermore, this iterative process of better approximations of the factorial function forms an expansive (also called progressive) mapping, because <math>F^i(\{\}) \subseteq F^{i+1}(\{\})</math> for each ''i''. So by a [[fixed-point theorem]] (specifically the [[Bourbaki–Witt theorem]]), there exists a fixed point for this iterative process. In this case, the fixed point is the least upper bound of this chain, which is the full {{code|factorial}} function, and which can be expressed as the [[Union (set theory)|union]]
:<math>\bigcup_{i \in \mathbb N} F^i(\{\}). </math>
The fixed point we found is the [[least fixed point]] of ''F'', because our iteration started with the smallest element in the domain (the empty set). To prove this we need a more complex fixed-point theorem such as the [[Knaster–Tarski theorem]].
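The chain of approximations ''F''<sup>0</sup>({}) ⊆ ''F''<sup>1</sup>({}) ⊆ … can also be computed directly. The following is a minimal sketch, not part of the standard presentation, in which <code>std::map</code> stands in for the <code>Map</code> pseudocode above and the loop over "all natural numbers" is cut off at a finite bound; the names <code>approximate_factorial</code> and <code>bound</code> are illustrative only.

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <map>

// Compute F^k({}) restricted to arguments 0..bound, starting from the empty map.
std::map<int,int> approximate_factorial(int k, int bound) {
    std::map<int,int> approx;                       // F^0({}) = {}
    for (int step = 0; step < k; ++step) {
        std::map<int,int> next;
        for (int n = 0; n <= bound; ++n) {
            if (n == 0)
                next[n] = 1;                        // (0,1) needs no recursive call
            else if (approx.count(n - 1))
                next[n] = n * approx.at(n - 1);     // defined once factorial(n-1) is
        }
        approx = next;                              // now holds F^(step+1)({})
    }
    return approx;
}

int main() {
    // Prints (0,1) (1,1) (2,2) (3,6) (4,24), i.e. F^5({}).
    for (const auto& [n, f] : approximate_factorial(5, 10))
        std::printf("(%d,%d) ", n, f);
}
</syntaxhighlight>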
===Denotational semantics of non-deterministic programs===
The concept of [[power domains]] has been developed to give a denotational semantics to non-deterministic sequential programs. Writing ''P'' for a power-domain constructor, the domain ''P''(''D'') is the domain of non-deterministic computations of the type denoted by ''D''.

There are difficulties with fairness and [[Unbounded nondeterminism|unboundedness]] in domain-theoretic models of non-determinism.<ref>{{cite journal |first=Paul Blain |last=Levy |title=Amb Breaks Well-Pointedness, Ground Amb Doesn't |journal=Electron. Notes Theor. Comput. Sci. |volume=173 |pages=221–239 |year=2007 |doi=10.1016/j.entcs.2007.02.036 |doi-access=free }}</ref>

===Denotational semantics of concurrency===
Many researchers have argued that the domain-theoretic models given above do not suffice for the more general case of [[Concurrency (computer science)|concurrent computation]]. For this reason, various [[Concurrency (computer science)#Models|new models]] have been introduced. In the early 1980s, researchers began using the style of denotational semantics to give semantics to concurrent languages. Examples include [[Denotational semantics of the Actor model#Clinger.27s Model|Will Clinger's work with the actor model]]; Glynn Winskel's work with event structures and [[Petri nets]];<ref>''[https://www.cl.cam.ac.uk/~gw104/eventStructures82.pdf Event Structure Semantics for CCS and Related Languages]''. DAIMI Research Report, University of Aarhus, 67 pp., April 1983.</ref> and the work by Francez, Hoare, Lehmann, and de Roever (1979) on trace semantics for CSP.<ref>[[Nissim Francez]], [[C. A. R. Hoare]], Daniel Lehmann, and [[Willem-Paul de Roever]]. "[https://dspace.library.uu.nl/bitstream/handle/1874/24888/francez_79_Semantics+of+nondeterminism.pdf?sequence=1 Semantics of nondeterminism, concurrency, and communication]", ''Journal of Computer and System Sciences''. December 1979.</ref> All these lines of inquiry remain under investigation (see e.g. the various denotational models for CSP<ref name=Roscoe/>).

Recently, Winskel and others have proposed the category of [[profunctor]]s as a domain theory for concurrency.<ref>{{cite journal |first1=Gian Luca |last1=Cattani |first2=Glynn |last2=Winskel |title=Profunctors, open maps and bisimulation |journal=Mathematical Structures in Computer Science |volume=15 |issue=3 |pages=553–614 |year=2005 |doi=10.1017/S0960129505004718 |doi-broken-date=2024-11-20 |citeseerx=10.1.1.111.6243 |s2cid=16356708 }}</ref><ref>{{cite journal |first1=Mikkel |last1=Nygaard |first2=Glynn |last2=Winskel |title=Domain theory for concurrency |journal=Theor. Comput. Sci. |volume=316 |issue=1–3 |pages=153–190 |year=2004 |doi=10.1016/j.tcs.2004.01.029 |doi-access=free }}</ref>

===Denotational semantics of state===
State (such as a heap) and simple [[imperative programming|imperative features]] can be straightforwardly modeled in the denotational semantics described above. The key idea is to consider a command as a partial function on some domain of states. The meaning of "{{code|1=x:=3}}" is then the function that takes a state to the state with {{code|3}} assigned to {{code|x}}. The sequencing operator "{{code|;}}" is denoted by composition of functions. Fixed-point constructions are then used to give a semantics to looping constructs, such as "{{code|while}}" (a concrete sketch is given at the end of this section).

Things become more difficult in modeling programs with local variables. One approach is to no longer work with domains, but instead to interpret types as [[functor]]s from some category of worlds to a category of domains. Programs are then denoted by [[natural transformation|natural]] continuous functions between these functors.<ref>[[Peter W. O'Hearn]], John Power, [[Robert D. Tennent]], Makoto Takeyama. Syntactic control of interference revisited. ''Electron. Notes Theor. Comput. Sci.'' 1. 1995.</ref><ref>Frank J. Oles. ''A Category-Theoretic Approach to the Semantics of Programming''. PhD thesis, [[Syracuse University]], New York, USA. 1982.</ref>
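As an informal illustration of commands as state transformers (not drawn from the cited literature), the following sketch represents states as finite maps from variable names to values and commands as functions on states. The names <code>State</code>, <code>Cmd</code>, <code>assign</code>, <code>seq</code> and <code>while_do</code> are illustrative only, and the while-loop denotation is realised here by plain iteration, which may fail to terminate, matching the partiality of the intended denotation.

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <functional>
#include <map>
#include <string>

// A state maps variable names to values; a command denotes a (possibly partial)
// state transformer, i.e. a function from states to states.
using State = std::map<std::string, int>;
using Cmd = std::function<State(State)>;

// Denotation of "x := v": takes a state to the state with v assigned to x.
Cmd assign(const std::string& x, int v) {
    return [=](State s) { s[x] = v; return s; };
}

// Denotation of "c1 ; c2": composition of the two state transformers.
Cmd seq(Cmd c1, Cmd c2) {
    return [=](State s) { return c2(c1(s)); };
}

// Denotation of "while b do c": the least fixed point of w = (if b then c; w else skip),
// realised here by iteration; it is undefined (loops forever) when the guard never fails.
Cmd while_do(std::function<bool(const State&)> b, Cmd c) {
    return [=](State s) { while (b(s)) s = c(s); return s; };
}

int main() {
    // "x := 3 ; while x > 0 do x := x - 1"
    Cmd body = [](State s) { s["x"] = s.at("x") - 1; return s; };
    Cmd prog = seq(assign("x", 3),
                   while_do([](const State& s) { return s.at("x") > 0; }, body));
    State out = prog(State{});
    std::printf("x = %d\n", out.at("x"));   // prints x = 0
}
</syntaxhighlight>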
===Denotations of data types===
Many programming languages allow users to define [[recursive data type]]s. For example, the type of lists of numbers can be specified by

<syntaxhighlight lang=sml>datatype list = Cons of nat * list | Empty</syntaxhighlight>

This section deals only with functional data structures that cannot change. Conventional imperative programming languages would typically allow the elements of such a recursive list to be changed.

For another example: the type of denotations of the [[untyped lambda calculus]] is

<syntaxhighlight lang=sml>datatype D = D of (D → D)</syntaxhighlight>

The problem of ''solving domain equations'' is concerned with finding domains that model these kinds of datatypes. One approach, roughly speaking, is to consider the collection of all domains as a domain itself, and then solve the recursive definition there.

[[Polymorphism (computer science)|Polymorphic data types]] are data types that are defined with a parameter. For example, the type of α {{code|list}}s is defined by

<syntaxhighlight lang=sml>datatype α list = Cons of α * α list | Empty</syntaxhighlight>

Lists of natural numbers, then, are of type {{code|nat list}}, while lists of strings are of type {{code|string list}}. Some researchers have developed domain-theoretic models of polymorphism. Other researchers have also modeled [[parametric polymorphism]] within constructive set theories.
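As an illustration only (not drawn from the sources above), such a parameterized, immutable list type can be transcribed into a conventional language as a recursive, type-parameterized structure; the names <code>List</code>, <code>ConsCell</code> and <code>cons</code> below are illustrative.

<syntaxhighlight lang="cpp">
#include <memory>
#include <string>

// Mirrors "datatype α list = Cons of α * α list | Empty":
// a list value is either Empty (here, a null pointer) or an immutable Cons cell.
template <typename A>
struct ConsCell;

template <typename A>
using List = std::shared_ptr<const ConsCell<A>>;   // Empty is the null List<A>

template <typename A>
struct ConsCell {
    A head;
    List<A> tail;
};

template <typename A>
List<A> cons(A head, List<A> tail) {
    return List<A>(new ConsCell<A>{head, tail});
}

int main() {
    List<int> nats = cons(1, cons(2, List<int>{}));                         // a "nat list"
    List<std::string> strs = cons(std::string("ab"), List<std::string>{});  // a "string list"
    return nats->head + static_cast<int>(strs->head.size());
}
</syntaxhighlight>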
A recent research area has involved denotational semantics for object- and class-based programming languages.<ref>{{cite journal |first1=Bernhard |last1=Reus |first2=Thomas |last2=Streicher |title=Semantics and logic of object calculi |journal=Theor. Comput. Sci. |volume=316 |issue=1 |pages=191–213 |year=2004 |doi=10.1016/j.tcs.2004.01.030 |doi-access=free }}</ref>

===Denotational semantics for programs of restricted complexity===
Following the development of programming languages based on [[linear logic]], denotational semantics have been given to languages for linear usage (see e.g. [[proof net]]s, [[Coherent space|coherence spaces]]) and also for polynomial time complexity.<ref>{{cite journal |first=P. |last=Baillot |title=Stratified coherence spaces: a denotational semantics for Light Linear Logic |journal=Theor. Comput. Sci. |volume=318 |issue=1–2 |pages=29–55 |year=2004 |doi=10.1016/j.tcs.2003.10.015 |doi-access=free }}</ref>

===Denotational semantics of sequentiality===
The problem of full [[Denotational semantics#Abstraction|abstraction]] for the sequential programming language [[Programming language for Computable Functions|PCF]] was, for a long time, a major open question in denotational semantics. The difficulty with PCF is that it is a very sequential language. For example, the [[logical disjunction#parallel-or|parallel-or]] function, which returns true as soon as either of its arguments is true (even if the other computation fails to terminate), cannot be defined in PCF. It is for this reason that the approach using domains, as introduced above, yields a denotational semantics that is not fully abstract. This open question was mostly resolved in the 1990s with the development of [[game semantics]] and also with techniques involving [[logical relations]].<ref>{{cite journal |first1=P.W. |last1=O'Hearn |first2=J.G. |last2=Riecke |title=Kripke Logical Relations and PCF |journal=Information and Computation |volume=120 |issue=1 |pages=107–116 |date=July 1995 |doi=10.1006/inco.1995.1103 |s2cid=6886529 |url=https://surface.syr.edu/lcsmith_other/3 |doi-access=free }}</ref> For more details, see the article on PCF.

===Denotational semantics as source-to-source translation===
It is often useful to translate one programming language into another. For example, a concurrent programming language might be translated into a [[process calculus]]; a high-level programming language might be translated into byte-code. (Indeed, conventional denotational semantics can be seen as the interpretation of programming languages into the [[internal language]] of the category of domains.) In this context, notions from denotational semantics, such as full abstraction, help to satisfy security concerns.<ref>Martin Abadi. "Protection in programming-language translations". ''Proc. of ICALP'98''. LNCS 1443. 1998.</ref><ref>{{cite journal |first=Andrew |last=Kennedy |title=Securing the .NET programming model |journal=Theor. Comput. Sci. |volume=364 |issue=3 |pages=311–317 |year=2006 |doi=10.1016/j.tcs.2006.08.014 |doi-access=free }}</ref>