== Variations ==

=== {{anchor|Compreter}}Bytecode interpreters ===
{{main|Bytecode}}
There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, [[Emacs Lisp]] is compiled to [[bytecode]], which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in [[C (programming language)|C]]). The compiled code in this case is machine code for a [[virtual machine]], which is implemented not in hardware, but in the bytecode interpreter. Such compiling interpreters are sometimes also called ''compreters''.<ref name="Kühnel_1987_Kleincomputer">{{cite book |editor-first1=Rainer |editor-last1=Erlekampf |editor-first2=Hans-Joachim |editor-last2=Mönk |author-first=Claus |author-last=Kühnel |page=222 |title=Mikroelektronik in der Amateurpraxis |trans-title=Micro-electronics for the practical amateur |chapter=4. Kleincomputer - Eigenschaften und Möglichkeiten |trans-chapter=4. Microcomputer - Properties and possibilities |publisher={{ill|Militärverlag der Deutschen Demokratischen Republik|de}}, Leipzig |location=Berlin |date=1987 |orig-year=1986 |edition=3 |language=de |isbn=3-327-00357-2 |id=7469332}}</ref><ref name="Heyne_1984_Compreter">{{cite journal |title=Basic-Compreter für U880 |trans-title=BASIC compreter for U880 (Z80) |author-first=R. |author-last=Heyne |journal={{ill|radio-fernsehn-elektronik|de|Radio Fernsehen Elektronik}} |language=de |date=1984 |volume=1984 |issue=3 |pages=150–152}}</ref>

In a bytecode interpreter, each instruction starts with a byte, so a bytecode interpreter can have up to 256 distinct instructions, although not all may be used. Some bytecode instructions may take multiple bytes and may be arbitrarily complicated.

[[Control table]]s, which do not necessarily ever need to pass through a compiling phase, dictate appropriate algorithmic [[control flow]] via customized interpreters in a fashion similar to bytecode interpreters.
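As an illustration, the core of a bytecode interpreter is a fetch-decode-dispatch loop over single-byte opcodes. The following minimal sketch in C implements a hypothetical four-instruction stack machine; the opcode names and encoding are invented for this example and are not taken from any real bytecode set:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Hypothetical bytecode for a tiny stack machine: each instruction
   begins with one opcode byte; OP_PUSH carries a one-byte operand. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const unsigned char *code) {
    int stack[64];
    int sp = 0;                                /* stack pointer */
    for (const unsigned char *ip = code;;) {
        switch (*ip++) {                       /* fetch and decode one byte */
        case OP_PUSH:  stack[sp++] = *ip++;              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Compiled" program: push 2, push 3, add, print (outputs 5) */
    const unsigned char program[] =
        { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
</syntaxhighlight>

Production bytecode interpreters follow the same pattern with far larger instruction sets, and many replace the switch statement with a table of computed-goto labels for faster dispatch.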
=== Threaded code interpreters ===
{{main|Threaded code}}
Threaded code interpreters are similar to bytecode interpreters, but instead of bytes they use pointers. Each "instruction" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops, fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, with every instruction sequence ending in a fetch of, and jump to, the next instruction. Unlike bytecode, there is no effective limit on the number of different instructions other than available memory and address space.

The classic example of threaded code is the [[Forth (programming language)|Forth]] code used in [[Open Firmware]] systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a [[virtual machine]].{{citation needed|date=January 2013}}
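The contrast with a bytecode interpreter can be seen by rewriting the example above in a call-threaded style, in which each "instruction" is the address of a handler function rather than an opcode byte. This is a minimal sketch of one threading variant (call threading), with invented handler names; classic Forth-style direct threading instead uses jumps, which typically requires machine code or compiler extensions such as computed goto:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Threaded code: the program is an array of words, each holding
   either a pointer to a handler function or an inline operand. */
typedef union word {
    void (*fn)(const union word **ip, int *stack, int *sp);
    int operand;
} word;

static void do_push(const word **ip, int *stack, int *sp) {
    stack[(*sp)++] = (*ip)->operand;   /* read the inline operand... */
    (*ip)++;                           /* ...and step past it */
}
static void do_add(const word **ip, int *stack, int *sp) {
    (void)ip;
    (*sp)--;
    stack[*sp - 1] += stack[*sp];
}
static void do_print(const word **ip, int *stack, int *sp) {
    (void)ip;
    printf("%d\n", stack[--(*sp)]);
}

int main(void) {
    int stack[64], sp = 0;
    /* Same program as before: push 2, push 3, add, print */
    const word program[] = {
        { .fn = do_push }, { .operand = 2 },
        { .fn = do_push }, { .operand = 3 },
        { .fn = do_add  }, { .fn = do_print }, { .fn = NULL }
    };
    /* Fetch loop: grab a function pointer and call it; a null
       pointer marks the end of the instruction sequence. */
    for (const word *ip = program; ip->fn; ) {
        void (*f)(const word **, int *, int *) = ip->fn;
        ip++;
        f(&ip, stack, &sp);
    }
    return 0;
}
</syntaxhighlight>

Because each instruction word is a full pointer, the number of distinct instructions is limited only by available memory and address space, as noted above.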
=== Abstract syntax tree interpreters ===
{{main|Abstract syntax tree}}
In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code [[Just-in-time compilation|just-in-time]].<ref>[http://lambda-the-ultimate.org/node/716 AST intermediate representations], Lambda the Ultimate forum</ref> In this approach, each statement needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and the relations between statements (which are lost in a bytecode representation), and when compressed provides a more compact representation.<ref name="KistlerFranz1999">{{cite journal |last1=Kistler |first1=Thomas |last2=Franz |first2=Michael |author-link2=Michael Franz |title=A Tree-Based Alternative to Java Byte-Codes |journal=International Journal of Parallel Programming |volume=27 |issue=1 |date=February 1999 |pages=21–33 |issn=0885-7458 |doi=10.1023/A:1018740018601 |s2cid=14330985 |citeseerx=10.1.1.87.2257 |url=http://oberon2005.oberoncore.ru/paper/mf1997a.pdf |access-date=2020-12-20 }}</ref> Thus, using an AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. It also allows the system to perform better analysis at runtime. However, for interpreters an AST causes more overhead than bytecode, because nodes related to syntax perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree incurs overhead.<ref>[http://webkit.org/blog/189/announcing-squirrelfish/ Surfin' Safari - Blog Archive » Announcing SquirrelFish]. Webkit.org (2008-06-02). Retrieved on 2013-08-10.</ref>
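A tree-walking interpreter can be sketched in a few lines of C. The node layout below is invented for the example; the point is that evaluation recursively follows child pointers rather than stepping through a flat instruction stream, which is the pointer-traversal overhead described above:

<syntaxhighlight lang="c">
#include <stdio.h>

/* A hypothetical AST for arithmetic: leaves hold numbers,
   interior nodes hold an operator and two children. */
typedef struct node {
    char op;                       /* '+', '*', or 0 for a number leaf */
    int value;                     /* used when op == 0 */
    const struct node *left, *right;
} node;

/* Tree-walking evaluation: one recursive visit per node. */
static int eval(const node *n) {
    if (n->op == 0)
        return n->value;
    int l = eval(n->left);
    int r = eval(n->right);
    return n->op == '+' ? l + r : l * r;
}

int main(void) {
    /* AST for "2 + 3 * 4", as a parser would build it */
    const node two   = { 0, 2, NULL, NULL };
    const node three = { 0, 3, NULL, NULL };
    const node four  = { 0, 4, NULL, NULL };
    const node mul   = { '*', 0, &three, &four };
    const node sum   = { '+', 0, &two, &mul };
    printf("%d\n", eval(&sum));    /* prints 14 */
    return 0;
}
</syntaxhighlight>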
=== Just-in-time compilation ===
{{main|Just-in-time compilation}}
Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time (JIT) compilation, a technique in which the intermediate representation is compiled to native [[machine code]] at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. The earliest published JIT compiler is generally attributed to work on [[Lisp (programming language)|LISP]] by [[John McCarthy (computer scientist)|John McCarthy]] in 1960.{{sfn|Aycock|2003|loc=2. JIT Compilation Techniques, 2.1 Genesis, p. 98}} [[Adaptive optimization]] is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. The latter technique is a few decades old, appearing in languages such as [[Smalltalk]] in the 1980s.<ref>L. Deutsch, A. Schiffman, [http://portal.acm.org/citation.cfm?id=800017.800542 Efficient implementation of the Smalltalk-80 system], Proceedings of 11th POPL symposium, 1984.</ref>

Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with [[Java platform|Java]], the [[.NET Framework]], most modern [[JavaScript]] implementations, and [[MATLAB]] now including JIT compilers.{{citation needed|date=January 2013}}

=== Template interpreter ===
A special interpreter design known as a template interpreter makes the distinction between compilers and interpreters vaguer still. Rather than implementing the execution of code with a large switch statement containing every possible bytecode and operating on a software stack or walking a tree, a template interpreter maintains a large array that maps each bytecode (or other efficient intermediate representation) directly to corresponding native machine instructions that can be executed on the host hardware, stored as key-value pairs (or, in more efficient designs, direct addresses of the native instructions)<ref name="auto">{{cite web|url=https://github.com/openjdk/jdk|title=openjdk/jdk|website=GitHub|date=18 November 2021}}</ref><ref>{{cite web|url=https://openjdk.java.net/groups/hotspot/docs/RuntimeOverview.html#Interpreter |title=HotSpot Runtime Overview |publisher=Openjdk.java.net |date= |accessdate=2022-08-06}}</ref> and known as a "template". When a particular code segment is executed, the interpreter simply loads or jumps to the opcode's mapping in the template and directly runs it on the hardware.<ref>{{Cite news|url=https://metebalci.com/blog/demystifying-the-jvm-jvm-variants-cppinterpreter-and-templateinterpreter/|title=Demystifying the JVM: JVM Variants, Cppinterpreter and TemplateInterpreter|website=metebalci.com}}</ref><ref>{{cite web |title=JVM template interpreter|website=ProgrammerSought|url=https://programmersought.com/article/5521858566/}}</ref> Because of this design, the template interpreter strongly resembles a just-in-time compiler rather than a traditional interpreter; however, it is technically not a JIT, because it merely translates code into native calls one opcode at a time rather than creating optimized sequences of CPU-executable instructions from an entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than the other types, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, since the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine or stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine,<ref name="auto"/> and the Ignition interpreter in the Google [[V8 (JavaScript engine)|V8]] JavaScript execution engine.

=== Self-interpreter ===
{{main|Meta-circular evaluator}}
A self-interpreter is a [[programming language]] interpreter written in a programming language which can interpret itself; an example is a [[BASIC programming language|BASIC]] interpreter written in BASIC. Self-interpreters are related to [[Self-hosting (compilers)|self-hosting compiler]]s.

If no [[compiler]] exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or [[Assembler (computing)|assembler]]). By having a first interpreter such as this, the system is [[Bootstrapping (compilers)|bootstrapped]] and new versions of the interpreter can be developed in the language itself. It was in this way that [[Donald Knuth]] developed the TANGLE interpreter for the language [[WEB]] of the de facto standard [[TeX]] [[typesetting|typesetting system]].

Defining a computer language is usually done in relation to an abstract machine (so-called [[operational semantics]]) or as a mathematical function ([[denotational semantics]]). A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded (it cannot define a language), but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its own source code, the first step towards reflective interpreting.

An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a [[closure (computer science)|closure]] in a [[Lisp programming language|Lisp]]-like language is implemented using closures in the interpreter language or implemented "manually" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; for example, a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language.

Some languages such as [[Lisp programming language|Lisp]] and [[Prolog]] have elegant self-interpreters.<ref>Bondorf, Anders. "[https://web.archive.org/web/20181112101324/https://pdfs.semanticscholar.org/a089/c5ae66c3311b45de0aaddfa457e4eb821316.pdf Logimix: A self-applicable partial evaluator for Prolog]." Logic Program Synthesis and Transformation. Springer, London, 1993. 214-227.</ref> Much research on self-interpreters (particularly reflective interpreters) has been conducted in the [[Scheme (programming language)|Scheme programming language]], a dialect of Lisp. In general, however, any [[Turing completeness|Turing-complete]] language allows the writing of its own interpreter. Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of [[metaprogramming]] is the writing of [[domain-specific language]]s (DSLs).

Clive Gifford introduced<ref>{{cite web |last1=Gifford |first1=Clive |title=Eigenratios of Self-Interpreters |url=http://eigenratios.blogspot.com/2006/11/wanted-eigenratios-of-brainfck-self.html |website=Blogger |access-date=10 November 2019}}</ref> a measure of the quality of a self-interpreter (the eigenratio): the limit of the ratio between computer time spent running a stack of ''N'' self-interpreters and time spent running a stack of {{nowrap|''N'' − 1}} self-interpreters, as ''N'' goes to infinity. This value does not depend on the program being run.
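Written out symbolically (where <math>T_N</math> is introduced here only to restate the definition above, denoting the time taken to run some fixed program under a stack of <math>N</math> self-interpreters):

:<math>\text{eigenratio} = \lim_{N \to \infty} \frac{T_N}{T_{N-1}}</math>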
The book ''[[Structure and Interpretation of Computer Programs]]'' presents examples of [[meta-circular evaluator|meta-circular interpretation]] for Scheme and its dialects. Other examples of languages with a self-interpreter are [[Forth (programming language)|Forth]] and [[Pascal (programming language)|Pascal]].

=== Microcode ===
{{main|Microcode}}
Microcode is a very commonly used technique "that imposes an interpreter between the hardware and the architectural level of a computer".<ref name=Kent2813>{{cite book |last1=Kent |first1=Allen |last2=Williams |first2=James G. |title=Encyclopedia of Computer Science and Technology: Volume 28 - Supplement 13 |date=April 5, 1993 |publisher=Marcel Dekker, Inc |location=New York |isbn=0-8247-2281-7 |url=https://books.google.com/books?id=EjWV8J8CQEYC |access-date=Jan 17, 2016}}</ref> As such, microcode is a layer of hardware-level instructions that implement higher-level [[machine code]] instructions or internal [[state machine]] sequencing in many [[digital processing]] elements. Microcode is used in general-purpose [[central processing unit]]s, as well as in more specialized processors such as [[microcontroller]]s, [[digital signal processor]]s, [[Channel I/O|channel controllers]], [[disk controller]]s, [[network interface controller]]s, [[network processor]]s, [[graphics processing unit]]s, and in other hardware.

Microcode typically resides in special high-speed memory and translates machine instructions, [[state machine]] data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying [[electronics]] so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called '''microprogramming''', and the microcode in a particular processor implementation is sometimes called a '''microprogram'''.

More extensive microcoding allows small and simple [[microarchitecture]]s to [[Emulator|emulate]] more powerful architectures with wider [[word length]], more [[execution unit]]s and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family.

=== Computer processor ===
Even a computer processor without microcoding can itself be considered a parsing immediate-execution interpreter: it is written in a general-purpose hardware description language such as [[VHDL]] to create a system that parses machine code instructions and executes them immediately.