== Compilers versus interpreters ==
[[File:Linker.svg|thumb|An illustration of the linking process. Object files and [[static library|static libraries]] are assembled into a new library or executable.]]
Programs written in a [[high-level language]] are either directly executed by some kind of interpreter or converted into [[machine code]] by a compiler (and [[assembler (computing)|assembler]] and [[linker (computing)|linker]]) for the [[CPU]] to execute.

While compilers (and assemblers) generally produce machine code directly executable by computer hardware, they can often (optionally) produce an intermediate form called [[object code]]. This is basically the same machine-specific code, but augmented with a [[symbol table]] with names and tags to make executable blocks (or modules) identifiable and relocatable. Compiled programs will typically use building blocks (functions) kept in a library of such object code modules. A [[linker (computing)|linker]] is used to combine (pre-made) library files with the object file(s) of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even by different languages (capable of generating the same object format).

A simple interpreter written in a low-level language (e.g. [[assembly language|assembly]]) may have similar machine code blocks implementing functions of the high-level language stored, and executed when a function's entry in a lookup table points to that code. However, an interpreter written in a high-level language typically uses another approach, such as generating and then walking a [[parse tree]], or generating and executing intermediate software-defined instructions, or both.

Thus, both compilers and interpreters generally turn source code (text files) into tokens, both may (or may not) generate a parse tree, and both may generate intermediate instructions (for a [[stack machine]], [[Three-address code|quadruple code]], or by other means). The basic difference is that a compiler system, including a (built-in or separate) linker, generates a stand-alone ''machine code'' program, while an interpreter system instead ''performs'' the actions described by the high-level program.

A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all (i.e. until the program has to be changed), while an interpreter has to do ''some'' of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work (including analysis of types, and similar) is factored out and done only the first time a program, module, function, or even statement, is run, which is quite akin to how a compiler works. However, a compiled program still runs much faster under most circumstances, in part because compilers are designed to optimize code and may be given ample time for this. This is especially true for simpler high-level languages without (many) dynamic data structures, checks, or [[type checking]].

In traditional compilation, the executable output of the [[Linker (computing)|linker]] (.exe files or .dll files or a library, see picture) is typically relocatable when run under a general operating system, much like the object code modules are, but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution.
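The following is a minimal illustrative sketch of one interpretation approach mentioned above, namely generating and executing intermediate software-defined instructions: a C program that interprets a short instruction sequence for a simple [[stack machine]]. The five-instruction set and all names are hypothetical rather than taken from any particular system.

<syntaxhighlight lang="C">
#include <stdio.h>

// Hypothetical intermediate (software-defined) instructions for a tiny stack machine.
enum opcode { PUSH, ADD, MUL, PRINT, HALT };

struct instruction {
    enum opcode op;
    int operand;     // used only by PUSH
};

// Execute a sequence of intermediate instructions.
static void run(const struct instruction *code) {
    int stack[64];
    int sp = 0;      // stack pointer: index of the next free slot

    for (;;) {
        switch (code->op) {
        case PUSH:  stack[sp++] = code->operand; break;
        case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case PRINT: printf("%d\n", stack[sp - 1]); break;
        case HALT:  return;
        }
        code++;
    }
}

int main(void) {
    // Intermediate code for the expression (2 + 3) * 4
    const struct instruction program[] = {
        { PUSH, 2 }, { PUSH, 3 }, { ADD, 0 },
        { PUSH, 4 }, { MUL, 0 }, { PRINT, 0 }, { HALT, 0 }
    };
    run(program);    // prints 20
    return 0;
}
</syntaxhighlight>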
On the other hand, compiled and linked programs for small [[embedded systems]] are typically statically allocated, often hard-coded in a [[NOR flash]] memory, as there is often no secondary storage and no operating system in this sense.

Historically, most interpreter systems have had a self-contained editor built in. This is becoming more common also for compilers (then often called an [[Integrated development environment|IDE]]), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code, and the typical batch environment of the time limited the advantages of interpretation.<ref>{{cite web|title=Why was the first compiler written before the first interpreter?|url=https://arstechnica.com/information-technology/2014/11/why-was-the-first-compiler-written-before-the-first-interpreter/|website=[[Ars Technica]]|date=8 November 2014|access-date=9 November 2014}}</ref>

=== Development cycle ===
During the [[software development cycle]], programmers make frequent changes to source code. When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and [[linker (computing)|link]] all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation (or not translate it at all), thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged, as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compiler toolchains also provide an executive aid, the [[Make (software)|Make]] program driven by a Makefile. The Makefile lists compiler and linker command lines and program source code files, and may take a simple command-line menu input (e.g. "Make 3") that selects the third group (set) of instructions and then issues the commands to the compiler and linker, feeding in the specified source code files.

=== Distribution ===
A [[compiler]] converts source code into binary instructions for a specific processor's architecture, thus making the result less [[software portability|portable]]. This conversion is made just once, in the developer's environment, and after that the same binary can be distributed to the users' machines, where it can be executed without further translation. A [[cross compiler]] can generate binary code for the user's machine even if it has a different processor than the machine where the code is compiled.

An interpreted program can be distributed as source code. It needs to be translated on each target machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code depends on the target machine actually having a suitable interpreter. If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed.
The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of [[copyright]]. However, various systems of [[encryption]] and [[obfuscation]] exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode can be decoded with a [[decompiler]] or [[disassembler]].{{citation needed|date=January 2013}}

=== Efficiency ===
The main disadvantage of interpreters is that an interpreted program typically runs more slowly than if it had been [[compiler|compiled]]. The difference in speed can be small or large; it is often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code, but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code, when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle.<ref name="FOLDOC">{{FOLDOC|Interpreter}}</ref><ref>{{Cite web |title=Compilers vs. interpreters: explanation and differences |url=https://www.ionos.com/digitalguide/websites/web-development/compilers-vs-interpreters/ |access-date=2022-09-16 |website=IONOS Digital Guide |language=en}}</ref>

Interpreting code is slower than running the compiled code because the interpreter must analyze each [[statement (computer science)|statement]] in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This [[run time (program lifecycle phase)|run-time]] analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at [[compile time]].<ref name="FOLDOC" />

There are various compromises between the [[development speed]] when using an interpreter and the execution speed when using a compiler. Some systems (such as some [[Lisp (programming language)|Lisps]]) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.{{citation needed|date=January 2013}}

Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many [[BASIC]] interpreters replace [[keyword (computer programming)|keyword]]s with single-[[byte]] [[Token threading|tokens]] which can be used to find the instruction in a [[jump table]].<ref name="FOLDOC" /> A few interpreters, such as the [[PBASIC]] interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a [[variable-length code]] requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
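The following is a minimal sketch, in C, of the byte-token and jump-table technique described above; the three-token instruction set and the handler names are hypothetical and not taken from any actual BASIC interpreter. Each keyword has already been replaced by a single-byte token, which is used directly as an index into a table of handler routines.

<syntaxhighlight lang="C">
#include <stdio.h>

// Hypothetical single-byte tokens for a few keywords.
enum { TOK_END = 0, TOK_PRINT = 1, TOK_BEEP = 2 };

// One handler per token; each consumes its operands (if any) and
// returns a pointer to the next token, or NULL to stop.
static const unsigned char *do_end(const unsigned char *p)   { (void)p; return NULL; }
static const unsigned char *do_print(const unsigned char *p) { printf("%d\n", *p); return p + 1; }
static const unsigned char *do_beep(const unsigned char *p)  { printf("\a"); return p; }

// The jump table: the token value is the index of its handler.
static const unsigned char *(*const jump_table[])(const unsigned char *) = {
    do_end, do_print, do_beep
};

int main(void) {
    // Tokenized program: PRINT 7, BEEP, PRINT 9, END
    const unsigned char program[] = { TOK_PRINT, 7, TOK_BEEP, TOK_PRINT, 9, TOK_END };

    const unsigned char *p = program;
    while (p != NULL) {
        unsigned char token = *p++;   // fetch the next token
        p = jump_table[token](p);     // dispatch through the jump table
    }
    return 0;
}
</syntaxhighlight>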
{| class="wikitable collapsible collapsed" style="float:right; text-align:left;"
|-
! Toy [[C (programming language)|C]] expression interpreter
|-
|
<syntaxhighlight lang="C">
// data types for abstract syntax tree
enum _kind { kVar, kConst, kSum, kDiff, kMult, kDiv, kPlus, kMinus, kNot };

struct _variable { int *memory; };
struct _constant { int value; };
struct _unaryOperation  { struct _node *right; };
struct _binaryOperation { struct _node *left, *right; };

struct _node {
    enum _kind kind;
    union _expression {
        struct _variable        variable;
        struct _constant        constant;
        struct _binaryOperation binary;
        struct _unaryOperation  unary;
    } e;
};

// interpreter procedure
int executeIntExpression(const struct _node *n)
{
    int leftValue, rightValue;

    switch (n->kind) {
    case kVar:   return *n->e.variable.memory;
    case kConst: return n->e.constant.value;

    case kSum: case kDiff: case kMult: case kDiv:
        leftValue  = executeIntExpression(n->e.binary.left);
        rightValue = executeIntExpression(n->e.binary.right);
        switch (n->kind) {
        case kSum:  return leftValue + rightValue;
        case kDiff: return leftValue - rightValue;
        case kMult: return leftValue * rightValue;
        case kDiv:
            if (rightValue == 0)
                exception("division by zero"); // doesn't return
            return leftValue / rightValue;
        }

    case kPlus: case kMinus: case kNot:
        rightValue = executeIntExpression(n->e.unary.right);
        switch (n->kind) {
        case kPlus:  return + rightValue;
        case kMinus: return - rightValue;
        case kNot:   return ! rightValue;
        }

    default:
        exception("internal error: illegal expression kind");
    }
}
</syntaxhighlight>
|}
An interpreter might well use the same [[lexical analysis|lexical analyzer]] and [[parser]] as the compiler and then interpret the resulting [[abstract syntax tree]]. Example data type definitions for the latter, and a toy interpreter for syntax trees obtained from [[C (programming language)|C]] expressions, are shown in the box.

=== Regression ===
Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted, and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute.<ref>Theodore H. Romer, Dennis Lee, Geoffrey M. Voelker, Alec Wolman, Wayne A. Wong, Jean-Loup Baer, Brian N. Bershad, and Henry M. Levy, [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.41.2582&rep=rep1&type=pdf The Structure and Performance of Interpreters]</ref><ref>Terence Parr, Johannes Luber, [http://www.antlr.org/wiki/display/ANTLR3/The+difference+between+compilers+and+interpreters The Difference Between Compilers and Interpreters] {{Webarchive|url=https://web.archive.org/web/20140106012828/http://www.antlr.org/wiki/display/ANTLR3/The+difference+between+compilers+and+interpreters |date=2014-01-06 }}</ref>
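As an illustrative usage sketch for the toy expression interpreter in the box above: the hand-built syntax tree for the C expression <code>x * (2 + 3)</code>, the <code>main</code> function, and the trivial <code>exception</code> helper below are assumptions added for illustration, not part of the boxed example, and the snippet presumes the type definitions and <code>executeIntExpression</code> from that box.

<syntaxhighlight lang="C">
// Builds the syntax tree for the C expression  x * (2 + 3)  by hand
// (as a parser would) and evaluates it with executeIntExpression().
// Assumes the definitions from the boxed toy interpreter above.
#include <stdio.h>
#include <stdlib.h>

// minimal stand-in for the error handler used by the toy interpreter
void exception(const char *message) {
    fprintf(stderr, "%s\n", message);
    exit(EXIT_FAILURE);
}

int main(void) {
    int x = 7;

    struct _node varX  = { kVar,   { .variable = { &x } } };
    struct _node two   = { kConst, { .constant = { 2 } } };
    struct _node three = { kConst, { .constant = { 3 } } };
    struct _node sum   = { kSum,   { .binary   = { &two, &three } } };
    struct _node prod  = { kMult,  { .binary   = { &varX, &sum } } };

    printf("%d\n", executeIntExpression(&prod));  // prints 35
    return 0;
}
</syntaxhighlight>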