Editing Algorithmic efficiency (section)
===Implementation concerns===

Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded,<ref name="KriegelSchubert2016">{{cite journal|last1=Kriegel|first1=Hans-Peter|author-link=Hans-Peter Kriegel|last2=Schubert|first2=Erich|last3=Zimek|first3=Arthur|author-link3=Arthur Zimek|title=The (black) art of runtime evaluation: Are we comparing algorithms or implementations?|journal=Knowledge and Information Systems|volume=52|issue=2|year=2016|pages=341–378|issn=0219-1377|doi=10.1007/s10115-016-1004-2|s2cid=40772241}}</ref> the choice of a [[compiler]] for a particular language, the [[compiler optimization|compilation options]] used, or even the [[operating system]] being used. In many cases a language implemented by an [[interpreter (computing)|interpreter]] may be much slower than a language implemented by a compiler.<ref name="fourmilab.ch">{{cite web|url=http://www.fourmilab.ch/fourmilog/archives/2005-08/000567.html |title=Floating Point Benchmark: Comparing Languages (Fourmilog: None Dare Call It Reason) |publisher=Fourmilab.ch |date=4 August 2005 |access-date=14 December 2011}}</ref> See the articles on [[just-in-time compilation]] and [[interpreted language]]s.

There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include [[data alignment]], [[granularity#Data granularity|data granularity]], [[locality of reference|cache locality]], [[cache coherence|cache coherency]], [[garbage collection (computer science)|garbage collection]], [[instruction-level parallelism]], [[Multithreading (disambiguation)|multi-threading]]<!--Intentional link to DAB page--> (at either a hardware or software level), [[Simultaneous multithreading|simultaneous multitasking]], and [[subroutine]] calls.<ref name="steele1997">Guy Lewis Steele Jr., "Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO", MIT AI Lab, AI Lab Memo AIM-443, October 1977. [http://dspace.mit.edu/handle/1721.1/5753]</ref>

Some processors have capabilities for [[vector processor|vector processing]], which allow a [[SIMD|single instruction to operate on multiple operands]]; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of [[parallel computing|parallel processing]], while others can be reconfigured with little effort. As [[parallel computing|parallel]] and [[distributed computing]] have grown in importance since the late 2010s, more investments are being made into efficient [[high-level programming language|high-level]] [[Application programming interface|API]]s for parallel and distributed computing systems such as [[CUDA]], [[TensorFlow]], [[Apache Hadoop|Hadoop]], [[OpenMP]] and [[Message Passing Interface|MPI]].

Another problem which can arise in programming is that processors compatible with the same [[instruction set architecture|instruction set]] (such as [[x86-64]] or [[ARM architecture|ARM]]) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on others. This often presents challenges to [[optimizing compiler]]s, which must have extensive knowledge of the specific [[Central processing unit|CPU]] and other hardware available on the compilation target to best optimize a program for performance.
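The cost of interpreter dispatch can be made concrete with a small sketch (an illustration, not part of the cited benchmark): in CPython, an explicit loop is executed one bytecode at a time by the interpreter, while the built-in <code>sum()</code> performs the same reduction inside a compiled C routine.

```python
import timeit

def interpreted_sum(values):
    """Sum a sequence with an explicit loop; every iteration
    is dispatched by the bytecode interpreter."""
    total = 0
    for v in values:
        total += v
    return total

values = list(range(100_000))

# Time the interpreted loop against the built-in (compiled) sum().
loop_time = timeit.timeit(lambda: interpreted_sum(values), number=10)
builtin_time = timeit.timeit(lambda: sum(values), number=10)

print(f"interpreted loop: {loop_time:.4f}s, built-in sum: {builtin_time:.4f}s")
```

Both compute the same result; the compiled routine is typically several times faster, though the exact ratio depends on the interpreter version and hardware, which is why such comparisons often measure an implementation as much as an algorithm.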
In the extreme case, a compiler may be forced to [[software emulation|emulate]] instructions not supported on a compilation target platform, forcing it to [[code generation (compiler)|generate code]] or [[linking (computing)|link]] an external [[library (computing)|library call]] to produce a result that is otherwise incomputable on that platform, even if the operation is natively supported and more efficient in hardware on other platforms. This is often the case in [[embedded system]]s with respect to [[floating-point arithmetic]], where small and [[low-power computing|low-power]] [[microcontroller]]s often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to perform floating-point calculations.
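One common way to avoid expensive software floating-point routines on such hardware is fixed-point arithmetic, which represents fractional values as scaled integers. The following is a minimal sketch (not from the article) of the Q16.16 format, with all names and helpers hypothetical:

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16: high 16 bits integer, low 16 bits fraction

def to_fixed(x: float) -> int:
    """Convert a float to Q16.16, rounding to nearest."""
    return round(x * ONE)

def to_float(q: int) -> float:
    """Convert Q16.16 back to a float (for display only)."""
    return q / ONE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values: the double-width product
    carries 32 fraction bits, so shift 16 of them back out."""
    return (a * b) >> FRAC_BITS

def fixed_div(a: int, b: int) -> int:
    """Divide two Q16.16 values: pre-shift the dividend
    so the quotient keeps 16 fraction bits."""
    return (a << FRAC_BITS) // b

a = to_fixed(3.5)
b = to_fixed(2.0)
print(to_float(fixed_mul(a, b)))  # 7.0
print(to_float(fixed_div(a, b)))  # 1.75
```

On a microcontroller without a floating-point unit, each of these operations compiles to a handful of integer instructions, whereas an emulated floating-point multiply can cost dozens to hundreds of cycles; the trade-off is a fixed range and precision chosen at design time.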