{{short description|Process for increasing the performance between two systems solving the same problem}}
{{Redirect|Speed up||Speed Up (disambiguation)}}
{{wiktionary}}

In [[computer architecture]], '''speedup''' is a number that measures the relative performance of two systems processing the same problem. More technically, it is the improvement in speed of execution of a task executed on two similar architectures with different resources. The notion of speedup was established by [[Amdahl's law]], which was particularly focused on [[Parallel computing|parallel processing]]. However, speedup can be used more generally to show the effect on performance after any resource enhancement.

==Definitions==
Speedup can be defined for two different types of quantities: ''[[Latency (engineering)|latency]]'' and ''[[throughput]]''.<ref>{{cite web | last = Martin | first = Milo | title = Performance and Benchmarking | url=http://www.cis.upenn.edu/~milom/cis501-Fall12/lectures/04_performance.pdf | access-date = 5 June 2014 }}</ref>

''Latency'' of an architecture is the reciprocal of the execution speed of a task:
: <math>L = \frac{1}{v} = \frac{T}{W},</math>
where
* ''v'' is the execution speed of the task;
* ''T'' is the execution time of the task;
* ''W'' is the execution workload of the task.

''Throughput'' of an architecture is the execution rate of a task:
: <math>Q = \rho vA = \frac{\rho AW}{T} = \frac{\rho A}{L},</math>
where
* ''ρ'' is the execution density (e.g., the number of stages in an [[instruction pipeline]] for a [[pipeline (computing)|pipeline]]d architecture);
* ''A'' is the execution capacity (e.g., the number of [[Central processing unit|processors]] for a parallel architecture).

Latency is often measured in seconds per unit of execution workload. Throughput is often measured in units of execution workload per second. Another unit of throughput is [[instruction per cycle|instructions per cycle]] (IPC), and its reciprocal, [[cycle per instruction|cycles per instruction]] (CPI), is another unit of latency. Speedup is dimensionless and defined differently for each type of quantity so that it is a consistent metric.

===Speedup in latency===
Speedup in ''latency'' is defined by the following formula:<ref>{{cite book | last1 = Hennessy | first1 = John L. | last2 = Patterson | first2 = David A. | title = Computer Architecture: A Quantitative Approach | url = https://archive.org/details/computerarchitec00henn_754 | url-access = limited | location = Waltham, MA | publisher = [[Morgan Kaufmann]] | pages = [https://archive.org/details/computerarchitec00henn_754/page/n71 46]–47 | date = 2012 | isbn = 978-0-12-383872-8 }}</ref>
: <math>S_\text{latency} = \frac{L_1}{L_2} = \frac{T_1 W_2}{T_2 W_1},</math>
where
* ''S''<sub>latency</sub> is the speedup in latency of architecture 2 with respect to architecture 1;
* ''L''<sub>1</sub> is the latency of architecture 1;
* ''L''<sub>2</sub> is the latency of architecture 2.

Speedup in latency can be predicted from [[Amdahl's law]] or [[Gustafson's law]].
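A minimal sketch of these definitions in Python (illustrative only; the function names are not from the cited sources):

<syntaxhighlight lang="python">
def latency(execution_time, workload):
    """Latency L = T / W: time per unit of execution workload."""
    return execution_time / workload

def speedup_in_latency(t1, w1, t2, w2):
    """S_latency = L1 / L2 = (T1 * W2) / (T2 * W1)."""
    return latency(t1, w1) / latency(t2, w2)

# With equal workloads the formula reduces to T1 / T2; here we reuse the
# execution times from the branch predictor example below.
print(speedup_in_latency(t1=2.25, w1=1.0, t2=1.50, w2=1.0))  # 1.5
</syntaxhighlight>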
===Speedup in throughput===
Speedup in ''throughput'' is defined by the formula:<ref>{{cite book | last = Baer | first = Jean-Loup | title = Microprocessor Architecture: From Simple Pipelines to Chip Multiprocessors | url = https://archive.org/details/microprocessorar00baer_825 | url-access = limited | location = New York | publisher = [[Cambridge University Press]] | pages = [https://archive.org/details/microprocessorar00baer_825/page/n25 10] | date = 2010 | isbn = 978-0-521-76992-1 }}</ref>
: <math>S_\text{throughput} = \frac{Q_2}{Q_1} = \frac{\rho_2 A_2 T_1 W_2}{\rho_1 A_1 T_2 W_1} = \frac{\rho_2 A_2}{\rho_1 A_1} S_\text{latency},</math>
where
* ''S''<sub>throughput</sub> is the speedup in throughput of architecture 2 with respect to architecture 1;
* ''Q''<sub>1</sub> is the throughput of architecture 1;
* ''Q''<sub>2</sub> is the throughput of architecture 2.

==Examples==
===Using execution times===
We are testing the effectiveness of a [[branch predictor]] on the execution of a program. First, we execute the program with the standard branch predictor on the processor, which yields an execution time of 2.25 seconds. Next, we execute the program with our modified (and hopefully improved) branch predictor on the same processor, which produces an execution time of 1.50 seconds. In both cases the execution workload is the same. Using our speedup formula, we know
: <math>S_\text{latency} = \frac{L_\text{old}}{L_\text{new}} = \frac{2.25~\mathrm{s}}{1.50~\mathrm{s}} = 1.5.</math>
Our new branch predictor has provided a 1.5x speedup over the original.

===Using cycles per instruction and instructions per cycle===
We can also measure speedup in cycles per instruction (CPI), which is a latency. First, we execute the program with the standard branch predictor, which yields a CPI of 3. Next, we execute the program with our modified branch predictor, which yields a CPI of 2. In both cases the execution workload is the same, and neither architecture is pipelined or parallel. Using the speedup formula gives
: <math>S_\text{latency} = \frac{L_\text{old}}{L_\text{new}} = \frac{3~\text{CPI}}{2~\text{CPI}} = 1.5.</math>
We can also measure speedup in instructions per cycle ([[Instructions per cycle|IPC]]), which is a throughput and the inverse of CPI. A CPI of 3 corresponds to an IPC of 1/3 ≈ 0.33, and a CPI of 2 to an IPC of 0.5. Using the speedup formula gives
: <math>S_\text{throughput} = \frac{Q_\text{new}}{Q_\text{old}} = \frac{0.5~\text{IPC}}{0.33~\text{IPC}} = 1.5.</math>
We achieve the same 1.5x speedup, though we measured different quantities.

==Additional details==
Let ''S'' be the speedup of execution of a task and ''s'' the speedup of execution of the part of the task that benefits from the improvement of the resources of an architecture. ''Linear speedup'' or ''ideal speedup'' is obtained when {{nowrap|1=''S'' = ''s''}}. When running a task with linear speedup, doubling the local speedup doubles the overall speedup. As this is ideal, it is considered very good [[scalability]].

''Efficiency'' is a metric of the utilization of the resources of the improved system, defined as
: <math>\eta = \frac{S}{s}.</math>
Its value is typically between 0 and 1. Programs with linear speedup and programs running on a single processor have an efficiency of 1, while many difficult-to-parallelize programs have efficiency such as 1/ln(''s''){{citation needed|date=November 2010}} that approaches 0 as the number of processors {{nowrap|1=''A'' = ''s''}} increases.
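As a rough numerical sketch of the efficiency formula (illustrative only; the function name and the 1/ln(''s'') workload are hypothetical):

<syntaxhighlight lang="python">
import math

def efficiency(overall_speedup, local_speedup):
    """Efficiency eta = S / s; equals 1 for linear (ideal) speedup."""
    return overall_speedup / local_speedup

# Linear speedup: S = s, so efficiency is 1.
print(efficiency(8.0, 8.0))  # 1.0

# A hypothetical hard-to-parallelize program whose overall speedup
# grows only as s / ln(s): efficiency tends to 0 as s increases.
for s in (4, 16, 64, 256):
    print(s, efficiency(s / math.log(s), s))  # 0.72, 0.36, 0.24, 0.18
</syntaxhighlight>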
In engineering contexts, efficiency curves are more often used for graphs than speedup curves, since
* all of the area in the graph is useful (whereas in speedup curves half of the space is wasted);
* it is easy to see how well the improvement of the system is working;
* there is no need to plot a "perfect speedup" curve.
In marketing contexts, speedup curves are more often used, largely because they go up and to the right and thus appear better to the less informed.

==Super-linear speedup==
Sometimes a speedup of more than ''A'' when using ''A'' processors is observed in [[parallel computing]], which is called ''super-linear speedup''. Super-linear speedup rarely happens and often confuses beginners, who believe the theoretical maximum speedup should be ''A'' when ''A'' processors are used.

One possible reason for super-linear speedup in low-level computations is the [[CPU cache|cache effect]] resulting from the different [[Memory hierarchy|memory hierarchies]] of a modern computer: in parallel computing, not only do the numbers of processors change, but so does the size of accumulated caches from different processors. With the larger accumulated cache size, more or even all of the [[working set]] can fit into caches, and the memory access time reduces dramatically, which causes extra speedup in addition to that from the actual computation.<ref>{{cite conference | last1 = Benzi | first1 = John | last2 = Damodaran | first2 = M. | title = Parallel Three Dimensional Direct Simulation Monte Carlo for Simulating Micro Flows | conference = Parallel Computational Fluid Dynamics | book-title = Parallel Computational Fluid Dynamics 2007: Implementations and Experiences on Large Scale and Grid Computing | publisher = Springer | url = https://books.google.com/books?id=MOiZ2NJ8pywC | year = 2007 | page = 95 | access-date = 2013-03-21 }}</ref>

An analogous situation occurs when searching large datasets, such as the genomic data searched by [[BLAST (biotechnology)|BLAST]] implementations. There the accumulated RAM from each of the nodes in a cluster enables the dataset to move from disk into RAM, thereby drastically reducing the time required by, e.g., mpiBLAST to search it.<ref>{{Cite web| title=Green Destiny + mpiBLAST = Bioinfomagic | url=http://people.cs.vt.edu/~feng/presentations/030903-ParCo.pdf | archive-url=https://web.archive.org/web/20080221173824/http://people.cs.vt.edu/~feng/presentations/030903-ParCo.pdf | archive-date=2008-02-21}}</ref>

Super-linear speedups can also occur when performing [[backtracking]] in parallel: an exception in one thread can cause several other threads to backtrack early, before they reach the exception themselves.<ref>{{Cite book|last=Speckenmeyer|first=Ewald|title=Supercomputing |chapter=Superlinear speedup for parallel backtracking |series=Lecture Notes in Computer Science |date=1988 |volume=297|pages=985–993|doi=10.1007/3-540-18991-2_58|isbn=978-3-540-18991-6}}</ref> Super-linear speedups can also occur in parallel implementations of branch-and-bound for optimization:<ref>{{cite web|url=http://mat.tepper.cmu.edu/blog/?p=534#comment-3029|title=Gurobi versus CPLEX benchmarks|date=29 January 2009|website=cmu.edu|access-date=23 April 2018}}</ref> the processing of one node by one processor may affect the work other processors need to do for the other nodes.
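The cache effect can be illustrated with a deliberately simplified toy cost model (a sketch under assumed parameters, not drawn from the cited sources): each processor contributes a fixed amount of cache, and the per-unit memory cost falls as more of the working set fits in the accumulated cache.

<syntaxhighlight lang="python">
def toy_execution_time(processors, work=1e9, working_set_mb=64,
                       cache_per_processor_mb=8, hit_cost=1.0, miss_cost=4.0):
    """Toy model: the compute divides evenly across processors, but the
    per-unit cost drops as the accumulated cache covers more of the
    working set. All parameters are assumed, illustrative values."""
    total_cache = processors * cache_per_processor_mb
    fit = min(1.0, total_cache / working_set_mb)  # fraction served by cache
    cost_per_unit = fit * hit_cost + (1.0 - fit) * miss_cost
    return work * cost_per_unit / processors

t1 = toy_execution_time(1)
for p in (2, 4, 8, 16):
    # Speedup exceeds p (super-linear) because the accumulated cache
    # grows with p, lowering the per-unit cost as well.
    print(p, t1 / toy_execution_time(p))
</syntaxhighlight>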
==See also==
* [[Amdahl's law]]
* [[Gustafson's law]]
* [[Brooks's law]]
* [[Karp–Flatt metric]]
* [[Parallel slowdown]]
* [[Scalability]]

==References==
{{reflist}}

{{Parallel Computing}}

[[Category:Analysis of parallel algorithms]]