{{Short description|Speeding up Turing machines by increasing tape symbol complexity}}
In [[computational complexity theory]], the '''linear speedup theorem''' for [[Turing machine]]s states that given any [[real number|real]] ''c'' > 0 and any ''k''-tape Turing machine solving a problem in time ''f''(''n''), there is another ''k''-tape machine that solves the same problem in time at most {{nowrap|''f''(''n'')/''c'' + 2''n'' + 3}}, where ''k'' > 1.<ref name=papadi>{{cite book|author = Christos Papadimitriou|author-link = Christos Papadimitriou| title = Computational Complexity | chapter = 2.4. Linear speedup | publisher = Addison-Wesley | year = 1994}}</ref><ref name=sudkamp>{{cite book|author = Thomas A. Sudkamp|author-link = Thomas A. Sudkamp| title = Languages and Machines: An Introduction to the Theory of Computer Science | chapter = 14.2 Linear Speedup | publisher = Addison-Wesley | year = 1994}}</ref> If the original machine is [[Non-deterministic Turing machine|non-deterministic]], then the new machine is also non-deterministic. The constants 2 and 3 in 2''n'' + 3 can be lowered, for example, to ''n'' + 2.<ref name=papadi/>

The theorem also holds for Turing machines with a 1-way, [[Read-only Turing machine|read-only]] input tape and <math>k\ge 1</math> work tapes.<ref name="WW">{{cite book |last1=Wagner |first1=K. |title=Computational Complexity |last2=Wechsung |first2=G. |date=1986 |publisher=Springer |isbn=978-9027721464}}</ref> For single-tape Turing machines, linear speedup holds for machines with execution time at least <math>n^2</math>. It provably does not hold for machines with time <math>t(n)\in \Omega(n\log n)\cap o(n^2)</math>.<ref name="WW" />

==Proof==
The construction is based on packing several tape symbols of the original machine ''M'' into one tape symbol of the new machine ''N''. It has an effect similar to using longer words and commands in processors: it speeds up the computation but increases the machine size.
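The packing idea can be sketched in a few lines of code. This is an illustrative sketch only, not part of the standard proof presentation; the helper names <code>pack</code> and <code>unpack</code> and the choice of tuples as packed symbols are assumptions made for the example.

```python
# Sketch: packing m tape symbols of the original machine M into single
# (tuple-valued) tape symbols of the sped-up machine N.
BLANK = "_"

def pack(tape, m=3):
    """Group a tape (list of symbols) into tuples of m symbols,
    padding the final block with blanks."""
    padded = tape + [BLANK] * (-len(tape) % m)
    return [tuple(padded[i:i + m]) for i in range(0, len(padded), m)]

def unpack(packed):
    """Flatten packed symbols back into the original tape contents."""
    return [s for block in packed for s in block]

tape = ["_", "a", "b", "b", "a", "b", "b", "a"]
print(pack(tape))  # [('_', 'a', 'b'), ('b', 'a', 'b'), ('b', 'a', '_')]
```

With ''m'' = 3, each packed symbol is an element of <math>\Sigma^3</math>, so the enlarged alphabet is <math>\Sigma \cup \Sigma^3</math>, as in the construction below.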
How many old symbols are packed into a new symbol depends on the desired speed-up. Suppose the new machine packs three old symbols into a new symbol. Then the alphabet of the new machine is <math>\Sigma \cup \Sigma^3</math>: it consists of the original symbols and the packed symbols. The new machine has the same number ''k'' > 1 of tapes. A state of ''N'' consists of the following components:
* the state of ''M'';
* for each tape, three packed symbols that describe the packed symbol under the head, the packed symbol on the left, and the packed symbol on the right; and
* for each tape, the original head position within the packed symbol under the head of ''N''.

The new machine ''N'' starts by encoding the given input into a new alphabet (that is why its alphabet must include <math>\Sigma</math>). For example, if the input to 2-tape ''M'' is on the left, then after the encoding the tape configuration of ''N'' is on the right:
:{|
|-
| [ # || _ || a || b || b || a || b || b || a || _ || ...] || || [ # || (_,_,_) || (_,_,_) || (_,_,_) || ...]
|-
| [ # || _ || _ || _ || _ || _ || _ || _ || _ || _ || ...] || || [ # || (_,a,b) || (b,a,b) || (b,a,_) || ...]
|}
The new machine packs three old symbols (e.g., the blank symbol ''_'', the symbol ''a'', and the symbol ''b'') into a new symbol (here (_,''a'',''b'')) and copies it to the second tape, while erasing the first tape. At the end of the initialization, the new machine directs its head to the beginning. Overall, this takes 2''n'' + 3 steps. After the initialization, the state of ''N'' is <math>(q_0; ~~~?, (\_,\_,\_), ?; ~~~?, (\_,a,b), ?; ~~~ [1,1])</math>, where the symbol <math>?</math> means that it will be filled in by the machine later; the symbol <math>[1,1]</math> means that the head of the original machine points to the first symbol inside <math>(\_,\_,\_)</math> and <math>(\_,a,b)</math>.
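The bookkeeping for the head position can be made concrete with a small arithmetic sketch (not from the sources; the function name and 1-indexing convention are assumptions chosen to match the example above): a head at cell ''p'' of ''M''&#39;s tape sits inside a particular packed symbol of ''N''&#39;s tape, at a particular offset within it.

```python
def packed_position(p, m=3):
    """Map a 1-indexed head position p on M's tape to the pair
    (index of the packed symbol on N's tape, 1-indexed offset
    of the original head within that packed symbol)."""
    return ((p - 1) // m + 1, (p - 1) % m + 1)

print(packed_position(1))  # (1, 1): start of the tape, as after initialization
print(packed_position(6))  # (2, 3): third cell of the second packed symbol
```

This pair is exactly what ''N'' records in its state as the "original head position within the packed symbol under the head".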
Now the machine starts simulating ''m'' = 3 transitions of ''M'' using six of its own transitions (in this concrete case, there will be no speed-up, but in general ''m'' can be much larger than six). Let the configurations of ''M'' and ''N'' be:
:{|
|-
| [ # || _ || _ || b || b || a || '''b''' || b || a || _ || ...] || || [ # || (_,a,b) || (b,a,b) || ('''b''',_,_) || ...]
|-
| [ # || _ || a || b || '''b''' || a || b || b || _ || _ || ...] || || [ # || (_,_,'''b''') || (b,a,b) || (b,a,_) || ...]
|}
where the bold symbols indicate the head position. The state of ''N'' is <math>(q; ~~~?, (\_,\_,b), ?; ~~~?, (b,\_,\_), ?; ~~~ [3,1])</math>. Now the following happens:
* ''N'' moves right, left, left, right. After the four moves, the machine ''N'' has all its <math>?</math> filled, and its state becomes <math>(q; ~~~\#, (\_,\_,b), (b,a,b); ~~~(b,a,b), (b,\_,\_), (\_,\_,\_); ~~~ [3,1])</math>
* Now ''N'' updates its symbols and state according to ''m'' = 3 transitions of the original machine. This may require two moves (update the current symbol and update one of its adjacent symbols). Suppose the original machine moves as follows (with the corresponding configuration of ''N'' on the right):
:{|
|-
| [ # || _ || _ || _ || _ || _ || '''b''' || b || a || _ || ...] || || [ # || (_,a,b) || ('''b''',_,_) || (_,_,_) || ...]
|-
| [ # || _ || a || b || '''b''' || _ || _ || _ || _ || _ || ...] || || [ # || (_,_,_) || (_,_,'''b''') || (b,a,_) || ...]
|}
Thus, the state of ''N'' becomes <math>(q'; ~~~?, (\_,\_,b), ?; ~~~?, (b,\_,\_), ?; ~~~ [3,1])</math>.

=== Complexity ===
Initialization requires 2''n'' + 3 steps. In the simulation, 6 steps of ''N'' simulate ''m'' steps of ''M''. Choosing ''m'' > 6''c'' bounds the running time by <math>f(n)/c + 2n + 3.</math>

== Tape compression ==
{{Anchor|Tape compression theorem}}
The proof of the speedup theorem clearly hinges on the capability to compress storage by replacing the alphabet with a larger one.
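The step count behind the complexity bound can be checked numerically. The sketch below is illustrative only (the function name and the specific values of ''f'', ''n'', ''m'' are assumptions, not from the sources): ''N'' spends 2''n'' + 3 steps on initialization and at most 6 steps per simulated block of ''m'' steps of ''M''.

```python
import math

def simulated_steps(f, n, m):
    """Upper bound on N's running time: 2n + 3 initialization steps,
    then at most 6 steps of N per block of m steps of M."""
    return 6 * math.ceil(f / m) + 2 * n + 3

# For a target speed-up c = 2, any m > 6c works; take m = 13.
c, m = 2, 13
f, n = 10_000, 100
print(simulated_steps(f, n, m) <= f / c + 2 * n + 3)  # True
```

For large enough ''f''(''n''), the term 6·⌈''f''(''n'')/''m''⌉ stays below ''f''(''n'')/''c'' precisely because ''m'' > 6''c''.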
Specifically, it depends on the '''tape compression theorem''':<ref name=":1">{{Cite book |last1=Balcázar |first1=José Luis |title=Structural Complexity I |last2=Díaz |first2=Josep |last3=Gabarró |first3=Joaquim |publisher=Springer-Verlag |year=1988 |isbn=3-540-18622-0}}</ref>{{Pg|page=|location=Theorem 2.1, 2.2}}<blockquote>If a language <math>L</math> is accepted by a Turing machine within space <math>s(n)</math>, then for any <math>c > 0</math>, there exists some Turing machine that accepts it within space <math>c \cdot s(n)</math>. The same holds for non-deterministic Turing machines.</blockquote>For non-deterministic single-tape Turing machines of time complexity <math>T(n) \ge n^2</math>, linear speedup can be achieved without increasing the alphabet.<ref>{{cite journal |last1=Geffert |first1=Viliam |date=1993 |title=A speed-up theorem without tape compression |journal=Theoretical Computer Science |volume=118 |issue=1 |pages=49–65 |doi=10.1016/0304-3975(93)90362-W |doi-access=free}}</ref>

== Dependence on the shape of storage ==
[[Kenneth W. Regan|Regan]]<ref>{{cite journal |last1=Regan |first1=Kenneth W. |title=Linear time and memory-efficient computation |journal=SIAM Journal on Computing |date=1996 |volume=25 |issue=1 |pages=133–168}}</ref> considered a property of a computational model called information vicinity. This property is related to the memory structure: a Turing machine has linear vicinity, while a [[Pointer machine#KUM|Kolmogorov-Uspenskii machine]] and other [[pointer machine]]s have an exponential one. Regan's thesis is that the existence of linear speedup depends on the model having a polynomial information vicinity. The salient point in this claim is that a model with exponential vicinity will not have speedup even if changing the alphabet is allowed (for models with a discrete memory that stores symbols). Regan did not, however, prove any general theorem of this kind.
Hühne<ref>{{cite journal |last1=Hühne |first1=Martin |title=Linear Speed-Up Does not Hold on Turing Machines with Tree Storages |journal=Information Processing Letters |date=1993 |volume=47 |issue=6 |pages=313–318 |doi=10.1016/0020-0190(93)90078-N}}</ref> proved that if we require the speedup to be obtained by an on-line simulation (which is the case for the speedup on ordinary Turing machines), then linear speedup does not exist on machines with '''tree storage'''.

==References==
{{Reflist}}

[[Category:Theorems in computational complexity theory]]

[[de:Speedup-Theorem]]