Simultaneous multithreading
== Details ==
The term ''multithreading'' is ambiguous, because a single CPU core can run not only multiple threads simultaneously but also multiple tasks (with different [[page table]]s, different [[task state segment]]s, different [[protection ring]]s, different [[Task state segment#I/O port permissions|I/O permissions]], etc.). Although they run on the same core, the tasks are completely separated from each other. Multithreading is similar in concept to [[preemptive multitasking]] but is implemented at the thread level of execution in modern superscalar processors.

Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being [[temporal multithreading]] (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support many more.<ref>{{cite web |title=The First Direct Mesh-to-Mesh Photonic Fabric |url=https://hc2023.hotchips.org/assets/program/conference/day2/Interconnects/HC23.Intel.JasonHoward.v3.pdf#page=5 |access-date=2024-02-08 |archive-date=2024-02-08 |archive-url=https://web.archive.org/web/20240208201646/https://hc2023.hotchips.org/assets/program/conference/day2/Interconnects/HC23.Intel.JasonHoward.v3.pdf#page=5}}</ref>

Because SMT inevitably increases contention for shared resources, measuring its effectiveness, or agreeing on it, can be difficult. However, measurements of the [[Electrical efficiency|energy efficiency]] of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT ([[hyper-threading]]) implementations found that in 45 nm and 32 nm implementations, SMT is extremely energy efficient, even with in-order Atom processors.<ref name="asplos11">ASPLOS'11</ref> In modern systems, SMT effectively exploits concurrency with very little additional dynamic power. That is, even when performance gains are minimal, the power consumption savings can be considerable.<ref name="asplos11"/>

Some researchers{{who|date=June 2019}} have shown that the extra threads can be used proactively to seed a [[shared resource]] like a cache, improving the performance of another single thread, and claim that this shows SMT does more than just increase efficiency. Others{{who|date=June 2019}} use SMT to provide redundant computation, for some level of error detection and recovery. However, in most current cases, SMT is about hiding [[memory latency]], increasing efficiency, and increasing throughput of computations per amount of hardware used.{{citation needed|date=December 2018}}
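Each hardware thread is typically presented to software as an additional logical processor. The following sketch is an illustration only (not drawn from the cited sources); it assumes a Linux system that exposes the sysfs CPU-topology files and prints which logical CPUs are SMT siblings sharing a physical core:

<syntaxhighlight lang="c">
/* Sketch: list SMT siblings per logical CPU, assuming a Linux system
 * that provides /sys/devices/system/cpu/cpuN/topology/thread_siblings_list.
 * On a processor with two hardware threads per core, each logical CPU
 * typically reports one sibling. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);   /* number of online logical CPUs */
    for (long cpu = 0; cpu < n; cpu++) {
        char path[128], siblings[64];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%ld/topology/thread_siblings_list", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                          /* sysfs entry may be absent */
        if (fgets(siblings, sizeof siblings, f))
            printf("logical CPU %ld shares a core with: %s", cpu, siblings);
        fclose(f);
    }
    return 0;
}
</syntaxhighlight>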