==Single-threaded vs multithreaded programs==
In [[computer programming]], ''single-threading'' is the processing of one [[Machine code|instruction]] at a time.<ref>{{Cite book |first1=Raúl |last1=Menéndez |first2=Doug |last2=Lowe |url=https://books.google.com/books?id=j1t1u_UniU0C&q=%22single+threading%22 |title=Murach's CICS for the COBOL Programmer |date=2001 |publisher=Mike Murach & Associates |isbn=978-1-890774-09-7 |page=512}}</ref> In the formal analysis of variable [[Semantics (computer science)|semantics]] and process state, the term ''single threading'' is also used differently to mean "backtracking within a single thread", a usage common in the [[functional programming]] community.<ref>{{Cite book |first1=Peter William |last1=O'Hearn |first2=R. D. |last2=Tennent |url=https://books.google.com/books?id=btp58ihqgccC&pg=PA157 |title=ALGOL-like languages |date=1997 |publisher=[[Birkhäuser Verlag]] |isbn=978-0-8176-3937-2 |volume=2 |page=157}}</ref>

Multithreading is mainly found in multitasking operating systems. It is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable [[parallel computing|parallel execution]] on a [[multiprocessing]] system.

Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. Thread libraries also offer data synchronization functions.

===Threads and data synchronization===
{{anchor|Synchronization}}
{{Main|Thread safety}}
Threads in the same process share the same address space. This allows concurrently running code to [[coupling (computer science)|couple]] tightly and conveniently exchange data without the overhead or complexity of [[inter-process communication|inter-process communication (IPC)]]. When shared between threads, however, even simple data structures become prone to [[Race condition#Computing|race conditions]] if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.

To prevent this, threading [[application programming interface]]s (APIs) offer [[synchronization primitive]]s such as [[mutex]]es to [[Lock (computer science)|lock]] data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a [[spinlock]]. Both of these may sap performance and force processors in [[symmetric multiprocessing]] (SMP) systems to contend for the memory bus, especially if the [[Lock (computer science)#Granularity|granularity]] of the locking is too fine. Other synchronization APIs include [[condition variables]], [[critical section]]s, [[Semaphore (programming)|semaphores]], and [[Monitor (synchronization)|monitors]].
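The following minimal sketch illustrates both ideas using the C++ standard library's <code>std::thread</code> and <code>std::mutex</code> (chosen here purely for illustration; POSIX threads, Java threads and other threading libraries follow the same pattern): each thread is created by passing a function to the library, and a mutex serializes the threads' updates to a shared counter. Without the lock, the read-modify-write increment would be a race condition.

<syntaxhighlight lang="cpp">
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int counter = 0;               // data shared by all threads
std::mutex counter_mutex;      // protects counter

// Each thread runs this function; the library call that creates the
// thread takes the function (and its arguments) as parameters.
void worker(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // acquire; released at end of scope
        ++counter;             // safe: only one thread can execute this at a time
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, 100000);  // create a thread running worker(100000)
    for (std::thread& t : threads)
        t.join();                              // wait for every thread to finish
    std::cout << counter << '\n';              // always 400000; without the mutex, usually less
}
</syntaxhighlight>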
===Thread pools===
{{Main|Thread pool pattern}}
A popular programming pattern involving threads is that of [[Thread pool pattern|thread pools]], where a set number of threads are created at startup and then wait for tasks to be assigned. When a new task arrives, a waiting thread wakes up, completes the task and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed, and it takes thread management out of the application developer's hands and leaves it to a library or the operating system, which is better suited to optimize thread management.
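As an illustrative sketch only (again using the C++ standard library; the class and member names are invented for this example and do not come from any particular pool implementation), a fixed-size pool can be built from a task queue protected by a mutex and a condition variable on which idle workers sleep:

<syntaxhighlight lang="cpp">
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Fixed-size thread pool: workers are created once and reused for every task.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t thread_count) {
        for (std::size_t i = 0; i < thread_count; ++i)
            workers_.emplace_back([this] { run(); });    // start each worker thread
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        condition_.notify_all();                         // wake every worker so it can exit
        for (std::thread& worker : workers_)
            worker.join();
    }

    // Queue a task; one sleeping worker is woken to pick it up.
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        condition_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                // Sleep until a task is available or the pool is shutting down.
                condition_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty())
                    return;                              // drain remaining tasks, then exit
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();                                      // run the task outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable condition_;
    bool done_ = false;
};
</syntaxhighlight>

A caller would construct the pool once, for example <code>ThreadPool pool(4);</code>, and then call <code>pool.submit(task)</code> for each unit of work; the worker threads are reused rather than created and destroyed per task.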
===Multithreaded programs vs single-threaded programs pros and cons===
Multithreaded applications have the following advantages over single-threaded ones:
* ''Responsiveness'': multithreading can allow an application to remain responsive to input. In a one-thread program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a ''worker thread'' that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive: [[non-blocking I/O]] and/or [[Unix signals]] are available for obtaining similar results.<ref>{{Cite journal |first=Sergey |last=Ignatchenko |title=Single-Threading: Back to the Future? |url=http://accu.org/index.php/journals/1634 |journal=[[Overload (magazine)|Overload]] |issue=97 |pages=16–19 |date=August 2010 |publisher=[[ACCU (organisation)|ACCU]]}}</ref>
* ''Parallelization'': applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores. GPU computing environments like [[CUDA]] and [[OpenCL]] use the multithreading model, where dozens to hundreds of threads run in [[Data parallelism|parallel across data]] on a [[Manycore processor|large number of cores]]. This, in turn, enables better system utilization and, provided that synchronization costs do not eat up the benefits, can provide faster program execution.

Multithreaded applications have the following drawbacks:
* ''[[#Synchronization|Synchronization]]'' complexity and related bugs: when using shared resources typical of threaded programs, the [[programmer]] must be careful to avoid [[Race condition#Computing|race conditions]] and other non-intuitive behaviors. For data to be correctly manipulated, threads will often need to [[Rendezvous problem|rendezvous]] in time in order to process it in the correct order. Threads may also require [[mutual exclusion|mutually exclusive]] operations (often implemented using [[lock (computer science)|mutexes]]) to prevent common data from being read or overwritten in one thread while being modified by another. Careless use of such primitives can lead to [[deadlock (computer science)|deadlock]]s, livelocks or [[race condition|races]] over resources. As [[Edward A. Lee]] has written: "Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly non-deterministic, and the job of the programmer becomes one of pruning that nondeterminism."<ref name="Lee" />
* ''Being untestable'': in general, multithreaded programs are non-deterministic and, as a result, untestable. In other words, a multithreaded program can easily have bugs which never manifest on a test system, manifesting only in production.<ref>{{Cite journal |title=Multi-threading at Business-logic Level is Considered Harmful |url=https://accu.org/journals/overload/23/128/ignatchenko_2134/ |first=Sergey |last=Ignatchenko |journal=[[Overload (magazine)|Overload]] |issue=128 |pages=4–7 |date=August 2015 |publisher=[[ACCU (organisation)|ACCU]]}}</ref><ref name="Lee">{{Cite web |first=Edward |last=Lee |date=January 10, 2006 |title=The Problem with Threads |url=http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html |publisher=UC Berkeley}}</ref> This can be alleviated by restricting inter-thread communication to certain well-defined patterns (such as message-passing).
* ''Synchronization costs'': as a thread context switch on modern CPUs can cost up to 1 million CPU cycles,<ref>{{Cite web |author='No Bugs' Hare |date=12 September 2016 |title=Operation Costs in CPU Clock Cycles |url=http://ithare.com/infographics-operation-costs-in-cpu-clock-cycles/}}</ref> writing efficient multithreaded programs is difficult. In particular, care must be taken to keep inter-thread synchronization from being too frequent (see the sketch below).
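For instance, here is a minimal sketch (C++ standard library only; the function and variable names are invented for illustration) of keeping synchronization infrequent: each thread sums its own slice of the data into a private local variable and takes the shared lock exactly once to merge its result, rather than locking for every element.

<syntaxhighlight lang="cpp">
#include <mutex>
#include <thread>
#include <vector>

long long total = 0;           // shared result
std::mutex total_mutex;        // protects total

// Sum data[begin, end) into a private local, then merge under the lock once.
void sum_slice(const std::vector<int>& data, std::size_t begin, std::size_t end) {
    long long local = 0;       // no synchronization needed for thread-private data
    for (std::size_t i = begin; i < end; ++i)
        local += data[i];
    std::lock_guard<std::mutex> lock(total_mutex);
    total += local;            // one lock acquisition per thread, not per element
}

int main() {
    std::vector<int> data(1000000, 1);
    std::size_t mid = data.size() / 2;
    std::thread t1(sum_slice, std::cref(data), std::size_t{0}, mid);
    std::thread t2(sum_slice, std::cref(data), mid, data.size());
    t1.join();
    t2.join();
    // total is now 1000000; locking once per element would be dramatically slower.
}
</syntaxhighlight>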