{{Short description|Synchronization mechanism for enforcing limits on access to a resource}}
In [[computer science]], a '''lock''' or '''mutex''' (from [[mutual exclusion]]) is a [[synchronization primitive]] that prevents state from being modified or accessed by multiple [[threads (computer science)|threads of execution]] at once. Locks enforce mutual exclusion [[concurrency control]] policies, and a variety of implementation methods exist for different applications.

==Types==
Generally, locks are ''advisory locks'', where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement ''mandatory locks'', where attempting unauthorized access to a locked resource forces an [[exception handling|exception]] in the entity attempting to make the access.

The simplest type of lock is a binary [[semaphore (programming)|semaphore]]. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.

Another way to classify locks is by what happens when the [[lock strategy]] prevents the progress of a thread. Most locking designs [[Blocking (computing)|block]] the [[Execution (computers)|execution]] of the [[Thread (computer science)|thread]] requesting the lock until it is allowed to access the locked resource. With a [[spinlock]], the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process rescheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.

Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more [[Atomic (computer science)|atomic]] instructions such as "[[test-and-set]]", "[[fetch-and-add]]" or "[[compare-and-swap]]". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.

[[Uniprocessor]] architectures have the option of using [[uninterruptible sequence]]s of instructions—using special instructions or instruction prefixes to disable [[interrupt]]s temporarily—but this technique does not work for [[multiprocessor]] shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial [[Synchronization (computer science)|synchronization]] issues.

An [[atomic operation]] is required because of concurrency, where more than one task executes the same logic. For example, consider the following [[C (programming language)|C]] code:

<syntaxhighlight lang="c">
if (lock == 0) {
    // lock free, set it
    lock = myPID;
}
</syntaxhighlight>

The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. [[Dekker's algorithm|Dekker's]] or [[Peterson's algorithm]] are possible substitutes if atomic locking operations are not available.

Careless use of locks can result in [[deadlock (computer science)|deadlock]] or [[livelock]].
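If atomic instructions are available, the race in the example above can be closed by performing the test and the set as one indivisible step. The following is a minimal sketch using the [[C11 (C standard revision)|C11]] <code>&lt;stdatomic.h&gt;</code> header; the <code>acquire</code> and <code>release</code> function names are invented for this illustration, while <code>lock</code> and <code>myPID</code> are carried over from the example:

<syntaxhighlight lang="c">
#include <stdatomic.h>

atomic_int lock = 0;            // 0 means the lock is free

void acquire(int myPID)
{
    int expected = 0;
    // Atomically: if lock == 0, set it to myPID; otherwise retry.
    // The test and the set can no longer be interleaved by another task.
    while (!atomic_compare_exchange_weak(&lock, &expected, myPID)) {
        expected = 0;           // the failed exchange stored the current owner here
    }
}

void release(void)
{
    atomic_store(&lock, 0);     // mark the lock free again
}
</syntaxhighlight>

Because a waiting task simply retries in a loop, this sketch behaves as a [[spinlock]]; a blocking design would instead ask the operating system to suspend the waiting thread.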
A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at [[Run time (program lifecycle phase)|run-time]]. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)

Some languages support locks syntactically. An example in [[C Sharp (programming language)|C#]] follows:

<syntaxhighlight lang="csharp">
public class Account // This is a monitor of an account
{
    // Use `object` in versions earlier than C# 13
    private readonly Lock _balanceLock = new();
    private decimal _balance = 0;

    public void Deposit(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance += amount;
        }
    }

    public void Withdraw(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance -= amount;
        }
    }
}
</syntaxhighlight>

C# introduced {{Mono|System.Threading.Lock}} in C# 13 on [[.NET]] 9.

The code <code>lock(this)</code> can lead to problems if the instance can be accessed publicly.<ref>{{cite web|title=lock Statement (C# Reference)|date=4 February 2013|url=http://msdn.microsoft.com/en-us/library/c5kehkcz(v=vs.100).aspx}}</ref>

Similar to [[Java (programming language)|Java]], C# can also synchronize entire methods, by using the MethodImplOptions{{Not a typo|.}}Synchronized attribute.<ref>{{cite web |access-date=2011-11-22 |publisher=MSDN |title=ThreadPoolPriority, and MethodImplAttribute |url=http://msdn.microsoft.com/en-us/magazine/cc163896.aspx}}</ref><ref>{{cite web |access-date=2011-11-22 |title=C# From a Java Developer's Perspective |url=http://www.25hoursaday.com/CsharpVsJava.html#attributes |url-status=dead |archive-url=https://archive.today/20130102015335/http://www.25hoursaday.com/CsharpVsJava.html |archive-date=2013-01-02}}</ref>

<syntaxhighlight lang="csharp">
[MethodImpl(MethodImplOptions.Synchronized)]
public void SomeMethod()
{
    // do stuff
}
</syntaxhighlight>

==Granularity==
Before being introduced to lock granularity, one needs to understand three concepts about locks:
* ''lock overhead'': the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage;
* ''lock [[resource contention|contention]]'': this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row);
* ''[[deadlock (computer science)|deadlock]]'': the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its ''[[granularity (parallel computing)|granularity]]''. The granularity is a measure of the amount of data the lock is protecting; the tradeoff is illustrated by the sketch below.
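As an illustration of this tradeoff, the following C sketch (using [[POSIX Threads|POSIX threads]]) contrasts a single lock for an entire table of rows with one lock per row; all of the names here (<code>rows</code>, <code>table_lock</code>, <code>row_locks</code>, the update functions) are invented for the example:

<syntaxhighlight lang="c">
#include <pthread.h>

#define NROWS 1024

static int rows[NROWS];

// Coarse granularity: one lock protects the whole table. Overhead is
// minimal, but threads updating *different* rows still contend.
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

void update_coarse(int row, int value)
{
    pthread_mutex_lock(&table_lock);
    rows[row] = value;
    pthread_mutex_unlock(&table_lock);
}

// Fine granularity: one lock per row. NROWS locks must be initialized
// and stored (more overhead), but threads contend only on the same row.
static pthread_mutex_t row_locks[NROWS];

void init_row_locks(void)
{
    for (int i = 0; i < NROWS; i++)
        pthread_mutex_init(&row_locks[i], NULL);
}

void update_fine(int row, int value)
{
    pthread_mutex_lock(&row_locks[row]);
    rows[row] = value;
    pthread_mutex_unlock(&row_locks[row]);
}
</syntaxhighlight>

With the coarse variant, two threads updating different rows serialize needlessly; with the fine variant they proceed in parallel, at the cost of creating and storing <code>NROWS</code> locks.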
In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less ''lock overhead'' when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased ''lock contention''. The coarser the lock, the higher the likelihood that it will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking, where each process must hold multiple locks from a common set of locks, can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a ''deadlock''.{{Citation needed|date=July 2011}}

In a [[database management system]], for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.

==Database locks==
{{main|Lock (database)}}
[[Lock (database)|Database locks]] can be used as a means of ensuring transaction synchronicity, i.e. when making transaction processing concurrent (interleaving transactions), using [[two-phase locking|two-phase locks]] ensures that the concurrent execution of transactions turns out to be equivalent to some serial ordering of those transactions. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using [[Wait-for graph|waits-for graphs]]. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.

There are mechanisms employed to manage the actions of multiple [[concurrent user]]s on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are ''pessimistic locking'' and ''optimistic locking'':

* ''Pessimistic locking'': a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
:: Where to use pessimistic locking: this is mainly used in environments where data contention (the degree to which users request data from the database system at any one time) is heavy; where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time. It is not appropriate for use in Web application development.
* ''[[Optimistic locking]]'': this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record. Any discrepancies between the initial read and the database record violate concurrency rules and hence cause the system to disregard the update request. An error message is generated and the user is asked to start the update process again. It improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates, since no users are locked out. However, some updates may fail. The downside is that constant update failures, due to high volumes of update requests from multiple concurrent users, can be frustrating for users.
:: Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications,<ref>{{cite web | url=http://msdn.microsoft.com/en-us/library/ms978496.aspx | title=Designing Data Tier Components and Passing Data Through Tiers | publisher=[[Microsoft]] | date=August 2002 | access-date=2008-05-30 | archive-url=https://web.archive.org/web/20080508154329/http://msdn.microsoft.com/en-us/library/ms978496.aspx | archive-date=2008-05-08 | url-status=dead }}</ref> where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.

== Lock compatibility table ==
Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called ''incompatible''; otherwise the locks are ''compatible''. Often, the blocking interactions between lock types are presented in the technical literature in a ''lock compatibility table''. The following is an example with the common, major lock types:

{| class="wikitable" style="text-align:center;"
|+Lock compatibility table
|-
! Lock type !! read-lock !! write-lock
|-
! read-lock
| '''✔''' || '''X'''
|-
! write-lock
| '''X''' || '''X'''
|}

*'''✔''' indicates compatibility
*'''X''' indicates incompatibility, i.e., a case when a lock of the first type (in the left column) on an object blocks a lock of the second type (in the top row) from being acquired on the same object (by another transaction).

An object typically has a queue of operations that transactions have requested, each waiting with its respective lock. The first blocked lock in the queue is acquired as soon as the existing blocking lock is removed from the object, and its respective operation is then executed. If a lock for an operation in the queue is not blocked by any existing lock (multiple compatible locks on the same object can be held concurrently), it is acquired immediately. The shared and exclusive modes in the table above are illustrated by the sketch below.
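For example, POSIX readers-writer locks embody exactly this two-mode table. The following is a minimal C sketch, an illustration of shared and exclusive modes rather than of any database lock manager; the <code>shared_data</code> variable and the function names are invented for the example:

<syntaxhighlight lang="c">
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data;

int read_value(void)
{
    pthread_rwlock_rdlock(&rw);   // read-lock: compatible with other read-locks
    int v = shared_data;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&rw);   // write-lock: incompatible with every other lock
    shared_data = v;
    pthread_rwlock_unlock(&rw);
}
</syntaxhighlight>

Any number of threads may hold the read-lock at once (the ✔ entry), while a thread holding the write-lock excludes all readers and writers (the X entries).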
'''Comment:''' In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".<ref>{{Cite web |date=2018-03-07 |title=Lock Based Concurrency Control Protocol in DBMS |url=https://www.geeksforgeeks.org/lock-based-concurrency-control-protocol-in-dbms/ |access-date=2023-12-28 |website=GeeksforGeeks |language=en-US}}</ref>

==Disadvantages==
Lock-based resource protection and thread/process synchronization have many disadvantages:
* Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is [[power cycling|power cycled]].
* Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a [[race condition]].)
* Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as [[deadlock (computer science)|deadlock]]s.
* Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
* Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by application programming to rigorous conventions.
* [[Priority inversion]]: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. [[Priority inheritance]] can be used to reduce priority-inversion duration. The [[priority ceiling protocol]] can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent [[deadlock (computer science)|deadlock]].
* [[Lock convoy|Convoying]]: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.

Some [[concurrency control]] strategies avoid some or all of these problems. For example, a [[funnel (Concurrent computing)|funnel]] or [[serializing tokens]] can avoid the biggest problem: deadlocks. Alternatives to locking include [[non-blocking synchronization]] methods, like [[lock-free and wait-free algorithms|lock-free]] programming techniques and [[transactional memory]]. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the ''application'' level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.

In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item).
Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of gracefully handling reports of impossible demands.{{Citation needed|date=November 2013}}

===Lack of composability===
One of lock-based programming's biggest problems is that "locks don't [[Function composition (computer science)|compose]]": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. [[Simon Peyton Jones]] (an advocate of [[software transactional memory]]) gives the following example of a banking application:<ref>{{Cite encyclopedia |title=Beautiful concurrency |first=Simon |last=Peyton Jones |encyclopedia=Beautiful Code: Leading Programmers Explain How They Think |editor1-first=Greg |editor1-last=Wilson |editor2-first=Andy |editor2-last=Oram |publisher=O'Reilly |year=2007 |url=http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/beautiful.pdf}}</ref> design a class {{mono|Account}} that allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another.

The lock-based solution to the first part of the problem is:

 '''class''' Account:
     '''member''' balance: Integer
     '''member''' mutex: Lock
 
     '''method''' deposit(n: Integer)
         mutex.lock()
         balance ← balance + n
         mutex.unlock()
 
     '''method''' withdraw(n: Integer)
         deposit(−n)

The second part of the problem is much more complicated. A {{mono|transfer}} routine that is correct ''for sequential programs'' would be

 '''function''' transfer(from: Account, to: Account, amount: Integer)
     from.withdraw(amount)
     to.deposit(amount)

In a concurrent program, this algorithm is incorrect because when one thread is halfway through {{mono|transfer}}, another might observe a state where {{mono|amount}} has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock:

 '''function''' transfer(from: Account, to: Account, amount: Integer)
     '''if''' from < to ''// arbitrary ordering on the locks''
         from.lock()
         to.lock()
     '''else'''
         to.lock()
         from.lock()
     from.withdraw(amount)
     to.deposit(amount)
     from.unlock()
     to.unlock()

This solution gets more complicated when more locks are involved, and the {{mono|transfer}} function needs to know about all of the locks, so they cannot be [[Encapsulation (object-oriented programming)|hidden]].

==Language support==
{{see also|Barrier (computer science)}}
Programming languages vary in their support for synchronization:
* [[Ada (programming language)|Ada]] provides protected objects that have visible protected subprograms or entries<ref>{{cite book | author = ISO/IEC 8652:2007 | title = Ada 2005 Reference Manual | chapter = Protected Units and Protected Objects | chapter-url = http://www.adaic.com/standards/1zrm/html/RM-9-4.html | quote = A protected object provides coordinated access to shared data, through calls on its visible protected operations, which can be protected subprograms or protected entries. | access-date = 2010-02-27 }}</ref> as well as rendezvous.<ref>{{cite book | author = ISO/IEC 8652:2007 | title = Ada 2005 Reference Manual | chapter = Example of Tasking and Synchronization | chapter-url = http://www.adaic.com/standards/1zrm/html/RM-9-11.html | access-date = 2010-02-27 }}</ref>
* The ISO/IEC [[C (programming language)|C]] standard has provided a standard [[mutual exclusion]] (locks) [[application programming interface]] (API) since [[C11 (C standard revision)|C11]]. The ISO/IEC [[C++]] standard has supported [[C++0x#Threading facilities|threading facilities]] since [[C++11]]. The [[OpenMP]] standard is supported by some compilers, and allows [[critical sections]] to be specified using pragmas. The [[POSIX Threads|POSIX pthread]] API provides lock support.<ref>{{cite web | url=http://www.cs.cf.ac.uk/Dave/C/node31.html#SECTION003110000000000000000 | title=Mutual Exclusion Locks | last=Marshall | first=Dave | date=March 1999 | access-date=2008-05-30}}</ref> [[Visual C++]] provides the <code>synchronize</code> attribute of methods to be synchronized, but this is specific to COM objects in the [[Microsoft Windows|Windows]] architecture and the [[Visual C++]] compiler.<ref>{{cite web | url=http://msdn.microsoft.com/en-us/library/34d2s8k3(VS.80).aspx | title=Synchronize | publisher=msdn.microsoft.com | access-date=2008-05-30}}</ref> C and C++ can easily access any native operating system locking features.
* [[C Sharp (programming language)|C#]] provides the <code>lock</code> keyword on a thread to ensure its exclusive access to a resource.
* [[Visual Basic (.NET)]] provides a <code>SyncLock</code> keyword like C#'s <code>lock</code> keyword.
* [[Java (programming language)|Java]] provides the keyword <code>synchronized</code> to lock code blocks, [[Method (computer programming)|methods]] or [[Object (computer science)|objects]]<ref>{{cite web | url=http://java.sun.com/docs/books/tutorial/essential/concurrency/sync.html | title=Synchronization | publisher=[[Sun Microsystems]] | access-date=2008-05-30}}</ref> and libraries featuring concurrency-safe data structures.
* [[Objective-C]] provides the keyword <code>@synchronized</code><ref>{{cite web | url=https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Multithreading/ThreadSafety/ThreadSafety.html | title=Apple Threading Reference | publisher=Apple, inc | access-date=2009-10-17}}</ref> to put locks on blocks of code and also provides the [[Class (computer programming)|classes]] NSLock,<ref>{{cite web | url=https://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSLock_Class/Reference/Reference.html | title=NSLock Reference | publisher=Apple, inc | access-date=2009-10-17}}</ref> NSRecursiveLock,<ref>{{cite web | url=https://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSRecursiveLock_Class/Reference/Reference.html | title=NSRecursiveLock Reference | publisher=Apple, inc | access-date=2009-10-17}}</ref> and NSConditionLock<ref>{{cite web | url=https://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSConditionLock_Class/Reference/Reference.html | title=NSConditionLock Reference | publisher=Apple, inc | access-date=2009-10-17}}</ref> along with the NSLocking protocol<ref>{{cite web | url=https://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Protocols/NSLocking_Protocol/Reference/Reference.html | title=NSLocking Protocol Reference | publisher=Apple, inc | access-date=2009-10-17}}</ref> for locking as well.
* [[PHP]] provides file-based locking<ref>{{cite web | url=http://php.net/manual/en/function.flock.php | title=flock }}</ref> as well as a <code>Mutex</code> class in the <code>pthreads</code> extension.<ref>{{cite web | url=http://php.net/manual/en/class.mutex.php | title=The Mutex class | access-date=2016-12-29 | archive-date=2017-07-04 | archive-url=https://web.archive.org/web/20170704152552/http://php.net/manual/en/class.mutex.php | url-status=dead }}</ref>
* [[Python (programming language)|Python]] provides a low-level [[Mutual exclusion|mutex]] mechanism with a <code>Lock</code> [[Class (computer programming)|class]] from the <code>threading</code> module.<ref>{{cite web | url=http://effbot.org/zone/thread-synchronization.htm | title=Thread Synchronization Mechanisms in Python | last=Lundh | first=Fredrik | date=July 2007 | access-date=2008-05-30 | archive-date=2020-11-01 | archive-url=https://web.archive.org/web/20201101025814/http://effbot.org/zone/thread-synchronization.htm | url-status=dead }}</ref>
* The ISO/IEC [[Fortran]] standard (ISO/IEC 1539-1:2010) provides the <code>lock_type</code> derived type in the intrinsic module <code>iso_fortran_env</code> and the <code>lock</code>/<code>unlock</code> statements since [[Fortran#Fortran 2008|Fortran 2008]].<ref>{{cite web | url=https://wg5-fortran.org/N1801-N1850/N1824.pdf | title=Coarrays in the next Fortran Standard | author=John Reid | year=2010 | access-date=2020-02-17}}</ref>
* [[Ruby (programming language)|Ruby]] provides a low-level [[Mutual exclusion|mutex]] object and no keyword.<ref>{{cite web | url=https://docs.ruby-lang.org/en/master/Thread/Mutex.html | title=class Thread::Mutex}}</ref>
* [[Rust (programming language)|Rust]] provides the <code>Mutex<T></code><ref>{{cite web |title=std::sync::Mutex - Rust |url=https://doc.rust-lang.org/std/sync/struct.Mutex.html |website=doc.rust-lang.org |access-date=3 November 2020}}</ref> struct.<ref>{{cite web |title=Shared-State Concurrency - The Rust Programming Language |url=https://doc.rust-lang.org/book/ch16-03-shared-state.html |website=doc.rust-lang.org |access-date=3 November 2020}}</ref>
* [[x86 assembly language]] provides the <code>LOCK</code> prefix on certain operations to guarantee their atomicity.
* [[Haskell]] implements locking via a mutable data structure called an <code>MVar</code>, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource ‘takes’ the value of the <code>MVar</code>, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty <code>MVar</code> results in the thread blocking until the resource is available.<ref name="marlow_conc_haskell">{{cite book|last1=Marlow|first1=Simon|author1-link=Simon Marlow|date=August 2013|title=Parallel and Concurrent Programming in Haskell|url=https://www.oreilly.com/library/view/parallel-and-concurrent/9781449335939/|publisher=[[O’Reilly Media]]|isbn=9781449335946|chapter=Basic concurrency: threads and MVars}}</ref> As an alternative to locking, an implementation of [[software transactional memory]] also exists.<ref>{{cite book|last1=Marlow|first1=Simon|author1-link=Simon Marlow|date=August 2013|title=Parallel and Concurrent Programming in Haskell|url=https://www.oreilly.com/library/view/parallel-and-concurrent/9781449335939/|publisher=[[O’Reilly Media]]|isbn=9781449335946|chapter=Software transactional memory}}</ref>
* [[Go (programming language)|Go]] provides a low-level [[Mutual exclusion|Mutex]] object in the standard library's [https://pkg.go.dev/sync sync] package.<ref>{{Cite web|title=sync package - sync - pkg.go.dev|url=https://pkg.go.dev/sync#Mutex|access-date=2021-11-23|website=pkg.go.dev}}</ref> It can be used for locking code blocks, [[Method (computer programming)|methods]] or [[Object (computer science)|objects]].

==Mutexes vs. semaphores==
{{excerpt|Semaphore (programming)#Semaphores vs. mutexes}}

==See also==
* [[Critical section]]
* [[Double-checked locking]]
* [[File locking]]
* [[Lock-free and wait-free algorithms]]
* [[Monitor (synchronization)]]
* [[Mutual exclusion]]
* [[Read/write lock pattern]]

==References==
{{reflist|2}}

==External links==
* [https://web.archive.org/web/20110620203242/http://www.futurechips.org/tips-for-power-coders/parallel-programming-understanding-impact-critical-sections.html Tutorial on Locks and Critical Sections]

{{Design patterns}}

{{DEFAULTSORT:Lock (Computer Science)}}
[[Category:Concurrency control]]
[[Category:Software design patterns]]
[[Category:Programming language comparisons]]
<!-- Hidden categories below -->
[[Category:Articles with example C code]]
[[Category:Articles with example C Sharp code]]
[[Category:Articles with example Java code]]
[[Category:Articles with example pseudocode]]