{{Short description|Equivalence of all cached copies of a memory location}} [[File:Cache coherence.svg|thumb|upright=1.7|If two clients have a cached copy of a particular memory block and one client changes the block, the other client's copy must be invalidated or updated. If it is not, the system is in an incoherent state: it contains two different records of the same memory block which both claim to be up-to-date.]] [[File:Non Coherent.gif|thumb|Incoherent caches: The caches have different values of a single address location.]] In [[computer architecture]], '''cache coherence''' is the uniformity of shared resource data that is stored in multiple [[Cache (computing)|local caches]]. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched.<ref name="advances_in_computers">{{Cite book |last=Marowka |first=Ami |url=https://linkinghub.elsevier.com/retrieve/pii/S0065245810790021 |title=Advances in Computers |chapter=Chapter 2 - Pitfalls and Issues of Manycore Programming |date=2010-01-01 |publisher=Elsevier |volume=79 |pages=71–117 |doi=10.1016/s0065-2458(10)79002-1|isbn=978-0-12-381027-4 }}</ref> A [[Cache coherency protocols (examples)|cache coherence protocol]] is used to maintain cache coherency. The two main types are [[Bus snooping|snooping]] and [[Directory-based cache coherence|directory-based]] protocols. Cache coherence is of particular relevance in [[multiprocessing]] systems, where each [[Central processing unit|CPU]] may have its own local cache of a shared memory resource.
[[File:Coherent.gif|thumb|Coherent caches: The value in all the caches' copies is the same.]] ==Overview== In a [[Shared memory architecture|shared memory]] multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline that ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.<ref name=":1">{{Cite book|url=http://sc.tamu.edu/systems/eos/nehalem.pdf|title=The Architecture of the Nehalem Processor and Nehalem-EP SMP Platforms|last=E. Thomadakis|first=Michael|publisher=Texas A&M University|year=2011|pages=30|url-status=dead|archive-url=https://web.archive.org/web/20140811023120/http://sc.tamu.edu/systems/eos/nehalem.pdf|archive-date=2014-08-11}}</ref> The following are the requirements for cache coherence:<ref name=":0">{{Cite book|title=Fundamentals of parallel multicore architecture|last=Solihin|first=Yan|oclc=884540034}}</ref> ;Write Propagation: Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches. ;Transaction Serialization: Reads/Writes to a single memory location must be seen by all processors in the same order. Theoretically, coherence can be performed at the load/store [[granularity]].
However, in practice it is generally performed at the granularity of cache blocks.<ref name=":2">{{Cite book|title=A primer on memory consistency and cache coherence|author1=Sorin, Daniel J.|author2=Hill, Mark D.|author3=Wood, David Allen|date=2011-01-01|publisher=Morgan & Claypool Publishers|oclc=726930429}}</ref> ==Definition== Coherence defines the behavior of reads and writes to a single address location.<ref name=":0" /> In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence:<ref name=":3">{{Cite book|title=Computer Organization and Design - 4th Edition|last=Patterson and Hennessy|isbn=978-0-12-374493-7}}</ref> # In a read made by a processor P to a location X that follows a write by the same processor P to X, with no writes to X by another processor occurring between the write and the read instructions made by P, X must always return the value written by P. # In a read made by a processor P1 to location X that follows a write by another processor P2 to X, with no other writes to X made by any processor occurring between the two accesses and with the read and write being sufficiently separated, X must always return the value written by P2. This condition defines the concept of coherent view of memory. Propagating the writes to the shared memory location ensures that all the caches have a coherent view of the memory. If processor P1 reads the old value of X, even after the write by P2, we can say that the memory is incoherent. The above conditions satisfy the Write Propagation criteria required for cache coherence. However, they are not sufficient as they do not satisfy the Transaction Serialization condition. To illustrate this better, consider the following example: A multi-processor system consists of four processors - P1, P2, P3 and P4, all containing cached copies of a shared variable ''S'' whose initial value is 0. 
Processor P1 changes the value of ''S'' (in its cached copy) to 10, after which processor P2 changes the value of ''S'' in its own cached copy to 20. If we ensure only write propagation, then P3 and P4 will certainly see the changes made to ''S'' by P1 and P2. However, P3 may see the change made by P1 after seeing the change made by P2 and hence return 10 on a read to ''S''. P4, on the other hand, may see the changes made by P1 and P2 in the order in which they are made and hence return 20 on a read to ''S''. The processors P3 and P4 now have an incoherent view of the memory. Therefore, in order to satisfy Transaction Serialization, and hence achieve cache coherence, the following condition along with the previous two mentioned in this section must be met: * Writes to the same location must be sequenced. In other words, if location X received two different values A and B, in this order, from any two processors, the processors can never read location X as B and then read it as A. The location X must be seen with values A and B in that order.<ref>Neupane, Mahesh (April 16, 2004). [https://web.archive.org/web/20100620091706/http://cse.csusb.edu/schubert/tutorials/csci610/w04/MN_Cache_Coherence.pdf "Cache Coherence"] (PDF). Archived from [http://cse.csusb.edu/schubert/tutorials/csci610/w04/MN_Cache_Coherence.pdf the original] (PDF) on 20 June 2010.</ref> The alternative definition of a coherent system is via the definition of [[sequential consistency]] memory model: "the cache coherent system must appear to execute all threads' loads and stores to a ''single'' memory location in a total order that respects the program order of each thread".<ref name=":2" /> Thus, the only difference between the cache coherent system and sequentially consistent system is in the number of address locations the definition talks about (single memory location for a cache coherent system, and all memory locations for a sequentially consistent system).
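The incoherent outcome in the P1–P4 example above can be illustrated with a toy simulation in which each cache applies the propagated writes in whatever order it happens to observe them. This is an illustrative sketch only; the function name and the write orderings are assumptions for the example, not part of any real protocol.

```python
# Toy model: write propagation without transaction serialization.
# Two writes to the shared variable S (10 by P1, then 20 by P2)
# reach every cache, but each cache may apply them in a
# different order.

def apply_writes(initial, observed_writes):
    """Apply a sequence of propagated writes to one cached copy,
    in the order that this cache observed them."""
    value = initial
    for new_value in observed_writes:
        value = new_value
    return value

# P4 observes the writes in issue order: 10, then 20.
p4_value = apply_writes(0, [10, 20])

# P3 observes P1's write *after* P2's write: 20, then 10.
p3_value = apply_writes(0, [20, 10])

print(p3_value, p4_value)  # 10 20 -> the two caches disagree on S
```

With transaction serialization, every cache would be forced to apply the two writes in the same order, so all caches would end with the same final value.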
Another definition is: "a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order".<ref>{{Cite journal|last1=Steinke|first1=Robert C.|last2=Nutt|first2=Gary J.|date=2004-09-01|title=A Unified Theory of Shared Memory Consistency|journal=J. ACM|volume=51|issue=5|pages=800–849|doi=10.1145/1017460.1017464|issn=0004-5411|arxiv=cs/0208027|s2cid=3206071}}</ref> Rarely, but especially in algorithms, coherence can instead refer to the [[locality of reference]]. Multiple copies of the same data can exist in different caches simultaneously, and if processors are allowed to update their own copies freely, an inconsistent view of memory can result. == Coherence mechanisms == {{main|Cache coherency protocols (examples)}} The two most common mechanisms of ensuring coherency are ''[[Bus sniffing|snooping]]'' and ''[[Directory-based cache coherence|directory-based]]'', each with its own benefits and drawbacks.<ref>{{cite book |title=Computer Architecture A Quantitative Approach |last1=Patterson |first1=David A. |last2=Hennessy |first2=John L. |isbn=1-55860-069-8 |date=1990 |pages=467–468 |publisher=Morgan Kaufmann Publishers}}</ref> Snooping-based protocols tend to be faster, if enough [[Memory bandwidth|bandwidth]] is available, since all transactions are a request/response seen by all processors. The drawback is that snooping is not scalable: every request must be broadcast to all nodes in a system, meaning that as the system gets larger, the size of the (logical or physical) bus and the bandwidth it provides must grow. Directories, on the other hand, tend to have longer latencies (with a three-hop request/forward/respond path) but use much less bandwidth, since messages are point-to-point rather than broadcast. For this reason, many of the larger systems (>64 processors) use this type of cache coherence.
=== Snooping === {{main|Bus snooping}} : First introduced in 1983,<ref>{{Cite journal|last1=Ravishankar|first1=Chinya|last2=Goodman|first2=James|date=February 28, 1983|title=Cache Implementation for Multiple Microprocessors|url=https://www.cs.ucr.edu/~ravi/Papers/NWConf/ravishankar_83.pdf|journal=Proceedings of IEEE COMPCON|pages=346–350}}</ref> snooping is a process where the individual caches monitor address lines for accesses to memory locations that they have cached.<ref name=":3" /> The ''write-invalidate protocols'' and ''write-update protocols'' make use of this mechanism. : For the snooping mechanism, a snoop filter reduces the snooping traffic by maintaining multiple entries, each representing a cache line that may be owned by one or more nodes. When one of the entries must be replaced, the snoop filter selects for replacement the entry representing the cache line or lines owned by the fewest nodes, as determined from a presence vector in each of the entries. A temporal or other type of algorithm is used to refine the selection if more than one cache line is owned by the fewest nodes.<ref>Rasmus Ulfsnes (June 2013). [http://www.diva-portal.org/smash/get/diva2:649627/FULLTEXT01.pdf "Design of a Snoop Filter for Snoop-Based Cache Coherency Protocols"] {{Webarchive|url=https://web.archive.org/web/20140201160231/http://www.diva-portal.org/smash/get/diva2:649627/FULLTEXT01.pdf |date=2014-02-01 }} (PDF). ''diva-portal.org''. Norwegian University of Science and Technology. Retrieved 2014-01-20.</ref> === Directory-based === {{main|Directory-based cache coherence}} : In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from the primary memory to its cache. When an entry is changed, the directory either updates or invalidates the other caches with that entry.
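The snoop-filter replacement policy described in the Snooping subsection can be sketched as a toy model. The entry structure, field names, and tie-breaking shortcut below are illustrative assumptions, not taken from any hardware specification.

```python
# Toy snoop-filter sketch: each entry tracks, via a presence
# vector (here a set of node IDs), which nodes may hold a given
# cache line. On replacement, the entry whose line is owned by
# the fewest nodes is evicted.

class SnoopFilterEntry:
    def __init__(self, line_address, presence):
        self.line_address = line_address
        self.presence = presence  # node IDs that may hold the line

    def owner_count(self):
        return len(self.presence)

def select_victim(entries):
    """Pick the entry held by the fewest nodes. A real filter
    would refine ties with a temporal (e.g. LRU) policy; this
    sketch simply takes the first minimum."""
    return min(entries, key=lambda entry: entry.owner_count())

entries = [
    SnoopFilterEntry(0x1000, {0, 1, 2}),
    SnoopFilterEntry(0x2000, {3}),
    SnoopFilterEntry(0x3000, {0, 2}),
]

victim = select_victim(entries)
print(hex(victim.line_address))  # 0x2000: held by only one node
```

Evicting the least-shared line minimizes the number of caches whose snoop traffic is affected by the eviction.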
[[Distributed shared memory]] systems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems.<ref>{{cite web |title=Lecture 18: Snooping vs. Directory Based Coherency |url=https://people.eecs.berkeley.edu/~pattrsn/252F96/Lecture18.pdf |website=Berkeley.edu |access-date=14 May 2023}}</ref> ==Coherence protocols== Coherence protocols apply cache coherence in multiprocessor systems. The intention is that two clients must never see different values for the same shared data. The protocol must implement the basic requirements for coherence. It can be tailor-made for the target system or application. Protocols can also be classified as snoopy or directory-based. Typically, early systems used directory-based protocols where a directory would keep track of the data being shared and the sharers. In snoopy protocols, the transaction requests (to read, write, or upgrade) are sent out to all processors. All processors snoop the request and respond appropriately. Write propagation in snoopy protocols can be implemented by either of the following methods: ;Write-invalidate: When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location, which forces a read from main memory of the new value on its next access.<ref name=":3" /> ;Write-update: When a write operation is observed to a location that a cache has a copy of, the cache controller updates its own copy of the snooped memory location with the new data. If the protocol design states that whenever any copy of the shared data is changed, all the other copies must be "updated" to reflect the change, then it is a write-update protocol. If the design states that a write to a cached copy by any processor requires other processors to discard or invalidate their cached copies, then it is a write-invalidate protocol. However, scalability is one shortcoming of broadcast protocols.
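The two write-propagation policies can be contrasted with a toy model in which caches are plain dictionaries and every cache snoops each write. The function and data layout are illustrative assumptions for the example, not a description of real cache-controller hardware.

```python
# Toy model contrasting write-invalidate and write-update
# snooping. Each cache is a dict mapping address -> value;
# every cache observes (snoops) writes made by the others.

def snoop_write(policy, caches, writer, address, value):
    """Apply a write by cache `writer`; the other caches that
    hold the line react according to the chosen policy."""
    caches[writer][address] = value
    for cpu, cache in enumerate(caches):
        if cpu == writer or address not in cache:
            continue
        if policy == "invalidate":
            del cache[address]      # copy discarded; next read misses
        elif policy == "update":
            cache[address] = value  # copy refreshed in place

# Write-invalidate: the other holder's copy disappears.
inval_caches = [{0x40: 1}, {0x40: 1}, {}]
snoop_write("invalidate", inval_caches, 0, 0x40, 7)
print(inval_caches)   # [{64: 7}, {}, {}]

# Write-update: the other holder's copy is refreshed.
update_caches = [{0x40: 1}, {0x40: 1}, {}]
snoop_write("update", update_caches, 0, 0x40, 7)
print(update_caches)  # [{64: 7}, {64: 7}, {}]
```

The sketch also shows why both policies are broadcast-based, and therefore share the scalability limit noted above: every write must be seen by every cache.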
Various models and protocols have been devised for maintaining coherence, such as [[MSI protocol|MSI]], [[MESI protocol|MESI]] (also known as the Illinois protocol), [[MOSI protocol|MOSI]], [[MOESI protocol|MOESI]], [[MERSI protocol|MERSI]], [[MESIF protocol|MESIF]], [[Write-once (cache coherence)|write-once]], Synapse, Berkeley, [[Firefly (cache coherence protocol)|Firefly]] and [[Dragon protocol]].<ref name=":1" /> In 2011, [[ARM Ltd]] proposed the AMBA 4 ACE<ref>{{Cite book|title=Formal Analysis of the ACE Specification for Cache Coherent Systems-on-Chip|series=Formal Methods for Industrial Critical Systems|last=Kriouile|date=16 September 2013|publisher=Springer Berlin Heidelberg|isbn=978-3-642-41010-9}}</ref> for handling coherency in [[System on a chip|SoCs]]. The AMBA CHI (Coherent Hub Interface) specification<ref>{{Cite web|last=Ltd|first=Arm|title=AMBA {{!}} AMBA 5|url=https://developer.arm.com/architectures/system-architectures/amba/amba-5|access-date=2021-04-27|website=Arm Developer|language=en}}</ref> from [[Arm Ltd.|ARM Ltd]], which belongs to the AMBA 5 group of specifications, defines the interfaces for the connection of fully coherent processors. ==See also== * [[Consistency model]] * [[Directory-based coherence]] * [[Memory barrier]] * [[Non-uniform memory access]] (NUMA) * [[False sharing]] ==References== {{Reflist|30em}} ==Further reading== *{{cite book |last1=Patterson |first1=David |author-link1=David Patterson (scientist) |last2=Hennessy |first2=John |author-link2=John L.
Hennessy |title=Computer Organization and Design |year=2009 |edition=4th |publisher=[[Morgan Kaufmann]] |isbn=978-0-12-374493-7}} *{{cite book |last1=Handy |first1=Jim |title=The Cache Memory Book |year=1998 |edition=2nd |publisher=[[Morgan Kaufmann]] |isbn=9780123229809}} *{{cite book |last1=Sorin |first1=Daniel |last2=Hill |first2=Mark |last3=Wood |first3=David |title=A Primer on Memory Consistency and Cache Coherence |year=2011 |publisher=[[Morgan and Claypool]] |isbn=978-1608455645 |url=https://www.ece.cmu.edu/~ece847c/S15/lib/exe/fetch.php?media=part2_2_sorin12.pdf|access-date=20 October 2017}} *{{cite journal|last1=Steinke|first1=Robert C.|last2=Nutt|first2=Gary J.|title=A unified theory of shared memory consistency|journal=Journal of the ACM|date=1 September 2004|volume=51|issue=5|pages=800–849|doi=10.1145/1017460.1017464|arxiv=cs/0208027|s2cid=3206071}} {{Parallel_computing}} [[Category:Cache coherency]] [[Category:Parallel computing]] [[Category:Concurrent computing]] [[Category:Consistency models]]