=== Simple recency-based policies ===

==== {{Anchor|LRU}}Least Recently Used (LRU) ====
LRU discards the least recently used items first. The algorithm requires keeping track of what was used and when, which is cumbersome. It requires "age bits" for [[CPU cache#Cache entries|cache lines]], and tracks the least recently used cache line based on these age bits. When a cache line is used, the age of the other cache lines changes. LRU is [[Page replacement algorithm#Variants on LRU|a family of caching algorithms]] that includes 2Q by Theodore Johnson and Dennis Shasha<ref>{{Cite journal |last1=Johnson |first1=Theodore |last2=Shasha |first2=Dennis |date=1994-09-12 |title=2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm |journal=Proceedings of the 20th International Conference on Very Large Data Bases |series=VLDB '94 |location=San Francisco, CA |publisher=Morgan Kaufmann Publishers Inc. |pages=439–450 |isbn=978-1-55860-153-6 |s2cid=6259428 |url=http://www.vldb.org/conf/1994/P439.PDF}}</ref> and LRU/K by Pat O'Neil, Betty O'Neil and Gerhard Weikum.<ref>{{Cite book|publisher = ACM|date = 1993|location = New York, NY, USA|isbn = 978-0-89791-592-2|pages = 297–306|doi = 10.1145/170035.170081|first1 = Elizabeth J.|last1 = O'Neil|author-link=Elizabeth O'Neil|first2 = Patrick E.|last2 = O'Neil|first3 = Gerhard|last3 = Weikum| title=Proceedings of the 1993 ACM SIGMOD international conference on Management of data - SIGMOD '93 | chapter=The LRU-K page replacement algorithm for database disk buffering |citeseerx = 10.1.1.102.8240|s2cid = 207177617}}</ref>

The access sequence for the example is A B C D E D F:

[[File:Lruexample.png|center|alt=Graphic example of an LRU algorithm]]

When A B C D have been installed in the blocks with sequence numbers (incremented by 1 for each new access) and E is accessed, it is a [[CPU cache#Cache miss|miss]] and must be installed in a block. With the LRU algorithm, E will replace A because A has the lowest rank (A(0)). In the next-to-last step, D is accessed and its sequence number is updated. F is then accessed, replacing B{{snd}}which had the lowest rank (B(1)).
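The behavior can be made concrete with a minimal sketch of an LRU cache in Python, built on the standard library's <code>OrderedDict</code>; class and method names are illustrative, not those of any particular implementation:

<syntaxhighlight lang="python">
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: entries ordered from least- to most-recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                       # miss
        self.entries.move_to_end(key)         # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[key] = value

# Replaying the access sequence from the example with four blocks:
cache = LRUCache(4)
for key in "ABCDEDF":
    if cache.get(key) is None:                # miss: install the block
        cache.put(key, key)
print(list(cache.entries))  # ['C', 'E', 'D', 'F']: E replaced A, F replaced B
</syntaxhighlight>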
==== {{anchor|Time aware least recently used (TLRU)}}Time-aware, least-recently-used (TLRU) ====
Time-aware, least-recently-used (TLRU)<ref>{{cite book|author=Bilal, Muhammad|title=16th International Conference on Advanced Communication Technology |chapter=Time Aware Least Recent Used (TLRU) cache management policy in ICN |year=2014 |display-authors=etal|pages=528–532|bibcode=2018arXiv180100390B|arxiv=1801.00390|doi=10.1109/ICACT.2014.6779016|isbn=978-89-968650-3-2|s2cid=830503}}</ref> is a variant of LRU designed for when the contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such as [[information-centric networking]] (ICN), [[content delivery network]]s (CDNs) and distributed networks in general. TLRU introduces a new term, TTU (time to use): a timestamp on content (or a page) which stipulates the usability time for the content based on its locality and the content publisher. TTU gives a local administrator more control in regulating network storage.

When content subject to TLRU arrives, a cache [[Node (computer science)|node]] calculates the local TTU from the TTU assigned by the content publisher, using a locally-defined function. Once the local TTU value has been calculated, content replacement is performed on a subset of the total content of the cache node. TLRU ensures that less-popular and short-lived content is replaced with incoming content.

==== {{anchor|MRU|Most recently used (MRU)}}Most-recently-used (MRU) ====
Unlike LRU, MRU discards the most-recently-used items first. At the 11th [[VLDB conference]], Chou and DeWitt said: "When a file is being repeatedly scanned in a [looping sequential] reference pattern, MRU is the best [[page replacement algorithm|replacement algorithm]]."<ref>Hong-Tai Chou and David J. DeWitt. [http://www.vldb.org/conf/1985/P127.PDF An Evaluation of Buffer Management Strategies for Relational Database Systems.] VLDB, 1985.</ref> Researchers presenting at the 22nd VLDB conference noted that for random access patterns and repeated scans over large [[Data set|datasets]] (also known as cyclic access patterns), MRU cache algorithms have more hits than LRU due to their tendency to retain older data.<ref>Shaul Dar, Michael J. Franklin, Björn Þór Jónsson, Divesh Srivastava, and Michael Tan. [http://www.vldb.org/conf/1996/P330.PDF Semantic Data Caching and Replacement.] VLDB, 1996.</ref> MRU algorithms are most useful in situations where the older an item is, the more likely it is to be accessed.

The access sequence for the example is A B C D E C D B:

[[File:Mruexample.png|center|alt=Diagram of an MRU algorithm]]

A, B, C and D are placed in the cache, since there is space available. At the fifth access (E), the block which held D is replaced with E, since this block was used most recently. At the next access (to D), C is replaced, since it was the block accessed just before D.

==== {{Anchor|SLRU}}Segmented LRU (SLRU) ====
An SLRU cache is divided into two segments: probationary and protected. Lines in each segment are ordered from most- to least-recently-accessed. Data from misses is added to the cache at the most-recently-accessed end of the probationary segment. Hits are removed from where they reside and added to the most-recently-accessed end of the protected segment; lines in the protected segment have thus been accessed at least twice.

The protected segment is finite; migration of a line from the probationary segment to the protected segment may force the migration of the LRU line in the protected segment to the most-recently-used end of the probationary segment, giving this line another chance to be accessed before being replaced. The size limit of the protected segment is an SLRU parameter which varies according to [[Input/output|I/O]] workload patterns. When data must be discarded from the cache, lines are obtained from the LRU end of the probationary segment.<ref>Ramakrishna Karedla, J. Spencer Love, and Bradley G. Wherry. Caching Strategies to Improve Disk System Performance. In ''[[Computer (magazine)|Computer]]'', 1994.</ref>

==== {{anchor|LRU Approximations}}LRU approximations ====
LRU may be expensive in caches with high [[CPU cache#Associativity|associativity]]. Practical hardware usually employs an approximation to achieve similar performance at a lower hardware cost.

===== Pseudo-LRU (PLRU) =====
{{Further|Pseudo-LRU}}
For [[CPU caches]] with large [[CPU cache#Associativity|associativity]] (generally more than four ways), the implementation cost of LRU becomes prohibitive. In many CPU caches an algorithm that almost always discards one of the least recently used items is sufficient, so many CPU designers choose a PLRU algorithm, which needs only one bit per cache item to work. PLRU typically has a slightly worse miss ratio, slightly better [[CAS latency|latency]], slightly lower power consumption, and lower [[Overhead (computing)|overhead]] than LRU.

The bits work as a binary tree of one-bit pointers which point to the less-recently-used sub-tree. Following the pointer chain to the leaf node identifies the replacement candidate. On an access, all pointers in the chain from the accessed way's leaf node to the root node are set to point to the sub-tree which does not contain the accessed way.

The access sequence in the example is A B C D E:

[[File:Plruexample.png|center|alt=Graphic example of pseudo-LRU]]

When there is access to a value (such as A) and it is not in the cache, it is loaded from memory and placed in the block where the arrows point in the example. After that block is filled, the arrows are flipped to point the opposite way. A, B, C and D are placed; E replaces A as the cache fills, because that was where the arrows were pointing, and the arrows which led to A flip to point in the opposite direction (to B, the block which will be replaced on the next cache miss).
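The tree of one-bit pointers can be modeled with a short Python sketch of a single four-way set, following the arrow-flipping scheme described above. The bit convention (0 meaning "the left sub-tree is less recently used") and all names are illustrative choices, not a specific hardware design:

<syntaxhighlight lang="python">
class TreePLRUSet:
    """Tree-based pseudo-LRU for one 4-way set: three one-bit pointers."""
    def __init__(self):
        self.bits = [0, 0, 0]   # [root, left pair, right pair]; 0 = left is LRU
        self.ways = [None] * 4  # cached tags, one per way

    def victim(self):
        """Follow the pointer chain to the pseudo-least-recently-used way."""
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def touch(self, way):
        """Flip every pointer on the path so it points away from `way`."""
        if way < 2:
            self.bits[0] = 1              # root now points at the right pair
            self.bits[1] = (way & 1) ^ 1  # pair pointer points at the sibling
        else:
            self.bits[0] = 0
            self.bits[2] = (way & 1) ^ 1

    def access(self, tag):
        if tag in self.ways:              # hit: only the pointers change
            self.touch(self.ways.index(tag))
        else:                             # miss: fill the victim way
            way = self.victim()
            self.ways[way] = tag
            self.touch(way)

# Replaying the sequence from the example: E replaces A, as in the figure.
s = TreePLRUSet()
for tag in "ABCDE":
    s.access(tag)
print(s.ways)  # ['E', 'C', 'B', 'D']
</syntaxhighlight>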
===== {{anchor|CLOCK-Pro}}Clock-Pro =====
The LRU algorithm cannot be implemented in the critical path of computer systems, such as [[operating system]]s, due to its high overhead; [[Page replacement algorithm#Clock|Clock]], an approximation of LRU, is commonly used instead. Clock-Pro is an approximation of [[LIRS caching algorithm|LIRS]] for low-cost implementation in systems.<ref>{{cite journal |last1=Jiang |first1=Song |last2=Chen |first2=Feng |last3=Zhang |first3=Xiaodong |date=2005 |title=CLOCK-Pro: An Effective Improvement of the CLOCK Replacement |url=http://web.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-05-3.pdf |journal=Proceedings of the Annual Conference on USENIX Annual Technical Conference |publisher=USENIX Association |pages=323–336}}</ref> Clock-Pro retains the basic Clock framework, with three advantages. It has three "clock hands" (unlike Clock's single "hand") and can approximately measure the reuse distance of data accesses. Like LIRS, it can quickly evict one-time-access or low-[[Locality of reference|locality]] data items. It is as complex as Clock, and is easy to implement at low cost. The buffer-cache replacement implementation in the 2017 version of [[Linux]] combines LRU and Clock-Pro.<ref>{{cite web |date=2017-12-30 |title=Linux Memory Management: Page Replacement Design. |url=https://linux-mm.org/PageReplacementDesign |access-date=2020-06-30}}</ref><ref>{{cite web |date=2005-08-16 |title=A CLOCK-Pro page replacement implementation |url=https://lwn.net/Articles/147879/ |access-date=2020-06-30 |publisher=[[LWN.net]]}}</ref>
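For reference, the single-hand Clock framework that Clock-Pro extends can be sketched as follows; this is plain Clock (second chance), not Clock-Pro itself, and the names are illustrative:

<syntaxhighlight lang="python">
class ClockCache:
    """Minimal single-hand Clock sketch: a reference bit per frame."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []    # one slot per cache frame
        self.ref = []     # reference bit per frame
        self.where = {}   # key -> frame index
        self.hand = 0

    def access(self, key):
        if key in self.where:                 # hit: set the reference bit
            self.ref[self.where[key]] = 1
            return
        if len(self.keys) < self.capacity:    # cold start: fill a free frame
            self.where[key] = len(self.keys)
            self.keys.append(key)
            self.ref.append(1)
            return
        # Sweep the hand, giving referenced frames a second chance,
        # until a frame with a clear reference bit is found.
        while self.ref[self.hand] == 1:
            self.ref[self.hand] = 0
            self.hand = (self.hand + 1) % self.capacity
        del self.where[self.keys[self.hand]]  # evict the frame under the hand
        self.keys[self.hand] = key
        self.ref[self.hand] = 1
        self.where[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
</syntaxhighlight>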