Page replacement algorithm
===Least recently used===
The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in that LRU tracks page usage over a short period of time, while NRU only considers usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as [[adaptive replacement cache]]), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost while keeping as much of the performance as possible.

The most expensive method uses a linked list containing all the pages in memory. The least recently used page is at the back of this list, and the most recently used page is at the front. The cost of this implementation lies in the fact that items in the list must be moved on every memory reference, which is very time-consuming.

Another method requires hardware support: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it records the value of the counter at the time of the access. Whenever a page needs to be replaced, the [[operating system]] selects the page with the lowest counter and swaps it out.

Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU but offer cheaper implementations.

One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.
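The counter-based method described above can be sketched in software. This is a minimal illustrative model (class and attribute names are hypothetical, not from any real kernel): a global counter stands in for the 64-bit hardware counter, each resident page records the counter value of its last access, and the page with the smallest recorded value is the eviction victim.

```python
class CounterLRU:
    """Sketch of counter-based LRU: evict the page whose recorded
    access counter is lowest (illustrative, not a real OS interface)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0        # stands in for the 64-bit hardware counter
        self.last_used = {}   # page -> counter value at its last access
        self.faults = 0

    def access(self, page):
        self.clock += 1
        if page not in self.last_used:
            self.faults += 1
            if len(self.last_used) >= self.capacity:
                # evict the page with the lowest counter value,
                # i.e. the least recently used page
                victim = min(self.last_used, key=self.last_used.get)
                del self.last_used[victim]
        self.last_used[page] = self.clock

lru = CounterLRU(capacity=3)
for p in [1, 2, 3, 1, 4]:
    lru.access(p)
# accessing page 4 with a full pool evicts page 2,
# the page least recently touched
```

The `min` scan makes eviction O(n) in the number of resident pages, which is why real implementations prefer the linked-list structure or the cheaper approximations discussed below.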
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch to a suitable replacement algorithm, like most recently used (MRU).

====Variants on LRU====
# LRU-K<ref>{{cite conference |first2=Patrick E. |last2=O'Neil |url=https://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf |title=The LRU-K page replacement algorithm for database disk buffering |first1=Elizabeth J. |last1=O'Neil |date=25–28 May 1993 |conference=1993 ACM SIGMOD international conference on Management of data |conference-url=https://dl.acm.org/citation.cfm?id=170035 |publisher=ACM |location=Washington, D.C., USA |pages=297–306 |url-status=live |language=en |isbn=0-89791-592-5 |doi=10.1145/170035.170081 |display-authors=1 |last3=Weikum |first3=Gerhard |archive-url=http://archive.wikiwix.com/cache/20190906015742/https://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf |archive-date=6 September 2019 |citeseerx=10.1.1.18.1434}}</ref> evicts the page whose K-th most recent access is furthest in the past. For example, LRU-1 is simply LRU, whereas LRU-2 evicts pages according to the time of their penultimate access. LRU-K greatly improves on LRU with regard to locality in time.
# The [[Adaptive replacement cache|ARC]]<ref>{{cite conference |first2=Dharmendra S. |last2=Modha |name-list-style=amp |url=http://www.usenix.org/events/fast03/tech/full_papers/megiddo/megiddo.pdf |title=ARC: A Self-Tuning, Low Overhead Replacement Cache |first1=Nimrod |last1=Megiddo |date=31 March – 2 April 2003 |conference=2nd USENIX Conference on File and Storage Technologies (FAST '03) |conference-url=https://www.usenix.org/conference/fast03 |publisher=USENIX Association |archive-url=https://web.archive.org/web/20100208162647/http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf |archive-date=8 February 2010 |url-status=live |location=San Francisco, CA, USA |pages=115–130 |language=en}}</ref> algorithm extends LRU by maintaining a history of recently evicted pages and using it to shift preference between recently and frequently accessed pages. It is particularly resistant to sequential scans.
# The 2Q<ref>{{cite conference |first2=Dennis |last2=Shasha |url=http://www.vldb.org/conf/1994/P439.PDF |title=2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm |first1=Theodore |last1=Johnson |date=12–15 September 1994 |conference=20th International Conference on Very Large Data Bases |conference-url=https://dblp.org/db/conf/vldb/vldb94 |publisher=Morgan Kaufmann |archive-url=http://archive.wikiwix.com/cache/20200317170617/http://www.vldb.org/conf/1994/P439.PDF |archive-date=17 March 2020 |location=Santiago de Chile, Chile |pages=439–450 |isbn=1-55860-153-8 |url-status=live |language=en |access-date=31 July 2005 }}</ref> algorithm improves upon LRU and LRU/2 by maintaining two queues: a probationary queue for pages that have been referenced once and a "hot" queue for pages referenced again while on probation. A page is first placed in the probationary queue and is promoted to the hot queue only on a second access. Because pages must prove themselves before entering the hot queue, a burst of one-time references (such as a sequential scan) cannot flush frequently used pages, which improves the hit rate of the cache.
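The two-queue idea can be sketched as follows. This is a rough illustration only, not the paper's full 2Q algorithm (which also keeps a "ghost" list of recently evicted probationary pages); class and parameter names are hypothetical.

```python
from collections import OrderedDict

class SimpleTwoQ:
    """Rough sketch of the two-queue idea behind 2Q:
    a FIFO probationary queue (a1) and an LRU hot queue (am)."""

    def __init__(self, a1_size, am_size):
        self.a1 = OrderedDict()  # probationary FIFO: pages seen once
        self.am = OrderedDict()  # hot LRU: pages seen again on probation
        self.a1_size, self.am_size = a1_size, am_size

    def access(self, page):
        if page in self.am:
            self.am.move_to_end(page)        # LRU update on the hot queue
        elif page in self.a1:
            del self.a1[page]                # second access: promote
            self.am[page] = True
            if len(self.am) > self.am_size:
                self.am.popitem(last=False)  # evict hot queue's LRU page
        else:
            self.a1[page] = True             # first access: probation
            if len(self.a1) > self.a1_size:
                self.a1.popitem(last=False)  # FIFO eviction

q = SimpleTwoQ(a1_size=2, am_size=2)
for p in ["a", "a", "b", "b"]:  # "a" and "b" are accessed twice: hot
    q.access(p)
for p in range(100):            # a long one-shot sequential scan
    q.access(p)
# scanned pages flow through the probationary queue and are evicted
# there, so the hot pages "a" and "b" survive the scan
```

Plain LRU would have evicted "a" and "b" almost immediately during the scan; here the scan only churns the probationary queue.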
A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, [[LIRS caching algorithm|LIRS]]) can be found in Megiddo & Modha 2004.<ref>{{cite journal|last1=Megiddo|first1=Nimrod|last2=Modha|first2=Dharmendra S.|name-list-style=amp|url=http://dbs.uni-leipzig.de/file/ARC.pdf|title=Outperforming LRU with an Adaptive Replacement Cache Algorithm|doi=10.1109/MC.2004.1297303|year=2004|journal=Computer|volume=37|issue=4|pages=58|citeseerx=10.1.1.231.498|archive-url=http://archive.wikiwix.com/cache/20121021000000/http://dbs.uni-leipzig.de/file/ARC.pdf|archive-date=21 October 2012|publisher=IEEE Computer Society|s2cid=5507282|url-status=live|access-date=20 September 2013}}</ref> LRU is a marking algorithm, so it is <math> \tfrac{k}{k-h+1}</math>-competitive, where ''k'' is the size of the cache available to LRU and ''h'' (with ''h'' ≤ ''k'') is the size of the cache available to the optimal offline algorithm.
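LRU's degenerate behavior on loops larger than the pool, noted above, is easy to demonstrate with a small simulation (the helper function below is illustrative, not from any real system): with N frames and a cyclic trace over N + 1 pages, every single reference misses.

```python
from collections import OrderedDict

def lru_faults(trace, capacity):
    """Count page faults for a reference trace under plain LRU."""
    pool = OrderedDict()
    faults = 0
    for page in trace:
        if page in pool:
            pool.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(pool) >= capacity:
                pool.popitem(last=False)  # evict least recently used
            pool[page] = True
    return faults

N = 4
trace = list(range(N + 1)) * 3  # three passes of a loop over N + 1 pages
faults = lru_faults(trace, N)   # == len(trace) == 15: every access faults
```

Each pass evicts exactly the page the loop is about to revisit, which is why loop-aware variants fall back to a policy such as MRU when they detect this pattern.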