==Page replacement algorithms==
There are a variety of page replacement algorithms:<ref name='lecture notes jones'/>

===The theoretically optimal page replacement algorithm===
The theoretically optimal page replacement algorithm (also known as OPT, [[Clairvoyance|clairvoyant]] replacement algorithm, or [[László Bélády|Bélády's]] optimal page replacement policy)<ref>{{cite web|url=http://www.read.cs.ucla.edu/111/2006fall/notes/lec11|title=CS111 Lecture 11 notes|archive-url=https://web.archive.org/web/20090109175934/http://www.read.cs.ucla.edu/111/2006fall/notes/lec11|archive-date=9 January 2009|last1=Torrez|first1=Paul|last2=Hallner|first2=Tim|last3=Mathrani|first3=Kiran|last4=Bhaskar|first4=Anu|display-authors=1|url-status=dead|website=UCLA Computer Science Department}}</ref><ref>{{cite conference |title=Characterization of Web reference behavior revisited: Evidence for Dichotomized Cache management |first1=Hyokyung |last1=Bahn |last2=Noh |first2=Sam H. |date=12-14 February 2003 |conference=International Conference on Information Networking 2003 |conference-url=https://link.springer.com/book/10.1007/b13389 |publisher=Springer-Verlag |location=Jeju, South Korea |pages=1018–1027 |isbn=978-3-540-40827-7 |doi=10.1007/978-3-540-45235-5_100 |language=en}}</ref><ref name='lecture notes jones'>{{cite web|last1=Jones|first1=Douglas W.|website=University of Iowa Department of Computer Science|url=http://www.cs.uiowa.edu/~jones/opsys/fall95/notes/0908.html|title=22C:116 Lecture Notes|archive-url=https://archive.today/20120630230917/http://homepage.cs.uiowa.edu/~jones/opsys/fall95/notes/0908.html|archive-date=30 June 2012|url-status=live|access-date=18 March 2008}}</ref> is an algorithm that works as follows: when a page needs to be swapped in, the [[operating system]] swaps out the page whose next use will occur farthest in the future.
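This eviction rule can be sketched in [[Python (programming language)|Python]]; the function and argument names are illustrative, and the sketch assumes the entire future reference string is available, which only a clairvoyant algorithm can know:

<syntaxhighlight lang="python">
def opt_evict(frames, future_refs):
    """Pick the resident page whose next use lies farthest in the future.

    frames: pages currently in memory.
    future_refs: the upcoming reference string, in order.
    """
    def next_use(page):
        try:
            # Distance to the page's next reference.
            return future_refs.index(page)
        except ValueError:
            # Never referenced again: the ideal victim.
            return float("inf")

    return max(frames, key=next_use)
</syntaxhighlight>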
For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.

This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is known beforehand and is amenable to static analysis of its memory reference patterns, or when only a class of applications allowing run-time analysis is admitted. Despite this limitation, algorithms exist<ref>{{cite conference |title=Back to the Future: Leveraging Belady's Algorithm for Improved Cache Replacement |first1=Akanksha |last1=Jain |first2=Calvin |last2=Lin |date=2016 |conference=International Symposium on Computer Architecture (ISCA) |publisher=IEEE |location=Seoul, South Korea |doi=10.1109/ISCA.2016.17 |language=en |url=https://www.cs.utexas.edu/~lin/papers/isca16.pdf}}</ref> that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. This approach cannot help on the first run of a program, and works only if the program's memory reference pattern is relatively consistent each time it runs.

Analysis of the paging problem has also been done in the field of [[online algorithm]]s. The efficiency of randomized online algorithms for the paging problem is measured using [[amortized analysis]].

===Not recently used===
The not recently used (NRU) page replacement algorithm favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced.
Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.

At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the [[operating system]] divides the pages into four classes:
:3. referenced, modified
:2. referenced, not modified
:1. not referenced, modified
:0. not referenced, not modified
Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest class for removal. So out of the above four page classes, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced.

NRU is a marking algorithm, so it is <math> \tfrac{k}{k-h+1}</math>-competitive.

===First-in, first-out===
The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the [[operating system]]. The idea is obvious from the name – the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences [[Bélády's anomaly]].
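The queue discipline can be sketched in [[Python (programming language)|Python]] as follows; the names are illustrative and the sketch assumes a fixed number of page frames:

<syntaxhighlight lang="python">
from collections import deque


def fifo_replace(queue: deque, frames: set, page: int, capacity: int):
    """Admit `page`; return the evicted page, or None on a hit or a free frame."""
    if page in frames:
        return None  # Hit: FIFO does not reorder pages on access.
    victim = None
    if len(frames) >= capacity:
        victim = queue.popleft()  # The oldest arrival sits at the front.
        frames.remove(victim)
    queue.append(page)  # New arrivals go to the back of the queue.
    frames.add(page)
    return victim
</syntaxhighlight>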
In simple words, on a page fault, the frame that has been in memory the longest is replaced.

The FIFO page replacement algorithm is used by the [[OpenVMS]] operating system, with some modifications.<ref name="Silber">{{Cite book|language=en|title=Operating system concepts|last1=Silberschatz|first1=Abraham|date=14 December 2004|publisher=John Wiley & Sons|last2=Galvin|first2=Peter Baer|last3=Gagne|first3=Greg|isbn=0-47169-466-5|edition=7th|page=339|location=Hoboken, NJ, USA|oclc=56913661}}</ref> Partial second chance is provided by skipping a limited number of entries with valid translation table references,<ref>[http://mx.isti.cnr.it/cgi-bin/conan?key=Sys_Parameters~TBSKIPWSL&title=VMS%20Help VMS Help — Sys Parameters, TBSKIPWSL]</ref> and additionally, pages are displaced from the process's working set to a systemwide pool from which they can be recovered if not already re-used.

FIFO is a conservative algorithm, so it is <math> \tfrac{k}{k-h+1}</math>-competitive.

===Second-chance===
A modified form of the FIFO page replacement algorithm, known as the second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, then on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their referenced bit cleared, then the second-chance algorithm degenerates into pure FIFO.
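The victim selection described above can be sketched in [[Python (programming language)|Python]]; the names are illustrative, with referenced bits held in a dictionary:

<syntaxhighlight lang="python">
from collections import deque


def second_chance_evict(queue: deque, ref_bits: dict):
    """Pop pages from the front; a referenced page is recycled to the back."""
    while True:
        page = queue.popleft()
        if ref_bits.get(page, 0):
            ref_bits[page] = 0  # Clear the referenced bit...
            queue.append(page)  # ...and treat the page like a new arrival.
        else:
            return page  # Not referenced since last pass: evict it.
</syntaxhighlight>

If every page's bit is set, the loop clears them one by one and eventually evicts the original front page, which is the pure-FIFO degenerate case noted above.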
As its name suggests, second-chance gives every page a "second chance" – an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced.

===Clock===
Clock is a more efficient version of FIFO than second-chance because pages don't have to be constantly pushed to the back of the list, yet it performs the same general function as second-chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and the process is repeated until a page is replaced.<ref name="Tanenbaum">{{cite book |first=Andrew S. |last=Tanenbaum |date=2001 |title=Modern Operating Systems |edition=2nd |publisher=Prentice-Hall |location=Upper Saddle River, NJ, USA |isbn=978-0-13-031358-4 |lccn=00051666 |ol=24214243M |oclc=45284637 |page=[https://archive.org/details/modernoperatings00tane/page/218 218 (4.4.5)] |url=https://archive.org/details/modernoperatings00tane/page/218 }}</ref> This algorithm was first described in 1969 by [[Fernando J. Corbató]].<ref>{{cite book|title=Festschrift: In Honor of P. M.
Morse|chapter=A Paging Experiment with the Multics System|chapter-url=https://www.multicians.org/paging-experiment.pdf|last1=Corbató|first1=Fernando J.|year=1969|pages=217–228|publisher=[[MIT Press]]}}</ref>

====Variants of clock====
* GCLOCK: Generalized clock page replacement algorithm.<ref>{{cite journal|last1=Smith|first1=Alan Jay|title=Sequentiality and prefetching in database systems|journal=ACM Transactions on Database Systems|volume=3|issue=3|pages=223–247|date=September 1978|doi=10.1145/320263.320276|publisher=ACM|language=en|location=New York, NY, USA|s2cid=11611563|doi-access=free}}</ref>
* Clock-Pro keeps a circular list of information about recently referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by [[adaptive replacement cache|ARC]], helps it work better than LRU on large loops and one-time scans.<ref>{{cite conference |first2=Feng |last2=Chen |url=http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-05-3.pdf |title=CLOCK-Pro: an effective improvement of the CLOCK replacement |first3=Xiaodong |last3=Zhang |first1=Song |last1=Jiang |date=10-15 April 2005 |conference=2005 USENIX Annual Technical Conference |conference-url=https://www.usenix.org/conference/2005usenixannualtechnicalconference |publisher=USENIX Association |archive-url=http://archive.wikiwix.com/cache/20190612095142/http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-05-3.pdf |archive-date=12 June 2019 |location=Anaheim, CA, USA |page=35 |url-status=live |language=en |access-date=24 March 2009 }}</ref>
* WSclock.<ref>{{cite conference |first2=John L. |last2=Hennessy |url=http://infolab.stanford.edu/~manku/quals/zpapers/81-wsclock.pdf.gz |title=WSCLOCK—a simple and effective algorithm for virtual memory management |format=gzipped PDF |first1=Richard W.
|last1=Carr |date=14-16 December 1981 |conference=Eighth ACM symposium on Operating systems principles |conference-url=https://dl.acm.org/citation.cfm?doid=800216 |publisher=ACM |archive-url=https://web.archive.org/web/20070610070838/http://infolab.stanford.edu/~manku/quals/zpapers/81-wsclock.pdf.gz |archive-date=10 June 2007 |location=Pacific Grove, CA, USA |pages=87–95 |url-status=live |language=en |isbn=0-89791-062-1 |doi=10.1145/800216.806596}}</ref> By combining the clock algorithm with the concept of a working set (i.e., the set of pages expected to be used by that process during some time interval), the performance of the algorithm can be improved. In practice, the "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms.<ref>{{cite web|last1=Gottlieb|first1=Allan|title=WSClock|url=http://www.cs.nyu.edu/courses/spring09/V22.0202-002/wsclock-davis.html|website=New York University Computer Science Department|access-date=12 June 2019|archive-url=https://archive.today/20120730042750/http://www.cs.nyu.edu/courses/spring09/V22.0202-002/wsclock-davis.html|archive-date=30 July 2012|url-status=live}}</ref><ref>{{cite web|last1=Tanenbaum|first1=Andrew S.|title=Page Replacement Algorithms|url=http://www.informit.com/articles/article.aspx?p=25260&seqNum=11|website=InformIT|access-date=12 June 2019|archive-url=https://archive.today/20120910015221/http://www.informit.com/articles/article.aspx?p=25260&seqNum=11|archive-date=10 September 2012|url-status=live}}</ref>
* Clock with Adaptive Replacement (CAR) is a page replacement algorithm that has performance comparable to [[Adaptive replacement cache|ARC]], and substantially outperforms both LRU and CLOCK.<ref>{{cite conference |first2=Dharmendra S.
|last2=Modha |name-list-style=amp |url=http://usenix.org/publications/library/proceedings/fast04/tech/full_papers/bansal/bansal.pdf |title=CAR: Clock with Adaptive Replacement |first1=Sorav |last1=Bansal |date=31 March – 2 April 2004 |conference=3rd USENIX Conference on File and Storage Technologies (FAST '04) |conference-url=https://www.usenix.org/conference/fast04 |publisher=USENIX Association |archive-url=https://web.archive.org/web/20040731112559/http://www.almaden.ibm.com/cs/people/dmodha/clockfast.pdf |archive-date=31 July 2004| url-status=live |location=San Francisco, CA, USA |pages=187–200 |language=en |citeseerx=10.1.1.105.6057}}</ref> The CAR algorithm is self-tuning and requires no user-specified magic parameters.

CLOCK is a conservative algorithm, so it is <math> \tfrac{k}{k-h+1}</math>-competitive.

===Least recently used===
The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as [[adaptive replacement cache]]), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.

The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list have to be moved about on every memory reference, which is a very time-consuming process.
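The linked-list method can be sketched in [[Python (programming language)|Python]] using an <code>OrderedDict</code>, which is backed by a doubly linked list; the class and method names are illustrative:

<syntaxhighlight lang="python">
from collections import OrderedDict


class LRUPool:
    """Linked-list LRU: one end holds the MRU page, the other the LRU page."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # Insertion order doubles as recency order.

    def reference(self, page):
        """Touch `page`; return the evicted page, or None."""
        victim = None
        if page in self.pages:
            # Every reference moves the page to the MRU end of the list.
            self.pages.move_to_end(page)
        else:
            if len(self.pages) >= self.capacity:
                # Drop the page at the LRU end.
                victim, _ = self.pages.popitem(last=False)
            self.pages[page] = True
        return victim
</syntaxhighlight>

The per-reference <code>move_to_end</code> call is exactly the list reshuffling that makes this method expensive in a real memory manager.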
Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, the current counter value is stored in that page's entry. Whenever a page needs to be replaced, the [[operating system]] selects the page with the lowest counter value and swaps it out.

Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.

One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.

On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch to a suitable replacement algorithm, like most recently used (MRU).

====Variants on LRU====
# LRU-K<ref>{{cite conference |first2=Patrick E. |last2=O'Neil |url=https://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf |title=The LRU-K page replacement algorithm for database disk buffering |first1=Elizabeth J.
|last1=O'Neil |date=25-28 May 1993 |conference=1993 ACM SIGMOD international conference on Management of data |conference-url=https://dl.acm.org/citation.cfm?id=170035 |publisher=ACM |location=Washington, D.C., USA |pages=297–306 |url-status=live |language=en |isbn=0-89791-592-5 |doi=10.1145/170035.170081 |display-authors=1 |last3=Weikum |first3=Gerhard |archive-url=http://archive.wikiwix.com/cache/20190906015742/https://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf |archive-date=6 September 2019 |citeseerx=10.1.1.18.1434}}</ref> evicts the page whose K-th most recent access is furthest in the past. For example, LRU-1 is simply LRU, whereas LRU-2 evicts pages according to the time of their penultimate access. LRU-K improves greatly on LRU with regard to temporal locality.
# The [[Adaptive replacement cache|ARC]]<ref>{{cite conference |first2=Dharmendra S. |last2=Modha |name-list-style=amp |url=http://www.usenix.org/events/fast03/tech/full_papers/megiddo/megiddo.pdf |title=ARC: A Self-Tuning, Low Overhead Replacement Cache |first1=Nimrod |last1=Megiddo |date=31 March – 2 April 2003 |conference=2nd USENIX Conference on File and Storage Technologies (FAST '03) |conference-url=https://www.usenix.org/conference/fast03 |publisher=USENIX Association |archive-url=https://web.archive.org/web/20100208162647/http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf |archive-date=8 February 2010 |url-status=live |location=San Francisco, CA, USA |pages=115–130 |language=en}}</ref> algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access. It is particularly resistant to sequential scans.
# The 2Q<ref>{{cite conference |first2=Dennis |last2=Shasha |url=http://www.vldb.org/conf/1994/P439.PDF |title=2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm |first1=Theodore |last1=Johnson |date=12-15 September 1994 |conference=20th International Conference on Very Large Data Bases |conference-url=https://dblp.org/db/conf/vldb/vldb94 |publisher=Morgan Kaufmann |archive-url=http://archive.wikiwix.com/cache/20200317170617/http://www.vldb.org/conf/1994/P439.PDF |archive-date=17 March 2020 |location=Santiago de Chile, Chile |pages=439–450 |isbn=1-55860-153-8 |url-status=live |language=en |access-date=31 July 2005 }}</ref> algorithm improves upon the LRU and LRU/2 algorithms. It maintains two queues, one for hot-path items and one for slow-path items: items are first placed in the slow-path queue and are promoted to the hot-path queue only on a second access. Because an item must prove itself before entering the hot-path queue, that queue stays free of one-time references, which improves the hit rate of the cache compared to LRU and LRU/2.

A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, [[LIRS caching algorithm|LIRS]]) can be found in Megiddo & Modha 2004.<ref>{{cite journal|last1=Megiddo|first1=Nimrod|last2=Modha|first2=Dharmendra S.|name-list-style=amp|url=http://dbs.uni-leipzig.de/file/ARC.pdf|title=Outperforming LRU with an Adaptive Replacement Cache Algorithm|doi=10.1109/MC.2004.1297303|year=2004|journal=Computer|volume=37|issue=4|pages=58|citeseerx=10.1.1.231.498|archive-url=http://archive.wikiwix.com/cache/20121021000000/http://dbs.uni-leipzig.de/file/ARC.pdf|archive-date=21 October 2012|publisher=IEEE Computer Society|s2cid=5507282|url-status=live|access-date=20 September 2013}}</ref>

LRU is a marking algorithm, so it is <math> \tfrac{k}{k-h+1}</math>-competitive.

===Random===
The random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references.
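The entire policy amounts to a single uniform draw over the resident pages, sketched here in [[Python (programming language)|Python]] with illustrative names:

<syntaxhighlight lang="python">
import random


def random_evict(frames):
    """Choose a victim uniformly at random; no per-page bookkeeping needed."""
    return random.choice(list(frames))
</syntaxhighlight>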
Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. [[OS/390]] uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the [[Intel i860]] processor used a random replacement policy (Rhodehamel 1989<ref>{{cite conference |title=The Bus Interface and Paging Units of the i860 Microprocessor |first=Michael W. |last=Rhodehamel |date=2-4 October 1989 |conference=1989 IEEE International Conference on Computer Design: VLSI in Computers and Processors |conference-url=https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=257 |publisher=IEEE |location=Cambridge, MA, USA |pages=380–384 |isbn=0-8186-1971-6 |doi=10.1109/ICCD.1989.63392 |language=en |id=INSPEC Accession Number 3719504}}</ref>).

===Not frequently used (NFU)===
The not frequently used (NFU) page replacement algorithm requires a counter: every page has one counter of its own, which is initially set to 0. At each clock interval, all pages that have been referenced within that interval have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.

The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages which were heavily used during the first pass but are not needed in the second pass will be favoured over pages which are comparatively lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU performs similarly, such as an OS boot-up. A similar and better algorithm exists, and its description follows.
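The NFU bookkeeping can be sketched in [[Python (programming language)|Python]] as follows; the function names are illustrative, with counters held in a dictionary:

<syntaxhighlight lang="python">
def nfu_tick(counters: dict, referenced: set) -> None:
    """At each clock interval, increment the counter of every referenced page."""
    for page in referenced:
        counters[page] = counters.get(page, 0) + 1


def nfu_victim(counters: dict):
    """Evict the page with the lowest use count (ties broken arbitrarily)."""
    return min(counters, key=counters.get)
</syntaxhighlight>

Note that the counters only ever grow, which is precisely the flaw described above: a page hammered long ago keeps its high count forever.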
===Aging===
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2) before the referenced bit is added to the left of that binary number. For instance, if a page has referenced bits 1, 0, 0, 1, 1, 0 in the past 6 clock ticks, its reference counter will look like this in chronological order: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.

The following [[Python (programming language)|Python]] code simulates the aging algorithm. Counters <math>V_i</math> are initialized with {{val|0}} and updated as described above via <math>V_i \leftarrow (R_i \ll (k-1)) | (V_i \gg 1)</math>, using [[Arithmetic shift|arithmetic shift operators]].
<syntaxhighlight lang="python">
from collections.abc import Sequence


def simulate_aging(Rs: Sequence, k: int) -> None:
    """Simulate aging."""
    print(" t | R-bits (0-{length}) | Counters for pages 0-{length}".format(
        length=len(Rs[0]) - 1))
    Vs = [0] * len(Rs[0])
    for t, R in enumerate(Rs):
        # Shift each counter right and insert its R bit as the new leftmost bit.
        Vs[:] = [R[i] << (k - 1) | V >> 1 for i, V in enumerate(Vs)]
        print("{:02d} | {} | [{}]".format(
            t, R, ", ".join(["{:0{}b}".format(V, k) for V in Vs])))
</syntaxhighlight>

In the given example of R-bits for 6 pages over 5 clock ticks, the function prints the following output, which lists the R-bits for each clock tick {{mvar|t}} and the individual counter values <math>V_i</math> for each page in [[Binary number|binary]] representation.<ref>{{cite book |first1=Andrew S. |last1=Tanenbaum |first2=Herbert |last2=Bos |date=2015 |title=Modern Operating Systems |edition=4th |publisher=Pearson |location=Boston, MA, USA |page=215 |isbn=978-0-13-359162-0 |ol=25620855M}}</ref>

<syntaxhighlight lang="pycon">
>>> Rs = [[1,0,1,0,1,1], [1,1,0,0,1,0], [1,1,0,1,0,1], [1,0,0,0,1,0], [0,1,1,0,0,0]]
>>> k = 8
>>> simulate_aging(Rs, k)
 t | R-bits (0-5) | Counters for pages 0-5
00 | [1, 0, 1, 0, 1, 1] | [10000000, 00000000, 10000000, 00000000, 10000000, 10000000]
01 | [1, 1, 0, 0, 1, 0] | [11000000, 10000000, 01000000, 00000000, 11000000, 01000000]
02 | [1, 1, 0, 1, 0, 1] | [11100000, 11000000, 00100000, 10000000, 01100000, 10100000]
03 | [1, 0, 0, 0, 1, 0] | [11110000, 01100000, 00010000, 01000000, 10110000, 01010000]
04 | [0, 1, 1, 0, 0, 0] | [01111000, 10110000, 10001000, 00100000, 01011000, 00101000]
</syntaxhighlight>

Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest {{val|16|/|32}} (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have reference counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago.
Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.

===Longest distance first (LDF) page replacement algorithm===
The basic idea behind this algorithm is locality of reference, as used in LRU, but the difference is that in LDF locality is based on distance, not on the references used. LDF replaces the page that is at the longest distance from the current page. If two pages are at the same distance, then the page that is next to the current page in anti-clockwise rotation will be replaced.{{cn|date=July 2022}}
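Since the description above is unsourced and leaves "distance" underspecified, the following [[Python (programming language)|Python]] sketch only illustrates one plausible reading, with pages placed on a ring and distance measured the shorter way around; all names are illustrative:

<syntaxhighlight lang="python">
def ldf_victim(pages, current, n):
    """Evict the resident page at the greatest circular distance from `current`.

    Pages are imagined on a ring of n positions. On a distance tie, the page
    reached anti-clockwise from `current` (i.e. at (current - d) mod n) wins.
    """
    def keyfn(p):
        d = min((p - current) % n, (current - p) % n)  # shorter way around
        anticlockwise = 1 if (current - p) % n == d else 0
        return (d, anticlockwise)  # larger distance wins; tie -> anti-clockwise

    return max(pages, key=keyfn)
</syntaxhighlight>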