{{Short description|Computer memory management technique}} {{About|the computer memory management technique| the technique of pooling multiple storage devices|Storage virtualization| the TBN game show|Virtual Memory (game show)}} {{distinguish|VRAM}} {{Use dmy dates|date=January 2023}} [[File:Virtual memory.svg|thumb|250px|Virtual memory combines active [[random-access memory|RAM]] and inactive memory on [[Direct-access storage device|DASD]]{{efn|Early systems used [[Drum memory|drums]]; contemporary systems use [[Disk storage|disks]] or [[Solid-state drive|solid state memory]]}} to form a large range of contiguous addresses.]] In [[computing]], '''virtual memory''', or '''virtual storage''',{{efn|IBM uses the term '''virtual storage''' on mainframe operating systems. This usage runs from [[TSS (operating system)|TSS]]<ref>{{cite book | title = System/360 Model 67 Time Sharing System Preliminary Technical Summary | id = C20-1647-0 | section = SYSTEM COMPONENTS: Dynamic Relocation | page = 21 | year = 1966 | url = http://bitsavers.org/pdf/ibm/360/tss/C20-1647-0_360-67_TSS_Tech.pdf | section-url = http://bitsavers.org/pdf/ibm/360/tss/C20-1647-0_360-67_TSS_Tech.pdf#page=21 | publisher = IBM }} </ref> on the [[IBM System/360 Model 67|360/67]] through [[z/OS]]<ref>{{cite book | title = z/OS Version 2 Release 4 z/OS Introduction and Release Guide | id = GA32-0887-40 | date = 22 September 2020 | section = BCP (Base Control Program) | section-url = https://www-01.ibm.com/servers/resourcelink/svc00100.nsf/pages/zOSV2R4ga320887/$file/e0za100_v2r4.pdf#page=24 | page = 3 | url = https://www-01.ibm.com/servers/resourcelink/svc00100.nsf/pages/zOSV2R4ga320887/$file/e0za100_v2r4.pdf | publisher = IBM }} </ref> on [[z/Architecture]].}} is a [[Memory management (operating systems)|memory management]] technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine"<ref>{{cite book|last1=Bhattacharjee|first1=Abhishek|last2=Lustig|first2=Daniel|title=Architectural and Operating System Support for Virtual Memory|date=2017|publisher=Morgan & Claypool Publishers|isbn=9781627056021|page=1|url=https://books.google.com/books?id=roM4DwAAQBAJ|access-date=16 October 2017}}</ref> which "creates the illusion to users of a very large (main) memory".<ref>{{cite book|last1=Haldar|first1=Sibsankar|last2=Aravind|first2=Alex Alagarsamy|title=Operating Systems|date=2010|publisher=Pearson Education India|isbn=978-8131730225|page=269|url=https://books.google.com/books?id=orZ0CLxEMXEC&pg=PA269|access-date=16 October 2017}}</ref> The computer's [[operating system]], using a combination of hardware and software, maps [[memory address]]es used by a program, called ''[[Virtual address space|virtual addresses]]'', into ''physical addresses'' in [[computer memory]]. [[Main storage#Primary storage|Main storage]], as seen by a process or task, appears as a contiguous [[address space]] or collection of contiguous [[Memory segmentation|segments]]. 
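The per-process nature of virtual addressing can be demonstrated from user space. The following C sketch (an illustration only, assuming a POSIX system; it does not correspond to any particular operating system's internals) shows two processes using the same virtual address while the operating system maps that address to different physical copies of the data:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 1;                    /* stored at some virtual address */

    pid_t pid = fork();               /* the child gets its own address space */
    if (pid == 0) {
        value = 2;                    /* modifies only the child's copy */
        printf("child:  value=%d at %p\n", value, (void *)&value);
        return 0;
    }
    wait(NULL);
    /* The parent prints the same virtual address as the child but still
       sees value=1: identical virtual addresses in the two processes are
       mapped to different physical locations. */
    printf("parent: value=%d at %p\n", value, (void *)&value);
    return 0;
}
</syntaxhighlight>

Many kernels implement this separation lazily with copy-on-write, deferring the physical copy of a page until one of the processes writes to it.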
The operating system manages [[virtual address space]]s and the assignment of real memory to virtual memory.<ref>{{Cite journal |last1=Zhou |first1=Xiangrong |last2=Petrov |first2=Peter |date=1 December 2008 |title=Direct address translation for virtual memory in energy-efficient embedded systems |url=https://dl.acm.org/doi/10.1145/1457246.1457251 |journal=ACM Transactions on Embedded Computing Systems |language=en |volume=8 |issue=1 |pages=1–31 |doi=10.1145/1457246.1457251 |s2cid=18156695 |issn=1539-9087|url-access=subscription }}</ref> Address translation hardware in the CPU, often referred to as a [[memory management unit]] (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., [[disk storage]], to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer. The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, the ability to share memory used by [[Library (computing)|libraries]] between processes, increased security due to memory isolation, and the ability to conceptually use more memory than might be physically available, using the technique of [[paging]] or segmentation.

== Properties ==
Virtual memory makes application programming easier by hiding [[Fragmentation (computer)|fragmentation]] of physical memory; by delegating to the kernel the burden of managing the [[Computer data storage#Hierarchy of storage|memory hierarchy]] (eliminating the need for the program to handle [[overlay (programming)|overlays]] explicitly); and, when each process is run in its own dedicated address space, by obviating the need [[Relocation (computer science)|to relocate]] program code or to access memory with [[Addressing mode#PC-relative|relative addressing]].

[[Memory virtualization]] can be considered a generalization of the concept of virtual memory.

== Usage ==
Virtual memory is an integral part of a modern [[computer architecture]]; implementations usually require hardware support, typically in the form of a [[memory management unit]] built into the [[central processing unit|CPU]].
While not necessary, [[emulators]] and [[virtual machine]]s can employ hardware support to increase performance of their virtual memory implementations.<ref>{{cite web|title = AMD-V™ Nested Paging|url = http://developer.amd.com/wordpress/media/2012/10/NPT-WP-1%201-final-TM.pdf|publisher = AMD|access-date = 28 April 2015}}</ref> Older operating systems, such as those for the [[mainframe computer|mainframes]] of the 1960s, and those for personal computers of the early to mid-1980s (e.g., [[DOS]]),<ref>{{cite web |title=Windows Version History |url=http://support.microsoft.com/kb/32905 |date=23 September 2011 |publisher=Microsoft |access-date=9 March 2015 |archive-url=https://web.archive.org/web/20150108044055/http://support.microsoft.com/kb/32905 |archive-date=8 January 2015}}</ref> generally have no virtual memory functionality,{{Dubious|Prevalence of virtual memory operating systems in the 1960s and early 1970s|reason=a lot of paging computers were shipped in the 1960|date=November 2010}} though notable exceptions for mainframes of the 1960s include:
* the [[Atlas Supervisor]] for the [[Atlas (computer)|Atlas]]
* [[THE multiprogramming system]] for the [[Electrologica X8]] (software-based virtual memory without hardware support)
* [[Burroughs MCP|MCP]] for the [[Burroughs Corporation|Burroughs]] [[B5000]]
* [[Michigan Terminal System|MTS]], [[TSS/360]] and [[CP/CMS]] for the [[IBM System/360 Model 67]]
* [[Multics]] for the [[GE-600 series|GE 645]]
* The [[Time Sharing Operating System]] for the [[RCA Spectra 70]]/46

During the 1960s and early '70s, computer memory was very expensive. The introduction of virtual memory provided the ability for software systems with large memory demands to run on computers with less real memory. The savings from this provided a strong incentive to switch to virtual memory for all systems. The additional capability of providing virtual address spaces added another level of security and reliability, thus making virtual memory even more attractive to the marketplace.

Most modern operating systems that support virtual memory also run each [[process (computing)|process]] in its own dedicated [[address space]]. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as [[OS/VS1]] and [[OS/VS2 (SVS)|OS/VS2 SVS]]) and even modern ones (such as [[IBM i]]) are [[single address space operating system]]s that run all processes in a single address space composed of virtualized memory.

[[Embedded system]]s and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased [[Deterministic algorithm|determinism]]; virtual memory systems trigger unpredictable [[trap (computing)|traps]] that may produce unwanted and unpredictable delays in response to input, especially if the trap requires that data be read into main memory from secondary memory. The hardware to translate virtual addresses to physical addresses typically requires a significant chip area to implement, and not all chips used in embedded systems include that hardware, which is another reason some of those systems do not use virtual memory.

==History==
In the 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as [[Overlay (programming)|overlaying]].
Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use.<ref name="denning">{{cite journal |author-link=Peter J. Denning |last=Denning |first=Peter |title=Before Memory Was Virtual |journal=In the Beginning: Recollections of Software Pioneers |year=1997 |url=http://denninginstitute.com/pjd/PUBS/bvm.pdf}}</ref> To allow for [[multiprogramming]] and [[computer multitasking|multitasking]], many early systems divided memory between multiple programs without virtual memory, such as early models of the [[PDP-10]] via [[Processor register|registers]]. A claim that the concept of virtual memory was first developed by German physicist [[Fritz-Rudolf Güntsch]] at the [[Technische Universität Berlin]] in 1956 in his doctoral thesis, ''Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation'',<ref>{{cite journal |first=Elke |last=Jessen |title=Origin of the Virtual Memory Concept |journal=[[IEEE Annals of the History of Computing]] |volume=26 |issue=4 |year=2004 |pages=71–72}}</ref><ref name=springer>{{Cite journal |last1=Jessen |first1=E. |title= Die Entwicklung des virtuellen Speichers |doi=10.1007/s002870050034 |journal=Informatik-Spektrum |issn=0170-6012 |volume=19 |issue=4 |pages=216–219 |year=1996 |s2cid=11514875 |language=de}}</ref> does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 10<sup>5</sup> words which mapped exactly onto the 10<sup>5</sup> words of the drums, i.e. the addresses were real addresses and there was no form of indirect mapping, a key feature of virtual memory. What Güntsch did invent was a form of [[cache memory]], since his high-speed memory was intended to contain a copy of some blocks of code or data taken from the drums. Indeed, he wrote (as quoted in translation{{sfnp | Jessen | 2004}}): "The programmer need not respect the existence of the primary memory (he need not even know that it exists), for there is only one sort of addresses {{sic}} by which one can program as if there were only one storage." This is exactly the situation in computers with cache memory, one of the earliest commercial examples of which was the IBM System/360 Model 85.<ref>{{citation |last=Liptay |first=J. S. |title=Structural Aspects of the System/360 Model 85 – The Cache |journal=[[IBM Systems Journal]] |volume=7 |pages=15–21 |year=1968 |doi=10.1147/sj.71.0015}}</ref> In the Model 85 all addresses were real addresses referring to the main core store. A semiconductor cache store, invisible to the user, held the contents of parts of the main store in use by the currently executing program. This is exactly analogous to Güntsch's system, designed as a means to improve performance, rather than to solve the problems involved in multi-programming. [[File:University of Manchester Atlas, January 1963.JPG|thumb|The University of Manchester [[Atlas Computer]] was the first computer to feature true virtual memory.]] The first true virtual memory system was that implemented at the [[University of Manchester]] to create a one-level storage system<ref>{{citation | last1=Kilburn | first1=T. | last2=Edwards | first2=D. B. G. | last3=Lanigan | first3=M. J. | last4=Sumner | first4=F. H. | title=One-level Storage System | journal=IRE Trans EC-11 | pages=223–235 | year=1962| issue=2 | doi=10.1109/TEC.1962.5219356 }}</ref> as part of the [[Atlas Computer]]. 
It used a [[paging]] mechanism to map the virtual addresses available to the programmer onto the real memory that consisted of 16,384 words of primary [[magnetic-core memory|core memory]] with an additional 98,304 words of secondary [[drum memory]].<ref>{{cite web|url=https://www.ourcomputerheritage.org/Maincomp/Fer/ccs-f5x2.pdf|title=Ferranti Atlas 1 & 2 – Systems Architecture|date=12 November 2009}}</ref> The addition of virtual memory into the Atlas also eliminated a looming programming problem: planning and scheduling data transfers between main and secondary memory and recompiling programs for each change of size of main memory.<ref>{{cite encyclopedia |last=Denning |first=Peter J. |entry=Virtual memory |date=1 January 2003 |entry-url=https://dl.acm.org/doi/abs/10.5555/1074100.1074903 |encyclopedia=Encyclopedia of Computer Science |pages=1832–1835 |publisher=John Wiley and Sons |isbn=978-0-470-86412-8 |access-date=10 January 2023}}</ref> The first Atlas was commissioned in 1962 but working prototypes of paging had been developed by 1959.<ref name="denning"/>{{rp|page=2}}<ref>{{cite journal |first=R. J. |last=Creasy |url=http://pages.cs.wisc.edu/~stjones/proj/vm_reading/ibmrd2505M.pdf |title=The origin of the VM/370 time-sharing system |journal=[[IBM Journal of Research & Development]] |volume=25 |issue=5 |date=September 1981 |page=486|doi=10.1147/rd.255.0483 }}</ref><ref>{{cite web|url=http://www.computer50.org/kgill/atlas/atlas.html|title=The Atlas|archive-url=https://web.archive.org/web/20141006103119/http://www.computer50.org/kgill/atlas/atlas.html|archive-date=6 October 2014|url-status=usurped}}</ref>

As early as 1958, [[Robert S. Barton]], working at Shell Research, suggested that main storage should be allocated automatically rather than having the programmer be concerned with overlays from secondary memory, in effect proposing virtual memory.<ref name="Waychoff 1979">{{cite web |last1=Waychoff |first1=Richard |title=Stories About the B5000 and People Who Were There |url= https://archive.computerhistory.org/resources/access/text/2016/06/102724640-05-01-acc.pdf |website=Computer History Museum}}</ref>{{rp|49}}<ref name="IEEE-Computer-Aug-1977">{{cite web |title=IEEE Computer August 1977 David Bulman's Letter to the Editor |url=https://www.computer.org/csdl/magazine/co/1977/08/01646583/13rRUxbCbsW |website=IEEE}}</ref> By 1960 Barton was lead architect on the Burroughs B5000 project. From 1959 to 1961, W. R. Lonergan was manager of the Burroughs Product Planning Group, which included Barton, [[Donald Knuth]] as consultant, and Paul King. In May 1960, UCLA ran a two-week seminar "Using and Exploiting Giant Computers" to which Paul King and two others were sent. Stan Gill gave a presentation on virtual memory in the Atlas I computer. Paul King took the ideas back to Burroughs and it was determined that virtual memory should be designed into the core of the B5000.{{r|Waychoff 1979|p=3}}
Burroughs Corporation released the B5000 in 1964 as the first commercial computer with virtual memory.<ref>{{Cite book|first=Harvey G.|last=Cragon|title=Memory Systems and Pipelined Processors|publisher=Jones and Bartlett Publishers|page=113|year=1996|url=https://books.google.com/books?id=q2w3JSFD7l4C|isbn=978-0-86720-474-2}}</ref> IBM developed{{efn|IBM had previously used the term 'hypervisor' for the [[IBM System/360 Model 65|360/65]],<ref>{{cite conference | url = https://www.computer.org/csdl/proceedings/afips/1971/5077/00/50770163.pdf | title = System/370 integrated emulation under OS and DOS | first = Gary R. |last=Allred | page = 164 | conference = 1971 [[Spring Joint Computer Conference]] | volume = 38 | doi = 10.1109/AFIPS.1971.58 | date = May 1971 | publisher = AFIPS Press | access-date = 12 June 2022 }}</ref> but that did not involve virtual memory.}} the concept of [[hypervisor]]s in their [[IBM CP-40|CP-40]] and [[CP-67]], and in 1972 provided it for the [[IBM System/370|S/370]] as Virtual Machine Facility/370.<ref>{{cite book |url=http://www.vm.ibm.com/pubs/HCSF8A50.PDF |title=z/VM built on IBM Virtualization Technology General Information Version 4 Release 3.0 |id=GC24-5991-04 |date=12 April 2002}}</ref> IBM introduced the Start Interpretive Execution ('''SIE''') instruction as part of 370-XA on the 3081, and VM/XA versions of [[VM (operating system)|VM]] to exploit it. Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly.<ref name="denning" /> There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over;<ref name="denning" /> an [[IBM]] research team led by [[David Sayre]] showed that their virtual memory overlay system consistently worked better than the best manually controlled systems.<ref>{{Cite journal | last1 = Sayre | first1 = D. | title = Is automatic 'folding' of programs efficient enough to displace manual? | doi = 10.1145/363626.363629 | journal = Communications of the ACM | volume = 12 | issue = 12 | pages = 656–660 | year = 1969 | s2cid = 15655353 }}</ref> Throughout the 1970s, the IBM 370 series running their virtual-storage based operating systems provided a means for business users to migrate multiple older systems into fewer, more powerful, mainframes that had improved price/performance. The first [[minicomputer]] to introduce virtual memory was the Norwegian [[NORD-1]]; during the 1970s, other minicomputers implemented virtual memory, notably [[VAX]] models running [[OpenVMS|VMS]]. Virtual memory was introduced to the [[x86]] architecture with the [[protected mode]] of the [[Intel 80286]] processor, but its segment swapping technique scaled poorly to larger segment sizes. The [[Intel 80386]] introduced paging support underneath the existing [[Segmentation (memory)|segmentation]] layer, enabling the page fault exception to chain with other exceptions without [[double fault]]. 
However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.<ref>{{cite web | title = Difference Between Paging and Segmentation | url = https://unstop.com/blog/difference-between-paging-and-segmentation | website = Unstop | access-date = 14 December 2024 }}</ref>

==Paged virtual memory==
{{See also|Memory paging}}
{{More citations needed section|date=December 2010}}
Nearly all current implementations of virtual memory divide a [[virtual address space]] into [[Page (computer memory)|page]]s, blocks of contiguous virtual memory addresses. Pages on contemporary{{efn|IBM [[DOS/360 and successors#DOS/VS|DOS/VS]] and [[OS/VS1]] only supported 2 KB pages.}} systems are usually at least 4 [[kilobyte]]s in size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes.<ref>{{cite book|last1=Quintero|first1=Dino |display-authors=et al.|title=IBM Power Systems Performance Guide: Implementing and Optimizing|date=1 May 2013|publisher=IBM Corporation|isbn=978-0738437668|page=138|url=https://books.google.com/books?id=lHTJAgAAQBAJ&pg=PA138|access-date=18 July 2017}}</ref>

===Page tables===
[[Page table]]s are used to translate the virtual addresses seen by the application into [[physical address]]es used by the [[Computer hardware|hardware]] to process instructions;<ref>{{cite book|last1=Sharma|first1=Dp|title=Foundation of Operating Systems|date=2009|publisher=Excel Books India|isbn=978-81-7446-626-6|page=62|url=https://books.google.com/books?id=AjWh-o7eICMC&pg=PA62|access-date=18 July 2017}}</ref> the hardware that handles this translation is often known as the [[memory management unit]]. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a [[page fault]] [[trap (computing)|exception]], invoking the paging supervisor component of the [[operating system]].

Systems can have, e.g., one page table for the whole system, separate page tables for each address space or process, or separate page tables for each segment; similarly, systems can have, e.g., no segment table, one segment table for the whole system, separate segment tables for each address space or process, or separate segment tables for each ''region'' in a tree{{efn|On [[IBM Z]]<ref>{{cite book | title = z/Architecture - Principles of Operation | id = SA22-7832-13 | edition = Fourteenth | date = May 2022 | section = Translation Tables | section-url = http://publibfp.dhe.ibm.com/epubs/pdf/a227832d.pdf#page=152 | pages = 3-46-3-53 | url = http://publibfp.dhe.ibm.com/epubs/pdf/a227832d.pdf | publisher = [[IBM]] | access-date = January 18, 2023 }} </ref> there is a 3-level tree of regions for each address space.}} of region tables for each address space or process. If there is only one page table, different applications [[multiprogramming|running at the same time]] use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces, and concurrent applications with separate page tables are mapped to different real addresses.
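To make the translation step concrete, the following C sketch models a single-level page table in software. It is a simplified illustration only; the names (<code>PAGE_SIZE</code>, <code>PageTableEntry</code>, and so on) are assumptions for the example, and real memory management units perform the equivalent lookup in hardware, usually through multi-level tables and a [[translation lookaside buffer]].

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12u                         /* assumed 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  1024u                       /* size of this toy address space */

typedef struct {
    bool      present;     /* is the page currently in real memory? */
    uintptr_t frame_base;  /* physical address of the page frame, if present */
} PageTableEntry;

static PageTableEntry page_table[NUM_PAGES];   /* one entry per virtual page */

/* Translate a virtual address; returns true and stores the physical
 * address in *phys on success, false when the access would page-fault. */
bool translate(uintptr_t virt, uintptr_t *phys)
{
    uintptr_t page   = virt >> PAGE_SHIFT;     /* virtual page number    */
    uintptr_t offset = virt & (PAGE_SIZE - 1); /* offset within the page */

    if (page >= NUM_PAGES || !page_table[page].present) {
        /* A real MMU would raise a page-fault exception here and the
         * paging supervisor would load the page or report an error. */
        return false;
    }
    *phys = page_table[page].frame_base + offset;
    return true;
}
</syntaxhighlight>

For example, with 4 KB pages the virtual address 0x3A7F lies in page 3 at offset 0xA7F; if page 3 is resident in the frame at physical address 0x5000, the translated address is 0x5A7F.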
Some earlier systems with smaller real memory sizes, such as the [[SDS 940]], used ''[[Page address register|page registers]]'' instead of page tables in memory for address translation.

===Paging supervisor===
This part of the operating system creates and manages page tables and lists of free page frames. To ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a [[page replacement algorithm]], e.g., a [[least recently used]] (LRU) algorithm. Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables.

Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details:
*If the virtual address is invalid, the paging supervisor treats it as an error.
*If the page is valid and the page information is not loaded into the MMU, the page information will be stored into one of the page registers.
*If the page is uninitialized, a new page frame may be assigned and cleared.
*If there is a stolen page frame containing the desired page, that page frame will be reused.
*For a fault due to a write attempt into a read-protected page, if it is a copy-on-write page then a free page frame will be assigned and the contents of the old page copied; otherwise it is treated as an error.
*If the virtual address is a valid page in a memory-mapped file or a paging file, a free page frame will be assigned and the page read in.
In most cases, there will be an update to the page table, possibly followed by purging the Translation Lookaside Buffer (TLB), and the system restarts the instruction that caused the exception. If the free page frame queue is empty then the paging supervisor must free a page frame using the same [[page replacement algorithm]] used for page stealing.

===Pinned pages===
Operating systems have memory areas that are ''pinned'' (never swapped to secondary storage). Other terms used are ''locked'', ''fixed'', or ''wired'' pages. For example, [[interrupt]] mechanisms rely on an array of pointers to their handlers, such as [[I/O]] completion and [[page fault]]. If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable.

Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example:
* The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging would not even work because the necessary code would not be available.
* Timing-dependent components may be pinned to avoid variable paging delays.
* [[Data buffer]]s that are accessed directly by peripheral devices that use [[direct memory access]] or [[I/O channel]]s must reside in pinned pages while the I/O operation is in progress because such devices and the [[Bus (computing)|buses]] to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a [[IOMMU|memory management unit for I/O]], transfers cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. For example, the data could come from a measurement sensor, and real-time data lost because of a page fault cannot be recovered.

In IBM's operating systems for [[System/370]] and successor systems, the term is "fixed", and such pages may be long-term fixed, or may be short-term fixed, or may be unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds, rather than time measured in fractions of one second) whereas I/O buffers are usually short-term fixed (usually measured in significantly less than wall-clock time, possibly for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing which is performed without resorting to a time-consuming [[Supervisor Call instruction]]).

[[Multics]] used the term "wired". [[OpenVMS]] and [[Microsoft Windows|Windows]] refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable. The [[Single UNIX Specification]] also uses the term "locked" in the specification for {{code|lang=c|mlock()}}, as do the {{code|lang=c|mlock()}} [[man pages]] on many [[Unix-like]] systems.

====Virtual-real operation====
In [[OS/VS1]] and similar OSes, some parts of system memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for [[interrupt]] mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and virtual-fixed).{{citation needed|date=September 2022}}

===Thrashing===
When [[paging]] and [[Paging#Page stealing|page stealing]] are used, a problem called "[[thrashing (computer science)|thrashing]]"<ref name="Thrashing">{{cite web |title=Thrashing |url= https://public.support.unisys.com/aseries/docs/ClearPath-MCP-20.0/86000387-514/section-000023183.html |website=Unisys}}</ref> can occur, in which the computer spends an unsuitably large amount of time transferring pages to and from a backing store, hence slowing down useful work. A task's [[working set]] is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes. Thrashing is often the result of a sudden spike in page demand from a small number of running programs.
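The effect can be reproduced deliberately. The following C sketch is an illustration only: the default buffer size is an arbitrary assumption and should be chosen larger than the machine's real memory, and depending on the system's swap and overcommit configuration the program may be killed rather than thrash. It touches random pages across a working set that does not fit in real memory, so that once the system starts paging, almost every access causes a page fault:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>

#define PAGE 4096                      /* assumed page size in bytes */

int main(int argc, char **argv)
{
    /* Working-set size in mebibytes; pass a value larger than installed RAM. */
    size_t mib  = (argc > 1) ? strtoul(argv[1], NULL, 10) : 16384;
    size_t size = mib * 1024 * 1024;

    unsigned char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touch one byte in a random page on each iteration.  When the buffer
       exceeds available real memory, each touch tends to evict another
       page, and the program spends most of its time waiting on paging.
       Runs until interrupted. */
    size_t pages = size / PAGE;
    for (unsigned long i = 0; ; i++) {
        size_t page = ((size_t)rand() % pages) * PAGE;
        buf[page]++;
        if (i % 1000000 == 0)
            printf("%lu touches\n", i);
    }
}
</syntaxhighlight>

The same access pattern over a buffer that fits in real memory runs orders of magnitude faster, which is the gap that thrashing-avoidance mechanisms try to close.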
Swap-token<ref>{{Cite journal | author1=Song Jiang | author2=Xiaodong Zhang | title=Token-ordered LRU: an effective page replacement policy and its implementation in Linux systems |journal=Performance Evaluation |issn=0166-5316 |volume=60 |issue=1–4 |year=2005 |pages = 5–29 |doi=10.1016/j.peva.2004.10.002}}</ref> is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process that holds the token is given the privilege to allocate more physical memory pages to build its working set, and is expected to finish its execution quickly and release the memory pages to other processes. A time stamp is used to hand over the token from one process to the next. The first version of swap-token was implemented in Linux 2.6.<ref name="swap-token-page">{{cite web|first=Xiaodong |last=Zhang<!--No authorship info on the page but it is under Zhang's personal site at OSU. Don't link [[Xiaodong Zhang]], that's someone else--> |publisher=Ohio State University |url=https://web.cse.ohio-state.edu/~zhang.574/swaptoken-PE-05.html|title=Swap Token effectively minimizes system thrasing effects and is adopted in OS kernels|url-status=dead|archive-url=https://web.archive.org/web/20231207203355/https://web.cse.ohio-state.edu/~zhang.574/swaptoken-PE-05.html|archive-date=2023-12-07}}</ref> The second version, called preempt swap-token, is also in Linux 2.6.<ref name="swap-token-page" /> In this updated swap-token implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to a process with high priority, i.e., one with a large number of swapped-out pages. The length of the time stamp is not a constant but is determined by the priority: the higher the number of swapped-out pages of a process, the longer its time stamp will be.

==Segmented virtual memory==
Some systems, such as the [[Burroughs Corporation|Burroughs]] B5500,<ref>{{cite book|author=Burroughs|id=1021326|title=Burroughs B5500 Information Processing System Reference Manual|url=http://bitsavers.org/pdf/burroughs/B5000_5500_5700/1021326_B5500_RefMan_May67.pdf|year=1964|publisher=[[Burroughs Corporation]]|access-date=28 November 2013}}</ref> and the current Unisys MCP systems<ref name="MCP-VM">{{cite web |title=Unisys MCP Virtual Memory |url= https://public.support.unisys.com/aseries/docs/ClearPath-MCP-20.0/86000387-514/section-000023206.html |website=Unisys}}</ref> use segmentation instead of paging, dividing virtual address spaces into variable-length segments. Using segmentation matches the allocated memory blocks to the logical needs and requests of the programs, rather than the physical view of a computer, although pages themselves are an artificial division in memory. The designers of the B5000 would have found the artificial size of pages to be [[Procrustes|Procrustean]] in nature, a story they would later use for the exact data sizes in the [[Burroughs B1700|B1000]].<ref name="B1000-Procrustes">{{cite book |chapter=Design of the Burroughs B1700 |chapter-url= https://dl.acm.org/doi/10.1145/1479992.1480060 |website=ACM|date= 1972 |doi= 10.1145/1479992.1480060 |last1= Wilner |first1= W. T.
|title= Proceedings of the December 5–7, 1972, fall joint computer conference, Part I on – AFIPS '72 (Fall, part I) |pages= 489–497 |isbn= 978-1-4503-7912-0 }}</ref> In the Burroughs and Unisys systems, each memory segment is described by a master [[Data descriptor|descriptor]], a single absolute descriptor that may be referenced by other relative (copy) descriptors, effecting sharing either within a process or between processes. Descriptors are central to the working of virtual memory in MCP systems. Descriptors contain not only the address of a segment but also its length and its status in virtual memory, indicated by the 'p-bit' or 'presence bit', which indicates whether the address refers to a segment in main memory or to a secondary-storage block. When a non-resident segment (p-bit is off) is accessed, an interrupt occurs to load the segment from secondary storage at the given address or, if the address itself is 0, to allocate a new block. In the latter case, the length field in the descriptor is used to allocate a segment of that length.

A further problem, in addition to thrashing, with a segmented scheme is checkerboarding,<ref name="Checkerboarding">{{cite web |title=Checkerboarding |url= https://public.support.unisys.com/aseries/docs/ClearPath-MCP-20.0/86000387-514/section-000023184.html |website=Unisys}}</ref> where all free segments become too small to satisfy requests for new segments. The solution is to perform memory compaction to pack all used segments together and create a large free block from which further segments may be allocated. Since there is a single master descriptor for each segment, the new block address needs to be updated only in that descriptor, as all copies refer to the master descriptor.

Paging is not free from fragmentation – the fragmentation is internal to pages ([[internal fragmentation]]). If a requested block is smaller than a page, then some space in the page will be wasted. If a block requires more than a page, only a small area of the last page may be used, and the rest of that page is wasted. Fragmentation thus becomes a problem passed on to programmers, who may well distort their programs to match certain page sizes. With segmentation, the fragmentation is external to segments ([[external fragmentation]]) and is thus a system problem, in keeping with the original aim of virtual memory: to relieve programmers of such memory considerations. In multi-processing systems, optimal operation of the system depends on the mix of independent processes at any time.

Hybrid schemes of segmentation and paging may be used. The [[Intel 80286]] supports a similar segmentation scheme as an option, but it is rarely used. Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as [[Multics]] and [[IBM System/38]], are usually paging-predominant, segmentation providing memory protection.<ref>{{Cite book|url = http://www.bitsavers.org/pdf/ge/GE-645/LSB0468_GE-645_System_Manual_Jan1968.pdf|title = GE-645 System Manual|pages = 21–30|date = January 1968|access-date = 25 February 2022}}</ref><ref>{{cite web|last1=Corbató |first1=F. J.|author-link=Fernando J. Corbató|last2=Vyssotsky |first2=V. A.|author2-link=Victor A. Vyssotsky|title=Introduction and Overview of the Multics System |url=http://www.multicians.org/fjcc1.html |access-date=13 November 2007 }}</ref><ref>{{cite web|last1=Glaser |first1=Edward L. |last2=Couleur |first2=John F. |last3=Oliver |first3=G. A.
|name-list-style=amp |title=System Design of a Computer for Time Sharing Applications |url=http://www.multicians.org/fjcc2.html }}</ref> In the [[Intel 80386]] and later [[IA-32]] processors, the segments reside in a [[32-bit]] linear, paged address space. Segments can be moved in and out of that space; pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted [[x86 virtualization]] solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM, guest OS or guest application stack needs three.<ref>{{Cite web|url=https://old.hotchips.org/wp-content/uploads/hc_archives/hc17/1_Sun/HC17.T1P2.pdf|first1=J. E. |last1=Smith |first2=R. |last2=Uhlig |date=14 August 2005 |title=''Virtual Machines: Architectures, Implementations and Applications'', HOTCHIPS 17, Tutorial 1, part 2}}</ref>{{rp|22}} The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces. This difference has important consequences; a segment is not a page with variable length or a simple way to lengthen the address space. Segmentation that can provide a single-level memory model in which there is no differentiation between process memory and file system consists of only a list of segments (files) mapped into the process's potential address space.<ref>{{Cite journal|last1=Bensoussan |first1=André|last2=Clingen|first2=Charles T.|last3=Daley|first3=Robert C.|date=May 1972|title=The Multics Virtual Memory: Concepts and Design|journal=Communications of the ACM|volume=15|issue=5|pages=308–318|url=http://www.multicians.org/multics-vm.html|doi=10.1145/355602.361306|citeseerx=10.1.1.10.6731|s2cid=6434322}}</ref> This is not the same as the mechanisms provided by calls such as [[mmap]] and [[Win32]]'s MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. 
The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap.<ref>{{cite web |title=Multics Execution Environment |url=http://www.multicians.org/exec-env.html |website=Multicians.org |access-date=9 October 2016}}</ref> This eliminates the need for a [[Linker (computing)|linker]] completely<ref name="denning" /> and works when different processes map the same file into different places in their private address spaces.<ref>{{Cite book|first=Elliott I.|last=Organick|title=The Multics System: An Examination of Its Structure|publisher=MIT Press|year=1972|isbn=978-0-262-15012-5|url-access=registration|url=https://archive.org/details/multicssystemex00orga}}</ref>

==Address space swapping==
Some operating systems provide for swapping entire [[address space]]s, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap-out operation.

IBM's [[MVS]], from [[OS/360 and successors#OS/VS2 SVS and MVS|OS/VS2 Release 2]] through [[z/OS]], provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible<ref>The most important requirement is that the program be APF authorized.</ref> main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable using a SYSEVENT [[Supervisor Call instruction]] (SVC); certain changes<ref>e.g., requesting use of preferred memory</ref> in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.<ref>{{cite web |title=Control swapping (DONTSWAP, OKSWAP, TRANSWAP) |series=z/OS MVS Programming: Authorized Assembler Services Reference SET-WTO SA23-1375-00 |url=http://pic.dhe.ibm.com/infocenter/zos/v2r1/index.jsp?topic=%2Fcom.ibm.zos.v2r1.ieaa400%2Fswap.htm |date=1990–2014 |website=IBM Knowledge Center |access-date=9 October 2016}}</ref>

Swapping does not necessarily require memory management hardware if, for example, multiple jobs are swapped in and out of the same area of storage.

==See also==
{{cmn|colwidth=30em|
* [[Processor design]]
* [[Page (computer memory)]]
* [[Cache replacement policies]]
* [[Memory management]]
* [[Memory management (operating systems)]]
* [[Protected mode]], an [[x86]] mode that allows for virtual memory.
* [[CUDA|CUDA Pinned memory]]
* [[Heterogeneous System Architecture]], a series of specifications intended to unify CPU and GPU memory
}}

==Notes==
{{Notelist}}

==References==
{{Reflist}}

==Further reading==
* Hennessy, John L.; Patterson, David A. ''Computer Architecture: A Quantitative Approach'' ({{ISBN|1-55860-724-2}})

==External links==
{{wikisource|The Paging Game}}
* [http://pages.cs.wisc.edu/~remzi/OSTEP Operating Systems: Three Easy Pieces], by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau. Arpaci-Dusseau Books, 2014.
Relevant chapters: [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-intro.pdf Address Spaces] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-mechanism.pdf Address Translation] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-segmentation.pdf Segmentation] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-paging.pdf Introduction to Paging] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-tlbs.pdf TLBs] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-smalltables.pdf Advanced Page Tables] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys.pdf Swapping: Mechanisms] [http://pages.cs.wisc.edu/~remzi/OSTEP/vm-beyondphys-policy.pdf Swapping: Policies] * {{cite web |url=http://archive.michigan-terminal-system.org/documentation/documents/timeSharingSupervisorPrograms-1971.pdf |title=Time-Sharing Supervisor Programs |archive-url=https://web.archive.org/web/20121101100208/https://1a9f2076-a-62cb3a1a-s-sites.googlegroups.com/site/michiganterminalsystem/documentation/documents/timeSharingSupervisorPrograms-1971.pdf?attachauth=ANoY7cq50xGif66cFkdyUd2V34MN67JzvBSC2d8Bh9YyKRUdJhWIZajTpdyvTUA9GlC6qgTpQc6Vwy1PIj94BwEehXgqir8V9Pa7C1e8gDt_KDW_nCp7wtn34odFoL2BK9pBYRP2KMSY6TpHua2OZkiAAN78JDgbMiidGIzEd-Mqns5h0AMxTB2Oj4glJmdV6GU6yZ1aNWeP876U3BSdlsTv0FrJmI8eG8KQnC7K7dvmR95s7Yi2IAm9iTletQuFbtosE7cSVrfXBj5-Kxn-wQmP4mSJaSjTC0n2-HbyHwQ7bzfHePbE0lQ%3D&attredirects=0 |archive-date=2012-11-01 |url-status=dead}} by Michael T. Alexander in ''Advanced Topics in Systems Programming'', University of Michigan Engineering Summer Conference 1970 (revised May 1971), compares the scheduling and resource allocation approaches, including virtual memory and paging, used in four mainframe operating systems: [[CP-67]], [[TSS/360]], [[Michigan Terminal System|MTS]], and [[Multics]]. * [http://linux-mm.org/ LinuxMM: Linux Memory Management]. * [https://gnulinuxclub.org/birth-of-linux-kernel/ Birth of Linux Kernel], mailing list discussion. * {{webarchive |url=https://web.archive.org/web/20100622062522/http://msdn2.microsoft.com/en-us/library/ms810616.aspx |date=22 June 2010 |title=The Virtual-Memory Manager in Windows NT, Randy Kath, Microsoft Developer Network Technology Group, 12 December 1992 }} {{Memory management navbox}} {{Authority control}} {{DEFAULTSORT:Virtual Memory}} [[Category:Virtual memory| ]] [[Category:Department of Computer Science, University of Manchester]]