{{short description|Computer memory management methodology}}
{{Redirect|Memory allocation|memory allocation in the brain|Neuronal memory allocation}}
{{about|memory management in an [[Virtual address space|address space]]|management of physical memory|Memory management (operating systems)}}
{{More footnotes|date=April 2014}}
{{OS}}
'''Memory management''' (also '''dynamic memory management''', '''dynamic storage allocation''', or '''dynamic memory allocation''') is a form of [[Resource management (computing)|resource management]] applied to [[computer memory]]. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single [[Process (computing)|process]] might be underway at any time.<ref name=":0" />

Several methods have been devised that increase the effectiveness of memory management. [[Virtual memory]] systems separate the [[memory address]]es used by a process from actual physical addresses, allowing separation of processes and increasing the size of the [[virtual address space]] beyond the available amount of [[Random-access memory|RAM]] using [[paging]] or swapping to [[secondary storage]]. The quality of the virtual memory manager can have an extensive effect on overall system [[Computer performance|performance]]. Virtual memory lets a computer appear to have more memory available than is physically present, and thereby allows multiple processes to share it.

In some [[operating system]]s, e.g. [[Burroughs MCP|Burroughs/Unisys MCP]],<ref name=Unisys-MCP-Memory>{{cite book |chapter=Unisys MCP Managing Memory |chapter-url=https://public.support.unisys.com/aseries/docs/ClearPath-MCP-20.0/86000387-514/chapter-000002004.html |title=System Operations Guide |publisher=[[Unisys]]}}</ref> and [[OS/360 and successors]],<ref>{{cite book | publisher = IBM Corporation | title = IBM Operating System/360 Concepts and Facilities | date = 1965 | edition = First | section = Main Storage Allocation | section-url = http://bitsavers.org/pdf/ibm/360/os/R01-08/C28-6535-0_OS360_Concepts_and_Facilities_1965.pdf#page=72 | page = 74 | series = IBM Systems Reference Library | url = http://bitsavers.org/pdf/ibm/360/os/R01-08/C28-6535-0_OS360_Concepts_and_Facilities_1965.pdf | access-date = Apr 3, 2019 }} </ref> memory is managed by the operating system.{{NoteTag|However, the run-time environment for a language processor may subdivide the memory dynamically acquired from the operating system, e.g., to implement a stack.}} In other operating systems, e.g. [[Unix-like]] operating systems, memory is managed at the application level.

Memory management within an address space is generally categorized as either [[manual memory management]] or automatic memory management.

== {{anchor|HEAP}} Manual memory management ==
[[File:External Fragmentation.svg|thumb|450px|An example of external fragmentation]]
{{main|Manual memory management}}

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size.
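For example, an allocator that keeps its unused blocks on a linked [[free list]] can satisfy a request with a simple first-fit scan. The following C sketch is illustrative only; the {{code|free_block}} structure and the {{code|find_first_fit}} function are hypothetical names, not part of any particular allocator.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Hypothetical free-list node: each unused block records its size and the next unused block. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* First-fit search: return the first free block large enough for the request, or NULL. */
static struct free_block *find_first_fit(struct free_block *head, size_t request)
{
    for (struct free_block *b = head; b != NULL; b = b->next) {
        if (b->size >= request)
            return b;
    }
    return NULL;  /* no block is large enough; the allocator must grow the heap or fail */
}
</syntaxhighlight>

A real allocator would also remove the chosen block from the free list, split it if it is much larger than needed, and record bookkeeping information so that the block can later be released.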
Memory requests are satisfied by allocating portions from a large pool{{NoteTag|In some operating systems, e.g., [[OS/360]], the free storage may be subdivided in various ways, e.g., subpools in [[OS/360]], below the line, above the line and above the bar in [[z/OS]].}} of memory called the ''heap''{{NoteTag|Not to be confused with the unrelated [[Heap (data structure)|heap]] data structure.}} or ''free store''. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations. In the C language, the function which allocates memory from the heap is called {{code|malloc}} and the function which takes previously allocated memory and marks it as "free" (to be used by future allocations) is called {{code|free}}. {{NoteTag|A simplistic implementation of these two functions can be found in the article "Inside Memory Management".<ref>{{cite web |url=https://developer.ibm.com/tutorials/l-memory/ |title=Inside Memory Management |website=IBM DeveloperWorks |author=Jonathan Bartlett}}</ref> }}

Several issues complicate the implementation, such as [[fragmentation (computer)#External fragmentation|external fragmentation]], which arises when there are many small gaps between allocated memory blocks, none of which is large enough on its own to satisfy an allocation request. The allocator's [[metadata (computing)|metadata]] can also inflate the size of (individually) small allocations. This is often managed by [[chunking (computing)|chunking]]. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "[[memory leak]]s").

=== Efficiency ===
The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by [[Digital Equipment Corporation]] illustrates the [[computational overhead|overheads]] involved for a variety of allocators. The lowest average [[instruction path length]] required to allocate a single memory slot was 52 (as measured with an instruction level [[Profiling (computer programming)|profiler]] on a variety of software).<ref name=":0">{{Cite journal | doi = 10.1002/spe.4380240602| title = Memory allocation costs in large C and C++ programs| journal = Software: Practice and Experience| volume = 24| issue = 6| pages = 527–542| date=June 1994 | last1 = Detlefs | first1 = D. | last2 = Dosser | first2 = A. | last3 = Zorn | first3 = B. | url = https://users.cs.northwestern.edu/~robby/uc-courses/15400-2008-spring/spe895.pdf | citeseerx = 10.1.1.30.3073| s2cid = 14214110}}</ref>

=== Implementations ===
Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a [[Pointer (computer programming)|pointer]] [[reference (computer science)|reference]]. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the [[kernel (operating system)|kernel]], and may use any of the following methods:

==== {{Anchor|FIXED-SIZE}}Fixed-size blocks allocation ====
{{main|Memory pool}}
Fixed-size blocks allocation, also called memory pool allocation, uses a [[free list]] of fixed-size blocks of memory (often all of the same size). This works well for simple [[embedded system]]s where no large objects need to be allocated, but suffers from [[Fragmentation (computing)|fragmentation]], especially with long memory addresses.
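As an illustration of this approach, the following is a minimal sketch of a pool built over a static array of equal-sized blocks, with the unused blocks threaded onto a free list. The {{code|pool_*}} names and the block size and count are arbitrary choices for this example, not part of any particular library.

<syntaxhighlight lang="c">
#include <stddef.h>

#define BLOCK_SIZE  64    /* illustrative payload size */
#define BLOCK_COUNT 128   /* illustrative pool capacity */

/* While a block is unused it stores a link to the next free block;
   while in use its whole payload belongs to the caller. */
union pool_block {
    union pool_block *next;
    unsigned char payload[BLOCK_SIZE];
};

static union pool_block pool_storage[BLOCK_COUNT];
static union pool_block *pool_free_list = NULL;

/* Build the initial free list once at start-up. */
static void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        pool_storage[i].next = pool_free_list;
        pool_free_list = &pool_storage[i];
    }
}

/* Allocation pops a block from the free list. */
static void *pool_alloc(void)
{
    union pool_block *block = pool_free_list;
    if (block != NULL)
        pool_free_list = block->next;
    return block;                  /* NULL when the pool is exhausted */
}

/* Deallocation pushes the block back onto the free list. */
static void pool_free(void *p)
{
    union pool_block *block = p;
    block->next = pool_free_list;
    pool_free_list = block;
}
</syntaxhighlight>

Note that every block in such a pool has the same size, so a request larger than the block size cannot be served at all, and a small request wastes the remainder of its block.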
However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and deallocation, and so it is often used in [[video games]].

==== Buddy blocks ====
{{details|Buddy memory allocation}}

In this system, memory is allocated from several pools of memory instead of just one, where each pool represents blocks of memory of a certain [[power of two]] in size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted [[linked list]] or [[Tree data structure|tree]] and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available block is selected and split. One of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the correspondingly larger-sized buddy-block list.

==== Slab allocation ====
{{main|Slab allocation}}

This memory allocation mechanism preallocates memory chunks suitable to fit objects of a certain type or size.<ref name="silberschatz">{{cite book |first1 = Abraham |last1 = Silberschatz |author1-link = Abraham Silberschatz |first2 = Peter B. |last2 = Galvin |title = Operating system concepts |publisher = Wiley |year = 2004 |isbn = 0-471-69466-5 }}</ref> These chunks are called caches and the allocator only has to keep track of a list of free cache slots. Constructing an object will use any one of the free cache slots and destructing an object will add a slot back to the free cache slot list. This technique alleviates memory fragmentation and is efficient, as there is no need to search for a suitable portion of memory: any open slot will suffice.

==== Stack allocation ====
{{main|Stack-based memory allocation}}

Many [[Unix-like]] systems as well as [[Microsoft Windows]] implement a function called {{code|alloca}} for dynamically allocating stack memory in a way similar to the heap-based {{code|malloc}}. A compiler typically translates it to inlined instructions manipulating the stack pointer.<ref>{{man|3|alloca|Linux}}</ref> Although memory allocated this way does not need to be freed manually, since it is automatically released when the function that called {{code|alloca}} returns, there is a risk of stack overflow. Moreover, since {{code|alloca}} is an ''ad hoc'' extension found in many systems but not in POSIX or the C standard, its behavior in case of a stack overflow is undefined. A safer version of alloca called {{code|_malloca}}, which reports errors, exists on Microsoft Windows.
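A common defensive pattern, similar in spirit to such safer variants, is to take only small temporary buffers from the stack and fall back to the heap above a size threshold. The sketch below is illustrative only; the threshold and the function name are arbitrary choices for this example.

<syntaxhighlight lang="c">
#include <alloca.h>   /* non-standard header; declares alloca on glibc and BSD systems */
#include <stdlib.h>
#include <string.h>

/* Illustrative threshold: requests at or below this size come from the stack. */
#define STACK_LIMIT 1024

void process_copy(const char *s)
{
    size_t len = strlen(s) + 1;
    char *buf;
    int on_stack = (len <= STACK_LIMIT);

    if (on_stack) {
        buf = alloca(len);          /* released automatically when this function returns */
    } else {
        buf = malloc(len);          /* heap fallback avoids overflowing the stack */
        if (buf == NULL)
            return;
    }

    memcpy(buf, s, len);
    /* ... work with buf ... */

    if (!on_stack)
        free(buf);                  /* only the heap path needs an explicit free */
}
</syntaxhighlight>

Such a guard only reduces the risk: choosing a safe threshold still depends on how much stack the surrounding program already uses.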
{{code|_malloca}} requires the use of {{code|_freea}}.<ref>{{cite web |title=_malloca |url=https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/malloca?view=vs-2019 |website=Microsoft CRT Documentation | date=26 October 2022 |language=en-us}}</ref> [[gnulib]] provides an equivalent interface, although instead of throwing an SEH exception on overflow, it delegates to malloc when an overlarge size is detected.<ref>{{cite web |title=gnulib/malloca.h |url=https://github.com/coreutils/gnulib/blob/master/lib/malloca.h |website=GitHub |access-date=24 November 2019}}</ref> A similar feature can be emulated using manual accounting and size-checking, such as in the uses of {{code|alloca_account}} in glibc.<ref>{{cite web |title=glibc/include/alloca.h |url=https://github.com/bminor/glibc/blob/780684eb04298977bc411ebca1eadeeba4877833/include/alloca.h |publisher=Beren Minor's Mirrors |date=23 November 2019}}</ref>

== Automated memory management ==
The proper management of memory in an application is a difficult problem, and several different strategies for handling memory management have been devised.

=== Automatic management of call stack variables ===
{{see also|Automatic variable|Call stack}}

In many programming language implementations, the runtime environment for the program automatically allocates memory in the [[call stack]] for non-static [[local variable]]s of a [[subroutine]], called [[automatic variable]]s, when the subroutine is called, and automatically releases that memory when the subroutine is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other subroutines. The automatic allocation of local variables makes [[Recursion (computer science)|recursion]] possible, to a depth limited by available memory.

=== Garbage collection ===
{{main|Garbage collection (computer science)}}

Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable in a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, garbage collection does require memory resources of its own, and can compete with the application program for processor time.

=== Reference counting ===
{{main|Reference counting}}

Reference counting is a strategy for detecting that memory is no longer usable by a program by maintaining a counter for how many independent pointers point to the memory. Whenever a new pointer points to a piece of memory, the programmer is supposed to increase the counter. When the pointer changes where it points, or when the pointer is no longer pointing to any area or has itself been freed, the counter should decrease. When the counter drops to zero, the memory should be considered unused and freed. Some reference counting systems require programmer involvement and some are implemented automatically by the compiler. A disadvantage of reference counting is that [[circular reference]]s can develop, causing a memory leak.
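The following is a minimal sketch of manual reference counting in C; the {{code|rc_object}} structure and the {{code|rc_retain}}/{{code|rc_release}} helpers are hypothetical names used only for this example.

<syntaxhighlight lang="c">
#include <stdlib.h>

/* A hypothetical reference-counted object: the count records how many
   owners currently hold a pointer to it. */
struct rc_object {
    size_t refcount;
    /* ... object data ... */
};

struct rc_object *rc_new(void)
{
    struct rc_object *obj = calloc(1, sizeof *obj);
    if (obj != NULL)
        obj->refcount = 1;          /* the creator holds the first reference */
    return obj;
}

/* Called whenever another pointer starts referring to the object. */
void rc_retain(struct rc_object *obj)
{
    obj->refcount++;
}

/* Called when a pointer stops referring to the object; frees it at zero. */
void rc_release(struct rc_object *obj)
{
    if (obj != NULL && --obj->refcount == 0)
        free(obj);
}
</syntaxhighlight>

Two objects that hold counted references to each other keep both counts above zero even when nothing else refers to them, which is the circular-reference leak described above.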
Circular references can be mitigated either by adding the concept of a "weak reference" (a reference that does not participate in reference counting, but is notified when the area it is pointing to is no longer valid) or by combining reference counting and garbage collection together.

=== Memory pools ===
{{main|Region-based memory management}}

A memory pool is a technique of automatically deallocating memory based on the state of the application, such as the lifecycle of a request or transaction. The idea is that many applications execute large chunks of code which may generate memory allocations, but that there is a point in execution where all of those chunks are known to be no longer valid. For example, in a web service, after each request the web service no longer needs any of the memory allocated during the execution of the request. Therefore, rather than keeping track of whether or not memory is currently being referenced, the memory is allocated according to the request or lifecycle stage with which it is associated. When that request or stage has passed, all associated memory is deallocated simultaneously.

== Systems with virtual memory ==
{{main|Memory protection|Shared memory (interprocess communication)}}

[[Virtual memory]] is a method of decoupling the memory organization from the physical hardware. The applications operate on memory via ''virtual addresses''. Each attempt by the application to access a particular virtual memory address results in the virtual memory address being translated to an actual ''physical address''.<ref>{{cite book |last1=Tanenbaum |first1=Andrew S. |title=Modern Operating Systems |date=1992 |publisher=Prentice-Hall |location=Englewood Cliffs, N.J. |isbn=0-13-588187-0 |page=90}}</ref> In this way the addition of virtual memory enables granular control over memory systems and methods of access.

In virtual memory systems the operating system limits how a [[Process (computing)|process]] can access the memory. This feature, called [[memory protection]], can be used to prevent a process from reading or writing memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another. Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. [[Shared memory (interprocess communication)|Shared memory]] is one of the fastest techniques for [[inter-process communication]].

Memory is usually classified by access rate into [[primary storage]] and [[secondary storage]]. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.

== Memory management in Burroughs/Unisys MCP systems{{r|Unisys-MCP-Memory}} ==
An operating system manages various resources in the computing system. The memory subsystem is the system element for managing memory; it combines the hardware memory resource and the MCP OS software that manages that resource. The memory subsystem manages the physical memory and the virtual memory of the system (both part of the hardware resource). Virtual memory extends physical memory by using extra space on a peripheral device, usually disk. The memory subsystem is responsible for moving code and data between main and virtual memory in a process known as overlaying.
The Burroughs B5000 was the first commercial implementation of virtual memory (although virtual memory itself had been developed at Manchester University for the Ferranti Atlas computer), integrating virtual memory into the system design from the start (in 1961) and requiring no external memory management unit (MMU).<ref name="Waychoff 1979">{{cite web |last1=Waychoff |first1=Richard |title=Stories About the B5000 and People Who Were There |url= https://archive.computerhistory.org/resources/access/text/2016/06/102724640-05-01-acc.pdf |website=Computer History Museum}}</ref>{{rp|48}}

The memory subsystem is responsible for mapping logical requests for memory blocks to physical portions of memory (segments) which are found in the list of free segments. Each allocated block is managed by means of a segment descriptor,<ref name="MCP Descriptor">{{cite book |title=The Descriptor |url=http://www.bitsavers.org/pdf/burroughs/LargeSystems/B5000_5500_5700/5000-20002-P_The_Descriptor_-_A_Definition_of_the_B_5000_Information_Processing_System_196102.pdf |publisher=[[Burroughs Corporation]] |date=February 1961}}</ref> a special control word containing relevant metadata about the segment, including its address, length, machine type, and the p-bit or ‘presence’ bit, which indicates whether the block is in main memory or needs to be loaded from the address given in the descriptor.

[[Burroughs large systems descriptors|Descriptors]] are essential in providing memory safety and security, ensuring that operations cannot overflow or underflow the referenced block (a failure commonly known as a buffer overflow). Descriptors themselves are protected control words that cannot be manipulated except by specific elements of the MCP OS (enabled by the UNSAFE block directive in [[NEWP]]).

Donald Knuth describes a similar system in Section 2.5 ‘Dynamic Storage Allocation’ of [[The Art of Computer Programming|‘Fundamental Algorithms’]].{{disputed inline|talk=Memory management#Burroughs/Unisys MCP memory management system discussed in Knuth?|date=November 2024}}

== Memory management in OS/360 and successors ==
IBM [[System/360]] does not support virtual memory.{{NoteTag|Except on the Model 67.}} Memory isolation of [[Job (computing)|jobs]] is optionally accomplished using [[Memory protection#Protection keys|protection keys]], assigning the storage for each job a different key, 0 for the supervisor or 1–15. Memory management in [[OS/360 and successors|OS/360]] is a [[Supervisory program|supervisor]] function. Storage is requested using the <code>GETMAIN</code> macro and freed using the <code>FREEMAIN</code> macro, each of which results in a call to the supervisor ([[Supervisor Call instruction|SVC]]) to perform the operation. In OS/360 the details vary depending on how the system is [[System generation|generated]], e.g., for [[OS/360 and successors#PCP|PCP]], [[OS/360 and successors#MFT|MFT]], [[OS/360 and successors#MVT|MVT]].

In OS/360 MVT, suballocation within a job's ''region'' or the shared ''System Queue Area'' (SQA) is based on ''subpools'', areas a multiple of 2 KB in size—the size of an area protected by a protection key. Subpools are numbered 0–255.{{sfn|OS360Sup|loc=|pages=[http://bitsavers.org/pdf/ibm/360/os/R21.7_Apr73/GC28-6646-7_Supervisor_Services_and_Macro_Instructions_Rel_21.7_Sep74.pdf#page=100 82]-85}} Within a region subpools are assigned either the job's storage protection or the supervisor's key, key 0. Subpools 0–127 receive the job's key.
Initially only subpool 0 is created, and all user storage requests are satisfied from subpool 0, unless another is specified in the memory request. Subpools 250–255 are created by memory requests by the supervisor on behalf of the job. Most of these are assigned key 0, although a few get the key of the job. Subpool numbers are also relevant in MFT, although the details are much simpler.{{sfn|OS360Sup|loc=|pages=[http://bitsavers.org/pdf/ibm/360/os/R21.7_Apr73/GC28-6646-7_Supervisor_Services_and_Macro_Instructions_Rel_21.7_Sep74.pdf#page=100 82]}} MFT uses fixed ''partitions'' redefinable by the operator instead of dynamic regions, and PCP has only a single partition.

Each subpool is mapped by a list of control blocks identifying allocated and free memory blocks within the subpool. Memory is allocated by finding a free area of sufficient size, or by allocating additional blocks in the subpool, up to the region size of the job. It is possible to free all or part of an allocated memory area.<ref name="SupvrLogic">{{cite book |publisher=IBM Corporation |title=Program Logic: IBM System/360 Operating System MVT Supervisor |date=May 1973 |pages=107–137 |url=http://bitsavers.org/pdf/ibm/360/os/R21.7_Apr73/plm/GY28-6659-7_MVT_Supervisor_PLM_Rel_21.7_May73.pdf |access-date=Apr 3, 2019}}</ref>

The details for [[OS/VS1]] are similar{{sfn|OSVS1Dig||loc=[https://bitsavers.org/pdf/ibm/370/OS_VS1/GC24-5091-5_OS_VS1_Release_6_Programmers_Reference_Digest_197609.pdf#page=114 p. 2.37-2.39]}} to those for MFT and for MVT; the details for [[OS/VS2]] are similar to those for MVT, except that the page size is 4 KiB. For both OS/VS1 and OS/VS2 the shared ''System Queue Area'' (SQA) is nonpageable.

In [[MVS]] the address space<ref>{{cite book | title = Introduction to OS/VS2 Release 2 | id = GC28-0661-1 | date = March 1973 | edition = first | section = Virtual Storage Layout | section-url = http://bitsavers.org/pdf/ibm/370/OS_VS2/Release_2_1973/GC28-0661-1_Introduction_to_OS_VS2_Release_2_Mar73.pdf#page=37 | page = 37 | publisher = [[IBM]] | series = Systems | url = http://bitsavers.org/pdf/ibm/370/OS_VS2/Release_2_1973/GC28-0661-1_Introduction_to_OS_VS2_Release_2_Mar73.pdf | access-date = July 15, 2024 }} </ref> includes an additional pageable shared area, the ''Common Storage Area'' (CSA), and two additional private areas, the nonpageable ''local system queue area'' (LSQA) and the pageable ''System Work area'' (SWA). Also, the storage keys 0–7 are all reserved for use by privileged code.

== See also ==
* [[Dynamic array]]
* [[Out of memory]]
* [[Heap pollution]]

== Notes ==
{{NoteFoot}}

== References ==
{{reflist}}

== Bibliography ==
* [[Donald Knuth]]. ''Fundamental Algorithms'', Third Edition. Addison-Wesley, 1997. {{ISBN|0-201-89683-4}}. Section 2.5: Dynamic Storage Allocation, pp. 435–456.
* [http://buzzan.tistory.com/m/post/view/id/428 Simple Memory Allocation Algorithms]{{webarchive |url=https://web.archive.org/web/20160305050619/http://buzzan.tistory.com/m/post/428 |date=5 March 2016}} (originally published on OSDEV Community)
* {{Cite book | doi = 10.1007/3-540-60368-9_19| chapter = Dynamic storage allocation: A survey and critical review| title = Memory Management| volume = 986| pages = 1–116| series = Lecture Notes in Computer Science| year = 1995| last1 = Wilson | first1 = P. R. | last2 = Johnstone | first2 = M. S. | last3 = Neely | first3 = M. | last4 = Boles | first4 = D. | isbn = 978-3-540-60368-9| citeseerx = 10.1.1.47.275 |chapter-url=http://www.cs.northwestern.edu/~pdinda/icsclass/doc/dsa.pdf}}
* {{Cite conference | doi = 10.1145/378795.378821| chapter = Composing High-Performance Memory Allocators| title = Proceedings of the ACM SIGPLAN 2001 conference on Programming language design and implementation| conference = [[Programming Language Design and Implementation|PLDI]] '01| pages = 114–124| date=June 2001 | last1 = Berger | first1 = E. D. | last2 = Zorn | first2 = B. G. | last3 = McKinley | first3 = K. S. | author3-link = Kathryn S. McKinley| isbn = 1-58113-414-2| chapter-url = http://www.cs.umass.edu/%7Eemery/pubs/berger-pldi2001.pdf| citeseerx = 10.1.1.1.2112| s2cid = 7501376}}
* {{Cite conference | doi = 10.1145/582419.582421| chapter = Reconsidering Custom Memory Allocation| title = Proceedings of the 17th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications| conference = [[OOPSLA]] '02| pages = 1–12| date=November 2002 | last1 = Berger | first1 = E. D. | last2 = Zorn | first2 = B. G. | last3 = McKinley | first3 = K. S. | author3-link = Kathryn S. McKinley| isbn = 1-58113-471-1| chapter-url = http://people.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf| citeseerx = 10.1.1.119.5298| s2cid = 481812}}
; OS360Sup
: {{cite book | title = OS Release 21 IBM System/360 Operating System Supervisor Services and Macro Instructions | id = GC28-6646-7 | date = September 1974 | edition = Eighth | series = IBM Systems Reference Library | url = http://bitsavers.org/pdf/ibm/360/os/R21.7_Apr73/GC28-6646-7_Supervisor_Services_and_Macro_Instructions_Rel_21.7_Sep74.pdf | publisher = [[IBM]] | ref = {{sfnref|OS360Sup}} }}
; OSVS1Dig
: {{cite book | title = OS/VS1 Programmer's Reference Digest Release 6 | id = GC24-5091-5 with TNLs | date = September 15, 1976 | edition = Sixth | url = http://bitsavers.org/pdf/ibm/370/OS_VS1/GC24-5091-5_OS_VS1_Release_6_Programmers_Reference_Digest_197609.pdf | series = Systems | publisher = [[IBM]] | ref = {{sfnref|OSVS1Dig}} }}

== External links ==
{{Wikibooks}}
* [http://memory-mgr.sourceforge.net/ "Generic Memory Manager" C++ library]
* [https://code.google.com/p/arena-memory-allocation/downloads/list Sample bit-mapped arena memory allocator in C]
* [http://www.gii.upv.es/tlsf/ TLSF: a constant time allocator for real-time systems]
* [https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_cpp_dynamic-memory.php Slides on Dynamic memory allocation]
* [http://www.flounder.com/inside_storage_allocation.htm Inside A Storage Allocator]
* [http://www.memorymanagement.org/ The Memory Management Reference]
:* [http://www.memorymanagement.org/articles/alloc.html The Memory Management Reference, Beginner's Guide Allocation]
* [http://linux-mm.org/ Linux Memory Management]
* {{usurped|1=[https://web.archive.org/web/20120510133117/http://www.enderunix.org/docs/memory.pdf Memory Management For System Programmers]}}
* [http://www.puredevsoftware.com/ VMem - general malloc/free replacement. Fast thread safe C++ allocator]
* [https://www.infostore.co.in/2021/08/operating-system-memory-management.html Operating System Memory Management]

{{-}}
{{Memory management navbox}}
{{Authority control}}

[[Category:Memory management| ]]
[[Category:Computer architecture]]