Locality of reference
=== Hierarchical memory ===
{{main|Memory hierarchy}}

Hierarchical memory is a hardware optimization that takes advantage of spatial and temporal locality and can be applied at several levels of the memory hierarchy. [[Paging]] benefits from both temporal and spatial locality. A cache is a simple example of exploiting temporal locality: it is a specially designed, faster but smaller memory area, generally used to keep recently referenced data and data near recently referenced data, which can lead to performance increases.

Data elements in a cache do not necessarily correspond to data elements that are spatially close in main memory; however, data elements are brought into the cache one [[cache line]] at a time. This makes spatial locality important again: if one element is referenced, a few neighboring elements will also be brought into the cache. Finally, temporal locality plays a role at the lowest level, since results that are referenced very close together in time can be kept in the [[Processor register|machine registers]]. Some programming languages (such as [[C (programming language)|C]]) allow the programmer to suggest that certain variables be kept in registers.

Data locality is a typical memory-reference feature of regular programs (though many irregular memory access patterns exist), and it is what makes the hierarchical memory layout profitable. In computers, memory is divided into a hierarchy in order to speed up data accesses. The lower levels of the memory hierarchy tend to be slower but larger. Thus, a program achieves greater performance if it uses memory while it is cached in the upper levels of the hierarchy and avoids bringing other data into those upper levels that would displace data to be used shortly. This is an ideal, and sometimes cannot be achieved.
Typical memory hierarchy (access times and cache sizes are approximations of typical values used {{As of|2013|lc=on}} for the purpose of discussion; actual values and actual numbers of levels in the hierarchy vary):
* [[CPU register]]s (8–256 registers) – immediate access, at the speed of the innermost core of the processor
* L1 [[CPU cache]]s (32 KB to 512 [[kilobyte|KB]]) – fast access, at the speed of the innermost memory bus, owned exclusively by each core
* L2 CPU caches (128 KB to 24 [[megabyte|MB]]) – slightly slower access, at the speed of the [[memory bus]] shared between pairs of cores
* L3 CPU caches (2 MB up to 64 [[megabyte|MB]]) – even slower access, at the speed of the memory bus shared between even more cores of the same processor
* Main [[physical memory]] ([[random-access memory|RAM]]) (256 MB to 64 [[gigabyte|GB]]) – slow access, limited by the spatial distances and general hardware interfaces between the processor and the memory modules on the [[motherboard]]
* Disk ([[virtual memory]], [[file system]]) (1 GB to 256 [[terabyte|TB]]) – very slow, due to the narrower (in bit width) and physically much longer data channel between the main board of the computer and the disk devices, and due to the extra software protocol needed on top of the slow hardware interface
* Remote memory (other computers or the cloud) (practically unlimited) – speed varies from very slow to extremely slow

Modern machines tend to read blocks of lower memory into the next level of the memory hierarchy. If this displaces memory that is in use, the [[operating system]] tries to predict which data will be accessed least (or latest) and move it down the memory hierarchy. Prediction algorithms tend to be simple to reduce hardware complexity, though they are becoming somewhat more complicated.