Cache replacement policies
==Overview==

The average memory reference time is<ref name="ajsmith" />

:<math>T = m \times T_m + T_h + E</math>

where

:<math>m</math> = miss ratio = 1 - (hit ratio)
:<math>T_m</math> = time to make a main-memory access when there is a miss (or, with a multi-level cache, the average memory reference time for the next-lower cache)
:<math>T_h</math> = latency: time to reference the cache (should be the same for hits and misses)
:<math>E</math> = secondary effects, such as queuing effects in multiprocessor systems

A cache has two primary figures of merit: latency and hit ratio. A number of secondary factors also affect cache performance.<ref name="ajsmith">Alan Jay Smith. "Design of CPU Cache Memories". Proc. IEEE TENCON, 1987. [http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-357.pdf]</ref>

The hit ratio of a cache describes how often a searched-for item is found. More efficient replacement policies track more usage information to improve the hit rate for a given cache size. The latency of a cache describes how long after requesting a desired item the cache can return that item on a hit. Faster replacement strategies typically track less usage information (or, with a direct-mapped cache, no information) to reduce the time required to update that information. Each replacement strategy is a compromise between hit rate and latency.

Hit-rate measurements are typically performed on [[Benchmark (computing)|benchmark]] applications, and the hit ratio varies by application. Video and audio streaming applications often have a hit ratio near zero, because each bit of data in the stream is read once (a compulsory miss), used, and then never read or written again. Many cache algorithms (particularly [[Page replacement algorithm#Least recently used|LRU]]) allow such streaming data to fill the cache, pushing out information which will soon be used again ([[cache pollution]]).<ref>Paul V. Bolotoff. [http://alasir.com/articles/cache_principles/ "Functional Principles of Cache Memory"] {{Webarchive|url=https://web.archive.org/web/20120314200150/http://alasir.com/articles/cache_principles/ |date=14 March 2012 }}. 2007.</ref>

Other factors that may be considered include item size, the time needed to obtain an item, and expiration. Depending on cache size, no further caching algorithm to discard items may be needed. Algorithms also maintain [[cache coherence]] when several caches are used for the same data, such as multiple database servers updating a shared data file.
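The formula above can be illustrated with a small worked example. The numeric figures here (95% hit ratio, 100 ns main-memory access, 2 ns cache latency) are illustrative assumptions, not measurements from the cited source:

```python
# Sketch of the average memory reference time formula: T = m * T_m + T_h + E.
# All latency figures below are assumed for illustration only.

def average_reference_time(miss_ratio, t_miss, t_hit, secondary=0.0):
    """Return T = m * T_m + T_h + E, per the formula in this section."""
    return miss_ratio * t_miss + t_hit + secondary

# Assumed figures: 95% hit ratio, 100 ns memory access, 2 ns cache latency.
m = 1 - 0.95                      # miss ratio = 1 - (hit ratio)
T = average_reference_time(m, 100.0, 2.0)
print(round(T, 3))                # 0.05 * 100 + 2 = 7.0 ns
```

Note how strongly the miss ratio is weighted: because main memory is far slower than the cache, even a small change in hit ratio shifts the average reference time substantially, which is why replacement policies compete primarily on hit rate.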
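The cache-pollution effect described above can be demonstrated with a minimal LRU cache sketch (the class and key names are invented for this example): a one-pass stream of unique keys evicts the resident working set even though the streamed keys are never reused.

```python
from collections import OrderedDict

# Minimal LRU cache used only to demonstrate cache pollution by
# streaming data; not a production implementation.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)         # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True

cache = LRUCache(4)
working_set = ["a", "b", "c", "d"]
for k in working_set:
    cache.access(k)   # compulsory misses; working set now resident
for k in ("s1", "s2", "s3", "s4"):
    cache.access(k)   # streamed once, never reused, yet each evicts a resident key
for k in working_set:
    cache.access(k)   # all misses again: the stream polluted the cache
print(cache.hits, cache.misses)   # 0 hits, 12 misses
```

Under pure LRU every streamed key displaces a working-set key, so the second pass over the working set misses entirely; scan-resistant policies exist precisely to avoid this behavior.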