{{Short description|Computer memory architecture}}
[[File:ComputerMemoryHierarchy.svg|thumb|300px|Diagram of the computer memory hierarchy]]
{{Memory types}}
{{distinguish|Learning pyramid}}

In [[computer architecture]], the '''memory hierarchy''' separates [[computer storage]] into a hierarchy based on [[Response time (technology)|response time]]. Since response time, [[Computational complexity|complexity]], and [[Computer data storage|capacity]] are related, the levels may also be distinguished by their [[Computer performance|performance]] and controlling technologies.<ref name="toyzee" /> The memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower-level [[computer programming|programming]] constructs involving [[locality of reference]].

Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories {{math|(''m''<sub>1</sub>, ''m''<sub>2</sub>, ..., ''m<sub>n</sub>'')}} in which each member {{mvar|m<sub>i</sub>}} is typically smaller and faster than the next highest member {{math|''m''<sub>''i''+1</sub>}} of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer.

There are four major storage levels.<ref name="toyzee">{{cite book |last1=Toy |first1=Wing |last2=Zee |first2=Benjamin |title=Computer Hardware/Software Architecture |year=1986 |publisher=Prentice Hall |isbn=0-13-163502-6 |page=[https://archive.org/details/computerhardware0000toyw/page/30 30] |url=https://archive.org/details/computerhardware0000toyw/page/30 }}</ref>
* ''Internal''{{dash}}[[processor register]]s and [[CPU cache|cache]].
* ''Main''{{dash}}the system [[Random-access memory|RAM]] and controller cards.
* ''On-line mass storage''{{dash}}secondary storage.
* ''Off-line bulk storage''{{dash}}tertiary and off-line storage.

This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for [[virtual memory]] when designing a [[computer architecture]], and one can include a level of [[nearline storage]] between online and offline storage.
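The influence of locality of reference on the memory hierarchy can be illustrated by the order in which a program traverses a large array. The following C sketch is illustrative only: the matrix size <code>N</code> and the helper functions <code>row_major_sum</code> and <code>col_major_sum</code> are names chosen for this example rather than taken from any referenced source. Both loops perform the same additions, but the row-major loop visits elements in the order they are laid out in memory and is served mostly from the faster cache levels, while the column-major loop strides through memory and draws far more data from the slower main-memory level.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096  /* illustrative size; the matrix occupies about 128 MB */

/* Sum the matrix in row-major order: consecutive accesses fall in the
   same cache line, so most reads are satisfied by the upper levels of
   the memory hierarchy. */
static double row_major_sum(const double *a)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i * N + j];   /* sequential access */
    return s;
}

/* Sum the same matrix in column-major order: each access strides over
   an entire row, causing frequent cache misses and more traffic to the
   slower main-memory level. */
static double col_major_sum(const double *a)
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i * N + j];   /* strided access, poor locality */
    return s;
}

int main(void)
{
    double *a = malloc((size_t)N * N * sizeof *a);
    if (a == NULL)
        return 1;
    for (size_t k = 0; k < (size_t)N * N; k++)
        a[k] = 1.0;

    clock_t t0 = clock();
    double s1 = row_major_sum(a);
    clock_t t1 = clock();
    double s2 = col_major_sum(a);
    clock_t t2 = clock();

    printf("row-major: sum=%.0f  %.3f s\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("col-major: sum=%.0f  %.3f s\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);

    free(a);
    return 0;
}
</syntaxhighlight>

On typical hardware the row-major traversal usually completes noticeably faster even though both loops execute the same number of arithmetic operations; the difference comes from which level of the memory hierarchy services each access.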