====Mitigations====
{{update section|reason=Everything in the list has been implemented in ordinary desktop computers to some degree, often extensively. NUMA has been common on multichip-module workstation processors for years now. Are they helping this problem? Has the problem space changed?|date=April 2025}}

There are several known methods for mitigating the von Neumann performance bottleneck. For example, the following all can improve performance:{{why|date=November 2015}}
* Providing a [[CPU cache|cache]] between the CPU and the [[main memory]].
* Providing separate caches or separate access paths for data and instructions (the so-called [[Modified Harvard architecture]]).
* Using [[branch predictor]] algorithms and logic.
* Providing a limited CPU stack or other on-chip [[scratchpad memory]] to reduce memory accesses.
* Implementing the CPU and the [[memory hierarchy]] as a [[System on a chip|system on chip]], providing greater [[locality of reference]] and thus reducing latency and increasing throughput between [[processor register]]s and main memory.

The problem can also be sidestepped somewhat by using [[parallel computing]], for example with the [[non-uniform memory access]] (NUMA) architecture; this approach is commonly employed by [[supercomputer]]s.

It is less clear whether the ''intellectual bottleneck'' that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence.{{citation needed|date=December 2010}} Modern [[functional programming]] and [[object-oriented programming]] are much less geared towards "pushing vast numbers of words back and forth"{{how?|reason=Objects are just vast numbers of words arranged into clear structures. Functional languages potentially push even more data around since the functions themselves may be passed around as parameters and they'll eventually be working on data|date=April 2025}} than earlier languages like [[FORTRAN]] were, but internally, that is still what computers spend much of their time doing, even on highly parallel supercomputers.{{citation needed|date=April 2025}}
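How much a cache mitigates the bottleneck depends on the [[locality of reference]] of the running program. The following C sketch is purely illustrative (the array size and the exact behaviour depend on the particular cache): it sums the same two-dimensional array in two different orders. The row-major loop revisits each fetched cache line and is served mostly from the cache, while the column-major loop touches a new cache line on almost every access and spends most of its time waiting on main memory.

<syntaxhighlight lang="c">
#include <stddef.h>

#define N 1024

/* Row-major traversal: consecutive accesses fall in the same cache line,
   so most loads are served by the cache rather than main memory. */
long sum_row_major(const int a[N][N]) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data: each access lands in a
   different cache line (each row is 4 KiB apart), so the CPU repeatedly
   stalls on the memory bus -- the von Neumann bottleneck in miniature. */
long sum_col_major(const int a[N][N]) {
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}
</syntaxhighlight>

Both functions perform the same number of arithmetic operations; any difference in running time comes entirely from how well each access pattern lets the cache hide the latency of main memory.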