Memory-prediction framework
== The basic theory: recognition and prediction in bi-directional hierarchies ==

The central concept of the memory-prediction framework is that bottom-up inputs are matched in a [[hierarchy]] of [[Recall (memory)|recognition]], and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate [[prediction]]s of subsequent expected inputs.

Each hierarchy level remembers frequently observed temporal sequences of input patterns and generates labels or 'names' for these sequences. When an input sequence matches a memorized sequence at a given level of the hierarchy, a label or 'name' is propagated up the hierarchy, eliminating details at higher levels and enabling them to learn higher-order sequences. This process produces increased invariance at higher levels. Higher levels predict future input by matching partial sequences and projecting their expectations to the lower levels. However, when a mismatch between input and memorized or predicted sequences occurs, a more complete representation propagates upwards. This causes alternative 'interpretations' to be activated at higher levels, which in turn generate other predictions at lower levels.

Consider, for example, the process of [[Visual perception|vision]]. Bottom-up information starts as low-level [[retina]]l signals (indicating the presence of simple visual elements and contrasts). At higher levels of the hierarchy, increasingly meaningful information is extracted, regarding the presence of [[wikt:line|line]]s, [[region]]s, [[motion (physics)|motion]]s, etc. Even further up the hierarchy, activity corresponds to the presence of specific objects, and then to the behaviours of these objects. Top-down information fills in details about the recognized objects, and also about their expected behaviour as time progresses.

The sensory hierarchy induces a number of differences between the various levels.
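The sequence-naming idea above can be sketched in code. This is a minimal illustrative sketch, not Hawkins's actual model: one hierarchy level memorizes temporal sequences under labels, propagates only the compact 'name' upward on a match, and propagates the full detail upward on a mismatch (all class and label names here are hypothetical).

```python
# Illustrative sketch of one level in a memory-prediction hierarchy.
# On a match, only the invariant 'name' is sent upward (details dropped);
# on a mismatch, the more complete representation propagates upward so
# that higher levels can activate alternative interpretations.

class SequenceLevel:
    def __init__(self):
        self.names = {}  # memorized sequence (tuple) -> label

    def learn(self, sequence, name):
        """Memorize a frequently observed sequence under a label."""
        self.names[tuple(sequence)] = name

    def process(self, sequence):
        """Return what this level sends up the hierarchy."""
        key = tuple(sequence)
        if key in self.names:
            return ("name", self.names[key])   # match: compact label only
        return ("detail", key)                 # mismatch: full representation

level = SequenceLevel()
level.learn(["edge", "corner", "edge"], "square-outline")
print(level.process(["edge", "corner", "edge"]))  # ('name', 'square-outline')
print(level.process(["edge", "curve"]))           # ('detail', ('edge', 'curve'))
```

Because higher levels receive only names for familiar sequences, they operate on increasingly invariant, abstract tokens, which is the source of the increased invariance described above.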
As one moves up the hierarchy, [[Knowledge representation|representations]] have increased:
* Extent – for example, larger areas of the visual field, or more extensive tactile regions.
* Temporal stability – lower-level entities change quickly, whereas higher-level percepts tend to be more stable.
* [[Abstraction]] – through the process of successive extraction of invariant features, increasingly abstract entities are recognized.

The relationship between sensory and motor processing is an important aspect of the basic theory. It is proposed that the motor areas of the [[Cerebral cortex|cortex]] consist of a behavioural hierarchy similar to the sensory hierarchy, with the lowest levels consisting of explicit motor commands to musculature and the highest levels corresponding to abstract prescriptions (e.g. 'resize the browser'). The sensory and motor hierarchies are tightly coupled, with behaviour giving rise to sensory expectations and sensory [[perception]]s driving motor processes.

Finally, it is important to note that all the memories in the cortical hierarchy have to be learnt; this information is not pre-wired in the brain. Hence, the process of extracting this [[Knowledge representation|representation]] from the flow of inputs and behaviours is theorized as a process that happens continually during [[cognition]].

=== Other terms ===
Hawkins has extensive training as an electrical engineer. Another way to describe the theory (hinted at in his book) is as a [[Computational learning theory|learning]] [[hierarchy]] of [[Feed forward (control)|feed-forward]] [[stochastic]] [[state machine]]s. In this view, the brain is analyzed as an encoding problem, not too dissimilar from future-predicting error-correction codes. The hierarchy is one of [[abstraction]], with the states of the higher-level machines representing more abstract conditions or events, and these states predisposing lower-level machines to perform certain transitions.
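The idea of a higher-level state predisposing a lower-level machine toward certain transitions can be sketched as follows. This is a toy illustration under assumed structure (the machines, states, and bias mechanism are invented for the example, not taken from the book): a higher-level state reweights the lower machine's transition probabilities.

```python
# Toy sketch: a stochastic state machine whose transitions can be biased
# by a higher-level state, illustrating 'states predisposing lower-level
# machines to perform certain transitions'.

import random

class Machine:
    def __init__(self, transitions):
        # transitions: state -> {next_state: probability}
        self.transitions = transitions
        self.state = next(iter(transitions))

    def step(self, bias=None):
        """Advance one transition; `bias` multiplies selected probabilities."""
        probs = dict(self.transitions[self.state])
        if bias:
            for s, w in bias.items():
                if s in probs:
                    probs[s] *= w
        r = random.uniform(0, sum(probs.values()))
        for s, p in probs.items():
            r -= p
            if r <= 0:
                break
        self.state = s  # falls through to the last option on float error
        return self.state

# Lower-level machine over simple visual events.
low = Machine({"edge": {"corner": 0.5, "curve": 0.5},
               "corner": {"edge": 1.0},
               "curve": {"edge": 1.0}})

# A hypothetical higher-level state ('expecting a square') biases the
# lower machine strongly toward corner transitions rather than curves.
random.seed(0)
print(low.step(bias={"corner": 10.0}))  # with this seed, selects 'corner'
```

Without the bias, 'corner' and 'curve' are equally likely; the top-down state shifts the lower machine's expectations without dictating its transition outright.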
The lower-level machines model limited domains of experience, or control or interpret sensors or effectors. The whole system actually controls the organism's behaviour. Since the state machines are "feed forward", the organism responds to future events predicted from past data. Since the system is hierarchical, it exhibits behavioural flexibility, easily producing new sequences of behaviour in response to new sensory data. Since the system learns, the new behaviour adapts to changing conditions. That is, the evolutionary purpose of the brain is to predict the future, in admittedly limited ways, so as to change it.
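The "feed forward" aspect, responding to a predicted future event rather than waiting for it, can be sketched with a simple learned transition table. This is an illustrative sketch only; the event names and counting scheme are assumptions, not part of the theory's specification.

```python
# Sketch of feed-forward prediction: learn transition counts from past
# sequences, then respond to the most likely *next* event given the
# current one, i.e. act on the predicted future rather than the present.

from collections import Counter, defaultdict

class Predictor:
    def __init__(self):
        self.counts = defaultdict(Counter)  # event -> Counter of successors

    def learn(self, sequence):
        """Accumulate observed successor counts from a past sequence."""
        for a, b in zip(sequence, sequence[1:]):
            self.counts[a][b] += 1

    def predict(self, event):
        """Most frequent successor of `event` seen so far, or None."""
        nxt = self.counts[event]
        return nxt.most_common(1)[0][0] if nxt else None

p = Predictor()
p.learn(["dark", "rustle", "startle"])
p.learn(["dark", "rustle", "startle"])
p.learn(["dark", "quiet", "relax"])
print(p.predict("rustle"))  # 'startle' - the response is triggered by prediction
```

Because prediction is driven by accumulated past data, new experience reshapes the counts and hence the behaviour, matching the claim that the learned behaviour adapts to changing conditions.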