==Dynamical systems==
In the traditional [[Computer Simulation|computational approach]], [[Mental representation|representations]] are viewed as static structures of discrete [[Symbolic system|symbols]]. [[Cognition]] takes place by transforming static symbol structures in [[Discrete time|discrete]], sequential steps. [[Sensory system|Sensory]] information is transformed into symbolic inputs, which produce symbolic outputs that get transformed into [[Motor system|motor]] outputs. The entire system operates in an ongoing cycle. What is missing from this traditional view is that human cognition happens [[Continuous function|continuously]] and in real time. Breaking the processes down into discrete time steps may not fully [[Mathematical model|capture]] this behavior.

An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set or [[State space (dynamical system)|state space]], representing the totality of overall states the system could be in.<ref>van Gelder, T. (1998). [https://static.cambridge.org/resource/id/urn:cambridge.org:id:binary:20170214115306027-0503:S0140525X98311735:S0140525X9800173Xa.pdf The dynamical hypothesis in cognitive science] {{Webarchive|url=https://web.archive.org/web/20180701164843/https://static.cambridge.org/resource/id/urn:cambridge.org:id:binary:20170214115306027-0503:S0140525X98311735:S0140525X9800173Xa.pdf |date=2018-07-01 }}. Behavioral and Brain Sciences, 21, 615-665.</ref> The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.<ref>van Gelder, T. & Port, R. F. (1995). [http://cogsci.ucd.ie/connectionism/Articles/vanGelderPort.pdf It's about time: An overview of the dynamical approach to cognition] {{Webarchive|url=https://web.archive.org/web/20171117141023/http://cogsci.ucd.ie/connectionism/Articles/vanGelderPort.pdf |date=2017-11-17 }}. In R.F. Port and T. van Gelder (Eds.), Mind as motion: Explorations in the Dynamics of Cognition (pp. 1-43). Cambridge, Massachusetts: MIT Press.</ref>

A typical [[Dynamical system|dynamical]] model is [[Formalism (mathematics)|formalized]] by several [[differential equation]]s that describe how the system's state changes over time. Explanatory force is carried by the form of the space of possible [[Orbit (dynamics)|trajectories]] and by the internal and external forces that shape a particular trajectory as it unfolds over time, rather than by the physical nature of the underlying [[Mechanism (biology)|mechanisms]] that implement these dynamics. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
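A minimal sketch of this idea, using a toy one-dimensional system (the equation and parameter values are illustrative and not drawn from the cited literature), shows how the state evolves continuously under a differential equation, and how a parametric input reshapes the intrinsic dynamics (shifting the attractor the state relaxes toward) rather than specifying the state directly:

<syntaxhighlight lang="python">
import numpy as np

def simulate(x0, input_param, dt=0.01, steps=1000):
    """Euler integration of a toy one-dimensional dynamical system:
    dx/dt = -x + tanh(input_param)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        dxdt = -x + np.tanh(input_param)  # rate of change depends on the current state
        x = x + dt * dxdt                 # the state evolves continuously in time
        trajectory.append(x)
    return np.array(trajectory)

# The same initial state relaxes toward different attractors,
# depending on the parametric input.
print(simulate(1.0, input_param=0.0)[-1])  # approaches 0
print(simulate(1.0, input_param=2.0)[-1])  # approaches tanh(2), roughly 0.96
</syntaxhighlight>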
===Early dynamical systems===

====Associative memory====
Early work in the application of dynamical systems to cognition can be found in the model of [[Hopfield network]]s.<ref>Hopfield, J. J. (1982). [http://www.pnas.org/content/pnas/79/8/2554.full.pdf Neural networks and physical systems with emergent collective computational abilities]. PNAS, 79, 2554-2558.</ref><ref>Hopfield, J. J. (1984). [http://www.pnas.org/content/pnas/81/10/3088.full.pdf Neurons with graded response have collective computational properties like those of two-state neurons]. PNAS, 81, 3088-3092.</ref> These networks were proposed as a model for [[associative memory (psychology)|associative memory]]. They represent the neural level of [[memory]], modeling systems of around 30 neurons, each of which can be in either an on or an off state. By letting the [[Neural network|network]] learn on its own, structure and computational properties naturally arise. Unlike previous models, "memories" can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled with [[Euclidean vector|vectors]] whose values change, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
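A minimal Hopfield-style sketch (with illustrative sizes and random patterns, not the simulations from the cited papers) shows the core mechanism: pairwise weights store patterns in a Hebbian fashion, and repeated state updates recall a stored memory from a partial or corrupted cue:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 30                                        # a network of about 30 two-state neurons
patterns = rng.choice([-1, 1], size=(2, n))   # the "memories" to be stored

# Hebbian storage: the weight between two units accumulates their
# correlation across the stored patterns.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, sweeps=10):
    """Let the network settle by repeatedly updating units from the cued state."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):          # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Recall a stored memory from a corrupted cue (only part of the memory is intact).
cue = patterns[0].copy()
cue[: n // 2] = rng.choice([-1, 1], size=n // 2)
print(np.mean(recall(cue) == patterns[0]))    # fraction of units recovered, typically 1.0
</syntaxhighlight>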
====Language acquisition====
By taking into account the [[Evolutionary developmental biology|evolutionary development]] of the human [[nervous system]] and the similarity of the [[brain]] to other organs, [[Jeffrey Elman|Elman]] proposed that [[language]] and cognition should be treated as a dynamical system rather than a digital symbol processor.<ref>Elman, J. L. (1995). [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.7356&rep=rep1&type=pdf Language as a dynamical system]. In R.F. Port and T. van Gelder (Eds.), Mind as motion: Explorations in the Dynamics of Cognition (pp. 195-223). Cambridge, Massachusetts: MIT Press.</ref> Neural networks of the type Elman implemented have come to be known as [[Recurrent neural networks|Elman networks]]. Instead of treating language as a collection of static [[Lexicon|lexical]] items and [[grammar]] rules that are learned and then used according to fixed rules, the dynamical systems view defines the [[lexicon]] as regions of state space within a dynamical system. Grammar is made up of [[attractor]]s and repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.<ref>Elman, J. L. (1991). [https://link.springer.com/content/pdf/10.1007/BF00114844.pdf Distributed representations, simple recurrent networks, and grammatical structure]. Machine Learning, 7, 195-225.</ref>
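The sketch below illustrates the architecture of such a simple recurrent (Elman-style) network with untrained, randomly initialized weights; the toy vocabulary and layer sizes are illustrative only. Each hidden state depends on the current word and on the previous hidden state, so processing a sentence traces a trajectory through state space, and the output at each step is a distribution over possible next words:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
vocab = ["boy", "dog", "sees", "runs", "."]   # toy lexicon, for illustration only
V, H = len(vocab), 8                          # vocabulary size and hidden-layer size

# Weights of a simple recurrent (Elman-style) network; untrained here.
W_in = rng.normal(0, 0.5, (H, V))    # input word -> hidden layer
W_rec = rng.normal(0, 0.5, (H, H))   # context (previous hidden state) -> hidden layer
W_out = rng.normal(0, 0.5, (V, H))   # hidden layer -> next-word prediction

def step(word_index, h_prev):
    """One time step: the hidden state is a point on a trajectory in state space."""
    x = np.zeros(V)
    x[word_index] = 1.0                             # one-hot encoding of the current word
    h = np.tanh(W_in @ x + W_rec @ h_prev)          # new state depends on the context
    logits = W_out @ h
    p_next = np.exp(logits) / np.exp(logits).sum()  # predicted next-word distribution
    return h, p_next

h = np.zeros(H)
for word in ["boy", "sees", "dog"]:
    h, p_next = step(vocab.index(word), h)
print(dict(zip(vocab, p_next.round(2))))            # prediction after "boy sees dog"
</syntaxhighlight>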
====Cognitive development====
A classic developmental error has been investigated in the context of dynamical systems:<ref>Spencer, J. P., Smith, L. B., & Thelen, E. (2001). Tests of a dynamic systems account of the A-not-B error: The influence of prior experience on the spatial memory abilities of two-year-olds. Child Development, 72(5), 1327-1346.</ref><ref name="Thelen">Thelen, E., Schöner, G., Scheier, C., & Smith, L. B. (2001). [https://static.cambridge.org/resource/id/urn:cambridge.org:id:binary:20170214114618504-0222:S0140525X01223917:S0140525X01003910a.pdf The dynamics of embodiment: A field theory of infant perseverative reaching] {{Webarchive|url=https://web.archive.org/web/20180701111727/https://static.cambridge.org/resource/id/urn:cambridge.org:id:binary:20170214114618504-0222:S0140525X01223917:S0140525X01003910a.pdf |date=2018-07-01 }}. Behavioral and Brain Sciences, 24, 1-86.</ref> The [[A-not-B error]] is proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown the toy being hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of the locations in the sandbox, so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions changes over time.<ref name="Thelen" />

====Locomotion====
One proposed mechanism of a dynamical system comes from the analysis of continuous-time [[recurrent neural networks]] (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron [[central pattern generator]] (CPG) can be used to represent systems such as leg movements during walking.<ref>Chiel, H. J., Beer, R. D., & Gallagher, J. C. (1999). Evolution and analysis of model CPGs for walking. Journal of Computational Neuroscience, 7, 99-118.</ref> This CPG contains three [[motor neuron]]s to control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate [[torque]] in the leg joint. One feature of this pattern is that neuron outputs are either [[Binary number|off or on]] most of the time. Another feature is that the states are quasi-stable, meaning that they eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
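A minimal CTRNN sketch is given below; the weights, biases, and time constants are placeholders rather than the evolved parameters from the cited work. It integrates the standard CTRNN equation for three fully interconnected neurons and records their outputs over time:

<syntaxhighlight lang="python">
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))        # logistic output of each neuron

# Three fully interconnected neurons (foot, backward swing, forward swing).
# All parameter values below are illustrative placeholders.
W = np.array([[  5.0, -10.0,  -5.0],
              [ 10.0,   5.0, -10.0],
              [ -5.0,  10.0,   5.0]])      # connection weights
theta = np.array([-2.5, -5.0, -2.5])       # biases
tau = np.array([0.5, 0.5, 0.5])            # time constants
I_ext = np.zeros(3)                        # external inputs

def simulate(y0, dt=0.01, steps=2000):
    """Euler integration of tau_i dy_i/dt = -y_i + sum_j W_ij sigma(y_j + theta_j) + I_i."""
    y = np.array(y0, dtype=float)
    outputs = []
    for _ in range(steps):
        o = sigma(y + theta)                       # neuron outputs, mostly near 0 or 1
        y = y + dt * (-y + W @ o + I_ext) / tau    # each state's change depends on the others
        outputs.append(o.copy())
    return np.array(outputs)

out = simulate(y0=[0.1, 0.0, -0.1])
print(out[-5:].round(2))   # recent outputs, which tend to saturate near 0 or 1
</syntaxhighlight>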
===Modern dynamical systems===

====Behavioral dynamics====
Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as "behavioral dynamics",<ref name="Warren">Warren, W. H. (2006). [http://rci.rutgers.edu/~persci/speakers/Warren_P%26A_PR06.pdf The dynamics of perception and action] {{Webarchive|url=https://web.archive.org/web/20170918115329/http://rci.rutgers.edu/~persci/speakers/Warren_P%26A_PR06.pdf |date=2017-09-18 }}. Psychological Review, 113(2), 358-389. doi: 10.1037/0033-295X.113.2.358</ref> treats the [[Intelligent agent|agent]] and the environment as a pair of [[Coupling (physics)|coupled]] dynamical systems based on classical dynamical systems theory. In this formalization, information from the [[Environment (systems)|environment]] informs the agent's behavior, and the agent's actions modify the environment. In the specific case of [[Motor cognition|perception-action cycles]], the coupling of the environment and the agent is formalized by two [[Function (mathematics)|functions]]. The first transforms the representation of the agent's action into specific patterns of muscle activation that in turn produce forces in the environment. The second transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous system, the agent's body, and the environment are coupled together.<ref>Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4(3), 91-99.</ref><ref>Beer, R. D. (2003). [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.3706&rep=rep1&type=pdf The dynamics of active categorical perception in an evolved model agent]. Adaptive Behavior, 11(4), 209-243. doi: 10.1177/1059712303114001</ref>

=====Adaptive behaviors=====
Behavioral dynamics have been applied to locomotor behavior.<ref name="Warren" /><ref>Fajen, B. R., & Warren, W. H. (2003). [http://www.rc.unesp.br/ib/e_fisica/aplab/obstacle%20avoidance.pdf Behavioral dynamics of steering, obstacle avoidance, and route selection]. Journal of Experimental Psychology: Human Perception and Performance, 29, 343-362.</ref><ref>Fajen, B. R., Warren, W. H., Temizer, S., & Kaelbling, L. P. (2003). [http://cs.ait.ac.th/~mdailey/cvreadings/Fajen-Dynamical.pdf A dynamical model of visually-guided steering, obstacle avoidance, and route selection]. International Journal of Computer Vision, 54, 15-34.</ref> Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors can arise from the interactions of an agent and its environment. According to this framework, adaptive behaviors can be captured at two levels of analysis. At the first level, that of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information the environment provides. Behavioral dynamics thus emerge from the agent-environment interaction. At the second level, that of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than being determined by the structure of either the agent or the environment.
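The sketch below illustrates this coupling with simplified target-approach dynamics (the equations and constants are illustrative and are not the published steering model): the environment supplies information (the direction of a goal), the agent's heading dynamics are attracted toward that direction, and the agent's locomotion in turn changes its position and hence the information available at the next moment. The goal acts as an attractor of the resulting behavioral dynamics:

<syntaxhighlight lang="python">
import numpy as np

b, k = 3.0, 8.0                    # damping and attraction constants (illustrative)
goal = np.array([5.0, 5.0])        # goal location in the environment
pos = np.array([0.0, 0.0])         # agent position
phi, dphi = 0.0, 0.0               # agent heading and turning rate
speed, dt = 1.0, 0.01

for _ in range(3000):
    if np.linalg.norm(goal - pos) < 0.1:
        break                                         # a stable behavioral outcome: goal reached
    # Information function: environment -> agent (direction of the goal).
    psi_g = np.arctan2(goal[1] - pos[1], goal[0] - pos[0])
    # Agent dynamics: heading is attracted toward the goal direction
    # (sin keeps the angular error well-behaved across the +/- pi boundary).
    ddphi = -b * dphi - k * np.sin(phi - psi_g)
    dphi += dt * ddphi
    phi += dt * dphi
    # Action function: agent -> environment (locomotion changes the agent's position,
    # and hence the information available on the next step).
    pos = pos + dt * speed * np.array([np.cos(phi), np.sin(phi)])

print(np.round(np.linalg.norm(goal - pos), 2))        # remaining distance to the goal
</syntaxhighlight>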
====Open dynamical systems====
In an extension of classical [[dynamical systems theory]],<ref>Hotton, S., & Yoshimi, J. (2010). The dynamics of embodied cognition. International Journal of Bifurcation and Chaos, 20(4), 943-972. {{doi|10.1142/S0218127410026241}}</ref> rather than coupling the environment's and the agent's dynamical systems to each other, an "open dynamical system" defines a "total system", an "agent system", and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the classical agent system can be viewed as the agent system of an open dynamical system, and the agent together with the environment it is coupled to can be viewed as the total system of an open dynamical system.

=====Embodied cognition=====
In the context of dynamical systems and [[embodied cognition]], representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples:<ref>Hotton, S., & Yoshimi, J. (2011). [https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1551-6709.2010.01151.x Extending dynamical systems theory to model embodied cognition]. Cognitive Science, 35, 444-479. doi: 10.1111/j.1551-6709.2010.01151.x</ref>
# Instances where the environment and agent must work together to achieve a goal, referred to as "intimacy". A classic example of intimacy is the behavior of simple agents working to achieve a goal (e.g., insects traversing the environment). The successful completion of the goal relies fully on the coupling of the agent to the environment.<ref>Haugeland, J. (1996). [https://philpapers.org/rec/HAUMEA Mind embodied and embedded]. In J. Haugeland (Ed.), Having thought: Essays in the metaphysics of mind (pp. 207-237). Cambridge, Massachusetts: Harvard University Press.</ref>
# Instances where the use of external artifacts improves the performance of tasks relative to performance without these artifacts. This process is referred to as "offloading". A classic example of offloading is the behavior of [[Scrabble]] players; people are able to create more words when playing Scrabble if they have the tiles in front of them and are allowed to physically manipulate their arrangement. In this example, the Scrabble tiles allow the agent to offload [[working memory]] demands onto the tiles themselves.<ref>Maglio, P., Matlock, T., Raphaely, D., Chernicky, B., & Kirsh, D. (1999). [https://www.researchgate.net/profile/David_Kirsh/publication/2518995_Interactive_Skill_in_Scrabble/links/0912f51230e9fea0bb000000/Interactive-Skill-in-Scrabble.pdf Interactive skill in Scrabble]. In M. Hahn & S. C. Stoness (Eds.), Proceedings of the twenty-first annual conference of the Cognitive Science Society (pp. 326-330). Mahwah, NJ: Lawrence Erlbaum Associates.</ref>
# Instances where a functionally equivalent external artifact replaces functions that are normally performed internally by the agent, a special case of offloading. One famous example is that of human navigation (specifically, the agents Otto and Inga) in a complex environment with or without the assistance of an artifact.<ref>Clark, A., & Chalmers, D. (1998). [https://www.era.lib.ed.ac.uk/bitstream/handle/1842/1312/TheExtendedMind.pdf?sequence=1&isAllowed=y The extended mind]. Analysis, 58(1), 7-19.</ref>
# Instances where there is not a single agent, but the individual agent is part of a larger system that contains multiple agents and multiple artifacts. One famous example, formulated by [[Edwin Hutchins|Ed Hutchins]] in his book ''Cognition in the Wild'', is that of navigating a naval ship.<ref>Hutchins, E. (1995). [https://books.google.com/books?id=CGIaNc3F1MgC Cognition in the wild]. Cambridge, Massachusetts: MIT Press.</ref>

The interpretations of these examples rely on the following [[Reason|logic]]: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.