=== Network models ===
[[Neural network|Networks]] of various sorts play an integral part in many theories of semantic memory. Generally speaking, a network is composed of a set of nodes connected by links. The nodes may represent concepts, words, perceptual features, or nothing at all. The links may be weighted such that some are stronger than others or, equivalently, have a length such that some links take longer to traverse than others. All of these features of networks have been employed in models of semantic memory.

==== Teachable language comprehender ====
One of the first examples of a network model of semantic memory is the teachable language comprehender (TLC).<ref>{{cite journal | last1 = Collins | first1 = A. M. | last2 = Quillian | first2 = M. R. | year = 1969 | title = Retrieval time from semantic memory | journal = Journal of Verbal Learning and Verbal Behavior | volume = 8 | issue = 2| pages = 240–247 | doi=10.1016/s0022-5371(69)80069-1| s2cid = 60922154 }}</ref> In this model, each node is a word representing a concept (like ''bird''). Within each node is stored a set of properties (like "can fly" or "has wings") as well as links to other nodes (like ''chicken''). A node is directly linked to those nodes of which it is either a subclass or a superclass (i.e., ''bird'' would be connected to both ''chicken'' and ''animal''). Properties are stored at the highest category level to which they apply; for example, "is yellow" would be stored with ''canary'', "has wings" would be stored with ''bird'' (one level up), and "can move" would be stored with ''animal'' (another level up). Nodes may also store negations of the properties of their superordinate nodes (i.e., "NOT-can fly" would be stored with ''penguin'').

Processing in TLC is a form of [[spreading activation]].<ref>Collins, A. M. & Quillian, M. R. (1972). How to make a language user. In E. Tulving & W. Donaldson (Eds.), ''Organization of memory'' (pp. 309–351). New York: Academic Press.</ref> When a node becomes active, that activation spreads to other nodes via the links between them. Thus, the time to answer the question "Is a chicken a bird?" is a function of how far the activation must spread between the nodes for ''chicken'' and ''bird'', that is, of the number of links between those nodes.

The original version of TLC did not put weights on the links between nodes. This version performed comparably to humans on many tasks, but failed to predict that people respond faster to questions about more typical category instances than to questions about less typical ones.<ref>{{cite journal | last1 = Rips | first1 = L. J. | last2 = Shoben | first2 = E. J. | last3 = Smith | first3 = F. E. | year = 1973 | title = Semantic distance and the verification of semantic relations | journal = Journal of Verbal Learning and Verbal Behavior | volume = 14 | pages = 665–681 | doi = 10.1016/s0022-5371(73)80056-8 }}</ref> [[Allan M. Collins|Allan Collins]] and Quillian later updated TLC to include weighted connections to account for this effect,<ref>{{cite journal | last1 = Collins | first1 = A. M. | last2 = Loftus | first2 = E. F. | year = 1975 | title = A spreading-activation theory of semantic processing | journal = Psychological Review | volume = 82 | issue = 6| pages = 407–428 | doi=10.1037/0033-295x.82.6.407| s2cid = 14217893 }}</ref> which allowed it to explain both the familiarity effect and the typicality effect. The model's biggest advantage is that it clearly explains [[priming (psychology)|priming]]: information is more likely to be retrieved from memory if related information (the "prime") has been presented a short time before. There remain a number of memory phenomena for which TLC has no account, including why people are able to respond quickly to obviously false questions (like "Is a chicken a meteor?") even though the relevant nodes are very far apart in the network.<ref>{{cite journal | last1 = Glass | first1 = A. L. | last2 = Holyoak | first2 = K. J. | last3 = Kiger | first3 = J. I. | year = 1979 | title = Role of antonymy relations in semantic judgments | journal = Journal of Experimental Psychology: Human Learning & Memory | volume = 5 | issue = 6| pages = 598–606 | doi=10.1037/0278-7393.5.6.598}}</ref>

==== Semantic networks ====
TLC is an instance of a more general class of models known as [[semantic networks]]. In a semantic network, each node is interpreted as representing a specific concept, word, or feature; that is, each node is a symbol. Semantic networks generally do not employ distributed representations for concepts, as may be found in a [[neural network]]. The defining feature of a semantic network is that its links are almost always directed (that is, they point in only one direction, from a base to a target) and come in many different types, each standing for a particular relationship that can hold between two nodes.<ref>Arbib, M. A. (Ed.). (2002). Semantic networks. In ''The Handbook of Brain Theory and Neural Networks'' (2nd ed.). Cambridge, MA: MIT Press.</ref> Semantic networks see the most use in models of [[Discourse analysis|discourse]] and [[logic]]al [[comprehension (logic)|comprehension]], as well as in [[artificial intelligence]].<ref> * {{cite book |last1=Barr |first1=Avron |last2=Feigenbaum |first2=Edward A. |title=The Handbook of artificial intelligence, volume 1 |date=1981 |publisher=HeurisTech Press; William Kaufmann |location=Stanford, CA; Los Altos, CA |isbn=978-0-86576-004-2 |url=https://archive.org/details/handbookofartific01barr/}} * {{cite book |last1=Barr |first1=Avron |last2=Feigenbaum |first2=Edward A. |title=The Handbook of artificial intelligence, volume 2 |date=1982 |publisher=HeurisTech Press; William Kaufmann |location=Stanford, CA; Los Altos, CA |isbn=978-0-86576-006-6 |url=https://archive.org/details/handbookofartific02barr/}} * {{cite book |last1=Cohen |first1=Paul R. |last2=Feigenbaum |first2=Edward A. |title=The Handbook of artificial intelligence, volume 3 |date=1982 |publisher=HeurisTech Press; William Kaufmann |location=Stanford, CA; Los Altos, CA |isbn=978-0-86576-007-3 |url=https://archive.org/details/handbookofartific03cohe}} * {{cite book |last1=Barr |first1=Avron |last2=Cohen |first2=Paul R. |last3=Feigenbaum |first3=Edward A. (Edward Albert) |title=Handbook of artificial intelligence, volume 4 |date=1989 |publisher=Addison Wesley |location=Reading, MA |isbn=978-0-201-51731-6 |url=https://archive.org/details/handbookofartific04barr}}</ref> In these models, the nodes correspond to words or word stems and the links represent syntactic relations between them.<ref>{{cite journal | last1 = Cravo | first1 = M. R. | last2 = Martins | first2 = J. P. | year = 1993 | title = SNePSwD: A newcomer to the SNePS family | journal = Journal of Experimental & Theoretical Artificial Intelligence | volume = 5 | issue = 2–3| pages = 135–148 | doi=10.1080/09528139308953764}}</ref>
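The TLC mechanisms described above (a subclass/superclass hierarchy, properties stored at the highest applicable level, local negations, and response times that grow with the number of links traversed) can be illustrated with a short sketch. This is a minimal, hypothetical rendering, not Collins and Quillian's actual implementation; the node and property names are the illustrative examples from the text.

```python
class Node:
    """A TLC-style concept node: local properties, local negations,
    and a single link to its superclass."""
    def __init__(self, name, parent=None, properties=(), negations=()):
        self.name = name
        self.parent = parent              # superclass link (e.g. canary -> bird)
        self.properties = set(properties)
        self.negations = set(negations)   # e.g. penguin stores "NOT can fly"

def has_property(node, prop):
    """Verify a property by walking up the hierarchy.
    Returns (answer, links_traversed); in the unweighted model,
    more links traversed predicts a slower human response."""
    links = 0
    while node is not None:
        if prop in node.negations:        # a local negation overrides inheritance
            return False, links
        if prop in node.properties:
            return True, links
        node = node.parent
        links += 1
    return False, links

def is_a(node, category):
    """Category verification: activation spreads along superclass links,
    so answer time grows with the distance between the two nodes."""
    links = 0
    while node is not None:
        if node.name == category:
            return True, links
        node = node.parent
        links += 1
    return False, links

# Build the tiny hierarchy from the text: animal -> bird -> {canary, penguin}
animal = Node("animal", properties={"can move"})
bird = Node("bird", parent=animal, properties={"can fly", "has wings"})
canary = Node("canary", parent=bird, properties={"is yellow"})
penguin = Node("penguin", parent=bird, negations={"can fly"})

print(has_property(canary, "is yellow"))  # stored locally: (True, 0)
print(has_property(canary, "can fly"))    # inherited from bird: (True, 1)
print(has_property(canary, "can move"))   # inherited from animal: (True, 2)
print(has_property(penguin, "can fly"))   # blocked by local negation: (False, 0)
print(is_a(canary, "animal"))             # two links up: (True, 2)
```

Note that this sketch reproduces the unweighted model's limitation: link count alone cannot explain typicality effects, which is what motivated the later weighted version.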
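The defining feature of a semantic network, directed links of many different types, can likewise be sketched in a few lines. The relation names here ("isa", "has-part") are illustrative assumptions, not drawn from any particular system such as SNePS.

```python
from collections import defaultdict

class SemanticNetwork:
    """Each node is a symbol; each link is directed (base -> target)
    and carries a relation type."""
    def __init__(self):
        self.edges = defaultdict(list)    # base -> [(relation, target), ...]

    def add_link(self, base, relation, target):
        self.edges[base].append((relation, target))

    def targets(self, base, relation):
        """Follow only the outgoing links of the given type."""
        return [t for r, t in self.edges[base] if r == relation]

net = SemanticNetwork()
# The relation type constrains what each connection means.
net.add_link("chicken", "isa", "bird")
net.add_link("bird", "isa", "animal")
net.add_link("bird", "has-part", "wings")

print(net.targets("chicken", "isa"))    # ['bird']
print(net.targets("bird", "has-part"))  # ['wings']
print(net.targets("wings", "isa"))      # [] -- links are directed, not symmetric
```

Because the links are directed, traversal from ''wings'' back to ''bird'' finds nothing unless an inverse link is added explicitly, which is precisely what distinguishes these models from undirected association networks.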