===Statistical models===
Some models characterize the acquisition of semantic information as a form of [[statistical inference]] from a set of discrete experiences, distributed across a number of [[Context (language use)|contexts]]. Though these models differ in specifics, they generally employ an (Item × Context) [[matrix (mathematics)|matrix]] where each cell records the number of times an item in memory has occurred in a given context. Semantic information is gleaned by performing a statistical analysis of this matrix. Many of these models bear similarity to the algorithms used in [[search engines]], though it is not yet clear whether human semantic memory actually employs the same computational mechanisms.<ref>{{cite journal | last1 = Griffiths | first1 = T. L. | last2 = Steyvers | first2 = M. | last3 = Firl | first3 = A. | year = 2007 | title = Google and the mind: Predicting fluency with PageRank | journal = Psychological Science | volume = 18 | issue = 12| pages = 1069–1076 | doi=10.1111/j.1467-9280.2007.02027.x| pmid = 18031414 | s2cid = 12063124 }}</ref><ref>Anderson, J. R. (1990). ''The adaptive character of thought''. Hillsdale, NJ: Lawrence Erlbaum Associates.</ref>

====Latent semantic analysis====
One of the more popular models is [[latent semantic analysis]] (LSA).<ref>{{cite journal | last1 = Landauer | first1 = T. K. | last2 = Dumais | first2 = S. T. | year = 1997 | title = A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge | journal = Psychological Review | volume = 104 | issue = 2| pages = 211–240 | doi=10.1037/0033-295x.104.2.211| citeseerx = 10.1.1.184.4759 | s2cid = 1144461 }}</ref> In LSA, a T × D [[matrix (mathematics)|matrix]] is constructed from a [[text corpus]], where T is the number of terms in the corpus and D is the number of documents (here "context" is interpreted as "document", and only words—or word phrases—are considered as items in memory).
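As an illustration, the term-by-document count matrix that LSA starts from can be built in a few lines of Python. This is a minimal sketch using a toy three-document corpus; all names here are illustrative, not part of any particular LSA implementation.

```python
# Sketch of the (Item x Context) count matrix described above, where
# "context" = "document" as in LSA. Toy corpus for illustration only.
from collections import Counter

documents = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]

# Vocabulary: one row per term (T rows), one column per document (D columns).
terms = sorted({w for doc in documents for w in doc.split()})
row = {t: i for i, t in enumerate(terms)}

# M[t][d] = number of times term t occurs in document d.
M = [[0] * len(documents) for _ in terms]
for d, doc in enumerate(documents):
    for word, count in Counter(doc.split()).items():
        M[row[word]][d] = count

print(M[row["sat"]])  # "sat" occurs once in each of the first two documents -> [1, 1, 0]
print(M[row["the"]])  # "the" occurs twice in each of the first two documents -> [2, 2, 0]
```

Each row of this matrix is the "item vector" whose total appears in the normalization of the transform described in the text.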
Each cell in the matrix is then transformed according to the equation:

: <math>\mathbf{M}_{t,d}'=\frac{\ln{(1 + \mathbf{M}_{t,d})}}{-\sum_{i=0}^D P(i|t) \ln{P(i|t)}}</math>

where <math>P(i|t)</math> is the probability that context <math>i</math> is active, given that item <math>t</math> has occurred (this is obtained simply by dividing the raw frequency, <math>\mathbf{M}_{t,d}</math>, by the total of the item vector, <math>\sum_{i=0}^D \mathbf{M}_{t,i}</math>).

====Hyperspace Analogue to Language (HAL)====
The Hyperspace Analogue to Language (HAL) model<ref>Lund, K., Burgess, C., & Atchley, R. A. (1995). Semantic and associative priming in a high-dimensional semantic space. ''Cognitive Science Proceedings (LEA)'', 660–665.</ref><ref>{{cite journal | last1 = Lund | first1 = K. | last2 = Burgess | first2 = C. | year = 1996 | title = Producing high-dimensional semantic spaces from lexical co-occurrence | journal = Behavior Research Methods, Instruments, and Computers | volume = 28 | issue = 2| pages = 203–208 | doi=10.3758/bf03204766| doi-access = free }}</ref> considers context only as the words that immediately surround a given word. HAL computes an N × N matrix, where N is the number of words in its lexicon, using a 10-word reading frame that moves incrementally through a corpus of text. As in SAM, any time two words are simultaneously in the frame, the association between them is strengthened; that is, the corresponding cell in the N × N matrix is incremented. The greater the distance between the two words, the smaller the amount by which the association is incremented (specifically, <math>\Delta=11-d</math>, where <math>d</math> is the distance between the two words in the frame).
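The moving-frame counting scheme above can be sketched as follows. This is a toy illustration assuming a simple look-ahead window and the <math>\Delta = 11 - d</math> weighting from the text; the actual HAL model additionally distinguishes left-context from right-context co-occurrences, which this sketch omits.

```python
# Sketch of HAL-style distance-weighted co-occurrence counting.
# Assumption: look-ahead window only; real HAL keeps separate counts
# for words appearing before vs. after the target word.
from collections import defaultdict

def hal_counts(words, window=10):
    """Weighted co-occurrence counts: closer word pairs get larger increments."""
    assoc = defaultdict(int)
    for i, w in enumerate(words):
        # Look ahead up to `window` words; d is the distance within the frame.
        for d in range(1, window + 1):
            if i + d >= len(words):
                break
            assoc[(w, words[i + d])] += (window + 1) - d  # Delta = 11 - d when window = 10
    return assoc

words = "the quick brown fox jumps over the lazy dog".split()
m = hal_counts(words)
# Adjacent words receive the maximum increment of 10 per co-occurrence.
print(m[("quick", "brown")])  # -> 10
# "the" precedes "lazy" twice: at distance 7 (+4) and distance 1 (+10).
print(m[("the", "lazy")])     # -> 14
```

Because the increment decays linearly with distance, words that habitually occur close together accumulate much stronger associations than words that merely share a broad context, which is what lets HAL recover semantic relatedness from co-occurrence alone.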