{{Short description|Means to measure signal processing ability}} '''Detection theory''' or '''signal detection theory''' is a means to measure the ability to differentiate between information-bearing patterns (called [[Stimulus (psychology)|stimuli]] in living organisms, [[Signal (electronics)|signals]] in machines) and random patterns that distract from the information (called [[Noise (electronics)|noise]], consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of [[electronics]], '''signal recovery''' is the separation of such patterns from a disguising background.<ref name=Wilmshurst>{{cite book |title=Signal Recovery from Noise in Electronic Instrumentation |author=T. H. Wilmshurst |url=https://books.google.com/books?id=49hfsIPpGwYC&pg=PP11 |pages=11 ''ff'' |isbn=978-0-7503-0058-2 |edition=2nd |publisher=CRC Press |year=1990}}</ref>

According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might detect fainter stimuli than the same sentry in peacetime because of a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat.

Much of the early work in detection theory was done by [[radar]] researchers.<ref>{{Cite journal | pages = 90 | last = Marcum | first = J. I. 
| title = A Statistical Theory of Target Detection by Pulsed Radar | journal = The Research Memorandum | access-date = 2009-06-28 | year = 1947 | url = http://www.rand.org/pubs/research_memoranda/RM754/ }}</ref> By 1954, the theory was fully developed on the theoretical side as described by [[W. Wesley Peterson|Peterson]], Birdsall and Fox<ref>{{cite journal |last1=Peterson |first1=W. |last2=Birdsall |first2=T. |last3=Fox |first3=W. |title=The theory of signal detectability |journal=Transactions of the IRE Professional Group on Information Theory |date=September 1954 |volume=4 |issue=4 |pages=171–212 |doi=10.1109/TIT.1954.1057460 }}</ref> and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and [[John A. Swets]], also in 1954.<ref>{{cite journal |last1=Tanner |first1=Wilson P. |last2=Swets |first2=John A. |title=A decision-making theory of visual detection. |journal=Psychological Review |date=1954 |volume=61 |issue=6 |pages=401–409 |doi=10.1037/h0058700 |pmid=13215690 }}</ref> Detection theory was used in 1966 by John A. Swets and David M. Green for [[psychophysics]].<ref>Swets, J.A. (ed.) (1964) ''Signal detection and recognition by human observers''. New York: Wiley{{page needed|date=July 2019}}</ref> Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) [[response bias]]es.<ref name="Green&Swets">Green, D.M., Swets J.A. (1966) ''Signal Detection Theory and Psychophysics''. New York: Wiley. ({{ISBN|0-471-32420-5}}){{page needed|date=July 2019}}</ref> Detection theory has applications in many fields such as [[diagnostics]] of any kind, [[quality control]], [[telecommunications]], and [[psychology]]. The concept is similar to the [[signal-to-noise ratio]] used in the sciences and [[confusion matrix|confusion matrices]] used in [[artificial intelligence]]. 
It is also usable in [[alarm management]], where it is important to separate important events from [[background noise]].

==Psychology==
Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during [[eyewitness identification]].<ref>{{Cite journal | doi=10.1177/2372732215602267|title = Eyewitness Identification and the Accuracy of the Criminal Justice System| journal=Policy Insights from the Behavioral and Brain Sciences| volume=2| pages=175–186|year = 2015|last1 = Clark|first1 = Steven E.| last2=Benjamin| first2=Aaron S.| last3=Wixted| first3=John T.| last4=Mickes| first4=Laura| last5=Gronlund| first5=Scott D.| hdl=11244/49353| s2cid=18529957 | hdl-access=free}}</ref><ref>{{Cite journal | url=https://digitalcommons.fiu.edu/dissertations/AAI3169457/ | title=A theoretical analysis of eyewitness identification: Dual -process theory, signal detection theory and eyewitness confidence| journal=ProQuest Etd Collection for Fiu| pages=1–98| date=January 2005| last1=Haw| first1=Ryann Michelle}}</ref> SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimuli which are impaired by the fog. Since the brightness of an object, such as a traffic light, is used by the brain to discriminate its distance, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see also [[decision theory]]). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not on their perceived level of familiarity with the suspect.
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories:

:{| class="wikitable"
|-
!
! Respond "Absent"
! Respond "Present"
|-
! Stimulus Present
| [[Type 1 error#Type I and type II errors|Miss]]
| Hit
|-
! Stimulus Absent
| Correct Rejection
| [[Type 1 error#Type I and type II errors|False Alarm]]
|}

Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like the [[sensitivity index|sensitivity index ''d''']] and A',<ref name="Stanislaw 1999 137–49">{{cite journal |last1=Stanislaw |first1=Harold |last2=Todorov |first2=Natasha |title=Calculation of signal detection theory measures |journal=Behavior Research Methods, Instruments, & Computers |date=March 1999 |volume=31 |issue=1 |pages=137–149 |doi=10.3758/BF03207704 |pmid=10495845 |doi-access=free }}</ref> and response bias can be estimated with statistics like c and β.<ref name="Stanislaw 1999 137–49"/><ref>{{Cite web |title=Signal Detection Theory |url=https://elvers.us/perception/sdtGraphic/ |access-date=2023-07-14 |website=elvers.us}}</ref>

Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called targets, and new items are called distractors. Saying 'yes' to a target constitutes a hit, while saying 'yes' to a distractor constitutes a false alarm.

:{| class="wikitable"
|-
!
! Respond "No"
! Respond "Yes"
|-
! Target
| [[Type 1 error#Type I and type II errors|Miss]]
| Hit
|-
! Distractor
| Correct Rejection
| [[Type 1 error#Type I and type II errors|False Alarm]]
|}

== Applications ==
Signal detection theory has wide application, both in humans and [[Comparative psychology|animals]]. Topics include [[memory]], stimulus characteristics of schedules of reinforcement, and more.

=== Sensitivity or discriminability ===
Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present against background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-called [[sensitivity index]] or ''d'''. There are also [[non-parametric]] measures, such as the area under the [[Receiver operating characteristic|ROC curve]].<ref name="Green&Swets"/>

=== Bias ===
Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often ([[The Boy Who Cried Wolf|crying wolf]]) may make people less likely to respond, a problem that can be reduced by a conservative response bias.

=== Compressed sensing ===
Another field which is closely related to signal detection theory is ''[[compressed sensing]]'' (or compressive sensing).
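The sensitivity and bias statistics described above (''d''', c, and β) can be computed directly from the four trial counts. The following sketch uses the standard equal-variance Gaussian formulas from Stanislaw and Todorov (1999); the counts in the example are made up for illustration.

```python
import math
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, c, beta) from the four trial counts, assuming
    the equal-variance Gaussian model.  (A correction is needed when a
    hit or false-alarm rate is exactly 0 or 1, since z would be infinite.)"""
    z = NormalDist().inv_cdf                     # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)           # sensitivity d'
    c = -(z(hit_rate) + z(fa_rate)) / 2          # criterion (response bias)
    beta = math.exp(d_prime * c)                 # likelihood-ratio bias
    return d_prime, c, beta

# Hypothetical session: 75 hits, 25 misses, 30 false alarms, 70 correct rejections.
d_prime, c, beta = sdt_measures(75, 25, 30, 70)
```

With these counts the hit rate is 0.75 and the false-alarm rate 0.30, giving ''d''' ≈ 1.20 and a slightly liberal criterion (c < 0).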
The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is the recovery of high-dimensional signals which are known to be sparse (or nearly sparse) from only a few linear measurements. The number of measurements needed to recover the signal is far smaller than the Nyquist sampling theorem requires, provided that the signal is sparse, meaning that it contains only a few non-zero elements. There are different methods of signal recovery in compressed sensing, including ''[[basis pursuit]]'', the ''expander recovery algorithm'',<ref>{{cite journal |last1=Jafarpour |first1=Sina |last2=Xu |first2=Weiyu |last3=Hassibi |first3=Babak |last4=Calderbank |first4=Robert |title=Efficient and Robust Compressed Sensing Using Optimized Expander Graphs |journal=IEEE Transactions on Information Theory |date=September 2009 |volume=55 |issue=9 |pages=4299–4308 |doi=10.1109/tit.2009.2025528 |s2cid=15490427 |url=https://authors.library.caltech.edu/15653/1/Jafarpour2009p5830Ieee_T_Inform_Theory.pdf }}</ref> ''CoSaMP''<ref>{{Cite journal|last1=Needell|first1=D.|last2=Tropp|first2=J.A.|title=CoSaMP: Iterative signal recovery from incomplete and inaccurate samples|journal=Applied and Computational Harmonic Analysis|volume=26|issue=3|pages=301–321|doi=10.1016/j.acha.2008.07.002|year=2009|arxiv=0803.2392|s2cid=1642637 }}</ref> and a ''fast non-iterative algorithm''.<ref>Lotfi, M.; Vidyasagar, M. "[[arxiv:1708.03608|A Fast Noniterative Algorithm for Compressive Sensing Using Binary Measurement Matrices]]".</ref> In all of the recovery methods mentioned above, choosing an appropriate measurement matrix, using probabilistic or deterministic constructions, is of great importance.
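As a toy illustration of one of the recovery methods mentioned above, basis pursuit can be posed as a linear program: minimize ||x||₁ subject to Ax = y, with x split into non-negative parts u and v. The sketch below assumes a random Gaussian measurement matrix (one of the probabilistic constructions) and SciPy's <code>linprog</code>; the problem sizes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix

x_true = np.zeros(n)                          # k-sparse ground-truth signal
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # m << n linear measurements

# Basis pursuit: min ||x||_1  s.t.  A x = y.  Writing x = u - v with
# u, v >= 0 makes the objective 1'u + 1'v linear and the constraint
# A u - A v = y an ordinary equality, i.e. a linear program.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]                 # recovered signal
```

With m = 20 measurements of a 3-sparse, 50-dimensional signal, the l1 solution typically coincides with the true sparse vector, far below the n samples classical sampling would demand.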
In other words, measurement matrices must satisfy certain conditions, such as the [[Restricted isometry property|restricted isometry property]] (RIP) or the [[Nullspace property|null-space property]], in order to achieve robust sparse recovery.

==Mathematics==

=== P(H1|y) > P(H2|y) / MAP testing ===
In the case of making a decision between two [[Hypothesis|hypotheses]], ''H1'', absent, and ''H2'', present, in the event of a particular [[observation]], ''y'', a classical approach is to choose ''H1'' when ''p(H1|y) > p(H2|y)'' and ''H2'' in the reverse case.<ref name=Schonhoff>Schonhoff, T.A. and Giordano, A.A. (2006) ''Detection and Estimation Theory and Its Applications''. New Jersey: Pearson Education ({{ISBN|0-13-089499-0}})</ref> In the event that the two ''[[a posteriori]]'' [[probability|probabilities]] are equal, one might choose to default to a single choice (either always choose ''H1'' or always choose ''H2''), or might randomly select either ''H1'' or ''H2''. The ''[[A priori and a posteriori|a priori]]'' probabilities of ''H1'' and ''H2'' can guide this choice, e.g. by always choosing the hypothesis with the higher ''a priori'' probability.

When taking this approach, usually what one knows are the conditional probabilities, ''p(y|H1)'' and ''p(y|H2)'', and the ''[[A priori and a posteriori|a priori]]'' probabilities <math>p(H1) = \pi_1</math> and <math>p(H2) = \pi_2</math>. In this case,

:<math>p(H1|y) = \frac{p(y|H1) \cdot \pi_1}{p(y)}, \qquad p(H2|y) = \frac{p(y|H2) \cdot \pi_2}{p(y)}</math>

where ''p(y)'' is the total probability of event ''y'',

:<math> p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2. </math>

''H2'' is chosen in case

:<math> \frac{p(y|H2) \cdot \pi_2}{p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2} \ge \frac{p(y|H1) \cdot \pi_1}{p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2} </math>
:<math> \Rightarrow \frac{p(y|H2)}{p(y|H1)} \ge \frac{\pi_1}{\pi_2}</math>

and ''H1'' otherwise.
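The decision rule just derived can be checked numerically. The sketch below assumes illustrative unit-variance Gaussian likelihoods and priors; only the comparison of the likelihood ratio against π₁/π₂ comes from the derivation above.

```python
from statistics import NormalDist

# Assumed (illustrative) likelihoods: unit-variance Gaussians differing in mean.
h1 = NormalDist(mu=0.0, sigma=1.0)   # p(y|H1): signal absent
h2 = NormalDist(mu=1.0, sigma=1.0)   # p(y|H2): signal present
pi1, pi2 = 0.8, 0.2                  # a priori probabilities p(H1), p(H2)

def map_decide(y):
    """Choose H2 when p(y|H2)/p(y|H1) >= pi1/pi2, else H1."""
    likelihood_ratio = h2.pdf(y) / h1.pdf(y)
    return "H2" if likelihood_ratio >= pi1 / pi2 else "H1"
```

With these numbers the threshold is π₁/π₂ = 4, and since the likelihood ratio here is exp(y − 1/2), ''H2'' is declared only for observations above y ≈ 1.89: the strong prior toward ''H1'' demands strong evidence before switching.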
Often, the ratio <math>\frac{\pi_1}{\pi_2}</math> is called <math>\tau_{MAP}</math> and <math>\frac{p(y|H2)}{p(y|H1)}</math> is called <math>L(y)</math>, the ''[[Likelihood function|likelihood ratio]]''. Using this terminology, ''H2'' is chosen in case <math>L(y) \ge \tau_{MAP}</math>. This is called MAP testing (MAP stands for "maximum ''a posteriori''"). Taking this approach minimizes the expected number of errors one will make.

===Bayes criterion===
In some cases, it is far more important to respond appropriately to ''H1'' than it is to respond appropriately to ''H2''. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a [[nuclear weapon]]), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect a [[false alarm]] (i.e., H1 = FALSE, H2 = TRUE), assuming a large supply of fighter squadrons. The [[Thomas Bayes|Bayes]] criterion is an approach suitable for such cases.<ref name=Schonhoff/>

Here a [[utility]] is associated with each of four situations:
* <math>U_{11}</math>: One responds with behavior appropriate to H1 and H1 is true: fighters destroy bomber, incurring fuel, maintenance, and weapons costs, and taking the risk of some being shot down;
* <math>U_{12}</math>: One responds with behavior appropriate to H1 and H2 is true: fighters sent out, incurring fuel and maintenance costs; the bomber location remains unknown;
* <math>U_{21}</math>: One responds with behavior appropriate to H2 and H1 is true: city destroyed;
* <math>U_{22}</math>: One responds with behavior appropriate to H2 and H2 is true: fighters stay home, bomber location remains unknown.

As is shown below, what is important are the differences, <math>U_{11} - U_{21}</math> and <math>U_{22} - U_{12}</math>. Similarly, there are four probabilities, <math>P_{11}</math>, <math>P_{12}</math>, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility:

:<math> E\{U\} = P_{11} \cdot U_{11} + P_{21} \cdot U_{21} + P_{12} \cdot U_{12} + P_{22} \cdot U_{22} </math>
:<math> E\{U\} = P_{11} \cdot U_{11} + (1-P_{11}) \cdot U_{21} + P_{12} \cdot U_{12} + (1-P_{12}) \cdot U_{22} </math>
:<math> E\{U\} = U_{21} + U_{22} + P_{11} \cdot (U_{11} - U_{21}) - P_{12} \cdot (U_{22} - U_{12}) </math>

Effectively, one may maximize the sum,

:<math>U' = P_{11} \cdot (U_{11} - U_{21}) - P_{12} \cdot (U_{22} - U_{12}), </math>

and make the following substitutions:

:<math>P_{11} = \pi_1 \cdot \int_{R_1}p(y|H1)\, dy </math>
:<math>P_{12} = \pi_2 \cdot \int_{R_1}p(y|H2)\, dy </math>

where <math>\pi_1</math> and <math>\pi_2</math> are the ''a priori'' probabilities, <math>P(H1)</math> and <math>P(H2)</math>, and <math>R_1</math> is the region of observation events, ''y'', that are responded to as though ''H1'' is true.

:<math> \Rightarrow U' = \int_{R_1} \left \{ \pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) \right \} \, dy </math>

<math>U'</math> and thus <math>U</math> are maximized by extending <math>R_1</math> over the region where

:<math>\pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) > 0. </math>

This is accomplished by deciding H2 in case

:<math>\pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) \ge \pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) </math>
:<math> \Rightarrow L(y) \equiv \frac{p(y|H2)}{p(y|H1)} \ge \frac{\pi_1 \cdot (U_{11} - U_{21})}{\pi_2 \cdot (U_{22} - U_{12})} \equiv \tau_B </math>

and H1 otherwise, where ''L(y)'' is the so-defined ''[[Likelihood function|likelihood ratio]]''.
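The threshold <math>\tau_B</math> derived above can be made concrete with the bomber example. In the sketch below the Gaussian likelihoods, priors, and utilities are all illustrative assumptions (following this section's convention, H1 = bomber present, H2 = no bomber); only the comparison of L(y) against τ_B comes from the derivation.

```python
from statistics import NormalDist

# Assumed likelihoods for the alarm reading y.
h1 = NormalDist(mu=1.0, sigma=1.0)    # p(y|H1): bomber present
h2 = NormalDist(mu=0.0, sigma=1.0)    # p(y|H2): noise only
pi1, pi2 = 0.01, 0.99                 # bombers are rare a priori

# Assumed utilities: a destroyed city (U21) is far worse than a wasted sortie (U12).
U11, U12, U21, U22 = -5.0, -1.0, -100.0, 0.0

# Bayes threshold: decide H2 when L(y) >= tau_B.
tau_B = (pi1 * (U11 - U21)) / (pi2 * (U22 - U12))

def bayes_decide(y):
    """Choose H2 (stand down) when L(y) >= tau_B, else H1 (scramble fighters)."""
    return "H2" if h2.pdf(y) / h1.pdf(y) >= tau_B else "H1"
```

Here τ_B ≈ 0.96, far above the MAP threshold π₁/π₂ ≈ 0.01: although bombers are rare, the catastrophic cost of a miss makes the system scramble fighters on much weaker evidence than pure error-rate minimization would.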
===Normal distribution models===
Das and Geisler<ref name="Das">{{cite journal |last1=Das|first1=Abhranil|last2=Geisler|first2=Wilson|arxiv=2012.14331|title=A method to integrate and classify normal distributions|journal=Journal of Vision |year=2021 |volume=21 |issue=10 |page=1 |doi=10.1167/jov.21.10.1 |pmid=34468706 |pmc=8419883 }}</ref> extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate and [[confusion matrix]] for [[Ideal observer analysis|ideal observers]] and non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.

==See also==
{{div col|colwidth=22em}}
* [[Binary classification]]
* [[Constant false alarm rate]]
* [[Decision theory]]
* [[Demodulation]]
* [[Detector (radio)]]
* [[Estimation theory]]
* [[Just-noticeable difference]]
* [[Likelihood-ratio test]]
* [[Modulation]]
* [[Neyman–Pearson lemma]]
* [[Psychometric function]]
* [[Receiver operating characteristic]]
* [[Statistical hypothesis testing]]
* [[Statistical signal processing]]
* [[Two-alternative forced choice]]
* [[Type I and type II errors]]
{{div col end}}

==References==
<references/>

===Bibliography===
* Coren, S., Ward, L.M., Enns, J. T. (1994) ''Sensation and Perception''. (4th ed.) Toronto: Harcourt Brace.
* Kay, S.M. ''Fundamentals of Statistical Signal Processing: Detection Theory'' ({{ISBN|0-13-504135-X}})
* McNicol, D. (1972) ''A Primer of Signal Detection Theory''. London: George Allen & Unwin.
* Van Trees, H.L. ''Detection, Estimation, and Modulation Theory, Part 1'' ({{ISBN|0-471-09517-6}}; [https://web.archive.org/web/20050428233957/http://gunston.gmu.edu/demt/demtp1/ website])
* Wickens, Thomas D. (2002) ''Elementary Signal Detection Theory''. New York: Oxford University Press. ({{ISBN|0-19-509250-3}})

==External links==
* [http://www.cns.nyu.edu/~david/sdt/sdt.html A Description of Signal Detection Theory]
* [https://web.archive.org/web/20050427150430/http://strobos.cee.vt.edu/IGLC11/PDF%20Files/19.pdf An application of SDT to safety]
* [http://demonstrations.wolfram.com/SignalDetectionTheory/ Signal Detection Theory] by Garrett Neske, [[The Wolfram Demonstrations Project]]
* [https://harvard.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=9b856917-fad1-42ce-bdd2-ab3b0140eee7 Lecture by Steven Pinker]

{{Neuroscience}}
{{Psychology}}
{{DSP}}

{{DEFAULTSORT:Detection Theory}}
[[Category:Detection theory| ]]
[[Category:Signal processing]]
[[Category:Telecommunication theory]]
[[Category:Psychophysics]]
[[Category:Mathematical psychology]]