== Basic model ==

This section provides some basic examples of information cascades, as originally described by Bikhchandani et al. (1992).<ref name="Bik" /> The basic model has since been developed in a variety of directions to examine its robustness and better understand its implications.<ref name="BikhchandaniHirshleifer1998">{{cite journal |last1=Bikhchandani |first1=Sushil |last2=Hirshleifer |first2=David |last3=Welch |first3=Ivo |title=Learning from the Behavior of Others: Conformity, Fads, and Informational Cascades |journal=Journal of Economic Perspectives |date=August 1998 |volume=12 |issue=3 |pages=151–170 |doi=10.1257/jep.12.3.151 |doi-access=free |hdl=2027.42/35413 |hdl-access=free }}</ref><ref>{{cite journal |last1=Smith |first1=Lones |last2=Sorensen |first2=Peter |title=Pathological Outcomes of Observational Learning |journal=Econometrica |date=March 2000 |volume=68 |issue=2 |pages=371–398 |doi=10.1111/1468-0262.00113 |hdl=1721.1/64049 |s2cid=14414203 |url=https://curis.ku.dk/portal/da/publications/pathological-outcomes-of-observational-learning(4c2aedb0-74c6-11db-bee9-02004c4f4f50).html |hdl-access=free}}</ref>

=== Qualitative example ===

Information cascades occur when external information obtained from previous participants in an event overrides one's own private signal, irrespective of which is correct. The experiment conducted by Anderson and Holt<ref name="Anderson">{{cite journal |last1=Anderson |first1=Lisa R. |last2=Holt |first2=Charles A. |title=Information Cascades in the Laboratory |journal=The American Economic Review |date=1997 |volume=87 |issue=5 |pages=847–862 |jstor=2951328 }}</ref> is a useful example of this process. The experiment uses two urns, labeled A and B. Urn A contains two balls labeled "a" and one labeled "b"; urn B contains one ball labeled "a" and two labeled "b". The urn from which balls are drawn during each run is determined randomly and with equal probabilities (by the throw of a die). The contents of the chosen urn are emptied into a neutral container, and the participants are then asked in random order to draw a ball from this container. This entire process is termed a "run", and a number of such runs are performed.

Each time a participant draws a ball, he must decide which urn it belongs to. His decision is then announced for the benefit of the remaining participants in the room. Thus, the (''n''+1)th participant has information about the decisions made by all ''n'' participants preceding him, as well as his private signal, namely the label on the ball he draws during his turn. An information cascade was observed in 41 of 56 such runs; that is, in these runs at least one participant gave precedence to earlier decisions over his own private signal. It is possible for such an occurrence to produce the wrong result; this phenomenon is known as a "reverse cascade".
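The sequential logic of a run can be illustrated with a short simulation. The sketch below is illustrative rather than a reconstruction of the actual experiment: it assumes six participants per run, draws with replacement (so private signals are independent), and rational participants who break ties in favor of their own signal; the helper name <code>run_once</code> is invented for the example. Under these assumptions, each announcement reveals the announcer's private signal until the announced decisions lean two or more to one side, at which point everyone imitates and a cascade, correct or reverse, locks in.

<syntaxhighlight lang="python">
import random

Q = 2 / 3  # chance a drawn ball matches the majority label of the true urn

def run_once(n_participants=6, rng=random):
    """Simulate one run; return (cascade_started, cascade_correct)."""
    urn_is_a = rng.random() < 0.5       # the die throw selects urn A or B
    lead = 0                            # net inferred signals: (#"a") - (#"b")
    for _ in range(n_participants):
        if abs(lead) >= 2:
            break                       # cascade: later participants only imitate
        p_a = Q if urn_is_a else 1 - Q  # draws with replacement -> i.i.d. signals
        signal = +1 if rng.random() < p_a else -1
        # With a lead of at most 1, the rational announcement always matches
        # the private signal (ties broken in its favor), so it reveals it.
        lead += signal
    started = abs(lead) >= 2
    correct = started and ((lead > 0) == urn_is_a)
    return started, correct

runs = 10_000
results = [run_once() for _ in range(runs)]
n_started = sum(s for s, _ in results)
n_reverse = sum(s and not c for s, c in results)
print(f"cascades in {n_started / runs:.0%} of runs, "
      f"of which {n_reverse / max(n_started, 1):.0%} are reverse cascades")
</syntaxhighlight>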
=== Quantitative description ===

A person's signal telling them to accept is denoted as {{mvar|H}} (a high signal, where high signifies he should accept), and a signal telling them not to accept is {{mvar|L}} (a low signal). The model assumes that when the correct decision is to accept, individuals will be more likely to see an {{mvar|H}}, and conversely, when the correct decision is to reject, individuals are more likely to see an {{mvar|L}} signal. This is essentially a [[conditional probability]]: the probability of {{mvar|H}} when the correct action is to accept, or <math>P[H|A]</math>. Similarly, <math>P[L|R]</math> is the probability that an agent gets an {{mvar|L}} signal when the correct action is to reject. If both of these likelihoods equal ''q'', then ''q'' > 0.5. This is summarized in the table below.<ref name="Easley">{{cite book|url=http://www.cs.cornell.edu/home/kleinber/networks-book/|title=Networks, Crowds and Markets: Reasoning about a Highly Connected World|last=Easley|first=David|publisher=Cambridge University Press|year=2010|pages=483–506}}</ref>

{| class="wikitable"
|-
! rowspan=2 | Agent signal
! colspan=2 | True probability state
|-
! Reject !! Accept
|-
| ''L'' || ''q'' || 1 − ''q''
|-
| ''H'' || 1 − ''q'' || ''q''
|}

The first agent determines whether or not to accept based solely on his own signal. As the model assumes that all agents act rationally, the action (accept or reject) the agent feels is more likely is the action he will choose to take. This decision can be explained using [[Bayes' rule]]:
<math display="block">\begin{align}
P\left(A|H\right) &= \frac{P\left(A\right) P\left(H|A\right)}{P\left(H\right)} \\
&= \frac{P\left(A\right) P\left(H|A\right)}{P\left(A\right) P\left(H|A\right) + P\left(R\right) P\left(H|R\right)} \\
&= \frac{pq}{pq + \left(1 - p\right)\left(1 - q\right)} \\
&> p
\end{align}</math>

If the agent receives an {{mvar|H}} signal, then the likelihood of accepting is obtained by calculating <math>P[A|H]</math>. The equation says that, because ''q'' > 0.5, the first agent, acting only on his private signal, will always increase his estimate of ''p'' upon seeing an {{mvar|H}} signal. Similarly, it can be shown that an agent will always decrease his expectation of ''p'' when he receives a low signal. If the value, {{mvar|V}}, of accepting equals the value of rejecting, then an agent will accept if he believes ''p'' > 0.5, and reject otherwise. Because this agent started out with the assumption that accepting and rejecting are equally viable options (''p'' = 0.5), observing an {{mvar|H}} signal leads him to conclude that accepting is the rational choice.

The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the ''n''th agent considers the decisions of the previous ''n'' − 1 agents together with his own signal. He makes a decision based on Bayesian reasoning to determine the most rational choice:
<math display="block">P (A \mid \text{Previous}, \text{Personal signal}) = \frac{pq^a (1 - q)^b}{p q^a (1 - q)^b + (1 - p)(1 - q)^a q^b}</math>
where {{mvar|a}} is the number of accepts in the previous set plus the agent's own signal, and {{mvar|b}} is the number of rejects. Thus, {{tmath|1=a + b = n}}.
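The update rule can be checked numerically. The following sketch (the function name <code>posterior_accept</code> is invented for illustration) evaluates the posterior from the formula above with ''p'' = 0.5 and, say, ''q'' = 0.7:

<syntaxhighlight lang="python">
def posterior_accept(a: int, b: int, p: float = 0.5, q: float = 0.7) -> float:
    """P(accept is correct | a high and b low signals), per the formula above.

    a counts accepts among inferred earlier signals plus the agent's own
    signal, b counts rejects, so that a + b = n.
    """
    num = p * q**a * (1 - q)**b
    return num / (num + (1 - p) * (1 - q)**a * q**b)

# First agent, single H signal (a=1, b=0): posterior rises from p=0.5 to q.
print(posterior_accept(1, 0))  # 0.7

# Two earlier accepts, own signal L (a=2, b=1): the L cancels one H, leaving
# a one-signal lead, so the posterior is again q > 0.5 and the agent accepts.
print(posterior_accept(2, 1))  # 0.7 -- a cascade has begun
</syntaxhighlight>

The second call shows why a lead of two decisions starts a cascade: a single opposing private signal can at most cancel one inferred signal, so it can never pull the posterior back below 0.5.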
The decision is based on how the value on the right-hand side of the equation compares with ''p''.<ref name="Easley"/>

=== Explicit model assumptions ===

The original model makes several assumptions about human behavior and the world in which humans act,<ref name="Bik">{{cite journal |last1=Bikhchandani |first1=Sushil |last2=Hirshleifer |first2=David |last3=Welch |first3=Ivo |title=A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades |journal=Journal of Political Economy |date=October 1992 |volume=100 |issue=5 |pages=992–1026 |doi=10.1086/261849 |s2cid=7784814 |url=http://www.dklevine.com/archive/refs41193.pdf }}</ref> some of which are relaxed in later versions<ref name="Easley"/> or in alternate definitions of similar problems, such as the [[diffusion of innovations]].

# [[Bounded rationality|Boundedly rational]] agents: The original information cascade model assumes humans are boundedly rational<ref name=newellsimon>{{cite book|last=Newell|first=A.|title=Human problem solving|url=https://archive.org/details/humanproblemsolv0000newe|url-access=registration|year=1972|publisher=Prentice Hall|location=Englewood Cliffs, NY|isbn=9780134454030 }}</ref> – that is, they always make rational decisions based on the information they can observe, but the information they observe may be incomplete or incorrect. In other words, agents do not have complete knowledge of the world around them (which would allow them to make the correct decision in any situation). In this way, there is a point at which, even if a person has correct knowledge of the idea or action cascading, they can be convinced via social pressures to adopt some alternate, incorrect view of the world.
# [[Complete information|Incomplete knowledge]] of others: The original information cascade model assumes that agents have incomplete knowledge of the agents who precede them in the specified order. As opposed to definitions where agents have some knowledge of the "private information" held by previous agents, the current agent makes a decision based only on the observable actions (whether or not to imitate) of those preceding him. It is important to note that the original creators argue this is a reason why information cascades can be caused by small shocks.
# Behavior of all previous agents is known: each agent observes the entire ordered sequence of decisions made before his turn, rather than an aggregate or a sample of them.
=== Resulting conditions ===

# '''Cascades will always occur''' – as discussed, in the simple model the likelihood of a cascade occurring approaches 1 as the number of people making decisions grows toward infinity.
# '''Cascades can be incorrect''' – because agents make decisions with both bounded rationality and probabilistic knowledge of the initial truth (e.g. whether accepting or rejecting is the correct decision), the incorrect behavior may cascade through the system.
# '''Cascades can be based on little information''' – mathematically, a cascade of infinite length can occur based only on the decisions of two people, as the worked example after this list shows. More generally, a small set of people who strongly promote an idea as being rational can rapidly influence a much larger subset of the general population.
# '''Cascades are fragile''' – because agents receive no extra information once the difference between {{mvar|a}} and {{mvar|b}} grows beyond 2, and because such differences can arise after only a few agents have acted, agents weighing the opinions of those who acted on actual information can be dissuaded from a choice rather easily.<ref name="Bik"/> This suggests that cascades are susceptible to the release of public information. Bikhchandani et al.<ref name="Bik"/> also discuss this result in the context of the underlying value ''p'' changing over time, in which case a cascade can rapidly change course.
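As a worked instance of the third condition, take ''p'' = 0.5 and suppose the first two agents both accept, revealing two {{mvar|H}} signals, while the third agent draws an {{mvar|L}}. By the formula above, his posterior is

<math display="block">P(A \mid H, H, L) = \frac{p\,q^2(1-q)}{p\,q^2(1-q) + (1-p)(1-q)^2\,q} = q > \frac{1}{2},</math>

so he rationally accepts despite his own low signal, and his decision adds no information for those after him. Every subsequent agent faces the same calculation, so two early decisions are enough to fix the behavior of an arbitrarily long sequence of agents.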