Infinite monkey theorem
===Direct proof===
There is a straightforward proof of this theorem. As an introduction, recall that if two events are [[statistically independent]], then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain in [[Moscow]] on a particular day in the future is 0.4 and the chance of an [[earthquake]] in [[San Francisco]] on any particular day is 0.00003, then the chance of both happening on the same day is {{nowrap|1=0.4 × 0.00003 = 0.000012}}, [[Statistical assumption|assuming]] that they are indeed independent.

Consider the probability of typing the word ''banana'' on a typewriter with 50 keys. Suppose that the keys are pressed independently and uniformly at random, meaning that each key has an equal chance of being pressed regardless of which keys have been pressed previously. The chance that the first letter typed is 'b' is 1/50, the chance that the second letter typed is 'a' is also 1/50, and so on. Therefore, the probability of the first six letters spelling ''banana'' is:

:(1/50) × (1/50) × (1/50) × (1/50) × (1/50) × (1/50) = (1/50)<sup>6</sup> = 1/15,625,000,000.

The result is less than one in 15 billion, but ''not'' zero.

From the above, the chance of ''not'' typing ''banana'' in a given block of 6 letters is 1 − (1/50)<sup>6</sup>. Because each block is typed independently, the chance ''X''<sub>''n''</sub> of not typing ''banana'' in any of the first ''n'' blocks of 6 letters is:

:<math>X_n=\left(1-\frac{1}{50^6}\right)^n.</math>

As ''n'' grows, ''X''<sub>''n''</sub> gets smaller. For ''n'' = 1 million, ''X''<sub>''n''</sub> is roughly 0.9999, but for ''n'' = 10 billion it is roughly 0.53, and for ''n'' = 100 billion it is roughly 0.0017.
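The arithmetic above can be checked numerically. A minimal sketch in Python; the 50-key typewriter and 6-letter block are taken from the text, and the computation is exact enough in double precision that no log-space trick is needed:

```python
# Probability that one uniformly random 6-key block on a 50-key
# typewriter spells a fixed 6-letter word such as "banana".
p_block = (1 / 50) ** 6          # = 1/15,625,000,000 = 6.4e-11

# X_n: probability that NONE of the first n independent blocks
# spells the word.
def x(n):
    return (1 - p_block) ** n

print(p_block)        # 6.4e-11
print(x(10 ** 6))     # ~0.9999
print(x(10 ** 10))    # ~0.53
print(x(10 ** 11))    # ~0.0017
```

As the printed values show, ''X''<sub>''n''</sub> decays toward zero only for very large ''n'', matching the figures quoted in the text.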
As ''n'' approaches infinity, the probability ''X''<sub>''n''</sub> [[limit of a function|approaches]] zero; that is, by making ''n'' large enough, ''X''<sub>''n''</sub> can be made as small as is desired,<ref name="Isaac1995">{{cite book |last=Isaac |first=Richard E. |title=The Pleasures of Probability |publisher=Springer |year=1995 |isbn=0-387-94415-X |location=New York |pages=48–50 |oclc=610945749 |postscript=– Isaac generalizes this argument immediately to variable text and alphabet size; the common main conclusion is on page 50.}}</ref> and the chance of typing ''banana'' approaches 100%.{{efn|This shows that the probability of typing "banana" in one of the predefined non-overlapping blocks of six letters tends to 1. In addition the word may appear across two blocks, so the estimate given is conservative.}} Thus, the probability of the word ''banana'' appearing at some point in an infinite sequence of keystrokes is equal to one.

The same argument applies if we replace one monkey typing ''n'' consecutive blocks of text with ''n'' monkeys each typing one block (simultaneously and independently). In this case, ''X''<sub>''n''</sub> = (1 − (1/50)<sup>6</sup>)<sup>''n''</sup> is the probability that none of the first ''n'' monkeys types ''banana'' correctly on their first try. Therefore, at least one of infinitely many monkeys will (''with probability equal to one'') produce a text using the same number of keystrokes as a perfectly accurate human typist copying it from the original.

====Infinite strings====
This can be stated more generally and compactly in terms of [[string (computer science)|strings]], which are sequences of characters chosen from some finite [[alphabet]]:
* Given an infinite string where each character is chosen independently and [[Uniform distribution (discrete)|uniformly at random]], any given finite string almost surely occurs as a [[substring]] at some position.
* Given an infinite sequence of infinite strings, where each character of each string is chosen independently and uniformly at random, any given finite string almost surely occurs as a prefix of one of these strings.

Both follow easily from the second [[Borel–Cantelli lemma]]. For the second theorem, let ''E''<sub>''k''</sub> be the [[event (probability theory)|event]] that the ''k''th string begins with the given text. Each ''E''<sub>''k''</sub> has the same fixed nonzero probability ''p'' of occurring, the ''E''<sub>''k''</sub> are independent, and the sum below diverges,

:<math>\sum_{k=1}^\infty P(E_k) = \sum_{k=1}^\infty p = \infty,</math>

so the probability that infinitely many of the ''E''<sub>''k''</sub> occur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text and make ''E''<sub>''k''</sub> the event where the ''k''th block equals the desired string.{{efn|The first theorem is proven by a similar if more indirect route in Gut (2005).<ref>{{cite book |last=Gut |first=Allan |title=Probability: A Graduate Course |year=2005 |publisher=Springer |isbn=0-387-22833-0 |pages=97–100}}</ref>}}
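The block-matching events ''E''<sub>''k''</sub> can be simulated. A toy sketch in Python; the two-letter alphabet and the target word "abba" are illustrative assumptions chosen so that matches arrive quickly (with the article's 50-key alphabet and a 6-letter word, a simulation would need billions of blocks):

```python
import random

random.seed(0)
ALPHABET = "ab"    # toy 2-letter alphabet (assumption, for speed)
TARGET = "abba"    # any fixed finite string works
p = (1 / len(ALPHABET)) ** len(TARGET)   # per-block match probability = 1/16

# Divide a long random string into non-overlapping blocks of
# len(TARGET) characters; E_k is the event "block k equals TARGET".
def first_matching_block(max_blocks=10 ** 6):
    for k in range(max_blocks):
        block = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        if block == TARGET:
            return k
    return None   # no match within the budget (never happened in practice)

# In 1000 independent runs a matching block always turned up, and the
# index of the first match averages near (1 - p) / p = 15, as the
# geometric distribution predicts.
hits = [first_matching_block() for _ in range(1000)]
print(all(h is not None for h in hits))
print(sum(hits) / len(hits))
```

Because ''p'' is fixed and positive, the expected wait for the first matching block is finite (1/''p'' blocks), which is the intuition the Borel–Cantelli argument makes rigorous: with infinitely many independent blocks, infinitely many matches occur with probability one.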