{{Use American English|date = February 2019}}
{{Short description|Random process of binary (boolean) random variables}}
{{More footnotes|date=September 2011}}
{{Probability fundamentals}}

In [[probability]] and [[statistics]], a '''Bernoulli process''' (named after [[Jacob Bernoulli]]) is a finite or infinite sequence of binary [[random variable]]s, so it is a [[discrete-time stochastic process]] that takes only two values, canonically 0 and 1. The component '''Bernoulli variables''' ''X''<sub>''i''</sub> are [[Independent and identically distributed random variables|identically distributed and independent]]. Prosaically, a Bernoulli process is repeated [[coin flipping]], possibly with an unfair coin (but with consistent unfairness). Every variable ''X''<sub>''i''</sub> in the sequence is associated with a [[Bernoulli trial]] or experiment. They all have the same [[Bernoulli distribution]].

Much of what can be said about the Bernoulli process can also be generalized to more than two outcomes (such as the process for a six-sided die); this generalization is known as the [[Bernoulli scheme]].

The problem of determining the process, given only a limited sample of Bernoulli trials, may be called the problem of [[checking whether a coin is fair]].

==Definition==
A ''Bernoulli process'' is a finite or infinite sequence of [[statistical independence|independent]] [[random variable]]s ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ..., such that
* for each ''i'', the value of ''X''<sub>''i''</sub> is either 0 or 1;
* for all values of ''i'', the probability ''p'' that ''X''<sub>''i''</sub> = 1 is the same.

In other words, a Bernoulli process is a sequence of [[independent identically distributed]] [[Bernoulli trial]]s. Independence of the trials implies that the process is [[Memorylessness|memoryless]]: past outcomes provide no information about the probabilities of future outcomes. In most instances the true value of ''p'' is unknown; past frequencies are then used to estimate ''p'' by probabilistic inference, and thereby to forecast future events and their probabilities. If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process, the fresh-start property.

===Interpretation===
The two possible values of each ''X''<sub>''i''</sub> are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the ''i''th "trial". Two other common interpretations of the values are true or false and yes or no. Under any interpretation of the two values, the individual variables ''X''<sub>''i''</sub> may be called [[Bernoulli trial]]s with parameter ''p''.

In many applications time passes between trials as the index ''i'' increases. In effect, the trials ''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X''<sub>''i''</sub>, ... happen at "points in time" 1, 2, ..., ''i'', .... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any ''X''<sub>''i''</sub> and ''X''<sub>''j''</sub> in the process are simply two from a set of random variables indexed by {1, 2, ..., ''n''}, the finite case, or by {1, 2, 3, ...}, the infinite case.
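As an illustration, the process is straightforward to simulate. The following minimal Python sketch (the function name and parameters are illustrative, not from any standard library) draws ''n'' independent trials with success probability ''p'' and then estimates ''p'' by the observed frequency of successes:

<syntaxhighlight lang="python">
import random

def bernoulli_process(p, n, seed=None):
    """Simulate n independent Bernoulli(p) trials, returned as 0/1 values."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process(p=0.3, n=10_000, seed=42)
# By the law of large numbers, the sample mean is close to the true p.
print(sum(trials) / len(trials))  # approximately 0.3
</syntaxhighlight>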
One experiment with only two possible outcomes, often referred to as "success" and "failure", usually encoded as 1 and 0, can be modeled as a [[Bernoulli distribution]].<ref name=":0">{{Cite book|title=A modern introduction to probability and statistics|isbn=9781852338961|pages=45–46|last1=Dekking|first1=F. M.|last2=Kraaikamp|first2=C.|last3=Lopuhaä|first3=H. P.|last4=Meester|first4=L. E.|year=2005|publisher=Springer}}</ref> Several random variables and [[probability distribution]]s besides the Bernoulli may be derived from the Bernoulli process:
* The number of successes in the first ''n'' trials, which has a [[binomial distribution]] B(''n'', ''p'')
* The number of failures needed to get ''r'' successes, which has a [[negative binomial distribution]] NB(''r'', ''p'')
* The number of failures needed to get one success, which has a [[geometric distribution]] NB(1, ''p''), a special case of the negative binomial distribution

The negative binomial variables may be interpreted as random [[Negative binomial distribution#Waiting time in a Bernoulli process|waiting times]].

==Formal definition==
The Bernoulli process can be formalized in the language of [[probability space]]s as a random sequence of independent realizations of a random variable that can take values of heads or tails. The state space for an individual value is denoted by <math>2=\{H,T\}.</math>

===Borel algebra===
Consider the [[countably infinite]] [[direct product]] of copies of <math>2=\{H,T\}</math>. It is common to examine either the one-sided set <math>\Omega=2^\mathbb{N}=\{H,T\}^\mathbb{N}</math> or the two-sided set <math>\Omega=2^\mathbb{Z}</math>. There is a natural [[topology]] on this space, called the [[product topology]]. The sets in this topology are finite sequences of coin flips, that is, finite-length [[string (computer science)|strings]] of ''H'' and ''T'' (''H'' stands for heads and ''T'' stands for tails), with the rest of the (infinitely long) sequence taken as "don't care". These sets of finite sequences are referred to as [[cylinder set]]s in the product topology. The collection of all such cylinder sets generates a [[sigma algebra]], specifically, a [[Borel algebra]]. The resulting measurable space is commonly written as <math>(\Omega, \mathcal{B})</math>, where <math>\mathcal{B}</math> is generated by the finite-length sequences of coin flips (the cylinder sets).

===Bernoulli measure===
If the chances of flipping heads or tails are given by the probabilities <math>\{p,1-p\}</math>, then one can define a natural [[measure (mathematics)|measure]] on the product space, given by <math>P=\{p, 1-p\}^\mathbb{N}</math> (or by <math>P=\{p, 1-p\}^\mathbb{Z}</math> for the two-sided process). In other words, if a [[discrete random variable]] ''X'' has a ''Bernoulli distribution'' with parameter ''p'', where 0 ≤ ''p'' ≤ 1, then its [[probability mass function]] is given by

:<math>p_X(1)=P(X=1)=p \quad\text{and}\quad p_X(0)=P(X=0)=1-p.</math>

This distribution is denoted by Ber(''p'').<ref name=":0" />

Given a cylinder set, that is, a specific sequence of coin flip results <math>[\omega_1, \omega_2,\cdots,\omega_n]</math> at times <math>1,2,\cdots,n</math>, the probability of observing this particular sequence is given by

:<math>P([\omega_1, \omega_2,\cdots ,\omega_n])= p^k (1-p)^{n-k}</math>

where ''k'' is the number of times that ''H'' appears in the sequence, and ''n''−''k'' is the number of times that ''T'' appears in the sequence.
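Concretely, the probability of such a cylinder set follows directly from this formula; a short illustrative Python sketch:

<syntaxhighlight lang="python">
def cylinder_probability(flips, p):
    """Probability of the cylinder set fixing the finite prefix `flips`
    (a string of 'H' and 'T'), under the Bernoulli measure with P(H) = p."""
    k = flips.count('H')               # number of heads in the prefix
    n = len(flips)
    return p**k * (1 - p)**(n - k)

# For a fair coin, every prefix of length 5 has probability 1/32.
print(cylinder_probability("HHTHT", 0.5))  # 0.03125
</syntaxhighlight>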
There are several different kinds of notations for this probability; a common one is to write

:<math>P(X_1=x_1, X_2=x_2,\cdots, X_n=x_n)= p^k (1-p)^{n-k}</math>

where each <math>X_i</math> is a binary-valued [[random variable]] with <math>x_i=[\omega_i=H]</math> in [[Iverson bracket]] notation, meaning either <math>1</math> if <math>\omega_i=H</math> or <math>0</math> if <math>\omega_i=T</math>. This probability <math>P</math> is commonly called the [[Bernoulli measure]].<ref name=klenke>{{cite book |first=Achim |last=Klenke |title=Probability Theory |year=2006 |publisher=Springer-Verlag |isbn=978-1-84800-047-6}}</ref>

Note that the probability of any specific, infinitely long sequence of coin flips is exactly zero; this is because <math>\lim_{n\to\infty}p^n=0</math> for any <math>0\le p<1</math>, and thus any given infinite sequence has [[measure zero]]. Nevertheless, one can still say that some classes of infinite sequences of coin flips are far more likely than others; this is given by the [[asymptotic equipartition property]].

To conclude the formal definition, a Bernoulli process is then given by the probability triple <math>(\Omega, \mathcal{B}, P)</math>, as defined above.

==Law of large numbers, binomial distribution and central limit theorem==
{{main article|Law of large numbers|Central limit theorem|Binomial distribution}}
Let us assume the canonical process with <math>H</math> represented by <math>1</math> and <math>T</math> represented by <math>0</math>. The [[law of large numbers]] states that the average of the sequence, i.e., <math>\bar{X}_{n}:=\frac{1}{n}\sum_{i=1}^{n}X_{i}</math>, will approach the [[expected value]] [[almost surely]]; that is, the events which do not satisfy this limit have zero probability. The expected value of flipping ''heads'', assumed to be represented by 1, is given by <math>p</math>. In fact, one has

:<math>\mathbb{E}[X_i]=\mathbb{P}([X_i=1])=p</math>

for any given random variable <math>X_i</math> out of the infinite sequence of [[Bernoulli trial]]s that compose the Bernoulli process.

One is often interested in knowing how often one will observe ''H'' in a sequence of ''n'' coin flips. This is given by simply counting: given ''n'' successive coin flips, that is, given the set of all possible [[string (computer science)|strings]] of length ''n'', the number ''N''(''k'',''n'') of such strings that contain ''k'' occurrences of ''H'' is given by the [[binomial coefficient]]

:<math>N(k,n) = {n \choose k}=\frac{n!}{k! (n-k)!}.</math>

If the probability of flipping heads is given by ''p'', then the total probability of seeing a string of length ''n'' with ''k'' heads is

:<math>\mathbb{P}([S_n=k]) = {n\choose k} p^k (1-p)^{n-k},</math>

where <math>S_n=\sum_{i=1}^{n}X_i</math>. The probability measure thus defined is known as the [[binomial distribution]]. As the above formula shows, if ''n'' = 1 the binomial distribution reduces to a Bernoulli distribution; the Bernoulli distribution is thus exactly the special case of the binomial distribution with a single trial.
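The binomial probabilities are easy to evaluate numerically. The illustrative Python sketch below applies the formula directly and checks the reduction to the Bernoulli distribution at ''n'' = 1:

<syntaxhighlight lang="python">
from math import comb

def binomial_pmf(k, n, p):
    """P(S_n = k): probability of exactly k heads among n coin flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(3, 10, 0.3))  # about 0.2668
# With n = 1 the binomial distribution reduces to Bernoulli(p):
print(binomial_pmf(1, 1, 0.3), binomial_pmf(0, 1, 0.3))  # 0.3 0.7
</syntaxhighlight>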
Of particular interest is the question of the value of <math>S_{n}</math> for sufficiently long sequences of coin flips, that is, for the limit <math>n\to\infty</math>. In this case, one may make use of [[Stirling's approximation]] to the factorial, and write

:<math>n! = \sqrt{2\pi n} \;n^n e^{-n} \left(1 + \mathcal{O}\left(\frac{1}{n}\right)\right).</math>

Inserting this into the expression for ''P''(''k'',''n''), one obtains the [[normal distribution]]; this is the content of the [[central limit theorem]], and this is the simplest example thereof.

The combination of the law of large numbers, together with the central limit theorem, leads to an interesting and perhaps surprising result: the [[asymptotic equipartition property]]. Put informally, one notes that, yes, over many coin flips, one will observe ''H'' exactly ''p'' fraction of the time, and that this corresponds exactly with the peak of the Gaussian. The asymptotic equipartition property essentially states that this peak is infinitely sharp, with infinite fall-off on either side. That is, given the set of all possible infinitely long strings of ''H'' and ''T'' occurring in the Bernoulli process, this set is partitioned into two: those strings that occur with probability 1, and those that occur with probability 0. This partitioning is known as the [[Kolmogorov 0-1 law]].

The size of this set is also interesting and can be explicitly determined: its logarithm is exactly the [[information entropy|entropy]] of the Bernoulli process. Once again, consider the set of all strings of length ''n''. The size of this set is <math>2^n</math>. Of these, only a certain subset is likely; the size of this set is <math>2^{nH}</math> for <math>H\le 1</math>. By using Stirling's approximation, putting it into the expression for ''P''(''k'',''n''), solving for the location and width of the peak, and finally taking <math>n\to\infty</math>, one finds that

:<math>H=-p\log_2 p - (1-p)\log_2(1-p).</math>

This value is the [[Bernoulli entropy]] of a Bernoulli process. Here, ''H'' stands for entropy; it is not to be confused with the same symbol ''H'' standing for ''heads''.
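Evaluating this formula is straightforward; the following illustrative Python sketch computes the entropy in bits per trial:

<syntaxhighlight lang="python">
from math import log2

def bernoulli_entropy(p):
    """Entropy H(p) of a Bernoulli process, in bits per trial."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic process carries no information
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(bernoulli_entropy(0.5))  # 1.0 -- a fair coin is maximally random
print(bernoulli_entropy(0.1))  # about 0.469
</syntaxhighlight>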
[[John von Neumann]] posed a question about the Bernoulli process regarding the possibility of a given process being [[isomorphic]] to another, in the sense of the [[isomorphism of dynamical systems]]. The question long defied analysis, but was finally and completely answered with the [[Ornstein isomorphism theorem]]. This breakthrough resulted in the understanding that the Bernoulli process is unique and [[universal property|universal]]; in a certain sense, it is the single most random process possible; nothing is 'more' random than the Bernoulli process. (One must be careful with this informal statement; certainly, systems that are [[mixing (mathematics)|mixing]] are, in a certain sense, "stronger" than the Bernoulli process, which is merely ergodic but not mixing. However, such processes do not consist of independent random variables: indeed, many purely deterministic, non-random systems can be mixing.)

==Dynamical systems==
The Bernoulli process can also be understood to be a [[dynamical system]], as an example of an [[ergodic system]] and, specifically, a [[measure-preserving dynamical system]], in one of several different ways. One way is as a [[shift space]], and the other is as an [[Markov odometer|odometer]]. These are reviewed below.

===Bernoulli shift===
{{main article|Bernoulli scheme|Dyadic transformation}}
One way to create a dynamical system out of the Bernoulli process is as a [[shift space]]. There is a natural translation symmetry on the product space <math>\Omega=2^\mathbb{N}</math> given by the [[shift operator]]

:<math>T(X_0, X_1, X_2, \cdots) = (X_1, X_2, \cdots).</math>

The Bernoulli measure, defined above, is translation-invariant; that is, given any cylinder set <math>\sigma\in\mathcal{B}</math>, one has

:<math>P(T^{-1}(\sigma))=P(\sigma)</math>

and thus the [[Bernoulli measure]] is a [[Haar measure]]; it is an [[invariant measure]] on the product space.

Instead of the probability measure <math>P:\mathcal{B}\to\mathbb{R}</math>, consider an arbitrary function <math>f:\mathcal{B}\to\mathbb{R}</math>. The [[pushforward measure|pushforward]]

:<math>f\circ T^{-1}</math>

defined by <math>\left(f\circ T^{-1}\right)(\sigma) = f(T^{-1}(\sigma))</math> is again some function <math>\mathcal{B}\to\mathbb{R}.</math> Thus, the map <math>T</math> induces another map <math>\mathcal{L}_T</math> on the space of all functions <math>\mathcal{B}\to\mathbb{R}.</math> That is, given some <math>f:\mathcal{B}\to\mathbb{R}</math>, one defines

:<math>\mathcal{L}_T f = f \circ T^{-1}.</math>

The map <math>\mathcal{L}_T</math> is a [[linear operator]], as one (obviously) has <math>\mathcal{L}_T(f+g)= \mathcal{L}_T(f) + \mathcal{L}_T(g)</math> and <math>\mathcal{L}_T(af)= a\mathcal{L}_T(f)</math> for functions <math>f,g</math> and constant <math>a</math>. This linear operator is called the [[transfer operator]] or the ''Ruelle–Frobenius–Perron operator''. This operator has a [[spectrum]], that is, a collection of [[eigenfunction]]s and corresponding eigenvalues. The largest eigenvalue is the [[Frobenius–Perron theorem|Frobenius–Perron eigenvalue]], and in this case, it is 1. The associated eigenvector is the invariant measure: in this case, it is the Bernoulli measure. That is, <math>\mathcal{L}_T(P)= P.</math>

If one restricts <math>\mathcal{L}_T</math> to act on polynomials, then the eigenfunctions are (curiously) the [[Bernoulli polynomial]]s!<ref>Pierre Gaspard, "''r''-adic one-dimensional maps and the Euler summation formula", ''Journal of Physics A'', '''25''' (letter) L483–L485 (1992).</ref><ref>Dean J. Driebe, ''Fully Chaotic Maps and Broken Time Symmetry'', (1999) Kluwer Academic Publishers, Dordrecht, Netherlands {{ISBN|0-7923-5564-4}}</ref> This coincidence of naming was presumably not known to Bernoulli.

===The 2x mod 1 map===
[[Image:Exampleergodicmap.svg|thumb|The map ''T'' : [0,1) → [0,1), <math>x \mapsto 2x \bmod 1</math> preserves the [[Lebesgue measure]].]]
The above can be made more precise. Given an infinite string of binary digits <math>b_0, b_1, \cdots</math>, write

:<math>y=\sum_{n=0}^\infty \frac{b_n}{2^{n+1}}.</math>

The resulting <math>y</math> is a real number in the unit interval <math>0\le y\le 1.</math> The shift <math>T</math> induces a [[homomorphism]], also called <math>T</math>, on the unit interval. Since <math>T(b_0, b_1, b_2, \cdots) = (b_1, b_2, \cdots),</math> one can see that <math>T(y)=2y\bmod 1.</math> This map is called the [[dyadic transformation]]; for the doubly-infinite sequence of bits <math>\Omega=2^\mathbb{Z},</math> the induced homomorphism is the [[Baker's map]].
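The correspondence between the shift and the dyadic transformation can be checked numerically on a truncated bit string. In the illustrative Python sketch below, dropping the first bit agrees exactly with doubling modulo 1:

<syntaxhighlight lang="python">
def bits_to_real(bits):
    """Map a finite bit prefix (b_0, b_1, ...) to y = sum of b_n / 2^(n+1)."""
    return sum(b / 2**(n + 1) for n, b in enumerate(bits))

bits = [1, 0, 1, 1, 0, 0, 1, 0]
y = bits_to_real(bits)
# The shift T drops the first bit; on the unit interval this is y -> 2y mod 1.
print(bits_to_real(bits[1:]))  # 0.390625
print((2 * y) % 1)             # 0.390625
</syntaxhighlight>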
Consider now the space of functions in <math>y</math>. Given some <math>f(y)</math>, one can find that

:<math>\left[\mathcal{L}_T f\right](y) = \frac{1}{2}f\left(\frac{y}{2}\right)+\frac{1}{2}f\left(\frac{y+1}{2}\right).</math>

Restricting the action of the operator <math>\mathcal{L}_T</math> to functions that are polynomials, one finds that it has a [[discrete spectrum]] given by

:<math>\mathcal{L}_T B_n= 2^{-n}B_n</math>

where the <math>B_n</math> are the [[Bernoulli polynomials]]. Indeed, the Bernoulli polynomials obey the identity

:<math>\frac{1}{2}B_n\left(\frac{y}{2}\right)+\frac{1}{2}B_n\left(\frac{y+1}{2}\right) = 2^{-n}B_n(y).</math>

===The Cantor set===
Note that the sum

:<math>y=\sum_{n=0}^\infty \frac{b_n}{3^{n+1}}</math>

gives the [[Cantor function]], as conventionally defined. This is one reason why the set <math>\{H,T\}^\mathbb{N}</math> is sometimes called the [[Cantor set]].

===Odometer===
{{main article|Markov odometer}}
Another way to create a dynamical system is to define an [[Markov odometer|odometer]]. Informally, this is exactly what it sounds like: just "add one" to the first position, and let the odometer "roll over" by using [[carry bit]]s as the odometer rolls over. This is nothing more than base-two addition on the set of infinite strings. Since addition forms a [[group (mathematics)|group]], and the Bernoulli process was already given a topology, above, this provides a simple example of a [[topological group]].

In this case, the transformation <math>T</math> is given by

:<math>T\left(1,\dots,1,0,X_{k+1},X_{k+2},\dots\right) = \left(0,\dots,0,1,X_{k+1},X_{k+2},\dots \right).</math>

It leaves the Bernoulli measure invariant only for the special case of <math>p=1/2</math> (the "fair coin"); otherwise not. Thus, <math>T</math> is a [[measure preserving dynamical system]] in this case; otherwise, it is merely a [[conservative system]].

==Bernoulli sequence==
The term ''Bernoulli sequence'' is often used informally to refer to a [[realization (probability)|realization]] of a Bernoulli process. However, the term has an entirely different formal definition, as given below.

Suppose a Bernoulli process is formally defined as a single random variable (see the preceding section). For every infinite sequence ''x'' of coin flips, there is a [[sequence]] of integers

:<math>\mathbb{Z}^x = \{n\in \mathbb{Z} : X_n(x) = 1 \} \, </math>

called the ''Bernoulli sequence''{{Verify source|date=March 2010}} associated with the Bernoulli process. For example, if ''x'' represents a sequence of coin flips, then the associated Bernoulli sequence is the list of natural numbers or time-points for which the coin toss outcome is ''heads''.

So defined, a Bernoulli sequence <math>\mathbb{Z}^x</math> is also a random subset of the index set, the natural numbers <math>\mathbb{N}</math>.

[[Almost all]] Bernoulli sequences <math>\mathbb{Z}^x</math> are [[ergodic sequence]]s.{{Verify source|date=March 2010}}

==Randomness extraction==
{{Main article|Randomness extractor}}
From any Bernoulli process one may derive a Bernoulli process with ''p'' = 1/2 by the [[von Neumann extractor]], the earliest [[randomness extractor]], which actually extracts uniform randomness.

===Basic von Neumann extractor===
Represent the observed process as a sequence of zeroes and ones, or bits, and group that input stream into non-overlapping pairs of successive bits, such as (11)(00)(10)... . Then for each pair,
* if the bits are equal, discard the pair;
* if the bits are not equal, output the first bit.

This table summarizes the computation:

{|
! input !! output
|-
| 00 || discard
|-
| 01 || 0
|-
| 10 || 1
|-
| 11 || discard
|}

For example, an input stream of eight bits ''10011011'' would be grouped into pairs as ''(10)(01)(10)(11)''. Then, according to the table above, these pairs are translated into the output of the procedure: ''(1)(0)(1)()'' (=''101'').

In the output stream 0 and 1 are equally likely, as 10 and 01 are equally likely in the original, both having probability ''p''(1−''p'') = (1−''p'')''p''. This extraction of uniform randomness does not require the input trials to be independent, only [[uncorrelated]]. More generally, it works for any [[exchangeable random variables|exchangeable sequence]] of bits: all sequences that are finite rearrangements of one another are equally likely.

The von Neumann extractor uses two input bits to produce either zero or one output bits, so the output is shorter than the input by a factor of at least 2. On average the computation discards the proportion ''p''<sup>2</sup> + (1 − ''p'')<sup>2</sup> of the input pairs (00 and 11), which is near one when ''p'' is near zero or one, and is minimized at 1/2 when ''p'' = 1/2 for the original process (in which case the output stream is 1/4 the length of the input stream on average).

Von Neumann (classical) main operation [[pseudocode]]:
<syntaxhighlight lang="text">
if (Bit1 ≠ Bit2) {
    output(Bit1)
}
</syntaxhighlight>
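A direct Python transcription of this pseudocode (illustrative; the input is a list of 0/1 bits) reproduces the eight-bit example above:

<syntaxhighlight lang="python">
def von_neumann_extract(bits):
    """Classic von Neumann extractor: scan non-overlapping pairs, emit the
    first bit of each unequal pair, and discard equal pairs."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([1, 0, 0, 1, 1, 0, 1, 1]))  # [1, 0, 1]
</syntaxhighlight>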
===Iterated von Neumann extractor===
{{Cite check|section|date=January 2014|talk=Iterated Von Neumann extractor}}
This decrease in efficiency, or waste of randomness present in the input stream, can be mitigated by iterating the algorithm over the input data. This way the output can be made to be "arbitrarily close to the entropy bound".<ref name=Peres>{{cite journal|last=Peres|first=Yuval|title=Iterating Von Neumann's Procedure for Extracting Random Bits|journal=The Annals of Statistics|date=March 1992|volume=20|issue=1|pages=590–597|doi=10.1214/aos/1176348543|doi-access=free}}</ref>

The iterated version of the von Neumann algorithm, also known as advanced multi-level strategy (AMLS),<ref>{{cite web |url=http://www.eecs.harvard.edu/~michaelm/coinflipext.pdf |archive-url=https://web.archive.org/web/20100331021838/http://www.eecs.harvard.edu/~michaelm/coinflipext.pdf |archive-date=2010-03-31 |url-status=live |title=Tossing a Biased Coin |publisher=eecs.harvard.edu |access-date=2018-07-28}}</ref> was introduced by Yuval Peres in 1992.<ref name=Peres/> It works recursively, recycling "wasted randomness" from two sources: the sequence of discard/no-discard decisions, and the values of the discarded pairs (0 for 00, and 1 for 11). It relies on the fact that, given the sequence already generated, both of those sources are still exchangeable sequences of bits, and thus eligible for another round of extraction. While such generation of additional sequences can be iterated infinitely to extract all available entropy, doing so would require an infinite amount of computational resources; the number of iterations is therefore typically fixed to a low value, either set in advance or calculated at runtime.

More concretely, on an input sequence, the algorithm consumes the input bits in pairs, generating output together with two new sequences (the parenthesized labels follow the notation of the AMLS paper):

{|
! input !! output !! new sequence 1 (A) !! new sequence 2 (1)
|-
| 00 || ''none'' || 0 || 0
|-
| 01 || 0 || 1 || ''none''
|-
| 10 || 1 || 1 || ''none''
|-
| 11 || ''none'' || 0 || 1
|}

(If the length of the input is odd, the last bit is completely discarded.) The algorithm is then applied recursively to the two new sequences, until the input is empty.

Example: The input stream from the AMLS paper, ''11001011101110'' using 1 for H and 0 for T, is processed this way:

{|
! step number !! input !! output !! new sequence 1 (A) !! new sequence 2 (1)
|-
| 0 || (11)(00)(10)(11)(10)(11)(10) || ()()(1)()(1)()(1) || (0)(0)(1)(0)(1)(0)(1) || (1)(0)()(1)()(1)()
|-
| 1 || (10)(11)(00)(10)(10)() || (1)()()(1)(1) || (1)(0)(0)(1)(1) || ()(1)(0)()()
|-
| 2 || (10)(10)(01)() || (1)(1)(0) || (1)(1)(1) || ()()()
|-
| 3 || (11)() || () || (0) || (1)
|-
| 4 || (10) || (1) || (1) || ()
|-
| 5 || () || () || () || ()
|}

Starting from step 1, the input is a concatenation of sequence 2 and sequence 1 from the previous step (the order is arbitrary but should be fixed). The final output is ''()()(1)()(1)()(1)(1)()()(1)(1)(1)(1)(0)()(1)'' (=''1111111101''), so from 14 bits of input, 10 bits of output were generated, as opposed to 3 bits through the von Neumann algorithm alone.

Each input pair always contributes exactly 2 bits across the output channels per round (compared with a variable zero to one bit in the classical von Neumann procedure), which allows for constant-time implementations that are resistant to [[Timing attack|timing attacks]].

Von Neumann–Peres (iterated) main operation pseudocode:
<syntaxhighlight lang="text">
if (Bit1 ≠ Bit2) {
    output(1, Sequence1)
    output(Bit1)
} else {
    output(0, Sequence1)
    output(Bit1, Sequence2)
}
</syntaxhighlight>
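An illustrative Python sketch of the iterated procedure (it fixes the concatenation order as sequence 2 followed by sequence 1, matching the worked example, and caps the number of rounds as discussed above) reproduces the ten output bits:

<syntaxhighlight lang="python">
def peres_extract(bits, rounds=8):
    """Iterated von Neumann-Peres (AMLS) extractor: after each classic pass,
    recycle the discard/no-discard sequence (1) and the values of the
    discarded pairs (2) for further rounds of extraction."""
    out = []
    while len(bits) >= 2 and rounds > 0:
        seq1, seq2 = [], []
        for i in range(0, len(bits) - 1, 2):
            a, b = bits[i], bits[i + 1]
            if a != b:
                out.append(a)   # classic von Neumann output
                seq1.append(1)  # the pair differed
            else:
                seq1.append(0)  # the pair was equal (discarded) ...
                seq2.append(a)  # ... and its value is recycled
        bits = seq2 + seq1      # fixed concatenation order, as in the example
        rounds -= 1
    return out

bits = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0]  # 11001011101110
print(peres_extract(bits))  # [1, 1, 1, 1, 1, 1, 1, 1, 0, 1] -- 10 bits
</syntaxhighlight>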
Another tweak was presented in 2016, based on the observation that the Sequence2 channel doesn't provide much throughput, and a hardware implementation with a finite number of levels can benefit from discarding it earlier in exchange for processing more levels of Sequence1.<ref>{{cite conference |url=https://www.esat.kuleuven.be/cosic/publications/article-2628.pdf |archive-url=https://web.archive.org/web/20190212011337/https://www.esat.kuleuven.be/cosic/publications/article-2628.pdf |archive-date=2019-02-12 |url-status=live |title=Iterating Von Neumann's post-processing under hardware constraints |last1=Rožić |first1=Vladimir |last2=Yang |first2=Bohan |last3=Dehaene |first3=Wim |last4=Verbauwhede |first4=Ingrid |date=3–5 May 2016 |place=Maclean, VA, USA |conference=2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) |doi=10.1109/HST.2016.7495553}}</ref>

==References==
{{reflist}}

==Further reading==
* Carl W. Helstrom, ''Probability and Stochastic Processes for Engineers'', (1984) Macmillan Publishing Company, New York {{ISBN|0-02-353560-1}}

==External links==
* [http://www.r-statistics.com/2011/11/diagram-for-a-bernoulli-process-using-r/ Using a binary tree diagram for describing a Bernoulli process]

{{Stochastic processes}}

[[Category:Stochastic processes]]