{{Short description|Type of mathematical sequence}} In [[mathematics]], a '''low-discrepancy sequence''' is a [[sequence]] with the property that for all values of <math>N</math>, its subsequence <math>x_1, \ldots, x_N</math> has a low [[discrepancy of a sequence|discrepancy]]. Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set ''B'' is close to proportional to the [[Measure (mathematics)|measure]] of ''B'', as would happen on average (but not for particular samples) in the case of an [[equidistributed sequence]]. Specific definitions of discrepancy differ regarding the choice of ''B'' ([[hyperspheres]], [[Hypercube|hypercubes]], etc.) and how the discrepancy for every ''B'' is computed (usually normalized) and combined (usually by taking the worst value). Low-discrepancy sequences are also called '''quasirandom''' sequences, due to their common use as a replacement for uniformly distributed [[random sequence|random numbers]]. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor [[pseudorandom]], but such sequences share some properties of random variables, and in certain applications, such as the [[quasi-Monte Carlo method]], their lower discrepancy is an important advantage. ==Applications== [[Image:Subrandom Kurtosis.gif|thumb|370px|right|Error in estimated kurtosis as a function of the number of datapoints. 'Additive quasirandom' gives the maximum error when ''c'' = ({{radic|5}} − 1)/2. 'Random' gives the average error over six runs of random numbers, where the average is taken to reduce the magnitude of the wild fluctuations.]] Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly. 
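As a rough illustrative sketch (not part of the article; the function name and tolerances are ours), the excess kurtosis of the uniform distribution, −6/5, can be recovered from an additive quasirandom sequence built with the golden-ratio constant ''c'' mentioned in the figure caption:

```python
# Sketch: estimate the excess kurtosis of U(0,1) (exact value -6/5)
# from an additive quasirandom sequence s_n = n*c mod 1.
def excess_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n                              # sample mean
    m2 = sum((x - m) ** 2 for x in xs) / n       # second central moment
    m4 = sum((x - m) ** 4 for x in xs) / n       # fourth central moment
    return m4 / m2 ** 2 - 3.0

c = 0.6180339887498949        # fractional part of the golden ratio
N = 10_000
quasi = [(n * c) % 1.0 for n in range(1, N + 1)]
print(excess_kurtosis(quasi))  # close to -1.2
```

Because the quasirandom points cover [0, 1) very evenly, the moment estimates converge quickly and without the wild fluctuations of a pseudorandom sample of the same size.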
Two useful applications are in finding the [[characteristic function (probability theory)|characteristic function]] of a [[probability density function]], and in finding the [[derivative]] function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order [[moment (mathematics)|moments]] to be calculated to high accuracy very quickly. Applications that do not involve sorting include finding the [[mean]], [[standard deviation]], [[skewness]] and [[kurtosis]] of a statistical distribution, and finding the [[integral]] and global [[maxima and minima]] of difficult deterministic functions. Quasirandom numbers can also be used to provide starting points for deterministic algorithms that only work locally, such as [[Newton–Raphson iteration]]. Quasirandom numbers can also be combined with search algorithms. With a search algorithm, quasirandom numbers can be used to find the [[mode (statistics)|mode]], [[median]], [[confidence intervals]] and [[cumulative distribution function|cumulative distribution]] of a statistical distribution, and all [[local minima]] and all solutions of deterministic functions. === Low-discrepancy sequences in numerical integration === Various methods of [[numerical integration]] can be phrased as approximating the integral of a function <math>f</math> in some interval, e.g. <nowiki>[0,1]</nowiki>, as the average of the function evaluated at a set <math>\{x_1, \dots, x_N\}</math> in that interval: :<math> \int_0^1 f(u)\,du \approx \frac{1}{N}\,\sum_{i=1}^N f(x_i). </math> If the points are chosen as <math>x_i = i/N</math>, this is the ''rectangle rule''. If the points are chosen randomly (or [[pseudorandom]]ly), this is the ''[[Monte Carlo method]]''. If the points are chosen as elements of a low-discrepancy sequence, this is the ''quasi-Monte Carlo method''. 
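The three choices of evaluation points can be compared in a minimal sketch (not from the article; the helper names are ours). The quasi-Monte Carlo points here use the base-2 radical inverse of the van der Corput sequence, which the article defines in a later section:

```python
import random

def van_der_corput(n, base=2):
    # Radical inverse g_b(n): mirror the base-b digits of n about the radix point.
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def average(f, points):
    return sum(map(f, points)) / len(points)

f = lambda x: x ** 3            # exact integral over [0,1] is 1/4
N = 1000

rectangle = average(f, [i / N for i in range(1, N + 1)])       # rectangle rule
random.seed(0)
monte_carlo = average(f, [random.random() for _ in range(N)])  # Monte Carlo
quasi_mc = average(f, [van_der_corput(n) for n in range(1, N + 1)])  # quasi-MC

print(rectangle, monte_carlo, quasi_mc)
```

All three approach 1/4; for a smooth integrand like this the quasi-Monte Carlo error is bounded by the Koksma–Hlawka inequality discussed below, via the low star discrepancy of the van der Corput points.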
A remarkable result, the '''Koksma–Hlawka inequality''' (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on <math>f</math>, and the other of which is the discrepancy of the set <math>\{x_1, \dots, x_N\}</math>. It is convenient to construct the set <math>\{x_1, \dots, x_N\}</math> in such a way that if a set with <math>N+1</math> elements is constructed, the previous <math>N</math> elements need not be recomputed. The rectangle rule uses point sets that have low discrepancy, but in general the elements must be recomputed if <math>N</math> is increased. Elements need not be recomputed in the random Monte Carlo method if <math>N</math> is increased, but the point sets do not have minimal discrepancy. By using low-discrepancy sequences we aim for low discrepancy with no need for recomputation; however, a sequence that allows no recomputation can only achieve somewhat worse discrepancy than the best point set of a fixed size. ==Definition of discrepancy== The ''discrepancy'' of a set <math>P = \{x_1, \dots, x_N\}</math> is defined, using [[Harald Niederreiter|Niederreiter's]] notation, as :<math> D_N(P) = \sup_{B\in J} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right|</math> where <math>\lambda_s</math> is the <math>s</math>-dimensional [[Lebesgue measure]], <math>A(B;P)</math> is the number of points in <math>P</math> that fall into <math>B</math>, and <math>J</math> is the set of <math>s</math>-dimensional intervals or boxes of the form :<math> \prod_{i=1}^s [a_i, b_i) = \{ \mathbf{x} \in \mathbf{R}^s : a_i \le x_i < b_i \} \, </math> where <math> 0 \le a_i < b_i \le 1 </math>. The ''star-discrepancy'' <math>D^*_N(P)</math> is defined similarly, except that the supremum is taken over the set <math>J^*</math> of rectangular boxes of the form :<math> \prod_{i=1}^s [0, u_i) </math> where <math>u_i</math> is in the half-open interval <nowiki>[0, 1)</nowiki>. The two are related by :<math>D^*_N \le D_N \le 2^s D^*_N . 
\,</math> ''Note'': With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, <math>L^2</math>-discrepancy or modified centered <math>L^2</math>-discrepancy are also used extensively to compare the quality of uniform point sets. Both are much easier to calculate for large <math>N</math> and <math>s</math>. ==The Koksma–Hlawka inequality== Let <math>\overline{I}^s</math> be the <math>s</math>-dimensional unit cube, <math>\overline{I}^s = [0, 1] \times \cdots \times [0, 1]</math>. Let <math>f</math> have [[bounded variation]] <math>V(f)</math> on <math>\overline{I}^s</math> in the sense of [[G. H. Hardy|Hardy]] and Krause. Then for any <math>x_1, \ldots, x_N</math> in <math>I^s = [0, 1)^s = [0, 1) \times \cdots \times [0, 1)</math>, : <math> \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| \le V(f)\, D_N^* (x_1,\ldots,x_N). </math> The [[Jurjen Ferdinand Koksma|Koksma]]–[[Edmund Hlawka|Hlawka]] inequality is sharp in the following sense: For any point set <math>\{x_1,\ldots,x_N\}</math> in <math>I^s</math> and any <math>\varepsilon>0</math>, there is a function <math>f</math> with bounded variation and <math>V(f) = 1</math> such that :<math> \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right|>D_{N}^{*}(x_1,\ldots,x_N)-\varepsilon. </math> Therefore, the quality of a numerical integration rule depends only on the discrepancy <math>D^*_N(x_1,\ldots,x_N)</math>. ==The formula of Hlawka–Zaremba== Let <math>D=\{1,2,\ldots,d\}</math>. For <math>\emptyset\neq u\subseteq D</math> we write :<math> dx_u:=\prod_{j\in u} dx_j </math> and denote by <math>(x_u,1)</math> the point obtained from ''x'' by replacing the coordinates not in ''u'' by <math>1</math>. 
Then :<math> \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du= \sum_{\emptyset\neq u\subseteq D}(-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u,1)\frac{\partial^{|u|}}{\partial x_u}f(x_u,1) \, dx_u, </math> where <math>\operatorname{disc}(z)= \frac{1}{N}\sum_{i=1}^N \prod_{j=1}^d 1_{[0,z_j)}(x_{i,j}) - \prod_{j=1}^d z_j</math> is the discrepancy function. ==The ''L<sup>2</sup>'' version of the Koksma–Hlawka inequality== Applying the [[Cauchy–Schwarz inequality]] for integrals and sums to the Hlawka–Zaremba identity, we obtain an <math>L^2</math> version of the Koksma–Hlawka inequality: : <math> \left|\frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du\right|\le \|f\|_d \operatorname{disc}_d (\{t_i\}), </math> where :<math> \operatorname{disc}_d(\{t_i\})=\left(\sum_{\emptyset\neq u\subseteq D} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u,1)^2 \, dx_u\right)^{1/2} </math> and :<math> \|f\|_d = \left(\sum_{u\subseteq D} \int_{[0,1]^{|u|}} \left|\frac{\partial^{|u|}}{\partial x_u} f(x_u,1)\right|^2 dx_u\right)^{1/2}. </math> <math>L^2</math> discrepancy is of high practical importance because fast explicit calculations are possible for a given point set. This makes it easy to create point-set optimizers that use <math>L^2</math> discrepancy as a criterion. ==The Erdős–Turán–Koksma inequality== It is computationally hard to find the exact value of the discrepancy of large point sets. The [[Paul Erdős|Erdős]]–[[Turán]]–[[Jurjen Ferdinand Koksma|Koksma]] inequality provides an upper bound. Let <math>x_1,\ldots,x_N</math> be points in <math>I^s</math> and <math>H</math> be an arbitrary positive integer. Then :<math> D_{N}^{*}(x_1,\ldots,x_N)\leq \left(\frac{3}{2}\right)^s \left( \frac{2}{H+1}+ \sum_{0<\|h\|_{\infty}\leq H}\frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i\langle h,x_n\rangle} \right| \right) </math> where :<math> r(h)=\prod_{i=1}^s\max\{1,|h_i|\}\quad\mbox{for}\quad h=(h_1,\ldots,h_s)\in\Z^s. 
</math> ==The main conjectures== '''Conjecture 1.''' There is a constant <math>c_s</math> depending only on the dimension <math>s</math>, such that :<math>D_{N}^{*}(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{s-1}}{N}</math> for any finite point set <math>\{x_1,\ldots,x_N\}</math>. '''Conjecture 2.''' There is a constant <math>c'_s</math> depending only on <math>s</math>, such that :<math>D_{N}^{*}(x_1,\ldots,x_N)\geq c'_s\frac{(\ln N)^{s}}{N}</math> for infinitely many <math>N</math> for any infinite sequence <math>x_1,x_2,x_3,\ldots</math>. These conjectures are equivalent. They have been proved for <math>s \leq 2</math> by [[Wolfgang M. Schmidt | W. M. Schmidt]]. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due to [[Michael Lacey (mathematician)|Michael Lacey]] and collaborators. ==Lower bounds== Let <math>s=1</math>. Then :<math> D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2N} </math> for any finite point set <math>\{x_1, \dots, x_N\}</math>. Let <math>s=2</math>. [[Wolfgang M. Schmidt|W. M. Schmidt]] proved that for any finite point set <math>\{x_1, \dots, x_N\}</math>, :<math> D_N^*(x_1,\ldots,x_N)\geq C\frac{\log N}{N} </math> where :<math> C=\max_{a\geq3}\frac{1}{16}\frac{a-2}{a\log a}=0.023335\dots. </math> For arbitrary dimension <math>s > 1</math>, [[Klaus Roth|K. F. Roth]] proved that :<math> D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2^{4s}}\frac{1}{((s-1)\log2)^\frac{s-1}{2}}\frac{\log^{\frac{s-1}{2}}N}{N} </math> for any finite point set <math>\{x_1, \dots, x_N\}</math>. József Beck <ref>{{cite journal|title=A two-dimensional van Aardenne-Ehrenfest theorem in irregularities of distribution|journal= Compositio Mathematica|volume= 72 |issue=3|year=1989|pages= 269–339|s2cid=125940424|last=Beck|first= József|url=https://eudml.org/doc/89992|mr= 1032337 | zbl= 0691.10041}}</ref> established a double log improvement of this result in three dimensions. This was improved by D. Bilyk and [[Michael Lacey (mathematician)| M. T. 
Lacey]] to a power of a single logarithm. The best known bound for ''s'' > 2 is due to D. Bilyk, [[Michael Lacey (mathematician)|M. T. Lacey]] and A. Vagharshakyan.<ref>{{Cite journal|doi=10.1016/j.jfa.2007.09.010|title=On the Small Ball Inequality in all dimensions |year=2008 |last1=Bilyk |first1=Dmitriy |last2=Lacey |first2=Michael T. |last3=Vagharshakyan |first3=Armen |journal=Journal of Functional Analysis |volume=254 |issue=9 |pages=2470–2502 |s2cid=14234006 |doi-access=free |arxiv=0705.4619 }}</ref> There exists a <math>t>0</math> depending on ''s'' so that :<math> D_N^*(x_1,\ldots,x_N)\geq t \frac{\log^{\frac{s-1}{2}+t}N}{N} </math> for any finite point set <math>\{x_1, \dots, x_N\}</math>. ==Construction of low-discrepancy sequences== Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution. Constructions of sequences are known such that :<math> D_N^{*}(x_1,\ldots,x_N)\leq C\frac{(\ln N)^{s}}{N} </math> where <math>C</math> is a certain constant depending on the sequence. By Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the [[van der Corput sequence]], the [[Halton sequence]]s, and the [[Sobol sequence|Sobol’ sequence]]s. One general limitation is that construction methods can usually only guarantee the order of convergence. In practice, low discrepancy can only be achieved if <math>N</math> is large enough, and for large <math>s</math> this minimum <math>N</math> can be very large. This means running a Monte Carlo analysis with e.g. <math>s=20</math> variables and <math>N=1000</math> points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement {{Citation needed|date=July 2023}}. 
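In one dimension, the star discrepancy that these constructions aim to minimize can be computed exactly from the sorted points. A minimal sketch (the function name is ours) using the classical formula:

```python
def star_discrepancy_1d(points):
    """Exact one-dimensional star discrepancy of a finite point set,
    via the classical formula over the sorted points x_(1) <= ... <= x_(N):
        D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N)."""
    xs = sorted(points)
    n = len(xs)
    return max(
        max((i + 1) / n - x, x - i / n)
        for i, x in enumerate(xs)
    )

# The centered regular grid (i - 1/2)/N attains the minimal value 1/(2N),
# matching the one-dimensional lower bound quoted above.
n = 100
centered = [(i + 0.5) / n for i in range(n)]
print(star_discrepancy_1d(centered))   # approximately 1/(2N) = 0.005
```

Such a routine makes it easy to compare how quickly the discrepancy of different one-dimensional generators decays as <math>N</math> grows.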
===Random numbers=== Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers <math>r_i</math> on <math>[0,0.5)</math> and construct quasirandom numbers <math>s_i</math> which are uniform on <math>[0,1)</math> using: <math>s_i = r_i</math> for <math>i</math> odd and <math>s_i = 0.5 + r_i</math> for <math>i</math> even. A second way, starting from the same random numbers, is to construct a random walk with offset 0.5: : <math>s_i = s_{i-1} + 0.5+ r_i \pmod 1. \, </math> That is, take the previous quasirandom number, add 0.5 and the random number, and take the result [[modular arithmetic|modulo]] 1. For more than one dimension, [[Latin squares]] of the appropriate dimension can be used to provide offsets to ensure that the whole domain is covered evenly. [[Image:Subrandom 2D.gif|thumb|270px|right|Coverage of the unit square. Left for additive quasirandom numbers with ''c'' = 0.5545497..., 0.308517... Right for random numbers. From top to bottom: 10, 100, 1000, 10000 points.]] ===Additive recurrence=== For any irrational <math>\alpha</math>, the sequence : <math>s_n = \{s_0 + n\alpha\}</math> has discrepancy tending to zero, at a rate close to <math>1/N</math> for well-chosen <math>\alpha</math>. Note that the sequence can be defined recursively by : <math>s_{n+1} = (s_n + \alpha)\bmod 1 \;.</math> A good value of <math>\alpha</math> gives lower discrepancy than a sequence of independent uniform random numbers. The discrepancy can be bounded by the [[approximation exponent]] of <math>\alpha</math>. 
If the approximation exponent is <math>\mu</math>, then for any <math>\varepsilon>0</math>, the following bound holds:<ref name="kn05">{{Harvnb|Kuipers|Niederreiter|2005|p=123}}</ref> : <math> D_N((s_n)) = O_\varepsilon (N^{-1/(\mu-1)+\varepsilon}).</math> By the [[Thue–Siegel–Roth theorem]], the approximation exponent of any irrational algebraic number is 2, giving a bound of <math>N^{-1+\varepsilon}</math> above. The recurrence relation above is similar to the recurrence relation used by a [[linear congruential generator]], a poor-quality pseudorandom number generator:<ref>{{Cite book|first=Donald E. |last= Knuth |title=[[The Art of Computer Programming]] |volume=2 |chapter=Chapter 3 – Random Numbers}}</ref> : <math>r_i = (a r_{i-1} + c) \bmod m</math> For the low discrepancy additive recurrence above, ''a'' and ''m'' are chosen to be 1. Note, however, that this will not generate independent random numbers, so should not be used for purposes requiring independence. The value of <math>c</math> with lowest discrepancy is the fractional part of the [[golden ratio]]:<ref> {{Cite web|first=Malte |last=Skarupke|url=https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/ |title=Fibonacci Hashing: The Optimization that the World Forgot|date=16 June 2018 |quote=One property of the Golden Ratio is that you can use it to subdivide any range roughly evenly ... if you don’t know ahead of time how many steps you’re going to take}} </ref> : <math>c = \frac{\sqrt{5}-1}{2} = \varphi - 1 \approx 0.618034.</math> Another value that is nearly as good is the fractional part of the [[silver ratio]], which is the fractional part of the square root of 2: : <math>c = \sqrt{2}-1 \approx 0.414214. \, </math> In more than one dimension, separate quasirandom numbers are needed for each dimension. 
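The one-dimensional additive recurrence can be sketched in a few lines (the function name is ours):

```python
def additive_recurrence(c, n, s0=0.0):
    """First n terms of the low-discrepancy recurrence s_k = (s_{k-1} + c) mod 1."""
    out, s = [], s0
    for _ in range(n):
        s = (s + c) % 1.0
        out.append(s)
    return out

# Fractional part of the golden ratio: the constant with lowest discrepancy.
c = (5 ** 0.5 - 1) / 2
points = additive_recurrence(c, 5)
print(points)
```

Using the fractional part of the silver ratio, <code>2 ** 0.5 - 1</code>, in place of <code>c</code> gives a nearly as well distributed sequence.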
A convenient set of values is the square roots of the [[primes]] from two up, all taken modulo 1: : <math>c = \sqrt{2}, \sqrt{3}, \sqrt{5}, \sqrt{7}, \sqrt{11}, \ldots \, </math> However, a set of values based on the generalised golden ratio has been shown to produce more evenly distributed points. <ref>{{Cite web|first=Martin |last=Roberts|year= 2018 |url=https://web.archive.org/web/20250301162105/https://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/ |title=The Unreasonable Effectiveness of Quasirandom Sequences}} </ref> The [[list of pseudorandom number generators]] lists methods for generating independent pseudorandom numbers. ''Note'': In low dimensions, the additive recurrence leads to uniform point sets of good quality, but for larger <math>s</math> (such as <math> s>8 </math>) other point set generators can offer much lower discrepancies. ===van der Corput sequence=== {{Main article|van der Corput sequence}} Let :<math> n=\sum_{k=0}^{L-1}d_k(n)b^k </math> be the <math>b</math>-ary representation of the positive integer <math>n \geq 1</math>, i.e. <math>0 \leq d_k(n) < b</math>. Set :<math> g_b(n)=\sum_{k=0}^{L-1}d_k(n)b^{-k-1}. </math> Then there is a constant <math>C</math> depending only on <math>b</math> such that <math>(g_b(n))_{n \geq 1}</math> satisfies :<math> D^*_N(g_b(1),\dots,g_b(N))\leq C\frac{\log N}{N}, </math> where <math>D^*_N</math> is the '''[[#Definition of discrepancy|star discrepancy]]'''. ===Halton sequence=== [[Image:Halton sequence 2D.svg|thumb|right|First 256 points of the (2,3) Halton sequence]] {{Main article|Halton sequence}} The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let ''s'' be an arbitrary dimension and ''b''<sub>1</sub>, ..., ''b''<sub>''s''</sub> be arbitrary [[coprime]] integers greater than 1. Define :<math> x(n)=(g_{b_1}(n),\dots,g_{b_s}(n)). 
</math> Then there is a constant ''C'' depending only on ''b''<sub>1</sub>, ..., ''b''<sub>''s''</sub> such that the sequence {''x''(''n'')}<sub>''n''≥1</sub> is an ''s''-dimensional sequence with :<math> D^*_N(x(1),\dots,x(N))\leq C\frac{(\log N)^s}{N}. </math> ===Hammersley set=== [[File:Hammersley set 2D.svg|thumb|right|2D Hammersley set of size 256]] Let <math>b_1,\ldots,b_{s-1}</math> be [[coprime]] positive integers greater than 1. For given <math>s</math> and <math>N</math>, the <math>s</math>-dimensional [[John Hammersley|Hammersley]] set of size <math>N</math> is defined by<ref name="HammersleyHandscomb1964">{{cite book|title=Monte Carlo Methods|last1=Hammersley|first1=J. M.|last2=Handscomb|first2=D. C.|year=1964|doi=10.1007/978-94-009-5819-7|isbn=978-94-009-5821-0 }}</ref> :<math> x(n)=\left(g_{b_1}(n),\dots,g_{b_{s-1}}(n),\frac{n}{N}\right) </math> for <math>n = 1, \ldots, N</math>. Then :<math> D^*_N(x(1),\dots,x(N))\leq C\frac{(\log N)^{s-1}}{N} </math> where <math>C</math> is a constant depending only on <math>b_1, \ldots, b_{s-1}</math>. ''Note'': The formulas show that the Hammersley set is derived from the Halton sequence, with one more dimension gained for free by adding a linear sweep. This is only possible if <math>N</math> is known upfront. A set of equally spaced points is also, in general, the point set with the lowest possible discrepancy in one dimension. Unfortunately, for higher dimensions, no such "discrepancy record sets" are known. For <math>s=2</math>, most low-discrepancy point set generators deliver at least near-optimum discrepancies. ===Sobol sequence=== {{Main article|Sobol sequence}} The Antonov–Saleev variant of the Sobol’ sequence generates numbers between zero and one directly as binary fractions of length <math>w</math>, from a set of <math>w</math> special binary fractions <math>V_i, i = 1, 2, \dots, w</math>, called direction numbers. The bits of the [[Gray code]] of <math>i</math>, <math>G(i)</math>, are used to select direction numbers. 
To get the Sobol’ sequence value <math>s_i</math>, take the [[exclusive or]] of the binary value of the Gray code of <math>i</math> with the appropriate direction number. The number of dimensions required affects the choice of <math>V_i</math>. === Poisson disk sampling === {{main article|Supersampling#Poisson disk}} [[Supersampling#Poisson disk|Poisson disk sampling]] is popular in video games as a way to rapidly place objects so that they appear random but are guaranteed to be separated by at least a specified minimum distance.<ref>{{Cite magazine|url=http://devmag.org.za/2009/05/03/poisson-disk-sampling/ |title=Poisson Disk Sampling |magazine=Dev.Mag | issue= 21 |date= March 2008| first=Herman |last=Tulleken |pages=21–25 }} </ref> This does not guarantee low discrepancy in the sense above (as, for example, the Sobol’ sequence does), but it does achieve significantly lower discrepancy than pure random sampling. These sampling patterns are designed on the basis of frequency analysis rather than discrepancy, producing so-called "blue noise" patterns. ==Graphical examples== The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type. For comparison, 10000 elements of a sequence of pseudorandom points are also shown. The low-discrepancy sequence was generated by [[ACM Transactions on Mathematical Software|TOMS]] algorithm 659.<ref>{{cite journal|doi=10.1145/42288.214372|title=Algorithm 659 |year=1988 |last1=Bratley |first1=Paul |last2=Fox |first2=Bennett L. |journal=ACM Transactions on Mathematical Software |volume=14 |pages=88–100 |s2cid=17325779 |doi-access=free }}</ref> An implementation of the algorithm in [[Fortran]] is available from [[Netlib]]. 
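The Gray-code update described in the Sobol’ section above can be sketched in one dimension. This is only an illustration of the Antonov–Saleev XOR step, not a real Sobol’ generator: the direction numbers here are the trivial choice <math>V_i = 2^{-i}</math>, for which the output reduces to the base-2 van der Corput sequence in Gray-code order, whereas a real generator derives the <math>V_i</math> from primitive polynomials.

```python
W = 32                                  # length w of the binary fractions

# Hypothetical direction numbers for illustration only: V_i = 2^(-i),
# stored as W-bit integers.  A real Sobol' generator derives these from
# primitive polynomials.
V = [1 << (W - i) for i in range(1, W + 1)]

def sobol_style_1d(n_points):
    """Antonov-Saleev update: XOR in the direction number selected by the
    index of the lowest zero bit of the point counter (Gray-code order)."""
    x, out = 0, []
    for i in range(n_points):
        c = 0
        while (i >> c) & 1:             # find the lowest zero bit of i
            c += 1
        x ^= V[c]                       # flip exactly one direction number
        out.append(x / 2 ** W)
    return out

print(sobol_style_1d(4))                # [0.5, 0.75, 0.25, 0.375]
```

Because each step flips a single direction number, consecutive values differ in one XOR operation, which is what makes the Gray-code variant fast.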
{| style="border-spacing: 2px; border: 1px solid darkgray;" |- style="text-align: center;" | [[File:Low discrepancy 100.png|Low discrepancy 100.png]] | [[File:Low discrepancy 1000.png|Low discrepancy 1000.png]] |- style="text-align: center;" | The first 100 points in a low-discrepancy sequence of the [[Sobol sequence|Sobol']] type. | The first 1000 points in the same sequence. These 1000 comprise the first 100, with 900 more points. |} {| style="border-spacing: 2px; border: 1px solid darkgray;" |- style="text-align: center;" | [[File:Low discrepancy 10000.png|Low discrepancy 10000.png]] | [[File:Random 10000.png|Random 10000.png]] |- style="text-align: center;" | The first 10000 points in the same sequence. These 10000 comprise the first 1000, with 9000 more points. | For comparison, here are the first 10000 points in a sequence of uniformly distributed pseudorandom numbers. Regions of higher and lower density are evident. |} {{Clear}} ==See also== * [[Discrepancy theory]] * [[Markov chain Monte Carlo]] * [[Quasi-Monte Carlo method]] * [[Sparse grid]] * [[Systematic sampling]] ==Notes== {{reflist}} == References == * {{cite book|isbn=978-0-521-19159-3|title=Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration |last1=Dick |first1=Josef |last2=Pillichshammer |first2=Friedrich |year=2010 |publisher=Cambridge University Press }} * {{Citation | last1=Kuipers | first1=L. | last2= Niederreiter |author2-link= Harald Niederreiter | first2=H. | title = Uniform distribution of sequences | publisher=[[Dover Publications]] | year=2005 | isbn=0-486-45019-8 }} * {{Cite book|author= [[Harald Niederreiter]] |title=Random Number Generation and Quasi-Monte Carlo Methods|publisher= Society for Industrial and Applied Mathematics|year= 1992 |isbn=0-89871-295-5 }} * {{cite book|isbn=3-540-62606-9|title=Sequences, Discrepancies and Applications |last1=Drmota |first1=Michael |last2=Tichy |first2=Robert F. 
|date = 1997 |series=Lecture Notes in Math|volume=1651|publisher=Springer}} * {{cite book|isbn=0-521-43108-5|title=[[Numerical Recipes]] in C |publisher=Cambridge University Press |first1=William H.|last1= Press|last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |last4=Vetterling |first4=William T. |year=1992|edition=2nd |at=see Section 7.7 for a less technical discussion of low-discrepancy sequences}} ==External links== * [http://calgo.acm.org/ Collected Algorithms of the ACM] ''(See algorithms 647, 659, and 738.)'' * [https://www.gnu.org/software/gsl/doc/html/qrng.html Quasi-Random Sequences] from the [[GNU Scientific Library]] * [http://finmathblog.blogspot.com/2013/09/quasi-random-sampling-subject-to-linear.html Quasi-random sampling subject to constraints] at FinancialMathematics.Com * [http://kirillsprograms.com/top_3Sobol.php C++ generator of Sobol’ sequence] * [https://scipy.github.io/devdocs/reference/stats.qmc.html SciPy QMC API Reference: scipy.stats.qmc] {{DEFAULTSORT:Low-Discrepancy Sequence}} [[Category:Numerical analysis]] [[Category:Low-discrepancy sequences]] [[Category:Random number generation]] [[Category:Diophantine approximation]] [[Category:Sequences and series]]