== Problems and results ==
Theorems and results within analytic number theory tend not to be exact structural results about the integers, for which algebraic and geometrical tools are more appropriate. Instead, they give approximate bounds and estimates for various number theoretical functions, as the following examples illustrate.

=== Multiplicative number theory ===
{{main|Multiplicative number theory}}
[[Euclid]] showed that there are infinitely many prime numbers. An important question is to determine the asymptotic distribution of the prime numbers; that is, a rough description of how many primes are smaller than a given number. [[Carl Gauss|Gauss]], amongst others, after computing a large list of primes, conjectured that the number of primes less than or equal to a large number ''N'' is close to the value of the [[integral]]
<math display="block">\int^N_2 \frac{1}{\log t} \, dt.</math>

In 1859 [[Bernhard Riemann]] used complex analysis and a special [[meromorphic]] function now known as the [[Riemann zeta function]] to derive an analytic expression for the number of primes less than or equal to a real number ''x''. Remarkably, the main term in Riemann's formula was exactly the above integral, lending substantial weight to Gauss's conjecture. Riemann found that the error terms in this expression, and hence the manner in which the primes are distributed, are closely related to the complex zeros of the zeta function.

Using Riemann's ideas and by getting more information on the zeros of the zeta function, [[Jacques Hadamard]] and [[Charles Jean de la Vallée-Poussin]] managed to complete the proof of Gauss's conjecture. In particular, they proved that if
<math display="block">\pi(x) = (\text{number of primes }\leq x),</math>
then
<math display="block">\lim_{x \to \infty} \frac{\pi(x)}{x/\log x} = 1.</math>

This remarkable result is what is now known as the ''[[prime number theorem]]''. It is a central result in analytic number theory. Loosely speaking, it states that given a large number ''N'', the number of primes less than or equal to ''N'' is about ''N''/log(''N'').

More generally, the same question can be asked about the number of primes in any [[arithmetic progression]] ''a'' + ''nq'' for any integer ''n''. In one of the first applications of analytic techniques to number theory, Dirichlet proved that any arithmetic progression with ''a'' and ''q'' coprime contains infinitely many primes. The prime number theorem can be generalised to this problem; letting
<math display="block">\pi(x, a, q) = (\text{number of primes } \leq x \text{ in the arithmetic progression } a + nq, \ n \in \mathbf Z),</math>
then if ''a'' and ''q'' are coprime,
<math display="block">\lim_{x \to \infty} \frac{\pi(x,a,q)\phi(q)}{x/\log x} = 1,</math>
where <math>\phi</math> is the [[totient function]].<ref>{{Cite web |last=Weisstein |first=Eric W. |title=Totient Function |url=https://mathworld.wolfram.com/TotientFunction.html |access-date=2025-02-09 |website=mathworld.wolfram.com |language=en}}</ref>

There are also many deep and wide-ranging conjectures in number theory whose proofs seem too difficult for current techniques, such as the [[Twin prime|twin prime conjecture]] which asks whether there are infinitely many primes ''p'' such that ''p'' + 2 is prime. On the assumption of the [[Elliott–Halberstam conjecture]] it has been proven recently that there are infinitely many primes ''p'' such that ''p'' + ''k'' is prime for some positive even ''k'' at most 12. Also, it has been proven unconditionally (i.e. not depending on unproven conjectures) that there are infinitely many primes ''p'' such that ''p'' + ''k'' is prime for some positive even ''k'' at most 246.
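To get a feel for how closely ''N''/log(''N'') and the logarithmic integral above approximate the prime-counting function, one can compare all three numerically for moderate ''N''. The following Python sketch is purely illustrative and not part of the standard treatment: the helper names <code>prime_count</code> and <code>li</code> are ad hoc, and the integral is approximated by the trapezoidal rule rather than evaluated exactly.

<syntaxhighlight lang="python">
# Illustrative sketch: compare pi(N) with N/log(N) and the logarithmic integral.
from math import log

def prime_count(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

def li(n, steps=100_000):
    """Approximate the integral of 1/log(t) from 2 to n by the trapezoidal rule."""
    h = (n - 2) / steps
    total = 0.5 * (1 / log(2) + 1 / log(n))
    for i in range(1, steps):
        total += 1 / log(2 + i * h)
    return total * h

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, prime_count(N), round(N / log(N)), round(li(N)))
</syntaxhighlight>

Already at ''N'' = 10<sup>6</sup> the logarithmic integral is within a few hundred of π(''N'') = 78498, while ''N''/log(''N'') falls short by several thousand; this is consistent with the prime number theorem, which only asserts that the ''ratio'', not the difference, tends to 1.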
=== Additive number theory ===
{{main|Additive number theory}}
One of the most important problems in additive number theory is [[Waring's problem]], which asks whether it is possible, for any ''k'' ≥ 2, to write any positive integer as the sum of a bounded number of ''k''th powers,
:<math>n=x_1^k+\cdots+x_\ell^k.</math>
The case for squares, ''k'' = 2, was [[Lagrange's four-square theorem|answered]] by Lagrange in 1770, who proved that every positive integer is the sum of at most four squares. The general case was proved by [[David Hilbert|Hilbert]] in 1909, using algebraic techniques which gave no explicit bounds. An important breakthrough was the application of analytic tools to the problem by [[G. H. Hardy|Hardy]] and [[John Edensor Littlewood|Littlewood]]. These techniques are known as the circle method, and give explicit upper bounds for the function ''G''(''k''), the smallest number of ''k''th powers needed to represent every sufficiently large integer, such as [[Ivan Matveyevich Vinogradov|Vinogradov]]'s bound
:<math>G(k)\leq k(3\log k+11).</math>

=== Diophantine problems ===
{{main|Diophantine problem}}
[[Diophantine problem]]s are concerned with integer solutions to polynomial equations: one may study the distribution of solutions, that is, counting solutions according to some measure of "size" or ''[[height function|height]]''.

An important example is the [[Gauss circle problem]], which asks for integer points (''x'', ''y'') which satisfy
:<math>x^2+y^2\leq r^2.</math>
In geometrical terms, given a circle centered about the origin in the plane with radius ''r'', the problem asks how many integer lattice points lie on or inside the circle. It is not hard to prove that the answer is <math>\pi r^2 + E(r)</math>, where <math>E(r)/r^2 \to 0</math> as <math>r \to \infty</math>. Again, the difficult part and a great achievement of analytic number theory is obtaining specific upper bounds on the error term ''E''(''r'').

It was shown by Gauss that <math>E(r) = O(r)</math>. In general, an ''O''(''r'') error term would be possible with the unit circle (or, more properly, the closed unit disk) replaced by the dilates of any bounded planar region with piecewise smooth boundary. Furthermore, replacing the unit circle by the unit square, the error term for the general problem can be as large as a linear function of ''r''. Therefore, getting an [[error bound]] of the form <math>O(r^{\delta})</math> for some <math>\delta < 1</math> in the case of the circle is a significant improvement. The first to attain this was [[Wacław Sierpiński|Sierpiński]] in 1906, who showed <math>E(r) = O(r^{2/3})</math>. In 1915, Hardy and [[Edmund Landau|Landau]] each showed that one does ''not'' have <math>E(r) = O(r^{1/2})</math>. Since then the goal has been to show that for each fixed <math>\epsilon > 0</math> there exists a real number <math>C(\epsilon)</math> such that <math>E(r) \leq C(\epsilon) r^{1/2 + \epsilon}</math>.

In 2000 [[Martin Huxley|Huxley]] showed<ref>M.N. Huxley, ''Integer points, exponential sums and the Riemann zeta function'', Number theory for the millennium, II (Urbana, IL, 2000) pp. 275–290, A K Peters, Natick, MA, 2002, {{MR|1956254}}.</ref> that <math>E(r) = O(r^{131/208})</math>, which is the best published result.
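A short numerical experiment makes the size of ''E''(''r'') concrete. The sketch below is illustrative only (the function name <code>lattice_points</code> is ad hoc, and the radii are arbitrary sample values): it counts lattice points directly, one column of the circle at a time, and prints the error alongside its ratio to ''r'' and to ''r''<sup>2/3</sup>.

<syntaxhighlight lang="python">
# Illustrative sketch for the Gauss circle problem: count integer points with
# x^2 + y^2 <= r^2 and compare the error E(r) = N(r) - pi*r^2 with r and r^(2/3).
from math import isqrt, pi

def lattice_points(r):
    """Number of integer points (x, y) with x^2 + y^2 <= r^2."""
    total = 0
    for x in range(-r, r + 1):
        total += 2 * isqrt(r * r - x * x) + 1  # admissible values of y for this x
    return total

for r in (10, 100, 1000, 10000):
    n = lattice_points(r)
    err = n - pi * r * r
    print(r, n, round(err, 1), round(abs(err) / r, 4), round(abs(err) / r ** (2 / 3), 4))
</syntaxhighlight>

Even for these modest radii the error is a tiny fraction of the main term <math>\pi r^2</math>, and its growth is clearly slower than linear in ''r'', in line with Gauss's bound and the sharper exponents discussed above.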