{{Short description|Branch of statistics}}
{{confused|Mathematics and statistics|Mathematics|Statistics}}
[[Image:Linear regression.svg|thumb|right|300px|Illustration of linear regression on a data set. [[Regression analysis]] is an important part of mathematical statistics.]]
{{Statistics topics sidebar}}
{{Math topics TOC}}
'''Mathematical statistics''' is the application of [[probability theory]] and other mathematical concepts to [[statistics]], as opposed to techniques for collecting statistical data.<ref>{{Cite book |last=Shao |first=Jun |url=https://books.google.com/books?id=_bEPBwAAQBAJ |title=Mathematical Statistics |date=2008-02-03 |publisher=Springer Science & Business Media |isbn=978-0-387-21718-5 |language=en}}</ref> Specific mathematical techniques that are commonly used in statistics include [[mathematical analysis]], [[linear algebra]], [[stochastic analysis]], [[differential equations]], and [[measure theory]].<ref>{{cite book|editor1-last=Kannan|editor1-first=D.|editor2-last=Lakshmikantham|editor2-first=V.|title=Handbook of stochastic analysis and applications|date=2002|publisher=M. Dekker|location=New York|isbn=0824706609}}</ref><ref>{{cite book|last=Schervish|first=Mark J.|title=Theory of statistics|date=1995|publisher=Springer|location=New York|isbn=0387945466|edition=Corr. 2nd print.}}</ref>

==Introduction==
Statistical data collection is concerned with the planning of studies, especially with the [[design of experiments|design of randomized experiments]] and with the planning of [[statistical survey|surveys]] using [[random sampling]]. The initial analysis of the data often follows the study protocol specified prior to the study being conducted. The data from a study can also be analyzed to consider secondary hypotheses inspired by the initial results, or to suggest new studies. A secondary analysis of the data from a planned study uses tools from [[data analysis]], and the process of doing this is mathematical statistics.
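As an illustration of the random sampling mentioned above, the following Python sketch draws a simple random sample from a finite population and compares the sample mean with the population mean. The population values and sample size are invented for illustration and are not drawn from any of the article's sources.

```python
import random
import statistics

# Illustrative sketch only: a made-up "population" of 10,000 values.
random.seed(42)  # fixed seed so the sketch is reproducible
population = [random.gauss(50, 10) for _ in range(10_000)]

# Simple random sampling without replacement, as used in survey planning.
sample = random.sample(population, k=100)

# The sample mean estimates the population mean; with n = 100 the
# standard error of the estimate is roughly 10 / sqrt(100) = 1.
print(statistics.mean(population))
print(statistics.mean(sample))
```

The gap between the two printed means shrinks, on average, as the sample size grows, which is the basic fact that inferential statistics quantifies.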
Data analysis is divided into:
* [[descriptive statistics]] – the part of statistics that describes data, i.e. summarises the data and their typical properties.
* [[inferential statistics]] – the part of statistics that draws conclusions from data (using some model for the data). For example, inferential statistics involves selecting a model for the data, checking whether the data fulfill the conditions of the model, and quantifying the uncertainty involved (e.g. using [[confidence interval]]s).

While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data, such as [[natural experiments]] and [[observational studies]], in which case the inference is dependent on the model chosen by the statistician, and so subjective.<ref>[[David A. Freedman (statistician)|Freedman, D.A.]] (2005) ''Statistical Models: Theory and Practice'', Cambridge University Press. {{isbn|978-0-521-67105-7}}</ref><ref name=Freedman>{{cite book |last1=Freedman |first1=David A. |editor1-last=Collier |editor1-first=David |editor2-last=Sekhon |editor2-first=Jasjeet S. |editor3-last=Stark |editor3-first=Philp B. |title=Statistical Models and Causal Inference: A Dialogue with the Social Sciences |date=2010 |publisher=Cambridge University Press |isbn=978-0-521-12390-7 |url=http://www.cambridge.org/9780521123907}}</ref>

==Topics==
The following are some of the important topics in mathematical statistics:<ref>Hogg, R. V., Craig, A., and McKean, J. W. (2005). ''Introduction to Mathematical Statistics''.</ref><ref>Larsen, Richard J. and Marx, Morris L. (2012). ''An Introduction to Mathematical Statistics and Its Applications''. Prentice Hall.</ref>

===Probability distributions===
{{main|Probability distribution}}
A [[probability distribution]] is a [[function (mathematics)|function]] that assigns a [[probability]] to each [[measure (mathematics)|measurable subset]] of the possible outcomes of a random [[Experiment (probability theory)|experiment]], [[Survey methodology|survey]], or procedure of [[statistical inference]]. Examples are found in experiments whose [[sample space]] is non-numerical, where the distribution would be a [[categorical distribution]]; experiments whose sample space is encoded by discrete [[random variables]], where the distribution can be specified by a [[probability mass function]]; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a [[probability density function]]. More complex experiments, such as those involving [[stochastic processes]] defined in [[continuous time]], may demand the use of more general [[probability measure]]s.

A probability distribution can either be [[Univariate distribution|univariate]] or [[Multivariate distribution|multivariate]]. A univariate distribution gives the probabilities of a single [[random variable]] taking on various alternative values; a multivariate distribution (a [[joint probability distribution]]) gives the probabilities of a [[random vector]]—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the [[binomial distribution]], the [[hypergeometric distribution]], and the [[normal distribution]]. The [[multivariate normal distribution]] is a commonly encountered multivariate distribution.

====Special distributions====
*[[Normal distribution]], the most common continuous distribution
*[[Bernoulli distribution]], for the outcome of a single Bernoulli trial (e.g.
success/failure, yes/no)
*[[Binomial distribution]], for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of [[independent (statistics)|independent]] occurrences
*[[Negative binomial distribution]], for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
*[[Geometric distribution]], for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution, where the number of successes is one
*[[Discrete uniform distribution]], for a finite set of values (e.g. the outcome of a fair die)
*[[Continuous uniform distribution]], for continuously distributed values
*[[Poisson distribution]], for the number of occurrences of a Poisson-type event in a given period of time
*[[Exponential distribution]], for the time before the next Poisson-type event occurs
*[[Gamma distribution]], for the time before the next ''k'' Poisson-type events occur
*[[Chi-squared distribution]], the distribution of a sum of squared [[standard normal]] variables; useful e.g.
for inference regarding the [[sample variance]] of normally distributed samples (see [[chi-squared test]])
*[[Student's t distribution]], the distribution of the ratio of a [[standard normal]] variable and the square root of a scaled [[chi squared distribution|chi squared]] variable; useful for inference regarding the [[mean]] of normally distributed samples with unknown variance (see [[Student's t-test]])
*[[Beta distribution]], for a single probability (real number between 0 and 1); conjugate to the [[Bernoulli distribution]] and [[binomial distribution]]

===Statistical inference===
{{main|Statistical inference}}
[[Statistical inference]] is the process of drawing conclusions from data that are subject to random variation, for example, observational errors or sampling variation.<ref name="Oxford">Upton, G., Cook, I. (2008) ''Oxford Dictionary of Statistics'', OUP. {{isbn|978-0-19-954145-4}}</ref> Initial requirements of such a system of procedures for [[inference]] and [[Inductive reasoning|induction]] are that the system should produce reasonable answers when applied to well-defined situations and that it should be general enough to be applied across a range of situations.

Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas [[descriptive statistics]] describe a sample, inferential statistics infer predictions about a larger population that the sample represents. The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy.

For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest via some form of random sampling. More generally, data about a random process is obtained from its observed behavior during a finite period of time.
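As a minimal sketch of the estimation step described above, the following Python code computes a large-sample 95% [[confidence interval]] for a population mean using the normal approximation. The data values are invented for illustration; for small samples with unknown variance, the Student's t distribution listed above would replace the normal critical value.

```python
import math
import statistics
from statistics import NormalDist

# Invented sample data, purely for illustration.
data = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0,
        5.2, 4.8, 5.0, 5.1, 4.9, 5.3, 4.8, 5.0, 5.1, 4.9]

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)        # sample standard deviation (n - 1 divisor)
z = NormalDist().inv_cdf(0.975)    # two-sided 95% critical value, about 1.96

# Large-sample z-interval: point estimate plus/minus z * standard error.
half_width = z * sd / math.sqrt(n)
print(f"95% CI: ({mean - half_width:.3f}, {mean + half_width:.3f})")
```

Repeating this procedure on fresh samples would produce an interval covering the true mean about 95% of the time, which is the frequentist reading of the confidence level.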
Given a parameter or hypothesis about which one wishes to make inference, statistical inference most often uses:
* a [[statistical model]] of the random process that is supposed to generate the data, which is known when randomization has been used, and
* a particular realization of the random process, i.e., a set of data.

===Regression===
{{main|Regression analysis}}
'''Regression analysis''' is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a [[dependent variable]] and one or more [[independent variable]]s. More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the [[conditional expectation]] of the dependent variable given the independent variables – that is, the [[average value]] of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a [[quantile]], or other [[location parameter]] of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a [[function (mathematics)|function]] of the independent variables called the '''regression function'''. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a [[probability distribution]].

Many techniques for carrying out regression analysis have been developed. Familiar methods, such as [[linear regression]], are [[parametric statistics|parametric]], in that the regression function is defined in terms of a finite number of unknown [[parameter]]s that are estimated from the [[data]] (e.g.
using [[ordinary least squares]]). [[Nonparametric regression]] refers to techniques that allow the regression function to lie in a specified set of [[function (mathematics)|functions]], which may be [[dimension|infinite-dimensional]].

===Nonparametric statistics===
{{main|Nonparametric statistics}}
'''Nonparametric statistics''' are values calculated from data in a way that is not based on [[Statistical parameter|parameterized]] families of [[probability distribution]]s. They include both [[descriptive statistics|descriptive]] and [[statistical inference|inferential]] statistics. The typical parameters are the mean, variance, and so on. Unlike [[parametric statistics]], nonparametric statistics make no assumptions about the [[probability distribution]]s of the variables being assessed.<ref>{{Cite web |title=Research Nonparametric Methods |url=https://d8.stat.cmu.edu/research-areas/nonparametric-methods |access-date=August 30, 2022 |website=Carnegie Mellon University}}</ref>

Non-parametric methods are widely used for studying populations that have a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a [[ranking]] but no clear numerical interpretation, such as when assessing [[preferences]]. In terms of [[level of measurement|levels of measurement]], non-parametric methods result in "ordinal" data.

Because non-parametric methods make fewer assumptions, their applicability is much wider than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, because they rely on fewer assumptions, non-parametric methods are more [[Robust statistics#Introduction|robust]].
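A concrete example of a non-parametric procedure is the sign test, which asks whether the median of paired differences exceeds zero and assumes nothing about the shape of the underlying distribution. The following Python sketch, with invented paired differences, computes its one-sided p-value directly from the binomial distribution discussed above.

```python
from math import comb

# Invented paired differences (e.g. after-treatment minus before-treatment).
differences = [1.2, 0.8, -0.3, 2.1, 0.9, 1.5, -0.4, 0.7, 1.1, 0.6]

n_pos = sum(d > 0 for d in differences)  # number of positive signs (here 8)
n = len(differences)                     # number of non-zero differences

# Under H0 (median difference = 0), each sign is equally likely to be
# positive or negative, so the positive count is Binomial(n, 1/2).
# One-sided p-value: P(X >= n_pos) under that null distribution.
p_value = sum(comb(n, k) for k in range(n_pos, n + 1)) / 2 ** n

print(p_value)  # prints 0.0546875
```

No normality or variance assumption is needed; the price, as noted below, is lower power than a t-test when the data really are normal.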
One drawback of non-parametric methods is that, because they rely on fewer assumptions, they are generally less [[Power of a test|powerful]] than their parametric counterparts.<ref name=":0">{{Cite web |title=Nonparametric Tests |url=https://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Nonparametric/BS704_Nonparametric_print.html |access-date=2022-08-31 |website=sphweb.bumc.bu.edu}}</ref> This loss of power is problematic because non-parametric methods are often applied precisely when sample sizes are small.<ref name=":0" /> Many parametric methods can be shown to be the most powerful tests available through results such as the [[Neyman–Pearson lemma]] and the [[likelihood-ratio test]].

Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding.

==Statistics, mathematics, and mathematical statistics==
Mathematical statistics is a key subset of the discipline of [[statistics]]. [[Statisticians|Statistical theorists]] study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. Mathematicians and statisticians such as [[Gauss]], [[Laplace]], and [[Charles Sanders Peirce|C. S. Peirce]] used [[optimal decision|decision theory]] with [[probability distribution]]s and [[loss function]]s (or [[utility function]]s).
The decision-theoretic approach to statistical inference was reinvigorated by [[Abraham Wald]] and his successors<ref>{{Cite book | first = Abraham | last = Wald |author-link=Abraham Wald | title = Sequential analysis | year = 1947 | publisher = John Wiley and Sons | location = New York | isbn = 0-471-91806-7 | quote = See Dover reprint, 2004: {{isbn|0-486-43912-7}} }}</ref><ref>{{cite book |first=Abraham |last=Wald |author-link=Abraham Wald |title=Statistical Decision Functions |year=1950 |publisher=John Wiley and Sons, New York }}</ref><ref>{{cite book|last=Lehmann|first=Erich|author-link=Erich Leo Lehmann | title=Testing Statistical Hypotheses|year=1997 |edition=2nd |isbn=0-387-94919-4 }}</ref><ref> {{cite book | last1=Lehmann | first1=Erich | last2=Casella | first2=George | author-link1=Erich Leo Lehmann | title=Theory of Point Estimation | year=1998 |edition=2nd|isbn= 0-387-98502-6}}</ref><ref> {{cite book | last1=Bickel|first1= Peter J.|last2=Doksum|first2=Kjell A. | author-link1=Peter J. Bickel |title=Mathematical Statistics: Basic and Selected Topics |volume=1 |edition=Second (updated printing 2007) |year=2001 |publisher=Pearson Prentice-Hall }}</ref><ref>{{cite book |first=Lucien |last=Le Cam |author-link=Lucien Le Cam |title=Asymptotic Methods in Statistical Decision Theory |year=1986 |publisher=Springer-Verlag |isbn=0-387-96307-3 }}</ref><ref>{{cite book |author1=Liese, Friedrich |author2=Miescke, Klaus-J. |name-list-style=amp |title=Statistical Decision Theory: Estimation, Testing, and Selection |year=2008 |publisher=Springer }}</ref> and makes extensive use of [[scientific computing]], [[mathematical analysis|analysis]], and [[Optimization (mathematics)|optimization]]; for the [[design of experiments]], statisticians use [[Algebraic statistics|algebra]] and [[Combinatorial design|combinatorics]].
But while statistical practice often relies on [[Probability theory|probability]] and [[optimal decision|decision theory]], their application can be controversial.<ref name=Freedman/>

==See also==
{{portal|Mathematics}}
*[[Asymptotic theory (statistics)]]

==References==
<references/>

==Further reading==
* [[Aleksandr Alekseevich Borovkov|Borovkov, A. A.]] (1999). ''Mathematical Statistics''. CRC Press. {{isbn|90-5699-018-7}}
* [http://www.math.uah.edu/stat/ Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville)]
* [http://www.trigonella.ch/statibot/english/ StatiBot], an interactive online expert system for statistical tests.
* {{Cite book|last1=Ray|first1=Manohar|last2=Sharma|first2=Har Swarup|url=https://books.google.com/books?id=NXGpYgEACAAJ|title=Mathematical Statistics|date=1966|publisher=Ram Prasad & Sons}} {{ISBN|978-9383385188}}

{{Statistics}}
{{Areas of mathematics}}
{{Authority control}}

{{DEFAULTSORT:Mathematical Statistics}}
[[Category:Statistical theory]]
[[Category:Actuarial science]]