== Methods of finding point estimates ==

Below are some commonly used methods of estimating unknown parameters, each of which is expected to yield estimators having some of the important properties discussed above. In practice, which method is applied depends on the situation and the purpose of the study.

=== Method of maximum likelihood (MLE) ===

The [[Maximum likelihood estimation|method of maximum likelihood]], due to R. A. Fisher, is the most important general method of estimation. It estimates the unknown parameters by maximizing the likelihood function: given an assumed model (e.g. the normal distribution), the parameter values that maximize the likelihood provide the best fit of the model to the observed data.<ref>{{Cite book|title=Categorical Data Analysis|last=Agresti|first=A.|year=1990|publisher=John Wiley and Sons|location=New York}}</ref>

Let X = (X<sub>1</sub>, X<sub>2</sub>, ..., X<sub>n</sub>) denote a random sample with joint p.d.f. or p.m.f. f(x, θ) (θ may be a vector). The function f(x, θ), considered as a function of θ, is called the likelihood function and is denoted by L(θ). The principle of maximum likelihood consists of choosing, within the admissible range of θ, the estimate that maximizes the likelihood; this estimate is called the maximum likelihood estimate (MLE) of θ. To obtain the MLE of θ, we solve the likelihood equations ''d'' log L(θ)/''d''θ<sub>i</sub> = 0, i = 1, 2, …, k; if θ is a vector, partial derivatives are used.<ref name=":1" />

=== Method of moments (MOM) ===

The [[Method of moments (statistics)|method of moments]] was introduced by K. Pearson and P. Chebyshev in 1887 and is one of the oldest methods of estimation. It relies on the [[law of large numbers]]: the population moments are expressed as functions of the unknown parameters, the corresponding sample moments are substituted for them, and the resulting equations are solved for the parameters.<ref>{{Cite book|title=The Concise Encyclopedia of Statistics|last=Dodge|first=Y.|year=2008|publisher=Springer}}</ref> Because of its simplicity, however, the method is not always accurate and can easily produce biased estimates.

Let (X<sub>1</sub>, X<sub>2</sub>, …, X<sub>n</sub>) be a random sample from a population with p.d.f. (or p.m.f.) f(x, θ), θ = (θ<sub>1</sub>, θ<sub>2</sub>, …, θ<sub>k</sub>). The objective is to estimate the parameters θ<sub>1</sub>, θ<sub>2</sub>, …, θ<sub>k</sub>. Suppose the first k population moments about zero exist as explicit functions of θ, i.e. μ<sub>r</sub> = μ<sub>r</sub>(θ<sub>1</sub>, θ<sub>2</sub>, …, θ<sub>k</sub>), r = 1, 2, …, k. In the method of moments, we equate k sample moments with the corresponding population moments. Generally, the first k moments are taken because the errors due to sampling increase with the order of the moment. This gives the k equations μ<sub>r</sub>(θ<sub>1</sub>, θ<sub>2</sub>, …, θ<sub>k</sub>) = m<sub>r</sub>, r = 1, 2, …, k, where m<sub>r</sub> = (1/n) ΣX<sub>i</sub><sup>r</sup> is the r-th sample moment; solving these equations for θ<sub>1</sub>, …, θ<sub>k</sub> yields the method-of-moments estimators.<ref name=":1" /> See also the [[generalized method of moments]].
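As a concrete illustration, the following Python sketch computes both the method-of-moments and the (numerically maximized) maximum likelihood estimates of the shape and scale of a gamma distribution, where the two methods generally give different answers. The simulated data, sample size, and the choice of the Nelder–Mead optimizer are illustrative assumptions, not part of either method.

<syntaxhighlight lang="python">
# Minimal sketch: method of moments vs. numerical maximum likelihood for a
# gamma distribution with shape k and scale theta (mean = k*theta, var = k*theta^2).
# The simulated sample and optimizer choice below are illustrative assumptions.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 3.0, size=500)          # simulated sample, true (k, theta) = (2, 3)

# Method of moments: equate the sample mean and variance to k*theta and k*theta^2.
mean, var = x.mean(), x.var()
k_mom, theta_mom = mean**2 / var, var / mean

# Maximum likelihood: minimize the negative log-likelihood numerically.
def neg_log_lik(params):
    k, theta = params
    if k <= 0 or theta <= 0:
        return np.inf                       # keep the optimizer in the admissible range
    return -np.sum(stats.gamma.logpdf(x, a=k, scale=theta))

res = optimize.minimize(neg_log_lik, x0=[k_mom, theta_mom], method="Nelder-Mead")
k_mle, theta_mle = res.x

print(f"MOM estimate: k = {k_mom:.3f}, theta = {theta_mom:.3f}")
print(f"MLE estimate: k = {k_mle:.3f}, theta = {theta_mle:.3f}")
</syntaxhighlight>

Starting the optimizer from the method-of-moments estimates is a common practical choice, since they are cheap to compute and usually lie close to the maximum of the likelihood.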
=== Method of least squares ===

In the method of least squares, we consider the estimation of parameters using some specified form of the expectation and the second moments of the observations. To fit a curve of the form y = f(x, β<sub>0</sub>, β<sub>1</sub>, …, β<sub>p</sub>) to the data (x<sub>i</sub>, y<sub>i</sub>), i = 1, 2, …, n, the method of least squares chooses the parameter values that minimize the sum of squared deviations Σ [y<sub>i</sub> − f(x<sub>i</sub>, β<sub>0</sub>, β<sub>1</sub>, …, β<sub>p</sub>)]<sup>2</sup>. When f(x, β<sub>0</sub>, β<sub>1</sub>, …, β<sub>p</sub>) is a linear function of the parameters and the x-values are known, the least squares estimator is the [[best linear unbiased estimator]] (BLUE). If, in addition, the errors are assumed to be independently and identically normally distributed, the least squares estimator is the [[minimum-variance unbiased estimator]] (MVUE) within the entire class of unbiased estimators. See also [[minimum mean squared error]] (MMSE).<ref name=":1" />

=== Minimum-variance mean-unbiased estimator (MVUE) ===

The method of the [[minimum-variance unbiased estimator]] minimizes the [[risk function|risk]] (expected loss) of the squared-error [[loss function|loss function]].

=== Median unbiased estimator ===

A [[median-unbiased estimator]] minimizes the risk of the absolute-error loss function.

=== Best linear unbiased estimator (BLUE) ===

The [[best linear unbiased estimator]] is characterized by the Gauss–Markov theorem, which states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, provided the errors in the linear regression model are uncorrelated, have equal variances, and have expected value zero.<ref>{{Cite book|title=Best Linear Unbiased Estimation and Prediction|last=Theil|first=Henri|year=1971|publisher=John Wiley & Sons|location=New York}}</ref>
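As a concrete illustration of the least squares (OLS) estimator discussed above, the following Python sketch fits a straight line y = β<sub>0</sub> + β<sub>1</sub>x by computing the estimator that solves the normal equations, here via <code>np.linalg.lstsq</code>. The simulated data and noise level are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal sketch: ordinary least squares for a straight-line fit
# y = b0 + b1*x; the simulated data and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=x.size)   # true (b0, b1) = (1.5, 0.8)

# Design matrix with an intercept column; the least squares estimate
# beta_hat minimizes ||y - X beta||^2 and is computed by np.linalg.lstsq.
X = np.column_stack([np.ones_like(x), x])
beta_hat, residual_ss, rank, _ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated intercept = {beta_hat[0]:.3f}, slope = {beta_hat[1]:.3f}")
</syntaxhighlight>

Under the Gauss–Markov conditions (uncorrelated errors with equal variances and zero mean), this OLS estimate has the smallest sampling variance among all linear unbiased estimators of (β<sub>0</sub>, β<sub>1</sub>).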