==Parameter estimation==

===Ordinary least squares using Weibull plot===
[[File:Weibull qq.svg|thumb|right|Weibull plot]]
The fit of a Weibull distribution to data can be visually assessed using a Weibull plot.<ref>{{cite web|url=http://www.itl.nist.gov/div898/handbook/eda/section3/weibplot.htm|title=1.3.3.30. Weibull Plot|website=www.itl.nist.gov}}</ref> The Weibull plot is a plot of the [[empirical cumulative distribution function]] <math>\widehat F(x)</math> of data on special axes in a type of [[Q–Q plot]]. The axes are <math>\ln(-\ln(1-\widehat F(x)))</math> versus <math>\ln(x)</math>. The reason for this change of variables is that the cumulative distribution function can be linearized:
:<math>\begin{align}
F(x) &= 1-e^{-(x/\lambda)^k}\\[4pt]
-\ln(1-F(x)) &= (x/\lambda)^k\\[4pt]
\underbrace{\ln(-\ln(1-F(x)))}_{\textrm{'y'}} &= \underbrace{k\ln x}_{\textrm{'mx'}} - \underbrace{k\ln \lambda}_{\textrm{'c'}}
\end{align}</math>
which is the standard form of a straight line. Therefore, if the data came from a Weibull distribution, a straight line is expected on a Weibull plot.

There are various approaches to obtaining the empirical distribution function from data. One method is to obtain the vertical coordinate for each point using
:<math>\widehat F = \frac{i-0.3}{n+0.4},</math>
where <math>i</math> is the rank of the data point and <math>n</math> is the number of data points.<ref>Wayne Nelson (2004) ''Applied Life Data Analysis''. Wiley-Blackwell. {{ISBN|0-471-64462-5}}</ref><ref>{{Cite journal |last=Barnett |first=V. |date=1975 |title=Probability Plotting Methods and Order Statistics |url=https://www.jstor.org/stable/2346708 |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=24 |issue=1 |pages=95–108 |doi=10.2307/2346708 |jstor=2346708 |issn=0035-9254}}</ref> Another common estimator<ref>{{Cite ISO standard |csnumber=69875 |title=ISO 20501:2019 – Fine ceramics (advanced ceramics, advanced technical ceramics) – Weibull statistics for strength data}}</ref> is
:<math>\widehat F = \frac{i-0.5}{n}.</math>

Linear regression can also be used to numerically assess goodness of fit and estimate the parameters of the Weibull distribution. The gradient of the fitted line gives the shape parameter <math>k</math> directly, and the scale parameter <math>\lambda</math> can be inferred from the intercept.
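As a concrete illustration, the following sketch (not part of the cited sources; the function name and the NumPy dependency are incidental choices) fits the linearized relationship by ordinary least squares using the <math>(i-0.3)/(n+0.4)</math> plotting positions, recovering <math>k</math> from the slope and <math>\lambda = \exp(-c/k)</math> from the intercept <math>c</math>:

<syntaxhighlight lang="python">
import numpy as np

def weibull_ols_fit(x):
    """Estimate the Weibull shape k and scale lam from a Weibull plot.

    Uses plotting positions F_i = (i - 0.3)/(n + 0.4) and fits
    ln(-ln(1 - F)) = k*ln(x) - k*ln(lam) by ordinary least squares.
    """
    x = np.sort(np.asarray(x, dtype=float))   # data must be positive
    n = len(x)
    i = np.arange(1, n + 1)                   # ranks of the sorted data
    F = (i - 0.3) / (n + 0.4)                 # empirical CDF (plotting positions)
    X = np.log(x)                             # horizontal Weibull-plot axis
    Y = np.log(-np.log(1.0 - F))              # vertical Weibull-plot axis
    k, c = np.polyfit(X, Y, 1)                # slope = k, intercept = -k*ln(lam)
    lam = np.exp(-c / k)
    return k, lam
</syntaxhighlight>

Plotting the points <math>(\ln x_i,\; \ln(-\ln(1-\widehat F_i)))</math> alongside the fitted line gives the visual goodness-of-fit check described above.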
===Method of moments===
The [[coefficient of variation]] of the Weibull distribution depends only on the shape parameter:<ref name="Cohen1965">{{cite journal |url=https://www.stat.cmu.edu/technometrics/59-69/VOL-07-04/v0704579.pdf |title=Maximum Likelihood Estimation in the Weibull Distribution Based on Complete and on Censored Samples |first=A. Clifford |last=Cohen |journal=Technometrics |volume=7 |issue=4 |date=Nov 1965 |pages=579–588 |doi=10.1080/00401706.1965.10490300}}</ref>
:<math>CV^2 = \frac{\sigma^2}{\mu^2} = \frac{\Gamma\left(1+\frac{2}{k}\right) - \left(\Gamma\left(1+\frac{1}{k}\right)\right)^2}{\left(\Gamma\left(1+\frac{1}{k}\right)\right)^2}.</math>
Equating the sample quantity <math>s^2/\bar{x}^2</math> to <math>\sigma^2/\mu^2</math>, the moment estimate of the shape parameter <math>k</math> can be read off either from a lookup table or from a graph of <math>CV^2</math> versus <math>k</math>.

A more accurate estimate of <math>\hat{k}</math> can be found using a root-finding algorithm to solve
:<math>\frac{\Gamma\left(1+\frac{2}{k}\right) - \left(\Gamma\left(1+\frac{1}{k}\right)\right)^2}{\left(\Gamma\left(1+\frac{1}{k}\right)\right)^2} = \frac{s^2}{\bar{x}^2}.</math>
The moment estimate of the scale parameter can then be found using the first moment equation as
:<math>\hat{\lambda} = \frac{\bar{x}}{\Gamma\left(1 + \frac{1}{\hat{k}}\right)}.</math>

===Maximum likelihood===
The [[Maximum likelihood estimation|maximum likelihood estimator]] for the <math>\lambda</math> parameter given <math>k</math> is<ref name="Cohen1965"/>
:<math>\widehat \lambda = \left(\frac{1}{n} \sum_{i=1}^n x_i^k \right)^\frac{1}{k}.</math>
The maximum likelihood estimator for <math>k</math> is the solution for <math>k</math> of the following equation:<ref name="Sornette, D. 2004">{{cite book |author=Sornette, D. |year=2004 |title=Critical Phenomena in Natural Science: Chaos, Fractals, Self-organization, and Disorder}}</ref>
:<math> 0 = \frac{\sum_{i=1}^n x_i^k \ln x_i }{\sum_{i=1}^n x_i^k } - \frac{1}{k} - \frac{1}{n} \sum_{i=1}^n \ln x_i.</math>
This equation defines <math>\widehat k</math> only implicitly; one must generally solve for <math>k</math> by numerical means.

When <math>x_1 > x_2 > \cdots > x_N</math> are the <math>N</math> largest observed samples from a dataset of more than <math>N</math> samples, the maximum likelihood estimator for the <math>\lambda</math> parameter given <math>k</math> is<ref name="Sornette, D. 2004"/>
:<math>\widehat \lambda^k = \frac{1}{N} \sum_{i=1}^N (x_i^k - x_N^k).</math>
Also given that condition, the maximum likelihood estimator for <math>k</math> is{{citation needed|date=December 2017}}
:<math> 0 = \frac{\sum_{i=1}^N (x_i^k \ln x_i - x_N^k \ln x_N)} {\sum_{i=1}^N (x_i^k - x_N^k)} - \frac{1}{N} \sum_{i=1}^N \ln x_i.</math>
Again, since the equation is only implicit, one must generally solve for <math>k</math> by numerical means.
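Both the moment equation and the likelihood equation above are one-dimensional root-finding problems in <math>k</math>. As an illustration (not taken from the cited sources), the following sketch solves them with SciPy's <code>brentq</code> root finder; the function names and the search brackets are assumptions and may need adjusting for a particular dataset:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_moment_fit(x, bracket=(0.05, 50.0)):
    """Method-of-moments estimates: solve CV^2(k) = s^2 / xbar^2 for k, then lam."""
    x = np.asarray(x, dtype=float)
    xbar, s2 = x.mean(), x.var(ddof=1)
    target = s2 / xbar**2
    def cv2(k):                                      # CV^2 as a function of the shape k
        return gamma(1 + 2/k) / gamma(1 + 1/k)**2 - 1
    k = brentq(lambda k: cv2(k) - target, *bracket)  # assumes the root lies in the bracket
    lam = xbar / gamma(1 + 1/k)                      # first-moment equation for the scale
    return k, lam

def weibull_mle_fit(x, bracket=(0.01, 100.0)):
    """Maximum-likelihood estimates: solve the implicit equation for k, then lam."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    def score(k):                                    # right-hand side of the implicit equation
        xk = x**k
        return np.sum(xk * logx) / np.sum(xk) - 1.0/k - logx.mean()
    k = brentq(score, *bracket)                      # assumes the root lies in the bracket
    lam = np.mean(x**k) ** (1.0/k)                   # closed-form lambda given k
    return k, lam
</syntaxhighlight>

For data simulated as <code>1.5 * np.random.weibull(2.0, 10_000)</code> (scale 1.5, shape 2), both functions should typically return estimates close to <math>\lambda = 1.5</math> and <math>k = 2</math>.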