== Example when using a non-complete minimal sufficient statistic ==

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is '''not complete''', was provided by Galili and Meilijson in 2016.<ref>{{cite journal |title=An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator |author1=Tal Galili |author2=Isaac Meilijson |date=31 Mar 2016 |journal=The American Statistician |volume=70 |issue=1 |pages=108–113 |doi=10.1080/00031305.2015.1100683 |pmc=4960505 |pmid=27499547}}</ref> Let <math>X_1, \ldots, X_n</math> be a random sample from a scale-uniform distribution <math>X \sim U((1-k)\theta, (1+k)\theta)</math> with unknown mean <math>\operatorname{E}[X]=\theta</math> and known design parameter <math>k \in (0,1)</math>.

In the search for a "best" possible unbiased estimator of <math>\theta</math>, it is natural to consider <math>X_1</math> as an initial (crude) unbiased estimator of <math>\theta</math> and then try to improve it. Since <math>X_1</math> is not a function of <math>T = \left( X_{(1)}, X_{(n)} \right)</math>, the minimal sufficient statistic for <math>\theta</math> (where <math>X_{(1)} = \min_i X_i</math> and <math>X_{(n)} = \max_i X_i</math>), it may be improved using the Rao–Blackwell theorem as follows:

:<math>\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)}+X_{(n)}}{2}.</math>

However, the following unbiased estimator can be shown to have lower variance:

:<math>\hat{\theta}_{LV} = \frac{1}{k^2\frac{n-1}{n+1}+1} \cdot \frac{(1-k)X_{(1)} + (1+k)X_{(n)}}{2}.</math>

In fact, it can be improved even further with the following estimator:

:<math>\hat{\theta}_\text{BAYES} = \frac{n+1}{n} \left[1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left(\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)}\right)^{n+1} - 1} \right] \frac{X_{(n)}}{1+k}.</math>

The model is a [[Scale parameter|scale model]]. Optimal [[Equivariant Estimator|equivariant estimators]] can then be derived for [[loss function]]s that are invariant.<ref>{{Cite journal |last=Taraldsen |first=Gunnar |date=2020 |title=Micha Mandel (2020), "The Scaled Uniform Model Revisited," The American Statistician, 74:1, 98–100: Comment |url=https://doi.org/10.1080/00031305.2020.1769727 |journal=The American Statistician |volume=74 |issue=3 |pages=315 |doi=10.1080/00031305.2020.1769727 |s2cid=219493070 |url-access=subscription}}</ref>
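The variance ordering of the three estimators can be checked empirically. The following is a minimal Monte Carlo sketch (an illustration, not taken from the cited paper); the values of <math>\theta</math>, <math>k</math>, <math>n</math>, and the number of replications are arbitrary choices:

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo comparison of the three estimators; theta, k, n, and the
# number of replications are arbitrary illustrative choices.
rng = np.random.default_rng(0)
theta, k, n, reps = 10.0, 0.5, 20, 100_000

# Each row is one sample of size n from U((1-k)*theta, (1+k)*theta).
x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
x_min, x_max = x.min(axis=1), x.max(axis=1)

# Rao–Blackwell improvement of X_1: the midrange of the sample.
theta_rb = (x_min + x_max) / 2

# Lower-variance unbiased estimator.
theta_lv = ((1 - k) * x_min + (1 + k) * x_max) / (2 * (k**2 * (n - 1) / (n + 1) + 1))

# Unbiased generalized Bayes estimator.
r = (x_min * (1 + k)) / (x_max * (1 - k))
theta_bayes = (n + 1) / n * (1 - (r - 1) / (r ** (n + 1) - 1)) * x_max / (1 + k)

for name, est in [("RB", theta_rb), ("LV", theta_lv), ("BAYES", theta_bayes)]:
    print(f"{name:5s}  mean = {est.mean():.4f}  var = {est.5f}" if False else
          f"{name:5s}  mean = {est.mean():.4f}  var = {est.var():.5f}")
</syntaxhighlight>

With these settings the printed means should all stay close to <math>\theta = 10</math> (all three estimators are unbiased), while the empirical variances should decrease from <math>\hat{\theta}_{RB}</math> to <math>\hat{\theta}_{LV}</math> to <math>\hat{\theta}_\text{BAYES}</math>, matching the ordering stated above.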