==== Scaling ====
Shifting probability mass into the event region <math>\{ X \ge t \}</math> by positive scaling of the random variable <math>X\,</math> with a number greater than unity has the effect of increasing the variance (and also the mean) of the density function. This results in a heavier tail of the density, leading to an increase in the event probability. Scaling is probably one of the earliest biasing methods known and has been used extensively in practice. It is simple to implement and usually provides conservative simulation gains compared with other methods.

In importance sampling by scaling, the simulation density is chosen as the density function of the scaled random variable <math>aX\,</math>, where usually <math>a>1</math> for tail probability estimation. By transformation,

:<math> f_*(x)=\frac{1}{a} f \bigg( \frac{x}{a} \bigg)\,</math>

and the weighting function is

:<math> W(x)= a \frac{f(x)}{f(x/a)} \,</math>

While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary region <math>X<t\,</math>, which is undesirable. If <math>X\,</math> is a sum of <math>n\,</math> random variables, the spreading of mass takes place in an <math>n\,</math>-dimensional space. The consequence is a decreasing importance sampling gain as <math>n\,</math> increases; this is called the dimensionality effect.

A modern variant of importance sampling by scaling is so-called sigma-scaled sampling (SSS), which runs multiple Monte Carlo (MC) analyses with different scaling factors. In contrast to many other high-yield estimation methods (such as worst-case distances, WCD), SSS does not suffer much from the dimensionality problem, and addressing multiple MC outputs causes no degradation in efficiency. On the other hand, like WCD, SSS is designed only for Gaussian statistical variables, and in contrast to WCD, it is not designed to provide accurate statistical corners. Another disadvantage of SSS is that MC runs with large scale factors may become difficult, e.g. due to model and simulator convergence problems. In addition, SSS involves a strong bias-variance trade-off: large scale factors give quite stable yield results, but the larger the scale factor, the larger the bias error. If these advantages of SSS do not matter much in the application of interest, other methods are often more efficient.
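The scaling transformation above can be illustrated with a minimal numerical sketch, here assuming a standard normal <math>X</math>; the threshold <code>t</code>, scale factor <code>a</code>, and sample size <code>n</code> are arbitrary illustrative choices, not values from the text.

<syntaxhighlight lang="python">
# Sketch: importance sampling by scaling for the tail probability P(X >= t),
# assuming X ~ N(0, 1). Sampling density f_*(x) = (1/a) f(x/a), i.e. the
# density of a*X, and weighting function W(x) = a * f(x) / f(x/a).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

t = 4.0        # tail threshold (illustrative)
a = 2.0        # scale factor a > 1 makes the sampling density heavier-tailed
n = 100_000    # number of samples

# Draw from the scaled density: simulate a*X with X ~ N(0, 1).
y = a * rng.standard_normal(n)

# Likelihood ratio f / f_* evaluated at the samples.
w = a * norm.pdf(y) / norm.pdf(y / a)

# Importance sampling estimate of P(X >= t).
p_is = np.mean((y >= t) * w)

# Plain Monte Carlo estimate for comparison (very noisy for rare events).
p_mc = np.mean(rng.standard_normal(n) >= t)

print(f"importance sampling estimate: {p_is:.3e}")
print(f"plain Monte Carlo estimate:   {p_mc:.3e}")
print(f"exact value:                  {norm.sf(t):.3e}")
</syntaxhighlight>

Because each weighted sample has expectation <math>P(X \ge t)</math> under the scaled density, the estimator is unbiased for any fixed <math>a</math>; the choice of scale factor only affects its variance, which is the trade-off the scaling method exploits.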