Sensitivity analysis
==Pitfalls and difficulties==

Some common difficulties in sensitivity analysis include:

* '''Assumptions vs. inferences:''' In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input [[:wikt:assumption|assumptions]] and how wide the resulting [[inference]] may be. The point is well illustrated by the econometrician [[Edward E. Leamer]]:<ref>{{cite journal |first=Edward E. |last=Leamer |title=Let's Take the Con Out of Econometrics |journal=[[American Economic Review]] |volume=73 |issue=1 |year=1983 |pages=31–43 |jstor=1803924 }}</ref><ref>{{cite journal |first=Edward E. |last=Leamer |title=Sensitivity Analyses Would Help |journal=[[American Economic Review]] |volume=75 |issue=3 |year=1985 |pages=308–313 |jstor=1814801 }}</ref> <blockquote>"I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful."</blockquote>
: Note that Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions, or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when ''uncertainties in inputs must be suppressed lest outputs become indeterminate''.<ref>Ravetz, J.R., 2007, ''No-Nonsense Guide to Science'', New Internationalist Publications Ltd.</ref>
* '''Not enough information to build probability distributions for the inputs:''' Probability distributions can be constructed from [[expert elicitation]], although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
* '''Unclear purpose of the analysis:''' If different statistical tests and measures are applied to the problem, different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis; e.g., one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
* '''Too many model outputs are considered:''' This may be acceptable for the quality assurance of sub-models, but should be avoided when presenting the results of the overall analysis.
* '''Piecewise sensitivity:''' This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it might overlook interactions among factors in different sub-models (Type II error).
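The Monte Carlo filtering idea mentioned above can be sketched as follows: sample the inputs, split the runs into "behavioral" and "non-behavioral" according to a condition on the output, then compare, factor by factor, the input distributions of the two groups; a large separation flags a factor as influential for producing the behavior of interest. This is an illustrative sketch, not part of the article: the function names, the toy model, and the uniform sampling on [0, 1] are all assumptions chosen for the example, and the two-sample Kolmogorov–Smirnov distance is used as one common choice of separation measure.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov distance between samples a and b."""
    a, b = np.sort(a), np.sort(b)
    data = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, data, side="right") / len(a)
    cdf_b = np.searchsorted(b, data, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def monte_carlo_filtering(model, n_factors, n_samples, threshold, seed=None):
    """Rank input factors by how strongly they separate 'behavioral' runs
    (here: output below threshold) from 'non-behavioral' runs."""
    rng = np.random.default_rng(seed)
    # Assumed for illustration: independent uniform inputs on [0, 1].
    x = rng.uniform(0.0, 1.0, size=(n_samples, n_factors))
    y = np.apply_along_axis(model, 1, x)
    behavioral = y < threshold
    # One KS distance per factor; larger = more responsible for the behavior.
    return [ks_statistic(x[behavioral, i], x[~behavioral, i])
            for i in range(n_factors)]

# Hypothetical toy model: output depends strongly on x0, weakly on x1,
# and not at all on x2.
model = lambda x: 4.0 * x[0] + 0.5 * x[1]
d = monte_carlo_filtering(model, n_factors=3, n_samples=2000,
                          threshold=2.0, seed=42)
```

In this sketch `d[0]` comes out much larger than the distance for the inert factor `x2`, which hovers near the sampling-noise level, reflecting the ranking one would expect from the model.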