==Challenges, settings and related issues==

Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating the <math>f</math>-function) multiple times. Depending on the complexity of the model, many challenges may be encountered during model evaluation. The choice of sensitivity analysis method is therefore typically dictated by a number of problem constraints, settings or challenges. Some of the most common are:

* '''Computational expense:''' Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a [[Sampling (statistics)|sampling]]-based approach.<ref>{{cite journal |first1=J. C. |last1=Helton |first2=J. D. |last2=Johnson |first3=C. J. |last3=Salaberry |first4=C. B. |last4=Storlie |year=2006 |title=Survey of sampling based methods for uncertainty and sensitivity analysis |journal=Reliability Engineering and System Safety |volume=91 |issue=10–11 |pages=1175–1209 |doi=10.1016/j.ress.2005.11.017 |url=https://digital.library.unt.edu/ark:/67531/metadc891681/ }}</ref> This can be a significant problem when:
** A single run of the model takes a significant amount of time (minutes, hours or longer), as is often the case with complex models. Using a statistical model ([[#Metamodels|meta-model]], [[data-driven model]]) such as [[#High-dimensional model representations (HDMR)|HDMR]] to approximate the <math>f</math>-function is one way of reducing the computational cost.
** The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the [[Dimension|multidimensional input space]], which grows exponentially in size with the number of inputs. Screening methods can therefore be useful for dimension reduction. Another way to tackle the [[curse of dimensionality]] is to use sampling based on [[low discrepancy sequence|low discrepancy sequences]].<ref>{{cite journal |last1=Tsvetkova |first1=O. |last2=Ouarda |first2=T.B.M.J. |year=2019 |title=Quasi-Monte Carlo technique in global sensitivity analysis of wind resource assessment with a study on UAE |journal=Journal of Renewable and Sustainable Energy |volume=11 |issue=5 |page=053303 |doi=10.1063/1.5120035 |s2cid=208835771 |url=http://espace.inrs.ca/id/eprint/9701/1/P3626.pdf }}</ref>
* '''Correlated inputs:''' Most common sensitivity analysis methods assume [[Independence (probability theory)|independence]] between model inputs, but sometimes inputs can be strongly correlated. Correlations between inputs must then be taken into account in the analysis.<ref name="Gamboa">{{cite journal |last1=Chastaing |first1=G. |last2=Gamboa |first2=F. |last3=Prieur |first3=C. |year=2012 |title=Generalized Hoeffding-Sobol decomposition for dependent variables - application to sensitivity analysis |url=https://projecteuclid.org/euclid.ejs/1356098617 |journal=[[Electronic Journal of Statistics]] |volume=6 |pages=2420–2448 |doi=10.1214/12-EJS749 |issn=1935-7524 |arxiv=1112.1788}}</ref>
* '''Nonlinearity:''' Some sensitivity analysis approaches, such as those based on [[linear regression]], can inaccurately measure sensitivity when the model response is [[nonlinear system|nonlinear]] with respect to its inputs. In such cases, [[Variance-based sensitivity analysis|variance-based measures]] are more appropriate.
* '''Multiple or functional outputs:''' Generally introduced for [[univariate|single-output codes]], sensitivity analysis extends to cases where the output <math>Y</math> is a vector or function.<ref name="Gamboa multidim and f outputs">{{cite journal |last1=Gamboa |first1=F. |last2=Janon |first2=A. |last3=Klein |first3=T. |last4=Lagnoux |first4=A. |year=2014 |title=Sensitivity analysis for multidimensional and functional outputs |url=https://projecteuclid.org/euclid.ejs/1400592265 |journal=[[Electronic Journal of Statistics]] |volume=8 |pages=575–603 |doi=10.1214/14-EJS895 |issn=1935-7524 |arxiv=1311.1797}}</ref> Correlation between outputs does not preclude performing a separate sensitivity analysis for each output of interest, but the resulting sensitivity measures can be hard to interpret.
* '''Stochastic code:''' A code is said to be stochastic when several evaluations with the same inputs give different outputs (as opposed to a deterministic code, for which the same inputs always give the same output). In this case, it is necessary to separate the variability of the output due to the variability of the inputs from that due to stochasticity.<ref name="Marrel GSA for stochastic code">{{cite journal |last1=Marrel |first1=A. |last2=Iooss |first2=B. |last3=Da Veiga |first3=S. |last4=Ribatet |first4=M. |year=2012 |title=Global sensitivity analysis of stochastic computer models with joint metamodels |url=https://link.springer.com/article/10.1007/s11222-011-9274-8 |journal=[[Statistics and Computing]] |volume=22 |issue=3 |pages=833–847 |doi=10.1007/s11222-011-9274-8 |issn=0960-3174 |arxiv=0802.0443}}</ref>
* '''Data-driven approach:''' Sometimes it is not possible to evaluate the code at all desired points, either because the code is confidential or because the experiment is not reproducible. The code output is only available for a given set of points, and it can be difficult to perform a sensitivity analysis on such a limited set of data. A statistical model ([[#Metamodels|meta-model]], [[data-driven model]]) is then built from the available data (used for training) to approximate the code (the <math>f</math>-function).<ref name="Marrel Kriging">{{cite journal |last1=Marrel |first1=A. |last2=Iooss |first2=B. |last3=Van Dorpe |first3=F. |last4=Volkova |first4=E. |year=2008 |title=An efficient methodology for modeling complex computer codes with Gaussian processes |url=http://www.sciencedirect.com/science/article/pii/S0167947308001758 |journal=[[Computational Statistics & Data Analysis]] |volume=52 |issue=10 |pages=4731–4744 |doi=10.1016/j.csda.2008.03.026 |arxiv=0802.1099}}</ref>

To address these constraints and challenges, a number of sensitivity analysis methods have been proposed in the literature; they are examined in the next section.
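The sampling-based approach can be sketched concretely: the snippet below estimates first-order variance-based (Sobol) indices of a hypothetical toy model using a pick-freeze design. The model, input ranges and sample size are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model standing in for the expensive f-function.
def f(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 2]

n, d = 100_000, 3
# Two independent input samples (the "pick-freeze" design).
a = rng.uniform(-1.0, 1.0, size=(n, d))
b = rng.uniform(-1.0, 1.0, size=(n, d))

ya, yb = f(a), f(b)
var_y = ya.var()

first_order = []
for i in range(d):
    ab = b.copy()
    ab[:, i] = a[:, i]  # freeze input i, redraw the others
    # First-order index S_i = Var(E[Y|X_i]) / Var(Y), pick-freeze estimate.
    first_order.append(np.mean(ya * (f(ab) - yb)) / var_y)
```

Since this toy model is additive (no interactions between inputs), the three estimated indices sum to approximately one.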
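The low discrepancy sequences mentioned under computational expense can be illustrated with a Halton sequence, built from the radical-inverse (van der Corput) sequence; the helper names below are assumptions for this sketch.

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the radical-inverse (van der Corput) sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

def halton(n, dims):
    """Halton points in [0, 1)^dims: one radical-inverse stream per prime base."""
    primes = [2, 3, 5, 7, 11, 13]
    return np.column_stack([van_der_corput(n, primes[j]) for j in range(dims)])

pts = halton(512, 2)
```

Unlike pseudorandom points, these fill the unit hypercube with deliberately even coverage, which is what improves the convergence of quasi-Monte Carlo estimates in moderate dimension.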
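The nonlinearity point can be made concrete: for <math>Y = X^2</math> with <math>X</math> uniform on <math>[-1, 1]</math>, a linear-regression-based measure reports near-zero sensitivity even though <math>X</math> fully determines <math>Y</math>, while a variance-based first-order index (here roughly estimated by binning the conditional mean, an illustrative shortcut) is close to one.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=50_000)
y = x ** 2  # fully determined by x, but nonlinearly

# Regression-based measure: squared correlation (R^2 of a linear fit).
linear_measure = np.corrcoef(x, y)[0, 1] ** 2

# Variance-based first-order index Var(E[Y|X]) / Var(Y),
# crudely estimated via conditional means over bins of x.
bins = np.digitize(x, np.linspace(-1.0, 1.0, 51))
labels = np.unique(bins)
cond_means = np.array([y[bins == b].mean() for b in labels])
weights = np.array([(bins == b).mean() for b in labels])
s_x = np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()
```

Here `linear_measure` is essentially zero while `s_x` is close to one, so a regression-based analysis would wrongly dismiss the only input that matters.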
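For a stochastic code, replicating runs at each input point lets the two sources of output variability be separated through the law of total variance; the toy stochastic code below (a deterministic part plus Gaussian noise) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy stochastic code: deterministic part sin(x) plus intrinsic noise.
def stochastic_code(x, n_rep):
    return np.sin(x) + rng.normal(0.0, 0.2, size=n_rep)

n, m = 2000, 50  # n input samples, m replications each
x = rng.uniform(0.0, np.pi, size=n)
runs = np.array([stochastic_code(xi, m) for xi in x])  # shape (n, m)

# Law of total variance: Var(Y) = Var_x(E[Y|x]) + E_x(Var(Y|x)).
intrinsic = runs.var(axis=1, ddof=1).mean()        # noise part, E_x Var(Y|x)
means = runs.mean(axis=1)
input_driven = means.var(ddof=1) - intrinsic / m   # input part, Var_x E[Y|x]
```

The `- intrinsic / m` term removes the bias that the residual noise in each replication mean would otherwise add to the input-driven variance estimate.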
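The data-driven setting can be sketched with a small Gaussian-process (kriging-type) meta-model conditioned on a fixed set of code runs; the stand-in code, design size and kernel length-scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a code that can no longer be run: only these 40
# input/output pairs are assumed to be available for training.
def code(x):
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1]

x_train = rng.uniform(0.0, 1.0, size=(40, 2))
y_train = code(x_train)

def rbf(a, b, length=0.3):
    """Squared-exponential covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length**2)

# Condition a zero-mean GP prior on the training data (kriging mean);
# the small jitter on the diagonal keeps the solve numerically stable.
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def metamodel(x_new):
    return rbf(x_new, x_train) @ alpha
```

The cheap `metamodel` can then be evaluated freely in place of the code, e.g. to generate the large input samples that uncertainty or sensitivity analysis requires.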