Sensitivity analysis
{{short description|Study of uncertainty in the output of a mathematical model or system}} {{MOS|date=September 2024}} '''Sensitivity analysis''' is the study of how the [[uncertainty]] in the output of a [[mathematical model]] or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs.<ref name="SA def book">{{cite book |last1=Saltelli|first1=A. |last2=Ratto|first2=M. |last3=Andres|first3=T. |last4=Campolongo|first4=F. |last5=Cariboni |first5=J. |last6=Gatelli |first6=D. |last7=Saisana |first7=M. |last8=Tarantola |first8=S. |year=2008|title=Global sensitivity analysis: the primer |publisher=John Wiley & Sons |doi=10.1002/9780470725184|isbn=978-0-470-05997-5 }}</ref><ref name="SA def">{{cite book |last1=Saltelli|first1=A. |last2=Tarantola|first2=S. |last3=Campolongo|first3=F. |last4=Ratto |first4=M. |year=2004|title=Sensitivity analysis in practice: a guide to assessing scientific models |volume=1 |doi=10.1002/0470870958|isbn=978-0-470-87093-8 }}</ref> This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice is [[uncertainty analysis]], which has a greater focus on [[uncertainty quantification]] and [[propagation of uncertainty]]; ideally, uncertainty and sensitivity analysis should be run in tandem. ==Motivation== A [[mathematical model]] (for example in biology, climate change, economics, renewable energy, agronomy...) can be highly complex, and as a result, the relationships between its inputs and outputs may be poorly understood. In such cases, the model can be viewed as a [[black box]], i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of [[Uncertainty quantification|uncertainty]], including [[Measurement uncertainty|errors of measurement]], errors in input data, parameter estimation and approximation procedures, absence of information, poor or partial understanding of the driving forces and mechanisms, the choice of the model's underlying hypotheses, and so on. This uncertainty limits our confidence in the [[Reliability (statistics)|reliability]] of the model's response or output. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of [[stochastic]] events.<ref>{{cite journal |last1=Der Kiureghian |first1=A. |last2=Ditlevsen |first2=O. |year=2009 |title=Aleatory or epistemic? Does it matter? |journal=Structural Safety |volume=31 |issue=2 |pages=105–112 |doi=10.1016/j.strusafe.2008.06.020}}</ref> In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance and can be useful to determine the impact of an uncertain variable for a range of purposes,<ref name="Examples">{{cite journal |last=Pannell |first=D. J. |year=1997 |title=Sensitivity Analysis of Normative Economic Models: Theoretical Framework and Practical Strategies |journal=Agricultural Economics |volume=16 |issue= 2|pages=139–152 |doi=10.1111/j.1574-0862.1997.tb00449.x|url= https://doi.org/10.1111/j.1574-0862.1997.tb00449.x }}</ref> including: * Testing the [[robust decision|robustness]] of the results of a model or system in the presence of uncertainty. * Increased understanding of the relationships between input and output variables in a system or model.
* Uncertainty reduction, through the identification of the model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness. * Searching for errors in the model (by encountering unexpected relationships between inputs and outputs). * Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure. * Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive). * Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see [[optimization]] and [[Monte Carlo method|Monte Carlo filtering]]). * For calibration of models with a large number of parameters, by focusing on the sensitive parameters.<ref name="Hydrology">{{cite journal |last1=Bahremand |first1=A. |last2=De Smedt |first2=F. |year=2008 |title=Distributed Hydrological Modeling and Sensitivity Analysis in Torysa Watershed, Slovakia |journal=Water Resources Management |volume=22 |issue=3 |pages=393–408 |doi=10.1007/s11269-007-9168-x |bibcode=2008WatRM..22..393B |s2cid=9710579 }}</ref> * To identify important connections between observations, model inputs, and predictions or forecasts, leading to the development of better models.<ref name="Model Analysis">{{cite journal |last1=Hill |first1=M. |last2=Kavetski |first2=D. |last3=Clark |first3=M. |last4=Ye |first4=M. |last5=Arabi |first5=M. |last6=Lu |first6=D. |last7=Foglia |first7=L. |last8=Mehl |first8=S. |year=2015 |title=Practical use of computationally frugal model analysis methods |journal=Groundwater |volume=54 |issue=2 |pages=159–170 |doi=10.1111/gwat.12330|pmid=25810333 |osti=1286771 |doi-access=free }}</ref><ref name="Methods and Guidelines">{{cite book |last1=Hill |first1=M. |last2=Tiedeman |first2=C. |year=2007 |title=Effective Groundwater Model Calibration, with Analysis of Data, Sensitivities, Predictions, and Uncertainty |publisher=John Wiley & Sons }}</ref> == Mathematical formulation and vocabulary == [[File:Sensitivity scheme.jpg|thumb|right | upright=2 | Figure 1. Schematic representation of uncertainty analysis and sensitivity analysis. In mathematical modeling, uncertainty arises from a variety of sources – errors in input data, parameter estimation and approximation procedures, underlying hypotheses, choice of model, alternative model structures and so on. These uncertainties propagate through the model and have an impact on the output. The uncertainty in the output is described via uncertainty analysis (represented by the [[Probability density function|pdf]] of the output) and the relative importance of each source is quantified via sensitivity analysis (represented by [[pie chart]]s showing the proportion that each source of uncertainty contributes to the total uncertainty of the output).]] The object of study for sensitivity analysis is a function <math>f</math> (called "'''mathematical model'''" or "'''programming code'''"), viewed as a [[black box]], with the <math>p</math>-dimensional '''input''' vector <math>X=(X_1,...,X_p)</math> and the '''output''' <math>Y</math>, written as follows: <math display="block">Y=f(X).</math> The variability in the input parameters <math>X_i,i=1,\ldots,p</math> has an impact on the output <math>Y</math>.
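As a minimal illustration of this formulation (and of how input variability propagates to the output), the following Python sketch defines a hypothetical model <math>f</math> with <math>p=3</math> inputs and samples it under assumed input distributions; the function and the distributions are placeholders chosen only for the example, not taken from any particular application.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical black-box model f with p = 3 inputs (placeholder for illustration).
def f(x):
    # x has shape (n, 3); returns the output Y for each input vector.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + np.sin(x[:, 2])

rng = np.random.default_rng(seed=0)
n = 10_000

# Assumed input uncertainties: each X_i is described by a probability distribution.
X = np.column_stack([
    rng.normal(0.0, 1.0, n),    # X_1 ~ N(0, 1)
    rng.uniform(-1.0, 1.0, n),  # X_2 ~ U(-1, 1)
    rng.normal(0.0, 0.5, n),    # X_3 ~ N(0, 0.5)
])

Y = f(X)  # propagate the input uncertainty through the model
print("mean of Y:", Y.mean(), "variance of Y:", Y.var())
</syntaxhighlight>
Running the model on such a sample yields an empirical distribution of <math>Y</math>, which is the starting point for both uncertainty analysis and the sensitivity analysis methods discussed below.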
While [[uncertainty analysis]] aims to describe the distribution of the output <math>Y</math> (providing its [[statistics]], [[Moment measure|moments]], [[Probability density function|pdf]], [[Cumulative distribution function|cdf]],...), sensitivity analysis aims to measure and quantify the impact of each input <math>X_i</math> or a group of inputs on the variability of the output <math>Y</math> (by calculating the corresponding sensitivity indices). Figure 1 provides a schematic representation of this statement. ==Challenges, settings and related issues== Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating the <math>f</math>-function) multiple times. Depending on the complexity of the model, there are many challenges that may be encountered during model evaluation. Therefore, the choice of method of sensitivity analysis is typically dictated by a number of problem constraints, settings or challenges. Some of the most common are: * '''Computational expense:''' Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a [[Sampling (statistics)|sampling]]-based approach.<ref>{{cite journal |first1=J. C. |last1=Helton |first2=J. D. |last2=Johnson |first3=C. J. |last3=Salaberry |first4=C. B. |last4=Storlie |year=2006 |title=Survey of sampling based methods for uncertainty and sensitivity analysis |journal=Reliability Engineering and System Safety |volume=91 |issue= 10–11|pages=1175–1209 |doi=10.1016/j.ress.2005.11.017|url=https://digital.library.unt.edu/ark:/67531/metadc891681/ }}</ref> This can be a significant problem when: ** The model is time-consuming: a single run of the model takes a significant amount of time (minutes, hours or longer), as is very often the case with complex models. The use of a statistical model ([[#Metamodels|meta-model]], [[data-driven model]]), including [[#High-dimensional model representations (HDMR)|HDMR]], to approximate the <math>f</math>-function is one way of reducing the computation costs. ** The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the [[Dimension|multidimensional input space]], which grows exponentially in size with the number of inputs. Therefore, screening methods can be useful for dimension reduction. Another way to tackle the [[curse of dimensionality]] is to use sampling based on [[low discrepancy sequence|low discrepancy sequences]].<ref>{{cite journal | last1 = Tsvetkova | first1 = O. | last2 = Ouarda | first2 = T.B.M.J. | year = 2019 | title = Quasi-Monte Carlo technique in global sensitivity analysis of wind resource assessment with a study on UAE | journal = Journal of Renewable and Sustainable Energy | volume = 11| issue = 5| page = 053303| doi = 10.1063/1.5120035 | s2cid = 208835771 | url = http://espace.inrs.ca/id/eprint/9701/1/P3626.pdf }}</ref> * '''Correlated inputs:''' Most common sensitivity analysis methods assume [[Independence (probability theory)|independence]] between model inputs, but sometimes inputs can be strongly correlated.
Correlations between inputs must then be taken into account in the analysis.<ref name="Gamboa">{{cite journal |last1=Chastaing|first1=G.|last2=Gamboa|first2=F.|last3=Prieur|first3=C.|year=2012 |title=Generalized Hoeffding-Sobol decomposition for dependent variables - application to sensitivity analysis |url=https://projecteuclid.org/euclid.ejs/1356098617 |journal=[[Electronic Journal of Statistics]] |volume=6 |pages=2420–2448 |doi=10.1214/12-EJS749|issn=1935-7524|arxiv=1112.1788}}</ref> * '''Nonlinearity:''' Some sensitivity analysis approaches, such as those based on [[linear regression]], can inaccurately measure sensitivity when the model response is [[nonlinear system|nonlinear]] with respect to its inputs. In such cases, [[Variance-based sensitivity analysis|variance-based measures]] are more appropriate. * '''Multiple or functional outputs:''' Generally introduced for [[univariate|single-output codes]], sensitivity analysis extends to cases where the output <math>Y</math> is a vector or function.<ref name="Gamboa multidim and f outputs">{{cite journal |last1=Gamboa|first1=F. |last2=Janon|first2=A. |last3=Klein|first3=T. |last4=Lagnoux|first4=A. |year=2014 |title=Sensitivity analysis for multidimensional and functional outputs |url=https://projecteuclid.org/euclid.ejs/1400592265 |journal=[[Electronic Journal of Statistics]] |volume=8 |pages=575–603 |doi=10.1214/14-EJS895|issn=1935-7524|arxiv=1311.1797}}</ref> Correlation between outputs does not preclude performing a separate sensitivity analysis for each output of interest. However, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret. * '''Stochastic code:''' A code is said to be stochastic when, for several evaluations of the code with the same inputs, different outputs are obtained (as opposed to a deterministic code, which always returns the same output for the same inputs). In this case, it is necessary to separate the variability of the output due to the variability of the inputs from that due to stochasticity.<ref name="Marrel GSA for stochastic code">{{cite journal |last1=Marrel|first1=A. |last2=Iooss|first2=B. |last3=Da Veiga|first3=S. |last4=Ribatet|first4=M. |year= 2012 |title=Global sensitivity analysis of stochastic computer models with joint metamodels |url= https://link.springer.com/article/10.1007/s11222-011-9274-8 |journal=[[Statistics and Computing]] |volume=22 |issue=3 |pages=833–847 |doi=10.1007/s11222-011-9274-8 |issn=0960-3174|arxiv=0802.0443}}</ref> * '''Data-driven approach:''' Sometimes it is not possible to evaluate the code at all desired points, either because the code is confidential or because the experiment is not reproducible. The code output is only available for a given set of points, and it can be difficult to perform a sensitivity analysis on a limited set of data. We then build a statistical model ([[#Metamodels|meta-model]], [[data-driven model]]) from the available data (that we use for training) to approximate the code (the <math>f</math>-function).<ref name="Marrel Kriging">{{cite journal |last1=Marrel|first1=A. |last2=Iooss|first2=B. |last3=Van Dorpe|first3=F.
|last4=Volkova|first4=E.|year=2008|title=An efficient methodology for modeling complex computer codes with Gaussian processes|url=http://www.sciencedirect.com/science/article/pii/S0167947308001758 |journal=[[Computational Statistics & Data Analysis]] |volume=52 |issue=10|pages=4731–4744 |doi=10.1016/j.csda.2008.03.026|arxiv=0802.1099}}</ref> To address the various constraints and challenges, a number of methods for sensitivity analysis have been proposed in the literature, which we will examine in the next section. == Sensitivity analysis methods == There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) [[Variance-based sensitivity analysis|variance decompositions]], [[partial derivatives]] or [[elementary effects method|elementary effects]]. In general, however, most procedures adhere to the following outline: # Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note that this can be difficult and many methods exist to elicit uncertainty distributions from subjective data.<ref>{{cite book |last=O'Hagan |first=A. |title=Uncertain Judgements: Eliciting Experts' Probabilities |publisher=Wiley |location=Chichester |year=2006 |isbn= 9780470033302|url=https://books.google.com/books?id=H9KswqPWIDQC |display-authors=etal}}</ref> # Identify the model output to be analysed (the target of interest should ideally have a direct relation to the problem tackled by the model). # Run the model a number of times using some [[design of experiments]],<ref>{{cite journal |last1=Sacks |first1=J. |first2=W. J. |last2=Welch |first3=T. J. |last3=Mitchell |first4=H. P. |last4=Wynn |year=1989 |title=Design and Analysis of Computer Experiments |journal=Statistical Science |volume=4 |issue= 4|pages=409–435 |doi= 10.1214/ss/1177012413|doi-access=free }}</ref> dictated by the method of choice and the input uncertainty. # Using the resulting model outputs, calculate the sensitivity measures of interest. In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis. The various types of "core methods" (discussed below) are distinguished by the various sensitivity measures which are calculated. These categories can overlap to some extent. Alternative ways of obtaining these measures, under the constraints of the problem, are also available. In addition, an engineering view of the methods that takes into account the four important sensitivity analysis parameters has also been proposed.<ref>{{cite book | vauthors=Da Veiga S, Gamboa F, Iooss B, Prieur C | date= 2021 | title=Basics and Trends in Sensitivity Analysis | publisher=SIAM | url=https://blackwells.co.uk/bookshop/product/Basics-and-Trends-in-Sensitivity-Analysis-by-Sbastien-da-Veiga-Fabrice-Gamboa-Bertrand-Iooss-Clmentine-Prieur/9781611976687 | doi=10.1137/1.9781611976694 | isbn=978-1-61197-668-7}}</ref> === Visual analysis === [[File:Scatter plots for sensitivity analysis bis.jpg|thumb|right | upright=2 | Figure 2. Sampling-based sensitivity analysis by scatterplots. ''Y'' (vertical axis) is a function of four factors. The points in the four scatterplots are always the same though sorted differently, i.e. by ''Z''<sub>1</sub>, ''Z''<sub>2</sub>, ''Z''<sub>3</sub>, ''Z''<sub>4</sub> in turn.
Note that the abscissa is different for each plot: (−5, +5) for ''Z''<sub>1</sub>, (−8, +8) for ''Z''<sub>2</sub>, (−10, +10) for ''Z''<sub>3</sub> and ''Z''<sub>4</sub>. ''Z''<sub>4</sub> is most important in influencing ''Y'' as it imparts more 'shape' on ''Y''.]] The first intuitive approach (especially useful in less complex cases) is to analyze the relationship between each input <math>Z_i</math> and the output <math>Y</math> using scatter plots, and observe the behavior of these pairs. The diagrams give an initial idea of the correlations and of which inputs have an impact on the output. Figure 2 shows an example where two inputs, <math>Z_3</math> and <math>Z_4</math>, are highly correlated with the output. === One-at-a-time (OAT) === {{main|One-factor-at-a-time method}} One of the simplest and most common approaches is that of changing one factor at a time (OAT), to see what effect this produces on the output.<ref>{{cite journal |first=J. |last=Campbell |year=2008 |title=Photosynthetic Control of Atmospheric Carbonyl Sulfide During the Growing Season |journal=[[Science (journal)|Science]] |volume=322 |issue=5904 |pages=1085–1088 |doi=10.1126/science.1164015 |display-authors=etal |pmid=19008442|bibcode=2008Sci...322.1085C |s2cid=206515456 |url=http://www.escholarship.org/uc/item/82r9s2x3 }}</ref><ref>{{cite journal |first1=R. |last1=Bailis |first2=M. |last2=Ezzati |first3=D. |last3=Kammen |year=2005 |title=Mortality and Greenhouse Gas Impacts of Biomass and Petroleum Energy Futures in Africa |journal=[[Science (journal)|Science]] |volume=308 |issue= 5718|pages=98–103 |doi=10.1126/science.1106881 |pmid=15802601|bibcode=2005Sci...308...98B |s2cid=14404609 }}</ref><ref>{{cite journal |first=J. |last=Murphy |year=2004 |title=Quantification of modelling uncertainties in a large ensemble of climate change simulations |journal=[[Nature (journal)|Nature]] |volume=430 |issue= 7001|pages=768–772 |doi= 10.1038/nature02771|display-authors=etal |pmid=15306806|bibcode=2004Natur.430..768M|s2cid=980153 }}</ref> OAT customarily involves * moving one input variable, keeping others at their baseline (nominal) values, then, * returning the variable to its nominal value, then repeating for each of the other inputs in the same way. Sensitivity may then be measured by monitoring changes in the output, e.g. by [[partial derivatives]] or [[linear regression]]. This appears to be a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, which are more likely when several input factors are changed simultaneously. OAT is frequently preferred by modelers for practical reasons. In case of model failure under OAT analysis, the modeler immediately knows which input factor is responsible for the failure. Despite its simplicity, however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables.
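As an illustration, a minimal OAT loop might look as follows in Python; the model, the baseline point and the step size are placeholder assumptions, not a prescribed implementation.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical model and baseline (nominal) point, for illustration only.
def f(x):
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

baseline = np.array([1.0, 1.0, 1.0])
step = 0.1  # perturbation applied to one factor at a time

oat_effects = []
for i in range(len(baseline)):
    x = baseline.copy()
    x[i] += step                            # move only the i-th input
    oat_effects.append(f(x) - f(baseline))  # change in output attributable to X_i
    # copying the baseline afresh at each iteration plays the role of returning
    # the perturbed variable to its nominal value before moving to the next input

print(oat_effects)
</syntaxhighlight>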
Because it does not account for the simultaneous variation of input variables, the OAT approach cannot detect the presence of [[Interaction (statistics)|interactions]] between input variables and is unsuitable for nonlinear models.<ref>{{cite journal |last=Czitrom|first=Veronica|author-link= Veronica Czitrom |year=1999 |title=One-Factor-at-a-Time Versus Designed Experiments |journal=American Statistician |volume=53 |issue=2 |pages=126–131 |doi=10.2307/2685731|jstor= 2685731}}</ref> The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. The [[convex hull]] bounding all these points is an [[octahedron]] which has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axes of a hyperrectangle forms a [[hyperoctahedron]] which has a volume fraction of <math>1/n!</math>. With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added.<ref>{{cite journal |last1=Gatzouras |first1=D |last2=Giannopoulos |first2=A |title=Threshold for the volume spanned by random points with independent coordinates |journal=[[Israel Journal of Mathematics]] |date=2009 |volume=169 |issue=1 |pages=125–153 | doi=10.1007/s11856-009-0007-z | doi-access=free}}</ref> While the sparsity of OAT is theoretically not a concern for [[linear model]]s, true linearity is rare in nature. === Morris === {{main|Morris method}} Named after the statistician Max D. Morris, this method is suitable for screening systems with many parameters. It is also known as the method of elementary effects because it combines repeated steps along the various parametric axes.<ref>{{cite journal | vauthors=Morris MD | journal=Technometrics | title=Factorial Sampling Plans for Preliminary Computational Experiments | volume=33 | issue=2 | pages=161–174 | publisher=Taylor & Francis | date= 1991 | doi=10.2307/1269043| jstor=1269043 }}</ref> === Derivative-based local methods === Local derivative-based methods involve taking the [[partial derivative]] of the output <math>Y</math> with respect to an input factor <math>X_i</math>: :<math> \left. \frac{\partial Y}{\partial X_i} \right|_{\textbf {x}^0 }, </math> where the subscript '''x'''<sup>0</sup> indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling<ref>{{cite book |last=Cacuci |first=Dan G. |title=Sensitivity and Uncertainty Analysis: Theory |volume=I |publisher=Chapman & Hall }}</ref><ref>{{cite book |last1=Cacuci |first1=Dan G. |first2=Mihaela |last2=Ionescu-Bujor |first3=Michael |last3=Navon |year=2005 |title=Sensitivity and Uncertainty Analysis: Applications to Large-Scale Systems |volume=II |publisher=Chapman & Hall }}</ref> and automatic differentiation<ref>{{cite book |last=Griewank |first=A. |year=2000 |title=Evaluating Derivatives, Principles and Techniques of Algorithmic Differentiation |publisher=SIAM }}</ref> are methods which allow one to compute all partial derivatives at a cost of at most 4–6 times that of evaluating the original function.
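When adjoint models or automatic differentiation are not available, these local indices can be approximated by finite differences around the nominal point, as in the following sketch (the model <math>f</math>, the point <math>\textbf{x}^0</math> and the step size are placeholder assumptions for illustration).
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical model, for illustration only.
def f(x):
    return x[0] ** 2 + np.exp(x[1]) + x[0] * x[2]

x0 = np.array([1.0, 0.5, 2.0])  # fixed point at which the derivatives are taken
h = 1e-6                        # finite-difference step

local_sensitivities = np.empty(len(x0))
for i in range(len(x0)):
    x_plus = x0.copy()
    x_plus[i] += h
    # forward-difference approximation of dY/dX_i evaluated at x0
    local_sensitivities[i] = (f(x_plus) - f(x0)) / h

print(local_sensitivities)  # close to [2*x0[0] + x0[2], exp(x0[1]), x0[0]]
</syntaxhighlight>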
Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is possible to select similar samples from derivative-based sensitivity through neural networks and perform uncertainty quantification. One advantage of the local methods is that it is possible to make a matrix to represent all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods if there is a large number of input and output variables.<ref name="Possible">[https://ieeexplore.ieee.org/abstract/document/9206746 Kabir HD, Khosravi A, Nahavandi D, Nahavandi S. Uncertainty Quantification Neural Network from Similarity and Sensitivity. In 2020 International Joint Conference on Neural Networks (IJCNN) 2020 Jul 19 (pp. 1-8). IEEE.]</ref> === Regression analysis === [[Regression analysis]], in the context of sensitivity analysis, involves fitting a [[linear regression]] to the model response and using [[Standardized coefficient|standardized regression coefficients]] as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardized coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the [[coefficient of determination]] is large. The advantages of regression analysis are that it is simple and has a low computational cost. === Variance-based methods === {{Main|Variance-based sensitivity analysis}} Variance-based methods<ref>{{cite journal | last1 = Sobol' | first1 = I | year = 1990 | title = Sensitivity estimates for nonlinear mathematical models | journal = Matematicheskoe Modelirovanie | volume = 2 | pages = 112–118 | language = ru }}; translated into English in {{cite journal | last1 = Sobol' | first1 = I | year = 1993 | title = Sensitivity analysis for non-linear mathematical models | journal = Mathematical Modeling & Computational Experiment | volume = 1 | pages = 407–414 }}</ref> are a class of probabilistic approaches which quantify the input and output uncertainties as [[random variable]]s, represented via their [[probability distribution]]s, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input. This amount is quantified and calculated using '''Sobol indices''': they represent the proportion of variance explained by an input or group of inputs. For an input <math>X_i</math>, the ''' ''first-order sensitivity index'' ''' (also known as the ''' ''main effect index'' ''' or ''' ''main Sobol index'' ''') is defined as follows: <math display="block">S_i=\frac{V(\mathbb{E}[Y\vert X_i])}{V(Y)},</math> where <math>V(\cdot)</math> and <math>\mathbb{E}[\cdot]</math> denote the variance and expected value operators respectively. This index essentially measures the contribution of <math>X_i</math> alone to the uncertainty (variance) in <math>Y</math> (averaged over variations in the other variables). Importantly, the first-order sensitivity index of <math>X_i</math> does not measure the uncertainty caused by interactions <math>X_i</math> has with other variables.
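As an illustration, the first-order indices of a hypothetical model can be estimated by plain Monte Carlo using a standard pick-freeze (Saltelli-type) estimator; the model, the input distributions and the sample size below are placeholder assumptions, and dedicated sensitivity-analysis libraries would normally be used in practice.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical model with p = 3 independent inputs (placeholder for illustration).
def f(x):
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 1] * x[:, 2]

rng = np.random.default_rng(seed=1)
n, p = 100_000, 3

# Two independent sample matrices of the inputs (here all assumed U(0, 1)).
A = rng.uniform(0.0, 1.0, (n, p))
B = rng.uniform(0.0, 1.0, (n, p))
f_A, f_B = f(A), f(B)
var_Y = np.var(np.concatenate([f_A, f_B]))

first_order = []
for i in range(p):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]  # replace only the i-th column of A with that of B
    # pick-freeze estimate of V(E[Y | X_i]) / V(Y); estimates are noisy for finite n
    first_order.append(np.mean(f_B * (f(AB_i) - f_A)) / var_Y)

print(first_order)
</syntaxhighlight>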
A further measure, known as the ''' ''total effect index'' ''', gives the total variance in <math>Y</math> caused by <math>X_i</math> and its interactions with any of the other input variables. The total effect index is given as follows: <math display="block">S_i^T=1-\frac{V(\mathbb{E}[Y\vert X_{\sim i}])}{V(Y)},</math> where <math>X_{\sim i} = (X_1,...,X_{i-1},X_{i+1},...,X_p)</math> denotes the set of all input variables except <math>X_i</math>. Variance-based methods allow full exploration of the input space, accounting for interactions and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of [[Monte Carlo integration|Monte Carlo]] methods, but since this can involve many thousands of model runs, other methods (such as metamodels) can be used to reduce computational expense when necessary. === Moment-independent methods === Moment-independent methods extend variance-based techniques by considering the probability density or cumulative distribution function of the model output <math>Y</math>. Thus, they do not refer to any particular [[Moment (mathematics)|moment]] of <math>Y</math>, whence the name. The moment-independent sensitivity measures of <math>X_i</math>, here denoted by <math>\xi_i</math>, can be defined through an equation similar to that of the variance-based indices, replacing the conditional expectation with a distance, as <math>\xi_i=\mathbb{E}[d(P_Y,P_{Y|X_i})]</math>, where <math>d(\cdot,\cdot)</math> is a [[statistical distance]] (a metric or divergence) between probability measures, and <math>P_Y</math> and <math>P_{Y|X_i}</math> are the marginal and [[conditional probability]] measures of <math>Y</math>.<ref name="Borgonovo2014">{{Cite journal |vauthors=Borgonovo E, Tarantola S, Plischke E, Morris MD |date=2014 |title=Transformations and invariance in the sensitivity analysis of computer experiments |journal=Journal of the Royal Statistical Society |series=Series B (Statistical Methodology) |volume=76 |issue=5 |pages=925–947 |doi=10.1111/rssb.12052 |issn=1369-7412}}</ref> If <math>d(\cdot,\cdot)\geq 0</math> is a [[Statistical distance|distance]], the moment-independent global sensitivity measure satisfies zero-independence.
This is a relevant statistical property also known as Rényi's postulate D.<ref name="Renyi">{{Cite journal |last=Rényi |first=A |date=1 September 1959 |title=On measures of dependence |journal=Acta Mathematica Academiae Scientiarum Hungaricae |volume=10 |issue=3 |pages=441–451 |doi=10.1007/BF02024507 |issn=1588-2632}}</ref> The class of moment-independent sensitivity measures includes indicators such as the <math>\delta</math>-importance measure,<ref name="Borgonovo2007">{{Cite journal |vauthors=Borgonovo E |date=June 2007 |title=A new uncertainty importance measure |journal=Reliability Engineering & System Safety |volume=92 |issue=6 |pages=771–784 |doi=10.1016/J.RESS.2006.04.015 |issn=0951-8320}}</ref> the new correlation coefficient of Chatterjee,<ref name="Chatterjee">{{Cite journal |vauthors=Chatterjee S |date=2 October 2021 |title=A New Coefficient of Correlation |journal=Journal of the American Statistical Association |volume=116 |issue=536 |pages=2009–2022 |arxiv=1909.10140 |doi=10.1080/01621459.2020.1758115 |issn=0162-1459}}</ref> the Wasserstein correlation of Wiesel<ref name="Wiesel">{{Cite journal |vauthors=Wiesel JC |date=November 2022 |title=Measuring association with Wasserstein distances |journal=Bernoulli |volume=28 |issue=4 |pages=2816–2832 |arxiv=2102.00356 |doi=10.3150/21-BEJ1438 |issn=1350-7265}}</ref> and the kernel-based sensitivity measures of Barr and Rabitz.<ref name="Barr">{{Cite journal |vauthors=Barr J, Rabitz H |date=31 March 2022 |title=A Generalized Kernel Method for Global Sensitivity Analysis |journal=SIAM/ASA Journal on Uncertainty Quantification |publisher=Society for Industrial and Applied Mathematics |volume=10 |issue=1 |pages=27–54 |doi=10.1137/20M1354829}}</ref> Another measure for global sensitivity analysis, in the category of moment-independent approaches, is the PAWN index.<ref name="PAWN">{{Cite journal |vauthors=Pianosi F, Wagener T |date=2015 |title=A simple and efficient method for global sensitivity analysis based on cumulative distribution functions |journal=Environmental Modelling & Software |volume=67 |pages=1–11 |bibcode=2015EnvMS..67....1P |doi=10.1016/j.envsoft.2015.01.004 |doi-access=free}}</ref> {{citation needed span|It relies on [[cumulative distribution functions|Cumulative Distribution Functions]] (CDFs) to characterize the maximum distance between the unconditional output distribution (obtained by varying all input parameters) and the conditional output distribution (obtained by fixing the <math>i</math>-th input). The difference between the unconditional and conditional output distributions is usually calculated using the [[Kolmogorov–Smirnov test]] (KS). The PAWN index for a given input parameter is then obtained by calculating the summary statistics over all KS values.|date=October 2024}} === Variogram analysis of response surfaces (''VARS'') === One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the model <math>Y=f(X)</math> in the parameter space.
By utilizing the concepts of directional [[variogram]]s and covariograms, variogram analysis of response surfaces (VARS) addresses this weakness by recognizing a spatially continuous correlation structure in the values of <math>Y</math>, and hence also in the values of <math> \frac{\partial Y}{\partial x_i} </math>.<ref>{{cite journal|last1=Razavi|first1=Saman|last2=Gupta|first2=Hoshin V.|title=A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory|journal=Water Resources Research|date=January 2016|volume=52|issue=1|pages=423–439|doi=10.1002/2015WR017558|language=en|issn=1944-7973|bibcode=2016WRR....52..423R|doi-access=free}}</ref><ref>{{cite journal|last1=Razavi|first1=Saman|last2=Gupta|first2=Hoshin V.|title=A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application|journal=Water Resources Research|date=January 2016|volume=52|issue=1|pages=440–455|doi=10.1002/2015WR017559|language=en|issn=1944-7973|bibcode=2016WRR....52..440R|doi-access=free}}</ref> Essentially, the higher the variability, the more heterogeneous the response surface is along a particular direction/parameter at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional [[variogram]]s for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods.<ref>{{cite journal|last1=Haghnegahdar|first1=Amin|last2=Razavi|first2=Saman|title=Insights into sensitivity analysis of Earth and environmental systems models: On the impact of parameter perturbation scale|journal=Environmental Modelling & Software|date=September 2017|volume=95|pages=115–131|doi=10.1016/j.envsoft.2017.03.031|bibcode=2017EnvMS..95..115H }}</ref> More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient).<ref>{{cite book|last1=Gupta|first1=H|last2=Razavi|first2=S|editor1-last=Petropoulos|editor1-first=George|editor2-last=Srivastava|editor2-first=Prashant|title=Sensitivity Analysis in Earth Observation Modelling|date=2016|isbn=9780128030318|pages=397–415|edition=1st|chapter-url=https://www.elsevier.com/books/sensitivity-analysis-in-earth-observation-modelling/petropoulos/978-0-12-803011-0|language=en|chapter=Challenges and Future Outlook of Sensitivity Analysis|publisher=Elsevier}}</ref> Notably, it has been shown that there is a theoretical link between the VARS framework and the [[Variance-based sensitivity analysis|variance-based]] and derivative-based approaches. === Fourier amplitude sensitivity test (FAST) === {{Main| Fourier amplitude sensitivity testing }} The Fourier amplitude sensitivity test (FAST) uses the [[Fourier series]] to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings. === Shapley effects === Shapley effects rely on [[Shapley value]]s and represent the average marginal contribution of a given factor across all possible combinations of factors.
These values are related to Sobol' indices, as their value falls between the first-order Sobol' effect and the total-order effect.<ref>{{cite journal | vauthors=Owen AB | journal=SIAM/ASA Journal on Uncertainty Quantification | title=Sobol' Indices and Shapley Value | volume=2 | issue=1 | pages=245–251 | publisher=Society for Industrial and Applied Mathematics | date=1 January 2014 | doi=10.1137/130936233}}</ref> === Chaos polynomials === The principle is to project the function of interest onto a basis of orthogonal polynomials. The Sobol indices are then expressed analytically in terms of the coefficients of this decomposition.<ref name="GSA Sudret">{{cite journal |last1=Sudret|first1=B. |year= 2008 |title=Global sensitivity analysis using polynomial chaos expansions |url= http://www.sciencedirect.com/science/article/pii/S0951832007001329 |journal=Reliability Engineering & System Safety |volume=93 |issue=7 |pages=964–979|doi=10.1016/j.ress.2007.04.002 }}</ref> == Complementary research approaches for time-consuming simulations == A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to [[computational expense]]). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity, by creating a metamodel of the costly function to be evaluated and/or by sampling the factor space wisely. === Metamodels === Metamodels (also known as emulators, surrogate models or response surfaces) are [[Data modeling|data-modeling]]/[[machine learning]] approaches that involve building a relatively simple mathematical function, known as a ''metamodel'', that approximates the input/output behavior of the model itself.<ref name="emcomp">{{cite journal | last1 = Storlie | first1 = C.B. | last2 = Swiler | first2 = L.P. | last3 = Helton | first3 = J.C. | last4 = Sallaberry | first4 = C.J. | year = 2009 | title = Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models | journal = Reliability Engineering & System Safety | volume = 94 | issue = 11| pages = 1735–1763 | doi=10.1016/j.ress.2009.05.007}}</ref> In other words, it is the concept of "modeling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs <math>Y=f(X)</math>. By running the model at a number of points in the input space, it may be possible to fit a much simpler metamodel <math>\hat{f}(X)</math>, such that <math>\hat{f}(X) \approx f(X)</math> to within an acceptable margin of error.<ref>{{Cite journal|last1=Wang|first1=Shangying|last2=Fan|first2=Kai|last3=Luo|first3=Nan|last4=Cao|first4=Yangxiaolu|last5=Wu|first5=Feilun|last6=Zhang|first6=Carolyn|last7=Heller|first7=Katherine A.|last8=You|first8=Lingchong|date=2019-09-25|title=Massive computational acceleration by using neural networks to emulate mechanism-based biological models|journal=Nature Communications|language=en|volume=10|issue=1|pages=4354|doi=10.1038/s41467-019-12342-y|issn=2041-1723|pmc=6761138|pmid=31554788|bibcode=2019NatCo..10.4354W}}</ref> Then, sensitivity measures can be calculated from the metamodel (either with Monte Carlo or analytically), which will have a negligible additional computational cost.
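A minimal sketch of the metamodelling idea is given below, using a Gaussian process regressor from scikit-learn as the emulator; the "expensive" model, the input ranges and the sample sizes are placeholder assumptions chosen only for illustration.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical "expensive" model, standing in for a costly simulator.
def expensive_model(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(seed=2)

# Small design of experiments: run the expensive model only a few times.
X_train = rng.uniform(-1.0, 1.0, (50, 2))
y_train = expensive_model(X_train)

# Fit the metamodel (emulator) to the training runs.
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
emulator.fit(X_train, y_train)

# The cheap emulator can now be evaluated many times, e.g. for Monte Carlo
# estimation of sensitivity measures, at negligible cost per evaluation.
X_big = rng.uniform(-1.0, 1.0, (100_000, 2))
y_hat = emulator.predict(X_big)
print("emulated output variance:", y_hat.var())
</syntaxhighlight>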
Importantly, the number of model runs required to fit the metamodel can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model.<ref name="oak">{{cite journal | last1 = Oakley | first1 = J. | last2 = O'Hagan | first2 = A. | year = 2004 | title = Probabilistic sensitivity analysis of complex models: a Bayesian approach | journal = J. R. Stat. Soc. B | volume = 66 | issue = 3| pages = 751–769 | doi=10.1111/j.1467-9868.2004.05304.x| citeseerx = 10.1.1.6.9720 | s2cid = 6130150 }}</ref> Clearly, the crux of a metamodel approach is to find a metamodel <math>\hat{f}(X)</math> that is a sufficiently close approximation to the model <math>f(X)</math>. This requires the following steps: # Sampling (running) the model at a number of points in its input space. This requires a sample design. # Selecting a type of emulator (mathematical function) to use. # "Training" the metamodel using the sample data from the model – this generally involves adjusting the metamodel parameters until the metamodel mimics the true model as well as possible. Sampling the model can often be done with [[low-discrepancy sequences]], such as the [[Sobol sequence]] (due to mathematician [[Ilya M. Sobol]]) or [[Latin hypercube sampling]], although random designs can also be used, at the loss of some efficiency. The selection of the metamodel type and the training are intrinsically linked since the training method will be dependent on the class of metamodel. Some types of metamodels that have been used successfully for sensitivity analysis include: * [[Gaussian processes]]<ref name="oak" /> (also known as [[kriging]]), where any combination of output points is assumed to be distributed as a [[multivariate Gaussian distribution]]. Recently, "treed" Gaussian processes have been used to deal with [[Heteroscedasticity|heteroscedastic]] and discontinuous responses.<ref>{{cite journal |last1=Gramacy |first1=R. B. |last2=Taddy |first2=M. A. |title=Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models |journal=Journal of Statistical Software |volume=33 |issue=6 |doi= 10.18637/jss.v033.i06 |url=https://cran.r-project.org/web/packages/tgp/vignettes/tgp2.pdf |year=2010 |doi-access=free }}</ref><ref>{{cite journal |last1=Becker |first1=W. |last2=Worden |first2=K. |last3=Rowson |first3=J. |title=Bayesian sensitivity analysis of bifurcating nonlinear models |journal=Mechanical Systems and Signal Processing |volume= 34|issue= 1–2|pages= 57–75|doi=10.1016/j.ymssp.2012.05.010 |bibcode=2013MSSP...34...57B |year=2013 |url=https://zenodo.org/record/890779 }}</ref> * [[Random forest]]s,<ref name="emcomp" /> in which a large number of [[decision trees]] are trained, and the result averaged. * [[Gradient boosting]],<ref name="emcomp" /> where a succession of simple regressions is used to weight data points to sequentially reduce error. * [[polynomial chaos|Polynomial chaos expansions]],<ref>{{cite journal|last=Sudret|first=B.|date=2008|title=Global sensitivity analysis using polynomial chaos expansions|journal=Reliability Engineering & System Safety|volume=93|issue=7|pages=964–979|doi=10.1016/j.ress.2007.04.002}}</ref> which use [[orthogonal polynomials]] to approximate the response surface. * [[Smoothing spline]]s,<ref>{{cite journal | last1 = Ratto | first1 = M. | last2 = Pagano | first2 = A.
| year = 2010 | title = Using recursive algorithms for the efficient identification of smoothing spline ANOVA models | journal = AStA Advances in Statistical Analysis | volume = 94 | issue = 4| pages = 367–388 | doi=10.1007/s10182-010-0148-8| s2cid = 7678955 }}</ref> normally used in conjunction with [[high-dimensional model representation]] (HDMR) truncations (see below). * Discrete [[Bayesian networks]],<ref>{{cite journal|last1= Cardenas |first1=IC|title= On the use of Bayesian networks as a meta-modeling approach to analyse uncertainties in slope stability analysis|journal =Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards|date=2019|volume=13|issue=1|pages=53–65|doi=10.1080/17499518.2018.1498524|bibcode=2019GAMRE..13...53C |s2cid=216590427 }}</ref> in conjunction with canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables to significantly reduce dimensionality. The use of an emulator introduces a [[machine learning]] problem, which can be difficult if the response of the model is highly [[nonlinear]]. In all cases, it is useful to check the accuracy of the emulator, for example using [[Cross-validation (statistics)|cross-validation]]. === High-dimensional model representations (HDMR) === A [[high-dimensional model representation]] (HDMR)<ref>{{cite journal | last1 = Li | first1 = G. | last2 = Hu | first2 = J. | last3 = Wang | first3 = S.-W. | last4 = Georgopoulos | first4 = P. | last5 = Schoendorf | first5 = J. | last6 = Rabitz | first6 = H. | year = 2006 | title = Random Sampling-High Dimensional Model Representation (RS-HDMR) and orthogonality of its different order component functions | journal = Journal of Physical Chemistry A | volume = 110 | issue = 7| pages = 2474–2485 | doi=10.1021/jp054148m| pmid = 16480307 | bibcode = 2006JPCA..110.2474L }}</ref><ref>{{cite journal | last1 = Li | first1 = G. | year = 2002 | title = Practical approaches to construct RS-HDMR component functions | journal = Journal of Physical Chemistry | volume = 106 | issue = 37| pages = 8721–8733 | doi = 10.1021/jp014567t | bibcode = 2002JPCA..106.8721L }}</ref> (the term is due to H. Rabitz<ref>{{cite journal | last1 = Rabitz | first1 = H | year = 1989 | title = System analysis at molecular scale | journal = Science | volume = 246 | issue = 4927| pages = 221–226 | doi=10.1126/science.246.4927.221| pmid = 17839016 | bibcode = 1989Sci...246..221R| s2cid = 23088466 }}</ref>) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators. === Monte Carlo filtering === Sensitivity analysis via Monte Carlo filtering<ref>{{cite journal |last1=Hornberger |first1=G. |first2=R.
|last2=Spear |year=1981 |title=An approach to the preliminary analysis of environmental systems |journal=Journal of Environmental Management |volume=7 |pages=7–18 }}</ref> is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output. ==Related concepts== Sensitivity analysis is closely related to uncertainty analysis; while the latter studies the overall [[uncertainty]] in the conclusions of the study, sensitivity analysis tries to identify which sources of uncertainty weigh more heavily on the study's conclusions. The problem setting in sensitivity analysis also has strong similarities with the field of [[design of experiments]].<ref name="BoxHunter">Box GEP, Hunter WG, Hunter JS. ''Statistics for Experimenters''. New York: Wiley & Sons.</ref> In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments. ==Sensitivity auditing== {{main|Sensitivity auditing}} It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive, inter alia, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could range from the political (e.g. which group needs to be protected) to the technical (e.g. which variable can be treated as a constant). In order to take these concerns into due consideration, the instruments of SA have been extended to provide an assessment of the entire knowledge and model generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP,<ref name="NUSAP">{{cite journal | last1 = Van der Sluijs | first1 = JP | last2 = Craye | first2 = M | last3 = Funtowicz | first3 = S | last4 = Kloprogge | first4 = P | last5 = Ravetz | first5 = J | last6 = Risbey | first6 = J | year = 2005 | title = Combining quantitative and qualitative measures of uncertainty in model based environmental assessment: the NUSAP system | journal = Risk Analysis | volume = 25 | issue = 2| pages = 481–492 | doi=10.1111/j.1539-6924.2005.00604.x| pmid = 15876219 | bibcode = 2005RiskA..25..481V | hdl = 1874/386039 | s2cid = 15988654 | hdl-access = free }}</ref> a method used to qualify the worth of quantitative information with the generation of 'Pedigrees' of numbers.
Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, will be the subject of partisan interests.<ref name="Nutrition">{{cite journal |last1 = Lo Piano | first1 = S | last2 = Robinson | first2 = M |year=2019 |title=Nutrition and public health economic evaluations under the lenses of post normal science |journal=Futures |volume=112| page = 102436 |doi =10.1016/j.futures.2019.06.008 | s2cid = 198636712 }}</ref> Sensitivity auditing is recommended in the European Commission guidelines for impact assessment,<ref name="EC_GUIDE"/> as well as in the report Science Advice for Policy by European Academies.<ref>Science Advice for Policy by European Academies, Making sense of science for policy under conditions of complexity and uncertainty, Berlin, 2019.</ref> ==Pitfalls and difficulties== Some common difficulties in sensitivity analysis include: * '''Assumptions vs. inferences:''' In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input [[:wikt:assumption|assumptions]] and how wide the resulting [[inference]] may be. The point is well illustrated by the econometrician [[Edward E. Leamer]]:<ref>{{cite journal |first=Edward E. |last=Leamer |title=Let's Take the Con Out of Econometrics |journal=[[American Economic Review]] |volume=73 |issue=1 |year=1983 |pages=31–43 |jstor=1803924 }}</ref><ref>{{cite journal |first=Edward E. |last=Leamer |title=Sensitivity Analyses Would Help |journal=[[American Economic Review]] |volume=75 |issue=3 |year=1985 |pages=308–313 |jstor=1814801 }}</ref> <blockquote>"I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful."</blockquote> : Note that Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when ''uncertainties in inputs must be suppressed lest outputs become indeterminate.''<ref>Ravetz, J.R., 2007, ''No-Nonsense Guide to Science'', New Internationalist Publications Ltd.</ref> * ''' Not enough information to build probability distributions for the inputs:''' Probability distributions can be constructed from [[expert elicitation]], although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis. * '''Unclear purpose of the analysis:''' Different statistical tests and measures are applied to the problem and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
* '''Too many model outputs are considered:''' This may be acceptable for the quality assurance of sub-models but should be avoided when presenting the results of the overall analysis. * '''Piecewise sensitivity:''' This is when one performs sensitivity analysis on one sub-model at a time. This approach is non conservative as it might overlook interactions among factors in different sub-models (Type II error). ==SA in international context== The importance of understanding and managing uncertainty in model results has inspired many scientists from different research centers all over the world to take a close interest in this subject. National and international agencies involved in [[impact assessment]] studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the [[European Commission]] (see e.g. the guidelines for [[impact assessment]]),<ref name="EC_GUIDE">[https://ec.europa.eu/info/law/law-making-process/planning-and-proposing-law/better-regulation-why-and-how/better-regulation-guidelines-and-toolbox_en European Commission. 2021. βBetter Regulation Toolbox.β November 25.]</ref> the White House [[Office of Management and Budget]], the [[Intergovernmental Panel on Climate Change]] and [[US Environmental Protection Agency]]'s modeling guidelines.<ref>{{Cite web |url=http://www.epa.gov/CREM/library/cred_guidance_0309.pdf |title=Archived copy |access-date=2009-10-16 |archive-date=2011-04-26 |archive-url=https://web.archive.org/web/20110426180258/http://www.epa.gov/CREM/library/cred_guidance_0309.pdf |url-status=dead }}</ref> ==Specific applications of sensitivity analysis== The following pages discuss sensitivity analyses in relation to specific applications: *[[Applications of sensitivity analysis to environmental sciences|Environmental sciences]] *[[Applications of sensitivity analysis to business|Business]] *[[Corporate_finance#Sensitivity_and_scenario_analysis|(Corporate) finance]] *[[Applications of sensitivity analysis in epidemiology|Epidemiology]] *[[Applications of sensitivity analysis to multi-criteria decision making|Multi-criteria decision making]] *[[Applications of sensitivity analysis to model calibration|Model calibration]] ==See also== {{Div col|colwidth=20em}} * [[Causality]] * [[Elementary effects method]] * [[Experimental uncertainty analysis]] * [[Fourier amplitude sensitivity testing]] * [[Info-gap decision theory]] * [[Interval FEM]] * [[Perturbation analysis]] * [[Probabilistic design]] * [[Probability bounds analysis]] * [[Robustification]] * [[ROC curve]] * [[Uncertainty quantification]] * [[Variance-based sensitivity analysis]] * [[Multiverse analysis]] * [[Feature selection]] {{div col end}} ==References== {{reflist|30em}} ==Further reading== *Borgonovo, E. (2017). ''Sensitivity Analysis: An Introduction for the Management Scientist.'' International Series in Management Science and Operations Research, Springer New York. [https://link.springer.com/book/10.1007/978-3-319-52259-3/] *{{cite journal | last1 = Pianosi | first1 = F. | last2 = Beven | first2 = K. | last3 = Freer | first3 = J. | last4 = Hall | first4 = J.W. | last5 = Rougier | first5 = J. | last6 = Stephenson | first6 = D.B. | last7 = Wagener | first7 = T. 
| year = 2016 | title = Sensitivity analysis of environmental models: A systematic review with practical workflow | journal = Environmental Modelling & Software | volume = 79 | pages = 214β232 | doi=10.1016/j.envsoft.2016.02.008| doi-access = free | bibcode = 2016EnvMS..79..214P | hdl = 10871/21086 | hdl-access = free }} *Pilkey, O. H. and L. Pilkey-Jarvis (2007), ''Useless Arithmetic. Why Environmental Scientists Can't Predict the Future.'' New York: Columbia University Press. *Santner, T. J.; Williams, B. J.; Notz, W.I. (2003) ''Design and Analysis of Computer Experiments''; Springer-Verlag. *Haug, Edward J.; Choi, Kyung K.; [[Vadim Komkov|Komkov, Vadim]] (1986) ''Design sensitivity analysis of structural systems''. Mathematics in Science and Engineering, 177. Academic Press, Inc., Orlando, FL. * Hall, C. A. S. and Day, J. W. (1977). ''Ecosystem Modeling in Theory and Practice: An Introduction with Case Histories.'' John Wiley & Sons, New York, NY. {{isbn|978-0-471-34165-9}} ==External links== * [https://www.sensitivityanalysis.org/conferences/ Web site with material from SAMO conference series (1995-2025)] [[Category:Sensitivity analysis| ]] [[Category:Simulation]] [[Category:Business intelligence terms]] [[Category:Mathematical modeling]] [[Category:Mathematical and quantitative methods (economics)]]