=== Derivative-based local methods ===
Local derivative-based methods involve taking the [[partial derivative]] of the output <math>Y</math> with respect to an input factor <math>X_i</math>:

:<math> \left| \frac{\partial Y}{\partial X_i} \right|_{\textbf{x}^0}, </math>

where the subscript '''x'''<sup>0</sup> indicates that the derivative is taken at some fixed point in the space of the inputs (hence the 'local' in the name of the class). Adjoint modelling<ref>{{cite book |last=Cacuci |first=Dan G. |title=Sensitivity and Uncertainty Analysis: Theory |volume=I |publisher=Chapman & Hall }}</ref><ref>{{cite book |last1=Cacuci |first1=Dan G. |first2=Mihaela |last2=Ionescu-Bujor |first3=Michael |last3=Navon |year=2005 |title=Sensitivity and Uncertainty Analysis: Applications to Large-Scale Systems |volume=II |publisher=Chapman & Hall }}</ref> and automated differentiation<ref>{{cite book |last=Griewank |first=A. |year=2000 |title=Evaluating Derivatives, Principles and Techniques of Algorithmic Differentiation |publisher=SIAM }}</ref> are methods that allow all partial derivatives to be computed at a cost of at most 4–6 times that of evaluating the original function. Like OAT, local methods do not attempt to fully explore the input space, since they examine only small perturbations, typically one variable at a time. Derivative-based sensitivities can also be used to select similar samples through neural networks and to perform uncertainty quantification. One advantage of local methods is that a matrix can be constructed representing all the sensitivities in a system, providing an overview that cannot be achieved with global methods when there is a large number of input and output variables.<ref name="Possible">{{cite conference |last1=Kabir |first1=H. D. |last2=Khosravi |first2=A. |last3=Nahavandi |first3=D. |last4=Nahavandi |first4=S. |year=2020 |title=Uncertainty Quantification Neural Network from Similarity and Sensitivity |conference=2020 International Joint Conference on Neural Networks (IJCNN) |pages=1–8 |publisher=IEEE |url=https://ieeexplore.ieee.org/abstract/document/9206746 }}</ref>
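The sensitivity matrix described above can be sketched with central finite differences: each entry approximates the partial derivative of one output with respect to one input, evaluated at the fixed point '''x'''<sup>0</sup>. The `model` function and the nominal point below are illustrative assumptions, not from any particular application; in practice adjoint or automatic differentiation would replace the finite-difference loop for large systems.

```python
import numpy as np

def model(x):
    # Hypothetical model with three inputs and two outputs (illustration only).
    return np.array([x[0] ** 2 + x[1] * x[2],
                     np.sin(x[0]) + 3.0 * x[2]])

def local_sensitivity_matrix(f, x0, h=1e-6):
    """Approximate the matrix of partial derivatives dY_j/dX_i at the
    fixed point x0 using central finite differences."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(f(x0))
    S = np.empty((y0.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = h
        # Perturb one input at a time, as in OAT-style local analysis.
        S[:, i] = (np.asarray(f(x0 + step)) - np.asarray(f(x0 - step))) / (2.0 * h)
    return S

x0 = np.array([1.0, 2.0, 0.5])   # the fixed point x^0
S = local_sensitivity_matrix(model, x0)
# S[j, i] approximates the partial derivative of output j
# with respect to input i, evaluated at x0.
```

Note that this matrix characterizes the model only near '''x'''<sup>0</sup>; it says nothing about the model's behaviour elsewhere in the input space, which is the limitation of local methods noted above.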