Nonlinear regression
{{Short description|Regression analysis}}
{{Regression bar}}
[[Image:Michaelis-Menten saturation curve of an enzyme reaction.svg|thumb|300 px|See [[Michaelis–Menten kinetics]] for details]]

In statistics, '''nonlinear regression''' is a form of [[regression analysis]] in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).

==General==
In nonlinear regression, a [[statistical model]] of the form
<math display="block"> \mathbf{y} \sim f(\mathbf{x}, \boldsymbol\beta)</math>
relates a vector of [[independent variables]], <math>\mathbf{x}</math>, and its associated observed [[dependent variables]], <math>\mathbf{y}</math>. The function <math>f</math> is nonlinear in the components of the vector of parameters <math>\boldsymbol\beta</math>, but otherwise arbitrary. For example, the [[Michaelis–Menten]] model for enzyme kinetics has two parameters and one independent variable, related by <math>f</math>:{{efn|This model can also be expressed in the conventional biological notation: <math display="block"> v = \frac{V_\max\ [\mathrm{S}]}{K_m + [\mathrm{S}]} </math>}}
<math display="block"> f(x,\boldsymbol\beta)= \frac{\beta_1 x}{\beta_2 + x} </math>
This function, which is a rectangular hyperbola, is {{em|nonlinear}} because it cannot be expressed as a [[linear combination]] of the two <math>\beta</math>s.

[[Systematic error]] may be present in the independent variables, but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an [[errors-in-variables model]], also outside this scope.

Other examples of nonlinear functions include [[exponential function]]s, [[Logarithmic growth|logarithmic functions]], [[trigonometric functions]], [[Exponentiation|power functions]], the [[Gaussian function]], and [[Cauchy distribution|Lorentz distribution]]s. Some functions, such as the exponential and logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution; see {{mslink||Linearization|Transformation}}, below, for more details.

In general, there is no closed-form expression for the best-fitting parameters, as there is in [[linear regression]]. Usually numerical [[Optimization (mathematics)|optimization]] algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many [[local maximum|local minima]] of the function to be optimized, and even the global minimum may produce a [[Bias of an estimator|biased]] estimate. In practice, [[guess value|estimated values]] of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see [[least squares]] and [[non-linear least squares]].
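As a minimal illustration of the iterative fitting described above, the Michaelis–Menten model can be fitted with [[SciPy]]'s <code>curve_fit</code> routine, which by default applies a [[Levenberg–Marquardt algorithm|Levenberg–Marquardt]]-type iterative least-squares optimizer. This is only a sketch: the substrate and rate values below are invented for demonstration, and <code>p0</code> supplies the initial guesses that start the successive approximations.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, beta1, beta2):
    """Rectangular hyperbola f(x, beta) = beta1*x / (beta2 + x)."""
    return beta1 * x / (beta2 + x)

# Hypothetical substrate concentrations [S] and observed reaction rates v.
x = np.array([0.02, 0.06, 0.11, 0.22, 0.56, 1.10])
y = np.array([0.067, 0.198, 0.273, 0.388, 0.506, 0.570])

# p0 gives the initial parameter guesses that start the iterations;
# a poor starting point can lead the optimizer to a local minimum.
beta_hat, cov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
print("estimates (beta1, beta2):", beta_hat)
print("standard errors:", np.sqrt(np.diag(cov)))
</syntaxhighlight>

Because the sum-of-squares surface may have several local minima, it is prudent to repeat the fit from more than one starting point <code>p0</code>.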
==Regression statistics==
The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order [[Taylor series]] expansion about <math>\boldsymbol\beta = 0</math>:
<math display="block"> f(x_i,\boldsymbol\beta) \approx f(x_i,0) + \sum_j J_{ij} \beta_j </math>
where <math>J_{ij} = \frac{\partial f(x_i,\boldsymbol\beta)}{\partial \beta_j}</math> are elements of the [[Jacobian matrix]]. It follows that the least squares estimators are given by
<math display="block">\hat{\boldsymbol{\beta}} \approx \mathbf{(J^TJ)^{-1}J^Ty};</math>
compare [[generalized least squares]] with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using '''J''' in place of '''X''' in the formulas.

When the function <math>f(x_i,\boldsymbol\beta)</math> itself is not known analytically, but needs to be [[Linear regression|linearly approximated]] from <math>n+1</math>, or more, known values (where <math>n</math> is the number of estimators), the best estimator is obtained directly from the [[Linear Template Fit]] as<ref>{{cite journal | title=The Linear Template Fit | last=Britzger | first=Daniel | journal=Eur. Phys. J. C | volume=82 | year=2022 | issue=8 | pages=731 | doi=10.1140/epjc/s10052-022-10581-w | arxiv=2112.01548 | bibcode=2022EPJC...82..731B }}</ref>
<math display="block"> \hat{\boldsymbol\beta} = ((\mathbf{Y\tilde{M}})^\mathsf{T} \boldsymbol\Omega^{-1} \mathbf{Y\tilde{M}})^{-1}(\mathbf{Y\tilde{M}})^\mathsf{T}\boldsymbol\Omega^{-1}(\mathbf{d}-\mathbf{Y\bar{m}})</math>
(see also [[Linear_least_squares#Alternative_formulations|linear least squares]]).

The linear approximation introduces [[bias (statistics)|bias]] into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.

==Ordinary and weighted least squares==
The best-fit curve is often assumed to be that which minimizes the sum of squared [[errors and residuals in statistics|residuals]]. This is the [[ordinary least squares]] (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see [[weighted least squares]]. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case,<ref>{{cite journal | last1 = Motulsky | first1 = H.J. | last2 = Ransnas | first2 = L.A. | title = Fitting curves to data using nonlinear regression: a practical and nonmathematical review | journal = The FASEB Journal | volume = 1 | issue = 5 | pages = 365–374 | year = 1987 | doi = 10.1096/fasebj.1.5.3315805 | doi-access = free | pmid = 3315805 }}</ref> but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm.
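As a sketch of a weighted fit, <code>curve_fit</code> accepts per-observation standard deviations through its <code>sigma</code> argument, corresponding to weights <math>1/\sigma_i^2</math>; the returned covariance matrix is the Jacobian-based estimate discussed in the previous section. The data and the noise model (error growing with the predictor) are hypothetical.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def model(x, beta1, beta2):
    return beta1 * x / (beta2 + x)

rng = np.random.default_rng(0)
x = np.linspace(0.05, 2.0, 25)
sigma = 0.02 + 0.05 * x                      # hypothetical: error grows with x
y = model(x, 0.7, 0.3) + rng.normal(0.0, sigma)

# sigma gives per-observation standard deviations, i.e. weights 1/sigma_i^2;
# absolute_sigma=True treats them as absolute rather than relative errors.
beta_w, cov_w = curve_fit(model, x, y, p0=[1.0, 1.0],
                          sigma=sigma, absolute_sigma=True)
print("weighted estimates:", beta_w)
print("standard errors:", np.sqrt(np.diag(cov_w)))
</syntaxhighlight>

With <code>absolute_sigma=False</code> (the default), the supplied values act only as relative weights and the covariance matrix is rescaled by the residual variance.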
==Linearization==
===Transformation===
{{further|Data transformation (statistics)}}
Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation. For example, consider the nonlinear regression problem
<math display="block"> y = a e^{b x}U </math>
with parameters ''a'' and ''b'' and with multiplicative error term ''U''. If we take the logarithm of both sides, this becomes
<math display="block"> \ln{(y)} = \ln{(a)} + b x + u, </math>
where ''u'' = ln(''U''), suggesting estimation of the unknown parameters by a linear regression of ln(''y'') on ''x'', a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.

For [[Michaelis–Menten kinetics]], the linear [[Lineweaver–Burk plot]]
<math display="block"> \frac{1}{v} = \frac{1}{V_\max} + \frac{K_m}{V_{\max}[S]}</math>
of 1/''v'' against 1/[''S''] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [''S''], its use is strongly discouraged.

For error distributions that belong to the [[exponential family]], a link function may be used to transform the parameters under the [[generalized linear model]] framework.
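A minimal sketch of this log transformation follows, under the assumption of synthetic data with a multiplicative log-normal error, so that the residuals are additive and well behaved on the log scale; all values are invented for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.5, 0.8
x = np.linspace(0.0, 4.0, 30)
# Multiplicative error U = exp(u), with u normal, so ln(y) has additive noise.
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(0.0, 0.1, x.size))

# Ordinary linear regression of ln(y) on x: ln(y) = ln(a) + b*x + u.
b_hat, log_a_hat = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(log_a_hat)
print("a estimate:", a_hat, " b estimate:", b_hat)
</syntaxhighlight>

If the error were instead additive on the original scale, the log transformation would distort the error structure, and a direct nonlinear fit would usually be preferable.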
===Segmentation===
[[Image:MUSTARD.JPG|thumb|175 px|right|Yield of mustard and soil salinity]]
{{main|Segmented regression}}
The [[Independent variable|''independent'' or ''explanatory'' variable]] (say X) can be split up into classes or segments and [[linear regression]] can be performed per segment. Segmented regression with [[Confidence interval|confidence analysis]] may yield the result that the [[Dependent variable|''dependent'' or ''response'' variable]] (say Y) behaves differently in the various segments.<ref>R.J. Oosterbaan, 1994, Frequency and Regression Analysis. In: H.P. Ritzema (ed.), ''Drainage Principles and Applications'', Publ. 16, pp. 175–224, International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands. {{ISBN|90-70754-33-9}}. Download as PDF: [http://www.waterlog.info/pdf/regtxt.pdf]</ref>

The figure shows that the [[soil salinity]] (X) initially exerts no influence on the [[crop yield]] (Y) of mustard, until a ''critical'' or ''threshold'' value (''breakpoint'') is reached, after which the yield is affected negatively.<ref>R.J. Oosterbaan, 2002. Drainage research in farmers' fields: analysis of data. Part of project “Liquid Gold” of the International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands. Download as PDF: [http://www.waterlog.info/pdf/analysis.pdf]. The figure was made with the [[SegReg]] program, which can be downloaded freely from [http://www.waterlog.info/segreg.htm]</ref>

==See also==
{{Portal|Mathematics}}
* [[Non-linear least squares]]
* [[Curve fitting]]
* [[Generalized linear model]]
* [[Local regression]]
* [[Response modeling methodology]]
* [[Genetic programming]]
* [[Multi expression programming]]
* [[Linear_least_squares#Alternative_formulations|Linear or quadratic template fit]]

==Notes==
{{notelist}}

==References==
{{Reflist}}

==Further reading==
* {{cite book |first1=R. M. |last1=Bethea |first2=B. S. |last2=Duran |first3=T. L. |last3=Boullion |title=Statistical Methods for Engineers and Scientists |location=New York |publisher=Marcel Dekker |year=1985 |isbn=0-8247-7227-X }}
* {{cite journal |last1=Meade |first1=N. |first2=T. |last2=Islam |year=1995 |title=Prediction Intervals for Growth Curve Forecasts |journal=Journal of Forecasting |volume=14 |issue=5 |pages=413–430 |doi=10.1002/for.3980140502 }}
* {{cite book |first=K. |last=Schittkowski |title=Data Fitting in Dynamical Systems |publisher=Kluwer |location=Boston |year=2002 |isbn=1402010796 }}
* {{cite book |first1=G. A. F. |last1=Seber |first2=C. J. |last2=Wild |title=Nonlinear Regression |location=New York |publisher=John Wiley and Sons |year=1989 |isbn=0471617601 }}

{{Statistics}}
{{least squares and regression analysis}}

{{DEFAULTSORT:Nonlinear Regression}}
[[Category:Regression analysis]]