==== Stability, regularization and model discretization in infinite dimension ====

We focus here on the recovery of a distributed parameter. When looking for distributed parameters we have to discretize these unknown functions, which reduces the dimension of the problem to something finite. But now the question is: is there any link between the solution we compute and the solution of the initial problem? And a further question: what do we mean by the solution of the initial problem? Since a finite number of data does not allow the determination of an infinity of unknowns, the original data-misfit functional has to be regularized to ensure the uniqueness of the solution. In many cases, reducing the unknowns to a finite-dimensional space provides an adequate regularization: the computed solution looks like a discrete version of the solution we were looking for. For example, a naïve discretization will often work for solving the [[deconvolution]] problem: it works as long as we do not allow missing frequencies to show up in the numerical solution. But in many cases regularization has to be integrated explicitly in the objective function.

In order to understand what may happen, we have to keep in mind that solving such a linear inverse problem amounts to solving a Fredholm integral equation of the first kind:
<math display="block">d(x) = \int_\Omega K(x,y)\, p(y)\, dy</math>
where <math>K</math> is the kernel, <math>x</math> and <math>y</math> are vectors of <math>\R^2</math>, and <math>\Omega</math> is a domain in <math>\R^2</math>. This holds for a 2D application; for a 3D application, we consider <math>x,y \in \R^3</math>. Note that here the model parameters <math>p</math> consist of a function and that the response of a model also consists of a function denoted by <math>d(x)</math>. This equation is an extension to infinite dimension of the matrix equation <math>d=Fp</math> given in the case of discrete problems.

For sufficiently smooth <math>K</math> the operator defined above is [[compact operator|compact]] on reasonable [[Banach space]]s such as [[Lp space|<math>L^2</math>]]. [[Compact operator|F. Riesz theory]] states that the set of singular values of such an operator contains zero (hence the existence of a null-space) and is finite or at most countable; in the latter case the singular values constitute a sequence that goes to zero. In the case of a symmetric kernel, we have an infinity of eigenvalues and the associated eigenvectors constitute a Hilbert basis of <math>L^2</math>. Thus any solution of this equation is determined up to an additive function in the null-space and, in the case of infinitely many singular values, the solution (which involves the reciprocal of arbitrarily small eigenvalues) is unstable: two ingredients that make the solution of this integral equation a typical ill-posed problem! However, we can define a solution through the [[Generalized inverse|pseudo-inverse]] of the forward map (again up to an arbitrary additive function). When the forward map is compact, the classical [[Tikhonov regularization]] will work if we use it for integrating prior information stating that the <math>L^2</math> norm of the solution should be as small as possible: this makes the inverse problem well-posed. Yet, as in the finite-dimension case, we have to question the confidence we can put in the computed solution. Again, basically, the information lies in the eigenvalues of the Hessian operator.
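A minimal numerical sketch of this discussion, under assumptions chosen purely for illustration (a one-dimensional domain, a Gaussian kernel, and an arbitrary noise level and regularization weight, none of which come from the sources cited here): quadrature turns the integral equation into the matrix equation <math>d=Fp</math>, the singular values of <math>F</math> decay toward zero because the kernel is smooth, and a small Tikhonov weight stabilizes the otherwise unstable solve.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 1-D illustration (the text's kernel is 2-D/3-D): discretize the
# Fredholm equation of the first kind d(x) = \int_\Omega K(x, y) p(y) dy by
# quadrature, so that d = F p with F[i, j] = K(x_i, y_j) * dy.
n = 200
y, dy = np.linspace(0.0, 1.0, n, retstep=True)
x = y.copy()

# Smooth Gaussian kernel (an assumed example): its discretization is severely
# ill-conditioned, mimicking a compact forward map.
K = np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * 0.05 ** 2))
F = K * dy

p_true = np.exp(-((y - 0.4) ** 2) / 0.01) + 0.5 * np.exp(-((y - 0.75) ** 2) / 0.005)
d = F @ p_true + 1e-4 * np.random.default_rng(0).standard_normal(n)  # noisy data

# Singular values decaying toward zero: the hallmark of a compact operator.
s = np.linalg.svd(F, compute_uv=False)
print("condition number:", s[0] / s[-1])

# Naive least squares divides by very small singular values -> unstable solution.
p_naive = np.linalg.lstsq(F, d, rcond=None)[0]

# Tikhonov regularization: minimize ||F p - d||^2 + eps * ||p||^2,
# i.e. solve (F^T F + eps I) p = F^T d; eps plays the role of the weight
# mentioned in the text (its value here is an assumption for the example).
eps = 1e-6
p_tik = np.linalg.solve(F.T @ F + eps * np.eye(n), F.T @ d)

print("naive solution norm:    ", np.linalg.norm(p_naive))
print("Tikhonov solution norm: ", np.linalg.norm(p_tik))
print("Tikhonov relative error:", np.linalg.norm(p_tik - p_true) / np.linalg.norm(p_true))
</syntaxhighlight>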
Should subspaces containing eigenvectors associated with small eigenvalues be explored for computing the solution, the solution can hardly be trusted: some of its components will be poorly determined. The smallest eigenvalue is equal to the weight introduced in Tikhonov regularization.

Irregular kernels may yield a forward map which is not compact and even [[Unbounded operator|unbounded]] if we naively equip the space of models with the <math>L^2</math> norm. In such cases, the Hessian is not a bounded operator and the notion of eigenvalue no longer makes sense. A mathematical analysis is required to make it a [[bounded operator]] and design a well-posed problem: an illustration can be found in Delprat-Jannaud and Lailly.<ref>{{cite journal |last1=Delprat-Jannaud |first1=Florence |last2=Lailly |first2=Patrick |title=Ill-posed and well-posed formulations of the reflection travel time tomography problem |journal=Journal of Geophysical Research |date=1993 |volume=98 |issue=B4 |pages=6589–6605 |doi=10.1029/92JB02441 |bibcode=1993JGR....98.6589D}}</ref> Again, we have to question the confidence we can put in the computed solution, and we have to generalize the notion of eigenvalue to get the answer.<ref>{{cite journal |last1=Delprat-Jannaud |first1=Florence |last2=Lailly |first2=Patrick |title=What information on the Earth model do reflection traveltimes provide |journal=Journal of Geophysical Research |date=1992 |volume=98 |issue=B13 |pages=827–844 |doi=10.1029/92JB01739 |bibcode=1992JGR....9719827D}}</ref>

Analysis of the spectrum of the Hessian operator is thus a key element in determining how reliable the computed solution is. However, such an analysis is usually a very heavy task. This has led several authors to investigate alternative approaches in the case where we are not interested in all the components of the unknown function but only in sub-unknowns that are the images of the unknown function by a linear operator. These approaches are referred to as the Backus and Gilbert method,<ref>{{cite journal |last1=Backus |first1=George |last2=Gilbert |first2=Freeman |title=The Resolving Power of Gross Earth Data |journal=Geophysical Journal of the Royal Astronomical Society |date=1968 |volume=16 |issue=10 |pages=169–205 |doi=10.1111/j.1365-246X.1968.tb00216.x |bibcode=1968GeoJ...16..169B |doi-access=free}}</ref> [[Jacques-Louis Lions|Lions]]'s sentinels approach,<ref>{{cite journal |last1=Lions |first1=Jacques Louis |title=Sur les sentinelles des systèmes distribués |journal=C. R. Acad. Sci. Paris |date=1988 |series=I Math |pages=819–823}}</ref> and the SOLA method;<ref>{{cite journal |last1=Pijpers |first1=Frank |last2=Thompson |first2=Michael |title=The SOLA method for helioseismic inversion |journal=Astronomy and Astrophysics |date=1993 |volume=281 |issue=12 |pages=231–240 |bibcode=1994A&A...281..231P}}</ref> these approaches turned out to be strongly related to one another, as explained in Chavent.<ref>{{cite book |last1=Chavent |first1=Guy |title=Least-Squares, Sentinels and Substractive Optimally Localized Average in Equations aux dérivées partielles et applications |date=1998 |publisher=Gauthier Villars |location=Paris |pages=345–356 |url=https://hal.inria.fr/inria-00073357/document}}</ref> Finally, the concept of [[Optical resolution|limited resolution]], often invoked by physicists, is nothing but a specific view of the fact that some poorly determined components may corrupt the solution. But, generally speaking, these poorly determined components of the model are not necessarily associated with high frequencies.
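A companion sketch, reusing the same assumed Gaussian-kernel discretization as above (again an illustration, not an analysis from the cited sources), showing how the eigen-decomposition of the Hessian <math>F^\mathsf{T}F</math> separates the directions the data actually constrain from those that are poorly determined; the cutoff threshold is a hypothetical, noise-dependent choice.

<syntaxhighlight lang="python">
import numpy as np

# Same assumed 1-D Gaussian-kernel discretization as in the sketch above.
n = 200
y, dy = np.linspace(0.0, 1.0, n, retstep=True)
K = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * 0.05 ** 2))
F = K * dy

# Gauss-Newton Hessian of the least-squares misfit.
H = F.T @ F

# Eigen-decomposition of the symmetric Hessian (eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(H)

# Components along eigenvectors whose eigenvalues fall below a noise-related
# threshold are poorly determined; only the remaining subspace can be trusted.
threshold = 1e-8 * eigvals.max()          # assumed, noise-dependent cutoff
well_determined = eigvals > threshold
print("trusted directions:", int(well_determined.sum()), "out of", n)

# Projector onto the well-determined subspace: applied to any computed model,
# it keeps only the part of the solution that the data actually constrain.
V = eigvecs[:, well_determined]
P_trusted = V @ V.T
</syntaxhighlight>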