Syllabus

Lecture 1: Describing Inverse Problems
Lecture 2: Probability and Measurement Error, Part 1
Lecture 3: Probability and Measurement Error, Part 2
Lecture 4: The L2 Norm and Simple Least Squares
Lecture 5: A Priori Information and Weighted Least Squares
Lecture 6: Resolution and Generalized Inverses
Lecture 7: Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
Lecture 8: The Principle of Maximum Likelihood
Lecture 9: Inexact Theories
Lecture 10: Nonuniqueness and Localized Averages
Lecture 11: Vector Spaces and Singular Value Decomposition
Lecture 12: Equality and Inequality Constraints
Lecture 13: L1, L∞ Norm Problems and Linear Programming
Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15: Nonlinear Problems: Newton's Method
Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17: Factor Analysis
Lecture 18: Varimax Factors, Empirical Orthogonal Functions
Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20: Linear Operators and Their Adjoints
Lecture 21: Fréchet Derivatives
Lecture 22: Exemplary Inverse Problems, incl. Filter Design
Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture

- Introduce the idea of a generalized inverse, the data and model resolution matrices, and the unit covariance matrix
- Quantify the spread of resolution and the size of the covariance
- Use the minimization of the spread of resolution and/or the size of the covariance as the guiding principle for solving inverse problems
Part 1
The Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix

All of the solutions we have encountered have the form

m^est = M d + v

Let's focus on the matrix M. Rename it the generalized inverse and use the symbol G^-g:

m^est = G^-g d + v

(and let's ignore the vector v for a moment).

The generalized inverse G^-g operates on the data to give an estimate of the model parameters: if d^pre = G m^est, then m^est = G^-g d^obs.

This sort of looks like a matrix inverse, except that G^-g is M × N, not square, and in general G G^-g ≠ I and G^-g G ≠ I. So the generalized inverse is not actually a matrix inverse at all.
Plug one equation into the other: d^pre = G m^est and m^est = G^-g d^obs together give

d^pre = N d^obs, with N = G G^-g

the data resolution matrix.

Data Resolution Matrix, N

d^pre = N d^obs

How much does d_i^obs contribute to its own prediction? If N = I, then d^pre = d^obs, so d_i^pre = d_i^obs: each observation completely controls its own prediction.
The closer N is to I, the more d_i^obs controls its own prediction.

[Figure: the straight-line problem. Two panels plot the data d against z, and the data resolution matrix N is shown as an image with row index i and column index j; its large elements lie near the corners of the matrix: only the data at the ends of the z range control their own prediction.]
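The end-point behavior in the figure can be reproduced numerically. A sketch under assumed values (the z grid below is illustrative, not the lecture's):

```python
import numpy as np

# Straight-line problem: d_i = m1 + m2 * z_i
z = np.linspace(0.0, 10.0, 11)
G = np.column_stack([np.ones_like(z), z])    # N x M kernel, N=11, M=2

# Least-squares generalized inverse G^-g = [G^T G]^-1 G^T
Gg = np.linalg.inv(G.T @ G) @ G.T

# Data resolution matrix N = G G^-g
Nres = G @ Gg
diag = np.diag(Nres)

# The diagonal of N is largest at the ends of the z range:
# only the data at the ends control their own prediction
assert diag.argmax() in (0, len(z) - 1)
assert diag[0] > diag[len(z) // 2]
```

Note that the trace of N equals the number of model parameters (here 2), so a long data vector necessarily shares that resolution thinly.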
Plug one equation into the other: d^obs = G m^true and m^est = G^-g d^obs together give

m^est = R m^true, with R = G^-g G

the model resolution matrix.

Model Resolution Matrix, R

m^est = R m^true

How much does m_i^true contribute to its own estimated value? If R = I, then m^est = m^true, so m_i^est = m_i^true: each estimate reflects only its own true model parameter. Otherwise, if R ≠ I,

m_i^est = ... + R_i,i-1 m_i-1^true + R_i,i m_i^true + R_i,i+1 m_i+1^true + ...

so m_i^est is a weighted average of all the elements of m^true.
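The weighted-average interpretation can be checked numerically: predict error-free data from a known m^true, invert, and compare against R m^true. An illustrative sketch (the random G below is my assumption, not the lecture's example):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(3, 8))                  # under-determined, so R != I
m_true = rng.normal(size=8)

Gg = G.T @ np.linalg.inv(G @ G.T)            # minimum-length generalized inverse
R = Gg @ G                                   # model resolution matrix

# Estimate from noise-free data: m_est = G^-g (G m_true) = R m_true
m_est = Gg @ (G @ m_true)
assert np.allclose(m_est, R @ m_true)

# R != I, so each m_est[i] is a weighted average of all of m_true,
# and the estimate does not reproduce the true model
assert not np.allclose(m_est, m_true)
```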
The closer R is to I, the more m_i^est reflects only m_i^true.

Example: a discrete version of the Laplace transform,

d_i = ∫ e^(-c_i z) m(z) dz

Large c_i: d_i is a shallow average of m(z). Small c_i: d_i is a deep average of m(z).

[Figure: the kernels e^(-c z) for low and high c plotted against z; each is integrated against m(z) to give d_lo and d_hi. The model resolution matrix R is shown as an image with row index i and column index j.]
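A numerical sketch of this resolution pattern (the c and z values below are illustrative assumptions): discretize the kernel e^(-c_i z_j), form the minimum-length generalized inverse, and inspect the diagonal of R = G^-g G:

```python
import numpy as np

# Discretized Laplace transform: d_i = sum_j exp(-c_i * z_j) * m_j * dz
z = np.linspace(0.05, 5.0, 50)               # model depths, M = 50
c = np.array([0.25, 0.5, 1.0, 2.0, 4.0])     # decay constants, N = 5
dz = z[1] - z[0]
G = np.exp(-np.outer(c, z)) * dz             # N x M, under-determined

# Minimum-length generalized inverse G^-g = G^T [G G^T]^-1
Gg = G.T @ np.linalg.inv(G @ G.T)

# Diagonal of the model resolution matrix R = G^-g G
diag = np.diag(Gg @ G)

# Resolution decays with depth: shallow parameters are best resolved
assert diag[0] > diag[-1]
```

Because every kernel decays with depth, the rows of G all concentrate near the surface, which is why the diagonal of R does too.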
The large elements of R lie toward its upper-left corner: the shallowest model parameters are best resolved.

Covariance Associated with the Generalized Inverse
The unit covariance matrix is

[cov_u m] = σ_d^-2 G^-g [cov d] G^-gT

(we divide by σ_d^2 to remove the effect of the overall magnitude of the measurement error). When [cov d] = σ_d^2 I, this reduces to [cov_u m] = G^-g G^-gT.

Unit covariance for the straight-line problem: with G^-g = [G^T G]^-1 G^T, the unit covariance is [G^T G]^-1, whose off-diagonal element is proportional to Σ_i z_i. The model parameters (intercept and slope) are uncorrelated when this term is zero, which happens when the data are centered about the origin.

Part 2
The Spread of Resolution and the Size of the Covariance

A resolution matrix has small spread if only its main-diagonal elements are large, that is, if it is close to the identity matrix. The Dirichlet spread functions quantify this:

spread(R) = Σ_i Σ_j (R_ij - δ_ij)^2   and   spread(N) = Σ_i Σ_j (N_ij - δ_ij)^2

A unit covariance matrix has small size if its diagonal elements are small:

size([cov_u m]) = Σ_i [cov_u m]_ii

so that error in the data corresponds to only small error in the model parameters (correlations between the errors are ignored).

Part 3

Minimization of the spread of resolution and/or the size of the covariance as the guiding principle for creating a generalized inverse.

Over-determined case: note that for simple least squares

G^-g = [G^T G]^-1 G^T

the model resolution is

R = G^-g G = [G^T G]^-1 G^T G = I

always the identity matrix. This suggests that we instead try to minimize the spread of the data resolution matrix, N: find the G^-g that minimizes spread(N).
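The Dirichlet spread function is one line of code, and the R = I claim for simple least squares can be verified directly (an illustrative random example, not the lecture's):

```python
import numpy as np

def dirichlet_spread(A):
    """Dirichlet spread: sum of squared deviations of A from the identity."""
    return np.sum((A - np.eye(A.shape[0])) ** 2)

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 4))                 # over-determined: N=20 > M=4

Gg = np.linalg.inv(G.T @ G) @ G.T            # simple least squares G^-g
R = Gg @ G                                   # model resolution: identity
Nres = G @ Gg                                # data resolution: not identity

assert dirichlet_spread(R) < 1e-20           # zero spread of model resolution
assert dirichlet_spread(Nres) > 1.0          # N != I, so spread(N) > 0
```

For this least-squares N (a symmetric projection of rank M), spread(N) works out to exactly N_data - M, here 16.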
The spread of the k-th row of N is

spread(N_k) = Σ_j (N_kj - δ_kj)^2 = Σ_j N_kj^2 - 2 Σ_j N_kj δ_kj + Σ_j δ_kj^2

Now differentiate with respect to the elements of G^-g: compute the derivative of the first term and of the second term; the third term does not depend on G^-g, so its derivative is zero. Putting it all together yields

[G^T G] G^-g = G^T

which is just simple least squares, G^-g = [G^T G]^-1 G^T.
The simple least squares solution minimizes the spread of the data resolution and has zero spread of the model resolution.

Under-determined case: note that for the minimum-length solution

G^-g = G^T [G G^T]^-1

the data resolution is

N = G G^-g = G G^T [G G^T]^-1 = I

always the identity matrix. This suggests that we instead try to minimize the spread of the model resolution matrix, R: find the G^-g that minimizes spread(R). The minimization leads to

G^-g [G G^T] = G^T

which is just the minimum-length solution, G^-g = G^T [G G^T]^-1. The minimum-length solution minimizes the spread of the model resolution and has zero spread of the data resolution.
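The mirror-image statement can be checked the same way (an illustrative random under-determined G, my assumption): the minimum-length inverse gives zero spread of the data resolution but nonzero spread of the model resolution:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 20))                 # under-determined: N=4 < M=20

Gg = G.T @ np.linalg.inv(G @ G.T)            # minimum-length G^-g
Nres = G @ Gg                                # data resolution: identity
R = Gg @ G                                   # model resolution: not identity

assert np.allclose(Nres, np.eye(4))          # zero spread of data resolution
assert not np.allclose(R, np.eye(20))        # nonzero spread of model resolution

# For full-row-rank G this coincides with the Moore-Penrose pseudoinverse
assert np.allclose(Gg, np.linalg.pinv(G))
```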
General case: minimizing a weighted combination

α_1 spread(N) + α_2 spread(R) + α_3 size([cov_u m])

leads to a Sylvester equation, so in general there is no simple explicit solution in terms of the matrices.

Special case #1: α_1 = 1, α_2 = 0, α_3 = ε^2. Then

[G^T G + ε^2 I] G^-g = G^T, so G^-g = [G^T G + ε^2 I]^-1 G^T

which is damped least squares.

Special case #2: α_1 = 0, α_2 = 1, α_3 = ε^2. Then

G^-g [G G^T + ε^2 I] = G^T, so G^-g = G^T [G G^T + ε^2 I]^-1

which is damped minimum length.
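The damped least squares and damped minimum length formulas are in fact the same matrix: the push-through identity [G^T G + ε^2 I]^-1 G^T = G^T [G G^T + ε^2 I]^-1 holds for any G and any ε^2 > 0. A quick numeric check (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(8, 5))
eps2 = 0.1                                   # damping parameter eps^2

# Damped least squares form of the generalized inverse
Gg1 = np.linalg.inv(G.T @ G + eps2 * np.eye(5)) @ G.T

# Damped minimum length form
Gg2 = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(8))

# The two forms agree element-by-element
assert np.allclose(Gg1, Gg2)
```

The identity follows from G^T [G G^T + ε^2 I] = [G^T G + ε^2 I] G^T; in practice one picks whichever form requires inverting the smaller matrix.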
So no new solutions have arisen, just a reinterpretation of previously derived solutions.

Reinterpretation: instead of solving for estimates of the model parameters, we are solving for estimates of weighted averages of the model parameters, where the weights are given by the model resolution matrix.

A criticism of Dirichlet spread functions, when m represents a discretized continuous function m(x), is that they don't capture the sense of being localized very well. These two rows of the model resolution matrix have the same Dirichlet spread:

[Figure: two panels plotting R_ij against column index j for a fixed row i; both rows have the same Dirichlet spread.]

but the left case is better localized. We will take up this issue in the next lecture.