Subspace Regularization

Subspace regularization follows a different approach from Tikhonov regularization.

The fundamental idea of subspace regularization is to separate identifiable parameter components from non-identifiable parameter components and to exclude the latter from the parameter search.

The identifiability of a parameter is related to the manner and extent to which it influences the existing observation data (a parameter that does not influence any of the existing observations cannot be identified).

Parameters (called base parameters in the following) are usually neither completely identifiable nor completely non-identifiable. It is, however, possible to create linear combinations of base parameters for which this is the case. These combinations are called super parameters.

The transformed (super-)parameter space is separated into two subspaces. One subspace comprises the combinations of parameters that have an influence on observations; these combinations can be uniquely estimated through the history-matching process.

The remaining parameter combinations occupy the so-called null subspace. These combinations have little or no influence on the model outputs corresponding to observations; hence they cannot be estimated through history matching.

The two groups are also denoted as subspaces of the parameter space (the parameter space is the combination of these two orthogonal subspaces, and contains all parameters):

  • The group of identifiable parameters is called the solution subspace (or often just solution space)

  • The group of non-identifiable parameters is called the null subspace (or often just null space)
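The partition described above can be illustrated with a small NumPy sketch. The Jacobian below is purely hypothetical (3 observations, 5 parameters); the rows of `Vt` are the super parameters, i.e. orthogonal linear combinations of the base parameters:

```python
import numpy as np

# Hypothetical 3-observation x 5-parameter Jacobian: parameters 4 and 5
# do not influence any observation, so they cannot be identified.
J = np.array([
    [1.0, 0.5, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.5, 0.0, 0.0],
    [0.5, 0.0, 1.0, 0.0, 0.0],
])

# SVD: the rows of Vt are orthogonal linear combinations of the base
# parameters (super parameters).
U, s, Vt = np.linalg.svd(J)

# Combinations whose singular value is (numerically) zero have no
# influence on the observations: they span the null space.
tol = 1e-10
n_solution = int(np.sum(s > tol))

solution_space = Vt[:n_solution]   # identifiable combinations
null_space = Vt[n_solution:]       # combinations invisible to the data

print(n_solution)          # 3 identifiable combinations
print(null_space.shape)    # (2, 5): two null-space directions
```

Multiplying `J` by any null-space direction yields (numerically) zero, which is exactly why history matching cannot constrain those directions.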

Truncated Singular Value Decomposition

Singular Value Decomposition (SVD) is the method by which the parameter space is partitioned into the two orthogonal subspaces, the solution space and the null space.

In most groundwater modelling contexts the solution space is smaller than the null space. The earth is complex, and the information content of most calibration data sets is insufficient to provide unique estimation of the parameters which describe this complexity.

SVD analyses the eigenvectors of the covariance matrix to identify the super parameters. The eigenvalues, a measure of the post-calibration variability of their associated eigenvectors, provide the criterion for deciding whether a parameter combination belongs to the solution space and is therefore included in the optimization. The ratio of lowest to highest eigenvalue is a measure of the extent to which the inverse problem approaches ill-posedness. If this ratio is less than about 5e-7, the problem can be considered ill-posed (in which case PEST would fail to optimize).

The truncated SVD separates the parameter space into solution and null subspace using this ratio as the truncation criterion, and therefore omits any super parameters that are too insensitive to be uniquely estimated. As a consequence, the inversion within the solution space is always well-posed and a stable optimization is guaranteed (unless it is flawed by other sources of error, e.g. poor derivative calculation).

FePEST applies truncated SVD with a threshold of 5e-7 by default in any PEST setup.
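The truncation criterion can be sketched on a small synthetic least-squares problem. All dimensions and data below are hypothetical; the 5e-7 threshold is applied to the eigenvalue ratio of JᵀJ, which equals the squared singular-value ratio of J:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-posed problem: 20 observations, 10 parameters,
# with one parameter direction made nearly redundant.
J = rng.normal(size=(20, 10))
J[:, 9] = J[:, 8] + 1e-9 * rng.normal(size=20)  # near-duplicate column
h = rng.normal(size=20)                          # residuals to fit

U, s, Vt = np.linalg.svd(J, full_matrices=False)

# Truncation: drop directions whose eigenvalue ratio (squared
# singular-value ratio) falls below the 5e-7 threshold.
eigthresh = 5e-7
keep = (s / s[0]) ** 2 > eigthresh

# Parameter upgrade computed within the solution space only; the
# truncated direction is simply left at its prior value.
p = Vt[keep].T @ ((U[:, keep].T @ h) / s[keep])
```

Without the truncation, division by the near-zero singular value would amplify noise enormously; with it, the inversion stays stable, which mirrors the guaranteed well-posedness described above.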


Further reading: PEST Manual (5th Ed.), Chapter 8.4: Truncated Singular Value Decomposition.

Least Squares (LSQR)

Least Squares (LSQR) is an alternative to the SVD method for highly parameterized inversion problems. Experience has shown that its application is useful when more than 2500 parameters are involved.
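As a rough illustration of why LSQR suits highly parameterized problems, the sketch below uses SciPy's implementation (`scipy.sparse.linalg.lsqr`), not PEST's internal one; the matrix dimensions, sparsity, and damping value are hypothetical. LSQR works matrix-free on a sparse Jacobian and never forms JᵀJ:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Hypothetical sparse Jacobian for a highly parameterized problem:
# 5000 observations, 3000 parameters, ~1% non-zero entries.
J = sparse_random(5000, 3000, density=0.01, random_state=0, format="csr")
h = rng.normal(size=5000)

# LSQR iteratively minimizes ||J p - h||_2 using only sparse
# matrix-vector products; `damp` adds Tikhonov-style damping.
result = lsqr(J, h, damp=1e-2)
p, istop, itn = result[0], result[1], result[2]
```

Because only products with J and Jᵀ are needed, memory and run time scale with the number of non-zero Jacobian entries rather than with the square of the parameter count.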


Further reading: C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM TOMS 8(1), 43-71 (1982); and C. C. Paige and M. A. Saunders, Algorithm 583: LSQR: Sparse linear equations and least-squares problems, ACM TOMS 8(2), 195-209 (1982).
