Tikhonov Regularization
The Tikhonov regularization method, as implemented in PEST, automatically generates a number of "information" equations, which define the initial value of each parameter as its preferred value. The user can also modify these equations or set up additional ones.
Regularization of parameters: departures (red) from preferred parameter values (green) are penalized.
When using Tikhonov regularization, the calibration process is formulated as a constrained minimization problem: minimize the regularization objective function while ensuring that the measurement objective function meets a user-specified target. If this target is not met, PEST instead minimizes the measurement objective function. At the same time, it adjusts the weights applied to prior information so that they act as Lagrange multipliers in the constrained optimization process. PEST thus determines the appropriate relative weighting between measurements and prior information in accordance with the user's choice of target measurement objective function.
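The interplay between the two objective functions can be illustrated with a toy least-squares sketch. The example below is hypothetical and not PEST code: it estimates parameters of a small linear model while pulling them toward preferred values, and raises the regularization weight (playing the role of the Lagrange multiplier) until the measurement objective function reaches an assumed target `phimlim`.

```python
import numpy as np

# Hypothetical toy problem: data d = X @ p + noise, with Tikhonov
# regularization pulling the estimate toward preferred values p_pref.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 8))          # 12 observations, 8 parameters
p_true = rng.normal(size=8)
d = X @ p_true + 0.05 * rng.normal(size=12)
p_pref = np.zeros(8)                  # preferred (initial) parameter values

def solve(mu):
    """Minimize ||X p - d||^2 + mu * ||p - p_pref||^2 via normal equations."""
    A = X.T @ X + mu * np.eye(8)
    b = X.T @ d + mu * p_pref
    return np.linalg.solve(A, b)

# Raise the regularization weight mu until the measurement objective
# function just reaches the (assumed) user-specified target phimlim.
phimlim = 0.1
mu = 1e-6
while np.sum((X @ solve(mu) - d) ** 2) < phimlim:
    mu *= 2.0                         # larger mu -> closer to p_pref, worse fit
p_est = solve(mu)
```

Larger values of `mu` trade calibration fit for respect of the preferred values, which is the essence of the weighting that PEST automates.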
As a result, Tikhonov regularization reduces the number of possible parameter sets that constitute a calibrated model by rejecting calibrated models with unrealistic parameter values.
Constrained optimization: the regularization objective function (green contours) is minimized while staying within the defined limits (red contour) of the measurement objective function.
Further reading: Methodologies and Software for PEST-Based Model Predictive Uncertainty Analysis: Regularization (p. 46).
Regularization of Pilot-Point Parameters
If a parameter field is defined as a spatially varying distribution using the pilot point method, a better fit to the observation data can be achieved during history matching than with constant parameters. While this is favourable to some extent, the resulting parameter field may look implausible, especially when pilot points are placed at high density.
Because there are more pilot points (104, purple crosses) than observations (12, flags), a perfect match between observed and simulated results is obtained.
An overfitted parameter field.
The transmissivity field, however, reveals that this result is nonetheless flawed: the distribution looks somewhat "bumpy", especially around the observations. More severely, the transmissivity above the northernmost row of observation points is entirely different from (lower than) that in the remaining area.
A parameter distribution like this is unlikely, and accordingly a prediction made with this model has a high potential of being wrong, even though the model is perfectly aligned with its calibration data. This state is called overfitting.
To prevent overfitting, a second objective (next to the measurement objective function) is required, through which plausibility is preferred.
A common approach is to prefer homogeneous parameter distributions over heterogeneous ones. If different values are assigned to neighbouring pilot points in order to lower the measurement objective function, this difference is penalized and contributes to the regularization objective function. As a consequence, the optimization yields a balanced compromise between calibration fit and homogeneity. Finding the right distance within which these penalties are applied is important: differences between closely located points need to be penalized more strongly, as large parameter differences become less likely at short distances. Pilot points located far apart (beyond a certain distance, the correlation length) need not be penalized at all.
This distance and the strength of correlation are defined through a variogram.
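As an illustration, a spherical variogram (the type shown in the figure below) can be sketched in a few lines of Python. The function and its parameter names are illustrative, not part of PEST; the variogram value rises from zero and levels off at the sill once the separation distance reaches the range (correlation length).

```python
import numpy as np

def spherical_variogram(h, sill=1.0, a=200.0):
    """Spherical variogram: gamma(h) rises from 0 at h = 0 and
    reaches the sill at the range (correlation length) a."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)
```

For the 200 m range used later in this section, two points 100 m apart have a variogram value of about 0.69 times the sill (still correlated), while points more than 200 m apart reach the full sill (uncorrelated).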
Regularization of pilot point parameters: by penalizing differences between pilot points (red), a homogeneous (smooth) distribution is preferred. Initial parameter values are still preferred through prior information (grey).
Thus, within the range of correlation, implausible heterogeneities are suppressed unless they are necessary to meet the targeted value of the measurement objective function. PEST calculates the expected correlation between each pair of pilot points and creates a covariance matrix, which is used to impose the correct weights. In summary, the correlation length allows the user to define a preferred variability of a model property, in addition to the preferred mean value provided through the initial parameter value.
Further reading: PEST Groundwater Data Utilities, Chapter 5.6: Regularization (of pilot points).

A spherical-type variogram with a correlation length (Range) of 200 m.
The figure below shows the same model, regularized with a correlation length of 200 m. The transmissivity field is smoother, but still reflects the general trends suggested by the observation data. Even though this model exhibits a larger model-to-measurement misfit, predictions made with it can be trusted with higher confidence.
Regularization of pilot-point parameters leads to a smoother parameter field.