Is the FEFLOW model ready for FePEST?
This is normally a question each modeller has to evaluate prior to initiating the work in FePEST. Below we have listed a few questions that can help you check whether the FEFLOW FEM file (*.fem) is ready for a PEST operation.
Related to Basic Settings
Model convergence is very important information for judging whether the work in FePEST can be initiated. Since FEFLOW parameters will be automatically adjusted by PEST during any of its operation modes (Estimation, Predictive Analysis, Regularization, Pareto and Monte Carlo), a numerically "weak" model setup will slow down the convergence of the PEST optimization or may prevent it altogether. The modeller should therefore be mindful when setting up a model that is meant to be calibrated with PEST. If instabilities cannot be prevented, however, using the IES (Iterative Ensemble Smoother) method instead of the classic GLM method may be preferable, as it is much more forgiving of model instabilities.
Some knowledge about parameter uncertainty and variation within the model domain is always very beneficial, for example for parameter estimation, Monte Carlo analysis, etc. in PEST. If working with the Iterative Ensemble Smoother (PESTIES), knowledge (or an estimate) of the parameter uncertainty is fundamental to the process.
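As a minimal sketch of how such prior parameter uncertainty is typically expressed for an ensemble method: realisations are drawn from an assumed prior distribution. All values below (mean, standard deviation, ensemble size) are hypothetical, for a log-normally distributed hydraulic conductivity.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical prior for hydraulic conductivity K [m/s], assumed
# log-normal: mean and spread are illustrative, not from any real model.
mean_log10_k = -4.0    # i.e. K around 1e-4 m/s
sigma_log10_k = 0.5    # prior uncertainty of half an order of magnitude

# An ensemble method such as IES works with a set of realisations
# sampled from this prior rather than a single parameter value.
n_realisations = 200
log10_k = rng.normal(mean_log10_k, sigma_log10_k, size=n_realisations)
k_ensemble = 10.0 ** log10_k
```

The narrower (or wider) this prior is chosen, the less (or more) freedom the ensemble method has to depart from the initial parameter estimate.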
You can get an initial idea of what information is relevant from the list below:
• Parameter bounds: the allowed parameter variation (upper and lower bounds).
• Spatial variability: What is the variability of the parameters? Is there any information about the spatial correlation (e.g. a variogram), interpolation method, etc.?
• Anisotropy: Does anisotropy have to be considered in the system?
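The checks above can be sketched in a few lines. The parameter names, bounds and variogram settings below are purely illustrative, not taken from any specific FEFLOW model:

```python
import numpy as np

# Hypothetical parameter definitions: name -> (initial, lower, upper).
parameters = {
    "kxx_zone1": (1e-4, 1e-6, 1e-2),   # conductivity [m/s]
    "kxx_zone2": (5e-5, 1e-6, 1e-2),
    "recharge":  (2e-9, 1e-10, 1e-8),  # groundwater recharge [m/s]
}

# Every initial value must lie strictly inside its bounds; checking
# this up front avoids a failed PEST run later.
for name, (init, lo, hi) in parameters.items():
    assert lo < init < hi, f"{name}: initial value outside bounds"

# A simple exponential variogram, one common way to describe the
# spatial correlation of log-conductivity between pilot points.
def exponential_variogram(h, sill=1.0, rng=500.0):
    """Semivariance at lag distance h [m]."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

print(round(exponential_variogram(500.0), 3))  # semivariance near the range
```

Here the variogram range (500 m) controls over what distance parameter values are assumed to be correlated; it is an assumption that should come from site knowledge where possible.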
Further details are discussed in section Parameter Definitions.
All the PEST operation modes require certain Observation Definitions.
An operation without a single observation is meaningless and technically not supported. Observations are used to define the measurement objective function in PEST; they can be standard FEFLOW observation points, FEFLOW budget groups and/or any other information parsed via a plug-in or script.
Understanding the uncertainty of the observations that form the calibration target is important for setting a correct weighting of the different observations. Observations with lower uncertainty should therefore receive a higher weight in the calibration than observations with higher uncertainty.
Note that observation uncertainty does not only arise from measurement error (e.g. of the measurement instrument), but is the accumulation of all effects that are expected to prevent the model from matching the observations. These include, for example, surveying errors, temporal variations in a steady-state model, but also known model defects, e.g. an observation bore very close to an abstraction well or to a boundary condition with an unknown value.
In a general situation, some observations are also more important (or relevant) than others. In these circumstances, a good practice is to include this in the weighting strategy: weights can make certain observation definitions more dominant in the measurement objective function.
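As a sketch of how weights act on the measurement objective function (PEST computes this internally; all head values and uncertainties below are hypothetical):

```python
import numpy as np

# Hypothetical hydraulic heads [m]: measured values, simulated values,
# and the estimated standard deviation of each observation.
observed  = np.array([102.3, 98.7, 101.1, 99.5])
simulated = np.array([102.0, 99.2, 100.6, 99.9])
sigma     = np.array([0.1, 0.1, 0.5, 0.5])  # uncertain bores: larger sigma

# A common weighting strategy: weight = 1 / standard deviation, so
# reliable observations dominate the measurement objective function.
weights = 1.0 / sigma

# PEST's measurement objective function: sum of squared weighted residuals.
residuals = weights * (observed - simulated)
phi = float(np.sum(residuals ** 2))
```

With this choice the two well-measured bores (sigma = 0.1 m) contribute far more to phi than the two uncertain ones, even though the raw misfits are of similar size.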
Note that when using the Iterative Ensemble Smoother, knowledge (or an estimate) of the observation uncertainty is fundamental to the process.
Further details are discussed in section Observation Definitions.
Helpful to know
In case there is no prior knowledge about observation relevance, a first PEST run in FePEST with equal observation weights can be used to understand the contribution of each observation (or observation group) to the measurement objective function, and thus to the parameter estimation (calibration process).
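Such a contribution analysis can be sketched as follows; the group names and residual values are hypothetical, standing in for the residuals reported after a first equal-weight run:

```python
import numpy as np

# Illustrative residuals (observed - simulated) from a first run with
# equal weights; names and values are hypothetical.
residuals = {
    "heads_shallow": np.array([0.3, -0.5, 0.2]),
    "heads_deep":    np.array([1.8, -2.1]),
    "river_flux":    np.array([0.05]),
}

# Contribution of each observation group to the measurement objective
# function; a group that dominates phi may need its weight reduced.
phi_by_group = {g: float(np.sum(r ** 2)) for g, r in residuals.items()}
phi_total = sum(phi_by_group.values())

for group, phi in sorted(phi_by_group.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {phi:.4f} ({100 * phi / phi_total:.1f}% of total)")
```

In this illustration the deep heads dominate the objective function, a hint that the weighting between groups may need rebalancing before the actual estimation run.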
Depending on the conceptual model, there may be certain "rules" to be respected for some parameters used in the PEST operation, particularly in Estimation mode. Such rules can be, for example, zones of homogeneity, anisotropy ratios (e.g. between horizontal and vertical conductivity), spatial parameter variability, etc.
It is always good to write down these "rules", since they can later be implemented as prior information to guide PEST during its operation.
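As an illustration of one such rule: suppose the horizontal conductivity is expected to be about ten times the vertical conductivity (an assumed anisotropy ratio, not a general recommendation). Expressed on log-transformed parameters, the rule reads log10(Kh) - log10(Kv) = 1.0, and its residual can be evaluated like this:

```python
import math

def prior_residual(kh, kv, expected_log_ratio=1.0):
    """Deviation of a parameter pair from the assumed anisotropy rule
    log10(Kh) - log10(Kv) = expected_log_ratio (values hypothetical)."""
    return (math.log10(kh) - math.log10(kv)) - expected_log_ratio

# A parameter pair that honours the rule gives a residual near zero;
# deviations are penalised in the regularisation objective function.
honoured = prior_residual(1e-4, 1e-5)   # ratio 10: residual ~ 0
violated = prior_residual(1e-4, 1e-6)   # ratio 100: residual ~ 1
```

This mirrors the way PEST handles prior-information equations: the rule residuals enter a regularization objective function alongside the measurement objective function, pulling the estimate toward the stated prior knowledge.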
More information is available in section Prior Information.