In the stationary case, both the GEV and GP distribution functions were used for fitting, and parameters were estimated with four different parameter estimation methods (PEMs): Maximum Likelihood Estimation (MLE), L-moments, Generalized Maximum Likelihood Estimation (GMLE), and a Bayesian approach. The MLE approach has become one of the standards in statistical inference because of its asymptotic efficiency as the sample size n increases to infinity, for distributions satisfying several regularity conditions [11,46,47]. It is one of the most frequently applied methods for estimating unknown parameters and, in addition, one of the most flexible regarding its application to modified models [11]. As a likelihood-based approach, MLE aims to find the parameter values which maximize the probability of resampling the sample data [8]. Coles [11] provides a detailed description of the MLE method. According to Coles and Dixon [48], maximum likelihood estimators for the AMAX approach tend to become unstable for small sample sizes. In addition, Hosking and Wallis [49] showed for the GP that, unless the sample size is 500 or more, estimators derived by the method of moments are more reliable than MLE. There are several variations of the method of moments, among them probability-weighted moments and L-moments [
9]. Since L-moments are more robust to outliers than conventional moments and additionally enable more secure inferences about the underlying probability distribution to be made from small samples [50], this technique is often the preferred choice. Parameter estimation with L-moments proceeds like the common method of moments, but conventional moments are replaced by L-moments, which are expectations of certain linear combinations of order statistics [50]. For detailed information, Hosking [50] unified the theory of L-moments and provides guidelines for practical use. Another method, invented to circumvent possible weaknesses of the MLE, is the GMLE [
51], which restricts estimates of the shape parameter. Since maximum likelihood estimation can generate absurd values of the GEV shape parameter when sample sizes are small [50,51], with GMLE a prior distribution is chosen that assigns weights to different values of the shape parameter within the allowed range. The choice of prior function is by default the beta distribution, similar to Martins and Stedinger [40,51]. Analogous to the MLE method, the estimator can be identified by maximizing the generalized log-likelihood function [51], which is the product of the likelihood function and the prior distribution. As an alternative to classical statistical inference, the last estimation technique used is based on Bayesian inference [
46]. One of the main differences is that the parameters θ are no longer assumed to be constant but are rather treated as random variables with a distribution function π(θ) [11]. This π(θ) is called the prior distribution, since it contains possible beliefs about θ without reference to the data [11]. An obvious advantage of this method is the inclusion of additional knowledge about the parameters, which may come from other data sets or a modeler's experience and physical intuition [46]. On the other hand, the choice of priors remains a subjective decision, so different analysts would presumably specify different priors [11]. By applying the well-known Bayes' Theorem, the prior distribution can be converted into a posterior distribution π(θ|x), which includes the additional information provided by the data x, as follows:

π(θ|x) = π(θ) f(x|θ) / ∫ π(θ) f(x|θ) dθ,

where f(x|θ) denotes the likelihood of the data x given the parameters θ. However, the normalizing integral in the denominator aggravates the direct computation of the posterior distribution [11,36]. This difficulty was overcome by the development of simulation-based techniques, such as Markov Chain Monte Carlo (MCMC), which facilitated the use of Bayesian techniques to the extent that they are now standard in many areas of application [
11]. After fitting the statistical models, the consistency between the modeled distribution and the distribution of the sample was examined using diagnostic plots such as quantile-quantile plots (QQ-plots), probability plots, and simple comparisons of density distributions, as well as the Kolmogorov-Smirnov test (KS-test) as a goodness-of-fit test [13,38,39,52].
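As a minimal illustration of the MLE step described above, the following sketch fits a GEV distribution to an annual-maxima sample by maximum likelihood. It is not the study's implementation: the data are synthetic, and SciPy's genextreme parameterizes the shape as c, the negative of the shape parameter ξ in the convention of Coles [11].

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maxima (illustrative only; not the study's data).
rng = np.random.default_rng(42)
sample = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=80, random_state=rng)

# MLE fit of the GEV: scipy maximizes the log-likelihood internally.
# Note: scipy's shape c equals minus the usual GEV shape parameter xi.
c_hat, loc_hat, scale_hat = genextreme.fit(sample)
print(c_hat, loc_hat, scale_hat)
```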
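The L-moment estimation described above starts from sample L-moments. The sketch below computes the first two sample L-moments via the unbiased probability-weighted moment (PWM) estimators given by Hosking [50]; the function name is illustrative.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)              # ranks 1..n of the ordered sample
    b0 = x.mean()                        # PWM beta_0
    b1 = np.sum((j - 1) / (n - 1) * x) / n   # unbiased PWM beta_1
    l1 = b0                              # L-location (equals the mean)
    l2 = 2.0 * b1 - b0                   # L-scale
    return l1, l2

# For a standard uniform sample, l1 -> 0.5 and l2 -> 1/6 as n grows.
rng = np.random.default_rng(0)
l1, l2 = sample_l_moments(rng.uniform(size=100_000))
print(l1, l2)
```

For a full distribution fit, these sample L-moments would then be matched against the closed-form L-moment expressions of the GEV or GP.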
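The GMLE step can be viewed as penalized maximum likelihood: the generalized log-likelihood is the log-likelihood plus the log of the shape prior. The sketch below assumes the geophysical beta(6, 9) prior on the shape over (-0.5, 0.5) in the convention of Martins and Stedinger [51] (which coincides with scipy's shape c); the optimizer choice and starting values are ad hoc assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta, genextreme

def neg_gen_loglik(params, data):
    """Negative generalized log-likelihood: -(log-likelihood + log prior)."""
    kappa, loc, scale = params
    if scale <= 0 or not (-0.5 < kappa < 0.5):
        return np.inf
    # scipy's genextreme shape c matches the kappa convention of [51].
    loglik = genextreme.logpdf(data, c=kappa, loc=loc, scale=scale).sum()
    # Geophysical beta(6, 9) prior on kappa, shifted to the unit interval.
    logprior = beta.logpdf(kappa + 0.5, 6, 9)
    return -(loglik + logprior)

rng = np.random.default_rng(1)
data = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=50, random_state=rng)
res = minimize(neg_gen_loglik, x0=[0.0, np.median(data), data.std()],
               args=(data,), method="Nelder-Mead")
kappa_hat, loc_hat, scale_hat = res.x
print(kappa_hat, loc_hat, scale_hat)
```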
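The MCMC idea mentioned above can be sketched with a minimal random-walk Metropolis sampler for the GEV posterior. This is not the study's sampler: the priors are taken as flat (an assumption), the proposal step sizes are ad hoc, and the data are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
data = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=100, random_state=rng)

def log_post(theta):
    """Unnormalized log-posterior: GEV log-likelihood + flat priors (assumption)."""
    c, loc, scale = theta
    if scale <= 0:
        return -np.inf
    return genextreme.logpdf(data, c=c, loc=loc, scale=scale).sum()

# Random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, posterior ratio); step sizes chosen ad hoc.
theta = np.array([0.0, data.mean(), data.std()])
lp = log_post(theta)
step = np.array([0.05, 0.5, 0.3])
chain = []
for _ in range(5000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[1000:]                 # discard burn-in
print(chain.mean(axis=0))                      # posterior means (c, loc, scale)
```

Because the acceptance rule only uses the ratio of posterior densities, the intractable normalizing integral cancels, which is exactly why MCMC sidesteps the difficulty noted above.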
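Finally, the KS-test step of the diagnostics can be sketched as follows, again on synthetic data: the sample is compared against the fitted GEV. Note that when the reference distribution's parameters are estimated from the same sample, the standard KS p-value is only approximate.

```python
import numpy as np
from scipy.stats import genextreme, kstest

rng = np.random.default_rng(3)
sample = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=80, random_state=rng)

# Fit by MLE, then test the sample against the fitted distribution.
c_hat, loc_hat, scale_hat = genextreme.fit(sample)
result = kstest(sample, genextreme(c_hat, loc=loc_hat, scale=scale_hat).cdf)
print(result.statistic, result.pvalue)
```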