Fine-Tune Robust Optimization

A Robust Optimization framework is presented that combines original concepts and fundamentals with ideas from relative regret models and from static robust optimization, including its notion of conservatism. The algorithm uses a fine-tuning strategy so that robustness and a target ideality can be achieved simultaneously at a specified risk. The framework comprises original concepts, a mathematical approach, and an algorithm. The statistical treatment of the data, together with the framework's original concepts, makes it suitable for short-, middle-, or long-term decision-making settings. The framework is highly tractable, since the algorithm forces the creation of a setting in which the robust optimization satisfies the specified risk. It can be applied to linear and nonlinear mathematical models, provided that the objective function is monotonic in the domain of the active convex region. Several examples are solved to illustrate the framework, and all results demonstrate high tractability and performance. The range of applications is wide. Throughout the text there is an in-depth discussion of the framework's philosophy, objective, original concepts, fields of application, and statistical and probabilistic fundamentals.


Introduction
A classical concern in mathematical programming is that, if the input data of a model vary, some of the constraints can be violated, thus generating an infeasible solution. Robust Optimization (RO) was first developed from this concern, aiming to generate feasible solutions. Because it is practical and powerful in real applications, RO has since been used as a way of handling optimization problems that contain uncertainties in the input parameters of a model. Different concepts have been developed independently in this field; all of them cope with variation in the values of the input data of a model, but the objective of RO may differ among them.
Due to the need for models that are immune to perturbations in the input data, Soyster (1973) was the first to propose an RO method embodying the ideas of conservatism and robustness through the concept of the robust counterpart. In this approach, static robust optimization is applied to linear programming (LP) problems, with the uncertain parameters of each constraint represented as hyperspheres with a given center and radius in Euclidean space, where b is the right-hand side of the constraint. This RO makes an ultraconservative adaptation of the convex feasible region to completely immunize the solution against infeasibility (Eq. 1) for any perturbation of each coefficient a_j within the interval [a_j − â_j, a_j + â_j]:

min c^T x
s.t. (a_1 + â_1) x_1 + ⋯ + (a_n + â_n) x_n ≤ b    (1)

RO based on conservatism and robustness aims to improve the performance of a deterministic model in giving feasible optimal solutions by changing the convex feasible region of the original deterministic model. Generally, neither scenarios nor probability distributions are considered in this methodology. The main challenge of this methodology is to balance conservatism and robustness: as the robustness increases, the conservatism increases and the solution deviates further from the nominal solution of the deterministic model. Motivated by this challenge, research with different approaches has been developed over the past years, e.g. El Ghaoui & Lebret (1997); El Ghaoui et al. (1998); Ben-Tal & Nemirovski (1998, 1999, 2000); Bertsimas & Sim (2004). To avoid overconservatism, these approaches changed the way the parameters were represented in Euclidean space and solved the problem as either a conic or a quadratic program.
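Soyster's worst-case substitution in Eq. (1) can be sketched numerically. The following is a minimal illustration with hypothetical data (a two-variable LP with one uncertain constraint and interval coefficient perturbations); the vertex-enumeration shortcut replaces a real LP solver:

```python
import numpy as np

# Minimal sketch of Soyster's worst-case substitution (Eq. 1) on a toy
# 2-variable LP with hypothetical data: max 3*x1 + 2*x2 subject to
# a1*x1 + a2*x2 <= b, x >= 0, each a_j perturbed in [a_j - ahat_j, a_j + ahat_j].
c = np.array([3.0, 2.0])
a = np.array([2.0, 1.0])       # nominal constraint coefficients
ahat = np.array([0.3, 0.2])    # maximal perturbations
b = 10.0

def solve_lp(coeffs):
    # With a single <= constraint and x >= 0, the optimum lies at a vertex
    # where only one variable is nonzero: x_j = b / coeffs_j.
    vertex_values = c * b / coeffs
    return vertex_values.max()

nominal_obj = solve_lp(a)          # uses the nominal data
robust_obj = solve_lp(a + ahat)    # Soyster: worst-case coefficients (x >= 0)
# The robust objective is never better than the nominal one: this loss is
# the price of full immunization against the perturbations.
```

Here the robust objective drops from 20 to about 16.7, illustrating the conservatism the later approaches try to reduce.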
El Ghaoui & Lebret (1997) focus on least-squares problems, minimizing worst-case residuals under unknown-but-bounded uncertainties. El Ghaoui et al. (1998) developed an approach with uniform distributions for the uncertainties to quantify the effect of unknown-but-bounded deterministic perturbations of problem data on solutions of uncertain semidefinite programs. Ben-Tal & Nemirovski (1998) developed an RO approach for convex problems with unknown-but-bounded data, immunized against all perturbations of the data under the assumption that they are represented by an ellipsoid in Euclidean space. Ben-Tal & Nemirovski (1999) replace an LP problem by its robust counterpart (Eq. 2) to satisfy all hard constraints and, to obtain a tractable approach, consider an ellipsoidal uncertainty set to ease the optimization. Despite being convex, all these conic or quadratic approaches lead to nonlinear models, increasing the computational effort and demanding more time and complexity.

min { t | c^T x ≤ t, Ax ≥ b ∀(A, b) ∈ U }    (2)
Ben-Tal & Nemirovski (2000) solved over 90 LPs with RO methods such as those of El Ghaoui & Lebret (1997), El Ghaoui et al. (1998) and Ben-Tal & Nemirovski (1998, 1999), showing the importance of RO for optimality and highlighting the practical point that small perturbations of the data can lead to infeasible solutions. The approaches of Ben-Tal & Nemirovski (1998, 1999, 2000) cannot be directly applied to discrete optimization (Assavapokee et al. (2008)). In addition, none of these approaches considers probability distributions. Bertsimas & Sim (2004) proposed an approach that allows the degree of conservatism of an RO to be controlled. Independently of the idea of the robust counterpart first approached by Soyster (1973), Kouvelis & Yu (1997) considered robustness from a different point of view: based on the concept of the relative regret model, they considered that a solution is robust the closer it gets to an ideal solution. This method solves all scenarios s ∈ S one by one, minimizing the maximum relative regret (Eq. 3):

min_x max_{s ∈ S} ( f_s(x) − f_s* ) / f_s*    (3)

where f_s* is the optimal (ideal) objective value of scenario s.
Assavapokee et al. (2008) present an approach to LP focused on solving large-scale min-max regret and min-max relative regret RO problems under ambiguity for two-stage decision-making. The first-stage problem must be a MILP model and the second-stage problem must be an LP model. The methodology considers a finite scenario set and searches for the best scenario among all possible realizations. Xidonas et al. (2017) developed a minimax regret approach, extending the minimax regret criterion to multiobjective robust portfolio optimization with a focus on portfolio management.
The regret-based robustness can be applied to Pareto solutions and has managerial usability for investment practitioners. In addition, other researchers have developed specific adaptations of regret models in related fields of engineering, e.g. Baohua & Shiwei (2009), Jiang et al. (2013) and Chen et al. (2014).
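The scenario-by-scenario regret computation behind Eq. (3) can be sketched with a toy discrete example. All numbers and the three candidate decisions below are hypothetical; numpy arrays stand in for solving each scenario:

```python
import numpy as np

# Minimal min-max relative-regret sketch in the spirit of Kouvelis & Yu:
# choose one of a few candidate decisions under a finite scenario set.
profit = np.array([            # profit[i, s]: decision i under scenario s
    [10.0, 14.0, 9.0],
    [12.0, 11.0, 10.0],
    [ 8.0, 15.0, 12.0],
])
ideal = profit.max(axis=0)                 # best achievable per scenario
rel_regret = (ideal - profit) / ideal      # relative regret matrix
worst = rel_regret.max(axis=1)             # worst-case regret of each decision
best_decision = int(worst.argmin())        # min-max relative-regret choice
```

Note that every scenario must be solved to build `ideal`, which is exactly the exhaustive work the present framework avoids.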
New concerns were derived from the ideal of robust optimization. Distributionally robust optimization is a class of approaches addressed to ambiguous probability distributions; it generally considers chance constraints and is usually applied to stochastic programs. Adaptive robust optimization is a class of approaches that balances conservatism and robustness through multiple decision stages, adjusting some wait-and-see variables and adapting the robust counterpart of the problem. Neither of these roles applies to the present framework, which is a static robust optimization concerned with a maximization or minimization problem. The new approach may be called fine-tune robust optimization.
Optimization under uncertainty that lacks probability distribution information (e.g. RO without historical data) lacks the historical trends needed to make big decisions for the long run. The first steps of the procedure in the present framework perform a statistical treatment of the data using the framework's original concepts, which is essential for handling short-, middle-, and long-term decision-making settings that lead to the best long-run average performance.
There is no existing framework that fuses the ideas of regret models with the conservatism and robustness of static RO. With this fusion it is possible to obtain robust decision-making while simultaneously approaching a target ideality, two mutual benefits: by approaching a target ideality, conservatism is reduced in a natural way. In addition, if decision-making takes the most recent events as a starting point, a solution closer to ideality is achieved without ignoring the current tractability of the operation, and this is another basis for the present framework.
Under the concept of conservatism and robustness, robust solutions deviate from the nominal solution, but if a regret model is added to the framework, the solution naturally approaches an ideal target while the balance of robustness is struck (as will be seen in Section 3). Decisions based on events far from present reality may not be feasible to implement; robust solutions based on recent trends have a greater chance of being acted on consistently, because these are the occurrences likely to be repeated or only slightly modified, i.e. the decision-making does not ignore the standard, which increases the feasibility of putting it into practice. To achieve this, some original concepts, the statistical and probabilistic fundamentals of this framework, and probability distributions must be considered. These are the pillars of the present framework. The approach incorporates probability distributions that are not joint.
The proposed framework is composed of a mathematical approach and an algorithm that perform a robust optimization based on recent historical data and original concepts. The framework's concepts account for conservatism and robustness and also aim to find a solution that gets close to a target solution (an ideal solution, i.e. perfect information), merging two concepts of RO into one. The balance of conservatism and robustness is achieved by adding a penalty factor that multiplies the expected value of an uncertainty. To execute this methodology it is necessary to have at least one uncertainty that follows a normal distribution, because the algorithm finds the best standard deviation value for each normal uncertainty; the static robust optimization can then be done in the final step of the algorithm to find the best solution. This final robust solution is picked for decision-making by a stochastic procedure, to avoid exhaustive computational effort in the final step of the algorithm.
In this approach, epistemic uncertainties are also considered to follow a normal distribution, since they are subjective in nature and the central limit theorem naturally supports this assumption. The framework serves linear and nonlinear models, under certain axioms, but the objective function (OF) must be monotonic in the domain of the convex feasible region (DCFR). The OF can be non-monotonic outside the DCFR. The framework can be applied to discrete and continuous optimization, and also to semidefinite programming (SDP), since every LP can be expressed as an SDP.
The present framework does not include probabilistic constraints, which makes the resolution easier: no integrals need to be solved and no nonlinear constraints are created by inserting cumulative distribution functions, naturally increasing the robustness of the problem by avoiding nonlinearity where there is none.
The present framework is highly tractable, since the algorithm always forces the tuning to grant feasible and tractable solutions at a specified risk. The regulation of conservatism and robustness is performed by the algorithm according to the risk specification, or to the planner's satisfaction, avoiding the use of only worst cases or of cases delimited by some hyperplane geometry. In Section 2 the framework is discussed in depth. In Section 3 several linear and nonlinear examples are solved using the framework and the results are discussed. Conclusions are given in Section 4.

Framework
In this section the framework is discussed in full: its philosophy, objective, original concepts, fields of application, statistical and probabilistic fundamentals, the mathematical approach, and the algorithm. Before arguing about why and how the procedures are executed, it is important to understand the fundamentals and definitions from Section 2.1. The general procedure is first to perform a statistical treatment of the data (Section 2.1), then transform (adapt) the original model into another one (Section 2.2), and finally execute the algorithm to fine-tune the model and carry out the robust optimization.

Objective, philosophy and first considerations
In order to carry out a robust optimization through this new approach, which regulates robustness and conservatism, the present framework incorporates an algorithm, a regret model with its own definitions and concepts that decrease the relative regret, and a mathematical formulation that makes the deterministic model more robust to parametric variations. To perform the RO, the original model is first transformed into another, a robust model with inserted penalty factors, through the mathematical formulation of the framework; finally, the values of all penalty factors and of the standard deviations of the normal uncertainties are tuned through an algorithm.
The philosophy of the framework is to perform an RO that considers neither discrete scenarios nor a scenario tree, that represents random uncertainties by probability distributions that are not joint, and that assumes a normal distribution for normal uncertainties and for epistemic uncertainties, if any.
In this framework, the data are statistically treated to make the problem tractable and more flexible, covering short-, middle-, and long-term decision-making, because the data timeline is chosen. The following definitions for data treatment also increase tractability, by avoiding the use of scenarios in the mathematical modeling, and because the resulting decision-making ignores neither the standard nor the recent trend, which increases the feasibility of putting it into practice. All of this also makes it possible to work with a relative regret model without generating a scenario tree for the parameter set of the problem and without the exhaustive resolution of all scenarios, as is generally done in relative regret models. First it is necessary to understand the general deterministic optimization of reference (GDOR) and the recent reference timeline (RRT), because they give an overall delimitation of the problem, i.e. decision-making will be based on an average of historical data over the short, middle, or long run, as chosen by the planner. It must be chosen by the planner because it is necessary to decide whether the RO will be based on the short, middle, or long run. The definitions and concepts used in the framework are as follows:
i) GDOR: an optimization guide used to determine the target ideal solution when it is not specified.
ii) RRT: the timeline that takes as its basis a nominal value for each parameter, in such a way that this nominal value is an average of the nominal values that have already been used in optimizations carried out in the most recent past. The RRT refers to the most recent historical data, as chosen by the planner. RRT historical data are a subset of the historical data of the GDOR timeline: they have a more recent timeline and were used in the optimizations carried out during that RRT, e.g. the historical optimization data from the last monthly scheduling, or the historical data used in the last two monthly planning optimizations of an industry. The nominal values of the parameters that are not uncertainties, in the RRT, are those actually used in the RO.
As the RRT reflects the most recent historical trend, it is the most likely standard to be repeated or only slightly modified, so it is reasonable that decision-making starts from this reference, i.e. from the average of these recent historical data, and not from imaginable or unpredictable occurrences. A simple example contrasting GDOR and RRT is as follows: suppose that one uncertainty is the market price of something; in the GDOR this uncertainty would take the price that would lead to the target ideal solution, but since the closest time period is the RRT, it may not be feasible to use the GDOR price in the current period of the process. One such occurrence is price inflation, which generally increases along the timeline, so it is more feasible to use the RRT price and determine decisions that lead to the target ideal solution considering recent conditions, based on recent trends.
iii) Relative regret (RR (%)): how much the OF value of an optimization performed with RRT data, f(x_RRT), deviates in modulus from the OF value of the target ideal solution, f_ideal. The relative regret is calculated as a percentage deviation according to Eq. (4):

RR (%) = |f(x_RRT) − f_ideal| / |f_ideal| × 100    (4)

iv) Average relative regret (ARR (%)): the arithmetic mean of the relative regrets. The ARR value is used in the criterion for choosing the tuned standard deviation value of an uncertainty that follows a normal distribution, as will be seen in the algorithm (Section 2.4).
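Eq. (4) and the ARR can be sketched in a few lines; the names f_rrt (the OF value of a run on RRT data) and f_ideal (the target ideal value) are the symbols reconstructed above:

```python
# Sketch of Eq. (4) and of the ARR definition.
def relative_regret(f_rrt, f_ideal):
    """RR (%): percentage deviation of the RRT solution from the ideal one."""
    return abs(f_rrt - f_ideal) / abs(f_ideal) * 100.0

def average_relative_regret(f_rrt_values, f_ideal):
    """ARR (%): arithmetic mean of the relative regrets of several runs."""
    rrs = [relative_regret(f, f_ideal) for f in f_rrt_values]
    return sum(rrs) / len(rrs)

print(relative_regret(95.0, 100.0))                  # -> 5.0
print(average_relative_regret([95.0, 90.0], 100.0))  # -> 7.5
```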

Statistical and probabilistic fundamentals of the framework
Among all possible values of a normal uncertainty, there is always a specific value that brings the solution as close as possible to the ideal solution; this value is denoted here x_ideal.
Because a monotonic OF preserves the order relation, values of the uncertainty closer to x_ideal will always yield the solutions closest to the ideal solution. Additionally, monotonic objective functions obey the principle of superposition, so each normal uncertainty makes an independent individual contribution to the problem, meaning that an already tuned standard deviation value will not affect the tuning of another standard deviation. Even for nonlinear models the OF is monotonic in the DCFR; therefore, the tuning of one standard deviation will not affect the tuning of another.
The best standard deviation for a normal uncertainty is the one that gives the random value of that uncertainty the greatest chance of being generated as close as possible to x_ideal. In the illustration in Fig. 2, the value of the uncertainty X which, when substituted into the mathematical model, would bring f(X) as close as possible to the ideal solution is x_ideal, and the tuned bell curve is the one with the highest probability of this value being generated randomly, which in this case is σ = √3.5 (orange curve). Because a Monte Carlo simulation is used, the greater the number of samples, the more statistically accurate the final results are, due to the law of large numbers. If the uncertain parameter with a normal PDF were transformed into a variable of the optimization matrix in order to determine analytically the value x_ideal that minimizes the RR, the big challenge would be the increase in the complexity of the model and the decrease in robustness, since the model would then often become nonlinear, or more nonlinear than it already was. In addition, if x_ideal were calculated analytically and the optimization yielded an infeasible solution, x_ideal would not be determined and the whole analytical methodology would fail, with all the effort wasted. Another argument against calculating x_ideal analytically is that parameters whose role is to assemble the mathematical model, i.e. parameters that do not appear on the left or right side of the constraints but that influence the activation of constraints, can also be tuned with this framework. The conclusion is therefore that x_ideal should not be calculated analytically.
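The Fig. 2 criterion, picking the bell curve with the highest probability of generating x_ideal, can be sketched directly. The numbers below (mu as the RRT average, x_ideal, the candidate standard deviations) are hypothetical:

```python
import math

# Among candidate standard deviations for X ~ N(mu, sigma^2), pick the one
# giving the highest probability density at x_ideal, the value of X that
# brings the OF closest to the ideal solution.
def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

mu, x_ideal = 5.0, 7.0
candidates = [0.5, 1.0, 2.0, 4.0]
best_sigma = max(candidates, key=lambda s: normal_pdf(x_ideal, mu, s))
# Analytically the density at x_ideal is maximized at sigma = |x_ideal - mu|,
# so here best_sigma is 2.0: neither the tightest nor the widest bell curve.
```

This also illustrates why the tuned curve in Fig. 2 is an intermediate one (σ = √3.5) rather than the narrowest.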
In this framework, OFs that are monotonic along the DCFR show more stability (statistical confidence) in the ARR value associated with each standard deviation value, because such functions show neither oscillation nor changes in the order relation.
If the OF is not monotonic in the DCFR, the ARR may not converge to the same value for each different tuning when the OF is very sensitive to variations and the number of optimizations made for tuning is low, as will be shown in the results of Section 3. A monotonic OF is not oscillatory and therefore tends to be more stable, so a very large number of Monte Carlo simulations is not necessary to satisfy the objective of the framework, as will be shown in Section 3.
While monotonic functions have just one x_ideal, non-monotonic functions can have more than one outside the DCFR, as can be seen in Fig. 3, and can be very sensitive to variations: the greater the sensitivity of the OF, the greater the difficulty of tuning the standard deviation value of the uncertainty X. This difficulty arises for non-monotonic functions because several values of the uncertainty X would cause the OF f(X) to reach the value of the ideal solution. That is, several standard deviation values would be able to make the uncertainty attain a value for which the OF reaches the ideal solution, but each of them might achieve this less frequently; to circumvent this and to infer a reliable criterion for choosing the best standard deviation, a very large number of Monte Carlo simulations would be necessary. This is why, when the function is monotonic in the DCFR, this problem is avoided and such a large number of Monte Carlo simulations is not needed. Note that the ideal solution is reachable by the function in Fig. 3, but in general this ideal target does not need to be reachable by the function: it is a goal, and the objective is to get as close to it as possible. This framework can be applied in science, engineering (e.g. production and/or distribution planning and scheduling, project and production design, etc.) and process control and automation (e.g. an MPC project in which the OF is the setpoint of a controlled variable, which can be the reference setpoint value that maximizes the process profit).

Mathematical formulation of the framework
In this section the schematic of the mathematical formulation of the framework is presented, in which the original deterministic model is transformed into a deterministic model that is then tuned by an algorithm, becoming robust. Afterwards, the axioms necessary to understand its functioning are presented. Any deterministic linear or nonlinear model in the problem domain that has at least one parameter that can be considered an uncertainty following a normal distribution can be written mathematically in the way the proposed framework establishes for the optimization problem. Note: there can be no division by zero. The probability distributions in this framework are not joint and are thus easier to estimate regardless of the dimensionality of the problem.
The framework can be used for problems with LP, NLP, MILP, MINLP or SDP formulations. Some axioms about the mathematical formulation of the framework are:
Axiom i): The OF of the problem must be either only monotonically increasing or only monotonically decreasing along the DCFR; therefore, the partial derivatives of the OF with respect to the normal uncertainty(ies) and to the decision variables must keep the same mathematical signs in the DCFR.
When the OF is monotonic in the DCFR, the order relation is preserved and the problem is either convex or strictly convex, or concave or strictly concave.
Axiom ii): If the term(s) linked to a normal uncertainty are linearly independent (LI) of the terms linked to the other normal uncertainties in the model, then the framework can be used, because the stimulus in the OF will be exclusive (not influenced by terms of other normal uncertainties) and independent (not dependent on the existence of terms of other normal uncertainties) of the other stimuli. It is only necessary to check this axiom for a normal uncertainty that multiplies/divides some decision variable of the optimization problem or forms a power term with a variable, i.e. a multiplicative uncertainty; any other case, e.g. additive uncertainties in the model (OF and/or constraints), satisfies the axiom without needing to be checked. In addition, if the problem has just one normal uncertainty, there is no need to check axiom ii).
Uncertainties can appear anywhere in the model, i.e. in the OF and/or constraints, but there can be no multiplication/division/potentiation between different normal uncertainties (e.g. θ₁θ₂). The absence of multiplication between normal uncertainties prevents the behavior of the OF from changing from increasing to decreasing or vice versa (thus satisfying axioms i) and ii)). If the stimuli are independent, the OF has a unique behavior for each stimulus, and the independent stimuli of the normal uncertainties may be superposed in the behavior of the OF. Eq. (6) shows an example of testing a deterministic model to check whether the terms of the multiplicative uncertainties are exclusive and independent: there, the terms linked to each multiplicative uncertainty share no decision variables with the terms linked to the other normal uncertainties, and therefore the framework can be used for that model. A term that is an additive uncertainty in the model also has an exclusive and independent stimulus on the OF, i.e. it is impartial with respect to the terms linked to the other normal uncertainties, just as its stimulus on the OF is independent of their influence. If an uncertainty is independent, the values of the terms linked to it do not depend on the values of the terms not linked to it.

Axiom vii): If an uncertainty is epistemic, or has historical data that did not fit any probability distribution well, it is considered to follow a normal distribution in this framework, i.e. it is treated as a normal uncertainty. The greater the amount of historical data for an uncertainty, the more accurate this assumption becomes, due to the central limit theorem.
The constraints of the present framework are not reformulated because, as the expected value is a number and the penalty factor is too, the only mathematical role the penalty factor plays is to modify the modulus of the expected value (or of the multiplication between expected values, if there is one), in order to protect the model against the violation of constraints. That is, the multiplication of the penalty factor by the expected value only generates a new number representing the variability of a given uncertainty (as does the expected value itself), keeping the physical meaning of the terms of the constraints unchanged.
Including the probability density function of a normal distribution in the model, without following the steps of the framework's algorithm, would prevent an uncertainty with normal distribution from having its value generated close to x_ideal (hypothetically shown in Fig. 2). If the expected value of the probability density function were used for an uncertainty with normal distribution, the "uncertain" value of the uncertainty would be the average value of the curve, and the problem would not be flexible enough to shape the behavior of the uncertainty so as to bring f(x) as close as possible to the ideal solution. For this reason, a normal distribution is used for the random generation of the uncertainty value by Monte Carlo sampling before each optimization is performed, instead of analytically including the probability density function in the model. In addition, this methodology keeps the model deterministic in nature, with no integrals to solve. Since it is not an analytical methodology, this Monte Carlo strategy allows the OF to be discontinuous with respect to the uncertainty, besides allowing it to be linear or nonlinear in nature.
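A minimal sketch of this sampling strategy, with hypothetical numbers and a stand-in for the solver call: the uncertainty value is drawn before each optimization, so every individual solve remains a purely deterministic problem with fixed numbers and no probability density function ever enters the model itself.

```python
import random

random.seed(42)
mu, sigma = 5.0, 2.0          # tuned normal uncertainty (hypothetical values)

def deterministic_solve(theta_value):
    # Stand-in for the optimization call: theta_value is just a number here,
    # so the model being solved is deterministic.
    return 2.0 * theta_value

theta = random.gauss(mu, sigma)       # Monte Carlo draw, outside the model
result = deterministic_solve(theta)   # the solve itself stays deterministic
```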
The mathematical properties of the framework for protection against violation of constraints are:
i) When a penalty factor appears in a constraint of type g(x, θ, …) ≤ 0 and the term linked to that penalty factor on the left side of the constraint is mathematically positive, decreasing the value of the penalty factor increases the robustness of the model.
ii) When a penalty factor appears in a constraint of type g(x, θ, …) ≤ 0 and the term linked to that penalty factor on the left side of the constraint is mathematically negative, increasing the value of the penalty factor increases the robustness of the model.
iii) When a penalty factor appears in a constraint of type g(x, θ, …) ≥ 0 and the term involving that penalty factor on the left side of the constraint is mathematically positive, increasing the value of the penalty factor increases the robustness of the model.
iv) When a penalty factor appears in a constraint of type g(x, θ, …) ≥ 0 and the term involving that penalty factor on the left side of the constraint is mathematically negative, decreasing the value of the penalty factor increases the robustness of the model.
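Properties i) and ii) can be checked mechanically on a toy constraint. The numbers and the function g below are hypothetical stand-ins; "robustness" is read here as the slack of the constraint at a fixed candidate solution:

```python
# Slack check for a <=-0 constraint g(x) <= 0 under a penalty factor delta
# that multiplies the expected value E[theta] of a normal uncertainty.
E_theta, x, b = 4.0, 2.0, 20.0

def g_positive_term(delta):
    # term linked to delta is mathematically positive on the left side
    return delta * E_theta * x - b

def g_negative_term(delta):
    # term linked to delta is mathematically negative on the left side
    return -delta * E_theta * x + b - 25.0

# Property i): positive term, decreasing delta pushes g further below 0.
assert g_positive_term(0.8) < g_positive_term(1.0) < g_positive_term(1.2)
# Property ii): negative term, increasing delta pushes g further below 0.
assert g_negative_term(1.2) < g_negative_term(1.0) < g_negative_term(0.8)
```

The two >=-type properties mirror these by the same sign reasoning.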
The values of the penalty factors, one for each uncertainty, are specified either inside a loop in the algorithm of the framework or by the planner by trial and error according to the properties above. In either case, the value of each penalty factor must follow axiom vi).

Algorithm of the framework
The tuning is carried out so that the final robust solution approaches the ideal solution as closely as possible. Once the original deterministic model is transcribed into the mathematical formulation of the framework, the step-by-step procedure of the framework's algorithm fine-tunes the robust model to determine the best value for the standard deviation of each normal uncertainty. The conservatism and robustness of the model are regulated by choosing, or determining through a loop, the values of the penalty factors. In this framework it is not necessary to control the risk of undesirable values being generated for the uncertainties: the framework tunes the model so that decision-making does not depend on risk aversion toward unusual generated values, because uncertainty values further away from the average can generate better results.
Unlike variability models, this framework does not control the variability of the solution; rather, it finds the operating bell curve that best achieves the objective of the framework. At the end of the tuning of the robust model, following the steps of the algorithm, the objective is not to find a solution with a better or worse OF, but to tune the model so that a robust result can be found that comes closest to the ideal solution itself. Next, the algorithm is presented in terms of how to tune the value of the standard deviation of a normal uncertainty (here denoted θ₁), along with the regulation of the conservatism and robustness of the model, to carry out the robust optimization of the present framework. When the optimization problem has more than one normal uncertainty, according to axiom ii), the functioning of the algorithm is the same, but the uncertainties are tuned one at a time, e.g. first θ₁, then θ₂, and so on; while one normal uncertainty is being tuned, the others remain at fixed, constant values, each equal to its average over the RRT data. The algorithm comprises the following steps:
Step i): Perform a GDOR that has a feasible solution and store its OF value to calculate the RR (%) according to Eq. (4), or simply specify the ideal solution directly.

Step iii): Return to Step ii) and specify a new standard deviation value. Γ and the number of optimizations are specified according to the planner's preference, but can be looped over if not specified (as stated in Step ii)). Γ is the percentage ratio between the number of optimizations that gave feasible solutions and the number of performed optimizations.

v) Step v): Calculate, according to Eq. (7), the ARR (%) for each of the assumed standard deviation values (σ1) for the uncertainty c1, and store the calculated values for future comparison in Step vi). Here, N is the quantity of performed optimizations for each standard deviation value assumed for the uncertainty c1, and RRn is the value of the RR (%) of the optimization of the nth realization for an assumed value of the standard deviation, which is calculated during the execution of Step iv) according to Eq. (4). vi) Step vi): The standard deviation linked to the lowest value of the ARR will correspond to the standard deviation value tuned for the uncertainty c1, which will lead to a robust average solution that most closely reaches the ideal solution.

vii) Step vii): The tuning for c1 is complete. Choose another normal uncertainty and perform the procedure again from Step iii), under the same tuning parameter conditions as the previously tuned normal uncertainties, until all normal uncertainties are tuned. When all of them are tuned, simply perform the RO as many times as the planner wishes, randomly generating values for all normal uncertainties with their tuned standard deviations before each realization, and choose the performed optimization that grants the lowest RR (%) value according to Eq. (4). This chosen RO will describe the decision-making that best approaches the ideal solution.
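The tuning loop of Steps iii) to vi) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy solver, and all numeric values are hypothetical stand-ins; RR and ARR correspond to Eqs. (4) and (7) of the text.

```python
import random

def relative_regret(z_robust, z_ideal):
    """RR (%): percentage deviation of a robust solution from the ideal
    solution (a plain-words stand-in for Eq. (4))."""
    return abs(z_ideal - z_robust) / abs(z_ideal) * 100.0

def tune_sigma(solve_robust, z_ideal, mean, candidate_sigmas, n_runs=200, seed=0):
    """Steps iii)-vi): for each candidate standard deviation, run n_runs
    robust optimizations with normally generated uncertainty values,
    average the RR (%) into the ARR (%) (Eq. (7)), and keep the sigma
    with the lowest ARR.  `solve_robust` is a user-supplied function
    mapping one sampled uncertainty value to an optimal objective value."""
    rng = random.Random(seed)
    best_sigma, best_arr = None, float("inf")
    for sigma in candidate_sigmas:
        rrs = []
        for _ in range(n_runs):
            c = rng.gauss(mean, sigma)          # one Monte Carlo realization
            rrs.append(relative_regret(solve_robust(c), z_ideal))
        arr = sum(rrs) / len(rrs)               # ARR (%)
        if arr < best_arr:
            best_sigma, best_arr = sigma, arr
    return best_sigma, best_arr

# Toy usage: an objective linear in the uncertainty, ideal solution at c = 2.
best_sigma, best_arr = tune_sigma(lambda c: 100.0 * c, 200.0, mean=2.0,
                                  candidate_sigmas=[0.04, 0.10, 0.20, 0.40])
print(best_sigma, round(best_arr, 2))
```

In this toy case the smallest candidate standard deviation wins, because the objective is linear in the uncertainty; in general, as the text emphasizes, the tuned value need not be the smallest one.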
The tuning adjusts the individual influence of each normal uncertainty in the model, determining for each one the standard deviation value that yields an average of solutions that comes closest to the ideal solution value. These steps are summarized in Fig. 4. Fig. 4. Algorithm of the framework.
In the execution of the tuning, the risk of an optimization being infeasible (1 − Γ) is a conditional parameter specified by the planner rather than being calculated. In this new framework it is not usual to perform a cost-risk analysis, because the risk is already embedded as a target condition in the algorithm (the tuning is performed and the chosen condition is satisfied).
Since Step v) calculates an average, the higher the N, the more precise the tuning.
The stochasticity in Step vii) avoids the need for an exhaustive brute-force search to choose the best result. The final decision-making coming from the RO in this framework, which the planner must adopt, is the one from the best case of Step vii) (the one with the lowest RR (%)).
Some important points of the framework are: i) The algorithm tunes the standard deviations and the penalty factors to lead to feasible solutions matching the specified risk, which is a highly tractable way to solve the optimization problem, since it adapts the model.
ii) The percentage risk (1 − Γ) is appropriate when the planner wants to align with standardized metrics of optimization quality according to a business plan.

iii) If the penalty factor is equal to 1, the model may tend to get closer to reality, since the estimated expected value E{c} will not change; it is therefore recommended that the default value for the penalty factors of non-normal uncertainties be equal to 1 first, i.e. this value must be included in the loop.
iv) The framework is consistent with the central limit theorem, given that epistemic uncertainties are well suited to the normal distribution. For example, demand is a great candidate to be an epistemic uncertainty when: i) production or consumption levels do not follow a fixed variable pattern, or ii) there is no production or consumption trend to be followed; even if historical data are available for demand, the turbulence in the data values caused by these two items will make it difficult to fit the data to a probability distribution.
v) Standard deviations with close values can compete closely for the best tuning value (e.g. Fig. 14), since the solutions will come out similar in the calculation of the ARR, because the behaviors of the normal curves will be similar. In linear problems this competition is not strong, but for nonlinear problems, increasing N gives more statistical confidence to the tuning. In all examples of Section 4, just 4 different values for the standard deviation were assumed, to clearly see the behavior of the trends from low to high values of the standard deviation. Fig. 14 shows that the objective function is smooth in the interval of standard deviations between 2 and 5% for the normal uncertainty of the problem. It is not recommended that the standard deviation exceed 20% of the average, to guarantee that there is no risk of randomly generating values that have no physical significance (possibly negative values).
vi) The value of the RR (%) also varies from model to model, with cases in which the RR has large, medium, or low sensitivity to the different values of the standard deviation of an uncertainty. This is natural because the mathematical models differ, so the results must also differ.

vii) The penalty factor values also influence the ARR value and therefore also influence the tuning of the standard deviation.
viii) If the variation between ARR values is small, it is because the OF has smooth behavior with respect to the variables over some ranges; but if the variation between ARR values is high, it is because in other intervals of the feasible region (of the optimization problem) the OF has exponential or high-order (non-smooth) behavior.

ix) If the ideal solution is closest to a smooth region of the OF, there will be little variation in the ARR values; but if it is closest to a non-smooth region of the OF, there will be considerable variation between the ARR values.
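The 20% ceiling recommended in point v) can be motivated numerically: for a normal uncertainty whose standard deviation is 20% of its mean, zero lies five standard deviations below the mean, so negative draws are practically impossible. A minimal sketch (the mean value 2.0 is an arbitrary example, not a value from the paper):

```python
from math import erf, sqrt

def prob_negative(mean, sigma):
    """P(X < 0) for X ~ Normal(mean, sigma), via the standard normal CDF."""
    z = (0.0 - mean) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mean = 2.0
for pct in (5, 20, 50):
    sigma = mean * pct / 100.0
    print(f"sigma = {pct:>2}% of mean -> P(negative draw) = {prob_negative(mean, sigma):.2e}")
```

At 20% of the mean the probability of a negative draw is about 3e-7, while at 50% of the mean it already exceeds 2%.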

Application and computational results
The intention of this section is to ease the reader's understanding by applying the framework to simple examples that nevertheless contain the requirements established in the axioms of the framework. First it is applied to a linear model and then to two different nonlinear models derived from the linear one.
In all cases, Γ was specified as 100%. The simple problem to be optimized is that of a mechanical-pieces seller who leaves the house with pieces of two different types to sell door to door in people's homes, where the seller works for a company and earns both by direct selling and through sales commission. The example is the application of the framework to tune the normal uncertainty c1 of a planning problem to maximize the daily profit Z(x1, x2, c1, c2, ω2), knowing that there are limits to the gain from selling type 1 and type 2 pieces, due to production limitations on the part of the company, and that the sale prices of the different types are uncertainties. The selling price per type 1 piece follows a normal distribution, while the selling price per type 2 piece follows an exponential distribution. The mathematical formulation of the example, according to the mathematical formulation of the framework, is described by Eq. (8):

max Z(x1, x2, c1, c2, ω2) = c1·x1 + ω2·E{c2}·x2 + g1·x1 + g2·x2 − g3
s.t. LB ≤ x1 + x2 ≤ UB, E{c2}·x2 ≤ g4, c1·x1 ≤ g5 (8)

where x1 is the quantity of type 1 pieces sold, c1 is the selling price per type 1 piece, x2 is the quantity of type 2 pieces sold, ω2 is the penalty factor of the term in which E{c2} appears, E{c2} is the expected value of the sale price per type 2 piece, g1 is the commission gain for selling a type 1 piece, g2 is the commission gain for selling a type 2 piece, g3 is the cost due to the use of a transportation vehicle, LB is the lower limit on the sum (x1 + x2), UB is the upper limit on the sum (x1 + x2), g4 is the limit on the gain from sales of type 2 pieces, and g5 is the limit on the gain from sales of type 1 pieces. This example does not use a loop to tune the penalty factor ω2.
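As a rough illustration, the deterministic version of this planning problem (as solved in a GDOR, with the uncertainties fixed at nominal values) can be sketched by a brute-force grid search over the two decision variables. All numeric parameter values below, and the symbol names x1, x2, c1, E{c2}, g1–g5, w2, are illustrative assumptions, not the paper's Table 1 data:

```python
def solve_example_lp(c1, E_c2, w2=1.0, g1=0.3, g2=0.2, g3=5.0,
                     LB=0.0, UB=60.0, g4=100.0, g5=100.0, step=0.5):
    """Grid search for the small two-variable selling example:
    maximize Z = c1*x1 + w2*E{c2}*x2 + g1*x1 + g2*x2 - g3
    subject to LB <= x1 + x2 <= UB, c1*x1 <= g5, E{c2}*x2 <= g4.
    All parameter defaults are illustrative placeholders."""
    best = (float("-inf"), None, None)
    n = int(UB / step) + 1
    for i in range(n):
        x1 = i * step
        for j in range(n):
            x2 = j * step
            if not (LB <= x1 + x2 <= UB):
                continue
            if c1 * x1 > g5 or E_c2 * x2 > g4:
                continue
            z = c1 * x1 + w2 * E_c2 * x2 + g1 * x1 + g2 * x2 - g3
            if z > best[0]:
                best = (z, x1, x2)
    return best

z, x1, x2 = solve_example_lp(c1=1.9, E_c2=2.0)
print(round(z, 2), x1, x2)
```

A grid search is deliberately naive but adequate for a two-variable sketch; in practice an LP solver would be used.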
According to the steps of the algorithm, it is necessary to first perform a GDOR so that a comparison can be made as the criterion for tuning the problem (since the ideal solution is not specified in the example, but calculated); for the realization of the GDOR, the uncertain parameters become deterministic. In the proposed example, eight different GDORs were performed for two different cases (four instances for each case), so that the procedure of the algorithm could be performed several times, the model could be tuned for different situations, and the behavior of the RR, and consequently the ARR and the selection of the best standard deviation value, could be observed for each situation. For both GDOR and RRT, in this example, the first case takes ω2 = 1; for the second case it was necessary to change the value of the penalty factor, according to the properties, and the value adopted was ω2 = 0.85, to make the model more robust according to mathematical property i) of the present framework. The values of the parameters used in all the GDORs in the example are given in Table 1. That is, for the linear problem, 8 tuning problems are solved, 4 for each case. In all tunings, the ARR value represents an average deviation between the robust solution and the ideal solution. The nominal values of the parameters in the hypothetical RRT timeline of the example are given in Table 2, recalling that c1 is randomly generated according to a normal distribution. It is important to see the statistical trend of Monte Carlo sampling in the graphs of the RR, to see the statistical and probabilistic fundamentals of the framework in action. In the practical application of the methodology, making graphs of the RR or ARR is not necessary; it was done in this paper only to show the stochastic behavior trend in the fine-tuning, matching the fundamentals of Section 2.2.

Fig. 5. Graphs of the relative regret (%) vs. performed robust optimizations for σ1 = 2, 5, 10, 20% of E(c1), where the RRT for the robust optimization considers ω2 = 1, g4 = 100, E(c1) = 2, and the GDOR is performed with g4 = 100, c1 = 1.9.
Fig. 11. Graphs of the relative regret (%) vs. performed robust optimizations for σ1 = 2, 5, 10, 20% of E(c1), where the RRT for the robust optimization considers ω2 = 1, g4 = 100, E(c1) = 2, and the GDOR is performed with g4 = 100, c1 = 2.5.
The results of Table 3, interpreted by reading the graphs in Figs. 5 to 12, indicate the best values of all tuned standard deviations, for each GDOR situation, for the price of piece type 1 (c1), when ω2 = 1 and g4 = 100, so that the optimal sales planning under uncertainty can achieve the best RO, i.e. get closer to the GDOR. Table 4 shows the best values of all tuned standard deviations when ω2 = 0.85 and g4 = 70. The criterion for choosing the best value of a standard deviation is the lowest ARR, as established in Step vi) of the algorithm. The tendency of each bell curve to generate more values in certain ranges inherent to each standard deviation can be seen in the graphs of relative regret (%) vs. performed robust optimizations in Figs. 5 to 20, in which there is a higher frequency of points generated in these bands (due to the greater area under the normal curve), and therefore a higher frequency of solutions linked to these bands. As can be seen in these figures, the ranges and frequencies change for each standard deviation value, as well as for each set of parameter specifications of the model. Tables 4 and 5 show that the framework methodology can also lead to tuning results for the standard deviation which prove that risk aversion is not necessary to obtain the best results for regret models in RO when the GDOR and RRT philosophy is considered.
Two other tunings were performed for the example problem when ω2 = 1, g4 = 100, E(c1) = 2 for the RRT, and g4 = 100, c1 = 1.9 for the GDOR (the same conditions as in Fig. 6), with N = 100 for Fig. 22 and N = 50 for Fig. 24. As can be seen in Figs. 6, 22 and 24, the value of the tuned standard deviation is the same (5% of E(c1)), the values of the ARR (%) are very similar, and the trends are the same.
Step v) of the algorithm ensures that the (subjectively chosen) number of optimizations performed does not need to be exaggeratedly large, as the decision-making for choosing the tuned standard deviation is based on an average, i.e. on the calculation of the ARR (%) according to Eq. (7); another physical meaning of the ARR (%) is the average distance of the solutions of the realizations from the ideal solution. However, if the OF were not monotonic, this average distance would not follow an order relation, making it necessary to perform many more optimizations to obtain a statistically valid tuning (statistical confidence).

Table 5
Tuning results for the case ω2 = 1, g4 = 100, E(c1) = 2 for the RRT, and g4 = 100, c1 = 1.9 for the GDOR (conditions of Fig. 6) for different realizations.

Quantity of performed optimizations (N)‡   Tuned value of σ1
10000                                      5% of E(c1) (Fig. 6)
100                                        5% of E(c1) (Fig. 22)
50                                         5% of E(c1) (Fig. 24)

‡ Just like the other tunings, the tuning of this case was performed several times for each of these quantities, and the result remains unchanged. All the algorithms implemented in MATLAB® are given as supplementary material.

Generation of random numbers by Monte Carlo simulation gives two properties to the standard deviation of a normal uncertainty: i) range and ii) population concentration. These properties are shaped by the normal curve. That is why few optimizations are needed to tune the model: for different values of the standard deviation, the RR (%) will have exclusive ranges and exclusive high concentrations when the objective function is monotonic in the feasible region. Moreover, the less smooth the OF in the feasible region, the fewer optimizations are necessary, because the highest concentrations of the RR (%) population will have more distinct and separate ranges from each other, and therefore the greater the difference between ARR values (e.g. Figs. 15 and 20). In addition, the greater the difference between the ARR values, the smaller the required N, because the less smooth the OF is.
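The range and population-concentration properties of the standard deviation can be seen directly by sampling. A small sketch, with an arbitrary mean and percentage values (not the paper's data):

```python
import random
import statistics

def sample_stats(mean, sigma_pct, n=10000, seed=1):
    """Sample a normal uncertainty and summarize the two properties the
    text attributes to the standard deviation: range (max - min of the
    generated values) and concentration (sample spread)."""
    rng = random.Random(seed)
    xs = [rng.gauss(mean, mean * sigma_pct / 100.0) for _ in range(n)]
    return max(xs) - min(xs), statistics.stdev(xs)

for pct in (2, 20):
    rng_width, spread = sample_stats(2.0, pct)
    print(f"sigma = {pct:>2}% of mean: sample range = {rng_width:.3f}, spread = {spread:.3f}")
```

A small standard deviation yields a narrow, highly concentrated band of generated values, and a large one a wide, diffuse band, which is what makes the RR (%) clouds of different standard deviations occupy distinct ranges in the figures.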
The standard deviation value equal to 5% of E(c1), in the example of Figs. 6, 22 and 24, represents an improvement of approximately 114% compared to the standard deviation value equal to 20% of E(c1), and an improvement of approximately 14% compared to the value equal to 2% of E(c1). Table 6 shows the improvement obtained by using the best rather than the worst standard deviation value in Figs. 6, 8, 10, 12, 14, 16, 18 and 20. The improvement is calculated by subtracting, in absolute value, the ARR of the worst standard deviation value from the ARR of the best one, then dividing by the ARR of the best standard deviation value and multiplying by 100%. Now suppose that the mathematical model of the same example were hypothetically nonlinear, as in Eq. (9) (note: the purpose of this tuning is to show the functioning for nonlinear problems in situations that obey the axioms of the framework):

max Z(x1, x2, c1, c2, ω2) = 4·c1·x1²/x2 + ω2·E{c2}·x2 + g1·x1 + g2·x2 − g3
s.t. x1 + x2 − UB ≤ 0 (9)

First, it is necessary to perform the monotonicity test to assess whether this nonlinear OF is monotonic in the feasible region of the optimization problem for each and all variables and uncertainties, so that it is not necessary to perform many optimizations during the tuning, by the natural increase of statistical confidence. This test was not done for the linear version of this example because the OF was obviously monotonic: it was linear, all terms containing variables and uncertainties were mathematically positive, and no term was divided by any variable or uncertainty. The partial derivatives of this nonlinear function are given by Eqs. (10) to (12). All terms involving the variables x1 and x2 and the uncertainty c1 in Eqs. (10) and (12) are positive, so the function is strictly increasing and therefore monotonic in x1 and c1; but Eq. (11) has a negative and a positive term, and because of that it is necessary to check whether the order relation of the function is preserved for x2 in the feasible region of the optimization problem. There are several ways to test monotonicity in the literature, and it is up to the user how to make the test. For the operational research sector, most models are conducive to having monotonic objective functions that naturally obey the framework's axioms. An algorithm was created and executed in MATLAB® (R2019b, MathWorks, Natick, MA, USA) to check the monotonicity of this function, and it can be seen in the supplementary material; the function also showed to be monotonic in x2 in the feasible region of the optimization problem. What this monotonicity-checking algorithm does is check all the values of the partial derivatives through the DCFR; to be monotonic, the sign of each partial derivative must not change throughout the domain.
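A numerical monotonicity check in the spirit of the DCFR test can be sketched as below. This is not the paper's MATLAB algorithm: the box discretization, step counts, and the two stand-in objective functions are assumptions for illustration (the second stand-in has a variable in a denominator, the structure that forces the sign check discussed for Eq. (11)):

```python
import itertools

def is_monotonic_on_grid(f, ranges, steps=8, h=1e-6, tol=1e-9):
    """Check monotonicity of f over a box-discretized region: for each
    argument, the sign of the forward finite-difference partial
    derivative must not change at any grid point."""
    axes = [[lo + k * (hi - lo) / (steps - 1) for k in range(steps)]
            for lo, hi in ranges]
    for dim in range(len(ranges)):
        signs = set()
        for point in itertools.product(*axes):
            p = list(point)
            p[dim] += h
            d = (f(*p) - f(*point)) / h
            if abs(d) > tol:
                signs.add(d > 0)
        if len(signs) > 1:      # derivative changes sign -> not monotonic
            return False
    return True

# Stand-in objectives (illustrative, not the paper's Eq. (9) parameters):
f_inc = lambda x1, x2, c1: c1 * x1**2 + 2.0 * x2 + 0.3 * x1
print(is_monotonic_on_grid(f_inc, [(0, 50), (0, 50), (1.5, 2.5)]))   # True

f_mix = lambda x1, x2, c1: 4 * c1 * x1**2 / x2 + 2.0 * x2
print(is_monotonic_on_grid(f_mix, [(1, 50), (1, 50), (1.5, 2.5)]))   # False
```

The first function has strictly non-negative partial derivatives everywhere on the box, so the check passes; the second has a derivative in x2 that changes sign over the box, so the check fails there, which is exactly the situation the framework's axioms exclude.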
This nonlinear problem was tuned for the same tuning situation as in the case of Fig. 5, but for standard deviation values of 2, 5, 7 and 10%.
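The improvement metric used for Table 6 can be written directly; the ARR values below are illustrative only, not the paper's results:

```python
def improvement_pct(arr_worst, arr_best):
    """Improvement (%) of the best standard deviation over the worst:
    |ARR_worst - ARR_best| / ARR_best * 100 (the Table 6 metric)."""
    return abs(arr_worst - arr_best) / arr_best * 100.0

# Illustrative ARR pairs (hypothetical numbers):
print(round(improvement_pct(2.14, 1.00), 1))  # 114.0
print(round(improvement_pct(1.14, 1.00), 1))  # 14.0
```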
Different models have different natural robustness, and for each parameterization of a model there is also a given robustness linked to that model and parameterization. The standard deviation value directly influences the quantity of solutions that are feasible, as it governs the values generated for a parameter, which may or may not yield feasible solutions.
The framework of the present work determines the standard deviation that brings the robust solution closer to the ideal solution, naturally reducing the penalty value of the robust solution while regulating the robustness of the model, rather than depending only on the worst case or on a limited set of cases. That is, in addition to increasing the robustness of the model by increasing conservatism, in this framework there is also, in parallel, a reduction of the penalty in the OF, due to the tuning strategy of the framework, which aims to achieve an ideal solution through the regret model without assigning a scenario tree.

Conclusions
Robust optimization is a field of high flexibility as to how the tractability of solutions in optimization under uncertainty is dealt with. Different frameworks have been developed over time, and in this work a framework for robust optimization was developed with a new concept, considering a fine-tune approach that combines original concepts with an original regret model and a stochastic, algorithmic numerical strategy to make the robust solution better approach an ideal solution. This methodology considers the uncertainties in the process, at least one of which must be normally distributed.
In addition, in the developed framework the robustness and conservatism are regulated by the algorithm through the adjustment of the penalty factor values, specified outside or within a loop. As this is a new way to perform a robust optimization, some aspects can be studied in depth and changed to analyze the behavior of the fine-tune methodology from different angles, e.g. by changing the way the penalty factor is calculated or specified.
The results of the applied examples of the methodology showed that, depending on the philosophy of the robust optimization framework used, e.g. the present framework, risk aversion in choosing values close to the average for the uncertainties is not always the best option to achieve the best interest. Besides that, a deeper study can be done to investigate changes that would make the methodology applicable to objective functions that are non-monotonic in the DCFR.