An Interpolation-Based Polynomial Method of Estimating the Objective Function Value in Scheduling Problems of Minimizing the Maximum Lateness

An approach to estimating the objective function value of the maximum lateness minimization problem is proposed. It is shown how to use transformed instances to define a new continuous objective function. After that, using this new objective function, the approach itself is formulated. We calculate the objective function value for several polynomially solvable transformed instances and use them as interpolation nodes to estimate the objective function value of the initial instance. Moreover, two new polynomial cases that are easy to use in the approach are proposed. At the end of the paper, numerical experiments are described and their results are presented.


Introduction
The vast majority of scheduling theory problems are NP-hard [1]. To solve such problems, it is common to use algorithms whose performance strongly depends on the input data. A new approach to estimating the objective function value of scheduling theory problems is proposed: the interpolation approach.

Algorithms for solving scheduling theory problems, considered, for example, in [1,2], can be used. Algorithms and methods from [3] can be used to work with random data, and metric interpolation speeds up their execution when processing difficult cases. Since the interpolation approach works only with the values of the objective function, it can also be used to create schedules for multi-stage systems, solving such problems, for example, with the algorithms from [4].

For certainty, this article considers the solution of the maximum lateness minimization problem 1|r_j|L_max. New polynomial cases that can be easily used in the interpolation approach are defined. Using these cases and Lagrange interpolation [5,11], the objective function value is approximated. Other interpolation methods [5] can also be used in the approach, for instance Chebyshev interpolation [20] or spline interpolation [21]. However, these methods will be considered in our future work, while in this paper we keep using the Lagrange interpolation polynomial.

In the problem 1|r_j|L_max [1,7,10], which we will consider, a set of n jobs A = {1, ..., n} is given. For each job j, the following parameters are set: the release time r_j, the processing time p_j and the due date d_j [1]. By a schedule π we mean some permutation of the jobs of the set A. Let us introduce the completion time of the job j under the schedule π:

C_j(π) = max(r_j, max_{(k→j)_π} C_k(π)) + p_j.  (1)

Here (k→j)_π is the set of jobs that are processed before the job j under the schedule π.
The lateness of the job j under the schedule π is defined as follows:

L_j(π) = C_j(π) − d_j.  (2)

Thus, the task of minimizing the maximum lateness is to find a schedule π_0 at which the objective function attains its minimum value:

L_max(π_0) = min_π max_{j∈A} L_j(π).  (3)

This problem is NP-hard in the strong sense [6].

In this paper each instance of the scheduling problem [1], consisting of n jobs, is considered as a point in a 3n-dimensional feature space [8,9] with coordinates (r_1, r_2, ..., r_n, p_1, p_2, ..., p_n, d_1, d_2, ..., d_n).
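The quantities defined above (completion times, lateness and the L_max objective) can be computed directly for a fixed schedule. A minimal sketch, with function and variable names of our own choosing:

```python
def lateness_max(schedule, r, p, d):
    """Compute L_max(pi) for a fixed schedule (a permutation of job indices).

    Uses the recurrence C_j = max(r_j, C_prev) + p_j for a single machine
    and the lateness L_j = C_j - d_j, returning max_j L_j.
    """
    completion = 0  # completion time of the previously processed job
    l_max = float("-inf")
    for j in schedule:
        completion = max(r[j], completion) + p[j]  # equation (1)
        l_max = max(l_max, completion - d[j])      # lateness L_j, equation (2)
    return l_max

# A 3-job example: jobs are the (r_j, p_j, d_j) columns of the 3 x n matrix.
r, p, d = [0, 2, 4], [3, 2, 1], [3, 6, 7]
print(lateness_max([0, 1, 2], r, p, d))  # → 0
```

The minimization over all permutations in (3) is what makes the problem hard; the sketch only evaluates one fixed permutation.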

For convenience, we will denote each instance as a 3 × n matrix:

A = ( r_1 r_2 ... r_n
      p_1 p_2 ... p_n
      d_1 d_2 ... d_n ).

Let us pick a point A in this space. Then the instance for which we want to solve the scheduling problem is an instance consisting of n jobs with the r_j, p_j, d_j parameters specified by the coordinates of the point A.

More about the 3n-dimensional feature space can be found in [7].

Thus, the r = αr transform multiplies all the release times of the instance by some factor α while keeping the processing times and due dates constant.
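As an illustration, the r = αr transform applied to an instance stored as three lists can be sketched as follows (the list representation and the function name are ours, not the paper's notation):

```python
def transform_release_times(r, p, d, alpha):
    """Return the transformed instance A_alpha: release times scaled
    by alpha, processing times and due dates unchanged."""
    return [alpha * rj for rj in r], list(p), list(d)

r_a, p_a, d_a = transform_release_times([0, 2, 4], [3, 2, 1], [3, 6, 7], 0.5)
print(r_a)  # → [0.0, 1.0, 2.0]
```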

Introduction to the interpolation approach
Notation 1. When writing A_α we refer to a transformed instance obtained from the initial instance A using the r = αr transform with some coefficient α.

Notation 2. The optimal value of the L_max objective function obtained for the initial instance A will be denoted as L*_max.

Now it is time to define the L_max(α) function, which will be used for interpolation later.

Definition 2. The function L_max(α) receives a real non-negative transform coefficient α and returns the optimal value of the objective function obtained on the transformed instance A_α.

The concept of the approach is that it is possible to draw a straight line through the point A in the 3n-dimensional feature space mentioned above, pick some other points on that line, solve the instances specified by those points and then, using interpolation [1,5], find an approximate value of the objective function at the point A.

The Lagrange interpolation polynomial is defined as follows [5]:

L(x) = Σ_{i=1}^{n} y_i Π_{j=1, j≠i}^{n} (x − x_j)/(x_i − x_j).  (4)

Suppose we have calculated the objective function values for the n transformed instances A_{α_1}, ..., A_{α_n}. Now we want to find the L_max value of the initial instance A.

Using the Lagrange interpolation polynomial (4) and the fact that the initial instance corresponds to the coefficient α = 1, we obtain the following formula:

L*_max ≈ Σ_{i=1}^{n} L_max(α_i) Π_{j=1, j≠i}^{n} (1 − α_j)/(α_i − α_j).  (5)

This procedure is formalized in the following algorithm.

1. Choose the interpolation node coefficients α_1, ..., α_n.
2. For each α_i, create a transformed instance A_{α_i} using the r = αr transform and obtain the L_max(α_i) value for this instance.
3. Estimate L*_max using the formula (5).

Polynomially solvable cases are needed to obtain the L_max(α_i) values quickly, and two such classes of instances are introduced below. These classes are called the "highly different r" polynomial subcase and the "slightly different r" polynomial subcase.

Definition 3. An instance belongs to the "highly different r" case if the following inequality is true for this instance:

r_j − r_i ≥ p_i for all i, j = 1, ..., n such that r_j > r_i.  (6)

To get an intuitive understanding of the situation described in the definition, let us consider the corresponding Gantt chart [12]: every pair r_i, r_j is so far apart on the timeline that the processor has enough time to complete the previous job before the next one is released. So it is intuitively clear that the optimal schedule π* for this case is obtained by sorting the jobs in increasing order of release times.
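As a sketch, membership in the "highly different r" case and the schedule sorted by release times can be checked programmatically (the inequality check follows the definition above, the optimality of the sorted schedule is proven below as Theorem 1; all names are ours):

```python
def is_highly_different_r(r, p):
    """Check the 'highly different r' condition:
    r_j - r_i >= p_i for every pair of jobs with r_j > r_i."""
    n = len(r)
    return all(r[j] - r[i] >= p[i]
               for i in range(n) for j in range(n)
               if r[j] > r[i])

def schedule_by_release_times(r):
    """Schedule for the 'highly different r' case:
    jobs sorted by increasing release times."""
    return sorted(range(len(r)), key=lambda j: r[j])

r, p = [0, 10, 5], [3, 2, 4]
print(is_highly_different_r(r, p))   # → True
print(schedule_by_release_times(r))  # → [0, 2, 1]
```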

92
However, a strict proof of this fact is given below.

Lemma 1. For an instance of n jobs we will consider a schedule π = (j_1, ..., j_n) for which the inequality r_{j_1} < r_{j_2} < ... < r_{j_n} holds. Then in the "highly different r" case the following equality is true:

s_{j_i}(π) = r_{j_i}, i = 1, ..., n.  (7)

Proof.
1. For the job j_1 the equality (7), s_{j_1} = r_{j_1}, is true, because it is the first job in the schedule, so it starts being processed right at its release time.

2. Suppose the equality (7) is true for the job j_i: s_{j_i} = r_{j_i}. Then for the job j_{i+1}:

s_{j_{i+1}} = max(C_{j_i}, r_{j_{i+1}}) = max(r_{j_i} + p_{j_i}, r_{j_{i+1}}).  (8)

By the "highly different r" inequality (6), r_{j_{i+1}} − r_{j_i} ≥ p_{j_i}, so from (8) we can conclude that max(r_{j_i} + p_{j_i}, r_{j_{i+1}}) = r_{j_{i+1}}. Then s_{j_{i+1}} = r_{j_{i+1}}. The equality (7) is obtained by induction and hereby the lemma is proven.
Theorem 1. The optimal schedule π* = (j_1, ..., j_n) for the "highly different r" case is a schedule in which the jobs are ordered by increasing release times: r_{j_1} < r_{j_2} < ... < r_{j_n}.

Proof. Let us consider the job j_i on which the maximum lateness value is obtained: L_{j_i}(π*) = L_max(π*). Suppose that a schedule π exists for which L_max(π) < L_max(π*). This also means that L_max(π) < L_{j_i}(π*).

By Lemma 1, s_{j_i}(π*) = r_{j_i}, so we obtain the following equality:

L_{j_i}(π*) = r_{j_i} + p_{j_i} − d_{j_i}.  (9)

As shown above, for the schedule π: L_{j_i}(π) ≤ L_max(π) < L_{j_i}(π*). By the definition of lateness, L_{j_i}(π) = s_{j_i}(π) + p_{j_i} − d_{j_i}. Then, from the equation (9), we obtain the inequality s_{j_i}(π) < r_{j_i}, which is impossible according to the definition of the release time.

Therefore we have come to a contradiction. Hence, there cannot exist a schedule π for which L_max(π) < L_max(π*), and π* is the optimal schedule.

Definition 4. An instance belongs to the "slightly different r" case if the following inequality is true for this instance:

r_j − r_i < p_i for all i, j = 1, ..., n such that r_j > r_i.  (10)

Remark 1. Let us note that the inequality (10) is equivalent to the following one:

r_i + p_i > r_j.  (11)

To get an intuitive understanding of the situation described in the definition 4, let us consider the corresponding Gantt chart: all release times are so near to each other on the timeline that, by the time the first job in the schedule is completed, all the jobs in the instance have already been released.
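A similar sketch checks membership in the "slightly different r" case using the equivalent inequality (11) (the function name is ours):

```python
def is_slightly_different_r(r, p):
    """Check the 'slightly different r' condition via (11):
    r_i + p_i > r_j for every pair of jobs with r_j > r_i."""
    n = len(r)
    return all(r[i] + p[i] > r[j]
               for i in range(n) for j in range(n)
               if r[j] > r[i])

print(is_slightly_different_r([0, 1, 2], [5, 5, 5]))   # → True
print(is_slightly_different_r([0, 10, 5], [3, 2, 4]))  # → False
```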
Create n different schedules π_1, ..., π_n using the following rule: in the schedule π_i the job i is placed first, and the remaining jobs are ordered by Jackson's rule [13], i.e., by non-decreasing due dates. The schedule π* with the minimum L_max value among π_1, ..., π_n is returned.

A strict proof that the schedule π* obtained by the algorithm is optimal follows.

Lemma 2. In the "slightly different r" case the following inequality is true for any schedule:

C_{j_i}(π) > r_{j_{i+1}}, i = 1, ..., n − 1.  (12)

Proof.

1. According to (11), C_{j_1}(π) = r_{j_1} + p_{j_1} > r_{j_2}, so the inequality (12) is true for i = 1.
2. Assume the inequality (12) is true for the job j_i. Then for the job j_{i+1}: s_{j_{i+1}}(π) = max(C_{j_i}(π), r_{j_{i+1}}) = C_{j_i}(π). According to (11) for the jobs j_{i+1}, j_{i+2}: r_{j_{i+1}} + p_{j_{i+1}} > r_{j_{i+2}}. And from the inequality (12) for the job j_i: C_{j_{i+1}}(π) = C_{j_i}(π) + p_{j_{i+1}} > r_{j_{i+1}} + p_{j_{i+1}}. Finally we obtain C_{j_{i+1}}(π) > r_{j_{i+2}}, so for the job j_{i+1} the inequality (12) is true.

Lemma 3. In the "slightly different r" case the following equality is true for any schedule:

C_{j_i}(π) = r_{j_1} + Σ_{k=1}^{i} p_{j_k}, i = 1, ..., n.  (13)

Proof.
1. The first job starts at its own release time, s_{j_1}(π) = r_{j_1}, so C_{j_1}(π) = r_{j_1} + p_{j_1} and (13) is true for i = 1. For the subsequent jobs, s_{j_{i+1}}(π) = max(C_{j_i}(π), r_{j_{i+1}}).

2. Assume the equality (13) is true for the job j_i. Then, according to Lemma 2, s_{j_{i+1}}(π) = max(C_{j_i}(π), r_{j_{i+1}}) = C_{j_i}(π). Hence for the job j_{i+1}:

C_{j_{i+1}}(π) = C_{j_i}(π) + p_{j_{i+1}} = r_{j_1} + Σ_{k=1}^{i+1} p_{j_k}.

The equality (13) is obtained by induction.
Corollary 1. In the "slightly different r" case the following equality is true for any schedule:

L_{j_i}(π) = r_{j_1} + Σ_{k=1}^{i} p_{j_k} − d_{j_i}, i = 1, ..., n.  (14)

Proof. According to the definition of lateness and Lemma 3, L_{j_i}(π) = C_{j_i}(π) − d_{j_i} = r_{j_1} + Σ_{k=1}^{i} p_{j_k} − d_{j_i}.

Theorem 2. The schedule π* obtained by the algorithm above is optimal for the "slightly different r" case.

Proof.
1. Suppose that a schedule π exists for which L_max(π) < L_max(π*). Note that L_max(π*) = max(L_{j_1}(π*), L^J_max(π*)), where L^J_max(π*) is the maximum lateness over the jobs j_2, ..., j_n.

2. According to the equation (14), the function L^J_max coincides with the objective function of a Jackson polynomial instance [13] with the common release time r = C_{j_1}(π*). Because j_1 here is fixed, π* is the schedule on which the minimum maximum lateness over these jobs is achieved, as proven in [13].
3. If max(L_{j_1}(π*), L^J_max(π*)) = L_{j_1}(π*), then the inequality L_max(π) < L_max(π*) cannot be true, because the algorithm puts each job on the first position in the schedule to obtain the minimum objective function value.

In this section we will find the α coefficient values that are to be used in the r = αr transform to achieve each of the polynomial cases listed above.
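As a summary of the "slightly different r" case, its algorithm can be sketched as follows. This is a hedged sketch: we take Jackson's rule to mean ordering the remaining jobs by non-decreasing due dates, and all function names are ours:

```python
def lateness_max(schedule, r, p, d):
    """L_max of a fixed schedule, via C_j = max(r_j, C_prev) + p_j."""
    completion, l_max = 0, float("-inf")
    for j in schedule:
        completion = max(r[j], completion) + p[j]
        l_max = max(l_max, completion - d[j])
    return l_max

def slightly_different_r_schedule(r, p, d):
    """Build n schedules, each with a different job placed first and the
    rest ordered by non-decreasing due dates (Jackson's rule);
    return the schedule with the smallest L_max."""
    n = len(r)
    best = None
    for first in range(n):
        rest = sorted((j for j in range(n) if j != first), key=lambda j: d[j])
        pi = [first] + rest
        if best is None or lateness_max(pi, r, p, d) < lateness_max(best, r, p, d):
            best = pi
    return best

r, p, d = [0, 1, 2], [5, 5, 5], [6, 8, 16]
print(slightly_different_r_schedule(r, p, d))  # → [0, 1, 2]
```

The sketch evaluates n schedules of n jobs each, so it stays polynomial, which is what makes the case useful as an interpolation node.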
Theorem 3. The "highly different r" case is achieved by the r = αr transform with any coefficient α ∈ [max p_i/(r_j − r_i); +∞), i, j = 1, ..., n, i ≠ j, r_j > r_i.

Proof. According to the definition, in the "highly different r" case the following inequality is true:

r_j − r_i ≥ p_i, r_j > r_i.

Let us consider the r = αr transform. After the transform, the release times become αr_j, so the inequality takes the form αr_j − αr_i ≥ p_i, which means that α ≥ p_i/(r_j − r_i). For brevity we will denote ξ_{ji} = p_i/(r_j − r_i); then α ≥ ξ_{ji} for all i, j = 1, ..., n, i ≠ j, r_j > r_i. And we finally obtain α ≥ max ξ_{ji}. So the coefficient α to achieve the "highly different r" case should lie in the following interval: α ∈ [max p_i/(r_j − r_i); +∞), i, j = 1, ..., n, i ≠ j, r_j > r_i.

Definition 5. The minimum value of the coefficient α needed to achieve the "highly different r" case is denoted as α* and calculated, according to the Theorem 3, as follows:

α* = max p_i/(r_j − r_i), i, j = 1, ..., n, i ≠ j, r_j > r_i.  (18)

It can be concluded from the definition that α* ≥ 0, because the numerator of the fraction is non-negative and the denominator is a positive value.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 9 November 2021 doi:10.20944/preprints202111.0169.v1

From the equation (18) the condition of existence of the "highly different r" case can also be easily concluded.
Corollary 2 (The condition of existence of the "highly different r" case). The "highly different r" case exists for the initial instance A (which means that the value α* is defined) if the following condition is met: there exist at least two jobs i, j such that r_j > r_i, i.e., not all release times are equal.

What is more, a sufficient condition of the "highly different r" case can be stated as follows.
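Computing α* together with the existence condition from Corollary 2 can be sketched as follows (the function name is ours):

```python
def alpha_star_high(r, p):
    """alpha* = max over pairs with r_j > r_i of p_i / (r_j - r_i),
    or None when no such pair exists (all release times equal)."""
    n = len(r)
    ratios = [p[i] / (r[j] - r[i])
              for i in range(n) for j in range(n)
              if r[j] > r[i]]
    return max(ratios) if ratios else None

print(alpha_star_high([0, 10, 5], [3, 2, 4]))  # ratios 3/10, 3/5, 4/5 → 0.8
print(alpha_star_high([7, 7], [1, 2]))         # → None (case does not exist)
```

In the first example α* = 0.8 ≤ 1, which, by the sufficient condition below, means the instance is already a "highly different r" case.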

Theorem 4 (A sufficient condition of the "highly different r" case). If the α* value satisfies the inequality α* ≤ 1, then the instance is already a case of "highly different r".

Proof. According to the Theorem 3, the "highly different r" case is achieved for any α ≥ α*. Then, if α* ≤ 1, the coefficient α = 1, which leaves the instance unchanged, also achieves the case. This means that the initial instance A is already a case of "highly different r".

Now we will proceed to proving the equivalent theorems for the "slightly different r" case.
Theorem 5. The "slightly different r" case is achieved by the r = αr transform with any coefficient α ∈ [0; min p_i/(r_j − r_i)), i, j = 1, ..., n, i ≠ j, r_j > r_i.

Proof. According to the definition, after the transform the coefficient α should satisfy the following inequality:

αr_j − αr_i < p_i, r_j > r_i.

Which means that α < p_i/(r_j − r_i). For brevity we will denote ξ_{ji} = p_i/(r_j − r_i). Then we obtain α < ξ_{ji}. For this inequality to be true for any i, j = 1, ..., n, i ≠ j, r_j > r_i, there is also the following requirement: α < min ξ_{ji}. What is more, p_i > 0, so min ξ_{ji} > 0 and the interval is not empty. So the coefficient α to achieve the "slightly different r" case should lie in the following interval: α ∈ [0; min p_i/(r_j − r_i)), i, j = 1, ..., n, i ≠ j, r_j > r_i.

Definition 6. The maximum value of the coefficient α to achieve the "slightly different r" case is denoted as α_* and calculated, according to the theorem, as follows:

α_* = min p_i/(r_j − r_i), i, j = 1, ..., n, i ≠ j, r_j > r_i.  (27)

From the equation (27) the condition of existence of the "slightly different r" case can be easily concluded.

Corollary 3 (The condition of existence of the "slightly different r" case). The "slightly different r" case exists for the initial instance A (which means that the value α_* is defined) if the following condition is met: there exist at least two jobs i, j such that r_j > r_i.

Theorem 6 (A sufficient condition of the "slightly different r" case). If the α_* value satisfies the inequality α_* ≥ 1, then the instance is already a case of "slightly different r".

Proof. Then, if α_* ≥ 1, the initial instance itself (α = 1) satisfies the "slightly different r" inequality. This means that the initial instance A is already a case of "slightly different r".
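Symmetrically to α*, the value α_* is the minimum of the same ratios; a sketch (the function name is ours):

```python
def alpha_star_low(r, p):
    """alpha_* = min over pairs with r_j > r_i of p_i / (r_j - r_i),
    or None when no such pair exists."""
    n = len(r)
    ratios = [p[i] / (r[j] - r[i])
              for i in range(n) for j in range(n)
              if r[j] > r[i]]
    return min(ratios) if ratios else None

# Scaling the release times by any 0 <= alpha < alpha_* yields the
# "slightly different r" case; alpha_* >= 1 means the instance already is one.
print(alpha_star_low([0, 1, 2], [5, 5, 5]))  # ratios 5/1, 5/2, 5/1 → 2.5
```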

Remark 2. It can also be shown that a similar inequality is obtained, for example, for the Lazarev polynomial class of instances. However, because the conditions in this and the other polynomial cases are more complex and may require different transforms, in this paper only the "highly different r" and "slightly different r" cases are defined and considered.

5. Estimate the optimal value of the objective function of the initial instance using the L_max(α_1), ..., L_max(α_{2k}) values and the formula (5).

The first experiment was conducted to calculate the optimal number k of interpolation nodes. The results are presented on the following plot.
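Since the full listing of Algorithm 3 is not reproduced above, the following sketch rests on explicit assumptions: 2k nodes are taken, k of them below α_* with step ∆ and k of them starting from α* with step ∆; the exact L_max(α_i) values are assumed to be supplied by a solver for the corresponding polynomial case (here replaced by a stand-in polynomial); the estimate is the Lagrange polynomial (5) evaluated at α = 1. All names are ours:

```python
def lagrange_estimate(alphas, values, x=1.0):
    """Evaluate the Lagrange interpolation polynomial built on the nodes
    (alpha_i, L_max(alpha_i)) at the point x; x = 1 corresponds to the
    untransformed initial instance (formula (5))."""
    total = 0.0
    for i, (ai, yi) in enumerate(zip(alphas, values)):
        term = yi
        for j, aj in enumerate(alphas):
            if j != i:
                term *= (x - aj) / (ai - aj)
        total += term
    return total

def make_nodes(alpha_low, alpha_high, k, step):
    """2k interpolation node candidates: k stepping down from alpha_low
    (clamped to the non-negative axis) and k stepping up from alpha_high,
    which is possible because that interval has no upper bound."""
    low = [alpha_low - (i + 1) * step for i in range(k)]
    high = [alpha_high + i * step for i in range(k)]
    return [a for a in low if a >= 0.0] + high

# Sanity check: interpolating an exactly polynomial function recovers it.
nodes = make_nodes(alpha_low=0.5, alpha_high=2.0, k=2, step=0.25)
values = [3 * a * a - a + 4 for a in nodes]          # stand-in for L_max(alpha)
print(round(lagrange_estimate(nodes, values), 6))    # → 6.0  (= 3 - 1 + 4)
```

In the real algorithm the stand-in values would come from the polynomial-case solvers described above, since each node lies in the "slightly different r" or "highly different r" interval by construction.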

The nodes were selected according to the Algorithm 3, while the parameter k was varied.

The relative error of the estimated objective function value with respect to the optimal one was calculated for each instance N. From the graph we can see that the experimentally calculated optimal value is k = 8.

The next experiment was conducted in the following way. The parameter k value remained constant, but the distance ∆* between two neighboring points on the "highly different r" interval was increased in relation to the distance ∆_* between two neighboring points on the "slightly different r" interval.

This can be done because, as shown above, the "highly different r" interval has no upper bound on the coefficient α.

Figure 8. The plot shows the dependence of the median and mean relative error values on the step ratio ∆*/∆_*.
We can see that the errors do not depend on the step ratio ∆*/∆_*, so we can simply choose the steps to be equal: ∆* = ∆_* = ∆.

In the next experiment we fixed the steps ∆* = ∆_* = ∆ but varied the number k* of "highly different r" points. The results follow on the Figure 9.
Figure 9. The plot shows the dependence of the product of the median and mean relative error values on the number k* of "highly different r" points.
The complexity [17] of the Algorithm 3 was evaluated as O(n^p log(n)), where n is the number of jobs in the instance.

The resulting p value appeared to be p ≈ 2, so the complexity can be estimated as O(n^2 log(n)) (see Figure 10).

Figure 10. Complexity of the Algorithm 3.

In this paper a new approach to approximating the objective function value of the 1|r_j|L_max problem is proposed.

The approach is based on the L_max(α) function (using the r = αr transform) and Lagrange interpolation.

The numerical experiments that have been carried out show how to optimize the parameters of the approach.

Further research into the features of the L_max(α) function will be conducted to develop a method of error estimation for the approach. The results will be compared with the results of error estimation of the metric approach [7].

There are also other transforms and polynomial cases that have to be studied.

Moreover, we are planning to study combinations of different transforms and their geometry in the 3n-dimensional feature space.

The hypotheses stated in this paper will also be proven, so that we can boost the efficiency and the accuracy of the approach.

Also, a combination of the metric and interpolation approaches, the metric interpolation method, is being studied and developed. Related materials are available in [19].