A note on an L-approach for solving the manufacturer's pallet loading problem

Ernesto G. Birgin ∗



Reinaldo Morabito †



Fabio H. Nishihara ‡



November 19th, 2004.

Abstract

An L-approach for packing (l, w)-rectangles into an (L, W)-rectangle was introduced in an earlier work by Lins, Lins and Morabito. They conjecture that the L-approach is exact and point out its runtime requirements as the main drawback. In this note it is shown that, by simply using a different data structure, the runtime is considerably reduced in spite of larger (but affordable) memory requirements. This reduction is important for practical purposes since it makes the algorithm much more acceptable for supporting actual decisions in pallet loading. Intensive numerical experiments showing the efficiency and effectiveness of the algorithm are presented.

Key words: Cutting and packing, pallet and container loading, recursive algorithm, implementation.

Introduction

An interesting case of cutting and packing problems is loading products (packaged in boxes) on a rectangular pallet in such a way as to optimize pallet utilization. This problem is known as the manufacturer's pallet loading problem if all boxes are identical. This is the situation of a manufacturer that produces goods packaged in identical boxes of size (l, w, h), which are then arranged in horizontal layers on pallets of size (L, W, H) (where H is the maximum height of the loading). It is assumed that the boxes are available in large quantities and are orthogonally loaded on each pallet (that is, with their sides parallel to the pallet sides).

∗ Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão 1010, Cidade Universitária, 05508-090 São Paulo, SP - Brazil ([email protected]). Sponsored by FAPESP (Grants 01/04597-4, 02/00094-0 and 03/09196-6), CNPq (Grant 302266/2002-0) and Pronex.
† Department of Production Engineering, Federal University of São Carlos, Via Washington Luiz km. 235, 13565-905, São Carlos, SP - Brazil ([email protected]). Sponsored by FAPESP (Grant 01/2972-2) and CNPq (Grant 522973/95-7).
‡ Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão 1010, Cidade Universitária, 05508-090 São Paulo, SP - Brazil ([email protected]). Sponsored by FAPESP (Grant 03/00460-0).


If an orientation is fixed, the problem consists of packing the maximum number of (l, w)-rectangles and (w, l)-rectangles orthogonally into a larger rectangle without overlapping, yielding a layer of height h. Although there are polynomial-time algorithms for the guillotine version of this problem (Tarnowski et al 1), the non-guillotine problem is widely claimed, but not yet proven, to be NP-complete (Dowsland2, Nelissen3). In fact, the decision version of the problem is not even known to be in NP (Nelissen4, Letchford and Amaral5).

A number of authors have dealt with the manufacturer's pallet loading problem, as discussed in Lins et al 6, Pureza and Morabito7 and Alvarez-Valdes et al 8,9 (see also the references therein). In particular, in Lins et al 6 an L-approach based on a recursively defined function for packing (l, w)-rectangles into a larger rectangular (or L-shaped) piece was presented. This approach was able to optimally solve all test problems (more than 20,000 representatives of infinite equivalence classes from the literature), including the 16 hard instances not solved by other heuristics. These test problems cover all instances with solutions of up to 100 (l, w)-rectangles and pallet dimensions (L, W) of up to (1000, 1000) from problem sets Cover I and II in Morabito and Morales10. Such problems are realistic in pallet loading contexts, as described in Morabito et al 11. Based on those results, Lins et al 6 conjectured that the L-approach always finds optimal packings of (l, w)-rectangles into an (L, W)-rectangle.

Nevertheless, the main drawback of the approach is its requirement in terms of computer runtime. For example, there are instances that take hundreds of minutes (on a 700 MHz Pentium III processor) to be solved. In this note, we show that, by simply using a different data structure in a C language implementation of the algorithm, the runtimes can be considerably reduced (from hundreds of minutes to a few minutes on hard instances), despite larger (but affordable) memory requirements. This reduction is important for practical purposes since it makes the algorithm much more acceptable for supporting actual decisions in pallet loading.

This note is organized as follows. In the next section we discuss some details of the algorithm implementation. Then we present the numerical experiments. Finally, the last section contains some concluding remarks.

Implementation of the algorithm

The L-approach is based on the computation of a recursive dynamic programming formula that deals with a huge number of subproblems. Since the same subproblem may appear several times along the recursion, and since the solution must be built up at the end, it is essential to keep in memory the information of the previously solved subproblems. This information basically consists of the way each subproblem was solved and the number of rectangles packed in its solution. Therefore, an important aspect of the L-approach implementation is how to store and retrieve the information of each solved subproblem. Note that there is a trade-off between the amount of memory required and the speed of information access.
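To fix ideas, the fragment below sketches in C the general memoization pattern just described. It is only an illustrative sketch under our own naming (record, solve, lookup, store); the actual recurrence over L-shaped pieces is the one defined in Lins et al 6, and lookup and store stand for the storage and retrieval operations whose implementation is discussed in the remainder of this section.

/* Information kept for each solved subproblem: the number of packed
   rectangles and a code describing how the corresponding L-shaped piece
   was divided (needed to reconstruct the packing at the end).           */
typedef struct {
    int boxes;
    int division;
} record;

/* Placeholders for the storage and retrieval operations; their
   implementation is precisely the subject of this section.              */
record *lookup(int X, int Y, int x, int y);
void    store (int X, int Y, int x, int y, record r);

/* Schematic memoized recursion of the L-approach.                       */
int solve(int X, int Y, int x, int y) {
    record *r = lookup(X, Y, x, y);
    if (r != NULL)                      /* subproblem already solved      */
        return r->boxes;
    record best = {0, 0};
    /* ... try every standard division of the L-shaped piece (X,Y,x,y)
       into smaller pieces, solve the resulting subproblems recursively
       and keep the division that packs the most rectangles ...           */
    store(X, Y, x, y, best);
    return best.boxes;
}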


Each subproblem is related to an L-shaped piece in which the (l, w)-rectangles must be packed. A standardly positioned L-shaped piece6, represented by (X, Y, x, y) where x ≤ X and y ≤ Y, is defined as the topological closure of the rectangle whose diagonal goes from (0, 0) to (X, Y) minus the rectangle whose diagonal goes from (x, y) to (X, Y), as shown in Figure 1. Assume that L, W, l, w are non-negative integers such that L ≥ W and l ≥ w, and consider the increasing and finite sequence Z = z0, z1, ..., zp of all the non-negative integer combinations zi of l and w such that zi ≤ L. All subproblems generated by the algorithm correspond to standardly positioned pieces (X, Y, x, y) such that X, Y, x, y ∈ Z. Hence, given a subproblem (X, Y, x, y), there are unique indices I, J, i and j satisfying X = zI, Y = zJ, x = zi and y = zj. In other words, for each subproblem (X, Y, x, y) there is a unique 4-tuple (I, J, i, j).

Figure 1 about here.

In Lins et al 6 a data structure named phorma (Lins et al 12) was used to accomplish an efficient indexing of (I, J, i, j). Phorma (an acronym for perfectly hashable order restricted multidimensional array) is a data structure for perfect hashing of multidimensional arrays that have order restrictions on their entries. This is the case in the L-approach, where we would like to enumerate the quadruples of non-negative numbers (X, Y, x, y) ≤ (L, W, L, W) that satisfy

BL = (X ≥ x) ∧ (Y ≥ y) ∧ (X ≥ Y) ∧ ((X ≠ Y) ∨ (x ≥ y)) ∧ ((X ≠ x) ∨ (Y = y)) ∧ ((Y ≠ y) ∨ (X = x)).

The solution adopted in phorma is based on the theory of combinatorial families developed in Nijenhuis and Wilf13. The central idea is to associate a digraph with a collection of combinatorial objects in such a way that each object in the family is in one-to-one correspondence with a path in the digraph. Under mild assumptions, the digraph is logarithmically smaller than the number of objects to be enumerated. Moreover, its construction requires the enumeration and lexicographical ordering of a reduced set of objects related to (but much smaller than) the original set of objects to be enumerated. Finally, the perfect hash function can be computed at the expense of computing a path in the digraph. Moreover, many savings can be achieved in the construction and storage of the related data structures by performing most of the tasks at compile time. See Lins et al 12 for details (including several examples for typical values of some phorma parameters).

Phorma offers a good compromise between memory requirements and access time. However, for the problem sizes treated in Lins et al 6, it is possible to use a simpler data structure that requires much more memory but substantially reduces the main drawback of the L-approach implementation presented in Lins et al 6: its computer runtime.

Let α(·) : Z → {0, 1, ..., p}, α(zi) = i, and α^{-1}(·) : {0, 1, ..., p} → Z, α^{-1}(i) = zi, for i = 0, 1, ..., p, be a bijective function and its inverse used to index the subproblems. Consider two auxiliary vectors u and v, with sizes L + 1 and p + 1, respectively, such that

u[k] = i, if there exists an i such that k = zi ∈ Z, and u[k] is undefined otherwise,

for k = 0, 1, ..., L, and v[k] = zk for k = 0, 1, ..., p. Then α(zi) = u[zi] and α^{-1}(i) = v[i]. That is, both functions can be evaluated in constant time. Figure 2 shows the vectors u and v for a problem instance with L = 10, W = 4, l = 5 and w = 2.

Figure 2 about here.

It should be noted that, for a subproblem (X, Y, x, y), the indices (I, J, i, j) = (α(X), α(Y), α(x), α(y)) such that X = zI, Y = zJ, x = zi and y = zj are computed in constant time. In this way, the information of each subproblem can be saved at position (I, J, i, j) of a 4-dimensional array (with each dimension of size p + 1). Observe that data access is direct (constant computational cost) at the expense of many allocated but unused array positions. The memory requirement is O(p^4).
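The following C fragment sketches one possible implementation of this indexing scheme, assuming 1 ≤ w ≤ l: it builds the sequence Z (stored in v) together with the vector u, and computes the flat position of a subproblem inside the 4-dimensional array. It is a minimal sketch written for this note; the function names (build_index, position) and the flattening of the array into a single vector are our own choices, not necessarily those of the actual implementation.

#include <stdlib.h>

#define UNDEF (-1)

/* Builds the increasing sequence Z of all non-negative integer
   combinations of l and w not exceeding L (assumes 1 <= w <= l).
   On exit, v[i] = zi for i = 0, ..., p (so v implements alpha^{-1}),
   u[z] = alpha(z) for every z in Z, and u[k] = UNDEF for k not in Z.
   Returns p.                                                            */
int build_index(int L, int l, int w, int **u_out, int **v_out) {
    int *u = (int *) malloc((L + 1) * sizeof(int));
    int *v = (int *) malloc((L + 1) * sizeof(int));
    int k, a, p;
    for (k = 0; k <= L; k++) u[k] = UNDEF;
    for (a = 0; a * l <= L; a++)            /* mark every a*l + b*w <= L  */
        for (k = a * l; k <= L; k += w)
            u[k] = 0;                       /* provisional mark           */
    p = -1;
    for (k = 0; k <= L; k++)                /* index the marked values    */
        if (u[k] != UNDEF) { p++; v[p] = k; u[k] = p; }
    *u_out = u;
    *v_out = v;
    return p;
}

/* Constant-time computation of the position of subproblem (X,Y,x,y),
   with X, Y, x, y in Z, inside the 4-dimensional array, here viewed as
   a single vector of (p+1)^4 cells.                                      */
size_t position(const int *u, int p, int X, int Y, int x, int y) {
    size_t n = (size_t) p + 1;
    return ((u[X] * n + u[Y]) * n + u[x]) * n + u[y];
}

With this layout, the array that stores the subproblem records takes (p + 1)^4 times the size of one record, in agreement with the O(p^4) memory requirement mentioned above.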

Numerical experiments

We implemented the L-approach using the function α and its inverse for the indexing of the subproblems. The algorithm was coded in C and the experiments were run on a 1533 MHz AMD Athlon (TM) MP 1800+ processor with 512 MB of RAM under the Linux operating system. An approximate comparison (http://www.spec.org) indicates that this computer is less than twice as fast as the one used in Lins et al 6 (a 700 MHz Pentium III processor). The compiler was gcc version 2.95.4 with the -O4 optimization flag.

Table 1 shows the results obtained for the 16 problems analyzed in Lins et al 6; these problems are hard to solve for block heuristics, as shown in Morabito and Morales10. The columns of the table show the problem Pn(L, W, l, w) (meaning that n (l, w)-rectangles were packed into a pallet of dimensions (L, W)), the total amount of memory allocated for the 4-dimensional array (in megabytes) together with the percentage actually used (to store the information of the subproblems), and the CPU runtime in seconds. Note that, on average, only 1.52% of the elements of the 4-dimensional array are actually used by the algorithm. The amounts of memory displayed in Table 1 are those used by the array that saves the subproblem information. The total amount of physical memory used by the task (including the size of the task's code, data and stack) is at most twice the reported amounts. Hence, any computer with 512 MB of RAM is capable of solving these problems.

Table 1 about here.

In addition to the 16 instances presented in Table 1, we also solved more than 20,000 problems selected from sets Cover I and II in Morabito and Morales10. These instances correspond to all problems satisfying L, W ≤ 1000. Table 2 shows the average, the standard deviation, the minimum and the maximum runtimes (in seconds) for each set of selected problems.

Table 2 about here.

Note that the average runtimes are affordable (with relatively high standard deviations). As pointed out in Lins et al 6, the L-approach solved all problems to optimality. This fact reinforces the conjecture that the algorithm is exact. The problems in Cover I (with up to 50 boxes) are on average much easier to solve, whereas those in Cover II (51 to 100 boxes) are on average as difficult as the 16 problems. Since most problems in Covers I and II are quickly solved by the block heuristic in Morabito and Morales10, the runtimes of Table 2 can be substantially reduced by combining this heuristic with the L-approach, as sketched below.
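One natural way of doing such a combination (our reading of the suggestion above, not a procedure taken from Lins et al 6) is to run the fast block heuristic first and to call the L-approach only when the heuristic solution does not attain a simple upper bound. The sketch below illustrates this in C; block_heuristic and l_approach are hypothetical interfaces to the corresponding algorithms, and the trivial area bound could be replaced by the stronger bounds available in the literature (Nelissen3, Letchford and Amaral5).

/* Hypothetical interfaces to the block heuristic of Morabito and Morales
   and to the L-approach implementation, respectively.                    */
int block_heuristic(int L, int W, int l, int w);
int l_approach(int L, int W, int l, int w);

/* Trivial area upper bound on the number of (l,w)-rectangles that fit.   */
static int area_bound(int L, int W, int l, int w) {
    return (L * W) / (l * w);
}

/* Runs the cheap heuristic first; the (expensive) L-approach is called
   only when the heuristic solution is not provably optimal.              */
int solve_pallet(int L, int W, int l, int w) {
    int lower = block_heuristic(L, W, l, w);
    if (lower == area_bound(L, W, l, w))
        return lower;                     /* heuristic solution is optimal */
    return l_approach(L, W, l, w);
}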

Concluding remarks

In this note we showed that the L-approach introduced in Lins et al 6 can be effective for solving pallet loading problems of realistic sizes. By simply using a different data structure, the runtimes are considerably reduced in spite of larger (but affordable) memory requirements. In this way, the computer requirements of the algorithm become more acceptable and the L-approach becomes a real option for pallet loading in logistics environments. The present implementation (including the source code in C) is available for benchmarking purposes14.

Acknowledgement: The authors thank the anonymous referee for his/her useful comments.

References

[1] A. Tarnowski, J. Terno and G. Scheithauer, A polynomial-time algorithm for the guillotine pallet loading problem, INFOR 32, pp. 275–287, 1994.

[2] K. A. Dowsland, An exact algorithm for the pallet loading problem, European Journal of Operational Research 84, pp. 78–84, 1987.

[3] J. Nelissen, How to use the structural constraints to compute an upper bound for the pallet loading problem, European Journal of Operational Research 84, pp. 662–680, 1995.

[4] J. Nelissen, Solving the pallet loading problem more efficiently, working paper, Graduiertenkolleg Informatik und Technik, Aachen, 1994.

[5] A. Letchford and A. Amaral, Analysis of upper bounds for the pallet loading problem, European Journal of Operational Research 132, pp. 582–593, 2001.

[6] L. Lins, S. Lins and R. Morabito, An L-approach for packing (l, w)-rectangles into rectangular and L-shaped pieces, Journal of the Operational Research Society 54, pp. 777–789, 2003.


[7] V. Pureza and R. Morabito, Some experiments with a simple tabu search algorithm for the manufacturer's pallet loading problem, Computers & Operations Research, to appear.

[8] R. Alvarez-Valdes, F. Parreño and J. M. Tamarit, A branch-and-cut algorithm for the pallet loading problem, Computers & Operations Research, to appear.

[9] R. Alvarez-Valdes, F. Parreño and J. M. Tamarit, A tabu search algorithm for the pallet loading problem, OR Spectrum 27, pp. 43–61, 2005.

[10] R. Morabito and S. Morales, A simple and effective recursive procedure for the manufacturer's pallet loading problem, Journal of the Operational Research Society 49, pp. 819–828, 1998.

[11] R. Morabito, S. Morales and J. Widmer, Loading optimization of palletized products on trucks, Transportation Research (Part E) 36, pp. 285–296, 2000.

[12] L. Lins, S. Lins and S. Melo, Phorma: perfectly hashable order restricted multidimensional arrays, Discrete Applied Mathematics 141, pp. 209–223, 2004.

[13] A. Nijenhuis and H. S. Wilf, Combinatorial algorithms for computers and calculators, Academic Press (second edition), 1978.

[14] www.ime.usp.br/~egbirgin.



Figure 1: Standardly positioned L-shaped piece represented by (X, Y, x, y).


k    :  0   1   2   3   4   5   6   7   8   9  10
u[k] :  0   -   1   -   2   3   4   5   6   7   8

i    :  0   1   2   3   4   5   6   7   8
v[i] :  0   2   4   5   6   7   8   9  10

Figure 2: Vectors u and v for a problem instance with L = 10, W = 4, l = 5 and w = 2 (a dash denotes an undefined entry of u).


Problem                   Memory allocated (MB)   Memory actually used (%)   Runtime (s)
P53 (43, 26, 7, 3)                 2.98                    0.981                  2.48
P57 (49, 28, 8, 3)                 4.89                    1.375                  7.62
P69' (57, 34, 7, 4)                8.25                    1.242                 13.25
P69'' (63, 44, 8, 5)               8.94                    1.215                 15.18
P71 (61, 35, 10, 3)               11.29                    1.821                 37.52
P75 (67, 37, 11, 3)               16.19                    1.858                 68.80
P77' (61, 38, 10, 3)              11.29                    2.392                 56.35
P77'' (61, 38, 6, 5)              10.46                    1.237                 19.45
P81 (67, 40, 11, 3)               16.19                    2.434                103.03
P82' (74, 49, 11, 4)              18.54                    2.750                134.88
P82'' (93, 46, 13, 4)             47.72                    1.298                168.38
P96' (106, 59, 13, 5)             67.89                    1.463                327.88
P96'' (141, 71, 13, 8)           143.05                    0.396                141.23
P97 (74, 46, 7, 5)                22.53                    1.197                 57.73
P99 (86, 52, 9, 5)                36.35                    1.852                191.28
P100 (108, 65, 10, 7)             64.68                    0.732                111.87
Average                           30.70                    1.52                  91.06

Table 1: Results obtained for the 16 hard problems. The memory columns refer to the 4-dimensional array: total allocated and percentage actually used to store subproblem information.


                             # selected            Runtimes (in seconds)
Data set                     problems      Average   Std. dev.     Min        Max
Problems in Lins et al 6           16        91.06       87.39      2.48     327.88
Cover I                         3,179         1.10        3.19      0.00      50.20
Cover II                       16,938       113.10      217.55      0.00    3096.45

Table 2: Runtime statistics for the more than 20,000 problems selected from data sets Covers I and II.
