
Research on multi-agent genetic algorithm based on tabu search for the job shop scheduling problem


Authors: Chong Peng aff001;  Guanglin Wu aff001;  T. Warren Liao aff002;  Hedong Wang aff001
Authors' place of work: School of Mechanical Engineering and Automation, Beihang University, Beijing, China aff001;  Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA, United States of America aff002
Published in the journal: PLoS ONE 14(9)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0223182

Summary

The solution of the job shop scheduling problem (JSSP) is of great significance for improving the resource utilization and production efficiency of enterprises. In this paper, in view of its non-deterministic polynomial properties, a multi-agent genetic algorithm based on tabu search (MAGATS) is proposed to solve JSSPs with the objective of minimizing makespan. Firstly, a multi-agent genetic algorithm (MAGA) is proposed. In the process, a multi-agent grid environment is constructed based on the characteristics of multi-agent systems and the genetic algorithm (GA), and a corresponding neighbor interaction operator, a mutation operator based on neighborhood structure and a self-learning operator are designed. Then, by combining the tabu search algorithm with MAGA, the algorithm MAGATS is presented. Finally, 43 benchmark instances are tested with the new algorithm, and its optimization performance is compared with that of four other algorithms on the basis of the obtained test results. The analysis verifies the effectiveness of the new algorithm.

Keywords:

Evolutionary genetics – Algorithms – Optimization – Islands – Built structures – Agent-based modeling – Genetic algorithms – Insertion mutation

1 Introduction

Under the current rapid development of the social economy, whether an enterprise can quickly respond to market demands with limited resources, while ensuring high efficiency, high quality, high output and low cost, largely determines its development and destiny. In the production process, effectively solving the JSSP is an extremely important link in achieving the above goals. Job shop scheduling means reasonably arranging the processing order of the jobs on each machine, based on the available machine resources and raw materials and subject to realistic constraints (product delivery dates, production process routes and available resources), so that the product manufacturing time or manufacturing cost is optimized as far as possible.

The importance of JSSPs has attracted many scholars to conduct research in this field. Garey, Johnson and Sethi proved that JSSPs have non-deterministic polynomial (NP) characteristics [1]. Owing to this feature, methods for solving JSSPs can be divided into two categories: exact methods and approximation methods. Exact methods include the mathematical programming method, the branch and bound method, and so on [2,3]. However, the exact methods mentioned above are only applicable to small-scale JSSPs; as the complexity of JSSPs increases, their applicability decreases continuously. Approximation methods include priority dispatching rules, the shifting bottleneck procedure, meta-heuristic algorithms, and the like. At present, meta-heuristic algorithms such as GA, the ant colony optimization algorithm (ACO), the particle swarm optimization algorithm (PSO), the simulated annealing algorithm (SA), neural networks, tabu search (TS) and the artificial bee colony algorithm (ABC) have been widely used in various fields and have shown good performance [4–6].

GA was presented by Holland [7] and has been extensively applied to solving scheduling problems. Its excellent search performance is favored by many researchers in this field. However, the application of GA to complex JSSPs is limited by its premature convergence and local convergence. In response to these shortcomings, researchers have proposed a number of improvements, such as the two-stage genetic algorithm, the island model genetic algorithm, hybrid genetic algorithms, and so on. Xu and Li [8] proposed an immune genetic algorithm by combining immune theory with GA, which improved the global search performance of GA. Kurdi [9] proposed a new island model genetic algorithm (NIMGA), which contains a new naturally inspired evolution model and a new naturally inspired migration selection mechanism, to improve the effectiveness of the classical island model genetic algorithm; the new algorithm improved the diversification of the search and delayed the premature convergence of GA. Chang and Liu [10] proposed a hybrid genetic algorithm that uses the Taguchi method to optimize the parameters of a GA; the robustness and good optimization performance of the new algorithm were verified by applying it to some distributed and flexible job-shop scheduling problems. Moreover, multi-agent systems have gained more and more applications in production scheduling due to their flexibility, their adaptability to open and dynamic real-world environments and the synergy mechanism between multiple agents. Liu et al. [11] designed a multi-agent-based solution system and successfully solved the complex 7000-queen problem. Chen et al. [12] proposed a hybrid flow shop rescheduling algorithm for perishable manufacturing systems, in which products with different deadlines, values and due dates and stochastically failed operational units were simulated by agents; with the new algorithm, the product agents can search for the optimal scheduling path.

In this paper, a MAGATS is proposed for solving JSSPs. To improve the global search ability of the algorithm and maintain the diversity of the population, a multi-agent grid environment is established on the basis of GA, and a neighbor interaction operator, a mutation operator based on neighborhood structure and a self-learning operator are designed. The algorithm combines the efficiency and simple operation of GA, the synergy mechanism of a multi-agent system and the global search ability of TS, which compensates to some extent for the premature convergence and local convergence of GA and improves the search efficiency and optimization ability of the algorithm. JSSPs can then be solved more efficiently and accurately, the resource utilization of enterprises can be promoted, and market demands can be responded to faster, which is of great significance for the development of enterprises and of a country.

The remainder of this paper is organized as follows. The model of the JSSP and the construction of MAGA are described in Section 2. Section 3 introduces the establishment of MAGATS. In Section 4, 11 benchmark instances are selected to verify the optimization performance of MAGATS compared with GA and MAGA, and 43 benchmark instances are used to test the optimization performance of the new algorithm; based on the test results and a comparison with other algorithms, the optimization performance of MAGATS is studied. Conclusions are drawn in Section 5.

2 The achievement of MAGA

This section first introduces the model of the JSSP to be solved. Then, combining multi-agent synergy theory with GA, a MAGA is proposed.

2.1 Model of the JSSP

A JSSP is usually defined as follows.

n jobs are machined on m machines, denoted by the sets J = {Ji | i = 1, 2, …, n} and M = {Mi | i = 1, 2, …, m}, where Ji is a job code and Mi is a machine code. Each job has a specific process route, i.e., the order in which it visits the machines is given, and the processing time required for each operation of each job on the corresponding machine is also given. The task of scheduling is to determine the processing sequence of the jobs on each machine and the starting time of each operation so that the makespan (recorded as Cmax) is minimized. Besides that,

(1) Sequences of different jobs on the same machine are not constrained;

(2) Once an operation of a job is started on a machine, it is processed without interruption until it is completed. Each machine can process only one job at a time, and it is assumed that no machine fails;

(3) All jobs arrive at the same time and only flow through each machine once.

It is assumed that the total number of operations of all jobs is Q and G = {1, 2, …, Q} is the operation set; Ji represents the job to which operation i belongs; the machine required for processing operation i is denoted by Mi; the starting time of operation i is denoted by Si; the time needed for processing operation i is recorded as Pi; and the sequence of two adjacent operations of a job is represented by →. Taking i → j as an example, operations i and j belong to one job, that is, Ji = Jj, and operation i precedes operation j. The model of the JSSP can then be described as follows.

min Cmax = max_{i ∈ G} (Si + Pi)                                        (1)

s.t.  Si ≥ 0,  ∀ i ∈ G
      Sj ≥ Si + Pi,  if i → j
      Sj ≥ Si + Pi  or  Si ≥ Sj + Pj,  if Mi = Mj, i ≠ j                (2)
In Eq (2), the second constraint represents an operation sequence constraint, that is, the start time of any one operation of a job must be later than the completion time of its previous operation. The third constraint represents a resource constraint of machines, that is, each machine can only process one job at the same time.

2.2 Construction of MAGA

Classic GA simulates the "survival of the fittest" rule in nature: in each generation it calculates the fitness values of all chromosomes, preferentially selects individuals with high fitness in a certain way, and weeds out those with low fitness with a high probability. In nature, however, things often evolve locally first and then expand globally, which resembles the co-evolution of multiple agents in a multi-agent system. A multi-agent system is a distributed system composed of multiple agents. Each agent can be either hardware or software; it can sense and respond to its local environment and communicate with neighboring agents to complete complex tasks. In a multi-agent system, the co-evolution of multiple agents in a local environment better reflects evolutionary processes in nature. Therefore, GA is first improved to some extent by combining it with multi-agent synergy theory.

2.2.1 Multi-agent grid environment based on GA

GA is a random search algorithm that simulates the evolutionary processes of biology. Its optimization process is realized by three basic operators: selection-reproduction, crossover and mutation. As shown in Fig 1, GA consists of five basic elements: encoding and decoding, design of the initial population, design of the fitness evaluation function, design of the genetic operations (selection-reproduction, crossover and mutation), and setting of the genetic parameters.

Fig. 1. Flow chart of GA.

Based on GA, a multi-agent grid environment is built. In this environment, each chromosome in GA is treated as an independent agent. Firstly, a neighbor interaction operator is designed to realize the optimization function of multi-agent systems. Secondly, to maintain the diversity of the population, a mutation operator based on neighborhood structure and a self-learning operator are designed and introduced. The optimization algorithm based on this multi-agent grid environment is called MAGA. Its overall flow chart is shown in Fig 2.

Fig. 2. Flow chart of MAGA.

Comparing Fig 1 with Fig 2, the process of MAGA is basically the same as that of GA. In particular, MAGA uses the same encoding and decoding process for each agent in the grid environment.

2.2.1.1 Encoding and decoding of GA. Encoding is the gene representation of solutions of JSSPs. It is the primary problem faced by GA in optimizing JSSPs and plays a key role in the optimization effectiveness of the algorithm. Encoding methods of GA currently used for JSSPs can be summarized into two categories: direct encoding and indirect encoding. Direct encoding methods include operation-based encoding, job-based encoding, job pair-based encoding, completion time-based encoding and random key-based encoding, while indirect encoding methods usually include precedence table-based encoding, priority rule-based encoding, disjunctive graph-based encoding and machine-based encoding [13–16]. Weighing the advantages and disadvantages of the various encoding methods, operation-based encoding is more prominent than the others and is the most widely used method for solving JSSPs. With operation-based encoding, an arbitrary arrangement of the genes can be expressed as a feasible schedule, the corresponding decoding scheme is simple and easy to operate, and a feasible schedule can always be obtained after replacing chromosomes.

Although this method has only a half-Lamarckian characteristic (the Lamarckian characteristic refers to the ability to inherit good information from parent chromosomes), the performance of the algorithm can be enhanced by improving the design of the genetic operations, thereby improving its genetic characteristics. In summary, the operation-based encoding method is used to encode JSSPs. In this method, each chromosome is represented by a gene sequence. Each gene sequence contains n×m (n jobs, m machines) genes representing operations, which is an arrangement of all operations.

Existing decoding methods include semi-active decoding and active decoding. Active decoding makes the decoded operations more concentrated, which can improve the search efficiency of an algorithm [17]. Therefore, an insertion strategy is adopted to decode chromosomes: every operation in a sequence is arranged on the corresponding machine in turn, and, when the constraints are satisfied, every operation is started as early as possible. Finally, an active chromosome of the JSSP, that is, an active scheduling solution, is obtained. To illustrate the decoding process, the data given in Table 1 are taken as an example. The processing time "4-5-11-3" in the second row of Table 1 indicates the processing time required for the operations of job J1 on machines M4, M3, M1 and M2, respectively.

Tab. 1. 4×4 JSSP.

The process of decoding a chromosome is shown in Fig 3. For example, an operation sequence [1 3 1 4 2 3 2 4 3 3 4 1 4 1 2 2] generated from the data of Table 1 is actively decoded. When the 3rd operation of job J1 is to be arranged, that is, when the 3rd "1" of the operation sequence is taken, the corresponding machine M1 has already been assigned an operation of job J3, so the new operation can be inserted in front of it. Similarly, the 4th operation O44 of job J4 can also be inserted in front of job J3, and Cmax is finally decoded to be 34. The active scheduling solution obtained by actively decoding the chromosome is shown in Fig 4, where Oij represents the jth operation of job Ji. After active decoding of the example chromosome, an active scheduling solution with more concentrated operations, [1 3 4 1 3 2 1 2 4 3 4 4 3 1 2 2], is obtained.

Fig. 3. Active decoding of JSSP.
Fig. 4. Active scheduling solution.
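For readers who wish to experiment with the insertion strategy, the following is a minimal C++ sketch of active decoding of an operation-based chromosome. The data layout (route[j][k] giving the machine of the k-th operation of job j, ptime[j][k] its duration, 0-based job indices) and all names are illustrative assumptions rather than the authors' implementation; the idea is simply to place each operation in the earliest idle gap on its machine that also respects the job precedence constraint.

```cpp
#include <algorithm>
#include <vector>

// One scheduled time slot on a machine.
struct Interval { int start, end, job, op; };

// Active (insertion-based) decoding of an operation-based chromosome.
// chrom holds job indices; the k-th occurrence of job j refers to its k-th operation.
int activeDecode(const std::vector<int>& chrom,
                 const std::vector<std::vector<int>>& route,   // route[j][k] = machine index
                 const std::vector<std::vector<int>>& ptime) { // ptime[j][k] = processing time
    int nJobs = static_cast<int>(route.size()), nMach = 0;
    for (const auto& r : route) for (int m : r) nMach = std::max(nMach, m + 1);

    std::vector<std::vector<Interval>> sched(nMach);   // intervals per machine, kept sorted
    std::vector<int> nextOp(nJobs, 0), jobReady(nJobs, 0);
    int makespan = 0;

    for (int job : chrom) {
        int k = nextOp[job]++;                          // which operation of this job
        int m = route[job][k], p = ptime[job][k];
        auto& mc = sched[m];
        int est = jobReady[job];                        // earliest start allowed by the job
        int start = -1, prevEnd = 0;
        std::size_t pos = 0;
        // look for the first idle gap on machine m that can hold the operation
        for (; pos < mc.size(); ++pos) {
            int gapStart = std::max(prevEnd, est);
            if (gapStart + p <= mc[pos].start) { start = gapStart; break; }
            prevEnd = mc[pos].end;
        }
        if (start < 0) start = std::max(prevEnd, est);  // no gap found: append at the end
        mc.insert(mc.begin() + pos, Interval{start, start + p, job, k});
        jobReady[job] = start + p;
        makespan = std::max(makespan, start + p);
    }
    return makespan;
}
```

A routine of this kind reproduces the insertion behaviour described above: an operation is slotted into an earlier idle gap on its machine whenever the job precedence constraint allows it, which is what turns a semi-active schedule into an active one.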

2.2.1.2 Neighbor environment and action criteria of chromosome agents. MAGA uses the encoding and decoding methods of GA to initialize the population and evaluate the fitness of the chromosome agents in the grid environment. To use multiple agents for collaborative optimization, three issues should be settled first.

(1) Action intention definition of all agents. When multiple agents are used for collaborative optimization, each agent represents a feasible solution of the JSSP. Its action intention is therefore defined as follows: under the premise of satisfying the constraints, the agent adjusts its encoding sequence toward the optimal solution in combination with its local environment.

(2) Neighbor environment definition of chromosome agents. Regard each chromosome as a separate agent, and then fix them in a Lsize×Lsize grid environment (as shown in Fig 5), thus forming a neighbor environment for each chromosome agent. It should be noted that the number of neighbors per chromosome is not fixed. It can be 4, 8 or more, or it can be dynamic. Since the main purpose of this paper is to combine a multi-agent grid environment with GA to study its optimization effectiveness, the number of neighbors is set to 4. It is defined as follows.

Fig. 5. Multi-agent grid environment.

Suppose Aij represents a chromosome agent with coordinates (i, j) in the grid environment, then a neighbor set Nij of it can be defined:

Nij = {Ai′,j, Ai,j′, Ai″,j, Ai,j″},  where
i′ = i − 1 if i ≠ 1, i′ = Lsize if i = 1;   i″ = i + 1 if i ≠ Lsize, i″ = 1 if i = Lsize;
j′ = j − 1 if j ≠ 1, j′ = Lsize if j = 1;   j″ = j + 1 if j ≠ Lsize, j″ = 1 if j = Lsize.        (3)
where Lsize represents the dimension of the grid environment. The four neighbors of each agent can be found according to Eq (3). For example, the neighbor coordinates of agent A22 are (1, 2), (2, 1), (2, 3) and (3, 2), and those of agent A11 are (Lsize, 1), (1, Lsize), (2, 1) and (1, 2). (A small indexing sketch is given after this list.)

(3) Action criteria of chromosome agents. Each agent interacts with only four neighbors according to its own environment, and improves its own fitness by competing and cooperating with the best neighbors around.
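The toroidal indexing of Eq (3) is straightforward to implement. The sketch below (names and the 1-based indexing are illustrative assumptions) returns the four neighbor coordinates of an agent and reproduces the A22 and A11 examples above.

```cpp
#include <array>
#include <iostream>
#include <utility>

// Four-neighbor lookup on an Lsize x Lsize toroidal grid (1-based indices),
// following Eq (3). Names are illustrative.
std::array<std::pair<int, int>, 4> neighbors(int i, int j, int Lsize) {
    int up    = (i == 1)     ? Lsize : i - 1;
    int down  = (i == Lsize) ? 1     : i + 1;
    int left  = (j == 1)     ? Lsize : j - 1;
    int right = (j == Lsize) ? 1     : j + 1;
    return {{ {up, j}, {i, left}, {down, j}, {i, right} }};
}

int main() {
    for (auto [r, c] : neighbors(1, 1, 16))      // agent A11 in a 16 x 16 grid
        std::cout << "(" << r << ", " << c << ") ";
    std::cout << "\n";                           // prints (16, 1) (1, 16) (2, 1) (1, 2)
}
```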

2.2.2 Design of neighbor interaction operators

In general, a crossover operator should inherit good features of the parent chromosomes while ensuring that feasible solutions are generated, so that the algorithm evolves toward the optimal solution. For JSSPs, if the scheduling solution obtained by processing two adjacent jobs (Ji, Jk) on a machine is good, then passing this processing order (Ji, Jk) on to the offspring preserves this advantage. Many crossover operators have been proposed; the more successful ones are Linear Order Crossover (LOX), Precedence Operation Crossover (POX), Precedence Preservation Crossover (PPX), etc. [18–20]. The POX operator proposed by Zhang et al. inherits the superior features of parent chromosomes better than other crossover operators [21]. Its specific implementation is shown in Fig 6.

Fig. 6. Principles of POX.

Specific steps of POX are as follows.

(1) A job set is randomly divided into two non-empty subsets JS1, JS2, as shown in Fig 6, JS1 = {1, 3}, JS2 = {2, 4};

(2) Genes belonging to JS1 in parent1 are directly copied to child1, and genes belonging to JS1 in parent2 are directly copied to child2. The positions of these genes are retained;

(3) The remaining genes in parent2 are copied into child1, and the remaining genes in parent1 are copied into child2. Orders of the genes are preserved.
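A compact sketch of the POX steps listed above is given below for one offspring; the container types and names are illustrative assumptions.

```cpp
#include <unordered_set>
#include <vector>

// Precedence Operation Crossover (POX) producing child1: genes of jobs in JS1
// keep their positions from parent1 (step 2); the remaining positions are
// filled with parent2's genes of the other jobs, in parent2's order (step 3).
std::vector<int> pox(const std::vector<int>& parent1,
                     const std::vector<int>& parent2,
                     const std::unordered_set<int>& JS1) {
    std::vector<int> child(parent1.size(), -1);
    for (std::size_t g = 0; g < parent1.size(); ++g)
        if (JS1.count(parent1[g])) child[g] = parent1[g];    // keep position
    std::size_t fill = 0;
    for (int gene : parent2) {
        if (JS1.count(gene)) continue;                       // already placed from parent1
        while (child[fill] != -1) ++fill;                    // next free slot
        child[fill] = gene;
    }
    return child;
}
```

child2 is produced symmetrically by swapping the roles of the two parents.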

In GA, the individuals involved in crossover are paired randomly, whereas the individuals participating in neighbor interaction come only from the local environment of each agent. The design criteria for the neighbor interaction operator are as follows.

(1) If a current agent's fitness value is higher than its four neighbor agents, it will be retained as a winner, and its encoding sequence remains unchanged;

(2) If a current agent’s fitness value is lower than the fitness value of the optimal neighbor agent, it will be replaced by the optimal neighbor. The replacement process adopts a POX operator. Unlike traditional crossover operations, each interaction only generates one offspring, that is, only child1 is retained.

There are two strategies for the replacement mechanism.

(1) Take the optimal neighbor agent as parent1 and the current agent as parent2, and obtain an optimal offspring through λ POX operations;

(2) Take the current agent as parent1 and the optimal neighbor agent as parent2, and obtain an optimal offspring through λ POX operations.

Strategy (1) is adopted with probability Po and strategy (2) with probability 1−Po. The first strategy focuses more on the concentration of the search and helps speed up the algorithm's convergence; the second strategy focuses more on the dispersion of the search.
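The following sketch illustrates the replacement mechanism just described for one losing agent. The callables pox (assumed to draw the random job partition internally) and fitness, as well as all names, are placeholders for the operators defined in this paper, not the authors' code.

```cpp
#include <functional>
#include <random>
#include <vector>

using Chromosome = std::vector<int>;

// Neighbor interaction for an agent that lost to its best neighbor: choose the
// parent roles with probability Po, generate lambda POX offspring (only child1
// is kept each time) and return the fittest one as the replacement.
Chromosome interact(const Chromosome& current, const Chromosome& bestNeighbor,
                    double Po, int lambda,
                    const std::function<Chromosome(const Chromosome&, const Chromosome&)>& pox,
                    const std::function<double(const Chromosome&)>& fitness,
                    std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    bool strategy1 = (u(rng) < Po);                        // strategy (1) vs strategy (2)
    const Chromosome& p1 = strategy1 ? bestNeighbor : current;
    const Chromosome& p2 = strategy1 ? current : bestNeighbor;

    Chromosome best = pox(p1, p2);
    double bestFit = fitness(best);
    for (int t = 1; t < lambda; ++t) {
        Chromosome child = pox(p1, p2);
        double f = fitness(child);
        if (f > bestFit) { bestFit = f; best = child; }
    }
    return best;                                           // replaces the current agent
}
```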

2.2.3 Design of mutation operators based on neighborhood structure

Mutation operators are designed to give agents a small disturbance to ensure the diversity of the population. After the neighbor interaction operator has been applied, if the optimal agent obtained already has high fitness, how to further improve its fitness decides whether the algorithm can find a better solution. Mutation operations commonly used in existing research include exchange mutation, insertion mutation, reverse mutation, replacement mutation, and the like. These mutation operations have two shortcomings. Firstly, the direction of mutation is largely blind, so it is difficult to guide the algorithm toward the global optimal solution. Secondly, these mutation operations are greedy to some extent, which easily drives the algorithm to a local optimum. A neighborhood structure is a mechanism that applies a small perturbation to a given solution to obtain another solution. Therefore, a new mutation operator, the mutation operator based on neighborhood structure, is presented.

2.2.3.1 The neighborhood structure of the JSSP. In the field of combinatorial optimization, studying the neighborhood structure of a problem is one of the most important ways to improve solutions of the problem. Van Laarhoven et al. [22] proposed the N1 neighborhood structure, in which new solutions are generated by randomly exchanging two adjacent operations on a critical path; it has been proved that the optimal solution can be reached through a finite number of such interchanges under the N1 neighborhood structure. To further improve solution quality and efficiency, many researchers subsequently proposed the N4, N5 and N6 neighborhood structures [23–25]. Matsuo et al. found that exchanging key operations u and v (assuming u is processed before v) can reduce the makespan of a given solution only when the job immediate predecessor operation (JIPO) of u or the job immediate successor operation (JISO) of v is contained in a critical path including u and v [26]. Based on this theory, the N7 neighborhood structure was proposed by Zhang et al. [27]. This neighborhood structure explores a wider solution space while still guaranteeing the generation of feasible solutions, so that higher quality feasible solutions can be obtained. Before introducing the N7 neighborhood structure, some concepts need to be explained.

Definition 2.1 In the JSSP, each key operation u has at most two immediate predecessor operations and two immediate successor operations: a JIPO and a JISO, namely the previous and next operations of u that belong to the same job as u, denoted Jpre[u] and Jsuc[u] respectively; and a machine immediate predecessor operation (MIPO) and a machine immediate successor operation (MISO), namely the previous and next operations of u processed on the same machine as u, denoted Mpre[u] and Msuc[u] respectively.

Definition 2.2 A critical path is the longest path from the starting point 0 to the ending point 17 in a directed graph such as that shown in Fig 7. The path length is the makespan of a scheduling solution. Fig 7 is a disjunctive graph model of the 4×4 JSSP in Table 1, constructed following the introduction of disjunctive graphs in [14].

Fig. 7. A disjunctive graph model of 4×4 JSSP.

Definition 2.3 Each operation on a critical path is called a key operation. As shown in Fig 8, a direction is selected for the bidirectional arcs in the disjunctive graph model according to the active scheduling solution in Fig 4, thereby forming a directed graph representation of the 4×4 JSSP. A maximal group of adjacent key operations processed on the same machine is called a block; for example, [O34, O23] is a block.

Fig. 8. An acyclic directed graph and a critical path of 4×4 JSSP.

Based on definitions 2.1 to 2.3, and under the premise that operations x and y are both on the same critical path, the N7 neighborhood structure can be defined as follows.

(1) When y is a block tail operation and its JISO is in the critical path, operation x can be moved after operation y, and vice versa. As shown in Fig 9.

(2) When x is a block head operation and its JIPO is in the critical path, operation y can be moved before operation x, and vice versa. As shown in Fig 10.

Fig. 9. JISO of a block tail operation is in the critical path.
Fig. 10. JIPO of a block head operation is in the critical path.

Figs 9 and 10 show key operation blocks on critical paths containing operations x and y; u1, uk and "…" in the rectangular frame indicate intermediate operations of the block.

2.2.3.2 Key operation search based on operation-based encoding. With operation-based encoding, the first step in generating a neighborhood solution is to search for the key operations. This can be done on the machine Gantt chart obtained after active decoding: starting from the last completed operation and moving backwards, an operation whose completion time coincides with the start time of the current operation is marked as a key operation. If the MIPO and the JIPO of an operation both qualify, the JIPO is selected as the key operation.

Take the active scheduling solution [1 3 4 1 3 2 1 2 4 3 4 4 3 1 2 2] obtained before for example, its key operation blocks are shown in Fig 11.

Fig. 11. Key operation blocks.
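One possible way to extract the key operations from a decoded schedule is sketched below. The schedule representation (start and end times plus the indices of the machine and job immediate predecessors of every operation) and all names are assumptions; the routine backtracks from the last completed operation and, as stated above, prefers the JIPO when both predecessors finish exactly at the current start time.

```cpp
#include <vector>

// One scheduled operation, with indices (-1 when absent) of its job immediate
// predecessor (JIPO) and machine immediate predecessor (MIPO).
struct Op { int start, end, jipo, mipo; };

// Backtrack along a critical path: an operation whose completion time
// coincides with the start time of the current key operation is itself key.
std::vector<int> keyOperations(const std::vector<Op>& ops) {
    int cur = -1;
    for (int i = 0; i < static_cast<int>(ops.size()); ++i)   // last completed operation
        if (cur < 0 || ops[i].end > ops[cur].end) cur = i;

    std::vector<int> key;
    while (cur >= 0) {
        key.push_back(cur);
        int j = ops[cur].jipo, m = ops[cur].mipo;
        if (j >= 0 && ops[j].end == ops[cur].start)      cur = j;   // prefer the JIPO
        else if (m >= 0 && ops[m].end == ops[cur].start) cur = m;
        else cur = -1;                           // reached an operation starting at time 0
    }
    return std::vector<int>(key.rbegin(), key.rend());   // critical path, front to back
}
```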

2.2.3.3 Mutation mechanism based on neighborhood solutions. According to the discussion in the two parts above, after obtaining the key operation blocks of an agent's encoding sequence, the mutation operator based on neighborhood structure is implemented as follows.

(1) Record all operation pairs that can be exchanged according to the definition of N7 neighborhood structure.

(2) With probability Pm, randomly select one exchangeable operation pair and perform the mutation operation on it.
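The selection step can be sketched as follows; the list of exchangeable operation pairs is assumed to have been produced by the N7 rules above, and the names are illustrative.

```cpp
#include <random>
#include <utility>
#include <vector>

// Mutation based on the neighborhood structure: with probability Pm, pick one
// exchangeable operation pair (as allowed by the N7 rules) uniformly at random
// and exchange the two genes. 'pairs' holds positions in the chromosome.
void mutate(std::vector<int>& chrom,
            const std::vector<std::pair<int, int>>& pairs,
            double Pm, std::mt19937& rng) {
    if (pairs.empty()) return;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) >= Pm) return;                              // no mutation this time
    std::uniform_int_distribution<std::size_t> pick(0, pairs.size() - 1);
    auto [a, b] = pairs[pick(rng)];
    std::swap(chrom[a], chrom[b]);                         // apply the selected exchange
}
```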

2.2.4 Design of self-learning operators

Each agent can gradually improve its fitness value through interaction operations with neighbors and mutation operations based on neighborhood structure. Similarly, take the optimal agents in each generation to construct a small grid environment with a size of sLsize×sLsize. Perform Sgen neighbor interactions and mutation operations based on neighborhood structure to obtain a new optimal agent. The flow chart is shown in Fig 12.

Fig. 12. Flow chart of mutation operator.

3 The achievement of MAGATS

The algorithm framework of MAGA is based on that of GA, so it inevitably risks falling into local optima when solving JSSPs. At present, hybrid intelligent algorithms that combine a swarm intelligence algorithm with a local search algorithm are among the more advanced approaches for solving JSSPs [19]; compared with swarm intelligence algorithms alone, their optimization effects are significantly better. Local search algorithms are mainly represented by TS and SA. Since SA avoids local optima by accepting inferior solutions with a certain probability, it may return to a previous solution while searching for the optimal one, which causes the algorithm to oscillate around a local optimum and waste a lot of computing time. Therefore, on the basis of MAGA, MAGATS is proposed by introducing TS to enhance the global search ability. The flow chart of MAGATS is shown in Fig 13.

Fig. 13. Flow chart of MAGATS.

Specific steps of MAGATS are summarized as follows.

(1) After conducting a MAGA, the best agent is set as the current solution and the optimal solution.

(2) Obtain a set of candidate solutions to be selected according to neighborhood structure of current solutions (N7 neighborhood structure is adopted). Verify tabu of each candidate solution.

(3) Determine whether a tabu solution satisfies the aspiration criterion: if the fitness of a tabu solution is higher than that of all previously accepted solutions, it is released from the tabu list and set as the current solution and the optimal solution. Otherwise, the solution with the highest fitness in the non-tabu solution set becomes the current solution. Update the tabu table.

(4) Repeat steps (2), (3) until satisfying termination criteria. The termination criteria are to reach the maximum number of iterations or find the best known solution (BKS). Set the maximum number of iterations to 300.

(5) If the BKS is found, the algorithm stops. Otherwise, repeat steps (1)–(4) until the BKS is found or the number of TS cycles reaches H.

Key parameter settings of TS in MAGATS include selection of tabu objects and length setting of a tabu table.

(1) Selection of tabu objects. Section 2.2.3 discussed the neighborhood structures of JSSPs and the identification of key operations and exchangeable operation pairs. Therefore, exchanged operation pairs are marked as tabu objects. For example, if a candidate solution is generated by exchanging operations O12 and O32, which are both processed on M1, then the exchange of this operation pair is added to the tabu table.

(2) Length setting of the tabu table. A tabu table can be regarded as a special queue with first-in, first-out characteristics. When a new tabu object is added, its tabu length Llist is set; in other words, once it has entered the tabu table, the tabu object cannot be dequeued until Llist iterations of TS have elapsed (provided that the aspiration criterion is not met). If the tabu length is too small, it is easy to fall into a local optimum; on the contrary, if the tabu length is too large, too many constraints are imposed. An effective approach for JSSPs is to introduce a dynamic tabu table, in which the tabu length Llist is randomly selected in [Lmin, Lmax]. After several tests, setting Lmin = [10+n/m] and Lmax = [1.8Lmin] works well.
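The dynamic tabu table described above can be sketched as follows; the container choice, the per-entry tenure bookkeeping and all names are illustrative assumptions.

```cpp
#include <algorithm>
#include <deque>
#include <random>

// Dynamic tabu table: each forbidden operation pair stays tabu for Llist
// iterations of TS, where Llist is drawn uniformly from [Lmin, Lmax] when the
// pair is added, with Lmin = 10 + n/m and Lmax = 1.8 * Lmin as suggested above.
class TabuList {
public:
    TabuList(int nJobs, int nMachines) {
        lmin_ = 10 + nJobs / nMachines;
        lmax_ = static_cast<int>(1.8 * lmin_);
    }

    // Forbid re-exchanging the pair (opA, opB) for a randomly drawn tenure.
    void add(int opA, int opB, int iter, std::mt19937& rng) {
        // drop entries whose tenure has already elapsed (first in, first out)
        entries_.erase(std::remove_if(entries_.begin(), entries_.end(),
                                      [iter](const Entry& e) { return e.expires <= iter; }),
                       entries_.end());
        int llist = std::uniform_int_distribution<int>(lmin_, lmax_)(rng);
        entries_.push_back({std::min(opA, opB), std::max(opA, opB), iter + llist});
    }

    bool isTabu(int opA, int opB, int iter) const {
        int a = std::min(opA, opB), b = std::max(opA, opB);
        for (const auto& e : entries_)
            if (e.a == a && e.b == b && e.expires > iter) return true;
        return false;
    }

private:
    struct Entry { int a, b, expires; };
    int lmin_, lmax_;
    std::deque<Entry> entries_;
};
```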

4 Results and analysis

The proposed algorithm MAGATS is implemented in Visual C++. The program runs on a PC with a Core i5 processor, a 2.5 GHz main frequency and 8 GB of memory. To test the optimization performance of the algorithm, 43 instances are selected from the OR-Library, namely FT06, FT10 and FT20 contributed by Fisher and Thompson and LA01~LA40 contributed by Lawrence [28,29]. These instances are often used to test the optimization performance of new algorithms. Two comparative analyses are carried out on these instances.

4.1 Comparison of MAGATS with GA and MAGA

Parameters of GA, MAGA and MAGATS are set as follows.

GA: size of the initial population = 256, crossover probability Pc = 0.8, number of crossovers λ = 50, mutation probability Pm = 0.1, maximum number of iterations I = 300. MAGA: Lsize = 16, sLsize = 3, neighbor interaction probability Pc = 0.8, replacement mechanism probability Po = 0.5, number of crossovers λ = 50, mutation probability Pm = 0.1, number of self-learning iterations Sgen = 50, maximum number of iterations I = 300. TS: maximum number of cycles H = 300. The remaining parameters are set as in Section 3. Eleven instances of different sizes are selected to verify the improvement of MAGATS over GA and MAGA. The results are shown in Table 2, where Cbest is the best solution each algorithm finds in 20 runs, and Caver and σ are the mean and standard deviation of the 20 solutions obtained by GA, MAGA and MAGATS.

Tab. 2. Comparison of MAGATS with GA and MAGA.

As shown in Table 2 and Fig 14, compared with GA and MAGA, MAGATS finds the most BKSs and exhibits better optimization performance and better stability.

Fig. 14. Mean and standard deviation of GA, MAGA and MAGATS.

4.2 Comparison between MAGATS and other algorithms

In this comparative analysis, all 43 instances are each run 20 times, and the test results obtained with MAGATS are compared with those of four other algorithms: NIMGA, aLSGA, WW and TSSB [9,30–32], as shown in Table 3. Table 3 lists the instance name, the instance size (number of jobs × number of machines), the best known solution (BKS), the optimal solution (OS) obtained by MAGATS, RD (the relative deviation between the optimal solution obtained by MAGATS and the BKS) and the optimal solutions obtained by the other algorithms. RD is calculated by Eq (4).

RD = (OS − BKS) / BKS × 100%                                            (4)
Based on the obtained RDs for each instance, ARD used to evaluate the optimization performance of each algorithm can be calculated by Eq (5).

ARD = (1/N) Σ_{i=1}^{N} RD_i                                            (5)
where N is the number of instances tested by each algorithm. Table 4 lists the number of instances solved/the number of instances tested (NIS/NIT), the ARD of the other algorithms (OA) and of MAGATS, and the improvement. The column named improvement gives the reduction of ARD achieved by MAGATS compared with OA; in other words, it identifies the more efficient algorithm.
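For completeness, the two measures of Eqs (4) and (5) can be computed as in this small sketch; the names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Relative deviation (Eq 4) and average relative deviation (Eq 5), in percent.
double rd(double os, double bks) { return (os - bks) / bks * 100.0; }

double ard(const std::vector<double>& os, const std::vector<double>& bks) {
    double sum = 0.0;
    for (std::size_t i = 0; i < os.size(); ++i) sum += rd(os[i], bks[i]);
    return sum / static_cast<double>(os.size());
}
```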

Tab. 3. Comparison of optimization results of 43 instances.
Tab. 4. Improvement of MAGATS compared with OA.

Based on Tables 3 and 4, the new algorithm is analyzed.

(1) The proposed algorithm MAGATS finds the best known solutions of 38 of the 43 instances. For the small instances FT06, FT10, FT20 and LA01~LA15, all five algorithms find almost all of the best known solutions. For the relatively large instances LA16~LA40 (25 instances), however, 20 best known solutions (80%) are found by MAGATS, which is better than NIMGA (48%), aLSGA (45%), WW (64%) and TSSB (72%).

(2) Compared with the other algorithms, the ARD of MAGATS is the smallest, which indicates that the optimal solutions obtained by MAGATS are closest to the best known solutions, i.e., to the minimum makespan. This further shows that the new algorithm has better optimization performance and that the solutions it obtains are of higher quality.

To visualize the scheduling results obtained by MAGATS, the instance LA39 is selected and its optimal schedule is displayed as a machine Gantt chart in Fig 15. Since the notation Oij could be ambiguous here, (Ji, j) is used to represent the jth operation of job Ji.

Fig. 15. Machine Gantt chart of instance LA39.

5 Conclusion

Aiming at the NP characteristics of JSSPs and at minimizing the makespan, a MAGATS is proposed in this paper. The new algorithm is applied to 43 benchmark instances, and its optimization performance is compared with that of four other algorithms on the basis of the obtained test results. The analysis shows that the proposed algorithm MAGATS has high optimization performance and practical value in the field of JSSPs.

The highlights of this paper can be summarized in the following three aspects.

(1) A neighbor interaction operator based on the POX operator is designed. Under the algorithm framework of MAGA, each agent can interact only with its neighbors, and a replacement mechanism is adopted to retain good chromosomes. The algorithm's global search performance is thus enhanced to some extent, realizing the optimization function of the algorithm.

(2) A mutation operator based on neighborhood structure and a self-learning operator are designed. By introducing the N7 neighborhood structure into the design of the mutation operator, a wider solution space can be explored, thereby obtaining higher-quality feasible schedules. The centralized search ability of MAGATS is further enhanced by the self-learning operator.

(3) A MAGATS is proposed. By integrating MAGA with the excellent global search performance of TS, the optimization performance of the algorithm is further enhanced, avoiding premature convergence and entrapment in local optima.

The proposed algorithm MAGATS is only applied to JSSPs with the goal of minimizing makespan. The influence of the neighbor environment of the chromosome agents on the optimization performance of the algorithm needs further research. In future research, the feasibility of this algorithm for multi-objective job shop scheduling and flexible job shop scheduling will also be studied.

Supporting information

S1 File [docx]
Meta-data of 43 instances.


References

1. Garey MR, Johnson DS, Sethi R. The Complexity of Flowshop and Jobshop Scheduling. Math Oper Res. 1976;1(2):117–129.

2. Manne A. On the Job Shop Scheduling Problem. Cowles Foundation, Yale University, Cowles Foundation Discussion Papers. 1959;8.

3. Balas E. Machine Sequencing Via Disjunctive Graphs: An Implicit Enumeration Algorithm. Oper Res. 1969;17(6):941–957.

4. Calis B, Bulkan S. A research survey: review of AI solution strategies of job shop scheduling problem. J Intell Manuf. 2015;26(5):961–973.

5. Chen D. Research on Traffic Flow Prediction in the Big Data Environment Based on the Improved RBF Neural Network. IEEE T Ind Inform. 2017;13(4):2000–2008.

6. Liu Y, Liu Z, Jia R. Deep PF: A deep learning based architecture for metro passenger flow prediction. Transport Res C-Emer. 2019;101:18–34.

7. Holland J. Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press; 1975.

8. Xu XD, Li CX. Research on immune genetic algorithm for solving the job-shop scheduling problem. Int J Adv Manuf Technol. 2007;34(7–8):783–789.

9. Kurdi M. An effective new island model genetic algorithm for job shop scheduling problem. Comput Oper Res. 2016;67(C):132–142.

10. Chang HC, Liu TK. Optimisation of distributed manufacturing flexible job shop scheduling by using hybrid genetic algorithms. J Intell Manuf. 2015;28(8):1–14.

11. Liu J, Jing H, Tang YY. Multi-agent oriented constraint satisfaction. Artif Intell. 2002;136(1):101–44.

12. Chen W, Li J, Ma W. Hybrid flow shop rescheduling algorithm for perishable products subject to a due date with random invalidity to the operational unit. Int J Adv Manuf Technol. 2017;93(1):225–239.

13. Tsujimura Y. A tutorial survey of job-shop scheduling problems using genetic algorithms—I. representation. Comput Ind Eng. 1996;30(4):983–997.

14. Knopp S, Dauzère-Pérès S, Yugma C. A batch-oblivious approach for Complex Job-Shop scheduling problems. Eur J Oper Res. 2017;263(1):50–61.

15. Bean JC. Genetic Algorithms and Random Keys for Sequencing and Optimization. ORSA J Comput. 1994;6:154–160.

16. Zambrano Rey G, Bekrar A, Trentesaux D, Zhou B-H. Solving the flexible job-shop just-in-time scheduling problem with quadratic earliness and tardiness costs. Int J Adv Manuf Technol. 2015;81(9):1871–1891.

17. Kuhpfahl J, Bierwirth C. A Study on Local Search Neighborhoods for the Job Shop Scheduling Problem with Total Weighted Tardiness Objective. Comput Oper Res. 2015;66:44–57.

18. Engin O, Güçlü A. A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Appl Soft Comput. 2018;72:166–176.

19. Gao L, Li X, Wen X, Lu C, Wen F. A hybrid algorithm based on a new neighborhood structure evaluation method for job shop scheduling problem. Comput Ind Eng. 2015;88:417–429.

20. Sels V, Vanhoucke M. A hybrid single and dual population search procedure for the job shop scheduling problem. Eur J Oper Res. 2011;215(3):512–523.

21. Zhang C, Rao Y, Li P. An effective hybrid genetic algorithm for the job shop scheduling problem. Int J Adv Manuf Technol. 2008;39(9–10):965–974.

22. Van Laarhoven PJM, Aarts EHL, Lenstra JK. Job shop scheduling by simulated annealing. Oper Res. 1992;40(1):113–125.

23. Dell'Amico M, Trubian M. Applying tabu search to the job-shop scheduling problem. Ann Oper Res. 1993;41(3):231–252.

24. Nowicki E, Smutnicki C. A Fast Taboo Search Algorithm for the Job Shop Problem. Manag Sci. 1996;42(6):797–813.

25. Balas E, Vazacopoulos A. Guided Local Search with Shifting Bottleneck for Job Shop Scheduling. Manag Sci. 1998;44:262–275.

26. Matsuo H, Suh C, Sullivan R. A Controlled Search Simulated Annealing Method for the General Job-Shop Scheduling Problem. 1988.

27. Zhang CY, Li PG, Guan ZL, Rao YQ. A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem. Comput Oper Res. 2007;34(11):3229–3242.

28. Fisher H, Thompson GL. Probabilistic Learning Combinations of Local Job-Shop Scheduling Rules. In: Industrial Scheduling. Englewood Cliffs, NJ: Prentice-Hall; 1963. p. 225–251.

29. Lawrence S. Supplement to Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques. Graduate School of Industrial Administration, Carnegie-Mellon University. 1984;4(7):4411–4417.

30. Asadzadeh L. A local search genetic algorithm for the job shop scheduling problem with intelligent agents. Comput Ind Eng. 2015;85(C):376–383.

31. Lin C, Zhang Q, Fei T, Ni K, Yang C. A novel search algorithm based on waterweeds reproduction principle for job shop scheduling problem. Int J Adv Manuf Technol. 2016;84(1–4):405–424.

32. Pezzella F, Merelli E. A tabu search method guided by shifting bottleneck for the job shop scheduling problem. Eur J Oper Res. 2000;120(2):297–310.

