Open-shop scheduling, or the open-shop scheduling problem (OSSP), is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job-scheduling problem, we are given n jobs of varying processing times, which need to be scheduled on m machines; in the open-shop variant, the order in which each job visits the machines may be arbitrary.
The open-shop scheduling problem can be solved in polynomial time for instances that have only two workstations or only two jobs. It may also be solved in polynomial time when all nonzero processing times are equal: in this case the problem becomes equivalent to edge coloring a bipartite graph that has the jobs and workstations as its vertices, and that has an edge for every job-workstation pair that has a nonzero processing time. The color of an edge in the coloring corresponds to the segment of time at which a job-workstation pair is scheduled to be processed. Because the line graphs of bipartite graphs are perfect graphs, bipartite graphs may be edge-colored in polynomial time.
For three or more workstations, or three or more jobs, with varying processing times, open-shop scheduling is NP-hard (source: Wikipedia).
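The unit-processing-time case described above can be made concrete. The sketch below is not part of this repository; it is a self-contained illustration of bipartite edge coloring via alternating-path (Kempe chain) recoloring, where jobs and machines are vertices, each nonzero operation is an edge, and each color is one time slot of the schedule:

```python
from collections import defaultdict

def edge_color_bipartite(edges):
    """Properly color the edges of a bipartite graph with Delta colors
    (Delta = max degree) using alternating-path recoloring.
    For unit-time OSSP: vertices = jobs/machines, edges = operations,
    color = time slot in which the operation runs."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    delta = max(deg.values())
    used = defaultdict(dict)        # vertex -> {color: neighbour}
    coloring = {}                   # frozenset({u, v}) -> color

    def flip(start, a, b):
        """Swap colors a/b along the alternating path from `start`,
        so that color a becomes free at `start`."""
        path, x, want = [], start, a
        while want in used[x]:
            y = used[x][want]
            path.append((x, y, want))
            x, want = y, (b if want == a else a)
        for x, y, c in path:        # remove the old colors first...
            del used[x][c]
            del used[y][c]
        for x, y, c in path:        # ...then reinsert them swapped
            d = b if c == a else a
            used[x][d] = y
            used[y][d] = x
            coloring[frozenset((x, y))] = d

    for u, v in edges:
        a = next(c for c in range(delta) if c not in used[u])
        b = next(c for c in range(delta) if c not in used[v])
        if a != b:                  # make color a free at v as well
            flip(v, a, b)
        used[u][a] = v
        used[v][a] = u
        coloring[frozenset((u, v))] = a
    return coloring
```

For instance, on the complete bipartite graph of 3 jobs and 3 machines (every pair has a unit operation), the coloring uses 3 colors, i.e. a makespan of 3 time slots.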
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions (source: Wikipedia).
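To illustrate the pheromone mechanism described above, here is a minimal Ant System sketch for a small symmetric TSP instance. This is illustrative only and is not the code used in this project; the parameter names (alpha, beta, rho) follow common ACO convention:

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal Ant System for a symmetric TSP distance matrix `dist`."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]           # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = sorted(unvisited)
                # choice probability ~ pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        for i in range(n):                        # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for length, tour in tours:                # deposit, stronger on short tours
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

The same construct-evaluate-deposit loop carries over to OSSP, with a tour replaced by an ordering of operations and the tour length replaced by the makespan.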
For more information about ACO or computational intelligence, see my repository on computational intelligence.
Create a directory and download the project into it.
Create a virtual environment in that directory:
virtualenv ACO_project
Activate the virtual environment to install the project requirements:
source ACO_project/bin/activate
Move to the project directory and install the requirements:
pip install -r requirements.txt
Now you are ready to execute and test the project.
The project consists of 3 main files:
- main.py → the file to execute to start the program
- ACO.py → the file where the ACO optimization algorithm is implemented
- OSSP.py → the file where the problem representation is implemented
Consequently, the file to run to start the program is main.py. All the files are thoroughly commented, so it is easy to understand how they work; however, for any clarification, open a discussion and I will help you.
The various instances for the OSSP problem are available in the test_instances folder.
The utils folder contains a script that allows you to create ready-to-run instances by downloading them from sites such as Taillard's instances. It is also possible to use instances from other websites (see the description of the instances_creator.py file).
The main directory also contains a file called params_optimization.py, which searches for the parameter values that yield the optimal solution for a given instance.
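The actual params_optimization.py may work differently; below is a minimal sketch of one common approach to this task, an exhaustive grid search, with a toy evaluation function standing in for a full ACO run:

```python
import itertools

def grid_search(evaluate, grid):
    """Evaluate every parameter combination in `grid`
    (a dict: name -> list of candidate values) and return the best.
    `evaluate` maps a params dict to a cost, e.g. the makespan
    obtained by one ACO run with those parameters."""
    names = list(grid)
    best_cost, best_params = float("inf"), None
    for combo in itertools.product(*(grid[name] for name in names)):
        params = dict(zip(names, combo))
        cost = evaluate(params)
        if cost < best_cost:
            best_cost, best_params = cost, params
    return best_cost, best_params
```

Because ACO is stochastic, in practice each combination would be evaluated over several seeded runs and the average (or best) makespan used as the cost.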
In optimization problems such as OSSP, JSSP, TSP, NPP, and so on, it is customary to perform local optimization after the main optimization algorithm. The main local optimization algorithm used in these situations is local search. In this case I refine the solution using a variant called Iterated Local Search (ILS), which applies local search in its 'best improvement' version.
If ACO does not reach the lower bound right away, this additional optimization algorithm can improve your results.
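As a generic sketch (the repository's own implementation may differ), Iterated Local Search alternates best-improvement local search with a small random perturbation of the incumbent. Here it is shown on a toy permutation objective, with a swap neighborhood:

```python
import random

def best_improvement_ls(sol, cost):
    """Best-improvement local search over the swap neighbourhood:
    scan all pairwise swaps, apply the single best one, repeat."""
    while True:
        base = cost(sol)
        best_c, best_s = base, sol
        for i in range(len(sol)):
            for j in range(i + 1, len(sol)):
                cand = sol[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = cost(cand)
                if c < best_c:
                    best_c, best_s = c, cand
        if best_c == base:            # no improving swap -> local optimum
            return sol
        sol = best_s

def iterated_local_search(init, cost, num_tries=20, seed=0):
    """ILS: optimise, then repeatedly perturb and re-optimise,
    keeping the best solution found across `num_tries` restarts."""
    rng = random.Random(seed)
    best = best_improvement_ls(list(init), cost)
    for _ in range(num_tries):
        pert = best[:]
        for _ in range(2):            # perturbation: two random swaps
            i, j = rng.randrange(len(pert)), rng.randrange(len(pert))
            pert[i], pert[j] = pert[j], pert[i]
        cand = best_improvement_ls(pert, cost)
        if cost(cand) < cost(best):
            best = cand
    return best
```

For OSSP, `cost` would be the makespan of the schedule induced by the permutation of operations, and `num_tries` plays the same role as the variable of that name mentioned below.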
You can select the number of generations of the local search with a dedicated variable (num_tries) in the main.py file.
To view solution improvements as generations go by, you can print a chart showing the minimum, average, and maximum solution costs for each generation. To see the plot, set the variable plot_cycles to True in main.py.
To make it easier to evaluate the results of the optimization algorithm, it is also possible to graphically view the Gantt chart of the found schedule.
To make the operation of the code easier to interpret, everything is already set up to execute and solve an example instance with 4 machines and 4 jobs.
To start the program it is therefore sufficient to run the main.py script with:
python3.8 main.py