BTH004 - Laboratory assignment 1

In this laboratory assignment you should design and implement algorithms for the multiple knapsack problem. The assignment contains two parts; one is mandatory and one is optional.

In part 1 (the mandatory part) you should design and implement two algorithms for the multiple knapsack problem:

• A (constructive) greedy algorithm.
• An improving search (neighborhood search) algorithm.

Part 2 (the optional part) concerns implementing a tabu-search algorithm (a meta-heuristic) for the considered problem.

The assignment should be conducted individually, and it will be examined through a written report (one per student). The report should fulfill the requirements specified below. The deadline for submitting the report is announced by Zhenbo Cheng.

Note that you are allowed to use any high-level programming language you find suitable, e.g., Java, C, C++, or Python.

It is highly recommended that you start working on the assignment as soon as possible. It is of particular importance that you start before the scheduled supervision time, so that you can make as much use as possible of the teacher support.

The knapsack problem

The (standard) knapsack problem, which was introduced earlier in the course, can be described in the following way. Assume that you have a set of items, each with a (positive) weight and a (positive) value, and a knapsack (or bag) with a limited weight capacity. The problem is to choose items (to include in the knapsack) so that the weight capacity of the knapsack is not violated and the value of the chosen items is as high as possible.

Mathematically, the problem can be formulated in the following way. We let I = {1, ..., n} be an index set over the n items, where item i ∈ I has a value p_i > 0 and a weight w_i > 0, and we let W > 0 denote the weight capacity of the knapsack. For item i ∈ I, the binary decision variable x_i is used to determine whether to include item i in the knapsack: x_i = 1 if the item is chosen, and x_i = 0 if it is not chosen.

The objective function of the problem is to maximize the utility of the chosen items, i.e.,

    maximize  Σ_{i∈I} p_i x_i.

All valid solutions to the problem should fulfill the weight constraint

    Σ_{i∈I} w_i x_i ≤ W,

and the values of the decision variables are restricted by the constraint set

    x_i ∈ {0, 1}  ∀ i ∈ I.

The multiple knapsack problem

The multiple knapsack problem is an extension of the standard knapsack problem in that it considers choosing items to include in m knapsacks (the standard knapsack problem considers only one knapsack).

Mathematically, the multiple knapsack problem can be formulated as follows. We let I = {1, ..., n} be an index set over the n items, where item i ∈ I has a value p_i > 0 and a weight w_i > 0. In addition, we let J = {1, ..., m} be an index set over the m knapsacks, where W_j > 0 denotes the weight capacity of knapsack j ∈ J. For item i ∈ I and knapsack j ∈ J, we let the binary decision variable x_ij determine whether to include item i in knapsack j: x_ij = 1 if item i is included in knapsack j, otherwise x_ij = 0.

The objective function of the problem is to maximize the utility of the chosen items, i.e.,

    maximize  Σ_{i∈I} Σ_{j∈J} p_i x_ij.

For each of the knapsacks, the solution space is restricted by a weight capacity constraint, which states that the total weight of the items selected for that knapsack is not allowed to exceed the weight capacity of the knapsack. This is modeled by the following constraint set (one constraint for each of the m knapsacks):

    Σ_{i∈I} w_i x_ij ≤ W_j,  j ∈ J.

In addition, it needs to be explicitly modeled that an item is not allowed to be included in more than one of the knapsacks. This is modeled by the following constraint set (one constraint for each of the n items):

    Σ_{j∈J} x_ij ≤ 1,  i ∈ I.

Finally, the values of the decision variables are restricted by the constraint set

    x_ij ∈ {0, 1},  i ∈ I, j ∈ J.

Part 1 - Greedy and neighborhood search algorithms

In the first part (the mandatory part), you should design and implement 1) a greedy algorithm and 2) a neighborhood search algorithm for the multiple knapsack problem.

Greedy algorithm

As discussed earlier in the course, a greedy algorithm is an algorithm that starts with an empty solution and iteratively adds solution components to a partial solution until a complete solution is found.

For knapsack problems, it is possible to construct greedy algorithms that are based on the relative benefit per weight unit of the considered items. We let b_i = p_i / w_i denote the relative benefit per weight unit for item i ∈ I. By comparing b_i′ = p_i′ / w_i′ and b_i′′ = p_i′′ / w_i′′ for two items i′, i′′ ∈ I, it is possible to make a greedy decision on which item to include. If b_i′ > b_i′′ (i.e., item i′ gives a higher value per weight unit than item i′′), it seems better to choose item i′ than item i′′ for some of the m knapsacks. On the other hand, if b_i′ < b_i′′, it seems better to include item i′′ than item i′ in some of the m knapsacks. If b_i′ = b_i′′, it does not matter which one is chosen, since i′ and i′′ in this case give the same value per weight unit.

A greedy algorithm for the multiple knapsack problem could be designed by iteratively adding to some knapsack the item with the highest relative benefit (i.e., i′ = arg max_{i∈I} b_i, such that i′ has not already been included in an earlier iteration).

Please note that there are algorithm details not discussed above which you need to define yourself (it is typically a good idea to test different approaches to see which performs best). For example, the termination criteria, and the choice of knapsack in case there is sufficient capacity in more than one of the knapsacks, have not been discussed above. In addition, you need to consider what to do if there is no capacity left for the item with the highest relative benefit, but there are other items with smaller weight that can still be added.
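As an illustration only, the following Python sketch shows one possible greedy design along these lines. The instance data, the tie-breaking, and the rule of always using the first knapsack with sufficient remaining capacity are assumptions made for the example, not the required design.

```python
# A minimal sketch (one possible design, not the required one) of a greedy
# algorithm for the multiple knapsack problem. Items are considered in
# non-increasing order of relative benefit b_i = p_i / w_i, and each item is
# placed in the first knapsack with enough remaining capacity (an assumption).

def greedy(values, weights, capacities):
    """values[i], weights[i]: value and weight of item i;
    capacities[j]: weight capacity of knapsack j.
    Returns assignment[i] = index of the knapsack that holds item i, or None."""
    n, m = len(values), len(capacities)
    remaining = list(capacities)
    assignment = [None] * n

    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        for j in range(m):                      # first knapsack that fits
            if weights[i] <= remaining[j]:
                assignment[i] = j
                remaining[j] -= weights[i]
                break
    return assignment


if __name__ == "__main__":
    # Small made-up instance, only to illustrate the data representation.
    values = [10, 7, 12, 4]
    weights = [5, 4, 8, 3]
    capacities = [10, 6]
    assignment = greedy(values, weights, capacities)
    total_value = sum(values[i] for i, j in enumerate(assignment) if j is not None)
    print(assignment, total_value)
```

Note that this sketch simply skips an item that does not fit in any knapsack and terminates when all items have been considered; as pointed out above, other termination criteria and knapsack-selection rules may perform better.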
Improving search (neighborhood search) algorithm

Improving search heuristics are algorithms that iteratively improve a feasible solution by making small modifications to the most recent solution.

For a solution x, we define a neighborhood N(x) as all solutions "close enough" to x (according to some criterion). If y ∈ N(x), then y is a neighbor of x.

Neighborhood search algorithms are improving search algorithms where a current solution is iteratively updated by choosing the "best" solution in the neighborhood of the current solution. An algorithm description (for a maximization problem) is provided below.

Step 0: Find a feasible (starting) solution x(0) with value (cost) c(x(0)) and set the solution index t = 0.
Step 1: Determine (all points in) the neighborhood N(x(t)).
Step 2: If c(x(t)) ≥ c(x) ∀ x ∈ N(x(t)), then stop with the locally optimal solution x(t).
Step 3: Choose x(t+1) ∈ N(x(t)) such that c(x(t+1)) ≥ c(x(t)). Set t = t + 1 and go to step 1.

As a starting solution for the neighborhood search algorithm, you could, for example, use the solution obtained by your greedy algorithm.
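As a sketch of how steps 0-3 can be organized, the following Python function takes the neighborhood as a plug-in function, so the same loop can be reused with whatever neighborhood definition you decide on (discussed below). The names value and neighbors, and the use of strict improvement to guarantee termination, are assumptions made for this sketch.

```python
# A minimal sketch of the neighborhood search loop in steps 0-3 above.
# The neighborhood is supplied as a function, so the loop is independent of
# the chosen neighborhood definition. Requiring a strictly better neighbor
# (rather than ">=") is an assumption that guarantees termination.

def neighborhood_search(start_solution, value, neighbors):
    """start_solution: a feasible solution (any representation you choose).
    value(x): the objective value c(x) of solution x.
    neighbors(x): an iterable of the feasible solutions in N(x)."""
    current = start_solution                    # step 0
    while True:
        candidates = list(neighbors(current))   # step 1
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):       # step 2: local optimum reached
            return current
        current = best                          # step 3
```

A natural way to use it is neighborhood_search(greedy_solution, value, neighbors), where greedy_solution is the solution produced by your greedy algorithm.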
A challenge is how to determine an appropriate neighborhood for the multiple knapsack problem. If the neighborhood is too large, it will be impossible to iterate over all solutions in the neighborhood; if it is too small, there is a high risk that the algorithm will get stuck in a local optimum that is not very good.

A possible way to define a neighborhood is to rotate items. As long as you have at least one item that is not included in any knapsack, you could define the neighborhood of a solution as all other feasible solutions that can be obtained by:

• Moving some item (i′) from one knapsack (m′) to another knapsack (m′′).
• Moving some other item (i′′) away from knapsack m′′ (in order to make room for item i′). In the obtained solution, i′′ is not included in any of the knapsacks.
• Moving some other item (i′′′), which is not included in any knapsack in the current solution, into knapsack m′.

If b_i′′′ > b_i′′, a better solution has been found.

When using this type of neighborhood, it is important to consider that it might be possible, via rotations, to make room for an additional item in some of the knapsacks after some item has been replaced by some other item. This needs to be explicitly included in the neighborhood definition.

You are encouraged to identify some other neighborhood that you find appropriate for the considered problem. For example, you might find it interesting to consider rotations involving more than 3 items; however, you need to consider that this increases the search space significantly.

Part 2 - Tabu-search

A problem with neighborhood search heuristics is that they eventually get stuck in a locally optimal solution. A locally optimal solution is a solution that cannot be improved by moving to any of the solutions that are sufficiently close in the search space (i.e., in the neighborhood), even though there may be solutions farther away that are better.

The ideas behind tabu-search methods, which are based on neighborhood search, are:

• It is possible to move to a solution with a worse objective value if there is no improving solution in the neighborhood (note that it is then necessary to keep track of the best solution found so far).
• The k most recent solutions are tabu, i.e., they cannot be visited (otherwise the algorithm might immediately go back to the locally optimal solution).

Tabu search algorithms (for maximization problems) can be described in a general way using the following algorithm description.

Step 0: Find a feasible (starting) solution x(0) with value (cost) c(x(0)) and set the solution index t = 0.
Step 1: Check the stopping criteria. For instance, stop if the maximum number of iterations has been reached, or if no non-tabu solution exists.
Step 2: Determine the neighborhood N(x(t)).
Step 3: Choose x(t+1) ∈ N(x(t)) which is not in the tabu list and which maximizes c(x).
Step 4: Update the tabu list (remove the oldest solution in the tabu list and add the most recent solution, i.e., x(t)), set t = t + 1, and go to step 1.

In part 2 (the optional part) you are encouraged to extend your neighborhood search algorithm into a tabu-search algorithm.
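The following Python sketch shows one way steps 0-4 could be organized, reusing the same value and neighbors functions as in the neighborhood search sketch above. The tabu tenure k, the iteration limit, storing whole solutions (assumed comparable for equality, e.g., tuples) in the tabu list, and adding the newly visited solution to that list are all assumptions made for this sketch, not the required design.

```python
from collections import deque

# A minimal sketch of the tabu-search description in steps 0-4 above.
# The tabu list holds the k most recently visited solutions; a deque with
# maxlen=k automatically drops the oldest entry when a new one is added.

def tabu_search(start_solution, value, neighbors, k=10, max_iterations=1000):
    current = start_solution                         # step 0
    best = current                                   # best solution found so far
    tabu = deque([current], maxlen=k)
    for _ in range(max_iterations):                  # step 1: iteration limit
        candidates = [x for x in neighbors(current) if x not in tabu]
        if not candidates:                           # step 1: no non-tabu neighbor exists
            break
        current = max(candidates, key=value)         # steps 2-3: best non-tabu neighbor
        tabu.append(current)                         # step 4: update the tabu list
        if value(current) > value(best):
            best = current
    return best
```

Because the algorithm may accept worsening moves, the sketch returns the best solution encountered rather than the final one.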
Requirements on report