algorithm, mutation, genetic, particle-swarm, crossover

Can anyone explain how this hybrid PSO-GA differs from a normal GA?


Does this code have mutation, selection, and crossover, just like the original genetic algorithm?

Since this is a hybrid algorithm (i.e. PSO with GA), does it use all the steps of the original GA or skip some of them? Please do tell me. I am new to this and still trying to understand. Thank you.

%%% Hybrid GA and PSO code

function [gbest, gBestScore, all_scores] = QAP_PSO_GA(CreatePopFcn, FitnessFcn, UpdatePosition, ...
                                        nCity, nPlant, nPopSize, nIters)
    % Set algorithm parameters
    constant = 0.95;
    c1 = 1.5;       %1.4944;    %2;
    c2 = 1.5;       %1.4944;    %2;
    w = 0.792 * constant;
    % Allocate memory and initialize
    gBestScore = inf;
    all_scores = inf * ones(nPopSize, nIters);
    x = CreatePopFcn(nPopSize, nCity);
    v = zeros(nPopSize, nCity);
    pbest = x;
    % evaluate initial personal-best costs
    cost_p = inf * ones(1, nPopSize);
    for i=1:nPopSize
        cost_p(i) = FitnessFcn(pbest(i, 1:nPlant));
    end
    lbest = update_lbest(cost_p, pbest, nPopSize);
    for iter = 1 : nIters    
        if mod(iter,1000) == 0
            % GA step: pair each particle with a random partner and apply
            % arithmetic-mean crossover to the personal bests
            parents = randperm(nPopSize);
            for i = 1:nPopSize
                x(i,:) = (pbest(i,:) + pbest(parents(i),:))/2;
%                v(i,:) = pbest(parents(i),:) - x(i,:);
%                v(i,:) = (v(i,:) + v(parents(i),:))/2;
            end

        else
            % Update velocity
            v = w*v + c1*rand(nPopSize,nCity).*(pbest-x) + c2*rand(nPopSize,nCity).*(lbest-x);
            % Update position
            x = x + v;
            x = UpdatePosition(x);
        end
        % Update pbest
        cost_x = inf * ones(1, nPopSize);
        for i=1:nPopSize
            cost_x(i) = FitnessFcn(x(i, 1:nPlant));
        end

        s = cost_x<cost_p;
        cost_p = (1-s).*cost_p + s.*cost_x;
        s = repmat(s',1,nCity);
        pbest = (1-s).*pbest + s.*x;
        % update lbest
        lbest = update_lbest(cost_p, pbest, nPopSize);
        % record fitness history
        all_scores(:, iter) = cost_x;
        % update global best
        [cost,index] = min(cost_p);
        if (cost < gBestScore) 
            gbest = pbest(index, :);
            gBestScore = cost;
        end

        % draw current fitness
        figure(1);
        plot(iter,min(cost_x),'cp','MarkerEdgeColor','k','MarkerFaceColor','g','MarkerSize',8)
        hold on

        disp(['Best fitness: ', num2str(min(cost_x))]);

    end
end
% Function to update lbest (best pbest within each particle's ring neighborhood)
function lbest = update_lbest(cost_p, x, nPopSize)
    sm(1, 1)= cost_p(1, nPopSize);
    sm(1, 2:3)= cost_p(1, 1:2);
    [cost, index] = min(sm);
    if index==1
        lbest(1, :) = x(nPopSize, :);
    else
        lbest(1, :) = x(index-1, :);
    end
    for i = 2:nPopSize-1
        sm(1, 1:3)= cost_p(1, i-1:i+1);
        [cost, index] = min(sm);
        lbest(i, :) = x(i+index-2, :);
    end
    sm(1, 1:2)= cost_p(1, nPopSize-1:nPopSize);
    sm(1, 3)= cost_p(1, 1);
    [cost, index] = min(sm);
    if index==3
        lbest(nPopSize, :) = x(1, :);
    else
        lbest(nPopSize, :) = x(nPopSize-2+index, :);
    end    
end

Solution

  • If you are new to optimization, I recommend you first study each algorithm separately; then you may study how GA and PSO can be combined. You will also need basic mathematical skills to understand the operators of the two algorithms and to test their efficiency (this is what really matters).

    This code chunk is responsible for parent selection and crossover:

                parents = randperm(nPopSize);
                for i = 1:nPopSize
                    x(i,:) = (pbest(i,:) + pbest(parents(i),:))/2;
    %                v(i,:) = pbest(parents(i),:) - x(i,:);
    %                v(i,:) = (v(i,:) + v(parents(i),:))/2;
                end 
    

    It is not immediately obvious from the code how selection is done (I have no experience with Matlab), but randperm(nPopSize) simply returns a random permutation of 1..nPopSize, so each particle i is paired with a uniformly random partner parents(i); there is no fitness-biased selection here.
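    To see what this pairing does concretely, here is a minimal, self-contained sketch (the population values are invented for illustration, not taken from the original code) of the same randperm pairing and arithmetic-mean crossover:

        % toy population: 4 particles, 3 dimensions (values made up for the example)
        pbest = [1 2 3;
                 4 5 6;
                 7 8 9;
                 2 4 6];
        nPopSize = size(pbest, 1);

        parents = randperm(nPopSize);    % random permutation, e.g. [3 1 4 2]
        x = zeros(size(pbest));
        for i = 1:nPopSize
            % each offspring is the midpoint of two parents' personal bests
            x(i,:) = (pbest(i,:) + pbest(parents(i),:)) / 2;
        end
        disp(x)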

    And this is the code that is responsible for updating the velocity and position of each particle:

            % Update velocity
            v = w*v + c1*rand(nPopSize,nCity).*(pbest-x) + c2*rand(nPopSize,nCity).*(lbest-x);
            % Update position
            x = x + v;
            x = UpdatePosition(x); 
    

    This version of the velocity-update strategy uses what is called the inertia weight w, which basically means we preserve part of each particle's velocity history rather than completely recomputing it.
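    As a minimal numeric illustration (toy scalar values of my own, not taken from the code above), one velocity-update step decomposes into three terms:

        % one particle, one dimension, invented values
        w  = 0.792 * 0.95;    % inertia weight, same constants as the code above
        c1 = 1.5;  c2 = 1.5;
        v = 2.0;  x = 5.0;  pbest = 4.0;  lbest = 3.0;

        v_new = w*v ...                      % inertia: memory of the previous velocity
              + c1*rand*(pbest - x) ...      % cognitive pull toward the particle's own best
              + c2*rand*(lbest - x);         % social pull toward the neighborhood best
        x_new = x + v_new;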

    It is worth mentioning that the velocity update is performed far more often than crossover, which only happens every 1000th iteration.
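    Schematically, the main loop interleaves the two operators on a fixed schedule (my own paraphrase of the code above, with the bookkeeping elided):

        nIters = 3000;                    % toy horizon for the sketch
        for iter = 1:nIters
            if mod(iter, 1000) == 0
                % GA step: random pairing + arithmetic-mean crossover of pbests
            else
                % PSO step: inertia-weighted velocity and position update
            end
            % pbest / lbest / gbest bookkeeping happens on every iteration
        end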