
[Intelligent Optimization Algorithms] MATLAB Code for Solving Single-Objective Optimization Problems with a Hybrid Genetic–Particle Swarm Algorithm

Date: 2022-02-25


天天Matlab (TT_Matlab)

About the author: specializes in MATLAB simulation in intelligent optimization algorithms, neural-network prediction, signal processing, cellular automata, image processing, path planning, UAVs, and related fields. For complete MATLAB code or custom programs, contact QQ 1575304183.


1 Introduction

The genetic algorithm (Genetic Algorithm, GA) is a stochastic search algorithm inspired by artificial life that simulates the process of biological evolution. Research on the theory and applications of GAs has drawn wide attention from researchers, and their application domains have expanded broadly. When a GA is used for function optimization, the choice of its control parameters — the crossover probability Pc and the mutation probability Pm — strongly affects performance. The commonly accepted value ranges are 0.4–0.99 for Pc and 0.0001–0.1 for Pm, but the rationale behind these ranges has not been studied rigorously. For a class of functions that can be expanded as power series, this work therefore studies the value ranges of Pc and Pm systematically, through extensive experiments and data analysis, taking as the objective the minimum number of iterations the GA needs to reach the global optimum. The conclusions are:

(1) Based on analysis of a large body of experimental data, and with the minimum number of iterations to reach the global optimum as the objective, the recommended interval for Pc is [0.6, 0.99] and the recommended interval for Pm is [0.009, 0.03].

(2) The interaction between Pc and Pm was analyzed experimentally: when Pc lies in the recommended interval, Pm has a significant effect on the number of iterations the GA needs to reach the global optimum, whereas when Pm lies in the recommended interval, the effect of Pc on that iteration count is not significant.

(3) When Pm is taken from the recommended interval, the computation the GA needs to reach the global optimum is reduced by more than a factor of 3 compared with values from the commonly accepted interval, and by more than a factor of 4 compared with values that are inside the commonly accepted interval but outside the recommended one.

Particle swarm optimization (PSO) is an optimization algorithm with a simple principle that is easy to implement, and it has attracted broad attention at home and abroad since it was proposed; many improved variants now exist. PSO converges quickly and its solutions carry memory, but its global search ability is weaker than that of the GA. This work therefore proposes a new hybrid particle swarm–genetic algorithm that combines the strengths of both algorithms while avoiding their weaknesses. The proposed hybrid algorithm is applied to standard benchmark functions and compared against plain PSO and plain GA; the experimental results verify the effectiveness of the new hybrid algorithm.
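The hybrid scheme described above — a PSO velocity/position update followed by GA crossover and mutation in each generation, with personal/global bests providing the memory — can be sketched as follows. This is a minimal illustrative Python sketch, not the thesis's exact implementation: the function name `hybrid_pso_ga`, the PSO coefficients `w`, `c1`, `c2`, and the use of arithmetic crossover and uniform mutation are all assumptions; `pc` and `pm` are simply chosen inside the recommended intervals [0.6, 0.99] and [0.009, 0.03].

```python
import random

def sphere(x):
    """Sphere test function; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def hybrid_pso_ga(f, dim=2, n_particles=30, iters=200,
                  w=0.7, c1=1.5, c2=1.5, pc=0.8, pm=0.02,
                  lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    # Initialize swarm positions, velocities, and the best-so-far memory.
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal best positions
    pbest = [f(x) for x in X]              # personal best fitnesses
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]           # global best

    for _ in range(iters):
        # --- PSO step: standard velocity and position update ---
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
        # --- GA step: arithmetic crossover on adjacent pairs ---
        for i in range(0, n_particles - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                for d in range(dim):
                    x1, x2 = X[i][d], X[i + 1][d]
                    X[i][d] = a * x1 + (1 - a) * x2
                    X[i + 1][d] = a * x2 + (1 - a) * x1
        # --- GA step: uniform mutation, coordinate by coordinate ---
        for i in range(n_particles):
            for d in range(dim):
                if rng.random() < pm:
                    X[i][d] = rng.uniform(lo, hi)
        # --- Update the personal and global best memory ---
        for i in range(n_particles):
            fi = f(X[i])
            if fi < pbest[i]:
                pbest[i], P[i] = fi, X[i][:]
                if fi < gbest:
                    gbest, G = fi, X[i][:]
    return G, gbest
```

The pbest/gbest memory is what lets the GA operators explore aggressively without losing the best solution found so far, which is the "combine the strengths" idea in the text.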

2 Code Excerpt

function [y, or1, or2, dmse] = predictor(x, dmodel)
%PREDICTOR  Predictor for y(x) using the given DACE model.
%
% Call:   y = predictor(x, dmodel)
%         [y, or] = predictor(x, dmodel)
%         [y, dy, mse] = predictor(x, dmodel)
%         [y, dy, mse, dmse] = predictor(x, dmodel)
%
% Input
% x      : trial design sites with n dimensions.
%          For mx trial sites x:
%          If mx = 1, then both a row and a column vector is accepted,
%          otherwise, x must be an mx*n matrix with the sites stored
%          rowwise.
% dmodel : Struct with DACE model; see DACEFIT
%
% Output
% y    : predicted response at x.
% or   : If mx = 1, then or = gradient vector/Jacobian matrix of predictor,
%        otherwise, or is a vector with mx rows containing the estimated
%        mean squared error of the predictor.
%        Three or four results are allowed only when mx = 1.
% dy   : Gradient of predictor; column vector with n elements
% mse  : Estimated mean squared error of the predictor
% dmse : Gradient vector/Jacobian matrix of mse

% hbn@imm.dtu.dk
% Last update August 26, 2002

or1 = NaN;  or2 = NaN;  dmse = NaN;  % Default return values
if isnan(dmodel.beta)
  y = NaN;
  error('DMODEL has not been found')
end

[m n] = size(dmodel.S);  % number of design sites and number of dimensions
sx = size(x);            % number of trial sites and their dimension
if min(sx) == 1 & n > 1  % Single trial point
  nx = max(sx);
  if nx == n
    mx = 1;  x = x(:).';
  end
else
  mx = sx(1);  nx = sx(2);
end
if nx ~= n
  error(sprintf('Dimension of trial sites should be %d', n))
end

% Normalize trial sites
x = (x - repmat(dmodel.Ssc(1,:),mx,1)) ./ repmat(dmodel.Ssc(2,:),mx,1);
q = size(dmodel.Ysc,2);  % number of response functions
y = zeros(mx,q);         % initialize result

if mx == 1  % one site only
  dx = repmat(x,m,1) - dmodel.S;  % distances to design sites
  if nargout > 1                  % gradient/Jacobian wanted
    [f df] = feval(dmodel.regr, x);
    [r dr] = feval(dmodel.corr, dmodel.theta, dx);
    % Scaled Jacobian
    dy = (df * dmodel.beta).' + dmodel.gamma * dr;
    % Unscaled Jacobian
    or1 = dy .* repmat(dmodel.Ysc(2,:)', 1, nx) ./ repmat(dmodel.Ssc(2,:), q, 1);
    if q == 1
      % Gradient as a column vector
      or1 = or1';
    end
    if nargout > 2  % MSE wanted
      rt = dmodel.C \ r;
      u = dmodel.Ft.' * rt - f.';
      v = dmodel.G \ u;
      or2 = repmat(dmodel.sigma2,mx,1) .* repmat((1 + sum(v.^2) - sum(rt.^2))',1,q);
      if nargout > 3  % gradient/Jacobian of MSE wanted
        % Scaled gradient as a row vector
        Gv = dmodel.G' \ v;
        g = (dmodel.Ft * Gv - rt)' * (dmodel.C \ dr) - (df * Gv)';
        % Unscaled Jacobian
        dmse = repmat(2 * dmodel.sigma2',1,nx) .* repmat(g ./ dmodel.Ssc(2,:),q,1);
        if q == 1
          % Gradient as a column vector
          dmse = dmse';
        end
      end
    end
  else  % predictor only
    f = feval(dmodel.regr, x);
    r = feval(dmodel.corr, dmodel.theta, dx);
  end
  % Scaled predictor
  sy = f * dmodel.beta + (dmodel.gamma * r).';
  % Predictor
  y = (dmodel.Ysc(1,:) + dmodel.Ysc(2,:) .* sy)';

else  % several trial sites
  % Get distances to design sites
  dx = zeros(mx*m,n);  kk = 1:m;
  for k = 1 : mx
    dx(kk,:) = repmat(x(k,:),m,1) - dmodel.S;
    kk = kk + m;
  end
  % Get regression function and correlation
  f = feval(dmodel.regr, x);
  r = feval(dmodel.corr, dmodel.theta, dx);
  r = reshape(r, m, mx);
  % Scaled predictor
  sy = f * dmodel.beta + (dmodel.gamma * r).';
  % Predictor
  y = repmat(dmodel.Ysc(1,:),mx,1) + repmat(dmodel.Ysc(2,:),mx,1) .* sy;
  if nargout > 1  % MSE wanted
    rt = dmodel.C \ r;
    u = dmodel.G \ (dmodel.Ft.' * rt - f.');
    or1 = repmat(dmodel.sigma2,mx,1) .* repmat((1 + colsum(u.^2) - colsum(rt.^2))',1,q);
    if nargout > 2
      disp('WARNING from PREDICTOR. Only y and or1=mse are computed')
    end
  end
end  % of several sites

% >>>>>>>>>>>>>>>> Auxiliary function ====================

function s = colsum(x)
% Columnwise sum of elements in x
if size(x,1) == 1,  s = x;
else,               s = sum(x);
end
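The `predictor` routine above evaluates a fitted DACE (Kriging) model: the prediction is a regression part `f(x)*beta` plus a correlation-weighted part `r(x)'*gamma`. The same idea can be shown in a minimal, hypothetical Python sketch — constant regression, a Gaussian correlation with a fixed `theta`, and a tiny nugget on the diagonal for numerical stability are all illustrative assumptions here, not the toolbox's fitted model, and the function names (`fit_simple_kriging`, `predict`) are invented for this sketch:

```python
import numpy as np

def gauss_corr(theta, d):
    # Gaussian correlation: R = exp(-sum_j theta_j * d_j^2)
    return np.exp(-np.sum(theta * d**2, axis=-1))

def fit_simple_kriging(S, Y, theta):
    """S: (m, n) design sites, Y: (m, 1) responses, theta: (n,) fixed."""
    m = S.shape[0]
    D = S[:, None, :] - S[None, :, :]            # pairwise site differences
    R = gauss_corr(theta, D) + 1e-10 * np.eye(m) # correlation + nugget
    F = np.ones((m, 1))                          # constant regression f(x) = 1
    Rinv_F = np.linalg.solve(R, F)
    Rinv_Y = np.linalg.solve(R, Y)
    beta = (F.T @ Rinv_Y) / (F.T @ Rinv_F)       # GLS estimate of the mean
    gamma = np.linalg.solve(R, Y - F @ beta)     # weights of the correlation part
    return beta, gamma

def predict(x, S, theta, beta, gamma):
    r = gauss_corr(theta, x[None, :] - S)        # correlations to design sites
    return (beta + r @ gamma).item()             # f(x)*beta + r(x)'*gamma
```

Because `gamma = R^-1 (Y - F*beta)`, the predictor interpolates the design data exactly (up to the nugget), which mirrors what the MATLAB code computes with `dmodel.beta` and `dmodel.gamma` after normalizing the trial sites.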

3 Simulation Results

4 References

[1] 倪全贵. 粒子群遗传混合算法及其在函数优化上的应用 (Hybrid Particle Swarm–Genetic Algorithm and Its Application to Function Optimization). Diss. 华南理工大学 (South China University of Technology).


Some of the theory cited is drawn from online literature; in case of infringement, please contact the author for removal.
