Projection type neural network and its convergence analysis

  • Journal: Journal of Control Theory and Applications
  • File size: 397 KB
  • Authors: Youmei LI, Feilong CAO
  • Affiliation: Department of Computer Science, China Jiliang University
  • Updated: 2020-12-06
Paper overview

Journal of Control Theory and Applications 3 (2006) 286-290

Projection type neural network and its convergence analysis

Youmei LI (1), Feilong CAO (2)
(1. Department of Computer Science, Shaoxing College of Arts and Sciences, Shaoxing Zhejiang 312000, China; 2. China Jiliang University, Hangzhou Zhejiang 310018, China)

Abstract: The projection type neural network for optimization problems has advantages over other networks: fewer parameters, a lower-dimensional search space, and a simple structure. In this paper, by properly constructing a Lyapunov energy function, we prove the global convergence of this network when it is used to optimize a continuously differentiable convex function defined on a closed convex set. The result establishes the broad applicability of the network. Several numerical examples are given to verify the efficiency of the network.

Keywords: Neural network; Convex programming; Global convergence; Equilibrium points

Received 25 January 2005; revised 14 April 2005. This work was supported by the National Natural Science Foundation of China (No. 60473034).

1 Introduction

Recurrent neural networks are preferred optimization solvers due to their swiftness and validity, and have been extensively investigated in past decades. A few neural networks exist for convex programming or its special cases [1~3]. However, their applications are limited by some drawbacks, such as the difficulty of parameter setting and the stringent conditions under which convergence of the network can be guaranteed.

Recently, a projection-type neural network was proposed in [4] to solve nonlinear optimization problems with a continuously differentiable objective function and bound constraints. This model has some advantages over others in network computation and implementation, since it has fewer variable parameters and, under suitable conditions, the equilibrium points of the network correspond to the solutions of the optimization problem.

Regrettably, however, the model has some limitations when applied to convex optimization problems, for its global convergence requires the objective function to be a strictly convex quadratic function and the feasible region to be bounded on both sides. In this paper, we further generalize the projection-type neural network of [4] in two aspects: 1) the feasible region is any closed nonempty convex set, and need not be a bound-constrained region; 2) the objective function is only assumed to be convex. On this basis, the model is globally convergent for convex programming. Obviously, the model remains suitable for the special cases, such as linear programming, convex quadratic programming, and strictly convex programming.

2 Projection-type neural network model

Consider the following convex programming problem:

min{E(x), x ∈ Ω ⊂ Rⁿ},  (1)

where Ω is a closed, nonempty convex subset of Rⁿ, and E(x) is a continuously differentiable convex function of x ∈ Ω. Let ∇E(x) denote the gradient vector of E(x) at the point x. In this paper, we always assume the minimizer set Ω* of problem (1) is nonempty.

In [4], a projection-type neural network is presented, which can be described by the following differential equation in terms of the state vector x:

τ dx/dt = −x + P_Ω(x − α∇E(x)),  (2)

where α and τ are positive constants, and the activation function P_Ω is defined by

P_Ω(x) = argmin_{v ∈ Ω} ‖x − v‖,

where ‖·‖ is a vector norm. That is, P_Ω is the nearest-point projection of Rⁿ onto Ω. Let F(x) = −x + P_Ω(x − α∇E(x)), and denote the set of all equilibrium points of model (2) by Ωᵉ.

For the convex optimization problem (1), it is well known [5] that x* is an optimizer if and only if

(x − x*)ᵀ∇E(x*) ≥ 0, ∀x ∈ Ω.

Based on monotone variational inequality theory, this can be expressed equivalently as: x* is a fixed point of the projection mapping P_Ω(x − α∇E(x)), that is,

x* = P_Ω(x* − α∇E(x*)).

Hence the optimal solution set of problem (1) exactly equals the equilibrium set of model (2).
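To make the fixed-point characterization above concrete, here is a minimal numerical sketch on an illustrative problem that is not from the paper: E(x) = ½‖x − c‖² over the box Ω = [0, 1]², whose projection is coordinate-wise clipping. The data c and the choice α = 0.8 are assumptions made only for illustration.

```python
import numpy as np

# Numerical check of the fixed-point characterization:
# x* solves (1) iff x* = P_Omega(x* - alpha * grad E(x*)).
# Illustrative problem (an assumption, not from the paper):
# E(x) = 0.5 * ||x - c||^2 over the box Omega = [0, 1]^2.

c = np.array([2.0, -0.5])   # unconstrained minimizer, lying outside the box
alpha = 0.8                 # any fixed alpha > 0 works for this check

def grad_E(x):
    return x - c            # gradient of 0.5 * ||x - c||^2

def proj(x):
    return np.clip(x, 0.0, 1.0)   # P_Omega: coordinate-wise clipping

def F(x):
    # Right-hand side of model (2) with tau = 1:
    # F(x) = -x + P_Omega(x - alpha * grad E(x))
    return -x + proj(x - alpha * grad_E(x))

x_star = proj(c)                   # constrained optimizer: (1.0, 0.0)
print(F(x_star))                   # ~[0, 0]: x* is an equilibrium of (2)
print(F(np.array([0.5, 0.5])))     # nonzero: a non-optimal point is not
```

Since the equilibria of model (2) are exactly the zeros of F, the first printout being numerically zero and the second nonzero illustrates the stated equivalence between optimizers of (1) and equilibria of (2).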
Furthermore, because Ω is a closed convex set, the projection operator satisfies

(u − P_Ω(u))ᵀ(P_Ω(u) − v) ≥ 0, ∀u ∈ Rⁿ, v ∈ Ω,
‖P_Ω(x) − P_Ω(y)‖² ≤ (P_Ω(x) − P_Ω(y))ᵀ(x − y) ≤ ‖x − y‖².  (3)

Setting u = x − α∇E(x) and v = x in the first inequality of (3), and noting that P_Ω(x) = x for any x ∈ Ω, it immediately follows that F(x)ᵀ∇E(x) ≤ 0. By the definition of feasible directions, F(x) = −x + P_Ω(x − α∇E(x)) is therefore a feasible descent direction. It should also be pointed out that P_Ω is Lipschitz continuous (in fact, with Lipschitz constant 1).

From a mathematical point of view, the characteristics of an optimization neural network depend on the optimization technique employed. The prevalent and conventional methods are gradient methods, which involve constructing an appropriate computational energy function for the optimization problem and then designing a neural network model that performs the gradient descent process of that energy function.

3 Main results

Lemma 1: Suppose x* ∈ Ω is an optimal solution to problem (1). Then for all x ∈ Ω,

(α∇E(x) + x − x*)ᵀ(x − P_Ω(x − α∇E(x))) ≥ ‖x − P_Ω(x − α∇E(x))‖².

Lemma 2: If ∇E(x) is locally Lipschitzian on Ω, then for arbitrary x(0) = x₀ ∈ Ω, model (2) has a unique solution x(t, x₀).

Lemma 3: Ω is an invariant set of model (2); that is, if x₀ ∈ Ω, then x(t, x₀) ∈ Ω.

Lemma 4: The solution x(t, x₀) of model (2) with initial condition x(0, x₀) = x₀ ∈ Ω is bounded for t ∈ [0, t*(x₀)).

Theorem 1: Model (2) for the convex programming problem (1) is globally convergent. That is, for any x₀ ∈ Ω, the solution x(t, x₀) converges to an exact optimal solution of problem (1).

The proofs can be found in the last section. Obviously, the strict convexity condition on the objective function has been dropped, and the feasible region has been extended to an arbitrary closed convex set.

Remark 1: We can see from the theorem that the trajectory of model (2) always converges to an optimal solution, whether Ω* is finite or not. Starting from different initial points, the trajectory may converge to different optimal solutions. Simulation examples 1 and 2 illustrate this clearly.

Remark 2: Under certain simple constraints, the projection operator P_Ω can be described explicitly, which makes model (2) readily implementable in hardware, as in the following instances:

Case 1: Ω = {x ∈ Rⁿ | a ≤ x ≤ b}. Here P_Ω(x) = (g₁(x₁), g₂(x₂), …, g_n(x_n))ᵀ with

g_i(x_i) = a_i if x_i < a_i; x_i if a_i ≤ x_i ≤ b_i; b_i if b_i < x_i.

Case 2: Ω = {x ∈ Rⁿ | ‖x‖ ≤ r}, r > 0. Here

P_Ω(x) = x if ‖x‖ ≤ r; r x/‖x‖ if ‖x‖ > r.

4 Simulation experiments

In this section, we use the Euler method to obtain the discrete counterpart of model (2) for solving a few convex programming instances, in which the objective functions may not be strictly convex and the feasible regions may not be bounded. We simply let α = 1 and τ = 1. The discrete neural network is described by

x(t + Δt) = x(t) + Δt(−x(t) + P_Ω(x(t) − ∇E(x(t)))),  (4)

where 0 < Δt < 1 is a step parameter. The following examples illustrate various aspects of model (2) for convex programming problems.

Example 1: Consider the following convex programming problem:

min E(x) = x₂²/(2x₁), s.t. x₁ ≥ 1.

It is obvious that the feasible region of this problem is unbounded and that the optimal solutions are (x₁, 0), where x₁ is any number greater than or equal to one.
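The following minimal sketch runs the discrete model (4) on Example 1, taking the objective and constraint as stated above; here P_Ω reduces to clipping the first coordinate from below. The step size Δt = 0.1, the iteration count, and the reuse of the two initial states reported for Fig. 1 are assumptions for illustration.

```python
import numpy as np

# Euler iteration (4) for Example 1 (alpha = tau = 1):
# E(x) = x2^2 / (2*x1) over Omega = {x : x1 >= 1}.
# Step size dt and number of steps are illustrative assumptions.

def E(x):
    return x[1]**2 / (2 * x[0])

def grad_E(x):
    x1, x2 = x
    return np.array([-x2**2 / (2 * x1**2), x2 / x1])

def proj(x):
    # P_Omega for Omega = {x : x1 >= 1}: clip the first coordinate only
    return np.array([max(x[0], 1.0), x[1]])

def run(x0, dt=0.1, steps=400):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + proj(x - grad_E(x)))   # one step of (4)
    return x

# Initial states reported for Fig. 1(a) and Fig. 1(b); each run should
# approach some optimizer (x1, 0) with x1 >= 1 and objective value -> 0.
for x0 in ([1.1351, 2.5800], [1.0, 2.0]):
    x = run(x0)
    print(np.round(x, 4), round(E(x), 6))
```

Note that different initial states settle at different points (x₁, 0) of the unbounded optimizer set, consistent with Remark 1.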
The minimum objective function value is E(x*) = 0. Although the optimizer set is infinite, the trajectory still converges to one optimal solution. The evolution of x(t) and E(x(t)) under the discrete neural network model (4) is shown in Fig. 1.

As shown in Fig. 1 and Fig. 2, two different randomly selected initial states in Ω produce two different minimizers of the related optimization problem. The neural network can always produce an optimal solution of the convex programming problem, no matter whether the feasible region and the optimal solution set are bounded or not. This coincides with the conclusion of Theorem 1.

[Fig. 1: Tracks of the objective function and the state variables from two different initial points in the feasible region, which is unbounded; the optimization problem has infinitely many optimizers.]

[Fig. 2: Tracks of the objective function and the state variables from two different initial points in the feasible region, which is bounded.]

In Fig. 1(a), the initial points of x₁ and x₂ are 1.1351 and 2.5800, respectively; in Fig. 1(b), they are 1 and 2. In Fig. 2(a), the initial points corresponding to x₁, x₂ and x₃ are 0.6428, −0.1106 and 0.2309, while in Fig. 2(b), they are −0.6475, −0.1886 and 0.8709.

Example 2: This example illustrates the global convergence of model (2) for solving the following bound-constrained convex programming problem (a runnable sketch of this example is given after Section 6 below):

min E(x) = (x₁ − x₂)² + x₃² − x₃, s.t. −1 ≤ x_i ≤ 1, i = 1, 2, 3.

The optimal solution set is S* = {x | x₁ = x₂, x₃ = 0.5}, and the optimal objective function value is E(x*) = −0.25. We randomly select feasible initial states in Ω to observe the global dynamic behavior of model (2). Fig. 2 shows the evolution of the state vector x(t) and the objective function value E(x(t)).

5 Conclusions

In this paper, we have obtained the global convergence of model (2) for convex programming problems, which expands its applicability significantly. The simulations show that, whether or not the feasible set and the optimal solution set are bounded, the projection type neural network is globally convergent to an exact optimal solution of problem (1).

6 Some crucial proofs

Proof of Lemma 3: Let F(x) = x − α∇E(x). Using …

… along the solution of model (2), by Lemma 1, we have dV(x(t, x₀))/dt …
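As referenced in Example 2 above, here is a minimal sketch of the discrete iteration (4) applied to that example, using the Case 1 box projection from Remark 2 and the two initial states reported for Fig. 2; the step size Δt = 0.1 and the iteration count are illustrative assumptions.

```python
import numpy as np

# Euler iteration (4) for Example 2 (alpha = tau = 1):
# E(x) = (x1 - x2)^2 + x3^2 - x3 over the box Omega = [-1, 1]^3.
# Step size dt and number of steps are illustrative assumptions.

def E(x):
    return (x[0] - x[1])**2 + x[2]**2 - x[2]

def grad_E(x):
    d = x[0] - x[1]
    return np.array([2 * d, -2 * d, 2 * x[2] - 1.0])

def proj(x):
    return np.clip(x, -1.0, 1.0)    # Case 1 box projection of Remark 2

def run(x0, dt=0.1, steps=300):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + proj(x - grad_E(x)))   # one step of (4)
    return x

# Initial states reported for Fig. 2(a) and Fig. 2(b); both runs should
# land in S* = {x : x1 = x2, x3 = 0.5} with E(x*) = -0.25 (Theorem 1),
# generally at different points of S* for different initial states.
for x0 in ([0.6428, -0.1106, 0.2309], [-0.6475, -0.1886, 0.8709]):
    x = run(x0)
    print(np.round(x, 4), round(E(x), 4))
```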
