Particle Swarm Optimization: A Powerful Technique for Solving Engineering Problems


Abstract

This chapter introduces the particle swarm optimization (PSO) algorithm and gives an overview of it. The mathematical formulation of the PSO algorithm is formally presented using the classical version, that is, the inertial version; meanwhile, PSO variants are summarized. Besides that, hybrid methods, which combine heuristic and deterministic optimization methods, are presented as well. Before the presentation of these algorithms, the reader is introduced to the main challenges of applying the PSO algorithm. Two case studies of diverse nature, one regarding PSO in its classical version and another regarding the hybrid version, are provided in this chapter, showing how handy and versatile it is to work with PSO. The former case is the optimization of a mechanical structure in a nuclear fuel bundle and the latter is the optimization of the cost function of a cogeneration system using PSO in a hybrid optimization. Finally, a conclusion is presented.

Keywords

  • PSO algorithm
  • hybrid methods
  • nuclear fuel
  • cogeneration system

1. Introduction

Maximizing gains or minimizing losses has always been a concern in engineering problems. Across diverse fields of knowledge, the complexity of optimization problems increases as science and technology develop. Examples of engineering problems that might require an optimization approach arise in energy conversion and distribution, in mechanical design, in logistics, and in the reload of nuclear reactors.

To maximize or minimize a function in order to find the optimum, there are several approaches one could take. In spite of the wide range of optimization algorithms available, no single one is considered the best for every case. An optimization method that is suitable for one problem might not be so for another; it depends on several features, for example, whether the function is differentiable and whether it is convex or concave. In order to solve a problem, one must understand different optimization methods in order to select the algorithm that best fits the problem's features.

The particle swarm optimization (PSO) algorithm, proposed by Kennedy and Eberhart [1], is a metaheuristic algorithm based on the concept of swarm intelligence, capable of solving the complex mathematical problems found in engineering [2]. It is worth noting that PSO has some advantages when compared with other optimization algorithms, since it has fewer parameters to adjust, and the ones that must be set are widely discussed in the literature [3].

2. Particle swarm optimization: an overview

In the early 1990s, several studies regarding the social behavior of animal groups were developed. These studies showed that some animals belonging to a certain group, such as birds and fish, are able to share information among themselves, and such capability confers on these animals a great survival advantage [4]. Inspired by these works, Kennedy and Eberhart proposed the PSO algorithm in 1995 [1], a metaheuristic algorithm appropriate for optimizing nonlinear continuous functions. The authors derived the algorithm from the concept of swarm intelligence, often observed in animal groups, such as flocks and shoals.

In order to explain how swarm behavior inspired the formulation of an optimization algorithm to solve complex mathematical problems, a discussion of the behavior of a flock is presented. A swarm of birds flying over a place must find a point to land, and the definition of which point the whole swarm should land on is a complex problem, since it depends on several issues, such as maximizing the availability of food and minimizing the risk from predators. In this context, one can understand the movement of the birds as a choreography: the birds move synchronously for a period until the best place to land is found and the whole flock lands at once.

In the given example, the movement of the flock only happens as described because all the swarm members are able to share information among themselves; otherwise, each animal would most likely land at a different point and at a different time. The studies regarding the social behavior of animals from the early 1990s, mentioned above, pointed out that all birds of a swarm searching for a good landing point are able to know the best point found so far by any of the swarm's members. By means of this, each member of the swarm balances its individual experience with the swarm's knowledge, known as social knowledge. One may notice that the criterion for assessing whether a point is good or not is, in this case, the survival conditions found at a possible landing point, such as those mentioned earlier in this text.

The problem of finding the best point to land, as described, features an optimization problem. The flock must identify the best point, for example, in terms of latitude and longitude, in order to maximize the survival conditions of its members. To do so, each bird flies around searching for and assessing different points using several survival criteria at the same time, and each one has the advantage of knowing the best location found so far by the whole swarm.

Kennedy and Eberhart, inspired by the social behavior of birds, which grants them great survival advantages when solving the problem of finding a safe point to land, proposed an algorithm called PSO that could mimic this behavior. The inertial version, also known as the classical version, of the algorithm was proposed in 1995 [1]. Since then, other versions have been proposed as variations of the classical formulation, such as the linear-decreasing inertia weight [5], the constriction factor weight [6], and the dynamic inertia and maximum velocity reduction, also in Ref. [6], besides hybrid models [7] and even quantum-inspired optimization techniques that can be applied to PSO [8]. This chapter presents only the inertial model of PSO, since to better understand the derivations of PSO, one should first understand its classical version.

The goal of an optimization problem is to determine a variable represented by a vector $X = [x_1, x_2, x_3, \ldots, x_n]$ that minimizes or maximizes, depending on the proposed optimization formulation, the function $f(X)$. The variable vector $X$ is known as the position vector; it represents a variable model and is an $n$-dimensional vector, where $n$ represents the number of variables that may be determined in a problem, for example, the latitude and the longitude in the problem of determining a landing point for a flock. The function $f(X)$ is called the fitness function or objective function; it is a function that assesses how good or bad a position $X$ is, that is, how good a bird thinks a certain landing point is after finding it, an evaluation performed in this case through several survival criteria.

Considering a swarm with $P$ particles, there is a position vector $X_i(t) = [x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}]^T$ and a velocity vector $V_i(t) = [v_{i1}, v_{i2}, v_{i3}, \ldots, v_{in}]^T$ at iteration $t$ for each particle $i$ that composes the swarm. These vectors are updated through dimension $j$ according to the following equations:

$V_{ij}(t+1) = w\,V_{ij}(t) + c_1 r_1(t)\left[pbest_{ij} - X_{ij}(t)\right] + c_2 r_2(t)\left[gbest_{j} - X_{ij}(t)\right]$ (E1)

and

$X_{ij}(t+1) = X_{ij}(t) + V_{ij}(t+1)$ (E2)

where $i = 1, 2, \ldots, P$ and $j = 1, 2, \ldots, n$.

Eq. (1) shows that there are three different contributions to a particle's movement in an iteration, so there are three terms in it, which are discussed further below. Meanwhile, Eq. (2) updates the particle's position. The parameter $w$ is the inertia weight constant, which, for the classical PSO version, is a positive constant value. This parameter is important for balancing global search, also known as exploration (when higher values are set), and local search, known as exploitation (when lower values are set). This parameter is one of the main differences between the classical version of PSO and other versions derived from it.

The first term of the velocity update equation is the product of the parameter $w$ and the particle's previous velocity, which is why it carries the particle's previous motion into the current one. Hence, for example, if $w = 1$, the particle's motion is fully influenced by its previous motion, so the particle tends to keep going in the same direction. On the other hand, if $0 \le w < 1$, such influence is reduced, which means that the particle more readily moves to other regions in the search domain. Therefore, as the inertia weight parameter is reduced, the swarm may explore more areas in the search domain, which means that the chances of finding a global optimum may increase. However, there is a price for using lower $w$ values, which is that the simulations turn out to be more time consuming [1].

The individual cognition term, the second term of Eq. (1), is calculated by means of the difference between the particle's own best position, $pbest_{ij}$, and its current position $X_{ij}(t)$. The idea behind this term is that, as the particle gets more distant from the $pbest_{ij}$ position, the difference $pbest_{ij} - X_{ij}(t)$ increases; therefore, this term increases, attracting the particle toward its own best position. The parameter $c_1$, which multiplies this term, is a positive constant known as the individual-cognition parameter, and it weighs the importance of the particle's own previous experiences. The other parameter in the second term, $r_1$, is a random value in the $[0, 1]$ range. This random parameter plays an important role, as it helps avoid premature convergence, increasing the chances of finding the most likely global optimum [1].

Finally, the third term is the social learning term. Because of it, all particles in the swarm are able to share the information of the best point achieved, $gbest_j$, regardless of which particle found it. Its form is just like that of the second term, the one regarding individual learning. Thus, the difference $gbest_j - X_{ij}(t)$ acts as an attraction of the particles toward the best point found up to iteration $t$. Similarly, $c_2$ is a social learning parameter that weighs the importance of the swarm's global learning, and $r_2$ plays exactly the same role as $r_1$.

Lastly, Figure 1 shows the PSO algorithm flowchart; one may notice that the optimization logic in it searches for minima and that all position vectors are assessed by the function $f(X)$, known as the fitness function. Besides that, Figures 2 and 3 present the update of a particle's velocity and position at an iteration $t$, for a bi-dimensional problem with variables $x_1$ and $x_2$.

Figure 1.

The PSO algorithm.

Figure 2.

The velocity vector at a t iteration as being composed by two components regarding a bi-dimensional problem.

Figure 3.

The position vector being updated at a t iteration as being composed by two components regarding a bi-dimensional problem.
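To make the update rules concrete, the following is a minimal Python sketch of the classical inertial PSO of Eqs. (1) and (2) and Figure 1, written for minimization. The function name pso, its default parameter values, and the handling of out-of-bounds particles by simple clipping are our assumptions, not part of the original chapter (Section 4 discusses bound-handling strategies more generally).

```python
import numpy as np

def pso(fitness, bounds, n_particles=100, n_iter=200, w=0.3, c1=2.0, c2=2.0, seed=0):
    """Minimize `fitness` over a box-bounded domain using classical inertial PSO.

    bounds: array of shape (n, 2) with [lower, upper] per dimension.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n = bounds.shape[0]
    X = rng.uniform(lo, hi, size=(n_particles, n))   # initial positions
    V = np.zeros((n_particles, n))                   # initial velocities
    pbest = X.copy()                                 # each particle's best position
    pbest_val = np.apply_along_axis(fitness, 1, X)
    gbest = pbest[np.argmin(pbest_val)].copy()       # best position of the whole swarm
    gbest_val = pbest_val.min()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, n))            # random factors in [0, 1]
        r2 = rng.random((n_particles, n))
        # Eq. (1): inertia + individual cognition + social learning
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        # Eq. (2): position update (clipped to the search domain, an assumed strategy)
        X = np.clip(X + V, lo, hi)
        vals = np.apply_along_axis(fitness, 1, X)
        better = vals < pbest_val                    # update individual bests
        pbest[better], pbest_val[better] = X[better], vals[better]
        if vals.min() < gbest_val:                   # update the swarm's best
            gbest_val = vals.min()
            gbest = X[np.argmin(vals)].copy()
    return gbest, gbest_val

# Usage: minimize the sphere function in two dimensions.
best_x, best_f = pso(lambda x: float(np.sum(x**2)),
                     np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```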

3. Hybrid methods: coupling PSO with deterministic methods

In general, optimization methods are divided into deterministic and heuristic. Deterministic methods aim to establish an iterative process involving a gradient, which, after a certain number of iterations, will converge to the minimum of the objective function. The iterative procedure of this type of method can be written as follows:

$x^{k+1} = x^k + \alpha^k d^k$ (E3)

where $x$ is the variable vector, $\alpha$ is the step size, $d$ is the descent direction, and $k$ is the iteration number. The best that can be expected from any deterministic gradient method is convergence to a stationary point, usually a local minimum.

Heuristic methods, in contrast to deterministic methods, do not use the objective function's gradient as a descent direction. Their goal is to mimic nature in order to find the minimum or maximum of the objective function by selecting, in an elegant and organized manner, the points at which the function will be evaluated [9].

Hybrid methods represent a combination of deterministic and heuristic methods that takes advantage of both approaches. Hybrid methods typically use a heuristic method to locate the most likely region containing the global minimum. Once this region is determined, the hybrid algorithm switches to a deterministic method to approach the minimum point faster. Usually, the most common formulation uses the heuristic method to generate good candidates for an optimal solution and then uses the best point found as a starting point for the deterministic method, which converges to a local minimum, as sketched below.
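A minimal sketch of this two-stage idea, assuming the pso function from the sketch in Section 2 and SciPy's general-purpose minimize routine as the deterministic stage; the driver name hybrid_pso and the multimodal test function are ours:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_pso(fitness, bounds, method="BFGS", **pso_kwargs):
    """Heuristic stage (PSO) followed by a deterministic refinement stage."""
    x0, _ = pso(fitness, bounds, **pso_kwargs)   # best candidate found by PSO
    res = minimize(fitness, x0, method=method)   # local, gradient-based refinement
    return res.x, res.fun

# Usage on a multimodal test function in two dimensions.
f = lambda x: float(np.sum(x**2) + 2.0 * np.sum(np.cos(3.0 * x)))
best_x, best_f = hybrid_pso(f, np.array([[-5.0, 5.0]] * 2))
```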

Numerous papers have been published over the last few years showing the efficiency and effectiveness of hybrid formulations [10, 11, 12]. There are also a growing number of publications over the last decade regarding hybrid formulations for optimization [13].

In this context, the PSO algorithm can be combined with deterministic methods, increasing the chance of finding the function's most likely global optimum. This chapter presents the three deterministic methods with which PSO was coupled: the conjugate gradient method, Newton's method, and a quasi-Newton method (BFGS). The formulation of each is briefly presented in the following sections.

3.1 Conjugate gradient

The conjugate gradient method improves the convergence rate of the steepest descent method by choosing descent directions that are a linear combination of the gradient direction with the descent directions of previous iterations. Its equations are:

$x^{k+1} = x^k + \alpha^k d^k$ (E4)

$d^k = -\nabla U(x^k) + \gamma^k d^{k-1}$ (E5)

where $\gamma$ is the conjugation coefficient, which acts by adjusting the size of the vectors. In the Fletcher-Reeves version, the conjugation coefficient is given by:

$\gamma^k = \dfrac{\lVert \nabla U(x^k) \rVert^2}{\lVert \nabla U(x^{k-1}) \rVert^2}$ (E6)
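For illustration, a minimal sketch of Eqs. (4)-(6); the function names are ours, and a backtracking (Armijo) line search stands in for the step size $\alpha^k$, whose choice the chapter leaves unspecified:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=500):
    """Conjugate gradient with the Fletcher-Reeves coefficient of Eq. (6)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        # backtracking line search satisfying the Armijo condition
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d                     # Eq. (4)
        g_new = grad(x)
        gamma = (g_new @ g_new) / (g @ g)     # Eq. (6), Fletcher-Reeves
        d = -g_new + gamma * d                # Eq. (5)
        g = g_new
    return x

# Usage: minimize a simple quadratic.
x_min = fletcher_reeves(lambda x: float(x @ x), lambda x: 2.0 * x, [3.0, -2.0])
```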

3.2 Newton’s method

While the steepest descent and conjugate gradient methods use first derivative information, Newton’s method also uses second derivative information to accelerate the convergence of the iterative process. The algorithm used in this method is presented below:

$x^{k+1} = x^k + \alpha^k d^k$ (E7)

$d^k = -H(x^k)^{-1}\,\nabla U(x^k)$ (E8)

where $H(x)$ is the Hessian of the function. In general, this method requires few iterations to converge; however, it requires a matrix that grows with the size of the problem. If the estimate is far from the minimum, the Hessian matrix may be poorly conditioned. In addition, it involves inverting a matrix, which makes the method even more computationally expensive.

3.3 Quasi-Newton (BFGS)

BFGS is a quasi-Newton method. It seeks to approximate the inverse of the Hessian using the function's gradient information, in such a way that no second derivatives are involved. Thus, this method has a slower convergence rate than Newton's method, although each iteration is computationally cheaper. The algorithm is presented below:

$x^{k+1} = x^k + \alpha^k d^k$ (E9)

$d^k = -H^k\,\nabla U(x^k)$ (E10)

$H^k = H^{k-1} + M^{k-1} + N^{k-1}$ (E11)

$M^{k-1} = \left(1 + \dfrac{(Y^{k-1})^T H^{k-1} Y^{k-1}}{(Y^{k-1})^T d^{k-1}}\right) \dfrac{d^{k-1} (d^{k-1})^T}{(d^{k-1})^T Y^{k-1}}$ (E12)

$N^{k-1} = -\dfrac{d^{k-1} (Y^{k-1})^T H^{k-1} + H^{k-1} Y^{k-1} (d^{k-1})^T}{(d^{k-1})^T Y^{k-1}}$ (E13)

$Y^{k-1} = \nabla U(x^k) - \nabla U(x^{k-1})$ (E14)

4. Recent applications and challenges

PSO can be applied to many types of problems in the most diverse areas of science. As an example, PSO has been used in healthcare for diagnosing a type of leukemia through microscopic imaging [14]. In the economic sciences, PSO has been used to test restricted and unrestricted risk investment portfolios in order to achieve optimal risk portfolios [15].

In the engineering field, the applications are as diverse as possible. Optimization problems involving PSO can be found in the literature in order to increase the heat transfer of systems [16] or even in algorithms to predict the heat transfer coefficient [17]. In the field of thermodynamics, one can find papers involving the optimization of thermal systems such as diesel engine–organic Rankine cycle [18], hybrid diesel-ORC/photovoltaic system [19], and integrated solar combined cycle power plants (ISCCs) [20].

PSO has also been used in geometric optimization problems in order to find the system configurations that best fit the design constraints. In this context, we can mention studies involving the optical-geometric optimization of solar concentrators [21] and the geometric optimization of radiative enclosures that satisfy temperature distribution and heat flow requirements [22].

Given the numerous versions of the PSO algorithm, such as those mentioned in the first section, PSO is able to deal with a broad range of problems, from problems with a few goals and continuous variables to challenging multipurpose problems with many discrete and/or continuous variables. Despite its potential, the user must be aware that PSO will only achieve good results if one implements an objective function capable of reflecting all goals at once. Deriving such a function may be a challenging task that requires a good understanding of the physical problem to be solved, as well as the ability to abstract ideas into a mathematical equation. The problems presented in Section 5 of this chapter provide examples of objective functions capable of playing this role.

Another challenge in using PSO is how to handle the bounds of the search domain whenever a particle moves beyond them. Many popular strategies that have been proposed are reviewed and compared for the classical PSO version in [23]. PSO users should review and understand those strategies so they can pick the one that best fits the features of the optimization problem; two of the simplest are sketched below.
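For illustration, a minimal sketch of two common bound-handling strategies of the kind compared in [23]; the names clamp and reflect are ours, not the notation of that reference:

```python
import numpy as np

def clamp(X, lo, hi):
    """Absorbing bounds: project out-of-bounds coordinates onto the nearest bound."""
    return np.clip(X, lo, hi)

def reflect(X, lo, hi):
    """Reflecting bounds: mirror out-of-bounds coordinates back into the domain."""
    X = np.where(X < lo, 2.0 * lo - X, X)
    X = np.where(X > hi, 2.0 * hi - X, X)
    return np.clip(X, lo, hi)  # guard in case a particle overshoots a full domain width
```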

5. Engineering problems

In this section, two engineering problems are described, one involving the fuel element of a nuclear power plant and the other involving a thermal cogeneration system. In the first problem, the traditional PSO formulation is used to find the optimal geometry of the fuel element spacer grid. In the second problem, hybrid optimization algorithms are used to find the operating condition that minimizes the total cost of operation of a cogeneration system.

5.1 Springs and dimples of a nuclear fuel bundle spacer grid

In [24], the authors optimize the dimple and spring geometries of the nuclear fuel bundle (FB) spacer grid (SG). An FB, also known as a fuel assembly, is a structured group of fuel rods (FRs); an FR, in turn, is a long, slender, zirconium metal tube containing pellets of fissionable material, which provide fuel for nuclear reactors [25]. An SG is a part of the nuclear fuel bundle; Figure 4 shows a schematic view of a nuclear FB, and it is possible to see in this illustration how the FRs and the SGs are assembled together. In addition, Figure 5 gives more details on how an SG's springs and dimples grip an FR, and Figure 6 shows exactly which parts of the SG are the springs and the dimples that may be in contact with an FR. For this work, the PSO algorithm was developed in MATLAB® (MathWorks Inc.), while the mechanical calculations were performed with finite element analysis (FEA), using the ANSYS 15.0 software.

Figure 4.

A schematic view of a nuclear fuel bundle.

Figure 5.

The top view of a spacer grid gripping an FR through its dimples and springs.

Figure 6.

A part of an SG strip with one spring and two dimples.

The springs and the dimples act as supports and are required to have special features, since an FR releases a great amount of energy, caused by the nuclear reactions occurring within it. Hence, the material of an FR must face a broad range of temperatures when in operation, with variations of around 300°C. This fact is an important matter for the springs and the dimples, as they must not impose an excessive gripping force on the rod, allowing it some axial thermal expansion. On the other hand, the upward water flow cooling the great amount of heat released by the fission occurring within the rod creates flow-induced vibration, so the springs and dimples must also limit the lateral displacement of the fuel rods. Besides that, the SG must also support the FRs through its dimples and springs under many loading conditions, such as earthquakes and shipping and handling. Supporting the fuel safely in a nuclear reactor is an important matter during operation, and consequences such as the release of fission products from a fuel rod or a reactor safety shutdown could follow from a poor design.

Finally, one can understand that, as the springs and the dimples of an FB must have a geometry able to comply with conflicting requirements so that the FRs remain laterally restrained, preventing them from bowing and vibrating [26], using an optimization algorithm could be useful.

Jourdan et al. [24] performed the optimization of the dimples and springs of an FB's SG using the classical version of the PSO algorithm. The authors chose some geometry variables expected to be important to features such as the gripping stiffness and the stress distribution in the spacer grid, which are the optimization goals of their work. Thus, the position vector is written as $X_i(t) = [d_{i1}, d_{i2}, d_{i3}, d_{i4}, d_{i5}, d_{i6}]^T$; these lengths are shown in Figure 7, while Table 1 shows the range of the variables, that is, the search domain of the problem.

Figure 7.

Variable lengths that should feature the goals of the optimization.

Variable    Lower bound (mm)    Upper bound (mm)
$d_1$       50                  70
$d_2$       10                  15
$d_3$       5                   30
$d_4$       5                   10
$d_5$       1                   5
$d_6$       1                   5

Table 1.

Variable boundaries for the SG optimization.

In the PSO simulations from Ref. [24], for each position vector $X_i(t)$ there is an FEA model with the geometry variable values of its related vector. Each FEA model carries the boundary conditions of an elastic static analysis. The boundary conditions considered in these simulations regard one spring and two dimples gripping two FRs, one in contact with the spring and the other in contact with the two dimples. Contacts were not actually modeled, in order to simplify the model; they were replaced by displacements equivalent to the condition of an FR with a diameter of 9.7 mm being gripped in the space available for the $X_i(t)$ geometry. The other boundary conditions are the restriction of translations and rotations at the welding nodes. Figure 8 presents these boundary conditions, which apply to any position vector. All simulations were built using the SHELL181 finite element [27], considering the material to be Inconel 718.

Figure 8.

Model’s boundary condition considering any position vector.

The optimization performed in [24] has three goals: first, to minimize the stress intensity (SI) within the structure; second, to create an SG geometry featuring a gripping stiffness value as close as possible to some $k_{reference}$; and finally, to find a geometry that allows some axial thermal expansion of the FR. These three features are the main mechanical design requirements for an SG [26].

A simulation considering a population of $P = 100$ particles in a swarm and an inertia weight of $w = 0.3$ was performed in [24]. In order to obtain good results from PSO simulations, in other words, to determine the variable values that best fit the desired features, one must derive a fitness function able to properly grade all the optimization goals at once, without privileging any one goal over the others.

It should be noted that the grades assessed by the fitness function could be on an increasing scale or on a decreasing one, depending on the conception of the PSO algorithm. In [24], the authors chose to perform the search on a decreasing scale, so the fitness function, Eq. (15), was designed to be minimized.

$f(X) = \begin{cases} \sigma + c_k\,\lvert k_{calculated} - k_{reference} \rvert, & \text{if } displacement \le 0.4\ \text{mm} \\ 1{,}000{,}000, & \text{otherwise} \end{cases}$ (E15)

The fitness function implemented assesses three different terms through two conditions. The two conditions regard the fact that the SG must allow some axial thermal expansion of the FR. To do so, a parameter $displacement$ is created, which measures the space that an FR with a 9.7 mm diameter will use when gripped by an SG with the geometry of a given position vector. Thus, a geometry producing a $displacement$ over 0.4 mm receives a high grade, meaning that this is an undesired feature, as the algorithm performs its optimization on a decreasing scale. The value of 0.4 mm is considered to be a good value for the design of an SG [28, 29, 30, 31].

The $\sigma$ parameter represents the SI, so it is easy to understand that as the SI gets lower, this term does too, which is desirable. Finally, the term $c_k\,\lvert k_{calculated} - k_{reference} \rvert$ plays the role of finding a geometry whose stiffness, $k_{calculated}$, gets as close as possible to a reference stiffness $k_{reference}$, where this last parameter is set to 27.2 N/mm [31]. Meanwhile, the parameter $c_k$ is a coefficient that must be set in order to match the order of magnitude of the fitness function's terms, so that none of them gets greater importance. In [24], the $c_k$ parameter was calibrated by performing several PSO simulations, and its value was set to 60. One should notice that the fitness function does not require unit consistency, as its value is only a mathematical abstraction.
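A minimal sketch of the penalty-style fitness of Eq. (15), assuming the SI, calculated stiffness, and displacement of a candidate geometry have already been extracted from the FEA model; the function name sg_fitness and its signature are ours:

```python
def sg_fitness(sigma, k_calculated, displacement,
               k_reference=27.2, c_k=60.0, penalty=1_000_000.0):
    """Grade a spacer grid geometry on a decreasing scale (lower is better).

    sigma: stress intensity (MPa) from the FEA model.
    k_calculated: gripping stiffness (N/mm) of the candidate geometry.
    displacement: space (mm) used by a 9.7 mm FR gripped by this geometry.
    """
    if displacement > 0.4:  # geometry leaves no room for axial thermal expansion
        return penalty
    return sigma + c_k * abs(k_calculated - k_reference)
```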

Figure 9 shows the fitness evolution of the simulation performed to optimize the geometry of an SG's dimples and spring. This simulation resulted in an optimized geometry with an SI of 196 MPa and a gripping stiffness of 27.2 N/mm.

Figure 9.

Fitness improvements from simulation performed in [24].

In [31], the authors performed an FEA and a physical experiment to measure the SI and the gripping stiffness of the Chashma Nuclear Power Plant Unit 1 (CHASNUPP-1) SG spring under the same conditions as considered in [24]. The results from [31], regarding a real SG in operation at CHASNUPP-1, which might not have been optimized, are 27.2 N/mm for the gripping stiffness and 816 MPa for the SI; meanwhile, the optimized result found in [24] has the same gripping stiffness but an SI over 75% lower than that of CHASNUPP-1's SG. Thus, comparing the most likely optimum found using the PSO algorithm with the results from a real SG [31], one can conclude that PSO played its role well in the design of the component under study.

5.2 Cost of a cogeneration system

The second problem involves minimizing the function that represents the total cost of operation of a cogeneration system called CGAM. It is named after its creators (C. Frangopoulos, G. Tsatsaronis, A. Valero, and M. von Spakovsky), who decided to use the same system to compare the solutions of the optimization problem obtained with different methodologies [13]. Figure 10 shows the system.

Figure 10.

CGAM system.

The CGAM system is a cogeneration system consisting of an air compressor (AC), a combustion chamber (CC), a gas turbine (GT), an air preheater (APH), and a heat recovery steam generator (HRSG), which consists of an economizer for preheating water and an evaporator. The purpose of the cycle is the generation of 30 MW of electricity and 14 kg/s of saturated steam at a pressure of 20 bar.

The economic description of the system used in the present work is the same as the one adopted in the original work and considers the annual fuel cost and the annual cost associated with the acquisition and operation of each equipment. More details can be found in [32]. The equations for each component are presented below:

  1. Air compressor:

$Z_{AC} = \dfrac{C_{11}\,\dot{m}_a}{C_{12} - \eta_{AC}} \left(\dfrac{P_2}{P_1}\right) \ln\left(\dfrac{P_2}{P_1}\right)$ (E16)

  2. Combustion chamber:

$Z_{CC} = \dfrac{C_{21}\,\dot{m}_a}{C_{22} - P_4/P_3} \left[1 + \exp\left(C_{23} T_4 - C_{24}\right)\right]$ (E17)

  3. Turbine:

$Z_{GT} = \dfrac{C_{31}\,\dot{m}_g}{C_{32} - \eta_{GT}} \ln\left(\dfrac{P_4}{P_5}\right) \left[1 + \exp\left(C_{33} T_4 - C_{34}\right)\right]$ (E18)

  4. Preheater:

$Z_{APH} = C_{41} \left(\dfrac{\dot{m}_g (h_5 - h_6)}{U\,\Delta T_{LM}}\right)^{0.6}$ (E19)

  5. Heat recovery steam generator:

$Z_{HRSG} = C_{51} \left[\left(\dfrac{\dot{Q}_{PH}}{\Delta T_{LM,PH}}\right)^{0.8} + \left(\dfrac{\dot{Q}_{EV}}{\Delta T_{LM,EV}}\right)^{0.8}\right] + C_{52}\,\dot{m}_{st} + C_{53}\,\dot{m}_g^{1.2}$ (E20)

The general expression for the investment-related cost rate ($/s) of each component is given by the following equation:

$\dot{Z}_{i,invest} = \dfrac{Z_i\,\varphi\,CRF}{N \cdot 3600}$ (E21)

where CRF is the capital recovery factor (18.2%), N is the number of annual plant-operating hours (8000 h), and φ is a maintenance factor (1.06). In addition, $c_f$ is the fuel cost per unit of energy (0.004 $/MJ). Table 2 indicates the cost constants adopted for each component.

Air compressor: $C_{11} = 39.5\ \$/(\text{kg/s})$, $C_{12} = 0.9$
Combustion chamber: $C_{21} = 25.6\ \$/(\text{kg/s})$, $C_{22} = 0.995$, $C_{23} = 0.018\ \text{K}^{-1}$, $C_{24} = 26.4$
Gas turbine: $C_{31} = 266.3\ \$/(\text{kg/s})$, $C_{32} = 0.92$, $C_{33} = 0.036\ \text{K}^{-1}$, $C_{34} = 54.4$
Preheater: $C_{41} = 39.5\ \$/\text{m}^{1.2}$, $U = 0.018\ \text{kW/(m}^2\,\text{K)}$
HRSG: $C_{51} = 3650\ \$/(\text{kW/K})^{0.8}$, $C_{52} = 11{,}820\ \$/(\text{kg/s})$, $C_{53} = 658\ \$/(\text{kg/s})^{1.2}$

Table 2.

Cost constants.

The following equation represents the total rate of the cost of operation:

$F = c_f\,\dot{m}_f\,PCI + \dot{Z}_{AC} + \dot{Z}_{APH} + \dot{Z}_{CC} + \dot{Z}_{GT} + \dot{Z}_{HRSG}$ (E22)
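As a worked illustration, the following Python sketch assembles Eqs. (16)-(22) with the constants of Table 2, under the assumption that these are the standard CGAM cost correlations and that all thermodynamic quantities (mass flows, enthalpies, log-mean temperature differences) are supplied by the process simulator; the function names and argument list are ours:

```python
import numpy as np

# Cost constants from Table 2 and economic parameters from Eq. (21)
C11, C12 = 39.5, 0.9
C21, C22, C23, C24 = 25.6, 0.995, 0.018, 26.4
C31, C32, C33, C34 = 266.3, 0.92, 0.036, 54.4
C41, U = 39.5, 0.018
C51, C52, C53 = 3650.0, 11820.0, 658.0
CRF, N_HOURS, PHI, C_F = 0.182, 8000.0, 1.06, 0.004

def invest_rate(Z):
    """Eq. (21): convert a purchase cost Z ($) into a cost rate ($/s)."""
    return Z * PHI * CRF / (N_HOURS * 3600.0)

def total_cost_rate(m_a, m_g, m_st, m_f, pci, p2p1, p4p3, p4p5,
                    eta_ac, eta_gt, t4, h5, h6, dtlm_aph,
                    q_ph, dtlm_ph, q_ev, dtlm_ev):
    """Eq. (22): fuel cost rate plus the investment cost rates of all components."""
    z_ac = C11 * m_a / (C12 - eta_ac) * p2p1 * np.log(p2p1)                   # Eq. (16)
    z_cc = C21 * m_a / (C22 - p4p3) * (1.0 + np.exp(C23 * t4 - C24))          # Eq. (17)
    z_gt = (C31 * m_g / (C32 - eta_gt) * np.log(p4p5)
            * (1.0 + np.exp(C33 * t4 - C34)))                                 # Eq. (18)
    z_aph = C41 * (m_g * (h5 - h6) / (U * dtlm_aph)) ** 0.6                   # Eq. (19)
    z_hrsg = (C51 * ((q_ph / dtlm_ph) ** 0.8 + (q_ev / dtlm_ev) ** 0.8)
              + C52 * m_st + C53 * m_g ** 1.2)                                # Eq. (20)
    fuel = C_F * m_f * pci                                                    # fuel cost rate ($/s)
    return fuel + sum(invest_rate(z) for z in (z_ac, z_cc, z_gt, z_aph, z_hrsg))
```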

In order to perform the optimization of Eq. (22), the five decision variables adopted in the definition of the original problem are considered: the compression ratio ($P_2/P_1$), the isentropic efficiency of the compressor ($\eta_{AC}$), the isentropic efficiency of the turbine ($\eta_{GT}$), the air temperature at the preheater outlet ($T_3$), and the fuel gas temperature at the turbine inlet ($T_4$). To optimize the objective function, three optimization routines coupling PSO with different deterministic methods were used, as indicated in Table 3.


            Heuristic          Deterministic
Hybrid 1    Particle swarm     Conjugate gradient
Hybrid 2    Particle swarm     Quasi-Newton
Hybrid 3    Particle swarm     Newton

Table 3.

Hybrid methods.

To solve the thermodynamic equations of the problem, the professional process simulator IPSEpro® version 6.0 was adopted. IPSEpro® is a process simulator used to model and simulate different thermal systems through their thermodynamic equations. This program was developed by SimTech and has a user-friendly interface, as well as a library with a wide variety of components, allowing the user to model and simulate conventional plants, cogeneration systems, cooling cycles, combined cycles, and more. The optimization method routines were written in MATLAB® (MathWorks Inc.), and the algorithm was integrated with IPSEpro® in order to solve the thermodynamic problem and perform the optimization.

To perform the optimization, the limits for the problem variables were established, as indicated in Table 4 [33].

Limits
7 ≤ $P_2/P_1$ ≤ 27
0.7 ≤ $\eta_{AC}$ ≤ 0.9
0.7 ≤ $\eta_{GT}$ ≤ 0.9
700 K ≤ $T_3$ ≤ 1100 K
1100 K ≤ $T_4$ ≤ 1500 K

Table 4.

Variable limits.

Table 5 presents the results found for the variables with each method and the corresponding value of the objective function. Figures 11-13 present the graphs of the evolution of the cost function with respect to the number of function calls for the performed optimizations.


                        Hybrid 1    Hybrid 2    Hybrid 3
$P_2/P_1$               9.46        9.04        8.29
$\eta_{AC}$             0.83        0.83        0.85
$T_3$ (K)               600.43      612.53      606.47
$\eta_{GT}$             0.88        0.88        0.88
$T_4$ (K)               1210.95     1212.67     1214.65
Cost function ($/s)     0.33948     0.33953     0.33949

Table 5.

Optimization results.

Figure 11.

Hybrid 1 optimization.

Figure 12.

Hybrid 2 optimization.

Figure 13.

Hybrid 3 optimization.

In order to evaluate the algorithm’s efficiency, a comparison was made between the results obtained in the present work and those obtained by [32, 33]. It is worth mentioning that the thermodynamic formulation used by [32] is slightly different from that constructed in the simulator; therefore, some differences in the final value of the objective function were already expected. In [33], the CGAM system was also built in IPSEpro® and the optimization was performed in MATLAB® using the following optimization methods: differential evolution (DE), particle swarm (PSO), simulated annealing (SA), genetic algorithm (GA), and direct pattern search (DPS). A comparison between the results is presented in Figure 14.

Figure 14.

Comparison between the results obtained and bibliographic references.

It is possible to verify that the hybrid methods used in this work have excellent performance, and the values found are compatible with those of the other references. This result supports the use of the hybrid formulations employed to optimize the objective function of the problem.

6. Conclusions

The present work presented the basic fundamentals of the PSO method. The advantages and disadvantages of the method were discussed, and interpretations of its algorithm were provided. Hybrid methods, which combine deterministic and heuristic methods in order to extract the advantages of each one, were also discussed.

As discussed earlier, it is impracticable to guarantee that the result obtained by an optimization method such as PSO is the global maximum or minimum, so some authors call the result the most likely global optimum. Thus, some strategies can be employed to verify the validity of the optimal results obtained. One strategy is to compare them with the results obtained by other optimization algorithms, as done in the present work. In the absence of available optimization data, due to either computational limitations or a lack of published results on the subject, it is possible to compare against information from real physical models, that is, models that were not obtained through optimization algorithms but rather through good engineering practice and judgment gained through technical experience.

In addition, it was possible to apply the PSO algorithm to different engineering problems. The first involves the spacer grid of a nuclear fuel element and the second involves the optimization of the cost function of a cogeneration system. In both problems, satisfactory results were obtained, demonstrating the efficiency of the PSO method.




DOI: 10.5772/intechopen.89633

https://www.intechopen.com/books/swarm-intelligence-recent-advances-new-perspectives-and-applications/particle-swarm-optimization-a-powerful-technique-for-solving-engineering-problems