What python knowledge is needed to learn multi-objective optimization?
Multi-objective optimization

Generally speaking, an optimization problem means finding the optimal solution of an objective function through some optimization algorithm. When there is one objective function to optimize, it is called a single-objective optimization problem (SOP); when there are two or more objective functions, it is called a multi-objective optimization problem (MOP). Unlike single-objective optimization, whose solution is a single optimum, the solution of a multi-objective optimization problem is usually a set of equilibrium (trade-off) solutions.

Multi-objective optimization algorithms can be classified into two categories: traditional optimization algorithms and intelligent optimization algorithms.

1. Traditional optimization algorithms include the weighting (weighted-sum) method, the constraint method, and linear programming methods. In essence, these transform the multi-objective function into a single-objective function, which is then solved with a single-objective optimization method.

2. Intelligent optimization algorithms include evolutionary algorithms (EA) and particle swarm optimization (PSO).
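As an illustration of the weighting method from point 1, two toy objectives can be combined into a single objective and minimized with any single-objective optimizer. The objectives f1, f2, the weights, and the grid search below are illustrative assumptions, not from the text:

```python
# Weighted-sum method: combine two objectives into one scalar objective.
def f1(x):
    return x ** 2            # first objective: minimum at x = 0

def f2(x):
    return (x - 2) ** 2      # second objective: minimum at x = 2

def weighted_sum(x, w1=0.5, w2=0.5):
    return w1 * f1(x) + w2 * f2(x)

# A coarse grid search stands in for any single-objective optimizer.
candidates = [i / 100 for i in range(-100, 301)]
best_x = min(candidates, key=weighted_sum)
print(best_x)  # → 1.0, a compromise between the two individual optima
```

With equal weights the optimum lands midway between the two single-objective optima; varying the weights traces out different Pareto solutions, which is exactly how the weighting method reduces a MOP to a family of SOPs.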

Pareto optimal solution:

If x* ∈ C and there is no solution x in C that dominates x*, then x* is called a Pareto optimal solution of the multi-objective optimization model, also known as an efficient solution.

Generally speaking, a multi-objective optimization problem has no single optimal solution; all candidate solutions not dominated by any other are called non-inferior solutions, also known as Pareto solutions. Traditional optimization techniques usually produce one Pareto solution per run, whereas intelligent algorithms can obtain many Pareto solutions at once, forming an optimal solution set. This set consists of those solutions for which improving any one objective value must come at the expense of at least one other objective value; it is called the Pareto optimal domain, or Pareto set for short.

The non-inferior solution set of Pareto efficient (optimal) solutions refers to the set of solutions each of which is superior in at least one objective function to any solution outside the set.

The best-known algorithm for multi-objective optimization problems is NSGA-II, a multi-objective genetic algorithm; its selection procedure can also be used within other optimization algorithms, such as particle swarm optimization and the bee colony algorithm. A brief introduction to NSGA-II's selection algorithm follows. It has three main parts:

1. Fast non-dominated sorting

First, the concept of dominance: for solutions X1 and X2 in a minimization problem, if every objective value of X1 is no greater than the corresponding objective value of X2, and at least one objective value of X1 is strictly less, then X1 dominates X2 (X2 is dominated by X1).
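Under this definition (minimization), a dominance check takes only a few lines of Python; the function name is chosen here for illustration:

```python
def dominates(x1, x2):
    """True if x1 dominates x2 (minimization): x1 is no worse in every
    objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(x1, x2))
    strictly_better = any(a < b for a, b in zip(x1, x2))
    return no_worse and strictly_better

print(dominates((1, 2), (2, 2)))   # True: equal in one objective, better in the other
print(dominates((1, 3), (2, 2)))   # False: neither solution dominates the other
```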

Fast non-dominated sorting is an iterative grading process: first, find the non-dominated solution set of the population and record it as the first non-dominated front, assigning irank = 1 (irank is the non-domination rank of individual i); remove it from the population, then find the non-dominated set of the remaining population, whose members receive irank = 2; and so on.
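The grading loop above can be sketched in Python using domination counts, which is what makes the sorting "fast" (function and variable names here are illustrative, not from any particular library):

```python
def dominates(a, b):
    """a dominates b in a minimization problem."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """Return a list of fronts (lists of indices into objs);
    front 0 holds the non-dominated solutions (irank = 1), and so on."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that each i dominates
    dom_count = [0] * n                    # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]  # irank = 1
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:   # j becomes non-dominated once this front is removed
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

print(fast_non_dominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]))
# → [[0, 1, 2], [3], [4]]
```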

2. Individual crowding distance

To make the computed solutions distribute more evenly in the objective space and maintain population diversity, a crowding distance is computed for each individual, and individuals with larger crowding distances are preferred. The crowding distance is defined as:

L[i]d = L[i]d + (L[i+1]m − L[i−1]m) / (fmaxm − fminm)

where L[i+1]m is the m-th objective value of the (i+1)-th individual (after sorting by that objective), and fmaxm and fminm are the maximum and minimum of the m-th objective value over the set.

3. Elitist selection strategy

The elitist strategy keeps excellent individuals from the parent generation directly in the offspring to prevent the loss of Pareto optimal solutions. The offspring population produced at generation t is merged with the parent population; the merged population is then sorted by non-domination rank, and individuals are added front by front, in non-domination order, until a new parent population of size N is filled.
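Putting the three parts together, the merge-and-truncate step can be sketched as follows. This is a simplified illustration under stated assumptions: the helper names and the O(n²)-per-front extraction are choices made here, not NSGA-II's exact implementation:

```python
def _dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def _fronts(objs):
    """Group indices into non-dominated fronts (simple O(n^2) per front)."""
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = sorted(i for i in remaining
                       if not any(_dominates(objs[j], objs[i])
                                  for j in remaining if j != i))
        fronts.append(front)
        remaining -= set(front)
    return fronts

def _crowding(objs, front):
    dist = {i: 0.0 for i in front}
    for m in range(len(objs[front[0]])):
        order = sorted(front, key=lambda i: objs[i][m])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = objs[order[-1]][m] - objs[order[0]][m]
        if span == 0:
            continue
        for p in range(1, len(order) - 1):
            dist[order[p]] += (objs[order[p + 1]][m] - objs[order[p - 1]][m]) / span
    return dist

def select_next_parents(parent_objs, offspring_objs, N):
    """Elitist selection: merge parents and offspring, then fill the next
    parent population of size N front by front; the last, partially fitting
    front is truncated by descending crowding distance."""
    merged = parent_objs + offspring_objs
    chosen = []
    for front in _fronts(merged):
        if len(chosen) + len(front) <= N:
            chosen.extend(front)        # the whole front fits
        else:
            d = _crowding(merged, front)
            front.sort(key=lambda i: d[i], reverse=True)
            chosen.extend(front[:N - len(chosen)])  # keep the least crowded
            break
    return [merged[i] for i in chosen]

print(select_next_parents([(1, 5), (5, 5)], [(2, 3), (4, 1)], N=3))
# → [(1, 5), (2, 3), (4, 1)]
```

Because parents and offspring compete together, a dominated offspring can never displace a non-dominated parent, which is exactly how the elitist strategy prevents the loss of Pareto optimal solutions.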