1 | Design Optimization of Fuzzy Logic Systems. Dadone, Paolo (29 May 2001)
Fuzzy logic systems are widely used for control, system identification, and pattern recognition problems. To maximize their performance, it is often necessary to undertake a design optimization process in which the adjustable parameters defining a particular fuzzy system are tuned to maximize a given performance criterion. Data to be approximated are commonly available, giving rise to what is called the supervised learning problem, in which we typically wish to minimize the sum of squared errors in approximating the data.
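For concreteness, the supervised learning objective can be written in generic notation (the symbols below are illustrative, not the thesis's own): given training pairs \((x_i, y_i)\), \(i = 1, \dots, N\), and a fuzzy system output \(F(x;\theta)\) with adjustable parameters \(\theta\),
\[
\min_{\theta} \; E(\theta) \;=\; \sum_{i=1}^{N} \bigl( F(x_i;\theta) - y_i \bigr)^2 .
\]
Because \(F\) is generally nonlinear in \(\theta\) and, as discussed next, possibly nondifferentiable, this is a nonlinear and potentially nonsmooth least-squares problem.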
We first introduce fuzzy logic systems and the supervised learning problem, which is in effect a nonlinear optimization problem that can at times be non-differentiable. We review existing approaches and discuss their weaknesses and the issues involved. We then focus on one of these issues, namely the non-differentiability of the objective function, and show how current approaches that do not account for non-differentiability can diverge. We also show that non-differentiability can have an adverse practical impact on algorithmic performance.
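A minimal illustration of the kind of kink involved (a generic example, not one taken from the thesis): fuzzy inference with min/max operators or piecewise linear membership functions produces terms such as
\[
\phi(\theta) \;=\; \min\bigl(\mu_A(x;\theta),\, \mu_B(x)\bigr),
\]
which is nondifferentiable in \(\theta\) wherever the two memberships are equal; for instance, \(\min(\theta, 1-\theta)\) has no derivative at \(\theta = 1/2\). Gradient-based updates are not well defined at such points, and methods that ignore them can behave erratically nearby.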
We reformulate both the supervised learning problem and piecewise linear membership functions in order to obtain a polynomial or factorable optimization problem. We propose the application of a global nonconvex optimization approach, namely a reformulation-linearization technique (RLT). The expanded problem dimensionality makes this approach impractical at this time, although the reformulation and the proposed technique remain of theoretical interest. Some future research directions are also identified.
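As a minimal illustration of the linearization step (generic bounds, not the thesis's data): for a bilinear term \(xy\) with \(x \in [x^L, x^U]\) and \(y \in [y^L, y^U]\), RLT introduces a new variable \(w\) in place of \(xy\) and adds the linearized bound-factor products, e.g.,
\[
(x - x^L)(y - y^L) \ge 0 \;\Rightarrow\; w \;\ge\; x^L y + y^L x - x^L y^L,
\qquad
(x^U - x)(y^U - y) \ge 0 \;\Rightarrow\; w \;\ge\; x^U y + y^U x - x^U y^U,
\]
together with the two analogous upper-bounding products. Applying this to every nonlinear product in the reformulated learning problem is what drives the growth in dimensionality noted above.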
We propose a novel approach to step-size selection in batch training. The approach uses a limited-memory quadratic fit on past convergence data; it is therefore similar to response surface methodologies, but differs in the type of data used to fit the model: data already available from the history of the algorithm are used instead of data obtained according to an experimental design. The step-size along the update direction (e.g., the negative gradient or a deflected negative gradient) is chosen according to a criterion of minimum distance from the vertex of the quadratic model. This rescales the complexity of step-size selection from the order of the (large) number of training data, as in exact line searches, to the order of the number of parameters (generally lower than the number of training data). The quadratic fit approach and a reduced variant are tested on function approximation examples, yielding distributions of final mean square errors that are improved (i.e., skewed toward lower errors) with respect to those of the commonly used pattern-by-pattern approach. The quadratic fit is also competitive with, and sometimes better than, batch training with optimal step-sizes. Tested in conjunction with gradient deflection strategies and memoryless variable metric methods, it yields errors smaller by 1 to 7 orders of magnitude. Moreover, the convergence speed with either the negative gradient direction or a deflected direction is higher than that of the pattern-by-pattern approach, although the computational cost per iteration is moderately higher than that of the pattern-by-pattern method. Finally, some directions for future research are identified. / Ph. D.
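One way to read the vertex-distance criterion (written here in generic notation as a sketch; the thesis's exact model and safeguards are not reproduced): let \(v\) be the vertex of the quadratic model fitted to recent iterates \((\theta_j, E(\theta_j))\), let \(\theta_k\) be the current parameter vector and \(d_k\) the update direction. Choosing the step-size so that the new iterate lies as close as possible to the vertex gives
\[
\alpha_k \;=\; \arg\min_{\alpha} \;\bigl\| \theta_k + \alpha d_k - v \bigr\|^2
\;=\; \frac{d_k^{\top}(v - \theta_k)}{\|d_k\|^2},
\]
which involves only quantities of the dimension of \(\theta\), in line with the complexity rescaling described above.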
2 | Método de direções interiores ao epígrafo para a solução de problemas de otimização não-convexos e não-diferenciáveis via dualidade lagrangeana [Interior epigraph directions method for solving nonconvex and nondifferentiable optimization problems via Lagrangian duality]. Gómez, Jesús Cernades (07 June 2013)
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
This work presents a method for solving constrained nonsmooth and nonconvex optimization problems. The method, called IED (Interior Epigraph Directions), applies to optimization problems whose objective function is continuous and defined over a compact subset of R^n, subject to equality and/or inequality constraints. The IED method considers the dual problem induced by a generalized augmented Lagrangian function and obtains the primal solution by generating a sequence of iterates in the interior of the epigraph of the dual function. First, a subgradient is used to build a linear approximation of the dual problem. This linear approximation is then used to define a search direction interior to the epigraph of the dual function. From a point in the interior of the epigraph a new interior point is obtained and, in this way, a sequence of interior points is built. This sequence yields a dual sequence which, in turn, generates a primal sequence through the solution of a subproblem arising from the duality scheme. The convergence analysis of the algorithm is presented, together with numerical results for problems taken from the literature.
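For orientation, a generic instance of this duality scheme, written for equality constraints (the generalized augmented Lagrangian used in the thesis may differ in its exact form): for \(\min_{x \in X} f(x)\) subject to \(h(x) = 0\), with \(X \subset \mathbb{R}^n\) compact, a sharp augmented Lagrangian and its dual function are
\[
L(x, u, c) \;=\; f(x) - u^{\top} h(x) + c\,\|h(x)\|,
\qquad
q(u, c) \;=\; \min_{x \in X} L(x, u, c).
\]
If \(x_{u,c}\) attains the minimum, then \(\bigl(-h(x_{u,c}),\, \|h(x_{u,c})\|\bigr)\) is a supergradient of the concave function \(q\) at \((u, c)\): this is the readily available subgradient information from which a linear approximation of the dual problem, and hence a search direction interior to its epigraph, can be built.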
3 | Comportamento do método de direções interiores ao epígrafo (IED) quando aplicado a problemas de programação em dois níveis [Behavior of the interior epigraph directions (IED) method when applied to bilevel programming problems]. Oliveira, Erick Mário do Nascimento (26 June 2018)
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
This work presents the behavior of the IED algorithm when applied to bilevel programming problems. The follower (lower-level) problem is replaced by the first-order necessary Karush-Kuhn-Tucker conditions, so that the bilevel programming problem is transformed into a single-level optimization problem with nonlinear constraints; in this way, the conditions required for applying the IED (Interior Epigraph Directions) algorithm are satisfied. This method solves nonconvex and nondifferentiable optimization problems via Lagrangian duality, in which the constraint functions are incorporated into the objective function to form the Lagrangian. Moreover, the method considers the dual problem induced by a generalized augmented Lagrangian duality scheme and obtains the primal solution by producing a sequence of points in the interior of the epigraph of the dual function; the value of the dual function at a point of the dual space is given by the minimization of the Lagrangian. Finally, numerical experiments are presented on the use of the IED algorithm for bilevel programming problems found in the literature.
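In generic notation (assuming a lower-level problem with inequality constraints and a suitable constraint qualification), the reformulation described above replaces
\[
\min_{x,\,y} \; F(x, y)
\quad \text{s.t.} \quad
y \in \arg\min_{y'} \{\, f(x, y') \;:\; g(x, y') \le 0 \,\}
\]
by the single-level problem
\[
\min_{x,\,y,\,\mu} \; F(x, y)
\quad \text{s.t.} \quad
\nabla_y f(x, y) + \sum_i \mu_i \nabla_y g_i(x, y) = 0,\;\;
g(x, y) \le 0,\;\; \mu \ge 0,\;\; \mu_i\, g_i(x, y) = 0 \;\; \forall i .
\]
The complementarity conditions \(\mu_i g_i = 0\) make the resulting constraint set nonconvex and effectively nonsmooth, which is the setting the IED algorithm is designed to handle.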