1 |
A coordinate gradient descent method for structured nonsmooth optimization / Yun, Sangwoon, January 2007 (has links)
Thesis (Ph.D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 125-133).
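For background on the method class named in the title, below is a minimal sketch of coordinate descent on a structured nonsmooth objective of the form F(x) = 0.5*||Ax - b||^2 + lam*||x||_1, where the l1 term is one common instance of a separable nonsmooth part. This is an illustration of the general idea under those assumptions, not Yun's actual algorithm, which covers more general smooth parts, models, and step rules.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def coordinate_descent_lasso(A, b, lam, n_iters=100):
    """Illustrative coordinate descent for F(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
    Each coordinate step exactly minimizes a one-dimensional quadratic model
    of the smooth part plus the separable nonsmooth term."""
    n = A.shape[1]
    x = np.zeros(n)
    col_norms = (A ** 2).sum(axis=0)      # per-coordinate curvature ||A_j||^2
    r = A @ x - b                         # running residual Ax - b
    for _ in range(n_iters):
        for j in range(n):
            if col_norms[j] == 0.0:
                continue
            g = A[:, j] @ r               # partial derivative of the smooth part
            z = x[j] - g / col_norms[j]   # unconstrained coordinate minimizer
            x_new = soft_threshold(z, lam / col_norms[j])
            r += A[:, j] * (x_new - x[j]) # keep the residual in sync
            x[j] = x_new
    return x
```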
|
2 |
On optimality conditions, minimizing and stationary sequences for nonsmooth functions. / CUHK electronic theses & dissertations collection, January 1997 (has links)
Xinbao Li. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (p. 62-64). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
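For orientation (standard definitions from the nonsmooth-analysis literature, not quoted from the thesis): given a lower semicontinuous function f with subdifferential ∂f, a sequence (x_k) is called minimizing if f(x_k) → inf f, and stationary if dist(0, ∂f(x_k)) → 0; relating these two notions under suitable conditions is the theme indicated by the title.

\[
f(x_k) \;\to\; \inf_{x} f(x) \quad \text{(minimizing)},
\qquad
\operatorname{dist}\bigl( 0, \partial f(x_k) \bigr) \;\to\; 0 \quad \text{(stationary)}.
\]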
|
3 |
Nonsmooth analysis and optimization. January 1993 (has links)
Huang Liren. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 96). / Contents: Abstract / Introduction / References /
Chapter 1. Some elementary results in nonsmooth analysis and optimization: 1. Some properties for "lim sup" and "lim inf"; 2. The directional derivative of the sup-type function; 3. Some results in nonsmooth analysis and optimization; References /
Chapter 2. On generalized second-order derivatives and Taylor expansions in nonsmooth optimization: 1. Introduction; 2. Dini directional derivatives, Clarke's directional derivatives and generalized second-order directional derivatives; 3. On Cominetti and Correa's conjecture; 4. Generalized second-order Taylor expansion; 5. Detailed proof of Theorem 2.4.2; 6. Corollaries of Theorem 2.4.2 and Theorem 2.4.3; 7. Some applications in optimization; References /
Chapter 3. Second-order necessary and sufficient conditions in nonsmooth optimization: 1. Introduction; 2. Second-order necessary and sufficient conditions without constraints; 3. Second-order necessary conditions with constraints; 4. Sufficient conditions theorem with constraints; References /
Appendix / References
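As a pointer to the objects named in Chapter 2 (standard definitions, stated here for orientation): for f locally Lipschitz near x, the Clarke directional derivative is

\[
f^{\circ}(x; d) \;=\; \limsup_{\substack{y \to x \\ t \downarrow 0}} \frac{f(y + t d) - f(y)}{t},
\]

and the generalized second-order directional derivatives studied in the thesis are, roughly speaking, built from further limiting difference quotients of such first-order objects.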
|
4 |
Saddle point techniques in convex composite and error-in-measurement optimization / He, Niao, 07 January 2016 (has links)
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theory and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects: i) large-scale convex composite optimization and ii) error-in-measurement optimization. In spite of the different natures of these two topics, their common denominator, presented throughout, lies in the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts.
In the first part, we consider a broad class of variational inequalities with composite structure, covering the saddle point/variational analogies of classical convex composite minimization (i.e., the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving problems of this type. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of an imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle.
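A minimal Euclidean special case of the (non-composite) Mirror Prox scheme is the extragradient method sketched below, assuming a Lipschitz monotone operator F and a projector onto the feasible set; the composite, imprecise-prox, and stochastic variants developed in the thesis are not reproduced here.

```python
import numpy as np

def extragradient(F, project, z0, step, n_iters=1000):
    """Euclidean Mirror Prox (extragradient) for the variational inequality:
    find feasible z* with <F(z*), z - z*> >= 0 for all feasible z.

    F       : monotone operator z -> F(z)
    project : Euclidean projection onto the feasible set
    step    : constant stepsize; should be below 1/L for L-Lipschitz F
    """
    z = np.asarray(z0, dtype=float).copy()
    avg = np.zeros_like(z)
    for _ in range(n_iters):
        w = project(z - step * F(z))   # extrapolation ("leader") step
        z = project(z - step * F(w))   # correction step, using F at the leader
        avg += w                       # the ergodic average carries the O(1/k) rate
    return avg / n_iters
```

For a convex-concave saddle point problem min_x max_y φ(x, y), one applies this with z = (x, y) and F(z) = (∇_x φ(x, y), −∇_y φ(x, y)).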
In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems at the best rates of convergence known so far. Our main contributions are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a ``small series'' of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality both in the number of calls to the first order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, for problems of type (d), we show that the composite Mirror Prox, applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing.
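For problems of type (c), the saddle point representation rests on the standard dual description of a norm; for instance, a norm-regularized problem can be rewritten as

\[
\min_{x} \; f(x) + \|Ax\| \;=\; \min_{x} \max_{\|y\|_{*} \le 1} \; f(x) + \langle y, Ax \rangle,
\]

where ‖·‖_* denotes the dual norm, so the nonsmooth regularizer becomes a bilinear saddle term to which Mirror Prox type methods apply.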
In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches: one relying on a fully algorithmic calculus of saddle point representations of convex-concave functions, the other on a general approximation scheme for convex stochastic programming. Both approaches allow us to convert the problem of interest to a form amenable to SA or SAA. These developments are primarily focused on two important applications: affine signal processing and indirect support vector machines.
|
5 |
Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques / Robertson, Blair Lennon, January 2010 (has links)
This thesis considers the practical problem of constrained and unconstrained local optimization. This subject has been well studied when the objective function f is assumed to be smooth. However, nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems; however, only partial convergence results are possible for these methods.
In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on the parts of f which are smooth. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points. It is the localized global phase which provides strong theoretical convergence results on nonsmooth problems.
Algorithms are presented for solving bound constrained, unconstrained and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase, as they can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions: that is, to a point x* for which the set of feasible points with lower f values has Lebesgue measure zero in every sufficiently small neighborhood of x*. Stopping rules are derived for each algorithm, giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems in 2 to 10 dimensions, showing the methods are effective in practice.
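In symbols (restating the definition above, with Ω the feasible set, B(x*, ε) the ε-ball around x*, and µ Lebesgue measure), x* is an essential local minimizer when

\[
\mu\bigl(\{\, y \in \Omega \cap B(x^*, \varepsilon) \;:\; f(y) < f(x^*) \,\}\bigr) = 0
\qquad \text{for all sufficiently small } \varepsilon > 0 .
\]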
|
6 |
General monotonicity, interpolation of operators, and applications / Unknown Date (has links)
Assume that {φ_n} is an orthonormal uniformly bounded (ONB) sequence of complex-valued functions defined on a measure space (Ω, Σ, µ), and f ∈ L¹(Ω, Σ, µ). Let

\[ c_n = \int_{\Omega} f \,\overline{\varphi_n} \, d\mu \]

be the Fourier coefficients of f with respect to {φ_n}.
R.E.A.C. Paley proved a theorem connecting the L^p-norm of f with a related norm of the sequence {c_n}. Hardy and Littlewood subsequently proved that Paley's result is best possible within its context. Their results were generalized by Dikarev, Macaev, Askey, Wainger, Sagher, and later by Tikhonov, Liflyand, Booton and others. The present work continues the generalization of these results. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
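One classical form of the Paley theorem in question, stated here from the standard literature rather than from the dissertation: if |φ_n| ≤ M for all n and 1 < p ≤ 2, then

\[
\Bigl( \sum_{n=1}^{\infty} n^{\,p-2}\, |c_n|^{p} \Bigr)^{1/p} \;\le\; C(p, M)\, \|f\|_{L^p(\Omega)},
\]

with a dual inequality for 2 ≤ p < ∞; the Hardy–Littlewood converse shows that the exponent p − 2 cannot be improved.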
|
7 |
Análise não suave e aplicações em otimização [Nonsmooth analysis and applications in optimization] / Costa, Tiago Mendonça de [UNESP], 28 February 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work we are interested in presenting an approach relating nonsmooth analysis to optimization. First, we study concepts of nonsmooth analysis such as normal cones, Bouligand's tangent cone, and the proximal, strict, and limiting subdifferentials, as well as the Clarke subdifferential. With these concepts we exhibit a series of results: for example, a characterization of Lipschitz functions; sum, product, and maximum rules for subdifferentials of lower semicontinuous functions; and a nonsmooth version of Lagrange's multiplier rule, that is, first-order necessary optimality conditions for nonsmooth optimization problems. We also study second-order optimality conditions for nonsmooth optimization problems; to do so, it was necessary to present further concepts and properties, such as the generalized Hessian, the approximate Jacobian, and the approximate Hessian. After presenting these results, we analyze two theorems that provide, with distinct approaches, second-order sufficient conditions for nonsmooth optimization problems, and the work closes with a result that can be regarded as a unification of these two theorems.
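The nonsmooth multiplier rule mentioned above is commonly stated via the Clarke subdifferential (a standard formulation, given for orientation, not quoted from the thesis): if x* locally minimizes f subject to g_i(x) ≤ 0, with all functions locally Lipschitz, then under a suitable constraint qualification there exist multipliers λ_i ≥ 0 such that

\[
0 \in \partial f(x^*) + \sum_{i} \lambda_i \, \partial g_i(x^*),
\qquad \lambda_i \, g_i(x^*) = 0 \ \ \text{for all } i .
\]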
|
8 |
Derivative Free Algorithms For Large Scale Non-smooth Optimization And Their Applications / Tor, Ali Hakan, 01 February 2013 (has links) (PDF)
In this thesis, various numerical methods are developed to solve nonsmooth and, in particular, nonconvex optimization problems. More specifically, …
|
9 |
New Approaches To Desirability Functions By Nonsmooth And Nonlinear Optimization / Akteke-Ozturk, Basak, 01 July 2010 (has links) (PDF)
Desirability functions continue to attract the attention of scientists and researchers working in the area of multi-response optimization. There are many versions of such functions, differing mainly in the formulations of the individual and overall desirability functions. Derringer and Suich's desirability functions, used throughout this thesis, are still the most preferred ones in practice, and many other versions are derived from them. On the other hand, they have the drawback of containing nondifferentiable points and, hence, being nonsmooth. Current approaches to their optimization, which are based on derivative-free search techniques and on modification of the functions by higher-degree polynomials, need to be diversified in view of the opportunities offered by modern nonlinear (global) optimization techniques and related software. A first motivation of this work is to develop a new, efficient solution strategy, based on nonsmooth optimization methods, for the maximization of overall desirability functions, which turns out to be a nonsmooth composite constrained optimization problem.
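For concreteness, here is a minimal sketch of the Derringer–Suich construction as commonly published (parameter values and shapes are illustrative; this is not code from the thesis). The kinks at L, T and U are exactly the nondifferentiable points discussed above.

```python
import numpy as np

def two_sided_desirability(y, L, T, U, s=1.0, t=1.0):
    """Derringer-Suich two-sided individual desirability:
    0 outside [L, U], 1 at the target T, piecewise power shape in between."""
    if y < L or y > U:
        return 0.0
    if y <= T:
        return ((y - L) / (T - L)) ** s
    return ((U - y) / (U - T)) ** t

def overall_desirability(ys, specs):
    """Geometric mean of the individual desirabilities;
    it is zero as soon as any single response is unacceptable."""
    ds = [two_sided_desirability(y, *spec) for y, spec in zip(ys, specs)]
    return float(np.prod(ds)) ** (1.0 / len(ds))

# Illustrative use: two responses with hypothetical (L, T, U) specifications.
specs = [(0.0, 5.0, 10.0), (1.0, 2.0, 4.0)]
print(overall_desirability([4.0, 2.5], specs))   # ~0.775
```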
We observe that the individual desirability functions used in practical computations are of min-type, a subclass of continuous selection functions. To reveal the mechanism that gives rise to the variation in the piecewise structure of desirability functions used in practice, we concentrate on component-wise and generically piecewise min-type functions and, later on, max-type functions. It is our second motivation to analyze the structural and topological properties of desirability functions via piecewise max-type functions.
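To illustrate the min-type structure (in the piecewise-linear case s = t = 1 of the two-sided function above), the individual desirability can be written as a max of 0 and a min of affine pieces:

\[
d(y) \;=\; \max\Bigl\{\, 0,\; \min\Bigl\{ \tfrac{y - L}{T - L},\ \tfrac{U - y}{U - T} \Bigr\} \Bigr\},
\]

so that inside [L, U] it is a pointwise minimum of two affine functions, the generic situation studied here.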
In this thesis, we introduce adjusted desirability functions based on a reformulation of the individual desirability functions with a binary integer variable, in order to deal with their piecewise definition. We define a constraint on the binary variable to obtain a continuous optimization problem with a nonlinear objective function containing nondifferentiable points, subject to bound constraints on the factors and responses. After describing the adjusted desirability functions on two well-known problems from the literature, we implement the modified subgradient algorithm (MSG) in GAMS, incorporating the CONOPT solver of the GAMS software, to solve the corresponding optimization problems. Moreover, the BARON solver of GAMS is used to solve these optimization problems including adjusted desirability functions. Numerical applications with BARON show that this is a more efficient alternative solution strategy than the current desirability maximization approaches.
We apply the negative logarithm to the desirability functions and consider the properties of the resulting functions when they include more than one nondifferentiable point. With this approach we reveal the structure of the functions and employ piecewise max-type functions as generalized desirability functions (GDFs). We introduce a suitable finite partitioning procedure of the individual functions over their compact and connected interval that yields our so-called GDFs. Hence, we construct GDFs with piecewise max-type functions which have favorable structural and topological properties. We present the structural stability, optimality and constraint qualification properties of GDFs using those of max-type functions.
As a by-product of our GDF study, we develop a new method, called the two-stage (bilevel) approach, for multi-objective optimization problems, based on a separation of the parameters: in y-space (optimization) and in x-space (representation). This approach consists of calculating the factor variables corresponding to the ideal solutions of each individual function in y, and then finding a set of compromise solutions in x by considering the convex hull of the ideal factors. This is an early attempt at a new multi-objective optimization method. Our first results show that the global optimum of the overall problem may not be an element of the set of compromise solutions.
The overall problem in both x and y is extended to a new refined (disjunctive) generalized semi-infinite problem, for which we analyze the stability and robustness properties of the objective function. In this course, we introduce the so-called robust optimization of desirability functions for cases in which the response models contain uncertainty. Throughout this thesis, we give several modifications and extensions of the optimization problem of overall desirability functions.
|
10 |
Pseudospectral techniques for non-smooth evolutionary problems / Guenther, Chris, January 1998 (has links)
Thesis (Ph.D.)--West Virginia University, 1998. / Title from document title page. Document formatted into pages; contains xi, 116 p. : ill. (some col.) Includes abstract. Includes bibliographical references (p. 94-98).
|