21 |
Estimation of parametric and semiparametric mixture models using phi-divergences (Al-Mohamad, Diaa, 17 November 2016)
The study of mixture models constitutes a large domain of research in statistics. In the first part of this work, we present phi-divergences and the existing methods which produce robust estimators based on them. We are particularly interested in the so-called dual formula of phi-divergences, and we build a new robust estimator based on this formula. We study its asymptotic properties and give a numerical comparison with existing methods on simulated data. We also introduce a proximal-point algorithm whose aim is to calculate divergence-based estimators iteratively. We give some of the convergence properties of this algorithm and illustrate them on theoretical and simulated examples. In the second part of this thesis, we build a new structure for two-component mixture models where one component is unknown. The new approach makes it possible to incorporate prior linear information about the unknown component, such as moment-type and L-moment constraints. We study the asymptotic properties of the proposed estimators. Several experimental results on simulated data illustrate the advantage of the novel approach and the gain from using the prior information, in comparison to existing methods which incorporate no prior information apart from a symmetry assumption on the unknown component.
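The dual formula mentioned above can be recalled in its standard variational form (a sketch in common notation; the thesis's exact formulation, in particular the class over which the supremum is taken, may differ):

```latex
% Convex conjugate of the generator \varphi:
\varphi^{*}(t) = \sup_{u \in \mathbb{R}} \{\, tu - \varphi(u) \,\}.
% Dual (variational) representation of the \varphi-divergence between
% probability measures Q and P, the supremum running over a suitable
% class of measurable functions f:
D_{\varphi}(Q,P) = \int \varphi\!\left(\frac{dQ}{dP}\right) dP
  = \sup_{f} \left\{ \int f \, dQ - \int \varphi^{*}(f) \, dP \right\}.
```

In dual-divergence estimation one typically replaces the unknown measure $P$ by the empirical measure on the right-hand side, which avoids any smoothing of the data.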
|
22 |
Overcoming the failure of the classical generalized interior-point regularity conditions in convex optimization. Applications of the duality theory to enlargements of maximal monotone operators (Csetnek, Ernö Robert, 08 December 2009)
The aim of this work is to present several new results concerning
duality in scalar convex optimization, the formulation of sequential
optimality conditions and some applications of the duality to the theory
of maximal monotone operators.
After recalling the classical generalized interiority notions existing in the literature, we give some properties of the quasi interior and the quasi-relative interior. By means of these notions we introduce several
generalized interior-point regularity conditions which guarantee
Fenchel duality. By using an approach due to Magnanti, we derive
corresponding regularity conditions expressed via the quasi
interior and quasi-relative interior which ensure Lagrange
duality. These conditions have the advantage of being applicable in situations where other classical regularity conditions fail.
Moreover, we notice that several duality results given in the literature on this topic have either superfluous or contradictory assumptions; in this sense, our investigations offer an alternative.
Necessary and sufficient sequential optimality conditions for a
general convex optimization problem are established via
perturbation theory. These results are applicable even in the
absence of regularity conditions. In particular, we show that
several results from the literature dealing with sequential
optimality conditions are rediscovered and even improved.
The second part of the thesis is devoted to applications of the
duality theory to enlargements of maximal monotone operators in
Banach spaces. After establishing a necessary and sufficient condition for a bivariate infimal convolution formula, we employ it to characterize the $\varepsilon$-enlargement of the sum of two maximal monotone operators. In this way we generalize a classical result
concerning the formula for the $\varepsilon$-subdifferential of
the sum of two proper, convex and lower semicontinuous functions.
A characterization of fully enlargeable monotone operators is also
provided, offering an answer to an open problem stated in the
literature. Further, we give a regularity condition for the
weak$^*$-closedness of the sum of the images of enlargements of
two maximal monotone operators.
The last part of this work deals with enlargements of positive sets in SSD spaces. It is shown that many results from the literature concerning enlargements of maximal monotone operators can be generalized to the setting of Banach SSD spaces.
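In standard notation, the central objects of this abstract can be sketched as follows (schematic forms only; the precise conditions in the thesis are finer):

```latex
% Fenchel primal-dual pair for proper convex f, g and a linear map A:
(P)\ \inf_{x \in X}\, \{ f(x) + g(Ax) \}, \qquad
(D)\ \sup_{y^{*} \in Y^{*}}\, \{ -f^{*}(A^{*}y^{*}) - g^{*}(-y^{*}) \}.
% The \varepsilon-enlargement of a monotone operator T : X \rightrightarrows X^{*}:
T^{\varepsilon}(x) = \{ x^{*} \in X^{*} :
  \langle y - x, y^{*} - x^{*} \rangle \ge -\varepsilon
  \ \ \forall (y, y^{*}) \in \operatorname{gr} T \}, \quad \varepsilon \ge 0.
```

Regularity conditions of the kind discussed above, formulated via the quasi interior and quasi-relative interior of sets such as $A(\operatorname{dom} f) - \operatorname{dom} g$, guarantee that $(P)$ and $(D)$ have equal optimal values.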
|
23 |
Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration (Heinrich, André, 27 March 2013)
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, proving weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions, for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximately optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates, and that this solution converges to the optimal solution of the primal problem for arbitrarily small accuracy. Finally, the support vector regression task arises as a particular case of the general optimization problem, and the theory is specialized to it. We calculate several proximal points occurring when using different loss functions, as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
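The fast gradient step on a regularized dual can be sketched as follows (an illustrative Nesterov-type accelerated method applied to a generic smoothed quadratic dual; the matrix, the smoothing parameter `mu` and the step rule are placeholders, not the thesis's actual setup):

```python
import numpy as np

def fast_gradient(grad, L, x0, iters=3000):
    """Nesterov-accelerated gradient descent for a convex function
    whose gradient is L-Lipschitz (FISTA-style momentum)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative smoothed dual objective: d_mu(p) = 0.5 p^T Q p - c^T p
# with Q = A A^T + mu I, mu > 0 playing the role of the regularization.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
c = rng.standard_normal(5)
mu = 0.1
Q = A @ A.T + mu * np.eye(5)
lip = np.linalg.norm(Q, 2)                 # Lipschitz constant of p -> Q p - c
p_star = fast_gradient(lambda p: Q @ p - c, lip, np.zeros(5))
residual = np.linalg.norm(Q @ p_star - c)  # first-order optimality residual
```

In the thesis's scheme the dual iterates additionally generate proximal points from which an approximately optimal primal solution is assembled; the sketch above only shows an accelerated solver for a smooth approximate dual.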
|
24 |
Farkas-type results for convex and non-convex inequality systems (Hodrea, Ioan Bogdan, 22 January 2008)
As the title already suggests, the aim of the present work is to present Farkas-type results for inequality systems involving convex and/or non-convex functions. To obtain the desired results, we treat optimization problems which involve convex and composed convex functions, or non-convex functions such as DC functions or fractions.
To be able to use the fruitful Fenchel-Lagrange duality approach, we attach to the primal problem an equivalent problem which is a convex optimization problem. After assigning a dual problem to the problem we initially treat, we provide weak regularity conditions which secure strong duality, i.e., the case when the optimal objective value of the primal problem coincides with the optimal objective value of the dual problem and, moreover, the dual problem has an optimal solution.
Two ideas are then followed. Firstly, using the weak and strong duality between the primal problem and the dual problem, we give necessary and sufficient optimality conditions for the optimal solutions of the primal problem. Secondly, provided that no duality gap lies between the primal problem and its Fenchel-Lagrange-type dual, we demonstrate some Farkas-type results, underlining once more the connections between theorems of the alternative and the theory of duality. One of the above-mentioned Farkas-type results is characterized using only epigraphs of functions.
We conclude our investigations by providing necessary and sufficient optimality conditions for a multiobjective programming problem involving composed convex functions. Using the well-known linear scalarization, a family of scalar optimization problems is attached to the primal multiobjective program, and to each of these scalar problems the Fenchel-Lagrange dual problem is determined. Making use of the weak and strong duality between the scalarized problem and its dual, the desired optimality conditions are proved. Moreover, the form of the dual of the scalarized problem suggests how to construct a vector dual problem to the initial one, and weak and strong vector duality assertions are provided.
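A representative Farkas-type statement of the epigraph kind alluded to above can be sketched as follows (schematic only, under convexity, lower semicontinuity and a suitable closedness condition; the thesis's exact hypotheses may differ):

```latex
% For f, g = (g_1, \dots, g_m) proper, convex and lower semicontinuous,
% and under a closedness condition, the following are equivalent:
g(x) \le 0 \ \Longrightarrow\ f(x) \ge 0
\quad \Longleftrightarrow \quad
(0,0) \in \operatorname{epi} f^{*}
  + \bigcup_{\lambda \in \mathbb{R}^{m}_{+}} \operatorname{epi} (\lambda^{T} g)^{*}.
```

Statements of this form express a theorem of the alternative purely through conjugate functions and their epigraphs, which is what ties them to the duality theory.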
|
25 |
Joint Source-Channel Coding Reliability Function for Single and Multi-Terminal Communication Systems (Zhong, Yangfan, 15 May 2008)
Traditionally, source coding (data compression) and channel coding (error protection) are performed separately and sequentially, resulting in what we call a tandem (separate) coding system. In
practical implementations, however, tandem coding might involve a large delay and a high coding/decoding complexity, since one needs to remove the redundancy in the source coding part and then insert certain redundancy in the channel coding part. On the other hand, joint source-channel coding (JSCC), which coordinates source and channel coding or combines them into a single step, may offer substantial improvements over the tandem coding approach.
This thesis deals with the fundamental Shannon-theoretic limits for a variety of communication systems via JSCC. More specifically, we investigate the JSCC reliability function (the largest exponential rate at which the probability of coding error vanishes with increasing blocklength) for the following discrete-time communication systems: (i) discrete memoryless systems; (ii) discrete memoryless systems with perfect channel feedback; (iii) discrete memoryless systems with source side information; (iv) discrete systems with Markovian memory; (v) continuous-valued (particularly Gaussian) memoryless systems; (vi) discrete asymmetric 2-user source-channel systems.
For the above systems, we establish upper and lower bounds for the JSCC reliability function and we analytically compute these bounds. The conditions for which the upper and lower bounds coincide are also provided. We show that the conditions are satisfied for a large class of source-channel systems, and hence exactly determine the reliability function. We next provide a systematic comparison between the JSCC reliability function and the tandem coding reliability function (the reliability function resulting from separate source and channel coding). We show that the JSCC reliability function is substantially larger than the tandem coding
reliability function for most cases. In particular, the JSCC reliability function is close to twice as large as the tandem coding reliability function for many source-channel pairs. This exponent gain provides a theoretical underpinning and justification for JSCC design as opposed to the widely used tandem coding method, since
JSCC will yield a faster exponential rate of decay for the system error probability and thus provides substantial reductions in
complexity and coding/decoding delay for real-world communication systems. / Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2008.
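In standard notation, for a source-channel pair $(Q, W)$ transmitted at $t$ source symbols per channel use, the JSCC reliability function discussed above can be written as (a sketch; the thesis works with precise limsup/liminf variants and per-system refinements):

```latex
E_{J}(Q, W, t) = \limsup_{n \to \infty}
  \, -\frac{1}{n} \log P_{e}^{(n)}(Q, W, t),
```

where $P_{e}^{(n)}$ is the smallest error probability over all joint codes mapping $tn$ source symbols to $n$ channel uses. Since a tandem scheme is a special case of a joint scheme, $E_{J} \ge E_{T}$ always holds; the thesis quantifies when, and by how much, this inequality is strict.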
|
27 |
Application of the Duality Theory (Lorenz, Nicole, 15 August 2012)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning.
First we give some notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived via the general perturbation theory and give some generalized interior-point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals, give strong duality results and optimality conditions using some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result, and consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature.
In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally, we prove some subdifferential formulas for measures and risk functions by using the facts above.
The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization.
We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength by using a special Support Vector Regression problem.
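The conjugate functions underlying these dual representations can be explored numerically. A small sketch (the grid-based Legendre transform and the parameters below are illustrative, not the thesis's code) checks the known conjugate of the ε-insensitive loss v(u) = max(|u| − ε, 0), namely v*(s) = ε|s| for |s| ≤ 1 and +∞ otherwise:

```python
import numpy as np

def numeric_conjugate(f, u_grid):
    """Brute-force approximation of the convex conjugate
    f*(s) = sup_u (s*u - f(u)), the supremum taken over a finite grid."""
    def f_star(s):
        return np.max(s * u_grid - f(u_grid))
    return f_star

eps = 0.5
v = lambda u: np.maximum(np.abs(u) - eps, 0.0)  # eps-insensitive loss
u_grid = np.linspace(-50.0, 50.0, 200_001)      # fine grid, step 5e-4
v_star = numeric_conjugate(v, u_grid)

# Inside the unit interval the conjugate matches eps*|s| ...
for s in (-1.0, -0.3, 0.0, 0.7, 1.0):
    assert abs(v_star(s) - eps * abs(s)) < 1e-3
# ... and outside it the finite-grid value grows without bound
# (the true conjugate is +infinity there).
assert v_star(1.5) > 10.0
```

The finiteness region of the conjugate ([-1, 1] here) is exactly what turns into box constraints on the dual variables in the resulting Support Vector Regression dual.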
|