241 |
Stochastic approximation and least-squares regression, with applications to machine learning / Approximation stochastique et régression par moindres carrés : applications en apprentissage automatique
Flammarion, Nicolas, 24 July 2017 (has links)
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. In the first main contribution, we provide a unified framework for optimizing non-strongly convex quadratic functions, which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution aims at obtaining the optimal prediction error rates for least-squares regression, both in terms of dependence on the noise of the problem and of forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex regularizer. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework.
We propose its first theoretical analysis, a novel sparse extension, a natural extension for the multi-label scenario and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We propose a statistical approach to this problem where the matrix is observed with noise and study the corresponding minimax rate of estimation. We also suggest a computationally efficient estimator whose performance is studied both theoretically and experimentally.
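The averaged stochastic gradient descent that underlies the first two contributions can be sketched in a few lines. This is an illustrative toy (dimensions, step size and noise level are invented), not the thesis's accelerated-averaged algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: y = <theta*, x> + noise, observed via a stochastic oracle.
d, n_steps = 5, 20000
theta_star = rng.normal(size=d)

def oracle(theta):
    """Return a stochastic gradient of the quadratic loss at theta (one fresh sample)."""
    x = rng.normal(size=d)
    y = x @ theta_star + 0.1 * rng.normal()
    return (x @ theta - y) * x

# Averaged SGD with a constant step size: iterate averaging is what yields the
# optimal statistical rate for least squares (step size and horizon illustrative).
gamma = 0.05
theta = np.zeros(d)
theta_bar = np.zeros(d)
for t in range(1, n_steps + 1):
    theta -= gamma * oracle(theta)
    theta_bar += (theta - theta_bar) / t  # running average of the iterates

print(np.linalg.norm(theta_bar - theta_star))  # small residual error
```

The averaged iterate `theta_bar`, not the last iterate, is the quantity whose prediction error the thesis analyzes.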
|
242 |
Model-based co-design of sensing and control systems for turbo-charged, EGR-utilizing spark-ignited engines
Xu Zhang (9976460), 01 March 2021 (has links)
Stoichiometric air-fuel ratio (AFR) and air/EGR flow control are essential control problems in today's advanced spark-ignited (SI) engines, enabling effective application of the three-way catalyst (TWC) and generation of the required torque. External exhaust gas recirculation (EGR) can be used in SI engines to help mitigate knock, reduce enrichment and improve efficiency [1]. However, the introduction of the EGR system increases the complexity of stoichiometric engine-out lambda and torque management, particularly for high-BMEP commercial vehicle applications. This thesis develops advanced frameworks for sensing and control architecture design to enable robust air handling system management, stoichiometric cylinder air-fuel ratio (AFR) control and three-way catalyst emission control.

The first part of this thesis derives a physically based, control-oriented model for turbocharged SI engines utilizing cooled EGR and flexible VVA systems. The model includes the impact of modulating any combination of 11 actuators, including the throttle valve, bypass valve, fuel injection rate, waste-gate, high-pressure (HP) EGR, low-pressure (LP) EGR, number of firing cylinders, and intake and exhaust valve opening and closing timings. A new cylinder-out gas composition estimation method, based on the cylinder charge flow, injected fuel amount, residual gas mass and intake gas compositions, is proposed in this model. This method serves as a critical input for estimating the exhaust manifold gas compositions. A new flow-based turbine-out pressure modeling strategy is also proposed as a necessary input for estimating the LP EGR flow rate. Incorporating these two sub-models, the control-oriented model is capable of capturing the dynamics of pressure, temperature and gas compositions in the manifolds and the cylinder. Thirteen physical parameters, including the intake, boost and exhaust manifold pressures, temperatures, unburnt and burnt mass fractions, as well as the turbocharger speed, are defined as state variables. Outputs such as flow rates and AFR are modeled as functions of selected states and inputs. The control-oriented model is validated against a high-fidelity SI engine GT-Power model over a range of operating conditions. The novelty of this modeling work lies in the development and incorporation of the cylinder-out gas composition estimation method and the turbine-out pressure model.

The second part of the work outlines a novel sensor selection and observer design algorithm for linear time-invariant systems with both process and measurement noise, based on H2 optimization of the tradeoff between observer error and the number of required sensors. The optimization problem is relaxed to a sequence of convex optimization problems that minimize a cost function consisting of the H2 norm of the observer error and the weighted l1 norm of the observer gain. An LMI formulation allows for efficient solution via semi-definite programming. The approach is applied here, for the first time, to a turbocharged spark-ignited (SI) engine using exhaust gas recirculation, to determine optimal sensor sets for real-time intake manifold burnt gas mass fraction estimation. Simulation with the candidate estimator embedded in a high-fidelity engine GT-Power model demonstrates that the optimal sensor sets selected by this algorithm yield the best H2 estimation performance. Sensor redundancy is also analyzed based on the algorithm results. The algorithm is applicable to any type of modern internal combustion engine and reduces the system design time and experimental effort typically required for selecting optimal sensor sets.

The third study develops a model-based sensor selection and controller design framework for robust control of air-fuel ratio (AFR), air flow and EGR flow in turbocharged stoichiometric engines using low-pressure EGR, waste-gate turbocharging, intake throttling and variable valve timing. Model uncertainties, disturbances, transport delays, and sensor and actuator characteristics are considered in this framework. Based on the required control performance and candidate sensor sets, the framework synthesizes an H-infinity feedback controller and evaluates the viability of each candidate sensor set through analysis of the structured singular value μ of the closed-loop system in the frequency domain. The framework can also be used to determine whether relaxing the controller performance requirements enables the use of a simpler (less costly) sensor set. The sensor selection and controller co-design approach is applied here, for the first time, to turbocharged engines using exhaust gas recirculation, and high-fidelity GT-Power simulations are used to validate it. The novelty of this part is twofold: (1) a novel control strategy for stoichiometric SI engines using low-pressure EGR that simultaneously satisfies both the AFR and air/EGR-path control performance requirements; and (2) a parametric method for simultaneously selecting the sensors and designing the controller, proposed here for the first time for internal combustion engines.

In the fourth part of the work, a novel two-loop estimation and control strategy is proposed to reduce the emissions of the three-way catalyst (TWC). In the outer loop, an estimator consisting of a TWC model and an extended Kalman filter estimates the current TWC fractional oxygen state (FOS), and a robust controller regulates the TWC FOS by manipulating the desired engine λ. The outer-loop estimator and controller are combined with an existing inner-loop controller, which tracks the desired λ value; the inner loop's control inaccuracies are accounted for and compensated by the outer-loop robust controller. This strategy achieves good emission reduction performance and outperforms both the constant-λ strategy and the conventional two-loop switch-type control strategy.
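The error-versus-sensor-count tradeoff at the heart of the second part can be illustrated on a toy LTI system by enumerating sensor subsets and computing the steady-state Kalman error covariance for each; the thesis replaces this brute-force enumeration with an H2 / weighted-l1 convex relaxation, and all matrices below are invented stand-ins, not the engine model:

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy stable LTI system x+ = A x + w, with three candidate scalar sensors
# y_i = C_all[i] x + v_i. All values are illustrative.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
C_all = np.eye(3)        # one candidate sensor per state
Q = 0.01 * np.eye(3)     # process-noise covariance
r = 0.1                  # measurement-noise variance per sensor

def estimation_error(sensors):
    """Trace of the steady-state Kalman error covariance for a sensor subset."""
    C = C_all[list(sensors), :]
    R = r * np.eye(len(sensors))
    # Filter Riccati equation via the standard control/estimation duality.
    P = solve_discrete_are(A.T, C.T, Q, R)
    return float(np.trace(P))

# Enumerate subsets to expose the tradeoff the thesis optimizes convexly.
for k in (1, 2, 3):
    best = min(itertools.combinations(range(3), k), key=estimation_error)
    print(k, best, round(estimation_error(best), 4))
```

Enumeration scales exponentially in the number of candidate sensors, which is exactly why a convex relaxation with a sparsity-promoting l1 term on the observer gain is attractive.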
|
243 |
Optimization framework for large-scale sparse blind source separation / Stratégies d'optimisation pour la séparation aveugle de sources parcimonieuses grande échelle
Kervazo, Christophe, 04 October 2019 (has links)
During the last decades, Blind Source Separation (BSS) has become a key analysis tool for studying multi-valued data. The objective of this thesis is to focus on large-scale settings, for which most classical algorithms fail. More specifically, it is subdivided into four sub-problems rooted in the large-scale sparse BSS issue: i) introduce a mathematically sound, robust sparse BSS algorithm that does not require any relaunch (despite a difficult hyper-parameter choice); ii) introduce a method able to maintain high-quality separations even when a large number of sources needs to be estimated; iii) make a classical sparse BSS algorithm scalable to large-scale datasets; and iv) extend the approach to the non-linear sparse BSS problem. The methods we propose are extensively tested on both simulated and realistic experiments to demonstrate their quality.
In-depth interpretations of the results are proposed.
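A minimal sketch of sparse BSS by alternating least squares with soft-thresholding (a GMCA-style scheme, illustrative only; the thesis's algorithms, thresholding strategies and parameter choices differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mixture X = A S with sparse sources (all sizes and values invented).
n_src, n_obs, n_samp = 3, 5, 400
S_true = rng.normal(size=(n_src, n_samp)) * (rng.random((n_src, n_samp)) < 0.1)
A_true = rng.normal(size=(n_obs, n_src))
X = A_true @ S_true

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

A = rng.normal(size=(n_obs, n_src))
for _ in range(50):
    # Sparse source update: least squares followed by soft-thresholding.
    S = soft(np.linalg.lstsq(A, X, rcond=None)[0], 0.01)
    # Mixing-matrix update: least squares, columns renormalized to remove
    # the scale indeterminacy inherent to BSS.
    A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
    A /= np.linalg.norm(A, axis=0, keepdims=True)

print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))  # small relative residual
```

Schemes of this kind are sensitive to the threshold schedule and to initialization, which is precisely the robustness issue the first sub-problem of the thesis addresses.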
|
244 |
Proximal Splitting Methods in Nonsmooth Convex Optimization
Hendrich, Christopher, 17 July 2014 (links)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems.
After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a full splitting, in the sense that the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, suitably reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it.
Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators.
The last part of this thesis deals with various numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in numerous applications of interest.
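As a concrete instance of proximal splitting, a Douglas–Rachford iteration on a LASSO-type problem can be sketched as follows; this toy (sizes, regularization weight and step all illustrative) uses the plain Douglas–Rachford scheme, not the primal-dual variants developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimize f(x) + g(x) with f(x) = 0.5 ||Ax - b||^2 and g(x) = lam ||x||_1
# via Douglas-Rachford splitting (all problem data invented).
m, n, lam, gamma = 30, 60, 0.1, 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:4] = (1.0, -2.0, 1.5, 3.0)
b = A @ x_true

# prox of gamma*f: solve the linear system (I + gamma A^T A) x = v + gamma A^T b.
M = np.eye(n) + gamma * A.T @ A
def prox_f(v):
    return np.linalg.solve(M, v + gamma * A.T @ b)

def prox_g(v):
    # Soft-thresholding: the prox of the (scaled) l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

z = np.zeros(n)
for _ in range(500):
    x = prox_f(z)
    y = prox_g(2 * x - z)  # reflected step
    z = z + y - x          # Douglas-Rachford update

print(np.linalg.norm(A @ x - b))  # small data-fit residual at the regularized solution
```

The defining feature, as in the thesis, is that each nonsmooth term is handled only through its own proximal (resolvent) step, never jointly.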
|
245 |
Addressing Challenges in Graphical Models: MAP estimation, Evidence, Non-Normality, and Subject-Specific Inference
Sagar K N Ksheera (15295831), 17 April 2023 (links)
Graphs are a natural choice for understanding the associations between variables, and assuming a probabilistic embedding for the graph structure leads to a variety of graphical models that enable us to understand these associations even further. In the realm of high-dimensional data, where the number of associations between interacting variables is far greater than the available number of data points, the goal is to infer a sparse graph. In this thesis, we make contributions in the domain of Bayesian graphical models, where our prior belief on the graph structure, encoded via uncertainty on the model parameters, enables the estimation of sparse graphs.

We begin with the Gaussian Graphical Model (GGM) in Chapter 2, one of the simplest and most famous graphical models, where the joint distribution of interacting variables is assumed to be Gaussian. In GGMs, the conditional independence among variables is encoded in the inverse of the covariance matrix, also known as the precision matrix. Under a Bayesian framework, we propose a novel prior-penalty dual, the 'graphical horseshoe-like' prior and penalty, to estimate the precision matrix. We also establish the posterior convergence of the precision matrix estimate and the frequentist consistency of the maximum a posteriori (MAP) estimator.

In Chapter 3, we develop a general framework based on local linear approximation for MAP estimation of the precision matrix in GGMs. This framework holds for any graphical prior whose element-wise priors can be written as a Laplace scale mixture. As an application of the framework, we perform MAP estimation of the precision matrix under the graphical horseshoe penalty.

In Chapter 4, we focus on graphical models where the joint distribution of interacting variables cannot be assumed Gaussian. Motivated by quantile graphical models, in which the Gaussian likelihood assumption is relaxed, we draw inspiration from the domain of precision medicine, where personalized inference is crucial for tailoring individual-specific treatment plans. With the aim of inferring Directed Acyclic Graphs (DAGs), we propose a novel quantile DAG learning framework in which the DAGs depend on individual-specific covariates, making personalized inference possible. We demonstrate the potential of this framework in the regime of precision medicine by applying it to infer protein-protein interaction networks in lung adenocarcinoma and lung squamous cell carcinoma.

Finally, we conclude this thesis in Chapter 5 by developing a novel framework to compute the marginal likelihood in a GGM, addressing a longstanding open problem. Under this framework, we can compute the marginal likelihood for a broad class of priors on the precision matrix, where the element-wise priors on the diagonal entries can be written as gamma or scale mixtures of gamma random variables and those on the off-diagonal terms can be represented as normal or scale mixtures of normal. This result paves the way for model selection using Bayes factors and for tuning prior hyper-parameters.
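The sparse-precision-matrix estimation problem of Chapter 2 can be illustrated with the graphical lasso, a penalized frequentist analogue of the Bayesian horseshoe-like approach (the code below is a generic sketch, not the thesis's method):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(3)

# Build a small sparse precision matrix: zero entries encode conditional
# independence between the corresponding pair of variables.
Theta = np.eye(4)
Theta[0, 1] = Theta[1, 0] = 0.4
Theta[2, 3] = Theta[3, 2] = 0.4
Sigma = np.linalg.inv(Theta)

# Sample data and estimate a sparse precision matrix with the graphical lasso
# (regularization strength chosen by cross-validation).
X = rng.multivariate_normal(np.zeros(4), Sigma, size=2000)
model = GraphicalLassoCV().fit(X)
P = model.precision_

print(np.round(P, 2))  # cross-block entries such as (0, 2) shrink toward zero
```

The zero pattern of the estimated precision matrix is the inferred graph; the thesis's horseshoe-like prior plays the role that the l1 penalty plays here, with better behavior for large signals.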
|
246 |
EFFICIENT FILTER DESIGN AND IMPLEMENTATION APPROACHES FOR MULTI-CHANNEL CONSTRAINED ACTIVE SOUND CONTROL
Yongjie Zhuang (6730208), 21 July 2023 (links)
In many practical multi-channel active sound control (ASC) applications, such as active noise control (ANC), various constraints need to be satisfied, such as the robust stability constraint, the noise amplification constraint, controller output power constraints, etc. One way to enforce these constraints is to add a regularization term to the Wiener filter formulation, which, by tuning only a single parameter, can over-satisfy many constraints and degrade ANC performance. Another approach for non-adaptive ANC filter design that can produce better ANC performance is to directly solve the constrained optimization problem formulated under the H2/H-infinity control framework. However, such a formulation does not result in a convex optimization problem, and its practicality can be limited by the significant computation time required to solve it. In this dissertation, the traditional H2/H-infinity formulation is convexified and a global minimum is guaranteed. It is then further reformulated as a cone program and simplified by exploiting the problem structure in its dual form, yielding a more numerically efficient and stable formulation. A warm-starting strategy is also proposed to further reduce the required iterations. Results show that, compared with traditional methods, the proposed method is more reliable and the computation time can be reduced from the order of days to seconds. When the acoustic feedback path is not strong enough to cause instability, only constraints that prevent noise amplification outside the desired noise control band are needed; a singular vector filtering method is proposed to maintain satisfactory noise control performance in the desired noise reduction bands while mitigating noise amplification.

The proposed convex conic formulation can be used for a wide range of ASC applications. For example, the improvement in numerical efficiency and stability makes it possible to apply the proposed method to adaptive ANC filter design. Results show that, compared with the conventional constrained adaptive ANC method (leaky FxLMS), the proposed method achieves a faster convergence rate and better steady-state noise control performance. The proposed conic method can also be used to design room equalization filters for sound field reproduction and hear-through filters for earphones.

Besides efficient filter design methods, efficient filter implementation methods are also developed to reduce the real-time computation needed to run the designed control filters. A polyphase-structure-based filter design and implementation method is developed for ANC systems that reduces the computational load of high-sampling-rate real-time filter implementation without introducing additional time delay. Results show that, compared with various traditional low-sampling-rate implementations, the proposed method significantly improves noise control performance. Compared with the non-polyphase high-sampling-rate method, the real-time computations that scale with the sampling rate grow linearly rather than quadratically. Another efficient implementation approach is to use an infinite impulse response (IIR) filter structure instead of a finite impulse response (FIR) structure. A stable IIR filter design approach that does not require computing and relocating poles is adapted to ANC applications. Results demonstrate that the proposed method achieves better fitting accuracy and noise control performance in high-order applications.
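The single-knob regularized Wiener design mentioned at the start of the abstract can be sketched on synthetic statistics; all signals and regularization values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy single-channel Wiener design: w = (R + delta I)^{-1} p, where R is the
# reference autocorrelation matrix and p the cross-correlation vector.
# delta is the single regularization knob that, per the abstract, tends to
# over-satisfy constraints when turned up.
L = 16
x = rng.normal(size=5000)                            # reference signal
d = np.convolve(x, rng.normal(size=8), mode="same")  # desired signal (toy path)

X = np.lib.stride_tricks.sliding_window_view(x, L)
R = X.T @ X / len(X)
p = X.T @ d[L - 1:] / len(X)

for delta in (0.0, 0.1, 1.0):
    w = np.linalg.solve(R + delta * np.eye(L), p)
    mse = np.mean((d[L - 1:] - X @ w) ** 2)
    print(delta, round(float(np.linalg.norm(w)), 3), round(float(mse), 3))
# Larger delta shrinks ||w|| (limiting controller effort) but raises the MSE,
# which is exactly the blunt tradeoff the conic formulation avoids.
```

Because one scalar delta must cover stability, amplification and output-power constraints at once, performance is sacrificed; the dissertation's conic formulation enforces each constraint explicitly instead.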
|
247 |
Optimisation et Auto-Optimisation dans les réseaux LTE / Optimization and Self-Optimization in LTE-Advanced Networks
Tall, Abdoulaye, 17 December 2015 (links)
The mobile network of Orange in France comprises more than 100 000 2G, 3G and 4G antennas over several frequency bands, not to mention the many femto-cells deployed for deep-indoor coverage. These numbers will continue to increase in order to address the customers' exponentially growing demand for mobile data. This illustrates the challenge faced by mobile operators: operating such a complex network with low Operational Expenditures (OPEX) in order to stay competitive. This thesis is about leveraging the Self-Organizing Network (SON) concept to reduce this complexity by automating repetitive or complex tasks. We specifically propose automatic optimization algorithms for scenarios related to network densification, using either small cells or Active Antenna Systems (AASs) for Vertical Sectorization (VeSn), Virtual Sectorization (ViSn) and multilevel beamforming. Problems such as load balancing with limited-capacity backhaul and interference coordination, either in the time domain (eICIC) or in the frequency domain, are tackled. We also propose optimal activation algorithms for VeSn and ViSn, since their activation is not always beneficial. We make use of results from stochastic approximation and convex optimization for the mathematical formulation of the problems and their solutions. We also propose a generic methodology for the coordination of multiple SON algorithms running in parallel, using results from concave game theory and Linear Matrix Inequality (LMI)-constrained optimization.
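The stochastic-approximation machinery behind these SON algorithms can be illustrated with a toy Robbins-Monro load-balancing loop; the traffic model and gains below are invented, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy SON load-balancing loop: a Robbins-Monro iteration adjusts a handover
# offset theta so that the (noisy) load difference between two cells vanishes.
def observed_load_diff(theta):
    # Cell 1 sheds traffic to cell 2 as theta grows; measurements are noisy.
    load1 = 0.8 - 0.1 * theta + 0.02 * rng.normal()
    load2 = 0.4 + 0.1 * theta + 0.02 * rng.normal()
    return load1 - load2

theta = 0.0
for n in range(1, 2001):
    # Classical decreasing step sizes: the sum diverges, the sum of squares
    # converges, so the iterate averages out the measurement noise.
    step = 5.0 / n
    theta += step * observed_load_diff(theta)

print(round(theta, 2))  # approaches the balancing offset (2.0 in this toy model)
```

Each SON function in the thesis is formulated as such a stochastic root-finding or optimization problem, driven only by noisy network measurements.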
|
248 |
Application of the Duality Theory
Lorenz, Nicole, 15 August 2012 (links) (PDF)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning.
First we give some notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory, and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature.
In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above.
The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization.
We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength by using a special Support Vector Regression problem.
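A minimal run of the classical ε-insensitive Support Vector Regression (the problem whose dual the thesis derives and generalizes) using an off-the-shelf solver; the data here are synthetic, unlike the concrete-strength data set used in the thesis:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)

# Toy regression task: a noisy linear relation y = 2x.
X = rng.uniform(0, 5, size=(200, 1))
y = 2.0 * X.ravel() + 0.05 * rng.normal(size=200)

# Epsilon-insensitive loss: errors smaller than epsilon are not penalized;
# C trades off flatness of the solution against data fit.
model = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)

print(float(model.predict([[2.0]])[0]))  # close to 4.0
```

Under the hood such solvers work on exactly the kind of conjugate dual problem the thesis studies, which is where the kernel matrix (and its invertibility, relaxed in the thesis) enters.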
|
249 |
Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning
Lorenz, Nicole, 28 June 2012 (links)
|