11

Peak to Average Ratio Reduction in Wireless OFDM Communication Systems

HAIDER, KAMRAN January 2006
Future mobile communication systems reaching for ever-increasing data rates require higher bandwidths than those typically used in today's cellular systems. At higher bandwidths, the radio channel, which is flat-fading at low bandwidth, becomes frequency selective and time dispersive. Due to its inherent robustness against time dispersion, Orthogonal Frequency Division Multiplexing (OFDM) is an attractive candidate for such future mobile communication systems. OFDM partitions the available bandwidth into many subchannels of much lower bandwidth. Owing to its low bandwidth, each narrowband subchannel experiences almost flat fading, which, in addition to the above-mentioned robustness, also leads to simple implementations. However, one potential drawback of OFDM modulation is the high Peak to Average Ratio (PAR) of the transmitted signal: the signal transmitted by the OFDM system is the superposition of all signals transmitted in the narrowband subchannels. By the central limit theorem, the transmit signal therefore has an approximately Gaussian distribution, leading to high peak values compared to the average power. A system design that does not take this into account will have a high clip rate: each signal sample beyond the saturation limit of the power amplifier suffers either clipping to this limit value or other non-linear distortion, both of which create additional bit errors in the receiver. One way to avoid clipping is to design the system for very high signal peaks; however, this leads to very high power consumption (since the power amplifier must have high supply rails) and to complex power amplifiers. The preferred solution is therefore to apply digital signal processing that reduces the high peak values in the transmitted signal, thus avoiding clipping. Such methods are commonly referred to as PAR reduction. PAR reduction methods can be categorized into transparent methods, where the receiver is not aware of the reduction scheme applied by the transmitter, and non-transparent methods, where the receiver needs to know the PAR algorithm applied by the transmitter. This master thesis focuses on transparent PAR reduction algorithms. The performance of the PAR reduction methods is analysed both with and without the PSD constraint, and the effect on the data tones of the error power caused by clipping is investigated.
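As a concrete illustration of the effect described above, here is a minimal NumPy sketch (ours, not the thesis') that generates random QPSK-loaded OFDM symbols, measures their peak-to-average ratio, and estimates the clip rate for a hypothetical amplifier saturating 8 dB above the average power; the subcarrier count, constellation, and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256                # number of OFDM subcarriers (illustrative)
num_symbols = 2000     # OFDM symbols to simulate

# Random QPSK data on every subcarrier (unit average power).
bits = rng.integers(0, 2, size=(num_symbols, N, 2))
X = (2 * bits[..., 0] - 1 + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# Time-domain OFDM signal: superposition of all subcarriers via the IFFT,
# rescaled so the average power is one.
x = np.fft.ifft(X, axis=1) * np.sqrt(N)

# Peak-to-average power ratio per symbol, in dB.
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2, axis=1)
                        / np.mean(np.abs(x) ** 2, axis=1))

# Empirical clip rate for an amplifier saturating 8 dB above average power.
threshold_db = 8.0
clip_rate = np.mean(papr_db > threshold_db)
print(f"median PAPR: {np.median(papr_db):.1f} dB, "
      f"P(PAPR > {threshold_db} dB) = {clip_rate:.4f}")
```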
12

Analysis and design of an active-set method for box-constrained optimization

Jan Marcel Paiva Gentil 23 June 2010
Box-constrained optimization problems are of great importance, not only because they arise naturally in the formulation of practical problems, but also because they appear as subproblems of penalty and Augmented Lagrangian methods for solving nonlinear programming problems. This work studies a recently introduced active-set method for box-constrained optimization called ASA and compares it to the latest version of GENCAN, which is also an active-set method. For that purpose, we designed a robust and thorough testing methodology intended to remedy many of the widely criticized aspects of prior works. From it we could draw conclusions that led to further improvement of GENCAN, as later became evident by means of the same methodology introduced here.
13

Real-time Model Predictive Control with Complexity Guarantees Applied on a Truck and Trailer System

Bourelius, Edvin January 2022
In model predictive control an optimization problem is solved in every time step, which in real-time applications must happen within a limited time frame. When MPC is deployed on embedded hardware to control fast-changing systems, it is important to use efficient solvers and crucial to guarantee that the optimization problem can be solved within that time frame. In this thesis, a path-following controller that tracks a motion plan produced by a motion planner is implemented to steer a truck and trailer system. To solve the optimization problems, which in this thesis are quadratic programs, three solvers are employed: DAQP, qpOASES and OSQP. The computation times of the active-set solvers DAQP and qpOASES and of the operator-splitting solver OSQP are compared; the controller using DAQP was found to be the fastest and therefore the best suited for this application of real-time model predictive control. A certification framework for the active-set method is used to give complexity guarantees on the controller using DAQP, and the exact worst-case number of iterations when the truck and trailer system follows a straight path is presented. Furthermore, initial experiments show that, given enough computational time and power, the exact iteration complexity can be determined for every quadratic program that can appear in the controller.
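For a sense of the per-time-step workload, the following toy sketch solves and then warm-starts a QP using OSQP's Python interface (assuming the standard `osqp` package; DAQP and qpOASES expose analogous setup/solve/update workflows). The matrices are an arbitrary stand-in for a condensed MPC problem.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# Toy QP in OSQP's standard form: min 0.5 z'Pz + q'z  s.t.  l <= Az <= u.
# In linear MPC only q, l, u typically change from one time step to the next.
P = sp.csc_matrix(np.array([[4.0, 1.0], [1.0, 2.0]]))
q = np.array([1.0, 1.0])
A = sp.csc_matrix(np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]))
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

solver = osqp.OSQP()
solver.setup(P, q, A, l, u, verbose=False)
res = solver.solve()
print("first step: ", res.x)

# Next time step: update only what changed and re-solve (warm-started).
solver.update(q=np.array([2.0, 1.0]))
print("second step:", solver.solve().x)
```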
14

Applications of Integer Quadratic Programming in Control and Communication

Axehill, Daniel January 2005
<p>The main topic of this thesis is integer quadratic programming with applications to problems arising in the areas of automatic control and communication. One of the most widespread modern control principles is the discrete-time method Model Predictive Control (MPC). The main advantage with MPC, compared to most other control principles, is that constraints on control signals and states can easily be handled. In each time step, MPC requires the solution of a Quadratic Programming (QP) problem. To be able to use MPC for large systems, and at high sampling rates, optimization routines tailored for MPC are used. In recent years, the range of application of MPC has been extended from constrained linear systems to so-called hybrid systems. Hybrid systems are systems where continuous dynamics interact with logic. When this extension is made, binary variables are introduced in the problem. As a consequence, the QP problem has to be replaced by a far more challenging Mixed Integer Quadratic Programming (MIQP) problem. Generally, for this type of optimization problems, the computational complexity is exponential in the number of binary optimization variables. In modern communication systems, multiple users share a so-called multi-access channel, where the information sent by different users is separated by using almost orthogonal codes. Since the codes are not completely orthogonal, the decoded information at the receiver is slightly correlated between different users. Further, noise is added during the transmission. To estimate the information originally sent, a maximum likelihood problem involving binary variables is solved. The process of simultaneously estimating the information sent by multiple users is called multiuser detection. In this thesis, the problem to efficiently solve MIQP problems originating from MPC is addressed. Two different algorithms are presented. First, a polynomial complexity preprocessing algorithm for binary quadratic programming problems is presented. By using the algorithm, some, or all, binary variables can be computed efficiently already in the preprocessing phase. In simulations, the algorithm is applied to unconstrained MPC problems with a mixture of real and binary control signals. It has also been applied to the multiuser detection problem, where simulations have shown that the bit error rate can be significantly reduced by using the proposed algorithm as compared to using common suboptimal algorithms. Second, an MIQP algorithm tailored for MPC is presented. The algorithm uses a branch and bound method where the relaxed node problems are solved by a dual active set QP algorithm. In this QP algorithm, the KKT-systems are solved using Riccati recursions in order to decrease the computational complexity. Simulation results show that both the QP solver and the MIQP solver proposed have lower computational complexity than corresponding generic solvers.</p> / Report code: LiU-TEK-LIC-2005:71.
15

Safe optimization algorithms for variable selection and hyperparameter tuning

Ndiaye, Eugene 04 October 2018
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in large-dimensional settings. They can often be expressed as solutions of regularized empirical risk minimization, which generally leads to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although this way of including prior information has considerable advantages, it unfortunately introduces many numerical difficulties, both in solving the underlying optimization problem and in calibrating the level of regularization. Addressing these issues has been at the heart of this thesis.

A recently introduced technique, called "Screening Rules", proposes to ignore some variables during the optimization process by exploiting the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure is guaranteed not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and introduce the "Gap Safe Screening Rules". They yield significant gains in computational time thanks to the dimensionality reduction they induce. In addition, they can easily be inserted into iterative algorithms and apply to a larger number of problems than previous methods.

To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of solutions as a function of the regularization parameter. However, they exhibit numerical instabilities due to repeated matrix inversions and are often expensive in large dimension; moreover, a worst-case analysis shows that their exact complexity is exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent these drawbacks by approximating the curve of solutions. In this thesis, we revisit approximation techniques for regularization paths given a predefined tolerance, and we analyse their complexity with respect to the regularity of the loss functions involved. From this, we propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method for the regularization parameter that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data.

Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the variance of the observations, which is difficult to use in practice because the variance of the model is often unknown. In such cases, it is possible to jointly optimize the regression parameter and the noise level. These concomitant estimates, which have appeared in the literature under the names Scaled Lasso and Square-Root Lasso, provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although they represent important theoretical and practical advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators, as well as a faster algorithm to obtain them.
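Stated compactly, the Gap Safe sphere test for the Lasso needs only the residual, a dual-feasible point, and the duality gap. The following is an illustrative sketch (ours, not the thesis code), with random data and λ = 0.9·λ_max as arbitrary choices; at a solver's iterates the same test would simply be re-evaluated at the current β.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 500
X = rng.standard_normal((n, p))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n)
lam = 0.9 * np.max(np.abs(X.T @ y))    # regularization level (illustrative)

def gap_safe_screen(X, y, beta, lam):
    """Gap Safe sphere test for the Lasso  min 0.5||y - Xb||^2 + lam*||b||_1:
    returns a mask of features whose optimal coefficient is provably zero."""
    r = y - X @ beta
    theta = r / max(lam, np.max(np.abs(X.T @ r)))     # dual-feasible point
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * ((theta - y / lam) ** 2).sum()
    radius = np.sqrt(2 * max(primal - dual, 0.0)) / lam
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

discard = gap_safe_screen(X, y, np.zeros(p), lam)     # test at beta = 0
print(f"safely discarded {discard.sum()} of {p} features before solving")
```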
16

Practical Implementations Of The Active Set Method For Support Vector Machine Training With Semi-definite Kernels

Sentelle, Christopher 01 January 2014
The Support Vector Machine (SVM) is a popular binary classification model due to its superior generalization performance, relative ease of use, and the applicability of kernel methods. SVM training entails solving an associated quadratic programming (QP) problem that presents significant challenges in terms of speed and memory for very large datasets; research on numerical optimization techniques tailored to SVM training is therefore vast. Slow training times are especially of concern considering that re-training is often necessary at several values of the model's regularization parameter, C, as well as of the associated kernel parameters. The active-set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, as is the case for the ℓ1-loss SVM formulation. There has recently been renewed interest in the active-set method as a technique for exploring the entire SVM regularization path: it can compute the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to train at a single value of C with traditional methods. Unfortunately, most active-set implementations used for SVM training require positive definite kernels, and those that do allow semi-definite kernels tend to be complex and can exhibit instability or, worse, fail to converge. This severely limits applicability, since it precludes the use of the linear kernel, can be an issue when duplicate data points exist, and rules out low-rank kernel approximations that improve tractability for large datasets. The difficulty, in the case of a semi-definite kernel, arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed from the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix; unfortunately, this adds significant complexity to the implementation and, if care is not taken, can lead to numerical instability or failure to converge.

This research shows that the singular KKT system can be avoided altogether with simple modifications to the active-set method. The result is a practical, easy-to-implement active-set method that neither needs to detect the rank of the KKT matrix explicitly nor modify factorization or solution methods based upon that rank. Methods are given, both for conventional SVM training and for computing the regularization path, that are simple and numerically stable. First, a revised simplex method is efficiently implemented for SVM training (SVM-RSQP) with semi-definite kernels and shown to outperform competing active-set implementations in training time while performing on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, without requiring external solvers; theoretical analysis reveals new insights into the nature of path-following algorithms. Finally, a method is given for computing the approximate regularization path and approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP), providing orders-of-magnitude speed-ups relative to a traditional grid search with re-training at each parameter value. Surprisingly, even when the solution along the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single value. New insights are given concerning the limiting behaviors of the regularization and kernel paths, as well as the use of low-rank kernel approximations.
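The failure mode addressed here is easy to reproduce. In this small sketch of ours, a linear kernel together with a duplicated training point makes the Gram matrix rank-deficient, and the active-set KKT matrix built from it becomes singular, which is exactly what naive factorizations stumble over.

```python
import numpy as np

# Two identical training points under a linear kernel.
X = np.array([[1.0, 2.0],
              [1.0, 2.0],      # exact duplicate of the first point
              [3.0, 0.5]])
y = np.array([1.0, 1.0, -1.0])

K = X @ X.T                             # linear kernel: K is only PSD here
Q = (y[:, None] * y[None, :]) * K       # Hessian of the SVM dual

# KKT matrix for an active set containing all three points:
#   [ Q   y ]
#   [ y'  0 ]
KKT = np.block([[Q, y[:, None]],
                [y[None, :], np.zeros((1, 1))]])

print("rank of K:  ", np.linalg.matrix_rank(K), "of", K.shape[0])
print("rank of KKT:", np.linalg.matrix_rank(KKT), "of", KKT.shape[0])
```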
18

Topics on linearly-constrained optimization

Andretta, Marina 24 July 2008
Augmented Lagrangian methods are widely used to solve general nonlinear programming problems. In these methods, one can split the set of constraints into two groups: easy and hard constraints. A constraint is called easy if there is an efficient method available to solve problems subject to that kind of constraint; otherwise, it is called hard. Augmented Lagrangian methods solve, at each iteration, problems subject to the easy constraints while penalizing the hard ones. Linearly constrained problems appear frequently, sometimes as the result of a linear approximation of a problem, sometimes as an augmented Lagrangian subproblem. Therefore, an efficient method for linearly constrained problems is important for the implementation of efficient methods for nonlinear programming. In this thesis, we begin by considering box constraints as the easy set. We introduce BETRA-ESPARSO, a version of BETRA for large-scale problems. BETRA is an active-set method that uses a trust-region strategy to work within the faces and the spectral projected gradient to leave them. To solve the subproblem of each iteration of ALGENCAN (an augmented Lagrangian method), we use either the dense or the sparse version of BETRA, choosing the box-constrained inner solver with rules we developed that take the main characteristics of the problem into account. We then introduce two active-set methods for linearly constrained problems (BETRALIN and GENLIN). These methods use the Partial Spectral Projected Gradient method, developed specifically for this purpose, to change the active set of constraints: projections are computed only onto a subset of the linear constraints, with the aim of making them more efficient. Finally, having introduced a linearly constrained solver, we consider the linear constraints as the easy set. We incorporate BETRALIN and GENLIN into the augmented Lagrangian framework and verify, through numerical experiments, the efficiency and robustness of these methods, which handle linear constraints explicitly and penalize the nonlinear ones.
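To illustrate the spectral projected gradient ingredient used to leave faces, here is a bare-bones sketch (ours, not the authors' code) combining a Barzilai-Borwein step with projection onto a box; production implementations such as those in BETRA add a nonmonotone line search for robustness, and the quadratic test problem is arbitrary.

```python
import numpy as np

def spg_box(grad, x, lo, hi, iters=200, alpha=1.0):
    """Bare-bones spectral projected gradient for  min f(x)  s.t. lo <= x <= hi:
    a Barzilai-Borwein step projected onto the box at every iteration."""
    g = grad(x)
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)        # projected BB step
        s = x_new - x
        x = x_new
        g_new = grad(x)
        yk = g_new - g
        g = g_new
        sy = s @ yk
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # spectral step length
    return x

# Convex quadratic f(x) = 0.5 x'Ax - b'x restricted to the box [0, 1]^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-4.0, 1.0])
x_star = spg_box(lambda x: A @ x - b, np.zeros(2), 0.0, 1.0)
print("solution:", x_star)                            # expected near (0, 0.5)
print("active bounds:", (x_star == 0.0) | (x_star == 1.0))
```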
19

Newton's method and the quadratic penalty function applied to the Optimal Power Flow problem

Costa, Carlos Ednaldo Ueno 18 February 1998
This work presents an approach based on Newton's method associated with the quadratic penalty function and the active-set method for solving the Optimal Power Flow (OPF) problem. The general formulation of the OPF problem is presented, as well as the technique used to solve the resulting system of equations. The factorization of the Lagrangian matrix is carried out by elements instead of by block structures, and the sparsity of the Lagrangian matrix is taken into account. Numerical results of tests performed on systems of 3, 14, 30 and 118 buses are presented to show the efficiency of the method.
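In miniature, the approach reads as follows: the quadratic penalty turns the equality-constrained problem into an unconstrained one, which Newton's method minimizes for an increasing penalty parameter. This sketch of ours substitutes a two-variable toy problem for the power flow equations, and a dense solve for the element-wise sparse factorization used in the thesis.

```python
import numpy as np

# Quadratic penalty for  min f(x)  s.t.  h(x) = 0:
#   Phi(x; mu) = f(x) + (mu/2) * h(x)^2,
# minimized by Newton's method for an increasing sequence of mu.
# Toy stand-in: f(x) = x1^2 + 2*x2^2,  h(x) = x1 + x2 - 1.

def newton_penalty(x, mus=(1.0, 10.0, 100.0, 1000.0), tol=1e-10):
    for mu in mus:
        for _ in range(50):
            h = x[0] + x[1] - 1.0
            grad = np.array([2 * x[0], 4 * x[1]]) + mu * h * np.array([1.0, 1.0])
            hess = np.diag([2.0, 4.0]) + mu * np.outer([1.0, 1.0], [1.0, 1.0])
            x = x + np.linalg.solve(hess, -grad)   # sparse factorization in practice
            if np.linalg.norm(grad) < tol:
                break
    return x

x = newton_penalty(np.zeros(2))
print(x)   # approaches the constrained optimum (2/3, 1/3) as mu grows
```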
20

Dynamic screening: accelerating convex optimization algorithms for sparse regressions

Bonnefoy, Antoine 15 April 2016
Applications in signal processing and machine learning make frequent use of sparse regressions. The resulting convex problems, such as the LASSO, can be solved efficiently by first-order algorithms, which are general and have good convergence properties. However, these algorithms suffer from the dimension of the problem, which determines the cost of each iteration. In this thesis we study approaches, based on screening tests, aimed at reducing this computational cost at the iteration level. Such approaches build on the idea that it is worth dedicating a small computational effort to locating inactive atoms and removing them from the dictionary in a preprocessing stage, so that the regression algorithm, working with a smaller dictionary, converges faster to the solution of the initial problem. We believe there is an even more efficient way to screen the dictionary and obtain greater acceleration: inside each iteration of the regression algorithm, one may exploit the computations the algorithm already performs to obtain, at little extra cost, a new screening test whose elimination effect grows along the iterations. The dictionary is thus screened dynamically instead of statically, once and for all, before the first iteration. Our first contribution is the formalization of this principle and its application to first-order algorithms for solving the LASSO and Group-LASSO. In a second contribution, this general principle is combined with active-set methods, whose goal is also to accelerate the resolution of sparse regressions. Applying the two complementary methods within first-order algorithms leads to substantial acceleration.
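To show the dynamic principle in miniature, the sketch below (our illustration, reusing the Gap Safe test from the sketch under entry 15) re-runs the screening test at every ISTA iterate and drops the certified-inactive atoms, so later iterations multiply by a progressively smaller dictionary. Data and λ are arbitrary; note that a step size computed on the full dictionary remains valid for any sub-dictionary.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_dynamic_screening(X, y, lam, iters=200):
    p = X.shape[1]
    keep = np.arange(p)                   # indices of atoms still in play
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2         # valid for every sub-dictionary
    for _ in range(iters):
        r = y - X @ beta
        # Gap Safe test at the *current* iterate: dynamic, not one-off.
        theta = r / max(lam, np.max(np.abs(X.T @ r)))
        primal = 0.5 * r @ r + lam * np.abs(beta).sum()
        dual = 0.5 * y @ y - 0.5 * lam**2 * ((theta - y / lam) ** 2).sum()
        radius = np.sqrt(2 * max(primal - dual, 0.0)) / lam
        active = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) >= 1.0
        X, beta, keep = X[:, active], beta[active], keep[active]
        # Plain ISTA step on the reduced dictionary.
        beta = soft_threshold(beta + X.T @ (y - X @ beta) / L, lam / L)
    beta_full = np.zeros(p)
    beta_full[keep] = beta
    return beta_full, keep

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.05 * rng.standard_normal(50)
beta, keep = ista_dynamic_screening(X, y, lam=0.5 * np.max(np.abs(X.T @ y)))
print(f"{len(keep)} of 200 atoms survive; {np.count_nonzero(beta)} are nonzero")
```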
