171

A Signal Processing Approach to Voltage-Sensitive Dye Optical Imaging / Une approche mathématique de l'imagerie optique par colorant potentiométrique

Raguet, Hugo 22 September 2014
L’imagerie optique par colorant potentiométrique est une méthode d’enregistrement de l’activité corticale prometteuse, mais dont le potentiel réel est limité par la présence d’artefacts et d’interférences dans les acquisitions. À partir de modèles existant dans la littérature, nous proposons un modèle génératif du signal basé sur un mélange additif de composantes, chacune contrainte dans une union d’espaces linéaires déterminés par son origine biophysique. Motivés par le problème de séparation de composantes qui en découle, qui est un problème inverse linéaire sous-déterminé, nous développons : (1) des régularisations convexes structurées spatialement, favorisant en particulier des solutions parcimonieuses ; (2) un nouvel algorithme proximal de premier ordre pour minimiser efficacement la fonctionnelle qui en résulte ; (3) des méthodes statistiques de sélection de paramètre basées sur l’estimateur non biaisé du risque de Stein. Nous étudions ces outils dans un cadre général, et discutons leur utilité pour de nombreux domaines des mathématiques appliquées, en particulier pour les problèmes inverses ou de régression en grande dimension. Nous développons par la suite un logiciel de séparation de composantes en présence de bruit, dans un environnement intégré adapté à l’imagerie optique par colorant potentiométrique. Finalement, nous évaluons ce logiciel sur différentes données, synthétiques et réelles, montrant des résultats encourageants quant à la possibilité d’observer des dynamiques corticales complexes. / Voltage-sensitive dye optical imaging is a promising recording modality for cortical activity, but its practical potential is limited by many artefacts and interferences in the acquisitions. Inspired by existing models in the literature, we propose a generative model of the signal, based on an additive mixture of components, each one being constrained within a union of linear spaces determined by its biophysical origin. Motivated by the resulting component separation problem, which is an underdetermined linear inverse problem, we develop: (1) convex, spatially structured regularizations, enforcing in particular sparsity on the solutions; (2) a new first-order proximal algorithm for efficiently minimizing the resulting functional; (3) statistical methods for automatic parameter selection, based on Stein's unbiased risk estimate. We study those methods in a general framework, and discuss their potential applications in various fields of applied mathematics, in particular for large-scale inverse problems or regressions. We subsequently develop software for noisy component separation, in an integrated environment adapted to voltage-sensitive dye optical imaging. Finally, we evaluate this software on different data sets, including synthetic and real data, showing encouraging perspectives for the observation of complex cortical dynamics.
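The component separation described above reduces to a sparsity-regularized, underdetermined linear inverse problem. As a rough illustration of the first-order proximal template behind point (2) — not the thesis's structured regularizers, preconditioned algorithm or SURE-based parameter selection — here is a minimal forward-backward (proximal gradient) sketch for a plain l1-penalized least-squares problem; the function names and toy data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: elementwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient_lasso(A, y, lam, n_iter=500):
    # Forward-backward splitting for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # forward (gradient) step on the data term
        x = soft_threshold(x - grad / L, lam / L)   # backward (proximal) step
    return x

# Underdetermined toy problem: 200 unknowns, 50 noisy observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = 3.0
y = A @ x_true + 0.05 * rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, y, lam=0.5)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```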
172

Une méthode de dualité pour des problèmes non convexes du Calcul des Variations / A duality method for non-convex problems in Calculus of Variations

Phan, Tran Duc Minh 28 June 2018
Dans cette thèse, nous étudions un principe général de convexification permettant de traiter certains problèmes variationnels non convexes sur R^d. Grâce à ce principe nous pouvons mettre en oeuvre les puissantes techniques de dualité et ramener de tels problèmes à des formulations de type primal–dual dans R^(d+1), rendant ainsi efficace la recherche numérique de minima globaux. Une théorie de la dualité et des champs de calibration est reformulée dans le cas de fonctionnelles à croissance linéaire. Sous certaines hypothèses, cela nous permet de généraliser un principe d'exclusion découvert par Visintin dans les années 1990 et de réduire le problème initial à la minimisation d'une fonctionnelle convexe sur R^d. Ce résultat s'applique notamment à une classe de problèmes à frontière libre ou multi-phasique donnant lieu à des tests numériques très convaincants au vu de la qualité des interfaces obtenues. Ensuite nous appliquons la théorie des calibrations à un problème classique de surfaces minimales avec frontière libre et établissons de nouveaux résultats de comparaison avec sa variante où la fonctionnelle des surfaces minimales est remplacée par la variation totale. Nous généralisons la notion de calibrabilité introduite par Caselles-Chambolle et al. et construisons explicitement une solution duale pour le problème associé à la seconde fonctionnelle en utilisant un potentiel localement lipschitzien lié à la distance au cut-locus. La dernière partie de la thèse est consacrée aux algorithmes d'optimisation de type primal-dual pour la recherche de points selle, en introduisant de nouvelles variantes plus efficaces en précision et en temps de calcul. Nous avons en particulier introduit une variante semi-implicite de la méthode d'Arrow-Hurwicz qui permet de réduire le nombre d'itérations nécessaires pour obtenir une qualité satisfaisante des interfaces. Enfin nous avons traité la non-différentiabilité structurelle des Lagrangiens utilisés à l'aide d'une méthode géométrique de projection sur l'épigraphe, offrant ainsi une alternative aux méthodes classiques de régularisation. / In this thesis, we study a general principle of convexification to treat certain non-convex variational problems in R^d. Thanks to this principle we are able to enforce the powerful duality techniques and bring back such problems to primal-dual formulations in R^(d+1), thus making the numerical search of a global minimizer efficient. A theory of duality and calibration fields is reformulated in the case of linear-growth functionals. Under suitable assumptions, this allows us to revisit and extend an exclusion principle discovered by Visintin in the 1990s and to reduce the original problem to the minimization of a convex functional in R^d. This result is then applied successfully to a class of free boundary or multiphase problems that we treat numerically, obtaining very accurate interfaces. On the other hand we apply the theory of calibrations to a classical problem of minimal surfaces with free boundary and establish new results related to the comparison with its variant where the minimal surfaces functional is replaced by the total variation. We generalize the notion of calibrability introduced by Caselles-Chambolle et al. and construct explicitly a dual solution for the problem associated with the second functional by using a locally Lipschitzian potential related to the distance to the cut-locus.
The last part of the thesis is devoted to primal-dual optimization algorithms for the search of saddle points, introducing new variants that are more efficient in accuracy and computation time. In particular, we experiment with a semi-implicit variant of the Arrow-Hurwicz method which allows us to drastically reduce the number of iterations necessary to obtain a sharp accuracy of the interfaces. Eventually we tackle the structural non-differentiability of the Lagrangians arising from our method by means of a geometric projection method on the epigraph, thus offering an alternative to all classical regularization methods.
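As a toy illustration of the kind of primal-dual saddle-point iteration referred to above (the basic explicit Arrow-Hurwicz/Chambolle-Pock scheme only, not the semi-implicit variant or the epigraph projection introduced in the thesis), the sketch below solves a 1-D total-variation denoising problem; the step sizes, function names and test signal are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=300, tau=0.25, sigma=0.25):
    # Explicit primal-dual iteration for
    #   min_x 0.5*||x - y||^2 + lam*||D x||_1,   D = forward differences,
    # written as the saddle-point problem min_x max_{|p|<=lam} 0.5||x-y||^2 + <D x, p>.
    n = len(y)
    x = y.copy()
    x_bar = x.copy()
    p = np.zeros(n - 1)                           # dual variable on the "edges"
    for _ in range(n_iter):
        # dual ascent step, then projection onto the box |p| <= lam
        p = np.clip(p + sigma * np.diff(x_bar), -lam, lam)
        # primal descent step: proximal step of 0.5||x - y||^2 at x - tau*D^T p
        Dt_p = np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))   # D^T p
        x_new = (x - tau * Dt_p + tau * y) / (1.0 + tau)
        x_bar = 2.0 * x_new - x                   # extrapolation of the primal variable
        x = x_new
    return x

rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = tv_denoise_1d(y, lam=0.5)
print("denoised levels:", round(x[:50].mean(), 3), "->", round(x[50:].mean(), 3))
```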
173

Total variational optical flow for robust and accurate bladder image mosaicing / Calcul du flot optique dans une approche variationnelle totale pour le mosaïquage robuste et précis d’images de la vessie

Ali, Sharib 04 January 2016
La cystoscopie est l’examen de référence pour le diagnostic et le traitement du cancer de la vessie. Le champ de vue (CdV) réduit des endoscopes complique le diagnostic et le suivi des lésions. Les mosaïques d’images sont une solution à ce problème car elles visualisent des CdV étendus. Toutefois, pour la vessie, le mosaïquage d’images est un véritable défi à cause du faible contraste dans les images, des textures peu prononcées, de la variabilité intra- et inter-patient et des changements d’illumination dans les séquences. Ce défi est également à relever dans d’autres modalités endoscopiques ou dans des scènes non médicales comme les vidéos sous-marines. Dans cette thèse, une énergie variationnelle totale a d’abord été minimisée à l’aide d’un algorithme primal-dual du premier ordre pour obtenir un flot optique fournissant une correspondance dense et précise entre les points homologues des paires d’images. Les correspondances sont ensuite utilisées pour déterminer les paramètres des transformations requises pour le placement des images dans le repère global de la mosaïque. Les méthodes proposées pour l’estimation du flot optique dense incluent un terme d’attache aux données qui minimise le nombre des vecteurs aberrants et un terme de régularisation conçu pour préserver les discontinuités du champ de vecteurs. Un algorithme de flot optique qui est robuste vis-à-vis de changements d’illumination importants (et utilisable pour différentes modalités) a également été développé dans ce contexte. La précision et la robustesse des méthodes de recalage proposées ont été testées sur des jeux de données (de flot optique) publiquement accessibles et sur des fantômes de vessies et de la peau. Des résultats sur des données patients acquises avec des cystoscopes rigides et flexibles, en lumière blanche ou en fluorescence, montrent la robustesse des algorithmes proposés. Ces résultats sont complétés par ceux obtenus pour d’autres séquences endoscopiques réelles de dermatoscopie, de scène sous-marine et de données d’exploration spatiale. / Cystoscopy is the reference procedure for the diagnosis and treatment of bladder cancer. The small field of view (FOV) of endoscopes makes both the diagnosis and follow-up of lesions difficult. Image mosaics are a solution to this problem since they visualize large FOVs of the bladder scene. However, due to low contrast, weak texture, inter- and intra-patient texture variability and illumination changes in these image sequences, the task of image mosaicing becomes challenging. This is also a major concern in other endoscopic data and non-medical scenes like underwater videos. In this thesis, a total variational energy has first been minimized using a first-order primal-dual algorithm in convex optimization to obtain optical flow vector fields giving a dense and accurate correspondence between homologous points of the image pairs. The correspondences are then used to obtain transformation parameters for registering the images to one global mosaic coordinate system. The proposed methods for dense optical flow estimation include a data term modeled to minimize the influence of outliers and a regularizer designed to preserve the flow-field discontinuities as well as possible. An optical flow algorithm, which is robust to strong illumination changes (and which is suitable for different modalities), has also been developed in this framework.
The registration accuracy and robustness of the proposed methods are tested both on publicly available datasets for optical flow estimation and on simulated bladder and skin phantoms. Results on patient data acquired with rigid and flexible cystoscopes in the white-light and fluorescence modalities show the robustness of the proposed approaches. These results are complemented by those obtained on other real endoscopic data, dermoscopic sequences, underwater scenes and space exploration data.
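Once dense optical-flow correspondences are available, the mosaicing step amounts to fitting a transformation per image pair and chaining the pairwise transforms into one global mosaic frame. A minimal least-squares sketch under simplifying assumptions (a 2-D affine model rather than the transformations actually used in the thesis, and an assumed direction convention for the chaining) is given below; all names and the toy data are illustrative.

```python
import numpy as np

def affine_from_correspondences(pts_src, pts_dst):
    # Least-squares 2-D affine transform mapping pts_src onto pts_dst.
    # pts_* are (N, 2) arrays of matched points, e.g. the source pixel grid and
    # the same grid displaced by the estimated optical-flow vectors.
    n = pts_src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = pts_src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = pts_src
    A[1::2, 5] = 1.0
    b = pts_dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    T = np.eye(3)                        # homogeneous 3x3 representation
    T[0, :] = params[0:3]
    T[1, :] = params[3:6]
    return T

def chain_to_global(pairwise):
    # Place every image in the frame of the first one by composing the pairwise
    # transforms (assumed here to map image i+1 into the coordinates of image i).
    global_T = [np.eye(3)]
    for T in pairwise:
        global_T.append(global_T[-1] @ T)
    return global_T

# Tiny self-check: a known affine map is recovered from noisy correspondences.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(200, 2))
T_true = np.array([[1.01, 0.02, 5.0], [-0.02, 0.99, -3.0], [0.0, 0.0, 1.0]])
dst = (np.c_[src, np.ones(200)] @ T_true.T)[:, :2] + 0.1 * rng.standard_normal((200, 2))
print(np.round(affine_from_correspondences(src, dst), 3))
```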
174

Etude de l'épissage grâce à des techniques de régression parcimonieuse dans l'ère du séquençage haut débit de l'ARN / Deciphering splicing with sparse regression techniques in the era of high-throughput RNA sequencing.

Bernard, Elsa 21 September 2016
Le nombre de gènes codant pour des protéines chez l'homme, le ver rond et la mouche des fruits est du même ordre de grandeur. Cette absence de correspondance entre le nombre de gènes d'un eucaryote et sa complexité phénotypique s'explique en partie par le caractère alternatif de l'épissage. L'épissage alternatif augmente considérablement le répertoire fonctionnel de protéines codées par un nombre limité de gènes. Ce mécanisme, très actif lors du développement embryonnaire, participe au devenir cellulaire. De nombreux troubles génétiques, hérités ou acquis (en particulier certains cancers), se caractérisent par une altération de son fonctionnement. Les technologies de séquençage à haut débit de l'ARN donnent accès à une information plus riche sur le mécanisme de l'épissage. Cependant, si la lecture à haut débit des séquences d'ARN est plus rapide et moins coûteuse, les données qui en sont issues sont complexes et nécessitent le développement d'outils algorithmiques pour leur interprétation. En particulier, la reconstruction des transcrits alternatifs requiert une étape de déconvolution non triviale. Dans ce contexte, cette thèse participe à l'étude des événements d'épissage et des transcrits alternatifs sur des données de séquençage à haut débit de l'ARN. Nous proposons de nouvelles méthodes pour reconstruire et quantifier les transcrits alternatifs de façon plus efficace et précise. Nos contributions méthodologiques impliquent des techniques de régression parcimonieuse, basées sur l'optimisation convexe et sur des algorithmes de flots. Nous étudions également une procédure pour détecter des anomalies d'épissage dans un contexte de diagnostic clinique. Nous suggérons un protocole expérimental facilement opérant et développons de nouveaux modèles statistiques et algorithmes pour quantifier des événements d'épissage et mesurer leur degré d'anormalité chez le patient.
/ The numbers of protein-coding genes in a human, a nematode and a fruit fly are roughly equal. The paradoxical lack of correlation between the number of genes in an organism's genome and its phenotypic complexity finds an explanation in the alternative nature of splicing in higher organisms. Alternative splicing largely increases the functional diversity of proteins encoded by a limited number of genes. It is known to be involved in cell fate decisions and embryonic development, but also appears to be dysregulated in inherited and acquired human genetic disorders, in particular in cancers. High-throughput RNA sequencing technologies allow us to measure and question splicing at an unprecedented resolution. However, while the cost of sequencing RNA decreases and throughput increases, many computational challenges arise from the discrete and local nature of the data. In particular, the task of inferring alternative transcripts requires a non-trivial deconvolution procedure. In this thesis, we contribute to deciphering alternative transcript expression and alternative splicing events from high-throughput RNA sequencing data. We propose new methods to accurately and efficiently detect and quantify alternative transcripts. Our methodological contributions largely rely on sparse regression techniques and take advantage of network flow optimization techniques. Besides, we investigate means to query splicing abnormalities for clinical diagnosis purposes. We suggest an experimental protocol that can be easily implemented in routine clinical practice, and present new statistical models and algorithms to quantify splicing events and measure how abnormal these events might be in patient data compared to wild-type situations.
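To give a flavour of the sparse-regression viewpoint on transcript quantification (a plain non-negative l1-penalized least-squares model, not the convex flow-based formulation developed in the thesis), here is a small projected proximal-gradient sketch; the design-matrix convention, the toy counts and all names are illustrative assumptions.

```python
import numpy as np

def nonneg_lasso(X, y, lam, n_iter=2000):
    # Sparse non-negative regression: min_{w >= 0} 0.5*||y - X w||^2 + lam*sum(w).
    # y: observed read counts per exonic bin (toy model);
    # X: candidate-transcript design matrix (bin x transcript), X[i, t] = 1 when
    #    transcript t covers bin i. Non-zero entries of w select the expressed
    #    transcripts and give their estimated abundances.
    L = np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        # proximal step of lam*sum(w) restricted to the non-negative orthant
        w = np.maximum(w - (grad + lam) / L, 0.0)
    return w

# Three exonic bins, four candidate transcripts, only two of them truly expressed.
X = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)
y = 10.0 * X[:, 0] + 5.0 * X[:, 2] + np.array([0.3, -0.2, 0.1])
print(np.round(nonneg_lasso(X, y, lam=0.5), 2))
```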
175

Elimination dynamique : accélération des algorithmes d'optimisation convexe pour les régressions parcimonieuses / Dynamic screening : accelerating convex optimization algorithms for sparse regressions

Bonnefoy, Antoine 15 April 2016
Les algorithmes convexes de résolution pour les régressions linéaires parcimonieuses possèdent de bonnes performances pratiques et théoriques. Cependant, ils souffrent tous des dimensions du problème qui dictent la complexité de chacune de leurs itérations. Nous proposons une approche pour réduire ce coût calculatoire au niveau de l'itération. Des stratégies récentes s'appuyant sur des tests d'élimination de variables ont été proposées pour accélérer la résolution des problèmes de régressions parcimonieuses pénalisées tels que le LASSO. Ces approches reposent sur l'idée qu'il est profitable de dédier un petit effort de calcul pour localiser des atomes inactifs afin de les retirer du dictionnaire dans une étape de prétraitement. L'algorithme de résolution utilisant le dictionnaire ainsi réduit convergera alors plus rapidement vers la solution du problème initial. Nous pensons qu'il existe un moyen plus efficace pour réduire le dictionnaire et donc obtenir une meilleure accélération : à l'intérieur de chaque itération de l'algorithme, il est possible de valoriser les calculs originellement dédiés à l'algorithme pour obtenir à moindre coût un nouveau test d'élimination dont l'effet d'élimination augmente progressivement le long des itérations. Le dictionnaire est alors réduit de façon dynamique au lieu d'être réduit de façon statique, une fois pour toutes, avant la première itération. Nous formalisons ce principe d'élimination dynamique à travers une formulation algorithmique générique, et l'appliquons en intégrant des tests d'élimination existants à l'intérieur de plusieurs algorithmes du premier ordre pour résoudre les problèmes du LASSO et du Group-LASSO. / Applications in signal processing and machine learning make frequent use of sparse regressions. The resulting convex problems, such as the LASSO, can be efficiently solved thanks to first-order algorithms, which are general and have good convergence properties. However, those algorithms suffer from the dimensions of the problem, which dictate the complexity of each of their iterations. In this thesis we study approaches, based on screening tests, aimed at reducing the computational cost at the iteration level. Such approaches build upon the idea that it is worth dedicating some small computational effort to locate inactive atoms and remove them from the dictionary in a preprocessing stage, so that the regression algorithm working with a smaller dictionary will then converge faster to the solution of the initial problem. We believe that there is an even more efficient way to screen the dictionary and obtain a greater acceleration: inside each iteration of the regression algorithm, one may take advantage of the computations already carried out to obtain a new screening test almost for free, with screening effects that increase along the iterations. The dictionary is thus dynamically screened instead of being screened statically, once and for all, before the first iteration. Our first contribution is the formalisation of this principle and its application to first-order algorithms for the resolution of the LASSO and Group-LASSO. In a second contribution, this general principle is combined with active-set methods, whose goal is also to accelerate the resolution of sparse regressions. Applying the two complementary methods to first-order algorithms leads to substantial acceleration.
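A minimal sketch of the dynamic screening principle is given below: an ISTA loop in which a safe test is re-evaluated at every iteration, so the dictionary shrinks as the iterates improve. For simplicity the test used here is the duality-gap ("GAP safe") sphere test rather than the ST1/ST3-type tests integrated in the thesis; the construction of the dual point, all names and the toy data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista_dynamic_screening(D, y, lam, n_iter=300):
    # ISTA for min_w 0.5*||y - D w||^2 + lam*||w||_1, with a safe screening test
    # re-run inside every iteration so that the dictionary shrinks dynamically.
    m, n = D.shape
    active = np.ones(n, dtype=bool)
    w = np.zeros(n)
    L = np.linalg.norm(D, 2) ** 2                 # step size from the full dictionary
    col_norms = np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        rho = y - D[:, active] @ w[active]        # residual (inactive weights are 0)
        theta = rho / max(lam, np.max(np.abs(D.T @ rho)))   # feasible dual point
        gap = (0.5 * rho @ rho + lam * np.abs(w).sum()
               - lam * (theta @ y) + 0.5 * lam ** 2 * (theta @ theta))
        radius = np.sqrt(max(2.0 * gap, 0.0)) / lam
        # safe elimination: atoms whose correlation with every dual point in the
        # sphere stays below 1 cannot belong to the support of the solution
        keep = np.abs(D.T @ theta) + radius * col_norms >= 1.0
        w[~keep] = 0.0
        active &= keep
        Da = D[:, active]
        grad = Da.T @ (Da @ w[active] - y)
        w[active] = soft_threshold(w[active] - grad / L, lam / L)
    return w, active

rng = np.random.default_rng(3)
D = rng.standard_normal((100, 500))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
w0 = np.zeros(500)
w0[:5] = 1.0
y = D @ w0 + 0.01 * rng.standard_normal(100)
w_hat, active = lasso_ista_dynamic_screening(D, y, lam=0.1)
print("atoms still in the dictionary after screening:", int(active.sum()))
```

A static test would run the elimination once, before the loop; the dynamic variant keeps shrinking the dictionary as the duality gap (and hence the sphere radius) decreases.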
176

Topics in convex optimization: interior-point methods, conic duality and approximations

Glineur, Francois 26 January 2001
Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while on the one hand some of its developments involve rather theoretical concepts, its most successful algorithms are on the other hand heavily used by numerous companies to solve scheduling and design problems on a daily basis. Our research started with the study of the conic formulation for convex optimization problems. This approach was already studied in the seventies but has recently gained a lot of interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity: - a very rich duality theory (existence of a dual problem that is strongly related to the primal problem, with a very symmetric formulation), - the ability to solve these problems efficiently, both from the theoretical (polynomial algorithmic complexity) and practical (implementations allowing the resolution of large-scale problems) points of view. Most of the research in this area involved so-called self-dual cones, where the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category. We brought some contributions to this field: - a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles that lie behind the design of these algorithms, - a computational study of a method of linear approximation of convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori), - an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns using ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades). However, our research focussed on a much less studied category of convex problems which does not rely on self-dual cones, i.e. structured problems whose dual is formulated very differently from the primal. We studied in particular: - geometric optimization, developed in the late sixties, which possesses numerous applications in the field of engineering (entropy optimization, used in information theory, also belongs to this class of problems), - l_p-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1). For each of these classes of problems, we introduced a new type of convex cone that made their formulation as standard conic problems possible. This allowed us to derive very simplified proofs of the classical duality results pertaining to these problems, notably weak duality (a mere consequence of convexity) and the absence of a duality gap (strong duality property without any constraint qualification, which does not hold in the general convex case). We also uncovered a very surprising result that stipulates that geometric optimization can be viewed as a limit case of l_p-norm optimization. Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations.
We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovsky asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of l_p-norm optimization (whose parameter, which is the main determining factor in the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p) as well as in the case of the general framework mentioned above. Finally, we contributed a survey of the self-concordance property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into the reason why the most commonly adopted definition for self-concordant functions is the best possible.
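For readers unfamiliar with interior-point methods, the sketch below shows the textbook log-barrier scheme on a tiny linear program; it illustrates only the general barrier/central-path idea and none of the conic duality, polyhedral approximation or self-concordance results described above. The problem data, tolerances and function names are illustrative assumptions.

```python
import numpy as np

def lp_barrier(c, A, b, x0, t0=1.0, mu=10.0, tol=1e-4):
    # Textbook log-barrier interior-point method for: min c^T x  s.t.  A x <= b.
    # x0 must be strictly feasible (A x0 < b). The outer loop follows the central
    # path by increasing t; the m/t stopping rule bounds the remaining duality gap.
    m = A.shape[0]
    x, t = x0.astype(float), float(t0)
    while m / t > tol:
        for _ in range(50):                      # Newton centering steps
            s = b - A @ x                        # slacks, must stay positive
            grad = t * c + A.T @ (1.0 / s)
            hess = A.T @ np.diag(1.0 / s ** 2) @ A
            dx = -np.linalg.solve(hess, grad)
            if -(grad @ dx) / 2.0 < 1e-9:        # Newton decrement small enough
                break
            step = 1.0
            while np.any(b - A @ (x + step * dx) <= 0):   # stay strictly feasible
                step *= 0.5
            x = x + step * dx
        t *= mu
    return x

# Tiny example: min -x1 - x2 over the unit box [0, 1]^2; the optimum is (1, 1).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([-1.0, -1.0])
print(np.round(lp_barrier(c, A, b, x0=np.array([0.5, 0.5])), 4))
```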
177

Integrality Gaps for Strong Linear Programming and Semidefinite Programming Relaxations

Georgiou, Konstantinos 17 February 2011
The inapproximability of NP-hard combinatorial optimization problems lies at the heart of theoretical computer science. A negative result can be either conditional, where the starting point is a complexity assumption, or unconditional, where the inapproximability holds for a restricted model of computation. Algorithms based on Linear Programming (LP) and Semidefinite Programming (SDP) relaxations are among the most prominent models of computation. The related and common measure of efficiency is the integrality gap, which sets the limitations of the associated algorithmic schemes. A number of systematic procedures, known as lift-and-project systems, have been proposed to improve the integrality gap of standard relaxations. These systems build strong hierarchies of either LP relaxations, such as the Lovasz-Schrijver (LS) and the Sherali-Adams (SA) systems, or SDP relaxations, such as the Lovasz-Schrijver SDP (LS+), the Sherali-Adams SDP (SA+) and the Lasserre (La) systems. In this thesis we prove integrality gap lower bounds for the aforementioned lift-and-project systems and for a number of combinatorial optimization problems whose inapproximability is as yet unresolved. Given that lift-and-project systems produce relaxations that have given the best algorithms known for a series of combinatorial problems, the lower bounds can be read as strong evidence of the inapproximability of the corresponding optimization problems. From the results found in the thesis we highlight the following: For every epsilon>0, the level-Omega(sqrt(log n / log log n)) LS+ relaxation of the Vertex Cover polytope has integrality gap 2-epsilon. The integrality gap of the standard SDP for Vertex Cover remains 2-o(1) even if all hypermetric inequalities are added to the relaxation. The resulting relaxations are incomparable to the SDP relaxations derived by the LS+ system. Finally, the addition of all ell_1 inequalities eliminates all solutions not in the integral hull. For every epsilon>0, the level-Omega(sqrt(log n / log log n)) SA relaxation of Vertex Cover has integrality gap 2-epsilon. The integrality gap remains tight even for superconstant-level SA+ relaxations. We prove a tight lower bound for the number of tightenings that the SA system needs in order to prove the Pigeonhole Principle. We also prove sublinear and linear rank bounds for the La and SA systems respectively for the Tseitin tautology. Linear levels of the SA+ system treat highly unsatisfiable instances of fixed predicate-P constraint satisfaction problems over q-ary alphabets as fully satisfiable, when the satisfying assignments of the predicates P can be equipped with a balanced and pairwise independent distribution. We study the performance of the Lasserre system on the cut polytope. When the input is the complete graph on 2d+1 vertices, we show that the integrality gap is at least 1+1/(4d(d+1)) for the level-d SDP relaxation.
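The notion of integrality gap itself can be illustrated on the standard (non-lifted) LP relaxation of Vertex Cover: on the complete graph K_n the fractional optimum is n/2 (set every vertex to 1/2) while any integral cover needs n-1 vertices, so the ratio approaches 2. Below is a small numerical check of this classical fact, assuming SciPy's linprog is available; the example is standard material and not taken from the thesis.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import combinations

def vertex_cover_lp_gap(n):
    # Integrality gap of the standard Vertex Cover LP on the complete graph K_n.
    # LP: min sum x_v  s.t.  x_u + x_v >= 1 for every edge, 0 <= x <= 1.
    edges = list(combinations(range(n), 2))
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, [u, v]] = -1.0               # -(x_u + x_v) <= -1
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return (n - 1) / res.fun                 # integral optimum / fractional optimum

for n in (5, 11, 21):
    print(n, round(vertex_cover_lp_gap(n), 3))   # approaches 2 as n grows
```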
178

MSE-based Linear Transceiver Designs for Multiuser MIMO Wireless Communications

Tenenbaum, Adam 11 January 2012
This dissertation designs linear transceivers for the multiuser downlink in multiple-input multiple-output (MIMO) systems. The designs rely on an uplink/downlink duality for the mean squared error (MSE) of each individual data stream. We first consider the design of transceivers assuming channel state information (CSI) at the transmitter. We consider minimization of the sum-MSE over all users subject to a sum power constraint on each transmission. Using MSE duality, we solve a computationally simpler convex problem in a virtual uplink. The transformation back to the downlink is simplified by our demonstrating the equality of the optimal power allocations in the uplink and downlink. Our second set of designs maximize the sum throughput for all users. We establish a series of relationships linking MSE to the signal-to-interference-plus-noise ratios of individual data streams and the information theoretic channel capacity under linear minimum MSE decoding. We show that minimizing the product of MSE matrix determinants is equivalent to sum-rate maximization, but we demonstrate that this problem does not admit a computationally efficient solution. We simplify the problem by minimizing the product of mean squared errors (PMSE) and propose an iterative algorithm based on alternating optimization with near-optimal performance. The remainder of the thesis considers the more practical case of imperfections in CSI. First, we consider the impact of delay and limited-rate feedback. We propose a system which employs Kalman prediction to mitigate delay; feedback rate is limited by employing adaptive delta modulation. Next, we consider the robust design of the sum-MSE and PMSE minimizing precoders with delay-free but imperfect estimates of the CSI. We extend the MSE duality to the case of imperfect CSI, and consider a new optimization problem which jointly optimizes the energy allocations for training and data stages along with the sum-MSE/PMSE minimizing transceivers. We prove the separability of these two problems when all users have equal estimation error variances, and propose several techniques to address the more challenging case of unequal estimation errors.
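To make the MSE quantities concrete, the sketch below computes, for a single-user toy channel, the per-stream MSE matrix of a linear MMSE receiver together with the two figures of merit the abstract relates to it: the sum-MSE (its trace) and an achievable rate (-log2 of its determinant). It does not reproduce the multiuser uplink/downlink duality or the optimized precoders of the dissertation; the channel, the naive precoder and all names are illustrative assumptions.

```python
import numpy as np

def mmse_metrics(H, P, noise_var=1.0):
    # H: (n_rx, n_tx) channel, P: (n_tx, n_streams) precoder, unit-power symbols.
    # E is the error covariance of the linear MMSE estimate of the transmitted
    # streams; tr(E) is the sum-MSE and -log2 det(E) the rate of MMSE-SIC detection.
    G = H @ P
    E = np.linalg.inv(np.eye(P.shape[1]) + (G.conj().T @ G) / noise_var)
    sum_mse = np.real(np.trace(E))
    rate = -np.log2(np.real(np.linalg.det(E)))
    return E, sum_mse, rate

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
P = np.eye(4) / 2.0                          # naive equal-power precoder
E, sum_mse, rate = mmse_metrics(H, P)
print(f"sum-MSE = {sum_mse:.3f}, rate = {rate:.3f} bits/channel use")
```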
180

Duality and optimality in multiobjective optimization

Bot, Radu Ioan 04 July 2003
The aim of this work is to make some investigations concerning duality for multiobjective optimization problems. In order to do this we first study duality for scalar optimization problems by using the conjugacy approach. This allows us to attach three different dual problems to a primal one. We examine the relations between the optimal objective values of the duals and verify, under some appropriate assumptions, the existence of strong duality. Closely related to strong duality, we derive the optimality conditions for each of these three duals. By means of these considerations, we study the duality for two vector optimization problems, namely, a convex multiobjective problem with cone inequality constraints and a special fractional programming problem with linear inequality constraints. To each of these vector problems we associate a scalar primal and study the duality for it. The structure of both scalar duals gives us an idea about how to construct a multiobjective dual. The existence of weak and strong duality is also shown. We conclude our investigations by making an analysis over different duality concepts in multiobjective optimization. To a general multiobjective problem with cone inequality constraints we introduce six other duals for which we prove weak as well as strong duality assertions. Afterwards, we derive some inclusion results for the image sets and, respectively, for the sets of maximal elements of the image sets of these problems. Moreover, we show under which conditions they become identical. A general scheme containing the relations between the six multiobjective duals and some other duals mentioned in the literature is derived. / Das Ziel dieser Arbeit ist die Durchführung einiger Untersuchungen bezüglich der Dualität für Mehrzieloptimierungsaufgaben. Zu diesem Zweck wird als erstes mit Hilfe des so genannten konjugierten Verfahrens die Dualität für skalare Optimierungsaufgaben untersucht. Das erlaubt uns, einer primalen Aufgabe drei unterschiedliche duale Aufgaben zuzuordnen. Wir betrachten die Beziehungen zwischen den optimalen Zielfunktionswerten der drei Dualaufgaben und untersuchen die Existenz der starken Dualität unter naheliegenden Annahmen. Im Zusammenhang mit der starken Dualität leiten wir für jede dieser Dualaufgaben die Optimalitätsbedingungen her. Die obengenannten Ergebnisse werden beim Studium der Dualität für zwei Vektoroptimierungsaufgaben angewandt, und zwar für die konvexe Mehrzieloptimierungsaufgabe mit Kegel-Ungleichungen als Nebenbedingungen und für eine spezielle Quotientenoptimierungsaufgabe mit linearen Ungleichungen als Nebenbedingungen. Wir assoziieren zu jeder dieser vektoriellen Aufgaben eine skalare Aufgabe, für welche die Dualität betrachtet wird. Die Formulierung der beiden skalaren Dualaufgaben führt uns zu der Konstruktion der dualen Mehrzieloptimierungsaufgabe. Die Existenz von schwacher und starker Dualität wird bewiesen. Wir schliessen unsere Untersuchungen ab, indem wir eine Analyse von verschiedenen Dualitätskonzepten in der Mehrzieloptimierung durchführen. Zu einer allgemeinen Mehrzieloptimierungsaufgabe mit Kegel-Ungleichungen als Nebenbedingungen werden sechs verschiedene Dualaufgaben eingeführt, für die sowohl schwache als auch starke Dualitätsaussagen gezeigt werden. Danach leiten wir verschiedene Beziehungen zwischen den Bildmengen bzw. zwischen den Mengen der maximalen Elemente dieser Bildmengen der sechs Dualaufgaben her. Dazu zeigen wir, unter welchen Bedingungen diese Mengen identisch werden.
Ein allgemeines Schema, das die Beziehungen zwischen den sechs dualen Mehrzieloptimierungsaufgaben und anderen Dualaufgaben aus der Literatur enthält, wird dargestellt.
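As a numerical illustration of the scalar weak/strong duality that such multiobjective constructions build on (a generic convex quadratic program with its Lagrange dual, not one of the conjugate duals or the six multiobjective duals of the thesis), the following sketch compares primal and dual optimal values; the problem data are illustrative and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize

# Convex QP: min 0.5 x^T Q x + c^T x  s.t.  A x <= b, with Q positive definite.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 2.0])

def primal(x):
    return 0.5 * x @ Q @ x + c @ x

res_p = minimize(primal, x0=np.zeros(2), method="SLSQP",
                 constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}])

def neg_dual(lmb):
    # Lagrange dual g(lmb) = -0.5 (c + A^T lmb)^T Q^{-1} (c + A^T lmb) - b^T lmb,
    # maximized over lmb >= 0; we minimize its negative.
    v = c + A.T @ lmb
    return 0.5 * v @ np.linalg.solve(Q, v) + b @ lmb

res_d = minimize(neg_dual, x0=np.ones(2), method="L-BFGS-B",
                 bounds=[(0, None)] * 2)

print("primal optimum:", res_p.fun)    # strong duality: the two values coincide
print("dual optimum:  ", -res_d.fun)   # up to solver tolerance (Slater holds here)
```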
