1. Distributed optimisation techniques for wireless networks. Basutli, Bokamoso, January 2016.
Alongside the ever-increasing traffic demand, the fifth generation (5G) cellular network architecture is being proposed to provide better quality of service, increased data rates, decreased latency, and increased capacity. Without any doubt, the 5G cellular network will comprise ultra-dense networks and multiple-input multiple-output technologies. This will make the current centralised solutions impractical due to increased complexity. Moreover, the amount of coordination information that needs to be transported over the backhaul links will increase. Distributed or decentralised solutions promise better alternatives. This thesis proposes new distributed algorithms for wireless networks which aim to reduce the amount of system overhead on the backhaul links and the system complexity. The analysis of conflicts amongst transmitters, and of resource allocation, is conducted via game theory, convex optimisation, and auction theory. Firstly, a game-theoretic model is used to analyse a mixed quality of service (QoS) strategic non-cooperative game (SNG) for a two-user multiple-input single-output (MISO) interference channel, in which the players have different objectives. Following this, the mixed-QoS SNG is extended to a multicell multiuser network in terms of signal-to-interference-plus-noise ratio (SINR) requirements. In the multicell multiuser setting, each transmitter is assumed to serve real-time users (RTUs) and non-real-time users (NRTUs) simultaneously. A novel mixed-QoS SNG algorithm is proposed, with its operating point identified as the Nash equilibrium-mixed QoS (NE-mixed QoS). Nash, Kalai-Smorodinsky, and Egalitarian bargaining solutions are then proposed to improve the performance of the NE-mixed QoS, and their performance is observed to be comparable to that of the centralised solutions. Secondly, user offloading and user association problems are addressed for small cells using auction theory. The main base station wishes to offload some of its users to privately owned small cell access points. A novel bid-wait-auction (BWA) algorithm, which allows single-item bidding at each auction round, is designed to decompose the combinatorial nature of the problem. An analysis of the existence and uniqueness of the dominant strategy equilibrium is conducted. The BWA is then used to form the forward BWA (FBWA) and the backward BWA (BBWA). It is observed that the BBWA admits more users than the FBWA. Finally, simultaneous multiple-round ascending auction (SMRA), altered SMRA (ASMRA), sequential combinatorial auction with item bidding (SCAIB), and repetitive combinatorial auction with item bidding (RCAIB) algorithms are proposed to perform user offloading and user association for small cells. These algorithms allow bundle bidding. It is then proven that truthful bidding is individually rational and leads to a Walrasian equilibrium. The performance of the proposed auction-based algorithms is evaluated, and they are observed to match the centralised solutions when the guest users have low target rates. The SCAIB algorithm is shown to be the most preferred as it provides a high admission rate and competitive revenue to the bidders.
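The Nash-equilibrium operating points above arise from transmitters best-responding to one another. As a hedged illustration of that idea (and not the thesis's algorithm), the sketch below runs the classical Foschini-Miljanic distributed power-control iteration, in which each transmitter needs only its own SINR target and measured SINR; all gains, targets and noise values are invented example numbers.

```python
# Illustrative sketch: distributed power control converging to the Nash
# equilibrium of a SINR-target game. Example values only.
import numpy as np

G = np.array([[1.00, 0.05, 0.10],   # G[i, j]: channel gain from transmitter j
              [0.10, 1.00, 0.05],   # to receiver i
              [0.05, 0.10, 1.00]])
gamma = np.array([2.0, 3.0, 2.0])   # per-link SINR targets
sigma2 = 0.01                       # receiver noise power
p = np.ones(3)                      # initial transmit powers

def sinr(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + sigma2
    return signal / interference

for _ in range(50):
    # Best response: scale own power by (target SINR) / (achieved SINR).
    p = gamma / sinr(p) * p

print("powers:", p.round(4), "SINRs:", sinr(p).round(3))
```

When the targets are jointly feasible, this iteration converges to the unique fixed point at which every link meets its target with equality, which is also the Nash equilibrium of the underlying power-control game.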
2. Cross-layer design for OFDMA wireless networks with finite queue length based on game theory. Nikolaros, Ilias G., January 2014.
In next-generation wireless networks such as 4G LTE and WiMAX, the demand for high data rates, the scarcity of wireless resources and the time-varying channel conditions have led to the adoption of more sophisticated and robust techniques in the PHY layer, such as orthogonal frequency division multiplexing (OFDM) and the corresponding access technique known as orthogonal frequency division multiple access (OFDMA). Cross-layer schedulers have been developed to describe the procedure of resource allocation in OFDMA wireless networks. Resource allocation in OFDMA wireless networks has received great attention in research, with many different proposals for exploiting frequency diversity and optimizing the system. Many cross-layer proposals for dynamic resource allocation have been investigated in the literature, approaching the optimization problem from different viewpoints, i.e. maximizing the total data rate, minimizing the total transmit power, satisfying minimum user requirements or providing fairness amongst users. The design of a cross-layer scheduler for OFDMA wireless networks is the topic of this research. The scheduler utilizes game theory to make decisions on subcarrier and power allocation to the users, with the main concern being to maintain fairness as well as to maximize overall system performance. A well-known solution concept in cooperative game theory, the Nash Bargaining Solution (NBS), is employed and solved in closed form, resulting in a Pareto-optimal solution. Two different cases are proposed. The first is the symmetric NBS (S-NBS), where all users have the same weight and therefore the same opportunity for resources; the second is the asymmetric NBS (A-NBS), where users have different weights, hence different priorities, and the scheduler favours users with higher priorities at the expense of lower-priority users. As the MAC layer is vital for cross-layer design, the scheduler is combined with a queuing model based on a Markov chain in order to describe more realistically the incoming traffic from the higher layers.
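As a concrete illustration of the symmetric NBS described above, the hedged sketch below allocates bandwidth shares among users by maximising the product of rate gains over the disagreement points, which is equivalent to maximising a sum of logarithms and is therefore a concave problem. The per-user spectral efficiencies and disagreement rates are invented, and cvxpy is used for convenience; this is not the thesis's closed-form solution.

```python
# A minimal sketch of a symmetric Nash bargaining allocation: user i gets a
# fraction x[i] of the bandwidth with assumed fixed spectral efficiency c[i].
import cvxpy as cp
import numpy as np

c = np.array([10.0, 6.0, 8.0])   # achievable rate per unit bandwidth share
d = np.array([1.0, 1.0, 1.0])    # disagreement points (minimum rates)

x = cp.Variable(3, nonneg=True)
rates = cp.multiply(c, x)
# NBS maximises prod(rates - d), equivalently the sum of logs (concave).
objective = cp.Maximize(cp.sum(cp.log(rates - d)))
problem = cp.Problem(objective, [cp.sum(x) == 1])
problem.solve()
print("shares:", x.value.round(3), "rates:", rates.value.round(2))
```

The asymmetric NBS (A-NBS) variant would simply weight each logarithm by the corresponding user's priority.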
3. Cost functions for the estimation of acoustic filters in reverberant mixtures. Benichoux, Alexis, 14 October 2013.
This work focuses on the processing of multichannel, multi-source audio signals. From a mixture of several audio sources recorded in a reverberant room, we wish to estimate the acoustic responses (a.k.a. mixing filters) between the sources and the microphones. To solve this inverse problem, one needs to take into account additional hypotheses on the nature of the acoustic responses. Our approach consists in first identifying mathematically the hypotheses on the acoustic responses necessary for their estimation, and then building cost functions and algorithms to estimate them effectively. First, we considered the case where the source signals are known. We developed a method to estimate the acoustic responses based on a convex regularization which exploits both the temporal sparsity of the filters and their exponentially decaying envelope. Real-world recordings confirmed the effectiveness of this method. Then, we considered the case where the source signals are unknown but statistically independent. The mixing filters can then be estimated up to a permutation and scaling ambiguity at each frequency by independent component analysis techniques. We provide an exhaustive study of the theoretical conditions under which the permutation ambiguity can be resolved when the multichannel filters are sparse in the temporal domain. Finally, we began to analyse the hypotheses under which this algorithm could be extended to the joint estimation of the sources and the filters, and showed a first unexpected negative result: in the context of blind deconvolution with sparse priors, for a quite large family of regularized cost functions, the global minimum is trivial. Additional constraints on the source signals or the filters are therefore needed.
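The first contribution, a convex regularization combining temporal sparsity with an exponentially decaying envelope, can be sketched roughly as a weighted l1 penalty whose weights grow with the tap index, so late taps are penalised more strongly. The toy example below is an assumed reconstruction of that idea, not the thesis's exact cost function; all sizes and constants are invented.

```python
# Toy sketch: estimate a room response h from a known source s and a
# microphone signal y, with an l1 penalty whose weights grow exponentially
# so the recovered filter has a decaying envelope.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, L = 200, 50                       # signal length, filter length
s = rng.standard_normal(T)           # known source signal
h_true = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)
h_true[rng.random(L) > 0.2] = 0.0    # sparse, exponentially decaying taps
y = np.convolve(s, h_true)[:T] + 0.01 * rng.standard_normal(T)

# Convolution as a Toeplitz matrix so the data term is affine in h.
A = np.zeros((T, L))
for k in range(L):
    A[k:, k] = s[:T - k]

w = np.exp(np.arange(L) / 10.0)      # heavier penalty on late taps
h = cp.Variable(L)
cost = cp.sum_squares(A @ h - y) + 0.1 * cp.sum(cp.multiply(w, cp.abs(h)))
cp.Problem(cp.Minimize(cost)).solve()
print("relative error:",
      np.linalg.norm(h.value - h_true) / np.linalg.norm(h_true))
```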
4. Convex matrix sparsity for demixing with an application to graphical model structure estimation. Vinyes, Marina, 27 November 2018.
The goal of machine learning is to learn, from data, a model that will make accurate predictions on data it has not seen before. In order to obtain a model that generalizes to new data and avoids overfitting, we need to restrain the model. These restrictions usually encode a priori knowledge of the structure of the model. Early approaches included Tikhonov (ridge) regularization and, later, Lasso regularization for inducing sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Parsimonious models are appealing since they provide more interpretability and better generalization (avoiding overfitting) through a reduced number of parameters. Beyond general sparsity, in many cases models are constrained structurally so that they have a simple representation in terms of some fundamental elements, consisting for example of a collection of specific vectors, matrices or tensors. These fundamental elements are called atoms, and atomic norms provide a general framework for estimating these sorts of models. The goal of this thesis is to use the framework of convex sparsity provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm, based on Frank-Wolfe methods, that is particularly adapted to solving problems regularized with an atomic norm. Then, we focus on structure estimation for Gaussian graphical models, where the structure of the graph is encoded in the precision matrix, and study the case with unobserved variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result stating necessary conditions for recovering the desired structure. Finally, we consider the problem of demixing a signal into two or more components via the minimization of a sum of norms or gauges, each encoding a structural prior on the corresponding component to recover. In particular, we provide general exact recovery guarantees in the noiseless setting based on incoherence measures.
5. On the identifiability of highly parameterised models of physical processes. Raman, Dhruva Venkita, January 2016.
This thesis is concerned with drawing out high-level insight from otherwise complex mathematical models of physical processes. This is achieved through detailed analysis of model behaviour as constituent parameters are varied. A particular focus is the well-posedness of parameter estimation from noisy data, and its relationship to the parametric sensitivity properties of the model. Other topics investigated include the verification of model performance properties over large ranges of parameters, and the simplification of models based upon their response to parameter perturbation. Several methodologies are proposed, which account for various model classes. However, shared features of the models considered include nonlinearity, parameters with considerable scope for variability, and experimental data corrupted by significant measurement uncertainty. We begin by considering models described by systems of nonlinear ordinary differential equations with parameter dependence. Model output, in this case, can only be obtained by numerical integration of the relevant equations. Therefore, assessment of model behaviour over tracts of parameter space is usually carried out by repeated model simulation over a grid of parameter values. We instead reformulate this assessment as an algebraic problem, using polynomial programming techniques. The result is an algorithm that produces parameter-dependent algebraic functions that are guaranteed to bound user-defined aspects of model behaviour over parameter space. We then consider more general classes of parameter-dependent model. A theoretical framework is constructed through which we can explore the duality between model sensitivity to non-local parameter perturbations, and the well-posedness of parameter estimation from significantly noisy data. This results in an algorithm that can uncover functional relations on parameter space over which model output is insensitive and parameters cannot be estimated. The methodology used derives from techniques of nonlinear optimal control. We use this algorithm to simplify benchmark models from the systems biology literature. Specifically, we uncover features such as fast-timescale subsystems and redundant model interactions, together with the sets of parameter values over which the features are valid. We finally consider parameter estimation in models that are acknowledged to imperfectly describe the modelled process. We show that this invalidates standard statistical theory associated with uncertainty quantification of parameter estimates. Alternative theory that accounts for this situation is then developed, resulting in a computationally tractable approximation of the covariance of a parameter estimate with respect to noise-induced fluctuation of experimental data.
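The link drawn above between parametric sensitivity and well-posed estimation can be illustrated with a deliberately non-identifiable toy model: in y(t) = exp(-(a + b)t) only the sum a + b affects the output, so the sensitivity (Jacobian) matrix of the outputs with respect to the parameters has a near-zero singular value whose right singular vector points along the unidentifiable direction a - b. The sketch below is illustrative only and far simpler than the models treated in the thesis.

```python
# Identifiability check via singular values of the sensitivity matrix.
import numpy as np

t = np.linspace(0.1, 5.0, 50)
theta = np.array([0.8, 0.4])                 # true (a, b)

def model(theta):
    return np.exp(-(theta[0] + theta[1]) * t)

# Finite-difference sensitivity matrix dy/dtheta (one column per parameter).
eps = 1e-6
J = np.column_stack([
    (model(theta + eps * e) - model(theta - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
U, svals, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", svals)             # second one is ~0
print("unidentifiable direction:", Vt[-1])   # ~ (1, -1)/sqrt(2), i.e. a - b
```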
6. Guidance and Control for Launch and Vertical Descend of Reusable Launchers using Model Predictive Control and Convex Optimisation. Zaragoza Prous, Guillermo, January 2020.
The increasing market for small and affordable space systems requires fast and reliable launch capabilities to cover the current and future demand. This project aims to study and implement guidance and control schemes for the vertical ascent and descent phases of a reusable launcher. Specifically, the thesis focuses on developing and applying Model Predictive Control (MPC) and optimisation techniques to several kino-dynamic models of rockets. Moreover, the classical MPC method has been modified to include a decreasing factor for the horizon used, enhancing the performance of the guidance and control. Multiple scenarios of vertical launch, landing and full-flight guidance on Earth and Mars, along with Monte Carlo analyses, were carried out to demonstrate the robustness of the algorithm against different initial conditions. The results were promising and invite further research.
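The decreasing-horizon idea described above can be sketched as a shrinking-horizon convex MPC loop: at every step the landing problem is re-solved over the remaining steps only, and the first control is applied. The code below assumes idealised one-dimensional vertical dynamics with invented constants and uses cvxpy purely for illustration; with an exact model the re-planned problems remain feasible, which a real implementation could not take for granted.

```python
# Shrinking-horizon convex MPC sketch for a 1-D powered descent.
import numpy as np
import cvxpy as cp

dt, g, m, u_max = 0.5, 9.81, 100.0, 2000.0
N = 40                                   # initial horizon (steps to touchdown)
x = np.array([500.0, -40.0])             # altitude [m], vertical speed [m/s]

for k in range(N):
    H = N - k                            # horizon shrinks by one each step
    alt, vel = cp.Variable(H + 1), cp.Variable(H + 1)
    u = cp.Variable(H)                   # thrust [N]
    cons = [alt[0] == x[0], vel[0] == x[1], alt >= 0,
            u >= 0, u <= u_max,
            cp.abs(alt[H]) <= 1e-2,      # near-zero altitude and speed at
            cp.abs(vel[H]) <= 1e-2]      # the final step (soft landing)
    for i in range(H):
        cons += [alt[i + 1] == alt[i] + dt * vel[i],
                 vel[i + 1] == vel[i] + dt * (u[i] / m - g)]
    cp.Problem(cp.Minimize(cp.sum(u) * dt), cons).solve()
    # Apply only the first control, then re-plan on the shorter horizon.
    x = np.array([x[0] + dt * x[1], x[1] + dt * (u.value[0] / m - g)])

print("final state:", x.round(3))
```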
7. A Convex Optimisation Approach to Portfolio Allocation. Jyrkäs, Tim, January 2023.
The mean-variance (MV) framework developed by Markowitz in his groundbreaking paper offers a quantitative and rational approach to portfolio selection. It is well known to market practitioners, however, that MV-optimal portfolios tend to perform subpar. One of the issues with MV portfolios is that they require the inverse of a large covariance matrix, which is often ill-conditioned. In this thesis, we develop a new approach to circumvent these issues. We propose an optimisation approach akin to least-squares linear regression and compare its performance with an established method, covariance shrinkage. When tested on a set of 30 futures contracts, we find that the models yield promising results, albeit with a Sharpe ratio somewhat lower than that of the benchmark portfolio.
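For readers unfamiliar with the benchmark, the hedged sketch below shows the core of covariance shrinkage: blend the sample covariance with a scaled identity before inverting it to form minimum-variance weights. The shrinkage intensity is fixed by hand here, whereas methods such as Ledoit-Wolf estimate it from the data; the returns are synthetic and not the thesis's dataset.

```python
# Covariance shrinkage toward a scaled identity, then minimum-variance weights.
import numpy as np

rng = np.random.default_rng(7)
T, n = 250, 30                          # days of returns, number of contracts
returns = rng.standard_normal((T, n)) * 0.01

S = np.cov(returns, rowvar=False)       # sample covariance, often ill-conditioned
mu = np.trace(S) / n                    # average variance sets the target scale
delta = 0.3                             # shrinkage intensity in [0, 1]
sigma = (1 - delta) * S + delta * mu * np.eye(n)

ones = np.ones(n)
w = np.linalg.solve(sigma, ones)        # solve instead of explicitly inverting
w /= w @ ones                           # minimum-variance portfolio weights
print("condition numbers:", np.linalg.cond(S), np.linalg.cond(sigma))
print("weights sum:", w.sum().round(6))
```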
8. Efficient Route-based Optimal Energy Management for Hybrid Electric Vehicles. Berntsson, Simon; Andreasson, Mattias, January 2018.
The requirements on fuel consumption and emissions for passenger cars are getting stricter every year. This has forced the vehicle industry to look for ways to improve the performance of the driveline. With the increasing focus on electrification, a common method is to combine an electric driveline with a conventional driveline that uses a petrol or diesel engine, thus creating a hybrid electric vehicle. To fully utilise the potential of the driveline in such a vehicle, an efficient energy management strategy is needed. This thesis describes the development of an efficient route-based energy management strategy. Three different optimisation strategies are combined: deterministic dynamic programming, the equivalent consumption minimisation strategy, and convex optimisation, together with segmentation of the input data. The developed strategy decreases the computational time by a factor of more than one hundred compared to a benchmark algorithm. When implemented in Volvo's simulation tool, VSim, substantial fuel savings of up to ten percent are shown compared to a charge-depleting/charge-sustaining strategy.
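Of the three strategies combined above, the equivalent consumption minimisation strategy admits the shortest sketch: at each instant, choose the battery power that minimises fuel power plus an equivalence factor times battery power. The engine model and all constants below are invented for illustration and are not Volvo's or the thesis's.

```python
# Bare-bones ECMS: grid-search the power split at a single time instant.
import numpy as np

s = 2.5                                   # equivalence factor [-]
P_demand = 30e3                           # driver power request [W]
P_batt_max, P_batt_min = 20e3, -20e3      # discharge / charge limits [W]

def fuel_power(P_eng):
    # Assumed engine model: efficiency peaks at 25 kW, clipped below at 10 %.
    eta = 0.36 - 2e-10 * (P_eng - 25e3) ** 2
    return np.where(P_eng > 0, P_eng / np.clip(eta, 0.1, None), 0.0)

P_batt = np.linspace(P_batt_min, P_batt_max, 401)  # candidate battery powers
P_eng = np.clip(P_demand - P_batt, 0.0, None)      # engine covers the rest
cost = fuel_power(P_eng) + s * P_batt              # equivalent consumption [W]
best = np.argmin(cost)
print(f"battery: {P_batt[best]/1e3:.1f} kW, engine: {P_eng[best]/1e3:.1f} kW")
```

In a route-based setting the equivalence factor s is tuned (for instance by the dynamic programming layer) so that the battery state of charge meets its target at the end of each route segment.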
9. Operational Planning of Active Distribution Networks - Convex Relaxation under Uncertainty. Swaminathan, Bhargav Prasanna, 22 September 2017.
Power systems face rising shares of distributed renewable energy sources (DRES) and the deregulation of the electricity system. Distribution networks and their operators (DSOs) are particularly on the front line. The passive operational practices of many DSOs today have to evolve to overcome these challenges, and active distribution networks (ADN) with active network management (ANM) have been touted as a potential solution. In this context, DSOs will streamline investment and operational decisions, creating a cost-effective framework of operations. They will take up new roles and optimally use flexibility to perform, for example, short-term operational planning of their medium-voltage networks. The development of such methods poses particular challenges, related to the presence of discrete elements (on-load tap changers and reconfiguration), the use of exogenous (external) flexibilities in these networks, the non-linear nature of optimal power flow (OPF) calculations, and the uncertainty present in forecasts. The work leading to this thesis deals with and overcomes these challenges. First, a short-term economic analysis is done to ascertain the utilisation costs of flexibilities, providing a common reference for different flexibilities. Then, exact linear flexibility models are developed using mathematical reformulation techniques. The OPF equations in operational planning are likewise convexified using reformulation techniques. The mixed-integer convex optimisation model thus developed, called the novel OP formulation, is exact and can guarantee globally optimal solutions. Simulations on two test networks allow us to evaluate the performance of this formulation. The uncertainty in DRES forecasts is then handled via three different formulations developed in this thesis, and the best-performing formulations under uncertainty are determined via a comparison framework developed to test their performance.
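As a flavour of the convexification step described above, the toy example below solves the classical second-order cone relaxation of the branch-flow (DistFlow) equations for a single line feeding one load: the quadratic equality P^2 + Q^2 = v1*l is relaxed to an inequality, which is known to be tight for loss-minimising objectives on radial networks under mild conditions. The network data are invented, and this is not the thesis's OP formulation.

```python
# SOC relaxation of the branch-flow (DistFlow) model on a two-bus feeder.
import cvxpy as cp

r, x = 0.05, 0.04          # line resistance / reactance [p.u.]
p_load, q_load = 0.8, 0.3  # load at bus 2 [p.u.]
v1 = 1.0                   # squared voltage magnitude at the slack bus

P = cp.Variable()          # active power sent from bus 1
Q = cp.Variable()          # reactive power sent from bus 1
l = cp.Variable(nonneg=True)   # squared current magnitude on the line
v2 = cp.Variable()         # squared voltage magnitude at bus 2

constraints = [
    P - r * l == p_load,                       # power reaching the load
    Q - x * l == q_load,
    v2 == v1 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.square(P) + cp.square(Q) <= v1 * l,     # relaxed P^2 + Q^2 = v1 * l
    v2 >= 0.9**2, v2 <= 1.1**2,                # voltage limits
]
prob = cp.Problem(cp.Minimize(r * l), constraints)   # minimise line losses
prob.solve()
print("losses:", round(r * l.value, 5), "v2:", round(v2.value, 4))
```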
10. Resource management in cooperative MIMO-OFDM cellular systems. Tölli, A. (Antti), 01 April 2008.
Abstract
Radio resource management techniques for broadband wireless systems beyond the existing cellular systems are developed while considering their special characteristics such as multi-carrier techniques, adaptive radio links and multiple-input multiple-output (MIMO) antenna techniques. Special focus is put on the design of linear transmission strategies in a cooperative cellular system where signal processing can be performed in a centralised manner across distributed base station (BS) antenna heads.
A time-division duplex cellular system based on orthogonal frequency division multiplexing (OFDM) with adaptive MIMO transmission is considered in the case where the received signals are corrupted by non-reciprocal inter-cell interference. A bandwidth-efficient closed-loop compensation algorithm combined with interference suppression at the receiver is proposed to compensate for the interference and to guarantee the desired Quality of Service (QoS) when the interference structure is known solely at the receiver.
A greedy beam ordering and selection algorithm is proposed to maximise the sum rate of a multiuser MIMO downlink (DL) with block zero-forcing (ZF) transmission. The performance of the block-ZF transmission combined with the greedy scheduling is shown to approach the sum capacity as the number of users increases. The maximum sum rate is often found to be achieved by transmitting to a smaller number of users or beams than the spatial dimensions allow. In addition, a low-complexity algorithm for joint user, bit and power allocation with low signalling overhead is proposed.
Different linear transmission schemes, including the ZF as a special case, are developed for the scenario where cooperative processing of the transmitted signal is applied to users located within a soft handover (SHO) region. The considered optimisation criteria include minimum-power beamformer design; balancing the weighted signal-to-interference-plus-noise ratio (SINR) values per data stream; weighted sum rate maximisation; and balancing the weighted rate per user with additional QoS constraints such as a guaranteed bit rate per user. The method can accommodate supplementary constraints, e.g., per-antenna or per-BS power constraints, and upper/lower bounds on the SINR values of the data streams. The proposed iterative algorithms are shown to provide powerful solutions for difficult non-convex transceiver optimisation problems.
System-level evaluation is performed to assess the impact of a realistic multi-cell environment on the performance of a cellular MIMO-OFDM system. The users located in the SHO region are shown to benefit from greatly increased transmission rates. Consequently, significant overall system-level gains result from cooperative SHO processing. The proposed SHO scheme can be used to provide a more evenly distributed service over the entire cellular network.
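One of the criteria listed above, minimum-power beamforming under per-stream SINR targets, has a well-known convex reformulation as a second-order cone program (after rotating each beamformer so that the useful term is real). The sketch below is that textbook formulation with random channels, written with cvxpy for convenience; it is illustrative only and not the thesis's exact design, which additionally handles per-antenna/per-BS constraints and distributed BS antenna heads.

```python
# Minimum-power downlink beamforming with SINR targets, as an SOCP.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
Nt, K = 4, 3                                # transmit antennas, users
H = (rng.standard_normal((K, Nt))
     + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
gamma = np.full(K, 4.0)                     # per-user SINR targets
sigma = 1.0                                 # noise standard deviation

W = cp.Variable((Nt, K), complex=True)      # beamformer for user i in column i
constraints = []
for i in range(K):
    hi = H[i].conj()
    others = [hi @ W[:, j] for j in range(K) if j != i]
    constraints += [
        cp.imag(hi @ W[:, i]) == 0,         # phase rotation: useful term real
        cp.real(hi @ W[:, i]) >=            # SINR target as a cone constraint
            np.sqrt(gamma[i]) * cp.norm(cp.hstack(others + [sigma])),
    ]
prob = cp.Problem(cp.Minimize(cp.sum_squares(W)), constraints)
prob.solve()
print("total transmit power:", round(prob.value, 3))
```

With K no larger than Nt and independent random channels, the targets above are feasible with high probability; in the cooperative SHO setting the same structure applies with the columns of H stacked across distributed BS antenna heads.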