91

Provably efficient algorithms for decentralized optimization

Liu, Changxin 31 August 2021 (has links)
Decentralized multi-agent optimization has emerged as a powerful paradigm that finds broad applications in engineering design, including federated machine learning and control of networked systems. In these setups, a group of agents are connected via a network with general topology. Under the communication constraint, they aim to solve a global optimization problem that is characterized collectively by their individual interests. The computation and communication efficiency of decentralized optimization algorithms is of particular importance. Due to the heterogeneity of local objective functions, fostering cooperation across the agents over a possibly time-varying network is challenging yet necessary to achieve fast convergence to the global optimum. Furthermore, real-world communication networks are subject to congestion and bandwidth limits. To alleviate this difficulty, it is highly desirable to design communication-efficient algorithms that proactively reduce the utilization of network resources. This dissertation tackles four concrete settings in decentralized optimization and develops a provably efficient algorithm for each of them. Chapter 1 presents an overview of decentralized optimization, where some preliminaries, problem settings, and state-of-the-art algorithms are introduced. Chapter 2 introduces the notation and reviews some key concepts that are used throughout this dissertation. In Chapter 3, we investigate non-smooth cost-coupled decentralized optimization and a special instance of it, namely the dual form of constraint-coupled decentralized optimization. We develop a decentralized subgradient method with double averaging that guarantees last-iterate convergence, which is crucial to solving decentralized dual Lagrangian problems with a convergence rate guarantee. Chapter 4 studies composite cost-coupled decentralized optimization in stochastic networks, for which existing algorithms do not guarantee linear convergence. We propose a new decentralized dual averaging (DDA) algorithm to solve this problem. Under a rather mild condition on the stochastic network, we show that the proposed DDA attains an $\mathcal{O}(1/t)$ rate of convergence in the general case and a global linear rate of convergence if each local objective function is strongly convex. Chapter 5 tackles the smooth cost-coupled decentralized constrained optimization problem. We leverage the extrapolation technique and the average consensus protocol to develop an accelerated DDA algorithm. The rate of convergence is proved to be $\mathcal{O}\left( \frac{1}{t^2}+ \frac{1}{t(1-\beta)^2} \right)$, where $\beta$ denotes the second largest singular value of the mixing matrix. To proactively reduce the utilization of network resources, a communication-efficient decentralized primal-dual algorithm based on an event-triggered broadcasting strategy is developed in Chapter 6. In this algorithm, each agent locally determines whether to broadcast by comparing a pre-defined threshold with the deviation between its current iterate and the one it most recently broadcast. Provided that the threshold sequence is summable over time, we prove an $\mathcal{O}(1/t)$ rate of convergence for convex composite objectives. For strongly convex and smooth problems, linear convergence is guaranteed if the threshold sequence diminishes geometrically. Finally, Chapter 7 provides some concluding remarks and research directions for future study. / Graduate
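To make the triggering rule of Chapter 6 concrete, a minimal sketch is given below; it illustrates only the general mechanism, and the threshold schedule, variable names and surrounding iterate update are assumptions rather than the dissertation's exact algorithm.

```python
import numpy as np

def event_triggered_step(x, x_last_broadcast, threshold):
    """Decide whether an agent broadcasts its current iterate.

    A broadcast is generated only when the deviation between the current
    iterate and the most recently broadcast one exceeds a pre-defined
    threshold; otherwise the network stays silent and neighbours keep
    using the stale copy.
    """
    if np.linalg.norm(x - x_last_broadcast) > threshold:
        return x.copy(), True       # broadcast the fresh iterate
    return x_last_broadcast, False  # no transmission this round

# Example threshold schedules (illustrative): a summable sequence such as
# 1/(t+1)**1.5 targets the O(1/t) rate, while a geometrically diminishing
# sequence such as 0.9**t targets linear convergence in the strongly
# convex and smooth case.
summable_thresholds = [1.0 / (t + 1) ** 1.5 for t in range(100)]
geometric_thresholds = [0.9 ** t for t in range(100)]
```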
92

Compressive Radar Cross Section Computation

Li, Xiang 15 January 2020 (has links)
Compressive Sensing (CS) is a novel signal-processing paradigm that allows sparse or compressible signals to be sampled at rates below the Nyquist rate. The past decade has seen substantial research on imaging applications using compressive sensing. In this thesis, CS is combined with the commercial electromagnetic (EM) simulation software newFASANT to improve its efficiency in solving EM scattering problems such as the Radar Cross Section (RCS) of complex targets at GHz frequencies. This thesis proposes a CS-RCS approach that allows efficient and accurate recovery of under-sampled RCS data measured from a random set of incident angles, using an accelerated iterative soft-thresholding reconstruction algorithm. RCS results for a generic missile and a Canadian KingAir aircraft model, simulated using Physical Optics (PO) as the EM solver at various frequencies and angular resolutions, demonstrate the efficiency and accuracy of the proposed method.
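For context, the core of such reconstructions is the soft-thresholding operator; the sketch below shows plain (non-accelerated) iterative soft-thresholding on a toy under-sampled problem. The measurement matrix, sparsity level and step size are illustrative assumptions, not the thesis's newFASANT setup.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam=0.1, step=None, n_iter=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    An accelerated (FISTA-like) variant adds a momentum term on top
    of this update.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Toy example: recover a sparse vector from random under-sampled measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05)
```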
93

High-Dimensional Analysis of Regularized Convex Optimization Problems with Application to Massive MIMO Wireless Communication Systems

Alrashdi, Ayed 03 1900 (has links)
In the past couple of decades, the amount of data available has dramatically increased. Thus, in modern large-scale inference problems, the dimension of the signal to be estimated is comparable to or even larger than the number of available observations. Yet the desired properties of the signal typically lie in some low-dimensional structure, such as sparsity, low-rankness, finite alphabet, etc. Recently, non-smooth regularized convex optimization has risen as a powerful tool for the recovery of such structured signals from noisy linear measurements in an assortment of applications in signal processing, wireless communications, machine learning, computer vision, etc. With the advent of Compressed Sensing (CS), there has been a huge number of theoretical results that consider the estimation performance of non-smooth convex optimization in such a high-dimensional setting. In this thesis, we focus on precisely analyzing the high-dimensional error performance of such regularized convex optimization problems in the presence of impairments (such as uncertainties) in the measurement matrix, which has independent Gaussian entries. The precise nature of our analysis allows performance comparison between different types of these estimators and enables us to optimally tune the involved hyper-parameters. In particular, we study the performance of some of the most popular cases in linear inverse problems, such as the LASSO, Elastic Net, Least Squares (LS), Regularized Least Squares (RLS) and their box-constrained variants. In each context, we define appropriate performance measures, and we sharply analyze them in the high-dimensional statistical regime. We use our results for a concrete application: designing efficient decoders for modern massive multi-input multi-output (MIMO) wireless communication systems and optimally allocating their power. The framework used for the analysis is based on Gaussian process methods, in particular on a recently developed strong and tight version of the classical Gordon Comparison Inequality, called the Convex Gaussian Min-max Theorem (CGMT). We also use some results from Random Matrix Theory (RMT) in our analysis.
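As one concrete instance of the estimators listed above, a minimal sketch of a box-constrained regularized least-squares decoder on a toy BPSK MIMO model follows (assuming the cvxpy package is available); the dimensions, noise level and regularization value are placeholders, whereas the thesis shows how such hyper-parameters can be tuned optimally.

```python
import cvxpy as cp
import numpy as np

# Box-constrained regularized least squares on a toy BPSK MIMO model.
rng = np.random.default_rng(1)
n_tx, n_rx = 64, 128
H = rng.standard_normal((n_rx, n_tx)) / np.sqrt(n_rx)   # Gaussian channel
x0 = rng.choice([-1.0, 1.0], size=n_tx)                  # BPSK symbols
y = H @ x0 + 0.1 * rng.standard_normal(n_rx)             # noisy observations

lam = 0.1                                                 # illustrative value
x = cp.Variable(n_tx)
objective = cp.Minimize(cp.sum_squares(y - H @ x) + lam * cp.sum_squares(x))
problem = cp.Problem(objective, [x >= -1, x <= 1])        # box constraint
problem.solve()

x_hat = np.sign(x.value)                                   # hard decisions
ber = np.mean(x_hat != x0)                                 # bit error rate
```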
94

Optimization of Cross-Layer Network Data based on Multimedia Application Requirements

Rahman, Tasnim 15 August 2019 (has links)
This thesis proposes a convex network utility maximization (NUM) problem that can be solved to optimize a cross-layer network based on user- and system-defined requirements for the quality and link capacity of multimedia applications. The problem can also be solved in a distributed manner using dual decomposition. Current techniques focus on optimizing a dynamic network to conform to a real-time application or to a specific performance target; they do not address the system's changing requirements for the network in addition to the user's requirements for an application. Optimizing the cross-layer network for changing system and user requirements allows the overall network of any multi-node, ad-hoc wireless application to be tuned more accurately for data transmission quality and link capacity, so that overall mission demands are met.
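To illustrate the dual-decomposition idea on the most basic NUM instance (logarithmic utilities and fixed routing, a simplified stand-in for the cross-layer formulation above), a short sketch follows; the topology and step size are purely illustrative.

```python
import numpy as np

# Basic NUM: maximize sum_s log(x_s) subject to R @ x <= c, solved by
# dual decomposition. Each source reacts only to the prices on its own
# path, and each link updates its price from its own congestion only.
R = np.array([[1, 1, 0],    # link 1 carried by sources 1 and 2
              [0, 1, 1]])   # link 2 carried by sources 2 and 3
c = np.array([1.0, 2.0])    # link capacities
lam = np.ones(2)            # link prices (dual variables)
alpha = 0.05                # subgradient step size

for _ in range(2000):
    # Local source update: for log utility the maximizer of
    # log(x_s) - q_s * x_s is simply x_s = 1 / q_s (q_s = path price).
    x = 1.0 / (R.T @ lam)
    # Dual subgradient update, kept strictly positive in this toy
    # example to avoid division by zero above.
    lam = np.maximum(1e-6, lam + alpha * (R @ x - c))

print(np.round(x, 3), np.round(lam, 3))   # rates approach the NUM optimum
```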
95

A Convex Optimization Framework for the Optimal Design, Energy, and Thermal Management of Li-Ion Battery Packs

Freudiger, Danny January 2021 (has links)
No description available.
96

Far-field pattern synthesis of transmitarray antennas using convex optimization techniques

Defives, Marie January 2022 (has links)
Transmitarray antennas (TAs) can be seen as the planar counterpart of optical lenses. They are composed of thin radiating elements (unit cells) which introduce different local phase shifts on an incident electromagnetic wave, emitted by a primary source, and re-radiate it. By properly designing the unit cells and their distribution in the TA, the properties of the incident wave, e.g. wavefront and polarization, as well as the pattern of the radiated field can be tailored. Moreover, TAs are suited to low-cost multilayer fabrication processes, e.g. printed circuit board (PCB) technology, and can achieve electronic reconfiguration by embedding diodes. Therefore, TAs are natural and cost-effective candidates for applications requiring steering and shaping of the antenna beam, such as satellite communications (Satcom) and future terrestrial wireless networks. For instance, satellite antennas radiate contoured beams to cover specific Earth regions, whereas Satcom ground terminals and mobile base stations require very directive beams compliant with prescribed radiation masks. In many cases, the amplitude of the field impinging on the TA is fixed and the TA phase profile, i.e. the spatial distribution of the phase-shifting elements, is the only parameter that can be designed to generate the desired radiation pattern. Thus, versatile, efficient and robust phase-only synthesis methods are essential. Closed-form expressions for the phase profile can be derived only in a few cases and for specific targeted far-field patterns. On the other hand, synthesis approaches based on global optimization techniques, such as genetic algorithms, are general purpose, but their convergence and accuracy are often poor, despite the long computation time. In this thesis, a mathematical approach for the phase-only synthesis of TAs using convex optimization is developed to solve diverse pattern shaping problems. The use of convex optimization ensures a good compromise between the generality, robustness and computational cost of the method. First, a model for the analysis of the TA is presented. It accurately predicts the antenna radiation pattern using the equivalence theorem and includes the impact of the spillover, i.e. the direct radiation from the TA feed. Then, the TA synthesis is formulated in terms of the far-field intensity pattern computed by the model. The phase-only synthesis problem is inherently non-convex. However, a sequential convex optimization procedure relying on proper relaxations is proposed to approximately solve it. The accuracy of these sub-optimal solutions is discussed and methods to enhance it are compared. The procedure is successfully applied to synthesize relatively large TAs, with symmetrical and non-symmetrical phase profiles, radiating either focused-beam or shaped-beam patterns, with challenging mask constraints. Finally, three millimeter-wave TAs, comprising different sets of unit cells, are designed using the synthesis procedure. The good agreement between the predicted radiation patterns and those obtained from full-wave simulations of the antennas demonstrates the precision and versatility of the proposed tool, within its range of validity.
/ Transmitarray antennas (TAs) can be regarded as the counterpart of optical lenses. They are composed of thin radiating elements, or unit cells (UCs), which introduce different local phase shifts on an incoming electromagnetic wave and re-radiate it. This wave comes from a primary electromagnetic source. The aim of this master thesis is to determine how to configure the UCs in order to create a desired output beam. TAs are cheap to produce and can also be made electronically reconfigurable using diodes. TAs are used in the Satcom domain and for designing new high-speed networks (6G). When designing an antenna, one can usually tune both the phase and the amplitude of its components to create a desired output beam. For TAs this is a little more difficult: in the TA architecture only the phase can be tuned. Therefore, a special design procedure called phase-only synthesis is needed. Convex optimization offers a good compromise between the generality of the method and the computation time. Here we present a phase-only synthesis method, based on convex optimization, to create TAs that radiate a desired beam.
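As a minimal illustration of what "phase-only" means, the sketch below computes the far-field intensity pattern of a uniform-amplitude linear array in which the element phases are the only free parameters; this is a generic array-factor model, not the thesis's equivalence-theorem model with spillover, and the array size and spacing are assumptions.

```python
import numpy as np

# Far-field array factor of a uniform-amplitude linear array: with the
# element amplitudes fixed, the phase profile `phases` is the only design
# variable, which is exactly the phase-only synthesis setting.
wavelength = 1.0
d = 0.5 * wavelength                              # element spacing
n_elements = 32
k = 2 * np.pi / wavelength
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)   # observation angles

def intensity_pattern(phases):
    """|array factor|^2 for unit-amplitude elements with given phases."""
    n = np.arange(n_elements)
    steering = np.exp(1j * k * d * np.outer(np.sin(theta), n))
    af = steering @ np.exp(1j * phases)
    return np.abs(af) ** 2

# A linear phase gradient steers the beam to about 20 degrees off broadside.
theta0 = np.deg2rad(20.0)
phases = -k * d * np.arange(n_elements) * np.sin(theta0)
pattern_db = 10 * np.log10(intensity_pattern(phases) / n_elements ** 2)
```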
97

Overcoming local optima in control and optimization of cooperative multi-agent systems

Welikala, Shirantha 15 May 2021 (has links)
A cooperative multi-agent system is a collection of interacting agents deployed in a mission space where each agent is allowed to control its local state so that the fleet of agents collectively optimizes a common global objective. While optimization problems associated with multi-agent systems intend to determine the fixed set of globally optimal agent states, control problems aim to obtain the set of globally optimal agent controls. Associated non-convexities in these problems result in multiple local optima. This dissertation explores systematic techniques that can be deployed to either escape or avoid poor local optima while in search of provably better (still local) optima. First, for multi-agent optimization problems with iterative gradient-based solutions, a distributed approach to escape local optima is proposed based on the concept of boosting functions. These functions temporarily transform gradient components at a local optimum into a set of boosted non-zero gradient components in a systematic manner, making the approach more effective than methods that randomly perturb gradient components. A novel variable step size adjustment scheme is also proposed to establish the convergence of this distributed boosting process. The developed boosting concepts are successfully applied to the class of coverage problems. Second, as a means of avoiding convergence to poor local optima in multi-agent optimization, the use of greedy algorithms in generating effective initial conditions is explored. Such greedy methods are computationally cheap and can often exploit submodularity properties of the problem to provide performance-bound guarantees for the obtained solutions. For the class of submodular maximization problems, two new performance bounds are proposed and their effectiveness is illustrated using the class of coverage problems. Third, a class of multi-agent control problems termed Persistent Monitoring on Networks (PMN) is considered, where a team of agents traverses a set of nodes (targets) interconnected according to a network topology, aiming to minimize a measure of the overall node state. For this class of problems, a gradient-based parametric control solution developed in a prior work relies heavily on the initial selection of its `parameters', which often leads to poor local optima. To overcome this initialization challenge, the PMN system's asymptotic behavior is analyzed, and an off-line greedy algorithm is proposed to systematically generate an effective set of initial parameters. Finally, for the same class of PMN problems, a computationally efficient distributed on-line Event-Driven Receding Horizon Control (RHC) solution is proposed as an alternative. This RHC solution is parameter-free, as it automatically optimizes its planning horizon length, and gradient-free, as it uses explicitly derived solutions for each RHC problem invoked at each agent upon each event of interest. Hence, unlike the gradient-based parametric control solutions, the proposed RHC solution does not force the agents to converge to one particular behavior that is likely to be a poor local optimum. Instead, it keeps the agents actively searching for the optimum behavior. In each of these four parts of the thesis, an interactive simulation platform is developed (and made available online) to generate extensive numerical examples that highlight the respective contributions made compared to the state of the art.
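For background on the greedy-initialization idea, the classical greedy rule for cardinality-constrained submodular maximization is sketched below on a toy coverage objective; the new performance bounds derived in the dissertation are not reproduced here.

```python
def greedy_max(ground_set, objective, budget):
    """Classical greedy algorithm for submodular maximization under a
    cardinality constraint: repeatedly add the element with the largest
    marginal gain. For monotone submodular objectives this enjoys the
    well-known (1 - 1/e) performance bound.
    """
    chosen = []
    for _ in range(budget):
        gains = {e: objective(chosen + [e]) - objective(chosen)
                 for e in ground_set if e not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        chosen.append(best)
    return chosen

# Toy coverage objective: each candidate location covers a set of points,
# and the objective counts how many points are covered at least once.
coverage = {0: {1, 2, 3}, 1: {3, 4}, 2: {5, 6, 7, 8}, 3: {1, 8}}
f = lambda S: len(set().union(*(coverage[e] for e in S)))
print(greedy_max(list(coverage), f, budget=2))   # prints [2, 0]
```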
98

On the Topic of Unconstrained Black-Box Optimization with Application to Pre-Hospital Care in Sweden : Unconstrained Black-Box Optimization

Anthony, Tim January 2021 (has links)
In this thesis, the theory and application of black-box optimization methods are explored. More specifically, we looked at two families of algorithms, descent methods and response surface methods (closely related to trust region methods). We also looked at the possibility of using a dimension reduction technique called active subspace, which utilizes sampled gradients. This dimension reduction technique can make the descent methods more suitable for high-dimensional problems, and it turned out to be most effective when the data have a ridge-like structure, i.e. when changes in only one direction affect the response value. Finally, the optimization methods were used on a real-world problem in the context of pre-hospital care, where the objective is to minimize the ambulance response times in the municipality of Umeå by changing the positions of the ambulances. Before applying the methods to the real-world ambulance problem, a simulation study was performed on synthetic data, aiming at finding the strengths and weaknesses of the different models when applied to different test functions at different levels of noise. The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist adjustments that can benefit the pre-hospital care in the municipality of Umeå. However, since the models in this thesis find local rather than global optima, there might still exist even better ambulance positions that can improve the response times further.
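The active-subspace construction from sampled gradients can be summarized in a few lines; the sketch below uses the standard eigendecomposition of the average outer product of gradients, with a toy ridge function and dimensions chosen purely for illustration.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from sampled gradients.

    The matrix C = E[grad f(x) grad f(x)^T] is approximated by an average
    over the samples; its leading eigenvectors span the directions in
    which f varies most, which works best for ridge-like functions.
    """
    C = grad_samples.T @ grad_samples / len(grad_samples)
    eigvals, eigvecs = np.linalg.eigh(C)           # ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]], eigvals[order]

# Toy ridge function in 10 dimensions: f depends on x only through w @ x.
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
grad = lambda x: 2 * (w @ x) * w                   # gradient of (w @ x)**2
X = rng.standard_normal((200, 10))
G = np.array([grad(x) for x in X])
W1, spectrum = active_subspace(G, k=1)             # W1 aligns with w / ||w||
```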
99

Minimax D-optimal designs for regression models with heteroscedastic errors

Yzenbrandt, Kai 20 April 2021 (has links)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As is often the case, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, the minimax D-optimal design problem has an objective function that can be written as a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs. / Graduate
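For readers unfamiliar with design problems, the classical (non-minimax) approximate D-optimal design is itself a convex problem; the sketch below (assuming the cvxpy package, with quadratic regression on a grid as a placeholder model) shows that building block. The minimax, heteroscedastic version studied in the thesis is instead a difference-of-convex problem and requires the specialized algorithm developed there.

```python
import cvxpy as cp
import numpy as np

# Classical approximate D-optimal design: choose design weights w over
# candidate points to maximize the log-determinant of the information
# matrix sum_i w_i f(t_i) f(t_i)^T, with w on the probability simplex.
t = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(t), t, t ** 2])   # regression vectors f(t)

w = cp.Variable(len(t), nonneg=True)
M = sum(w[i] * np.outer(X[i], X[i]) for i in range(len(t)))  # information matrix
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve(solver=cp.SCS)

support = t[w.value > 1e-3]   # for this model, mass concentrates near {-1, 0, 1}
```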
100

Apprentissage statistique pour séquences d’évènements à l’aide de processus ponctuels / Learning from Sequences with Point Processes

Achab, Massil 09 October 2017 (has links)
The guiding principle of this thesis is to show how the arsenal of recent optimization methods can help solve challenging new estimation problems on event models. While the classical framework of supervised learning treats the observations as a collection of independent couples of features and labels, event models focus on arrival timestamps to extract information from the source of data. These timestamped events are chronologically ordered and cannot be regarded as independent. This mere statement motivates the use of a particular mathematical object called a point process to learn some patterns from events. Two examples of point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function allows one to define the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model relates the duration before an event called failure to some covariates. This model can be reformulated in the framework of point processes. The second is the Hawkes process, which models how past events increase the probability of future events. Its multivariate version enables encoding a notion of causality between the different nodes. The thesis is divided into three parts. The first focuses on a new optimization algorithm we developed to estimate the parameter vector of the Cox regression in the large-scale setting. Our algorithm is based on stochastic variance reduced gradient descent (SVRG) and uses Monte Carlo Markov Chain methods to estimate one costly term in the descent direction. We proved convergence rates and showed its numerical performance on both simulated and real-world datasets. The second part shows how Hawkes causality can be retrieved in a nonparametric fashion from the integrated cumulants of the multivariate point process. We designed two methods to estimate the integrals of the Hawkes kernels without any assumption on the shape of the kernel functions. Our methods are faster and more robust towards the shape of the kernels compared to state-of-the-art methods. We proved the statistical consistency of the first method, and turned the second into a convex optimization problem. The last part provides new insights from order book data using the first nonparametric method developed in the second part. We used data from the EUREX exchange, designed new order book models (based on the previous works of Bacry et al.) and ran the estimation method on these point processes. The results are very insightful and consistent with an econometric analysis. Such work is a proof of concept that our estimation method can be used on complex data like high-frequency financial data.
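As a reminder of the base algorithm, plain SVRG for a finite-sum objective is sketched below; the thesis's variant additionally estimates one costly term of the descent direction with MCMC, which is not reproduced here, and the toy least-squares objective merely stands in for the Cox partial likelihood.

```python
import numpy as np

def svrg(grad_i, x0, n_samples, step, n_epochs=20, inner_steps=None):
    """Plain SVRG for min_x (1/n) * sum_i f_i(x).

    grad_i(x, i) returns the gradient of the i-th term. The variance-
    reduced direction combines a stochastic gradient at the current
    iterate, the same term's gradient at a snapshot, and the full
    gradient at the snapshot.
    """
    if inner_steps is None:
        inner_steps = 2 * n_samples
    x_snapshot = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        full_grad = np.mean([grad_i(x_snapshot, i) for i in range(n_samples)], axis=0)
        x = x_snapshot.copy()
        for _ in range(inner_steps):
            i = rng.integers(n_samples)
            direction = grad_i(x, i) - grad_i(x_snapshot, i) + full_grad
            x = x - step * direction
        x_snapshot = x
    return x_snapshot

# Toy least-squares example standing in for the Cox partial likelihood.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((500, 5)), rng.standard_normal(500)
g = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(g, np.zeros(5), n_samples=500, step=0.02)
```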
