71 |
Estimation robuste pour les systèmes incertains / Robust estimation for uncertain systems
Bayon, Benoît 06 December 2012 (has links)
Un système est dit robuste s'il est possible de garantir son bon comportement dynamique malgré les dispersions de ses caractéristiques lors de sa fabrication, les variations de l'environnement ou encore son vieillissement. Au-delà du fait que la dispersion des caractéristiques est inéluctable, une plus grande dispersion permet notamment de diminuer fortement les coûts de production. La prise en compte explicite de la robustesse par les ingénieurs est donc un enjeu crucial lors de la conception d'un système. Des propriétés robustes peuvent être garanties lors de la synthèse d'un correcteur en boucle fermée. Il est en revanche beaucoup plus difficile de garantir ces propriétés en boucle ouverte, ce qui concerne par exemple des cas comme la synthèse d'estimateur. Prendre en compte la robustesse lors de la synthèse est une problématique importante de la communauté du contrôle robuste. Un certain nombre d'outils ont été développés pour analyser la robustesse d'un système vis-à-vis d'un ensemble d'incertitudes (μ analyse par exemple). Bien que le problème soit intrinsèquement complexe au sens algorithmique, des relaxations ont permis de formuler des conditions suffisantes pour tester la stabilité d'un système vis-à-vis d'un ensemble d'incertitudes. L'émergence de l'Optimisation sous contrainte Inégalité Matricielle Linéaire (LMI) a permis de tester ces conditions suffisantes au moyen d'un algorithme efficace, c'est-à-dire convergeant vers une solution en un temps raisonnable grâce au développement des méthodes des points intérieurs. En se basant sur ces résultats d'analyse, le problème de synthèse de correcteurs en boucle fermée ne peut pas être formulé sous la forme d'un problème d'optimisation pour lequel un algorithme efficace existe. En revanche, pour certains cas comme la synthèse de filtres robustes, le problème de synthèse peut être formulé sous la forme d'un problème d'optimisation sous contrainte LMI pour lequel un algorithme efficace existe. Ceci laisse entrevoir un certain potentiel de l'approche robuste pour la synthèse d'estimateurs. Exploitant ce fait, cette thèse propose une approche complète du problème de synthèse d'estimateurs robustes par l'intermédiaire des outils d'analyse de la commande robuste en conservant le caractère efficace de la synthèse lié aux outils classiques. Cette approche passe par une ré-interprétation de l'estimation nominale (sans incertitude) par l'optimisation sous contrainte LMI, puis par une extension systématique des outils de synthèse et d'analyse développés pour l'estimation nominale à l'estimation robuste. Cette thèse présente des outils de synthèse d'estimateurs, mais également des outils d'analyse qui permettront de tester les performances robustes atteintes par les estimateurs. Les résultats présentés dans ce document sont exprimés sous la forme de théorèmes présentant des contraintes LMI. Ces théorèmes peuvent se mettre de façon systématique sous la forme d'un problème d'optimisation pour lequel un algorithme efficace existe. Pour finir, les problèmes de synthèse d'estimateurs robustes appartiennent à une classe plus générale de problèmes de synthèse robuste : les problèmes de synthèse robuste en boucle ouverte. Ces problèmes de synthèse ont un potentiel très intéressant. Des résultats de base sont formulés pour la synthèse en boucle ouverte, permettant de proposer des méthodes de synthèse robustes dans des cas pour lesquels la mise en place d'une boucle de rétroaction est impossible.
Une extension aux systèmes LPV avec une application à la commande de position sans capteur de position est également proposée. / A system is said to be robust if its correct dynamic behaviour can be guaranteed despite the dispersion of its characteristics due to manufacturing, environmental changes or aging. Beyond the fact that some dispersion is inevitable, allowing a greater dispersion strongly reduces production costs. Explicitly accounting for robustness is therefore a crucial issue when designing a system. Robustness can be achieved using feedback, but it is much harder to guarantee in open loop, which is the situation of estimator synthesis, for instance. Robustness is a major concern of the robust control community. Many tools have been developed to analyse the robustness of a system with respect to a set of uncertainties (μ-analysis, for instance). Even though the problem is known to be hard in the algorithmic-complexity sense, relaxations provide sufficient conditions for testing the robust stability of a system. Thanks to the development of interior-point methods, the emergence of optimization under Linear Matrix Inequality (LMI) constraints makes it possible to test these conditions with an efficient algorithm. Based on these analysis results, the robust controller synthesis problem cannot be recast as a convex optimization problem involving LMIs. However, for some cases such as robust filter synthesis, the synthesis problem can be recast as a convex optimization problem under LMI constraints. This suggests that robust control tools have real potential for estimator synthesis. Exploiting this fact, this thesis offers a complete approach to robust estimator synthesis, using robust control analysis tools while keeping what made the nominal approaches successful: efficient computational tools. The approach first reinterprets nominal (uncertainty-free) estimation through LMI optimization, and then systematically extends these synthesis and analysis tools to robust estimation. The thesis presents not only synthesis tools but also analysis tools for assessing the robust performance reached by the estimators. All results are stated as theorems with LMI constraints that can be systematically turned into convex optimization problems for which efficient algorithms exist. Finally, robust estimator synthesis problems belong to a wider class of problems, robust open-loop synthesis problems, which have great potential in many applications. Basic results are formulated for open-loop synthesis, providing robust design methods for cases where feedback cannot be used. An extension to LPV systems, with an application to sensorless position control, is also given.
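To make the LMI machinery referred to above concrete, here is a minimal sketch, assuming CVXPY and the SCS solver are available, of a generic optimization problem under LMI constraints: a Lyapunov stability certificate for an invented system matrix A. It is not the estimator-synthesis LMIs of the thesis itself.

```python
# Minimal sketch (not the thesis's synthesis LMIs): feasibility of the
# Lyapunov LMI  A^T P + P A < 0,  P > 0,  posed as a convex problem.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # example stable system matrix (assumed)
n = A.shape[0]
eps = 1e-6                          # margin enforcing strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve(solver=cp.SCS)

print(prob.status)                  # 'optimal' means a certificate P was found
print(P.value)
```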
|
72 |
Multidimensional adaptive radio links for broadband communications
Codreanu, M. (Marian) 06 November 2007 (has links)
Abstract
Advanced multiple-input multiple-output (MIMO) transceiver structures which utilize the knowledge of channel state information (CSI) at the transmitter side to optimize certain link parameters (e.g., throughput, fairness, spectral efficiency, etc.) under different constraints (e.g., maximum transmitted power, minimum quality of services (QoS), etc.) are considered in this thesis.
Adaptive transmission schemes for point-to-point MIMO systems are considered first. A robust link adaptation method for time-division duplex systems employing MIMO-OFDM channel eigenmode based transmission is developed. A low complexity bit and power loading algorithm which requires low signaling overhead is proposed.
Two algorithms for computing the sum-capacity of MIMO downlink channels with full CSI knowledge are derived. The first one is based on the iterative waterfilling method. The convergence of the algorithm is proved analytically and the computer simulations show that the algorithm converges faster than the earlier variants of sum power constrained iterative waterfilling algorithms. The second algorithm is based on the dual decomposition method. By tracking the instantaneous error in the inner loop, a faster version is developed.
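For illustration, a minimal sketch of the waterfilling principle underlying these sum-capacity algorithms, restricted to a single user and parallel Gaussian channels with invented gains; the thesis's iterative and dual-decomposition variants for the multi-user downlink are not reproduced here.

```python
# Sketch of classical waterfilling over parallel Gaussian channels
# (single-user case; NOT the multi-user iterative algorithm of the thesis).
import numpy as np

def waterfill(gains, p_total):
    """Allocate p_total over channels with gains g_i to maximize
    sum log(1 + g_i * p_i), via bisection on the water level."""
    lo, hi = 0.0, p_total + 1.0 / np.min(gains)
    for _ in range(100):                       # bisection on water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)  # p_i = max(mu - 1/g_i, 0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return p

gains = np.array([2.0, 1.0, 0.25, 0.05])       # example channel gains (assumed)
p = waterfill(gains, p_total=1.0)
rate = np.sum(np.log2(1.0 + gains * p))
print(p, rate)
```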
The problem of linear transceiver design in MIMO downlink channels is then considered for the case where full CSI is available at the transmitter only for the scheduled users. General methods for joint power control and linear transmit and receive beamformer design are provided. The proposed algorithms can handle multiple antennas at the base station and at the mobile terminals, with an arbitrary number of data streams per scheduled user. The optimization criteria are fairly general and include sum power minimization under a minimum signal-to-interference-plus-noise ratio (SINR) constraint per data stream, the balancing of SINR values among data streams, minimum SINR maximization, weighted sum-rate maximization, and weighted sum mean square error minimization. Besides the traditional sum power constraint on the transmit beamformers, multiple sum power constraints can be imposed on arbitrary subsets of the transmit antennas. This extends the applicability of the results to novel system architectures, such as cooperative base station transmission using distributed MIMO antennas. By imposing per-antenna power constraints, issues related to the linearity of the power amplifiers can be handled as well.
The original linear transceiver design problems are decomposed into a series of remarkably simpler optimization problems which can be efficiently solved using standard convex optimization techniques. The advantage of this approach is that it can be easily extended to accommodate various supplementary constraints, such as upper and/or lower bounds on the SINR values and guaranteed QoS for different subsets of users. The ability to handle transceiver optimization problems where a network-centric objective (e.g., aggregate throughput or transmitted power) is optimized subject to user-centric constraints (e.g., minimum QoS requirements) is an important feature which must be supported by future broadband communication systems.
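As a hedged sketch of one such convex subproblem, the snippet below solves sum-power minimization under per-stream SINR constraints for a toy real-valued MISO downlink via the standard second-order-cone reformulation; channels, SINR targets, and noise level are invented, and this is not the full joint transceiver design of the thesis.

```python
# Toy convex subproblem: minimize total transmit power subject to per-user
# SINR targets (real-valued MISO downlink, second-order-cone reformulation).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
Nt, K = 4, 3                         # transmit antennas, users (assumed)
H = rng.standard_normal((K, Nt))     # example channel rows h_k (assumed)
gamma = np.full(K, 2.0)              # SINR targets (assumed)
sigma = 1.0                          # noise standard deviation (assumed)

W = cp.Variable((Nt, K))             # beamformer for user k is W[:, k]
constraints = []
for k in range(K):
    # Stack interference terms and the noise level into one vector.
    interference = cp.hstack([H[k] @ W[:, j] for j in range(K) if j != k] + [sigma])
    # Standard SOC reformulation: h_k^T w_k >= sqrt(gamma_k) * ||[interf; sigma]||
    constraints.append(H[k] @ W[:, k] >= np.sqrt(gamma[k]) * cp.norm(interference))
prob = cp.Problem(cp.Minimize(cp.sum_squares(W)), constraints)
prob.solve()
print(prob.status, prob.value)
```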
|
73 |
Structured sparsity-inducing norms : statistical and algorithmic properties with applications to neuroimaging / Normes parcimonieuses structurées : propriétés statistiques et algorithmiques avec applications à l’imagerie cérébrale
Jenatton, Rodolphe 24 November 2011 (has links)
De nombreux domaines issus de l’industrie et des sciences appliquées ont été les témoins d’une révolution numérique. Cette dernière s’est accompagnée d’une croissance du volume des données, dont le traitement est devenu un défi technique. Dans ce contexte, la parcimonie est apparue comme un concept central en apprentissage statistique. Il est en effet naturel de vouloir exploiter les données disponibles via un nombre réduit de paramètres. Cette thèse se concentre sur une forme particulière et plus récente de parcimonie, nommée parcimonie structurée. Comme son nom l’indique, nous considérerons des situations où, au-delà de la seule parcimonie, nous aurons également à disposition des connaissances a priori relatives à des propriétés structurelles du problème. L’objectif de cette thèse est d'analyser le concept de parcimonie structurée, en se basant sur des considérations statistiques, algorithmiques et appliquées. Nous commencerons par introduire une famille de normes structurées parcimonieuses dont les aspects statistiques sont étudiés en détail. Nous considérerons ensuite l’apprentissage de dictionnaires, où nous exploiterons les normes introduites précédemment dans un cadre de factorisation de matrices. Différents outils algorithmiques efficaces, tels que des méthodes proximales, seront alors proposés. Grâce à ces outils, nous illustrerons sur de nombreuses applications pourquoi la parcimonie structurée peut être bénéfique. Ces exemples contiennent des tâches de restauration en traitement de l’image, la modélisation hiérarchique de documents textuels, ou encore la prédiction de la taille d’objets à partir de signaux d’imagerie par résonance magnétique fonctionnelle. / Numerous fields of applied sciences and industries have recently been witnessing a process of digitisation. This trend has come with an increase in the amount of digital data, whose processing becomes a challenging task. In this context, parsimony, also known as sparsity, has emerged as a key concept in machine learning and signal processing. It is indeed appealing to exploit data only via a reduced number of parameters. This thesis focuses on a particular and more recent form of sparsity, referred to as structured sparsity. As its name indicates, we shall consider situations where we are not only interested in sparsity, but where some structural prior knowledge is also available. The goal of this thesis is to analyze the concept of structured sparsity, based on statistical, algorithmic and applied considerations. To begin with, we introduce a family of structured sparsity-inducing norms whose statistical aspects are closely studied. In particular, we show what type of prior knowledge they correspond to. We then turn to sparse structured dictionary learning, where we use the previous norms within the framework of matrix factorization. From an optimization viewpoint, we derive several efficient and scalable algorithmic tools, such as working-set strategies and proximal-gradient techniques. With these methods in place, we illustrate, on numerous real-world applications from various fields, when and why structured sparsity is useful. This includes, for instance, restoration tasks in image processing, the modelling of text documents as hierarchies of topics, the inter-subject prediction of sizes of objects from fMRI signals, and background-subtraction problems in computer vision.
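As a minimal sketch of the proximal tools mentioned above, restricted to the non-overlapping group-Lasso norm (the structured norms of the thesis are more general), the proximal operator reduces to group-wise soft-thresholding and can be plugged into a proximal-gradient loop; the data below are synthetic.

```python
# Sketch: proximal operator of the (non-overlapping) group-Lasso penalty
# and a proximal-gradient loop for least squares. Data are synthetic.
import numpy as np

def prox_group_l2(w, groups, lam):
    """Group-wise soft-thresholding: shrink each block w_g toward zero."""
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        out[g] = 0.0 if norm_g <= lam else (1.0 - lam / norm_g) * w[g]
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
y = rng.standard_normal(50)
groups = [np.arange(0, 3), np.arange(3, 6)]       # two blocks of 3 features
lam, step = 0.5, 1.0 / np.linalg.norm(X, 2) ** 2  # step = 1/L, L = ||X||_2^2

w = np.zeros(6)
for _ in range(200):                              # proximal-gradient (ISTA-type) loop
    grad = X.T @ (X @ w - y)
    w = prox_group_l2(w - step * grad, groups, step * lam)
print(w)
```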
|
74 |
Wavelet transform modulus : phase retrieval and scattering / Transformée en ondelettes : reconstruction de phase et de scattering
Waldspurger, Irène 10 November 2015 (has links)
Les tâches qui consistent à comprendre automatiquement le contenu d’un signal naturel, comme une image ou un son, sont en général difficiles. En effet, dans leur représentation naïve, les signaux sont des objets compliqués, appartenant à des espaces de grande dimension. Représentés différemment, ils peuvent en revanche être plus faciles à interpréter. Cette thèse s’intéresse à une représentation fréquemment utilisée dans ce genre de situations, notamment pour analyser des signaux audio : le module de la transformée en ondelettes. Pour mieux comprendre son comportement, nous considérons, d’un point de vue théorique et algorithmique, le problème inverse correspondant : la reconstruction d’un signal à partir du module de sa transformée en ondelettes. Ce problème appartient à une classe plus générale de problèmes inverses : les problèmes de reconstruction de phase. Dans un premier chapitre, nous décrivons un nouvel algorithme, PhaseCut, qui résout numériquement un problème de reconstruction de phase générique. Comme l’algorithme similaire PhaseLift, PhaseCut utilise une relaxation convexe, qui se trouve en l’occurrence être de la même forme que les relaxations du problème abondamment étudié MaxCut. Nous comparons les performances de PhaseCut et PhaseLift, en termes de précision et de rapidité. Dans les deux chapitres suivants, nous étudions le cas particulier de la reconstruction de phase pour la transformée en ondelettes. Nous montrons que toute fonction sans fréquence négative est uniquement déterminée (à une phase globale près) par le module de sa transformée en ondelettes, mais que la reconstruction à partir du module n’est pas stable au bruit, pour une définition forte de la stabilité. On démontre en revanche une propriété de stabilité locale. Nous présentons également un nouvel algorithme de reconstruction de phase, non-convexe, qui est spécifique à la transformée en ondelettes, et étudions numériquement ses performances. Enfin, dans les deux derniers chapitres, nous étudions une représentation plus sophistiquée, construite à partir du module de transformée en ondelettes : la transformée de scattering. Notre but est de comprendre quelles propriétés d’un signal sont caractérisées par sa transformée de scattering. On commence par démontrer un théorème majorant l’énergie des coefficients de scattering d’un signal, à un ordre donné, en fonction de l’énergie du signal initial, convolé par un filtre passe-haut qui dépend de l’ordre. On étudie ensuite une généralisation de la transformée de scattering, qui s’applique à des processus stationnaires. On montre qu’en dimension finie, cette transformée généralisée préserve la norme. En dimension un, on montre également que les coefficients de scattering généralisés d’un processus caractérisent la queue de distribution du processus. / Automatically understanding the content of a natural signal, like a sound or an image, is in general a difficult task. In their naive representation, signals are indeed complicated objects, belonging to high-dimensional spaces. With a different representation, they can however be easier to interpret. This thesis considers a representation commonly used in these cases, in particular for the analysis of audio signals: the modulus of the wavelet transform. To better understand the behaviour of this operator, we study, from a theoretical as well as algorithmic point of view, the corresponding inverse problem: the reconstruction of a signal from the modulus of its wavelet transform.
This problem belongs to a wider class of inverse problems: phase retrieval problems. In a first chapter, we describe a new algorithm, PhaseCut, which numerically solves a generic phase retrieval problem. Like the similar algorithm PhaseLift, PhaseCut relies on a convex relaxation of the phase retrieval problem, which happens to be of the same form as relaxations of the widely studied problem MaxCut. We compare the performances of PhaseCut and PhaseLift, in terms of precision and complexity. In the next two chapters, we study the specific case of phase retrieval for the wavelet transform. We show that any function with no negative frequencies is uniquely determined (up to a global phase) by the modulus of its wavelet transform, but that the reconstruction from the modulus is not stable to noise, for a strong notion of stability. However, we prove a local stability property. We also present a new non-convex phase retrieval algorithm, which is specific to the case of the wavelet transform, and we numerically study its performances. Finally, in the last two chapters, we study a more sophisticated representation, built from the modulus of the wavelet transform: the scattering transform. Our goal is to understand which properties of a signal are characterized by its scattering transform. We first prove that the energy of scattering coefficients of a signal, at a given order, is upper bounded by the energy of the signal itself, convolved with a high-pass filter that depends on the order. We then study a generalization of the scattering transform, for stationary processes. We show that, in finite dimension, this generalized transform preserves the norm. In dimension one, we also show that the generalized scattering coefficients of a process characterize the tail of its distribution.
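A hedged numerical sketch of the PhaseCut relaxation on a small random Gaussian instance (not the wavelet-transform setting studied in the thesis): with phaseless measurements b = |Ax|, PhaseCut minimizes Tr(MU) over Hermitian U ⪰ 0 with unit diagonal, where M = diag(b)(I − AA⁺)diag(b). Sizes and data below are invented.

```python
# Sketch of the PhaseCut semidefinite relaxation on a small random instance
# (generic phase retrieval, not the wavelet-transform case of the thesis).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 8, 24                                   # signal length, number of measurements
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x_true)                         # phaseless measurements

Apinv = np.linalg.pinv(A)
M = np.diag(b) @ (np.eye(m) - A @ Apinv) @ np.diag(b)
M = 0.5 * (M + M.conj().T)                     # symmetrize for numerical safety

U = cp.Variable((m, m), hermitian=True)
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(M @ U))),
                  [U >> 0, cp.diag(U) == 1])
prob.solve(solver=cp.SCS)

# Recover phases from the leading eigenvector of U, then the signal itself.
vals, vecs = np.linalg.eigh(U.value)
u = np.exp(1j * np.angle(vecs[:, -1]))
x_hat = Apinv @ (b * u)
# Up to a global phase, x_hat should be close to x_true.
print(np.linalg.norm(np.abs(A @ x_hat) - b) / np.linalg.norm(b))
```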
|
75 |
Compressive Radar Cross Section Computation
Li, Xiang 15 January 2020 (has links)
Compressive Sensing (CS) is a novel signal-processing paradigm that allows sparse or compressible signals to be sampled at rates lower than the Nyquist rate. The past decade has seen substantial research on imaging applications using compressive sensing. In this thesis, CS is combined with the commercial electromagnetic (EM) simulation software newFASANT to improve its efficiency in solving EM scattering problems such as the Radar Cross Section (RCS) of complex targets at GHz frequencies. The thesis proposes a CS-RCS approach that allows efficient and accurate recovery of the RCS from under-sampled data taken at a random set of incident angles, using an accelerated iterative soft-thresholding reconstruction algorithm. RCS results for a generic missile and a Canadian KingAir aircraft model, simulated using Physical Optics (PO) as the EM solver at various frequencies and angular resolutions, demonstrate the efficiency and accuracy of the proposed method.
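As a hedged illustration of the accelerated iterative soft-thresholding idea, the sketch below runs a generic FISTA-style solver for ℓ1-regularized least squares on synthetic under-sampled data; it is not the newFASANT/PO pipeline of the thesis.

```python
# Sketch: accelerated iterative soft-thresholding (FISTA-style) recovery of a
# sparse vector from under-sampled linear measurements. Data are synthetic.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # under-sampled measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x, z, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(500):
    x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
    x, t = x_new, t_new
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```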
|
76 |
A Convex Optimization Framework for the Optimal Design, Energy, and Thermal Management of Li-Ion Battery Packs
Freudiger, Danny January 2021 (has links)
No description available.
|
77 |
Far-field pattern synthesis of transmitarray antennas using convex optimization techniques
Defives, Marie January 2022 (has links)
Transmitarray antennas (TAs) can be seen as the planar counterpart of optical lenses. They are composed of thin radiating elements (unit cells) which introduce different local phase shifts on an incident electromagnetic wave, emitted by a primary source, and re-radiate it. By properly designing the unit cells and their distribution in the TA, the properties of the incident wave, e.g. wavefront and polarization, as well as the pattern of the radiated field can be tailored. Moreover, TAs are suited to low-cost multilayer fabrication processes, e.g. printed circuit board (PCB) technology, and can achieve electronic reconfiguration by embedding diodes. Therefore, TAs are natural and cost-effective candidates for applications that require steering and shaping the antenna beam, such as satellite communications (Satcom) and future terrestrial wireless networks. For instance, satellite antennas radiate contoured beams to cover specific Earth regions, whereas Satcom ground terminals and mobile base stations require very directive beams compliant with prescribed radiation masks. In many cases, the amplitude of the field impinging on the TA is fixed and the TA phase profile, i.e. the spatial distribution of the phase-shifting elements, is the only parameter that can be designed to generate the desired radiation pattern. Thus, versatile, efficient and robust phase-only synthesis methods are essential. Closed-form expressions for the phase profile can be derived only in a few cases and for specific targeted far-field patterns. On the other hand, synthesis approaches based on global optimization techniques, such as genetic algorithms, are general purpose but their convergence and accuracy are often poor, despite the long computation time. In this thesis, a mathematical approach for the phase-only synthesis of TAs using convex optimization is developed to solve diverse pattern shaping problems. The use of convex optimization ensures a good compromise between the generality, robustness and computational cost of the method. First, a model for the analysis of the TA is presented. It accurately predicts the antenna radiation pattern using the equivalence theorem and includes the impact of the spillover, i.e. the direct radiation from the TA feed. Then, the TA synthesis is formulated in terms of the far-field intensity pattern computed by the model. The phase-only synthesis problem is inherently non-convex. However, a sequential convex optimization procedure relying on proper relaxations is proposed to approximately solve it. The accuracy of these sub-optimal solutions is discussed and methods to enhance it are compared. The procedure is successfully applied to synthesize relatively large TAs, with symmetrical and non-symmetrical phase profiles, radiating either focused-beam or shaped-beam patterns, with challenging mask constraints. Finally, three millimeter-wave TAs, comprising different sets of unit cells, are designed using the synthesis procedure. The good agreement between the predicted radiation patterns and those obtained from full-wave simulations of the antennas demonstrates the precision and versatility of the proposed tool, within its range of validity. /
Transmitarray antennas (TAs) can be regarded as the counterpart of optical lenses. They are composed of thin radiating elements, or unit cells (UCs), that introduce different local phase shifts on an incoming electromagnetic wave and re-radiate it. This wave comes from a primary electromagnetic source. The aim of this master thesis is to determine how to place the UCs in order to create a desired output beam. TAs are cheap to build and can also be made electronically reconfigurable using diodes. TAs are used in the Satcom domain and in the design of new high-speed networks (6G). When creating an antenna, one can usually tune both the phase and the amplitude of its components to create a desired output beam; for TAs this is a little more difficult, since only the phase can be tuned in the TA architecture. A special design procedure, called phase-only synthesis, is therefore needed. Convex optimization offers a good compromise between the generality of the method and its computation time. Here we present a phase-only synthesis method, based on convex optimization, to create TAs that radiate a precise beam.
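For context, a hedged sketch of a related but fully convex pattern-synthesis problem: sidelobe minimization for a uniform linear array with complex weights free in both amplitude and phase. The phase-only transmitarray problem treated in the thesis is non-convex and is handled by the sequential convex procedure; the array size, spacing, and mask below are invented.

```python
# Sketch: convex sidelobe minimization for a uniform linear array with free
# complex weights (NOT the phase-only transmitarray synthesis of the thesis).
import numpy as np
import cvxpy as cp

N, d = 16, 0.5                                    # elements, spacing in wavelengths
theta = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
steer = np.exp(2j * np.pi * d * np.outer(np.sin(theta), np.arange(N)))
side = np.abs(np.rad2deg(theta)) >= 15.0          # sidelobe mask region (assumed)
a0 = np.ones(N, dtype=complex)                    # boresight steering vector

w = cp.Variable(N, complex=True)                  # free complex excitations
prob = cp.Problem(cp.Minimize(cp.max(cp.abs(steer[side] @ w))),
                  [a0.conj() @ w == 1.0])         # unit gain at boresight
prob.solve()
print(prob.status, 20 * np.log10(prob.value))     # peak sidelobe level in dB
```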
|
78 |
Overcoming local optima in control and optimization of cooperative multi-agent systems
Welikala, Shirantha 15 May 2021 (has links)
A cooperative multi-agent system is a collection of interacting agents deployed in a mission space where each agent is allowed to control its local state so that the fleet of agents collectively optimizes a common global objective. While optimization problems associated with multi-agent systems intend to determine the fixed set of globally optimal agent states, control problems aim to obtain the set of globally optimal agent controls. Associated non-convexities in these problems result in multiple local optima. This dissertation explores systematic techniques that can be deployed to either escape or avoid poor local optima while in search of provably better (still local) optima.
First, for multi-agent optimization problems with iterative gradient-based solutions, a distributed approach to escaping local optima is proposed based on the concept of boosting functions. At a local optimum, these functions temporarily transform the vanishing gradient components into a set of boosted non-zero gradient components in a systematic manner, which is more effective than methods that randomly perturb the gradient components. A novel variable step size adjustment scheme is also proposed to establish the convergence of this distributed boosting process. The developed boosting concepts are successfully applied to the class of coverage problems.
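A generic, heavily hedged sketch of the escape loop described above, on a made-up one-dimensional multimodal objective; the "boost" used here is a trivial fixed push applied when the gradient vanishes, standing in for the dissertation's problem-specific boosting functions and variable step-size scheme.

```python
# Generic sketch of a boosting-style escape loop on a 1-D multimodal function.
# The 'boost' here (a fixed push once the gradient vanishes) is a toy stand-in
# for the dissertation's problem-specific boosting functions.
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.2 * x            # multimodal objective (assumed)
grad = lambda x: 3.0 * np.cos(3.0 * x) + 0.2

x, step, tol = 0.3, 0.05, 1e-4
best_x, best_f = x, f(x)
for outer in range(5):                             # a few boost rounds
    for _ in range(2000):                          # plain gradient-ascent phase
        g = grad(x)
        if abs(g) < tol:
            break
        x += step * g
    if f(x) > best_f:                              # keep the best optimum found
        best_x, best_f = x, f(x)
    x += 1.5                                       # 'boosted' non-zero step at the optimum
print(best_x, best_f)
```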
Second, as a means of avoiding convergence to poor local optima in multi-agent optimization, the use of greedy algorithms in generating effective initial conditions is explored. Such greedy methods are computationally cheap and can often exploit submodularity properties of the problem to provide performance bound guarantees to the obtained solutions. For the class of submodular maximization problems, two new performance bounds are proposed and their effectiveness is illustrated using the class of coverage problems.
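A hedged sketch of the greedy idea on a toy max-coverage instance with invented sets; for monotone submodular objectives the classical greedy value is within a (1 − 1/e) factor of optimal, while the two new bounds proposed in the dissertation are not reproduced here.

```python
# Sketch: greedy maximization of a monotone submodular coverage function.
# For such objectives the greedy value is within (1 - 1/e) of optimal.
import itertools

universe = set(range(12))
sets = {                                        # candidate agent "footprints" (assumed)
    'a': {0, 1, 2, 3}, 'b': {3, 4, 5}, 'c': {5, 6, 7, 8},
    'd': {8, 9}, 'e': {9, 10, 11}, 'f': {0, 4, 8},
}
budget = 3                                      # number of agents to place

def coverage(selection):
    return len(set().union(*(sets[s] for s in selection)))

chosen = []
for _ in range(budget):                         # greedy: best marginal gain each step
    best = max((s for s in sets if s not in chosen),
               key=lambda s: coverage(chosen + [s]))
    chosen.append(best)

opt = max(itertools.combinations(sets, budget), key=lambda c: coverage(list(c)))
print(chosen, coverage(chosen), coverage(list(opt)))
```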
Third, a class of multi-agent control problems termed Persistent Monitoring on Networks (PMN) is considered, where a team of agents traverses a set of nodes (targets) interconnected according to a network topology, aiming to minimize a measure of the overall node state. For this class of problems, a gradient-based parametric control solution developed in a prior work relies heavily on the initial selection of its 'parameters', which often leads to poor local optima. To overcome this initialization challenge, the PMN system's asymptotic behavior is analyzed, and an off-line greedy algorithm is proposed to systematically generate an effective set of initial parameters.
Finally, for the same class of PMN problems, a computationally efficient distributed on-line Event-Driven Receding Horizon Control (RHC) solution is proposed as an alternative. This RHC solution is parameter-free as it automatically optimizes its planning horizon length and gradient-free as it uses explicitly derived solutions for each RHC problem invoked at each agent upon each event of interest. Hence, unlike the gradient-based parametric control solutions, the proposed RHC solution does not force the agents to converge to one particular behavior that is likely to be a poor local optimum. Instead, it keeps the agents actively searching for the optimum behavior.
In each of these four parts of the thesis, an interactive simulation platform is developed (and made available online) to generate extensive numerical examples that highlight the respective contributions made compared to the state of the art.
|
79 |
On the Topic of Unconstrained Black-Box Optimization with Application to Pre-Hospital Care in Sweden : Unconstrained Black-Box Optimization
Anthony, Tim January 2021 (has links)
In this thesis, the theory and application of black-box optimization methods are explored. More specifically, we looked at two families of algorithms, descent methods and response surface methods (closely related to trust region methods). We also looked at the possibility of using a dimension reduction technique called active subspace, which utilizes sampled gradients. This dimension reduction technique can make the descent methods more suitable for high-dimensional problems, and it turned out to be most effective when the data have a ridge-like structure. Finally, the optimization methods were used on a real-world problem in the context of pre-hospital care, where the objective is to minimize the ambulance response times in the municipality of Umeå by changing the positions of the ambulances. Before applying the methods to the real-world ambulance problem, a simulation study was performed on synthetic data, aiming at finding the strengths and weaknesses of the different models when applied to different test functions at different levels of noise. The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist adjustments that can benefit the pre-hospital care in the municipality of Umeå. However, since the models in this thesis find local and not global optima, there might still exist even better ambulance positions that can improve the response time further. / I denna rapport undersöks teorin och tillämpningarna av diverse blackbox optimeringsmetoder. Mer specifikt så har vi tittat på två familjer av algoritmer, descentmetoder och responsytmetoder (nära besläktade med tillitsregionmetoder). Vi tittar också på möjligheterna att använda en dimensionreduktionsteknik som kallas active subspace som använder samplade gradienter för att göra descentmetoderna mer lämpade för högdimensionella problem, vilket visade sig vara mest effektivt när datat har en struktur där ändringar i endast en riktning har effekt på responsvärdet. Slutligen användes optimeringsmetoderna på ett verkligt problem från sjukhusvården, där målet var att minimera svarstiderna för ambulansutryckningar i Umeå kommun genom att ändra ambulanspositionerna. Innan metoderna tillämpades på det verkliga ambulansproblemet genomfördes också en simuleringsstudie på syntetiskt data. Detta för att hitta styrkorna och svagheterna hos de olika modellerna genom att undersöka hur dem hanterar ett flertal testfunktioner under olika nivåer av brus. Resultaten visade att vi kunde förbättra ambulansernas responstider över flera olika prestandamått jämfört med responstiderna för de nuvarande ambulanspositionerna. Detta indikerar att det finns förändringar av positioneringen av ambulanser som kan gynna den pre-hospitala vården inom Umeå kommun. Dock, eftersom modellerna i denna rapport hittar lokala och inte globala optimala punkter kan det fortfarande finnas ännu bättre ambulanspositioner som kan förbättra responstiden ytterligare.
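A hedged sketch of the active subspace idea: estimate the dominant directions from sampled gradients of a synthetic ridge-like function (the eigenvectors of the empirical gradient covariance); the thesis's descent methods and the ambulance model are not reproduced.

```python
# Sketch: estimating an active subspace from sampled gradients of a toy
# ridge-like function f(x) = sin(a^T x). Dimensions and data are made up.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 10, 200
a = rng.standard_normal(dim)
a /= np.linalg.norm(a)                          # hidden ridge direction

def grad_f(x):                                  # grad of sin(a^T x) is cos(a^T x) * a
    return np.cos(a @ x) * a

X = rng.standard_normal((n_samples, dim))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / n_samples                         # empirical covariance of gradients
eigvals, eigvecs = np.linalg.eigh(C)

# The leading eigenvector should align with the ridge direction a.
w = eigvecs[:, -1]
print(abs(w @ a))                               # close to 1.0
print(eigvals[::-1][:3])                        # sharp drop after the first eigenvalue
```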
|
80 |
Minimax D-optimal designs for regression models with heteroscedastic errors
Yzenbrandt, Kai 20 April 2021 (has links)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As usual, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, the objective function of the minimax D-optimal design problem can be written as a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs. / Graduate
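For context, a hedged sketch of the nominal (homoscedastic, non-minimax) approximate D-optimal design problem on an invented grid of candidate points, solved as a convex log-determinant maximization with CVXPY; the minimax heteroscedastic problem of the thesis has a difference-of-convex objective and requires the specialized algorithm described above.

```python
# Sketch: approximate D-optimal design for quadratic regression on a grid
# (nominal homoscedastic case, NOT the minimax heteroscedastic problem).
import numpy as np
import cvxpy as cp

x = np.linspace(-1.0, 1.0, 21)                  # candidate design points (assumed)
F = np.column_stack([np.ones_like(x), x, x**2]) # regression vectors f(x) = (1, x, x^2)

w = cp.Variable(len(x), nonneg=True)            # design weights on the grid
M = F.T @ cp.diag(w) @ F                        # information matrix sum_i w_i f_i f_i^T
prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve(solver=cp.SCS)

support = [(xi, wi) for xi, wi in zip(x, w.value) if wi > 1e-3]
print(support)                                  # mass concentrates near {-1, 0, 1}
```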
|