231

Conception d’environnement instrumenté pour la veille à la personne / Design of instrumented environment for human monitoring

Massein, Aurélien 22 November 2018 (has links)
L'instrumentation permet à notre environnement, maison ou bâtiment, de devenir intelligent en s'adaptant à nos modes de vie et en nous assistant au quotidien. Un environnement intelligent est sensible et réactif à nos activités, afin d'améliorer notre qualité de vie. La fiabilité d'identification des activités est ainsi essentielle pour cette intelligence ambiante : elle est directement dépendante du positionnement des capteurs au sein de l'environnement. Cette question essentielle du placement des capteurs est très peu considérée par les systèmes ambiants commercialisés ou même dans la littérature. Pourtant, elle est la source principale de leurs dysfonctionnements où une mauvaise reconnaissance des activités entraîne une mauvaise assistance fournie. Le placement de capteurs consiste à choisir et à positionner des capteurs pertinents pour une identification fiable des activités. Dans cette thèse, nous développons et détaillons une méthodologie de placement de capteurs axée sur l'identifiabilité des activités d'intérêt. Nous la qualifions en nous intéressant à deux évaluations différentes : la couverture des intérêts et l'incertitude de mesures. Dans un premier temps, nous proposons un modèle de l'activité où nous décomposons l'activité en actions caractérisées afin d'être indépendant de toute technologie ambiante (axée connaissances ou données). Nous représentons actions et capteurs par un modèle ensembliste unifiant, permettant de fusionner des informations homogènes de capteurs hétérogènes. Nous en évaluons l'identifiabilité des actions d'intérêt au regard des capteurs placés, par des notions de précision (performance d'identification) et de sensibilité (couverture des actions). Notre algorithme de placement des capteurs utilise la Pareto-optimalité pour proposer une large palette de placements-solutions pertinents et variés, pour ces multiples identifiabilités à maximiser. Nous illustrons notre méthodologie et notre évaluation en utilisant des capteurs de présence, et en choisissant optimalement la caractéristique à couvrir pour chaque action. Dans un deuxième temps, nous nous intéressons à la planification optimale des expériences où l'analyse de la matrice d'information permet de quantifier l'influence des sources d'incertitudes sur l'identification d'une caractéristique d'action. Nous représentons les capteurs continus et l'action caractérisée par un modèle analytique, et montrons que certaines incertitudes doivent être prises en compte et intégrées dans une nouvelle matrice d'information. Nous y appliquons les indices d'observabilité directement pour évaluer l'identifiabilité d'une action caractérisée (incertitude d'identification). Nous illustrons cette évaluation alternative en utilisant des capteurs d'angle, et nous la comparons à la matrice d'information classique. Nous discutons des deux évaluations abordées et de leur complémentarité pour la conception d'environnement instrumenté pour la veille à la personne. / Instrumentation enables our environment, house or building, to get smart through self-adjustment to our lifestyles and through assistance in our daily life. A smart environment is sensitive and responsive to our activities, in order to improve our quality of life. Reliability of activity identification is absolutely necessary for such ambient intelligence: it depends directly on the positioning of the sensors within the environment. This fundamental issue of sensor placement is hardly considered by marketed ambient systems or even in the literature.
Yet it is the main source of ambient systems' malfunctions and failures, because poor activity recognition leads to poorly delivered assistance. Sensor placement is about choosing and positioning relevant sensors for a reliable identification of activities. In this thesis, we develop and detail a sensor placement methodology driven by the identifiability of the activities of interest. We quantify it by looking at two different evaluations: coverage of interests and uncertainty of measures. First, we present an activity model that decomposes each activity into characterised actions, so as to be independent of any ambient technology (whether knowledge- or data-driven). We represent actions and sensors with a unifying set-theoretic model, which enables fusing homogeneous information from heterogeneous sensors. We then evaluate the identifiability of each action of interest with respect to the placed sensors, through notions of precision (identification performance) and sensitivity (action coverage). Our sensor placement algorithm uses Pareto optimality to offer a wide range of relevant solution-placements for the multiple identifiabilities to be maximised. We showcase our methodology and our evaluation by solving a problem featuring motion and binary sensors, optimally choosing for each action the characteristic to cover. Finally, we look into optimal design of experiments, analysing the information matrix to quantify how sources of uncertainty influence the identification of an action's characteristic. We depict continuous sensors and the characterised action by an analytical model, and we show that some uncertainties should be considered and included in a new information matrix. We then apply observability indices directly to evaluate the identifiability of a characterised action (uncertainty of identification), and compare our new information matrix to the classical one. We showcase this alternative evaluation by solving a sensor placement problem featuring angular sensors. We discuss both evaluations and their complementarity towards the design of instrumented environments for human monitoring.
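The placement algorithm above relies on Pareto optimality over several identifiability objectives. As a minimal illustration of that selection step, not taken from the thesis, the following Python sketch keeps only the non-dominated candidate placements when every identifiability score is to be maximised; the candidate placements and their scores are invented.

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows when every column is maximised.

    scores[i, j] is the j-th identifiability score (e.g. precision or
    sensitivity of an action of interest) of candidate placement i.
    """
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Another candidate dominates i if it is >= everywhere and > somewhere.
        better_eq = np.all(scores >= scores[i], axis=1)
        better = np.any(scores > scores[i], axis=1)
        dominated_by = better_eq & better
        dominated_by[i] = False
        if dominated_by.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical candidates: columns = identifiability of two actions of interest.
candidates = np.array([
    [0.90, 0.40],   # placement A
    [0.60, 0.80],   # placement B
    [0.55, 0.75],   # placement C (dominated by B)
    [0.30, 0.95],   # placement D
])
print(pareto_front(candidates))   # -> [0 1 3]
```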
232

Developing a Decision Making Approach for District Cooling Systems Design using Multi-objective Optimization

Kamali, Aslan 29 June 2016 (has links)
Energy consumption rates have been increasing dramatically on a global scale within the last few decades. A significant driver of this increase is the recent rise in temperature levels, especially in summer, which has caused a rapid growth in air conditioning demand. Such phenomena can be clearly observed in developing countries, especially those in hot climate regions, where people depend mainly on conventional air conditioning systems. These systems often show poor performance and thus negatively impact the environment, which in turn contributes to global warming. In recent years, the demand for urban or district cooling technologies and networks has been increasing significantly as an alternative to conventional systems, due to their higher efficiency and lower ecological impact. However, obtaining an efficient design for district cooling systems is a complex task that requires considering a wide range of cooling technologies, various network layout configurations, and several energy resources to be integrated. Thus, critical decisions have to be made regarding a variety of opportunities, options and technologies. The main objective of this thesis is to develop a tool to obtain preliminary design configurations and operation patterns for district cooling energy systems by performing optimizations at a coarse level of detail, and further, to introduce a decision-making approach to help decision makers evaluate the economic aspects and environmental performance of urban cooling systems at an early design stage. Different aspects of the subject have been investigated in the literature by several researchers. A brief survey of the state of the art was carried out and revealed that mathematical programming models were the most common and successful technique for configuring and designing cooling systems for urban areas. As an outcome of the survey, multi-objective optimization models were chosen to support the decision-making process. Hence, a multi-objective optimization model has been developed to address the complicated issue of decision-making when designing a cooling system for an urban area or district. The model aims to optimize several elements of a cooling system, such as the cooling network, the cooling technologies, and the capacity and location of system equipment. In addition, various energy resources have been taken into consideration, as well as different solar technologies such as trough solar concentrators, vacuum solar collectors and PV panels. The model was developed based on the mixed integer linear programming (MILP) method and implemented in the GAMS language. Two case studies were investigated using the developed model. The first case study consists of seven buildings representing a residential district, while the second case study was a university campus district dominated by non-residential buildings. The study was carried out for several groups of scenarios investigating certain design parameters and operation conditions, such as available area, production plant location, cold storage location constraints, piping prices, investment cost, constant and variable electricity tariffs, solar energy integration policy, waste heat availability, load shifting strategies, and the effect of outdoor temperature in hot regions on the district cooling system performance. The investigation consisted of three stages, with total annual cost and CO2 emissions being the objectives of the first and second single-objective optimization stages, respectively.
The third stage was a multi-objective optimization combining the two earlier single objectives. Later on, non-dominated solutions, i.e. Pareto solutions, were generated by running several multi-objective optimization scenarios based on the decision-makers' preferences. Eventually, a decision-making approach was developed to help decision makers select a specific solution that best fits the designers' or decision makers' desires, based on the difference between the Utopia and Nadir values, i.e. the total annual cost and CO2 emissions obtained at the single-objective optimization stages. / Die Energieverbrauchsraten haben in den letzten Jahrzehnten auf globaler Ebene dramatisch zugenommen. Diese Erhöhung ist zu einem großen Teil in den jüngst hohen Temperaturniveaus, vor allem in der Sommerzeit, begründet, die einen starken Anstieg der Nachfrage nach Klimaanlagen verursachen. Solche Ereignisse sind deutlich in Entwicklungsländern zu beobachten, vor allem in heißen Klimaregionen, wo Menschen vor allem konventionelle Klimaanlagensysteme benutzen. Diese Systeme verfügen meist über eine ineffiziente Leistungsfähigkeit und wirken sich somit negativ auf die Umwelt aus, was wiederum zur globalen Erwärmung beiträgt. In den letzten Jahren ist die Nachfrage nach Stadt- oder Fernkältetechnologien und -Netzwerken als Alternative zu konventionellen Systemen aufgrund ihrer höheren Effizienz und besseren ökologischen Verträglichkeit stark gestiegen. Ein effizientes Design für Fernkühlsysteme zu erhalten, ist allerdings eine komplexe Aufgabe, die die Integration einer breiten Palette von Kühltechnologien, verschiedener Konfigurationsmöglichkeiten von Netzwerk-Layouts und unterschiedlicher Energiequellen erfordert. Hierfür ist das Treffen kritischer Entscheidungen hinsichtlich einer Vielzahl von Möglichkeiten, Optionen und Technologien unabdingbar. Das Hauptziel dieser Arbeit ist es, ein Werkzeug zu entwickeln, das vorläufige Design-Konfigurationen und Betriebsmuster für Fernkälteenergiesysteme liefert, indem ausreichend detaillierte Optimierungen durchgeführt werden. Zudem soll auch ein Ansatz zur Entscheidungsfindung vorgestellt werden, der Entscheidungsträger in einem frühen Planungsstadium bei der Bewertung städtischer Kühlungssysteme hinsichtlich der wirtschaftlichen Aspekte und Umweltleistung unterstützen soll. Unterschiedliche Aspekte dieser Problemstellung wurden in der Literatur von verschiedenen Forschern untersucht. Eine kurze Analyse des derzeitigen Stands der Technik ergab, dass mathematische Programmiermodelle die am weitesten verbreitete und erfolgreichste Methode für die Konfiguration und Gestaltung von Kühlsystemen für städtische Gebiete sind. Ein weiteres Ergebnis der Analyse war die Festlegung von Mehrzieloptimierungs-Modellen für die Unterstützung des Entscheidungsprozesses. Darauf basierend wurde im Rahmen der vorliegenden Arbeit ein Mehrzieloptimierungs-Modell für die Lösung des komplexen Entscheidungsfindungsprozesses bei der Gestaltung eines Kühlsystems für ein Stadtgebiet oder einen Bezirk entwickelt. Das Modell zielt darauf ab, mehrere Elemente des Kühlsystems zu optimieren, wie beispielsweise Kühlnetzwerke, Kühltechnologien sowie Kapazität und Lage der Systemtechnik. Zusätzlich werden verschiedene Energiequellen, auch solare wie Solarkonzentratoren, Vakuum-Solarkollektoren und PV-Module, berücksichtigt. Das Modell wurde auf Basis der gemischt-ganzzahlig linearen Optimierung (MILP) entwickelt und in der GAMS-Sprache implementiert. Zwei Fallstudien wurden mit dem entwickelten Modell untersucht.
Die erste Fallstudie besteht aus sieben Gebäuden, die ein Wohnviertel darstellen, während die zweite Fallstudie einen von Nichtwohngebäuden dominierten Universitätscampus repräsentiert. Die Untersuchung wurde für mehrere Gruppen von Szenarien durchgeführt, wobei bestimmte Designparameter und Betriebsbedingungen überprüft werden, wie zum Beispiel die zur Verfügung stehende Fläche, Lage der Kühlanlage, örtliche Restriktionen der Kältespeicherung, Rohrpreise, Investitionskosten, konstante und variable Stromtarife, Strategie zur Einbindung der Solarenergie, Verfügbarkeit von Abwärme, Strategien der Lastenverschiebung, und die Wirkung der Außentemperatur in heißen Regionen auf die Leistung des Kühlsystems. Die Untersuchung bestand aus drei Stufen, wobei die jährlichen Gesamtkosten und die CO2-Emissionen die erste und zweite Einzelzieloptimierungsstufe darstellen. Die dritte Stufe war eine Pareto-Optimierung, die die beiden ersten Ziele kombiniert. Im Anschluss wurden nicht-dominierte Lösungen, also Pareto-Lösungen, erzeugt, indem mehrere Pareto-Optimierungs-Szenarien basierend auf den Präferenzen der Entscheidungsträger abgebildet wurden. Schließlich wurde ein Ansatz zur Entscheidungsfindung entwickelt, um Entscheidungsträger bei der Auswahl einer bestimmten Lösung zu unterstützen, die am besten den Präferenzen des Planers oder des Entscheidungsträgers entspricht, basierend auf der Differenz der Utopia- und Nadir-Werte, d.h. der jährlichen Gesamtkosten und CO2-Emissionen, die Ergebnis der einzelnen Optimierungsstufen sind.
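The thesis treats total annual cost and CO2 emissions as separate objectives of a MILP solved in GAMS. As a rough, hypothetical illustration of how such a cost/emissions trade-off can be scanned, the sketch below uses scipy instead of GAMS and invented cost and emission factors for two cooling technologies serving one demand; it is a toy dispatch problem, not the thesis model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: two cooling technologies meeting a fixed cooling demand.
demand = 100.0                       # MWh of cooling to deliver
cost = np.array([55.0, 80.0])        # EUR per MWh (invented)
co2 = np.array([0.45, 0.12])         # t CO2 per MWh (invented emission factors)
cap = [(0, 80.0), (0, 80.0)]         # capacity limit per technology

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    # Weighted sum of the two single objectives; in a real study each
    # objective would first be normalised (e.g. by its single-objective optimum).
    c = w * cost + (1.0 - w) * co2
    res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=cap, method="highs")
    x = res.x
    pareto.append((float(cost @ x), float(co2 @ x)))

for total_cost, total_co2 in pareto:
    print(f"cost = {total_cost:7.1f} EUR, emissions = {total_co2:5.1f} t CO2")
```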
233

Game Theory and Microeconomic Theory for Beamforming Design in Multiple-Input Single-Output Interference Channels

Mochaourab, Rami 11 May 2012 (has links)
In interference-limited wireless networks, interference management techniques are important in order to improve the performance of the systems. Given that spectrum and energy are scarce resources in these networks, techniques that exploit the resources efficiently are desired. We consider a set of base stations operating concurrently in the same spectral band. Each base station is equipped with multiple antennas and transmits data to a single-antenna mobile user. This setting corresponds to the multiple-input single-output (MISO) interference channel (IFC). The receivers are assumed to treat interference signals as noise. Moreover, each transmitter is assumed to know the channels between itself and all receivers perfectly. We study the conflict between the transmitter-receiver pairs (links) using models from game theory and microeconomic theory. These models provide solutions to resource allocation problems which in our case correspond to the joint beamforming design at the transmitters. Our interest lies in solutions that are Pareto optimal. Pareto optimality ensures that it is not possible to further improve the performance of any link without reducing the performance of another link. Strategic games in game theory determine the noncooperative choice of strategies of the players. The outcome of a strategic game is a Nash equilibrium. While the Nash equilibrium in the MISO IFC is generally not efficient, we characterize the necessary null-shaping constraints on the strategy space of each transmitter such that the Nash equilibrium outcome is Pareto optimal. An arbitrator, which dictates the constraints at each transmitter, is involved in this setting. In contrast to strategic games, coalitional games provide cooperative solutions between the players. We study cooperation between the links via coalitional games without transferable utility. The cooperative beamforming schemes considered are either zero-forcing transmission or Wiener filter precoding. We characterize the necessary and sufficient conditions under which the core of the coalitional game with zero-forcing transmission is not empty. The core solution concept specifies the strategies with which all players have the incentive to cooperate jointly in a grand coalition. While the core only considers the formation of the grand coalition, coalition formation games study coalition dynamics. We utilize a coalition formation algorithm, called merge-and-split, to determine stable link groupings. Numerical results show that while in the low signal-to-noise ratio (SNR) regime noncooperation between the links is efficient, at high SNR all links benefit from forming a grand coalition. Coalition formation shows its significance in the mid-SNR regime, where cooperation among subsets of links provides joint performance gains. We use the models of exchange and competitive market from microeconomic theory to determine Pareto optimal equilibria in the two-user MISO IFC. In the exchange model, the links are represented as consumers that can trade goods among themselves. The goods in our setting correspond to the parameters of the beamforming vectors necessary to achieve all Pareto optimal points in the utility region. We utilize the conflict representation of the consumers in the Edgeworth box, a graphical tool that depicts the allocation of the goods for the two consumers, to provide a closed-form solution to all Pareto optimal outcomes.
The exchange equilibria are a subset of the points on the Pareto boundary at which both consumers achieve larger utility than at the Nash equilibrium. We propose a decentralized bargaining process between the consumers which starts at the Nash equilibrium and ends at an outcome arbitrarily close to an exchange equilibrium. The design of the bargaining process relies on a systematic study of the allocations in the Edgeworth box. In comparison to the exchange model, a competitive market additionally defines prices for the goods. The equilibrium in this economy is called Walrasian and corresponds to the prices that equate the demand to the supply of goods. We calculate the unique Walrasian equilibrium and propose a coordination process that is realized by the arbitrator, which distributes the Walrasian prices to the consumers. The consumers then calculate in a decentralized manner their optimal demand, corresponding to beamforming vectors that achieve the Walrasian equilibrium. This outcome is Pareto optimal and lies in the set of exchange equilibria. In this thesis, based on the game theoretic and microeconomic models, efficient beamforming strategies are proposed that jointly improve the performance of the systems. The obtained results are applicable in interference-limited wireless networks requiring either coordination from the arbitrator or direct cooperation between the transmitters.
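For the two-user MISO interference channel discussed above, a common parametrization in the literature expresses candidate beamformers as normalised combinations of maximum ratio transmission and zero-forcing vectors. The sketch below is illustrative and not taken from the thesis: it samples that parametrization on randomly generated channels and computes the achievable rate pairs; the antenna count and noise power are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, noise = 3, 1.0

# Hypothetical flat-fading channels: h[k][j] is the channel from transmitter j to receiver k.
h = [[rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx) for _ in range(2)]
     for _ in range(2)]

def unit(v):
    return v / np.linalg.norm(v)

def zf(direct, cross):
    """Beamformer matched to `direct` with zero leakage onto `cross`."""
    proj = cross * (np.vdot(cross, direct) / np.vdot(cross, cross))
    return unit(direct - proj)

rates = []
for lam1 in np.linspace(0, 1, 21):
    for lam2 in np.linspace(0, 1, 21):
        w = []
        for k, lam in ((0, lam1), (1, lam2)):
            mrt = unit(h[k][k])                 # maximum ratio transmission
            zfk = zf(h[k][k], h[1 - k][k])      # zero-forcing towards the other receiver
            w.append(unit(lam * mrt + (1 - lam) * zfk))
        sinr = []
        for k in range(2):
            sig = abs(np.vdot(h[k][k], w[k])) ** 2
            interf = abs(np.vdot(h[k][1 - k], w[1 - k])) ** 2
            sinr.append(sig / (noise + interf))
        # Plotting all (R1, R2) pairs would trace an achievable rate region.
        rates.append(tuple(np.log2(1 + s) for s in sinr))

print(max(r[0] + r[1] for r in rates))   # best sum rate over the sampled beamformers
```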
234

The economic and environmental impacts of transportation decisions : A multi-objective optimization / De ekonomiska och miljömässiga effekterna av transportbeslut : En multi-objektiv optimering

Eliasson, Joel, Segevall, Arvid January 2022 (has links)
Getinge AB is a global medical technology company. This master's thesis is based on the outflow of capital equipment from Getinge's factory in Växjö to four different sales and service units. The purpose of this thesis is to give Getinge a deeper insight into why the customers and its own organization do not know when they can expect their products. This makes most requests urgent and thus prevents the use of the most environmentally and cost efficient modes of transportation. Two sub-problems have been created in order to investigate this. Sub-problem 1 originates from an organizational perspective. The aim of this problem is to examine the possibilities to achieve less urgent transportations by improving the communication between sales and service units, factories and logistics services. This is evaluated based on semi-structured interviews containing both qualitative and quantitative questions with employees representing the different functions at the company. It appeared that different phrases, explaining the same thing, were used internally, leading to confusion. Further, the different functions have harmonized follow-up sessions but do not share the information between each other. The resulting information vacuum creates trust issues and unnecessary time margins and buffers. Sub-problem 2 concerns the trade-off between the economic and environmental impacts in relation to the Greenhouse Gas Protocol Scope 3. This trade-off is evaluated by a multi-objective optimization model, where emissions are priced based on the EU ETS market valuation. Current research argues that the choice of transportation mode is the simplest emissions abatement option in terms of implementation. This study indicates that it is possible for Getinge, in the short term, to decrease costs and emissions by just changing between current transportation modes. However, a long-term strategy should include evaluation of consolidations, alternative fuels and electrified vehicles, since the cost of decreasing one kilogram of emissions by changing between current transportation modes will increase. Finally, increased transparency and communication between sales and service units, factory and logistics services could be achieved via a one-point-of-contact solution. This could avoid unnecessary time margins and buffers and hence open up the possibility of better overall lead-time utilization. This could make it easier to use more environmentally friendly transportation modes and thus lower emissions and costs, while still satisfying the customers.
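One way to picture the priced-emissions trade-off described above is to add an assumed CO2 price to each transport mode's cost and pick the cheapest mode that still meets the lead time. The sketch below uses entirely invented mode data and an illustrative emission price, not a quoted EU ETS figure, and is not Getinge's model.

```python
# Hypothetical mode data per shipment: cost in EUR, emissions in kg CO2e, transit time in days.
modes = {
    "air":  {"cost": 1400.0, "co2": 520.0, "days": 2},
    "road": {"cost":  650.0, "co2": 180.0, "days": 5},
    "sea":  {"cost":  420.0, "co2":  35.0, "days": 28},
}
ets_price_per_kg = 0.085   # assumed EUR per kg CO2e (illustrative only)
lead_time_limit = 10       # days the receiving unit can actually wait

def best_mode(limit_days: int) -> str:
    """Cheapest mode, with emissions monetised, among modes fast enough."""
    feasible = {m: d for m, d in modes.items() if d["days"] <= limit_days}
    return min(feasible, key=lambda m: feasible[m]["cost"] + ets_price_per_kg * feasible[m]["co2"])

print(best_mode(3))                 # urgent request -> "air"
print(best_mode(lead_time_limit))   # relaxed lead time -> "road"
```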
235

Robust portfolio optimization with Expected Shortfall / Robust portföljoptimering med ES

Isaksson, Daniel January 2016 (has links)
This thesis project studies robust portfolio optimization with Expected Shortfall applied to a reference portfolio consisting of Swedish linear assets with stocks and a bond index. Specifically, the classical robust optimization definition, focusing on uncertainties in parameters, is extended to also include uncertainties in the log-return distribution. My contribution to the robust optimization community is to study portfolio optimization with Expected Shortfall with log-returns modeled by either elliptical distributions or by a normal copula with asymmetric marginal distributions. The robust optimization problem is solved with worst-case parameters from box and ellipsoidal uncertainty sets constructed from historical data and may be used when an investor has a more conservative view on the market than history suggests. With elliptically distributed log-returns, the optimization problem is equivalent to Markowitz mean-variance optimization, connected through the risk aversion coefficient. The results show that the optimal holding vector is almost independent of the elliptical distribution used to model log-returns, while Expected Shortfall is strongly dependent on the elliptical distribution, with higher Expected Shortfall as a result of fatter distribution tails. To model the tails of the log-returns asymmetrically, generalized Pareto distributions are used together with a normal copula to capture multivariate dependence. In this case, the optimization problem is not equivalent to Markowitz mean-variance optimization and the advantages of using Expected Shortfall as risk measure are utilized. With the asymmetric log-return model there is a noticeable difference in the optimal holding vector compared to the elliptically distributed model. Furthermore, the Expected Shortfall increases, which follows from better modeled distribution tails. The general conclusion in this thesis project is that portfolio optimization with Expected Shortfall is an important problem, being advantageous over the Markowitz mean-variance optimization problem when log-returns are modeled with asymmetric distributions. The major drawback of portfolio optimization with Expected Shortfall is that it is a simulation based optimization problem introducing statistical uncertainty, and if the log-returns are drawn from a copula the simulation process involves more steps which potentially can make the program slower than drawing from an elliptical distribution. Thus, portfolio optimization with Expected Shortfall is appropriate to employ when trades are made on a daily basis. / Examensarbetet behandlar robust portföljoptimering med Expected Shortfall tillämpad på en referensportfölj bestående av svenska linjära tillgångar med aktier och ett obligationsindex. Specifikt så utvidgas den klassiska definitionen av robust optimering som fokuserar på parameterosäkerhet till att även inkludera osäkerhet i log-avkastningsfördelning. Mitt bidrag till den robusta optimeringslitteraturen är att studera portföljoptimering med Expected Shortfall med log-avkastningar modellerade med antingen elliptiska fördelningar eller med en normal-copula med asymmetriska marginalfördelningar. Det robusta optimeringsproblemet löses med värsta tänkbara scenario-parametrar från box- och ellipsoid-osäkerhetsset konstruerade från historiska data och kan användas när investeraren har en mer konservativ syn på marknaden än vad den historiska datan föreslår.
Med elliptiskt fördelade log-avkastningar är optimeringsproblemet ekvivalent med Markowitz väntevärde-varians optimering, kopplade med riskaversionskoefficienten. Resultaten visar att den optimala viktvektorn är nästan oberoende av vilken elliptisk fördelning som används för att modellera log-avkastningar, medan Expected Shortfall är starkt beroende av elliptisk fördelning med högre Expected Shortfall som resultat av fetare fördelningssvansar. För att modellera svansarna till log-avkastningsfördelningen asymmetriskt används generaliserade Paretofördelningar tillsammans med en normal-copula för att fånga det multivariata beroendet. I det här fallet är optimeringsproblemet inte ekvivalent till Markowitz väntevärde-varians optimering och fördelarna med att använda Expected Shortfall som riskmått används. Med asymmetrisk log-avkastningsmodell uppstår märkbara skillnader i optimala viktvektorn jämfört med elliptiska fördelningsmodeller. Därutöver ökar Expected Shortfall, vilket följer av bättre modellerade fördelningssvansar. De generella slutsatserna i examensarbetet är att portföljoptimering med Expected Shortfall är ett viktigt problem som är fördelaktigt över Markowitz väntevärde-varians optimering när log-avkastningar är modellerade med asymmetriska fördelningar. Den största nackdelen med portföljoptimering med Expected Shortfall är att det är ett simuleringsbaserat optimeringsproblem som introducerar statistisk osäkerhet, och om log-avkastningar dras från en copula så involverar simuleringsprocessen flera steg som potentiellt kan göra programmet långsammare än att dra från en elliptisk fördelning. Därför är portföljoptimering med Expected Shortfall lämpligt att använda när handel sker på daglig basis.
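As a small, self-contained illustration of the risk measure used throughout the thesis (not the thesis' optimization itself), the following sketch estimates Value-at-Risk and Expected Shortfall empirically from simulated fat-tailed log-returns; the portfolio weights, return scales and Student-t tail index are assumptions.

```python
import numpy as np

def expected_shortfall(losses: np.ndarray, alpha: float = 0.99) -> tuple[float, float]:
    """Empirical VaR and ES at level alpha from simulated portfolio losses."""
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()   # average loss beyond the VaR quantile
    return float(var), float(es)

rng = np.random.default_rng(1)
# Hypothetical portfolio: Student-t log-returns give fatter tails than a normal model.
weights = np.array([0.6, 0.4])                                   # stock index and bond index
returns = rng.standard_t(df=4, size=(100_000, 2)) * np.array([0.012, 0.004])
losses = -(returns @ weights)                                    # one-day portfolio losses

print("VaR and ES at 99%:", expected_shortfall(losses, 0.99))
```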
236

Antenna Optimization in Long-Term Evolution Networks

Deng, Qichen January 2013 (has links)
The aim of this master thesis is to study algorithms for automatically tuning antenna parameters to improve the performance of the radio access part of a telecommunication network and the user experience. Four different optimization algorithms, the Stepwise Minimization Algorithm, the Random Search Algorithm, the Modified Steepest Descent Algorithm and a Multi-Objective Genetic Algorithm, are applied to a model of a radio access network. The performance of all algorithms is evaluated in this thesis. Moreover, a graphical user interface which was developed to facilitate the antenna tuning simulations is also presented in the appendix of the report.
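Of the four algorithms, random search is the simplest to sketch. The following generic implementation is illustrative rather than the thesis' code: it searches over antenna parameters such as per-cell tilt against a black-box performance objective, and the bounds and stand-in objective are invented.

```python
import random

def random_search(objective, bounds, n_iter=200, seed=0):
    """Generic random search: sample parameter vectors and keep the best one.

    `bounds` is a list of (low, high) pairs, one per antenna parameter
    (e.g. electrical tilt per cell); `objective` is the black-box network
    performance metric returned by the radio access network model.
    """
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = objective(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical stand-in for the network model: reward tilts close to 6 degrees.
demo_objective = lambda tilts: -sum((t - 6.0) ** 2 for t in tilts)
print(random_search(demo_objective, bounds=[(0.0, 12.0)] * 3))
```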
237

Desarrollo de una metodología para la selección de lazos de control en sistemas multivariables mediante técnicas de optimización multiobjetivo

Huilcapi Subia, Victor 30 March 2021 (has links)
[ES] El control descentralizado de sistemas multivariables es una tarea compleja y su eficiencia depende principalmente de la selección adecuada de sus lazos de control. Por lo general, para seleccionar estos lazos de control se calculan medidas de interacción entre sus variables. Las metodologías clásicas que se han desarrollado para este propósito pueden dar resultados divergentes (en cuanto a los lazos de control a establecer). Esto es debido, entre otras cosas, a que miden las interacciones entre las variables del sistema de diferentes maneras. Además, normalmente no incorporan en el proceso de selección de lazos de control la sintonización de sus controladores. En esta tesis se ha desarrollado una metodología para seleccionar lazos de control óptimos en sistemas multivariables usando un enfoque de optimización multiobjetivo. La metodología analiza el problema de selección óptima de lazos de control y sintonización óptima de las estructuras de control en un marco de trabajo unificado. La metodología permite analizar las características de cada combinación de lazos de control de manera detallada comparando sus desempeños de forma global, lo cual permite a un diseñador tener información relevante para tomar decisiones adecuadas para controlar eficientemente un proceso multivariable. En la metodología propuesta se muestra como las preferencias del diseñador juegan un papel muy importante en la selección de los lazos de control en un sistema multivariable. En esta tesis se aplica la nueva metodología propuesta a varios problemas de ingeniería de control tanto lineales como no lineales. En estos ejemplos se compara la metodología propuesta con las metodologías clásicas de selección de lazos de control más usadas. Esto ha permitido revelar información valiosa sobre el control descentralizado de sistemas multivariables que no hubiese sido factible obtener con las metodologías tradicionales. / [CA] El control descentralitzat de sistemes multivariables és una tasca complexa i la seua eficiència depén principalment de la selecció adequada dels seus llaços de control. En general per a seleccionar aquests llaços de control es calculen mesures d'interacció entre les seues variables. Les metodologies clàssiques que s'han desenvolupat per a aquest propòsit poden donar resultats divergents (quant als llaços de control a establir). Això és degut, entre altres coses, al fet que mesuren les interaccions entre les variables del sistema de diferents maneres. A més normalment no incorporen en el procés de selecció de llaços de control la sintonització dels seus controladors. En aquesta tesi s'ha desenvolupat una metodologia per a seleccionar llaços de control òptims en sistemes multivariables usant un enfocament d'optimització multi-objectiu. La metodologia analitza el problema de selecció òptima de llaços de control i sintonització òptima de les estructures de control en un marc de treball unificat. La metodologia permet analitzar les característiques de cada combinació de llaços de control de manera detallada comparant els seus acompliments de manera global, la qual cosa permet a un dissenyador tindre informació rellevant per a prendre decisions adequades per a controlar eficientment un procés multivariable. En la metodologia proposada es mostra com les preferències del dissenyador tenen un rol molt important en la selecció dels llaços de control en un sistema multivariable. En aquesta tesi s'aplica la nova metodologia proposada a diversos problemes d'enginyeria de control tant lineals com no lineals. 
En aquests exemples es compara la metodologia proposada amb les metodologies clàssiques de selecció de llaços de control més usades. Això ha permés revelar informació valuosa sobre el control descentralitzat de sistemes multivariables que no haguera sigut factible obtindre amb les metodologies tradicionals. / [EN] Decentralized control of multivariable systems is a complex problem and its efficiency depends mainly on the suitable selection of its control loops (input-output pairings). In general, to select these control loops, measures of interaction between their variables are calculated. The classical methodologies that have been developed for this purpose can give divergent results (in terms of the type of loop pairing to choose). This is because they generally analyze the loop pairing selection and controller tuning independently and optimize a single objective. In this thesis a methodology to select optimal input-output pairings in multivariable systems using a multiobjective optimization approach has been developed. The methodology analyzes the problem of optimal selection of control loops and optimal tuning of control structures in a unified framework. The methodology allows a detailed analysis of the characteristics of each control loop, globally comparing their performance, which allows a designer to have relevant information to make adequate decisions to efficiently control a multivariable process. The proposed methodology shows how the designer's preferences have a very important role in the selection of an input-output pairing in a multivariable system. In this thesis, the new proposed methodology is applied to various control engineering problems, both linear and non-linear. In these examples, the proposed methodology is compared with the classical methodologies for the selection of input-output pairings most used for the control of multivariable systems. This has revealed valuable information on the decentralized control of multivariable systems that would not have been feasible to obtain with traditional methodologies. / Este trabajo ha sido subvencionado por la Universidad Politécnica Salesiana (UPS) a través del convenio CB-755-2015 / Huilcapi Subia, V. (2021). Desarrollo de una metodología para la selección de lazos de control en sistemas multivariables mediante técnicas de optimización multiobjetivo [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/165014 / TESIS
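The classical methodologies referred to above compute interaction measures between inputs and outputs; the relative gain array (RGA) is one widely used example. The sketch below computes the RGA of a hypothetical 2x2 steady-state gain matrix; it illustrates the classical starting point the thesis compares against, not the proposed multi-objective procedure.

```python
import numpy as np

def relative_gain_array(G: np.ndarray) -> np.ndarray:
    """RGA of a square steady-state gain matrix: Lambda = G .* (G^-1)^T."""
    return G * np.linalg.inv(G).T

# Hypothetical 2x2 steady-state gain matrix of a multivariable process.
G = np.array([[2.0, 1.5],
              [1.0, 3.0]])
print(relative_gain_array(G))
# Classical rules prefer pairings whose relative gains are positive and close to 1;
# the thesis instead compares whole loop combinations after tuning their
# controllers in a multi-objective setting.
```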
238

Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet

Jakobsson, Eric, Åhlgren, Thor January 2022 (has links)
This thesis investigates applying the semiparametric method Peaks-Over-Threshold on data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve a faster convergence than a Monte Carlo simulation when assessing extreme events that symbolise the worst outcomes of a financial portfolio. Achieving a faster convergence will enable a reduction of iterations in the Monte Carlo simulation, thus enabling a more efficient way of estimating risk measures for the portfolio manager. The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three different portfolios with different defining characteristics. In Part I an analysis of selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold is compared to the Monte Carlo simulation with 10,000 iterations, using a simulation of 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected. Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than a Monte Carlo simulation for Value-at-Risk with 10,000 iterations. The results for Expected Shortfall did not achieve a clear improvement in terms of precision, but they did show improvement in terms of accuracy. Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The result therefore aligned well with theory: the rarer the event considered, the better the Peaks-Over-Threshold method performed. In conclusion, the method of applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the convergence of a Monte Carlo simulation. The result is however dependent on the rarity of the event of interest, and the level of precision/accuracy required. / Det här examensarbetet tillämpar metoden Peaks-Over-Threshold på data genererat från en Monte Carlo simulering för att estimera de finansiella riskmåtten Value-at-Risk och Expected Shortfall. Målet med arbetet är att uppnå en snabbare konvergens jämfört med en Monte Carlo simulering när intresset är s.k. extrema händelser som symboliserar de värsta utfallen för en finansiell portfölj. Uppnås en snabbare konvergens kan antalet iterationer i simuleringen minskas, vilket möjliggör ett mer effektivt sätt att estimera riskmåtten för portföljförvaltaren. Den finansiella portföljen består av amerikanska livförsäkringskontrakt som har erbjudits på andrahandsmarknaden, insamlat av vår partner RessCapital. Metoden utvärderas på tre olika portföljer med olika karaktär. I Del I så utförs en analys för att välja en optimal tröskel för Peaks-Over-Threshold. Noggrannheten och precisionen för Peaks-Over-Threshold jämförs med en Monte Carlo simulering med 10,000 iterationer, där en Monte Carlo simulering med 100,000 iterationer används som referensvärde. Beroende på riskmått samt vilken percentil som är av intresse så väljs olika trösklar. I Del II presenteras resultaten med de "optimalt" valda trösklarna från Del I. Peaks-Over-Threshold påvisade signifikant bättre resultat för Value-at-Risk jämfört med Monte Carlo simuleringen med 10,000 iterationer.
Resultaten för Expected Shortfall påvisade inte en signifikant förbättring sett till precision, men visade förbättring sett till noggrannhet.  För både Value-at-Risk och Expected Shortfall uppnådde Peaks-Over-Threshold en större felminskning vid 99.5:e percentilen jämfört med den 99:e. Resultaten var därför i linje med de teoretiska förväntningarna då en högre percentil motsvarar ett extremare event.  Sammanfattningsvis så kan metoden Peaks-Over-Threshold vara användbar när det kommer till att minska antalet iterationer i en Monte Carlo simulering då resultatet visade att Peaks-Over-Threshold appliceringen accelererar Monte Carlon simuleringens konvergens. Resultatet är dock starkt beroende av det undersökta eventets sannolikhet, samt precision- och noggrannhetskravet.
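The core of the Peaks-Over-Threshold step can be sketched as fitting a generalized Pareto distribution to the exceedances over a chosen threshold and then extrapolating VaR and ES with the standard tail formulas. The example below is illustrative: Student-t draws stand in for the Monte Carlo portfolio losses, the 95th-percentile threshold is arbitrary, and this is not RessCapital's model.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, u, alpha=0.995):
    """POT estimates of VaR and ES at level alpha, given threshold u.

    A generalized Pareto distribution is fitted to the exceedances over u;
    the standard tail formulas then extrapolate beyond the simulated sample.
    Assumes the fitted shape satisfies 0 < xi < 1 so the ES formula is finite.
    """
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0.0)   # shape, (fixed) location, scale
    p_u = exceed.size / losses.size                 # empirical P(loss > u)
    var = u + beta / xi * (((1.0 - alpha) / p_u) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var, es

rng = np.random.default_rng(2)
losses = rng.standard_t(df=3, size=10_000)      # stand-in for Monte Carlo portfolio losses
threshold = np.quantile(losses, 0.95)           # e.g. the empirical 95th percentile
print(pot_var_es(losses, threshold, alpha=0.995))
```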
239

On the calibration of Lévy option pricing models / Izak Jacobus Henning Visagie

Visagie, Izak Jacobus Henning January 2015 (has links)
In this thesis we consider the calibration of models based on Lévy processes to option prices observed in some market. This means that we choose the parameters of the option pricing models such that the prices calculated using the models correspond as closely as possible to these option prices. We demonstrate the ability of relatively simple Lévy option pricing models to nearly perfectly replicate option prices observed in financial markets. We specifically consider calibrating option pricing models to barrier option prices and we demonstrate that the option prices obtained under one model can be very accurately replicated using another. Various types of calibration are considered in the thesis. We calibrate a wide range of Lévy option pricing models to option price data. We consider exponential Lévy models under which the log-return process of the stock is assumed to follow a Lévy process. We also consider linear Lévy models; under these models the stock price itself follows a Lévy process. Further, we consider time changed models. Under these models time does not pass at a constant rate, but follows some non-decreasing Lévy process. We model the passage of time using the lognormal, Pareto and gamma processes. In the context of time changed models we consider linear as well as exponential models. The normal inverse Gaussian (NIG) model plays an important role in the thesis. The numerical problems associated with the NIG distribution are explored and we propose ways of circumventing these problems. Parameter estimation for this distribution is discussed in detail. Changes of measure play a central role in option pricing. We discuss two well-known changes of measure: the Esscher transform and the mean correcting martingale measure. We also propose a generalisation of the latter and we consider the use of the resulting measure in the calculation of arbitrage free option prices under exponential Lévy models. / PhD (Risk Analysis), North-West University, Potchefstroom Campus, 2015
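Parameter estimation for the NIG distribution mentioned above can be sketched with scipy's norminvgauss, whose (a, b, loc, scale) parametrization differs from the usual (alpha, beta, delta, mu) one. The example below fits the distribution to simulated log-returns by maximum likelihood; the "true" parameter values are invented, and this is a fit to returns, not the thesis' calibration to option prices.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(3)

# Hypothetical "true" NIG log-return distribution in scipy's (a, b, loc, scale) parametrization.
true = dict(a=2.0, b=-0.3, loc=0.0005, scale=0.01)
sample = norminvgauss.rvs(size=10_000, random_state=rng, **true)

# Maximum-likelihood fit of the NIG parameters to the simulated log-returns.
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(sample)
print(f"a={a_hat:.3f}  b={b_hat:.3f}  loc={loc_hat:.5f}  scale={scale_hat:.5f}")
```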
