191 |
Analyse multicritère des politiques publiques environnementales dans l'Union Européenne / Multidimensional Analysis of Environmental Public Policies in the European Union. Isbasoiu, Ancuta, 01 July 2019.
The European Union has an ambitious agenda for dealing with the effects of climate change, and the European institutions must now take the environment into account in the framework of their policies. The objective of my thesis is to evaluate the impacts of European public policies on agriculture and the environment, to measure their cross effects, and to assess the value of better coordination of these policies. The thesis aims to enrich the economic analysis of important issues centred on the reduction of agricultural greenhouse gas (GHG) emissions in the EU and on the level of agricultural production, from a quantitative perspective. The methodology is based on a mathematical programming model that simulates European agricultural supply (AROPAj), using data from the Farm Accountancy Data Network. The analysis is carried out at several levels, European, national, regional and sub-regional, taking into account the variability of the economic context that characterizes European agriculture over the six years 2007-2012.
We first assess how agriculture may contribute to the mitigation of EU GHG emissions and provide a detailed analysis of marginal abatement cost curves. The results show that, on average over the period 2007-2012, EU agriculture may reduce its emissions by around 10%, 20% and 30%, respectively, for emission prices of 38, 112.5 and 205 EUR/tCO2eq. We show that agriculture may offer substantial mitigation and that mitigation costs and potential vary substantially over time and across space. The second issue studied concerns the compatibility between increasing agricultural production and reducing the impact of agriculture on the environment. By introducing a primal approach (via a carbon price) and a dual approach (via a calorie target), we show that GHG emissions can be reduced and agricultural supply modified while increasing the quantity of food calories produced. We then extend the analysis of GHG emissions by dissociating the prices of the two gases (CH4 and N2O). A differentiated price system makes it possible to better tailor climate regulation policy to the time horizon considered, offering flexibility in reducing emission abatement costs.
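As a hedged illustration of how an emission price maps onto abatement along a marginal abatement cost (MAC) curve (a deliberately simple sketch, not the AROPAj supply model used in the thesis), the code below adopts every abatement measure whose unit cost lies below the emission price. The measures, potentials, costs and baseline are invented for illustration.

```python
# Illustrative abatement measures: (name, potential in MtCO2eq, unit cost in EUR/tCO2eq) -- all assumed
measures = [
    ("reduced N fertilisation",  8.0,  15.0),
    ("cover crops",              6.0,  40.0),
    ("feed additives",           5.0,  90.0),
    ("manure management",        4.0, 130.0),
    ("land-use reallocation",    7.0, 190.0),
]
baseline = 100.0   # assumed baseline emissions, MtCO2eq

def abatement_at_price(p):
    """Measures whose marginal cost lies below the emission price are adopted."""
    return sum(pot for _, pot, cost in measures if cost <= p)

for p in [38, 112.5, 205]:
    a = abatement_at_price(p)
    print(f"price {p:6.1f} EUR/tCO2eq -> abatement {a:4.1f} MtCO2eq "
          f"({100 * a / baseline:.0f}% of baseline)")
```

Sweeping the price over a fine grid and plotting abatement against price would trace the step-shaped MAC curve that a richer supply model smooths out.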
|
192 |
平行疊代法解互補問題 / Parallel Iterative Methods for Solving Complementarity Problems. Zhang, Tai-Sheng (張泰生), Unknown Date.
This thesis studies and develops parallel iterative methods for solving complementarity problems in mathematical programming. Complementarity problems arise from applications in fields such as national defence, engineering economics and management science, and the continual innovation of supercomputers and parallel computers in recent years has made it increasingly important to develop parallel algorithms that use these machines fully and effectively to solve large-scale scientific computing problems.
In this thesis we treat linear complementarity problems and nonlinear complementarity problems in turn. We first develop a semi-asynchronous method for solving linear complementarity problems. Its distinguishing feature is that it greatly reduces the overhead caused by processor idling in synchronous methods, while at the same time relaxing the restrictions that asynchronous methods impose on the problem, thereby widening the range of problems to which the semi-asynchronous method can be applied. We also establish the theoretical basis for the convergence of this method. Moreover, the study of linear complementarity problems serves as the foundation for the further study of nonlinear complementarity problems.
Next, we propose a unified framework for examining parallel Newton methods and their variations for solving various nonlinear complementarity problems, and we compare and study the characteristics, limitations and computational efficiency of the different methods.
We then implement and simulate the parallel computation of each of the above algorithms on the IBM 3090 at the Ministry of Education's Computer Centre and, through extensive experimental testing, obtain concrete numerical results with which to examine their efficiency and to compare the applicability, strengths and weaknesses of the methods. Finally, we also raise some related problems as a reference for future research.
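As a hedged illustration of the kind of iteration such parallel methods build on (a generic textbook scheme, not the thesis's semi-asynchronous algorithm), the sketch below applies projected Jacobi to the linear complementarity problem LCP(M, q): find z >= 0 with w = Mz + q >= 0 and z·w = 0. Every component update depends only on the previous iterate, so all n updates could run on separate processors; convergence holds, for example, when M is strictly row diagonally dominant with a positive diagonal, which the test matrix is constructed to satisfy.

```python
import numpy as np

def projected_jacobi_lcp(M, q, tol=1e-10, max_iter=10_000):
    """Solve LCP(M, q): z >= 0, Mz + q >= 0, z.(Mz + q) = 0, by projected Jacobi.

    Every component update uses only the previous iterate, so the n updates
    of one sweep are independent and could be assigned to separate processors.
    """
    d = np.diag(M).copy()
    assert np.all(d > 0), "positive diagonal assumed"
    z = np.zeros(len(q))
    for _ in range(max_iter):
        z_new = np.maximum(0.0, z - (M @ z + q) / d)   # all components at once
        if np.linalg.norm(z_new - z, ord=np.inf) < tol:
            return z_new
        z = z_new
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    B = rng.standard_normal((n, n))
    # strictly row diagonally dominant matrix with positive diagonal (illustrative)
    M = B + np.diag(np.abs(B).sum(axis=1) + 1.0)
    q = rng.standard_normal(n)
    z = projected_jacobi_lcp(M, q)
    w = M @ z + q
    print("min z:", z.min(), " min w:", w.min(), " z.w:", float(z @ w))
```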
|
193 |
Biomimetic and autonomic server ensemble orchestration. Nakrani, Sunil, January 2005.
This thesis addresses orchestration of servers amongst multiple co-hosted internet services such as e-Banking, e-Auction and e-Retail in hosting centres. The hosting paradigm entails levying fees for hosting third-party internet services on servers at guaranteed levels of service performance. The orchestration of the server ensemble in hosting centres is considered in the context of maximising the hosting centre's revenue over a lengthy time horizon. The inspiration for the server orchestration approach proposed in this thesis is drawn from nature and is generally classed as swarm intelligence: specifically, the sophisticated collective behaviour of social insects, borne out of primitive interactions amongst members of the group, that solves problems beyond the capability of individual members. Consequently, the approach is self-organising, adaptive and robust. A new scheme for server ensemble orchestration is introduced in this thesis. This scheme exploits the many similarities between server orchestration in an internet hosting centre and forager allocation in a honeybee (Apis mellifera) colony. The scheme mimics the way a honeybee colony distributes foragers amongst flower patches to maximise nectar influx, in order to orchestrate servers amongst hosted internet services to maximise revenue. The scheme is extended by further exploiting inherent feedback loops within the colony to introduce self-tuning and energy-aware server ensemble orchestration. In order to evaluate the new server ensemble orchestration scheme, a collection of server ensemble orchestration methods is developed, including a classical technique that relies on past history to make time-varying orchestration decisions and two theoretical techniques that omnisciently make optimal time-varying orchestration decisions or an optimal static orchestration decision based on complete knowledge of the future. The efficacy of the new biomimetic scheme is assessed in terms of adaptiveness and versatility. The performance study uses representative classes of internet traffic stream behaviour, service users' behaviour, demand intensity, multiple-service co-hosting, as well as differentiated hosting fee schedules. The biomimetic orchestration scheme is compared with the classical and the theoretical optimal orchestration techniques in terms of revenue stream. This study reveals that the new server ensemble orchestration approach is adaptive in widely varying external internet environments. The study also highlights the versatility of the biomimetic approach over the classical technique. The self-tuning scheme improves on the original performance. The energy-aware scheme is able to conserve significant energy with minimal revenue performance degradation. The simulation results also indicate that the new scheme is competitive with or better than the classical and static methods.
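As a loose, hedged illustration of the forager-allocation analogy (invented here for exposition; it is not the thesis's algorithm), the sketch below lets each server periodically abandon its current service and be "recruited" to a service with probability proportional to that service's revenue rate, roughly mirroring how waggle-dance advertisements bias foragers towards profitable flower patches. The service names, revenue rates and switching probability are all assumptions.

```python
import random

random.seed(1)

services = ["e-Banking", "e-Auction", "e-Retail"]
revenue_rate = {"e-Banking": 0.9, "e-Auction": 0.4, "e-Retail": 1.6}  # assumed profit per server per round
servers = {s: 10 for s in services}                                   # 30 servers, initially spread evenly
P_SWITCH = 0.15                                                       # chance per round that a server reconsiders

def step():
    """One decision round: some servers abandon their service and follow an 'advertisement'."""
    total = sum(revenue_rate.values())
    leaving = {s: sum(1 for _ in range(servers[s]) if random.random() < P_SWITCH)
               for s in services}
    for s in services:
        servers[s] -= leaving[s]
    for _ in range(sum(leaving.values())):
        # recruitment: join a service with probability proportional to its revenue rate
        r, acc = random.random() * total, 0.0
        for t in services:
            acc += revenue_rate[t]
            if r <= acc:
                servers[t] += 1
                break

for _ in range(50):
    step()
print(servers)  # in expectation, the allocation drifts towards the higher-revenue services
```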
|
194 |
Conservative decision-making and inference in uncertain dynamical systems. Calliess, Jan-Peter, January 2014.
The demand for automated decision making, learning and inference in uncertain, risk sensitive and dynamically changing situations presents a challenge: to design computational approaches that promise to be widely deployable and flexible to adapt on the one hand, while offering reliable guarantees on safety on the other. The tension between these desiderata has created a gap that, in spite of intensive research and contributions made from a wide range of communities, remains to be filled. This represents an intriguing challenge that provided motivation for much of the work presented in this thesis. With these desiderata in mind, this thesis makes a number of contributions towards the development of algorithms for automated decision-making and inference under uncertainty. To facilitate inference over unobserved effects of actions, we develop machine learning approaches that are suitable for the construction of models over dynamical laws that provide uncertainty bounds around their predictions. As an example application for conservative decision-making, we apply our learning and inference methods to control in uncertain dynamical systems. Owing to the uncertainty bounds, we can derive performance guarantees of the resulting learning-based controllers. Furthermore, our simulations demonstrate that the resulting decision-making algorithms are effective in learning and controlling under uncertain dynamics and can outperform alternative methods. Another set of contributions is made in multi-agent decision-making which we cast in the general framework of optimisation with interaction constraints. The constraints necessitate coordination, for which we develop several methods. As a particularly challenging application domain, our exposition focusses on collision avoidance. Here we consider coordination both in discrete-time and continuous-time dynamical systems. In the continuous-time case, inference is required to ensure that decisions are made that avoid collisions with adjustably high certainty even when computation is inevitably finite. In both discrete-time and finite-time settings, we introduce conservative decision-making. That is, even with finite computation, a coordination outcome is guaranteed to satisfy collision-avoidance constraints with adjustably high confidence relative to the current uncertain model. Our methods are illustrated in simulations in the context of collision avoidance in graphs, multi-commodity flow problems, distributed stochastic model-predictive control, as well as in collision-prediction and avoidance in stochastic differential systems. Finally, we provide an example of how to combine some of our different methods into a multi-agent predictive controller that coordinates learning agents with uncertain beliefs over their dynamics. Utilising the guarantees established for our learning algorithms, the resulting mechanism can provide collision avoidance guarantees relative to the a posteriori epistemic beliefs over the agents' dynamics.
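A minimal sketch of the kind of uncertainty-aware model such approaches rest on: Gaussian process regression over an unknown function (standing in for a dynamical law), returning a predictive mean together with a confidence band that a conservative controller could treat as the set of plausible outcomes. The kernel, noise level and target function below are illustrative assumptions, not the models used in the thesis.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-(a - b)^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Unknown one-dimensional "dynamics" f, observed with noise (illustrative)
f = lambda x: np.sin(3 * x) + 0.5 * x
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=15)
noise = 0.05
y = f(X) + noise * rng.standard_normal(X.shape)

# GP posterior mean and variance at test inputs (standard Cholesky-based formulas)
Xs = np.linspace(-2.5, 2.5, 9)
K = rbf(X, X) + noise**2 * np.eye(len(X))
Ks, Kss = rbf(Xs, X), rbf(Xs, Xs)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.maximum(np.diag(Kss - v.T @ v), 0.0)
lower, upper = mean - 2 * np.sqrt(var), mean + 2 * np.sqrt(var)   # ~95% band

for x, lo_, mu, hi in zip(Xs, lower, mean, upper):
    print(f"x={x:+.2f}  predicted f(x) in [{lo_:+.3f}, {hi:+.3f}]  (mean {mu:+.3f})")
```

A conservative decision rule would then act only on outcomes that satisfy the constraints for every value inside such a band, rather than for the mean prediction alone.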
|
195 |
A novel approach for the development of policies for socio-technical systems. Taeihagh, Araz, January 2011.
The growth in the interdependence and complexity of socio-technical systems requires the development of tools and techniques to aid in the formulation of better policies. This research focuses on developing methodologies and support tools for better policy design and formulation. In this thesis, a new framework and a systematic approach for the formulation of policies are proposed. Focus is directed at the interactions between policy measures, inspired by concepts in process design and network analysis. Furthermore, we have developed an agent-based approach to create a virtual environment for the exploration and analysis of different configurations of policy measures in order to build policy packages and test the effects of changes and uncertainties while formulating policies. By developing systematic approaches for the formulation and analysis of policies, it is possible to analyse different configuration alternatives in greater depth, examine more alternatives and decrease the time required for the overall analysis. Moreover, it is possible to provide real-time assessment and feedback to the domain experts on the effect of changes in the configurations. These efforts ultimately help in forming more effective policies with synergistic and reinforcing attributes while avoiding internal contradictions. This research constitutes the first step towards the development of a general family of computer-based systems that support the design of policies. The results from this research also demonstrate the usefulness of computational approaches in addressing the complexity inherent in the formulation of policies. As a proof of concept, the proposed framework and methodologies have been applied to the formulation of policies that deal with transportation issues and emission reduction; the approach can be extended to other domains.
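As a toy, hedged illustration of the network view of policy-measure interactions described above (not the thesis's framework), the sketch below represents measures as nodes, marks assumed synergies and contradictions as signed pairwise weights, and scores candidate packages by the interactions they activate. The measure names and weights are invented.

```python
import itertools

# Illustrative policy measures for urban transport emissions (assumed)
measures = ["congestion_charge", "bus_priority_lanes", "parking_expansion",
            "cycle_network", "ev_subsidy"]

# Pairwise interactions: +1 synergy, -1 contradiction, absent = neutral (assumed)
interaction = {
    ("congestion_charge", "bus_priority_lanes"): +1,
    ("congestion_charge", "cycle_network"):      +1,
    ("bus_priority_lanes", "cycle_network"):     +1,
    ("parking_expansion", "congestion_charge"):  -1,
    ("parking_expansion", "cycle_network"):      -1,
    ("ev_subsidy", "congestion_charge"):         +1,
}

def score(package):
    """Sum of interaction weights activated by a candidate package of measures."""
    return sum(w for (a, b), w in interaction.items() if a in package and b in package)

best = max((set(c) for r in range(2, len(measures) + 1)
            for c in itertools.combinations(measures, r)), key=score)
print("best-scoring package:", sorted(best), "score:", score(best))
```

Exhaustive enumeration is only viable for a handful of measures; the point is the representation, in which contradictory pairs are penalised and reinforcing pairs rewarded when assembling a package.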
|
196 |
Statistical models for neuroimaging meta-analytic inference. Salimi-Khorshidi, Gholamreza, January 2011.
A statistical meta-analysis combines the results of several studies that address a set of related research hypotheses, thus increasing the power and reliability of the inference. Meta-analytic methods are over 50 years old and play an important role in science, pooling evidence from many trials to provide answers that no single trial would have sufficient samples to address. At the same time, the number of neuroimaging studies is growing dramatically, with many of these publications containing conflicting results or being based on only a small number of subjects. Hence there has been increasing interest in using meta-analysis methods to find consistent results for a specific functional task, or to predict the results of a study that has not been performed directly. The current state of neuroimaging meta-analysis is limited to coordinate-based meta-analysis (CBMA), i.e., using only the coordinates of activation peaks reported by a group of studies in order to "localize" the brain regions that respond to a certain type of stimulus. This class of meta-analysis suffers from a series of problems and hence cannot produce results as accurate as desired. In this research, we describe the problems that existing CBMA methods suffer from and introduce a hierarchical mixed-effects image-based meta-analysis (IBMA) solution that incorporates the sufficient statistics (i.e., the voxel-wise effect size and its associated uncertainty) from each study. In order to improve the statistical-inference stage of our proposed IBMA method, we introduce a nonparametric technique that is capable of adjusting such an inference for spatial nonstationarity. Given that, in common practice, neuroimaging studies rarely provide the full image data, we also introduce, in an attempt to improve the existing CBMA techniques, a fully automatic model-based approach that employs Gaussian-process regression (GPR) for estimating the meta-analytic statistic image from its corresponding sparse and noisy observations (i.e., the collected foci). To conclude, we introduce a new way to approach neuroimaging meta-analysis that enables the analysis to yield information such as "functional connectivity" and networks of interactions between brain regions, rather than just localizing functions.
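A minimal sketch of the image-based meta-analysis idea: when each study supplies a voxel-wise effect size and its standard error (the sufficient statistics mentioned above), a simple fixed-effects estimate and z-statistic can be formed per voxel by inverse-variance weighting. The data here are simulated, and the thesis's hierarchical mixed-effects model and nonstationarity adjustment go well beyond this illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_studies, n_voxels = 12, 1000
true_effect = np.zeros(n_voxels)
true_effect[:50] = 0.8                          # a small "activated" region (simulated)

# Each study reports a voxel-wise effect estimate and its standard error
se = rng.uniform(0.2, 0.5, size=(n_studies, 1)) * np.ones((1, n_voxels))
beta = true_effect + se * rng.standard_normal((n_studies, n_voxels))

# Fixed-effects (inverse-variance weighted) combination, voxel by voxel
w = 1.0 / se**2
pooled = (w * beta).sum(axis=0) / w.sum(axis=0)
pooled_se = np.sqrt(1.0 / w.sum(axis=0))
z = pooled / pooled_se

print("mean |z| inside the simulated activation:", np.abs(z[:50]).mean().round(2))
print("mean |z| outside                        :", np.abs(z[50:]).mean().round(2))
print("voxels with z > 3.1                     :", int((z > 3.1).sum()))
```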
|
197 |
Decomposition in multistage stochastic programming and a constraint integer programming approach to mixed-integer nonlinear programming. Vigerske, Stefan, 27 March 2013.
This thesis contributes to two topics in mathematical programming: stochastic optimization and mixed-integer nonlinear programming (MINLP). In the first part, we extend quantitative continuity results for two-stage stochastic mixed-integer linear programs to situations with simultaneous uncertainty in costs and right-hand side, give an extended review of decomposition algorithms for two- and multistage stochastic linear and mixed-integer linear programs, and discuss extensions and combinations of the Nested Benders Decomposition and Nested Column Generation methods for multistage stochastic linear programs that exploit the advantages of so-called recombining scenario trees. As an application of the latter, we consider optimal scheduling and investment planning for a regional energy system including wind power and energy storage. In the second part, we give a comprehensive overview of the state of the art in algorithms and solver technology for MINLP and show that some of these algorithms can be applied within the constraint integer programming framework SCIP. The availability of the latter allows us to utilize the power of existing mixed-integer linear and constraint programming technologies to handle the linear and discrete parts of the problem. Thus, we focus mainly on domain propagation, outer approximation, and reformulation techniques for handling convex and nonconvex nonlinear constraints. In an extensive computational study, we investigate the performance of our approach on applications from open-pit mine production scheduling and water distribution network design and on various benchmark sets. The results show that SCIP has become a competitive solver for MINLPs.
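As a hedged illustration of one ingredient named above, the sketch below applies plain outer approximation (Kelley-style gradient cuts) to a single convex nonlinear constraint: g(x) <= 0 is replaced by linearization cuts added on the fly to an LP relaxation until the nonlinear constraint is satisfied. It illustrates the generic technique only, not SCIP's implementation; the objective and constraint are invented.

```python
import numpy as np
from scipy.optimize import linprog

# minimize  -x0 - x1   subject to  g(x) = x0^2 + x1^2 - 1 <= 0  and  0 <= x <= 1
c = np.array([-1.0, -1.0])
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_g = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

A_cuts, b_cuts = [], []           # accumulated linearization (outer-approximation) cuts
x = np.array([1.0, 1.0])          # optimum of the initial relaxation (box constraints only)

for _ in range(100):
    if g(x) <= 1e-6:
        break                     # nonlinear constraint (approximately) satisfied
    # gradient cut  g(xk) + grad_g(xk)^T (x - xk) <= 0, valid because g is convex
    gk, dk = g(x), grad_g(x)
    A_cuts.append(dk)
    b_cuts.append(dk @ x - gk)
    res = linprog(c, A_ub=np.array(A_cuts), b_ub=np.array(b_cuts),
                  bounds=[(0.0, 1.0), (0.0, 1.0)])
    x = res.x

print("cuts added:", len(A_cuts), " x ~", np.round(x, 4),
      " objective ~", round(float(c @ x), 4))
# the true optimum is x = (sqrt(2)/2, sqrt(2)/2) with objective -sqrt(2) ~ -1.4142
```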
|
198 |
[en] ROBUST STRATEGIC BIDDING IN AUCTION-BASED MARKETS / [pt] ESTRATÉGIA DE OFERTAS ROBUSTA EM MERCADOS BASEADOS EM LEILÃO. Bruno Fanzeres dos Santos, 12 February 2019.
[en] We propose an alternative methodology to devise profit-maximizing strategic bids under uncertainty in markets endowed with a sealed-bid uniform-price auction with multiple divisible products. The optimal strategic bid of a price-maker agent largely depends on the knowledge (information) of the rivals' bidding strategies. By recognizing that the bids of rival competitors may deviate from the equilibrium and are difficult to characterize probabilistically, we propose a two-stage robust optimization model with equilibrium constraints to devise a risk-averse strategic bid in the auction. The proposed model is a trilevel optimization problem that can be recast as a particular instance of a bilevel program with equilibrium constraints. Reformulation procedures are proposed to construct a single-level-equivalent formulation suitable for a column-and-constraint generation (CCG) algorithm. Differently from previously reported works on two-stage robust optimization, our solution methodology does not employ the CCG algorithm to iteratively identify violated scenarios for the uncertain factors, which in this thesis are obtained through continuous variables. In the proposed solution methodology, the CCG is applied to identify a small subset of optimality conditions for the third-level model capable of representing the auction equilibrium constraints at the optimal solution of the master (bidding) problem. A numerical case study based on short-term electricity markets is presented to illustrate the applicability of the proposed robust model. Results show that even for the case where an imprecision of 1 percent in the rivals' offers at the Nash equilibrium is observed, the robust solution provides a non-negligible risk reduction in out-of-sample analysis.
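As a hedged sketch of the market mechanism underlying the bidding problem above, the code below clears a sealed-bid uniform-price auction for a single divisible product: offers are stacked in merit order and the marginal accepted offer sets the price paid for all accepted quantities. The agents, quantities and prices are invented; the thesis's robust trilevel bidding model operates on top of this kind of clearing rule.

```python
def clear_uniform_price(offers, demand):
    """Clear a sealed-bid uniform-price auction for one divisible product.

    offers: list of (agent, quantity, price) sell offers.
    Returns the clearing price and the accepted quantity per agent.
    """
    accepted = {}
    remaining, price = demand, None
    for agent, qty, p in sorted(offers, key=lambda o: o[2]):   # merit order
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted[agent] = accepted.get(agent, 0.0) + take
        remaining -= take
        price = p                     # marginal accepted offer sets the uniform price
    return price, accepted

# Illustrative offers in (MWh, EUR/MWh) -- assumed numbers
offers = [("us", 30, 42.0), ("rival_A", 50, 35.0), ("rival_B", 40, 55.0)]
price, accepted = clear_uniform_price(offers, demand=70)
print("clearing price:", price, " accepted:", accepted,
      " our revenue:", price * accepted.get("us", 0.0))
```

A strategic bidder anticipates how its own (quantity, price) offers shift this clearing outcome, which is precisely where the bilevel/equilibrium structure of the thesis enters.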
|
199 |
Optimal Jammer Placement to Interdict Wireless Network Services. Shankar, Arun, June 2008.
Thesis (Master's).
|
200 |
Pokročilá optimalizace toků v sítích / Advanced Optimization of Network Flows. Cabalka, Matouš, January 2018.
The master's thesis focuses on optimization models in logistics, with emphasis on the network interdiction problem. A brief introduction is followed by two overview chapters on graph theory and mathematical programming. Important definitions closely related to network interdiction problems are introduced in the chapter named Basic concepts of graph theory, and the theorems needed to solve the problems follow the definitions. The next chapter, Introduction to mathematical programming, first presents concepts from linear programming; the definitions and theorems are chosen with respect to the maximum flow problem that follows and the dual problem derived from it. Concepts of stochastic optimization follow. In the fifth chapter we discuss deterministic models of network interdiction, and stochastic models of network interdiction follow in the next chapter. All models are implemented in programs written in the GAMS programming language, and the codes are attached.
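As a hedged sketch of the two core problems discussed in the thesis, the code below computes a maximum s-t flow on a toy capacitated network and then performs a brute-force network interdiction, removing a fixed number of arcs so as to minimize the remaining maximum flow. It uses networkx in Python rather than GAMS (the tool used in the thesis), and the network is invented, so it only illustrates the problem structure.

```python
import itertools
import networkx as nx

def build_network():
    """Toy capacitated network (capacities are illustrative)."""
    G = nx.DiGraph()
    arcs = [("s", "a", 10), ("s", "b", 8), ("a", "b", 3),
            ("a", "t", 7), ("b", "t", 9)]
    for u, v, cap in arcs:
        G.add_edge(u, v, capacity=cap)
    return G

G = build_network()
max_flow, _ = nx.maximum_flow(G, "s", "t")
print("max s-t flow without interdiction:", max_flow)

# Interdictor removes k arcs to minimize the defender's max flow (brute force, fine for a toy instance)
k = 1
best_value, best_removed = float("inf"), None
for removed in itertools.combinations(list(G.edges()), k):
    H = G.copy()
    H.remove_edges_from(removed)
    value = nx.maximum_flow_value(H, "s", "t") if nx.has_path(H, "s", "t") else 0
    if value < best_value:
        best_value, best_removed = value, removed
print(f"interdicting {k} arc(s): remove {best_removed} -> max flow {best_value}")
```

In practice the interdiction problem is formulated via the max-flow/min-cut duality as a mixed-integer program rather than by enumeration; the enumeration above only makes the attacker-defender structure concrete.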
|