About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Processos de decisão Markovianos fatorados com probabilidades imprecisas / Factored Markov decision processes with Imprecise Transition Probabilities

Karina Valdivia Delgado 19 January 2010 (has links)
Em geral, quando modelamos problemas de planejamento probabilístico do mundo real, usando o arcabouço de Processos de Decisão Markovianos (MDPs), é difícil obter uma estimativa exata das probabilidades de transição. A incerteza surge naturalmente na especificação de um domínio, por exemplo, durante a aquisição das probabilidades de transição a partir de um especialista ou de dados observados através de técnicas de amostragem, ou ainda de distribuições de transição não estacionárias decorrentes do conhecimento insuficiente do domínio. Com o objetivo de se determinar uma política robusta, dada a incerteza nas transições de estado, Processos de Decisão Markovianos com Probabilidades Imprecisas (MDP-IPs) têm sido usados para modelar esses cenários. Infelizmente, apesar de existirem diversos algoritmos de solução para MDP-IPs, muitas vezes eles exigem chamadas externas de rotinas de otimização que podem ser extremamente custosas. Para resolver esta deficiência, nesta tese, introduzimos o MDP-IP fatorado e propomos métodos eficientes de programação matemática e programação dinâmica que permitem explorar a estrutura de um domínio de aplicação. O método baseado em programação matemática propõe soluções aproximadas eficientes para MDP-IPs fatorados, estendendo abordagens anteriores de programação linear para MDPs fatorados. Essa proposta, baseada numa formulação multilinear para aproximações robustas da função valor de estados, explora a representação fatorada de um MDP-IP, reduzindo em ordens de magnitude o tempo consumido em relação às abordagens não-fatoradas previamente propostas. O segundo método proposto, baseado em programação dinâmica, resolve o gargalo computacional existente nas soluções de programação dinâmica para MDP-IPs propostas na literatura: a necessidade de resolver múltiplos problemas de otimização não-linear. 
Assim, mostramos como representar a função valor de maneira compacta usando uma nova estrutura de dados chamada de Diagramas de Decisão Algébrica Parametrizados, e como aplicar técnicas de aproximação para reduzir drasticamente a sobrecarga computacional das chamadas a um otimizador não-linear, produzindo soluções ótimas aproximadas com erro limitado. Nossos resultados mostram uma melhoria de tempo de até duas ordens de magnitude em comparação às abordagens tradicionais enumerativas baseadas em programação dinâmica e uma melhoria de tempo de até uma ordem de magnitude sobre a extensão de técnicas de iteração de valor aproximadas para MDPs fatorados. Além disso, produzimos o menor erro de todos os algoritmos de aproximação avaliados. / When modeling real-world decision-theoretic planning problems with the framework of Markov Decision Processes (MDPs), it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, uncertainty arises in the specification of transitions due to elicitation of MDP transition models from an expert or data, or non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, Markov Decision Processes with Imprecise Transition Probabilities (MDP-IPs) have been introduced. Unfortunately, while various solutions exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient mathematical programming and dynamic programming methods to exploit its structure. First, we derive efficient approximate solutions for factored MDP-IPs based on mathematical programming, resulting in a multilinear formulation for robust maximin linear-value approximations in factored MDP-IPs. 
By exploiting factored structure in MDP-IPs we are able to demonstrate orders of magnitude reduction in solution time over standard exact non-factored approaches. Second, noting that the key computational bottleneck in the dynamic programming solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional flat dynamic programming approaches and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error among all approximation algorithms evaluated.
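The maximin backup underlying these methods can be sketched with a deliberately tiny, flat (non-factored) toy example. Here the imprecise transition model is simplified to a finite set of candidate distributions per state-action pair, so the adversarial minimization is plain enumeration rather than the nonlinear optimization the thesis targets; all states, actions and numbers are invented for illustration.

```python
# Minimal flat maximin value iteration for a toy MDP-IP.
# The credal set of imprecise transition probabilities is simplified
# here to a small finite list of candidate distributions per (state,
# action); the thesis's factored/parameterized machinery is omitted.

GAMMA = 0.9

# states 0, 1; action 'a' from state 0 has two candidate transition
# distributions (the "credal set"); the others are precise.
P = {
    (0, 'a'): [{0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}],
    (0, 'b'): [{0: 1.0}],
    (1, 'a'): [{1: 1.0}],
    (1, 'b'): [{0: 0.3, 1: 0.7}, {0: 0.1, 1: 0.9}],
}
R = {(0, 'a'): 0.0, (0, 'b'): 0.5, (1, 'a'): 1.0, (1, 'b'): 0.2}

def robust_backup(V):
    newV = {}
    for s in (0, 1):
        vals = []
        for a in ('a', 'b'):
            # nature picks the worst distribution in the credal set
            worst = min(sum(p * V[s2] for s2, p in dist.items())
                        for dist in P[(s, a)])
            vals.append(R[(s, a)] + GAMMA * worst)
        newV[s] = max(vals)  # the agent picks the best action
    return newV

V = {0: 0.0, 1: 0.0}
for _ in range(200):
    V = robust_backup(V)
print({s: round(v, 3) for s, v in V.items()})
```

Nature first minimizes over the credal set for each action, then the agent maximizes over actions; iterating this contraction converges to the robust value function.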
12

Processus Décisionnels de Markov pour l'autonomie ajustable et l'interaction hétérogène entre engins autonomes et pilotés / Markov Decision Processes for adjustable autonomy and heterogeneous interaction between autonomous and piloted robots

Lelerre, Mathieu 17 May 2018 (has links)
Les robots vont être de plus en plus utilisés dans les domaines civils, comme dans le domaine militaire. Ces robots, opérant en flottes, peuvent accompagner des soldats au combat, ou accomplir une mission en étant supervisés par un poste de contrôle. Du fait des exigences d'une opération militaire, il est difficile de laisser les robots décider de leurs actions sans accord d'un opérateur ou surveillance, en fonction de la situation. Dans cette thèse, nous nous attardons sur deux problématiques. D'une part, nous cherchons à exploiter l'autonomie ajustable de sorte à ce qu'un robot puisse accomplir sa mission de la manière la plus efficace possible, tout en respectant des restrictions assignées par un opérateur sur son niveau d'autonomie. Pour cela, celui-ci est en mesure de définir, pour un ensemble d'états et d'actions donné, un niveau de restriction. Ce niveau peut par exemple imposer au robot la télé-opération pour accéder à une zone à risque. D'autre part, comme nous envisageons la possibilité que plusieurs robots soient déployés en même temps, ces robots doivent se coordonner pour accomplir leurs objectifs. Seulement, comme les opérateurs peuvent prendre le contrôle de certains d'entre eux, la question de la coordination se pose. En effet, l'opérateur ayant ses propres préférences, perception de l'environnement, connaissances, et étant sujet au stress et aux hésitations, il est difficile de prévoir les actions que celui-ci va effectuer, et donc de s'y coordonner. Nous proposons dans cette thèse une approche visant à estimer la politique exécutée par un robot télé-opéré par apprentissage à partir des actions observées de ce robot. La notion de planification est très présente dans ces travaux, qui se basent sur des modèles de planification comme les Processus Décisionnels de Markov. / Robots will be more and more used in both civil and military fields. 
These robots, operating in fleets, can accompany soldiers in combat, or accomplish a mission while being supervised by a control center. Considering the requirements of a military operation, it is difficult to let robots decide their actions without an operator's agreement or supervision, depending on the situation. In this thesis, we focus on two problems. First, we try to exploit adjustable autonomy so that a robot accomplishes its mission as efficiently as possible, while respecting restrictions, assigned by an operator, on its autonomy level. To this end, the operator is able to define a restriction level for given sets of states and actions. This restriction can, for example, force the robot to be tele-operated to access a dangerous zone. Secondly, we consider that several robots can be deployed at the same time. These robots have to coordinate to accomplish their objectives. However, since operators can take control of some robots, coordination becomes harder. Indeed, the operator has preferences, perception, hesitations and stress that are not modeled by the agent. It is then hard to estimate his next actions, and thus to coordinate with him. We propose in this thesis an approach to estimate the policy executed by a tele-operated robot with learning methods based on the observed actions of this robot. The notion of planning is central to this work, which is based on planning models such as Markov Decision Processes.
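The restriction levels described above could be encoded, for instance, as a table over state-action pairs that filters what the planner may choose; the zones, actions and level names below are purely hypothetical.

```python
# Hypothetical encoding of operator-imposed autonomy restrictions:
# each (state, action) pair carries a restriction level, and the
# planner may only choose actions whose level does not exceed the
# autonomy granted to the robot in that state.

AUTONOMOUS, SUPERVISED, TELEOPERATED = 0, 1, 2

# invented mission: level required to take each action in each zone
restriction = {
    ('open_field', 'move'): AUTONOMOUS,
    ('open_field', 'inspect'): AUTONOMOUS,
    ('risk_zone', 'move'): TELEOPERATED,   # operator must drive
    ('risk_zone', 'inspect'): SUPERVISED,
}

def allowed_actions(state, granted_level, actions=('move', 'inspect')):
    """Actions the planner may use given the operator's granted level."""
    return [a for a in actions
            if restriction.get((state, a), AUTONOMOUS) <= granted_level]

print(allowed_actions('risk_zone', AUTONOMOUS))   # []
print(allowed_actions('risk_zone', SUPERVISED))   # ['inspect']
print(allowed_actions('open_field', AUTONOMOUS))  # ['move', 'inspect']
```

A planner would then search over `allowed_actions(...)` instead of the full action set, so the computed policy respects the operator's autonomy limits by construction.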
13

Optimization-Based Solutions for Planning and Control / Optimization-based Solutions to Optimal Operation under Uncertainty and Disturbance Rejection

Jalanko, Mahir January 2021 (has links)
Industrial automation systems normally consist of four different hierarchy levels: planning, scheduling, real-time optimization, and control. At the planning level, the goal is to compute an optimal production plan that minimizes the production cost while meeting process constraints. The planning model is typically formulated as a mixed integer nonlinear program (MINLP), which is hard to solve to global optimality due to its nonconvexity and large dimensionality. Uncertainty in component qualities in gasoline blending, due to measurement errors and variation in upstream processes, may lead to off-specification products which require re-blending. Uncertainty in product demands may lead to a suboptimal solution and a failure to capture some potential profit due to shortages in product supply. While incorporating process uncertainties is essential to reducing the production cost and increasing profitability, it comes with the disadvantage of increasing the complexity of the MINLP planning model. The key contribution at the planning level is to employ the inventory pinch decomposition method to consider uncertainty in component qualities and product demands, reducing the production cost and increasing the profitability of the gasoline blend application. At the control level, the goal is to ensure desired operating conditions by meeting process setpoints, ensure process safety, and avoid process failures. Model predictive control (MPC) is an advanced control strategy that utilizes a dynamic model of the process to predict its future dynamic behavior over a time horizon. The effectiveness of MPC relies heavily on the availability of a reasonably accurate process model. The key contributions at the control level are: (1) investigate the use of different system identification methods for the purpose of developing a dynamic model of a high-purity distillation column, which is a highly nonlinear process. 
(2) Develop a novel hybrid-based MPC to improve the control of the column and achieve flooding-free control. / Dissertation / Doctor of Philosophy (PhD) / The operation of a chemical process involves many decisions which are normally distributed into levels referred to as the process automation hierarchy. The process automation hierarchy levels are planning, scheduling, real-time optimization, and control. This thesis addresses two of the levels in the process automation hierarchy: planning and control. At the planning level, the objective is to ensure optimal utilization of raw materials and equipment to reduce production cost. At the control level, the objective is to meet and follow process setpoints determined by the real-time optimization level. The main goals of the thesis are: (1) develop an efficient algorithm to solve a large-scale planning problem that incorporates uncertainties in component qualities and product demands to reduce the production cost and maximize profit for the gasoline blending application. (2) Develop a novel hybrid-based model predictive control strategy to improve the control of an industrial distillation column that faces flooding issues.
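The receding-horizon idea behind MPC can be sketched on a toy scalar linear plant (not the thesis's distillation-column model), with a brute-force search over a coarse input set standing in for the numerical optimizer a real MPC would use; all parameters are invented.

```python
# Minimal receding-horizon MPC sketch on a toy scalar linear plant
# x' = a*x + b*u: at each step, enumerate input sequences over a short
# horizon, apply only the first input of the cheapest sequence, repeat.
from itertools import product

A, B = 1.0, 0.5            # toy plant parameters (assumed)
SETPOINT = 1.0
INPUTS = (-1.0, 0.0, 1.0)  # coarse discretized input set
HORIZON = 3

def mpc_step(x):
    best_u, best_cost = 0.0, float('inf')
    for seq in product(INPUTS, repeat=HORIZON):
        xk, cost = x, 0.0
        for u in seq:                      # predict over the horizon
            xk = A * xk + B * u
            cost += (xk - SETPOINT) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u                          # apply only the first input

x = 0.0
for _ in range(6):
    x = A * x + B * mpc_step(x)
print(round(x, 3))  # the state settles at the setpoint
```

At every step only the first input of the best predicted sequence is applied and the optimization is repeated from the newly measured state; this feedback is what distinguishes MPC from open-loop optimal control.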
14

Human-help in automated planning under uncertainty / Ajuda humana em planejamento automatizado sob incerteza

Franch, Ignasi Andrés 21 September 2018 (has links)
Planning is the sub-area of artificial intelligence that studies the process of selecting actions to lead an agent, e.g. a robot or a softbot, to a goal state. In many realistic scenarios, any choice of actions can lead the robot into a dead-end state, that is, a state from which the goal cannot be reached. In such cases, the robot can, pro-actively, resort to human help in order to reach the goal, an approach called symbiotic autonomy. In this work, we propose two different approaches to tackle this problem: (I) contingent planning, where the initial state is partially observable, configuring a belief state, and the outcomes of the robot actions are non-deterministic; and (II) probabilistic planning, where the initial state may be partially or totally observable and the actions have probabilistic outcomes. In both approaches, the human help is considered a scarce resource that should be used only when necessary. In contingent planning, the problem is to find a policy (a function mapping belief states into actions) that: (i) guarantees the agent will always reach the goal (strong policy); (ii) guarantees that the agent will eventually reach the goal (strong cyclic policy), or (iii) does not guarantee achieving the goal (weak policy). In this scenario, we propose a contingent planning system that considers human help to transform weak policies into strong (cyclic) policies. To do so, two types of human help are included: (i) human actions that modify states and/or belief states; and (ii) human observations that modify belief states. In probabilistic planning, the problem is to find a policy (a function mapping between world states and actions) that can be one of these two types: a proper policy, where the agent has probability 1 of reaching the goal; or an improper policy, in the case of unavoidable dead-ends. 
In general, the goal of the agent is to find a policy that minimizes the expected accumulated cost of the actions while maximizing the probability of reaching the goal. In this scenario, this work proposes probabilistic planners that consider human help to transform improper policies into proper policies, considering, however, two new (alternative) criteria: either to minimize the probability of using human actions or to minimize the expected number of human actions. Furthermore, we show that optimal policies under these criteria can be efficiently computed either by increasing human action costs or by assigning a penalty when human help is used. Solutions proposed in both scenarios, contingent planning and probabilistic planning with human help, were evaluated over a collection of planning problems with dead-ends. The results show that: (i) all generated policies (strong (cyclic) or proper) include human help only when necessary; and (ii) we were able to find policies for contingent planning problems with up to 10^15000 belief states and for probabilistic planning problems with more than 3*10^18 physical states. / Planejamento é a subárea de Inteligência Artificial que estuda o processo de selecionar ações que levam um agente, por exemplo um robô, de um estado inicial a um estado meta. Em muitos cenários realistas, qualquer escolha de ações pode levar o robô para um estado que é um beco-sem-saída, isto é, um estado a partir do qual a meta não pode ser alcançada. Nestes casos, o robô pode, pró-ativamente, pedir ajuda humana para alcançar a meta, uma abordagem chamada autonomia simbiótica. Neste trabalho, propomos duas abordagens diferentes para tratar este problema: (I) planejamento contingente, em que o estado inicial é parcialmente observável, configurando um estado de crença, e existe não-determinismo nos resultados das ações; e (II) planejamento probabilístico, em que o estado inicial é totalmente observável e as ações têm efeitos probabilísticos. 
Em ambas as abordagens, a ajuda humana é considerada um recurso escasso e deve ser usada somente quando estritamente necessária. No planejamento contingente, o problema é encontrar uma política (mapeamento entre estados de crença e ações) com: (i) garantia de alcançar a meta (política forte); (ii) garantia de eventualmente alcançar a meta (política forte-cíclica), ou (iii) sem garantia de alcançar a meta (política fraca). Neste cenário, uma das contribuições deste trabalho é propor sistemas de planejamento contingente que considerem ajuda humana para transformar políticas fracas em políticas fortes (cíclicas). Para isso, incluímos ajuda humana de dois tipos: (i) ações que modificam estados do mundo e/ou estados de crença; e (ii) observações que modificam estados de crenças. Em planejamento probabilístico, o problema é encontrar uma política (mapeamento entre estados do mundo e ações) que pode ser de dois tipos: política própria, na qual o agente tem probabilidade 1 de alcançar a meta; ou política imprópria, caso exista um beco-sem-saída inevitável. O objetivo do agente é, em geral, encontrar uma política que minimize o custo esperado acumulado das ações enquanto maximize a probabilidade de alcançar a meta. Neste cenário, este trabalho propõe sistemas de planejamento probabilístico que considerem ajuda humana para transformar políticas impróprias em políticas próprias, porém considerando dois novos critérios: minimizar a probabilidade de usar ações do humano e minimizar o número esperado de ações do humano. Mostramos ainda que políticas ótimas sob esses novos critérios podem ser computadas de maneira eficiente considerando que ações humanas possuem um custo alto ou penalizando o agente ao pedir ajuda humana. Soluções propostas em ambos os cenários, planejamento contingente e planejamento probabilístico com ajuda humana, foram empiricamente avaliadas sobre um conjunto de problemas de planejamento com becos-sem-saída. 
Os resultados mostram que: (i) todas as políticas geradas (fortes (cíclicas) ou próprias) incluem ajuda humana somente quando necessária; e (ii) foram encontradas políticas para problemas de planejamento contingente com até 10^15000 estados de crença e para problemas de planejamento probabilístico com até 3*10^18 estados do mundo.
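The penalty-based criterion can be illustrated with a minimal goal-oriented toy model (invented for this sketch, not taken from the thesis): human help is just another action with a large cost, and value iteration resorts to it only where the robot would otherwise be stuck.

```python
# Toy illustration: human help as a high-cost action in a goal-oriented
# MDP with a dead-end. With a large penalty on the human action, value
# iteration uses it only at the dead-end, where the robot cannot recover.

GOAL, DEAD = 'g', 'd'
HUMAN_PENALTY = 20.0  # cost charged whenever the human is called

def value_iteration(n=500):
    V = {'s': 0.0, DEAD: 0.0, GOAL: 0.0}
    for _ in range(n):
        # robot action from s: reaches the goal w.p. 0.9, dead-end w.p. 0.1
        V['s'] = 1.0 + 0.9 * V[GOAL] + 0.1 * V[DEAD]
        # at the dead-end: wait (cost 1 per step, forever) or call the human,
        # who rescues the robot to the goal with certainty
        V[DEAD] = min(1.0 + V[DEAD], HUMAN_PENALTY + V[GOAL])
    return V

V = value_iteration()
print(round(V['s'], 2), round(V[DEAD], 2))
```

With this penalty, waiting at the dead-end forever would cost more than calling the human, so the optimal policy uses help exactly there and nowhere else, mirroring result (i) above.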
15

Exploiting imprecise information sources in sequential decision making problems under uncertainty / Tirer profit de sources d'information imprécises pour la décision séquentielle dans l'incertain

Drougard, Nicolas 18 December 2015 (has links)
Les Processus Décisionnels de Markov Partiellement Observables (PDMPOs) permettent de modéliser facilement les problèmes probabilistes de décision séquentielle dans l'incertain. Lorsqu'il s'agit d'une mission robotique, les caractéristiques du robot et de son environnement nécessaires à la définition de la mission constituent le système. Son état n'est pas directement visible par l'agent (le robot). Résoudre un PDMPO revient donc à calculer une stratégie qui remplit la mission au mieux en moyenne, i.e. une fonction prescrivant les actions à exécuter selon l'information reçue par l'agent. Ce travail débute par la mise en évidence, dans le contexte robotique, de limites pratiques du modèle PDMPO : elles concernent l'ignorance de l'agent, l'imprécision du modèle d'observation ainsi que la complexité de résolution. Un homologue du modèle PDMPO, appelé pi-PDMPO, simplifie la représentation de l'incertitude : il vient de la Théorie des Possibilités Qualitatives qui définit la plausibilité des événements de manière qualitative, permettant la modélisation de l'imprécision et de l'ignorance. Une fois les modèles PDMPO et pi-PDMPO présentés, une mise à jour du modèle possibiliste est proposée. Ensuite, l'étude des pi-PDMPOs factorisés permet de mettre en place un algorithme appelé PPUDD utilisant des Arbres de Décision Algébriques afin de résoudre plus facilement les problèmes structurés. Les stratégies calculées par PPUDD, testées par ailleurs lors de la compétition IPPC 2014, peuvent être plus efficaces que celles des algorithmes probabilistes dans un contexte d'imprécision ou de grande dimension. Cette thèse propose d'utiliser les possibilités qualitatives dans le but d'obtenir des améliorations en termes de temps de calcul et de modélisation. / Partially Observable Markov Decision Processes (POMDPs) define a useful formalism to express probabilistic sequential decision problems under uncertainty. 
When this model is used for a robotic mission, the system is defined as the features of the robot and its environment, needed to express the mission. The system state is not directly seen by the agent (the robot). Solving a POMDP thus consists in computing a strategy which, on average, achieves the mission best, i.e. a function mapping the information known by the agent to an action. Some practical issues of the POMDP model are first highlighted in the robotic context: they concern the modeling of the agent's ignorance, the imprecision of the observation model and the complexity of solving real world problems. A counterpart of the POMDP model, called pi-POMDP, simplifies uncertainty representation with a qualitative evaluation of event plausibilities. It comes from Qualitative Possibility Theory, which provides the means to model imprecision and ignorance. After a formal presentation of the POMDP and pi-POMDP models, an update of the possibilistic model is proposed. Next, the study of factored pi-POMDPs allows us to set up an algorithm named PPUDD which uses Algebraic Decision Diagrams to solve large structured planning problems. Strategies computed by PPUDD, which have been tested in the IPPC 2014 competition, can be more efficient than those produced by probabilistic solvers when the model is imprecise or for high dimensional problems. This thesis proposes some ways of using Qualitative Possibility Theory to improve computation time and uncertainty modeling in practice.
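A rough flavor of the qualitative min/max propagation used in possibilistic models can be given by a toy belief update. This is a simplified sketch only: plausibility degrees, states and transitions are invented, and the pi-POMDP update in the thesis also conditions on observations, which is omitted here.

```python
# Toy qualitative possibilistic transition update: degrees are ordinal,
# and only min/max are used (no sums or products of probabilities).

def possibilistic_update(belief, trans, action):
    """belief: possibility degree of each state (most plausible = 1);
    trans[(s, action)][s2]: qualitative plausibility of jumping s -> s2."""
    states = list(belief)
    new = {}
    for s2 in states:
        new[s2] = max(min(belief[s], trans[(s, action)].get(s2, 0.0))
                      for s in states)
    # qualitative normalization: lift the most plausible state(s) to 1
    top = max(new.values())
    if 0 < top < 1.0:
        for s2 in new:
            if new[s2] == top:
                new[s2] = 1.0
    return new

belief = {'A': 1.0, 'B': 0.4}
trans = {('A', 'go'): {'A': 0.2, 'B': 1.0},
         ('B', 'go'): {'B': 1.0}}
new_belief = possibilistic_update(belief, trans, 'go')
print(new_belief)
```

Because only `min` and `max` appear, the scale is purely ordinal; this is what lets the formalism express imprecision and ignorance without committing to precise probabilities.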
16

Full Automation of Air Traffic Management in High Complexity Airspace / Vollautomatisierung der Flugsicherung in Lufträumen hoher Komplexität

Ehrmanntraut, Rüdiger 20 May 2010 (has links) (PDF)
The thesis is that automation of en-route Air Traffic Management in high complexity airspace can be achieved with a combination of automated tactic planning in a look-ahead time horizon of up to two hours, complemented with automated tactic conflict resolution functions. The literature review reveals that no significant results have yet been obtained and that full automation could be approached with a complementary integration of automated tactic resolutions AND planning. The focus shifts to ‘planning for capacity’ and ‘planning for resolution’ and also – but not only – for ‘resolution’. The work encompasses a theoretical part on planning, and several small-scale studies of an empirical, mathematical or simulated nature. The theoretical part of the thesis on planning under uncertainties attempts to conceive a theoretical model which abstracts specificities of planning in Air Traffic Management into a generic planning model. The resulting abstract model treats entities like the planner, the strategy, the plan and the actions, always considering the impact of uncertainties. The work innovates in specifying many links from the theory to the application in planning of air traffic management, and especially the new field of tactical capacity management. The second main part of the thesis comprises smaller self-contained works on different aspects of the concept, grouped into a section on complexity, another on tactic planning actions, and the last on planners. The produced studies are about empirical measures of conflicts and conflict densities to get a better understanding of the complexity of air traffic; studies on traffic organisation using tactical manoeuvres like speed control, lateral offset and tactical directs using fast-time simulation; and studies on airspace design like sector optimisation, dynamic sectorisation and its optimisation using optimisation techniques. 
In conclusion, it is believed that this work will contribute to further automation attempts, especially through its innovative focus on planning, based on a theory of planning; its findings already influence newer developments.
17

Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety / POMDP-modellerad Riskmedveten Autonom Körning med Riskmått

Skoglund, Caroline January 2021 (has links)
Autonomous vehicles promise to play an important role in increasing the efficiency and safety of road transportation. Although we have seen several examples of autonomous vehicles out on the road over the past years, how to ensure the safety of autonomous vehicles in uncertain and dynamic environments is still a challenging problem. This thesis studies this problem by developing a risk-aware decision making framework. The system that integrates the dynamics of an autonomous vehicle and the uncertain environment is modelled as a Partially Observable Markov Decision Process (POMDP). A risk measure is proposed based on the Responsibility-Sensitive Safety (RSS) distance, which quantifies the minimum distance to other vehicles required for safety. This risk measure is incorporated into the reward function of the POMDP to achieve risk-aware decision making. The proposed risk-aware POMDP framework is evaluated in two case studies. In a single-lane car-following scenario, the ego vehicle successfully avoids a collision in an emergency event where the vehicle in front of it makes a full stop. In the merge scenario, the ego vehicle successfully enters the main road from a ramp with a satisfactory distance to other vehicles. In conclusion, the risk-aware POMDP framework realizes a trade-off between safety and usability by keeping a reasonable distance and adapting to other vehicles' behaviours. / Autonoma fordon förutspås spela en stor roll i framtiden med målen att förbättra effektivitet och säkerhet för vägtransporter. Men även om vi sett flera exempel av autonoma fordon ute på vägarna de senaste åren är frågan om hur säkerhet ska kunna garanteras ett utmanande problem. Det här examensarbetet har studerat denna fråga genom att utveckla ett ramverk för riskmedvetet beslutsfattande. 
Det autonoma fordonets dynamik och den oförutsägbara omgivningen modelleras med en partiellt observerbar Markov-beslutsprocess (POMDP från engelskans “Partially Observable Markov Decision Process”). Ett riskmått föreslås baserat på ett säkerhetsavstånd förkortat RSS (från engelskans “Responsibility-Sensitive Safety”) som kvantifierar det minsta avståndet till andra fordon för garanterad säkerhet. Riskmåttet integreras i POMDP-modellens belöningsfunktion för att åstadkomma riskmedvetna beteenden. Den föreslagna riskmedvetna POMDP-modellen utvärderas i två fallstudier. I ett scenario där det egna fordonet följer ett annat fordon på en enfilig väg visar vi att det egna fordonet kan undvika en kollision då det framförvarande fordonet bromsar till stillastående. I ett scenario där det egna fordonet ansluter till en huvudled från en ramp visar vi att detta görs med ett tillfredställande avstånd till andra fordon. Slutsatsen är att den riskmedvetna POMDP-modellen lyckas realisera en avvägning mellan säkerhet och användbarhet genom att hålla ett rimligt säkerhetsavstånd och anpassa sig till andra fordons beteenden.
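The RSS minimum safe longitudinal distance on which the risk measure is based follows the published RSS formulation: the rear vehicle is assumed to accelerate during its response time and then brake gently, while the front vehicle brakes as hard as possible. The parameter values below are illustrative defaults, not those used in the thesis.

```python
# RSS minimum safe longitudinal distance (published RSS formulation;
# the numeric parameter values here are illustrative assumptions).

def rss_min_distance(v_rear, v_front, rho=1.0,
                     a_accel=3.0, b_min=4.0, b_max=8.0):
    """Minimum gap so the rear car can always stop in time.

    rho: response time [s]; a_accel: max acceleration of the rear car
    during the response time; b_min: its guaranteed (gentlest) braking;
    b_max: the front car's strongest possible braking. All in m/s^2.
    """
    v_resp = v_rear + rho * a_accel          # rear speed after response time
    d = (v_rear * rho + 0.5 * a_accel * rho ** 2
         + v_resp ** 2 / (2 * b_min)         # rear car's stopping distance
         - v_front ** 2 / (2 * b_max))       # front car's stopping distance
    return max(d, 0.0)

print(rss_min_distance(20.0, 20.0))  # both cars at 20 m/s (~72 km/h)
```

A risk measure of the kind described above can then be derived from how far the actual gap falls below this bound and fed into the POMDP reward function.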
18

[en] IMPACT OF ORGANIZATIONAL ENVIRONMENT UNCERTAINTY IN THE PLANNING PROCESS: THE CASE OF VARIG / [pt] IMPACTO DA INCERTEZA DO AMBIENTE ORGANIZACIONAL NO PROCESSO DE PLANEJAMENTO: O CASO VARIG

MIRIAM DA SILVA PIZZO 10 June 2003 (has links)
[pt] Neste trabalho pretende-se mostrar a relevância da realização do planejamento, mesmo em situações complexas que envolvam os mais diversos fatores internos e externos à organização, e a importância de saber a melhor forma de tomada de decisão de acordo com cada circunstância. Visando compreender os elementos estratégicos de uma empresa inserida em um mercado altamente dinâmico, desenvolveu-se um arcabouço teórico tratando de planejamento em condições de incerteza e análise do ambiente organizacional. Tendo como base esses elementos, elaborou-se uma avaliação do setor de aviação comercial e uma análise estratégica da companhia aérea VARIG Brasil. Os resultados dessa avaliação indicaram as deficiências do posicionamento estratégico dessa Companhia e apontaram a necessidade de reavaliação e aprimoramento da metodologia dos processos decisórios para que se possa obter melhor desempenho. / [en] The objective of this dissertation is to show the relevance of planning even in complex situations, involving the most diverse factors, internal or external to the organization, as well as the importance of recognizing the best alternatives in decision making, according to each circumstance. Aiming to understand the strategic elements of an organization inserted in a highly dynamic market, a theoretical basis has been developed dealing with planning under uncertainty and organizational environment analysis. With such basic elements, an evaluation of the commercial aviation business and a strategic analysis of VARIG Brasil airline were elaborated. The results of this evaluation indicated the deficiencies of the strategic position of the Company and pointed out the need of re-evaluating and improving the decision processes in order to attain better performance.
20

Supply chain planning models with general backorder penalties, supply and demand uncertainty, and quantity discounts

Megahed, Aly 21 September 2015 (has links)
In this thesis, we study three supply chain planning problems. The first two problems fall at the tactical planning level, while the third falls at the strategic/tactical level. We present a direct application of the first two planning problems in the wind turbine industry. For the third problem, we show how it can be applied to supply chains in the food industry. Many countries and localities have the explicitly stated goal of increasing the fraction of their electrical power generated by wind turbines. This has led to rapid growth in the manufacturing and installation of wind turbines. The globally installed capacity for manufacturing the different components of a wind turbine is nearly fully utilized. Because of the large penalties for missing wind turbine delivery deadlines, effective planning of the supply chain has a significant impact on the profitability of turbine manufacturers. Motivated by the planning challenges faced by one of the world’s largest manufacturers of wind turbines, we present a comprehensive tactical supply chain planning model for wind turbine manufacturing in the first part of this thesis. The model is multi-period, multi-echelon, and multi-commodity. Furthermore, the model explicitly incorporates backorder penalties with a general cost structure, i.e., the cost structure does not have to be linear as a function of the backorder delay. To the best of our knowledge, modeling-based supply chain planning has not been applied to wind turbines, nor has a model with all the above-mentioned features been described in the literature. Based on real-world data, we present numerical results that show the significant impact of the capability to model backorder penalties with general cost structures on the overall cost of wind turbine supply chains. With today’s rapidly changing global marketplace, it is essential to model uncertainty in supply chain planning.
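The difference between a linear backorder penalty and a general cost structure can be sketched as follows; the numbers, the grace-period schedule, and the function names are invented for illustration and are not taken from the thesis model:

```python
# Hypothetical illustration of backorder penalty structures.
# All rates and schedules below are invented example values.

def linear_penalty(delay_periods, rate=100.0):
    """Linear cost: the penalty grows proportionally with the delay."""
    return rate * delay_periods

def general_penalty(delay_periods, schedule=(0.0, 50.0, 200.0, 500.0)):
    """General cost structure: an arbitrary per-delay schedule, e.g.
    contractual penalties that jump sharply after grace periods."""
    if delay_periods < len(schedule):
        return schedule[delay_periods]
    # Beyond the schedule, grow linearly from the last tabulated value.
    return schedule[-1] + 500.0 * (delay_periods - len(schedule) + 1)

# A delivery 3 periods late costs 300.0 under the linear rule
# but 500.0 under this (steeper) general schedule.
lin = linear_penalty(3)
gen = general_penalty(3)
```

Because the general structure is an arbitrary function of the delay, it cannot be captured by a single per-period rate, which is what motivates modeling it explicitly.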
In the second part of this thesis, we develop a two-stage stochastic programming model for the comprehensive tactical planning of supply chains under supply uncertainty. In the first stage, procurement decisions are made, while in the second stage, production, inventory, and delivery decisions are made. The considered supply uncertainty combines supplier random yields and stochastic lead times, and is thus the most general form of such uncertainty to date. We apply our model to the same wind turbine supply chain. We present theoretical and numerical results that show the impact of supplier uncertainty/unreliability on the optimal procurement decisions. We also quantify the value of modeling uncertainty versus deterministic planning. Supplier selection with quantity discounts has been an active research problem in the operations research community. In the last part of this thesis, we focus on a new quantity discount scheme offered by suppliers in some industries. Suppliers are selected for a strategic planning period (e.g., 5 years). Fixed costs associated with supplier selection are incurred. Orders are placed monthly with any of the chosen suppliers, but the quantity discounts are based on the aggregated annual order quantities. We incorporate all of this into a multi-period, multi-product, multi-echelon supply chain planning problem and develop a mixed-integer programming (MIP) model for it. Leading commercial MIP solvers take 40 minutes on average to find any feasible solution for realistic instances of our model. With the aim of obtaining high-quality feasible solutions quickly, we develop an algorithm that constructs a good initial solution, and three iterative algorithms that improve this initial solution and are capable of producing high-quality primal solutions very fast. Two of the three iterative algorithms are based on MIP-based local search, and the third incorporates a variable neighborhood descent (VND) combining the first two.
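The two-stage structure with random supplier yield can be illustrated with a toy scenario-enumeration sketch; every parameter here (demand, yields, probabilities, costs) is invented, and the thesis model is far richer (multiple echelons, stochastic lead times, etc.):

```python
# Toy two-stage sketch (not the thesis model): stage 1 fixes a
# procurement quantity q; in stage 2, a random yield fraction y
# realizes and any shortfall is covered at a premium unit price.

def expected_cost(q, scenarios, demand=100, unit_cost=1.0, premium=3.0):
    """Expected total cost of ordering q units over yield scenarios,
    each given as a (probability, yield_fraction) pair."""
    total = 0.0
    for prob, y in scenarios:
        delivered = q * y
        shortfall = max(0.0, demand - delivered)
        total += prob * (unit_cost * q + premium * shortfall)
    return total

scenarios = [(0.5, 1.0), (0.3, 0.8), (0.2, 0.5)]  # (probability, yield)
# Brute-force the first-stage decision over a discrete grid.
best_q = min(range(0, 201), key=lambda q: expected_cost(q, scenarios))
```

Note that the optimal order exceeds the demand of 100: hedging against low-yield scenarios is exactly the effect that deterministic planning misses, which is what the "value of modeling uncertainty" quantifies.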
We present numerical results for a set of instances based on a real-world supply chain in the food industry and show the efficiency of our customized algorithms. Within a three-hour run-time limit, the leading commercial solver CPLEX finds only a few feasible solutions with lower total costs than our initial solution. All our iterative algorithms clearly outperform CPLEX. The VND algorithm has the best average performance: its average relative gap to the best known feasible solution is within 1% in less than 40 minutes of computing time.
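A generic variable neighborhood descent loop, in its textbook form (not the thesis implementation, whose neighborhoods are MIP-based local searches), can be sketched as:

```python
# Generic VND: cycle through an ordered list of neighborhoods; on any
# improvement, restart from the first (cheapest) neighborhood; stop
# when no neighborhood improves the incumbent.

def vnd(solution, cost, neighborhoods):
    """Each neighborhood maps a solution to its best neighbor (or None)."""
    k = 0
    while k < len(neighborhoods):
        neighbor = neighborhoods[k](solution)
        if neighbor is not None and cost(neighbor) < cost(solution):
            solution = neighbor   # improvement found: restart at k = 0
            k = 0
        else:
            k += 1                # no improvement: try the next neighborhood
    return solution

# Toy usage: minimize x^2 over the integers with step-1 and step-5 moves.
cost = lambda x: x * x
step = lambda d: (lambda x: min((x - d, x + d), key=cost))
best = vnd(17, cost, [step(1), step(5)])  # -> 0
```

The ordering matters in practice: cheap neighborhoods are placed first so the expensive ones are invoked only when the cheap ones are exhausted.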
