About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Risk-Aware Planning by Extracting Uncertainty from Deep Learning-Based Perception

Toubeh, Maymoonah I. 07 December 2018 (has links)
The integration of deep learning models and classical techniques in robotics is constantly creating solutions to problems once thought out of reach. For most models that work, the issue is the gap between experimentation and reality: strategies are needed to assess the risk a model carries when applied in real-world, safety-critical situations. This work proposes the use of Bayesian approximations of uncertainty from deep learning in a robot planner, showing that this produces more cautious actions in safety-critical scenarios. The case study investigated is motivated by a setup where an aerial robot acts as a "scout" for a ground robot when the area below is unknown or dangerous, with applications in space exploration, military operations, or search-and-rescue. Images taken from the aerial view provide a less obstructed map to guide the navigation of the robot on the ground. Experiments are conducted using deep learning semantic image segmentation, followed by a path planner based on the resulting cost map, to provide an empirical analysis of the proposed method. The analysis assesses the impact of variations in the uncertainty extraction, as well as the absence of an uncertainty metric, on the overall system, using a defined factor that measures surprise to the planner. The analysis is performed on multiple datasets, showing a similar trend of lower surprise when uncertainty information is incorporated in the planning, provided threshold values of the hyperparameters in the uncertainty extraction have been met. / Master of Science / Deep learning (DL) refers to the use of large hierarchical structures, often called neural networks, to approximate semantic information from data input of various forms. DL has shown superior performance at many tasks, such as several forms of image understanding, often referred to as computer vision problems. Deep learning techniques are trained using large amounts of data to map input data to output interpretations. The method should then perform correct input-output mappings on new data, different from the data it was trained on. Robots often carry various sensors from which it is possible to make interpretations about the environment. Inputs from a sensor can be high dimensional, such as the pixels given by a camera, and processing these inputs can be quite tedious and inefficient for a human interpreter. Deep learning has recently been adopted by roboticists as a means of automatically interpreting and representing sensor inputs, like images. The issue that arises with the traditional use of deep learning is twofold: it forces an interpretation of the inputs even when an interpretation is not applicable, and it does not provide a measure of certainty with its outputs. Many techniques have been developed to address this issue. These techniques aim to produce a measure of uncertainty associated with DL outputs, such that even when an incorrect or inapplicable output is produced, it is accompanied by a high level of uncertainty. To explore the efficacy and applicability of these uncertainty extraction techniques, this thesis looks at their use in part of a robot planning system. Specifically, the input to the robot planner is an overhead image taken by an unmanned aerial vehicle (UAV), and the output is a path between a set start and goal position to be taken by an unmanned ground vehicle (UGV) below.
The image is passed through a deep learning portion of the system that performs semantic segmentation, mapping each pixel of the image to a meaningful class. Based on the segmentation, each pixel is given a cost proportional to the perceived level of safety associated with that class. A cost map is thus formed over the entire image, from which traditional robotics techniques are used to plan a path from start to goal. A comparison is performed between the risk-neutral case, which uses the conventional DL method, and the risk-aware case, which uses the uncertainty information accompanying the modified DL technique. The overall effects on the robot system are evaluated by observing a metric called the surprise factor, where a high surprise factor signifies a poor prediction of the actual cost associated with a path. The risk-neutral case is shown to have a higher surprise factor than the proposed risk-aware setup, both on average and in safety-critical case studies.
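
The "Bayesian approximations of uncertainty" this abstract refers to are commonly obtained with Monte Carlo dropout. Below is a minimal sketch of that pattern feeding a risk-aware cost map, assuming a PyTorch segmentation network with dropout layers; the model interface, `risk_weight`, and the per-class cost mapping are illustrative assumptions, not the thesis's exact pipeline:

```python
import torch

def mc_dropout_segmentation(model, image, n_samples=20):
    """Approximate Bayesian inference via Monte Carlo dropout: keep dropout
    active at test time, average softmax outputs over several stochastic
    forward passes, and use the per-pixel variance as uncertainty."""
    model.eval()
    # Re-enable only the dropout layers (assumes nn.Dropout / nn.Dropout2d).
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(n_samples)])  # (S, B, C, H, W)
    mean_probs = probs.mean(dim=0)             # predictive mean, (B, C, H, W)
    uncertainty = probs.var(dim=0).sum(dim=1)  # per-pixel variance, (B, H, W)
    return mean_probs, uncertainty

def risk_aware_cost_map(mean_probs, uncertainty, class_costs, risk_weight=5.0):
    """Combine expected class cost with an uncertainty penalty (illustrative)."""
    expected_cost = (mean_probs * class_costs.view(1, -1, 1, 1)).sum(dim=1)
    return expected_cost + risk_weight * uncertainty
```

Planning on the combined map then trades off expected safety against the model's confidence, which is what yields the more cautious paths the abstract describes.
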
2

Multi-Objective Resource Provisioning in Network Function Virtualization Infrastructures

Oliveira, Diogo 09 April 2018 (has links)
Network function virtualization (NFV) and software-defined networking (SDN) are two recent networking paradigms that strive to increase manageability, scalability, programmability and dynamism. The former decouples network functions and hosting devices, while the latter decouples the data and control planes. As more and more service providers adopt these new paradigms, there is a growing need to address multi-failure conditions, particularly those arising from large-scale disaster events. Overall, addressing the virtual network function (VNF) placement and routing problem is crucial to deploying NFV survivability. In particular, many studies have inspected non-survivable VNF provisioning, but no known work has proposed survivable/resilient solutions for multi-failure scenarios. In light of the above, this work proposes and deploys a survivable multi-objective provisioning solution for NFV infrastructures. This study initially proposes multi-objective solutions to efficiently solve the VNF mapping/placement and routing problem. In particular, an integer linear programming (ILP) optimization and a greedy heuristic method try to maximize the request acceptance rate while minimizing costs and implementing traffic engineering (TE) load-balancing. Next, these schemes are expanded to perform "risk-aware" virtual function mapping and traffic routing in order to improve the reliability of user services. Furthermore, in addition to the ILP optimization and greedy heuristic schemes, a metaheuristic genetic algorithm (GA) is also introduced, which is more suitable for large-scale networks. These solutions are then tested in idealistic and realistic stressor scenarios in order to evaluate their performance, accuracy and reliability.
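
As a rough illustration of the greedy side of this provisioning problem, the sketch below places each VNF of a service chain on the feasible node that minimizes a weighted sum of placement cost and current load (a simple TE load-balancing term). All names and the cost model are illustrative assumptions, not the dissertation's formulation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float           # remaining CPU capacity
    cost: float          # cost of hosting one VNF here
    load: float = 0.0    # current utilization, used for load-balancing

def greedy_place_chain(chain_demands, nodes, alpha=1.0, beta=1.0):
    """Place each VNF of a service chain on the feasible node minimizing a
    weighted sum of placement cost and current load. Rolls back and rejects
    the request if any VNF cannot be placed (lowering the acceptance rate)."""
    placement, touched = [], []
    for demand in chain_demands:
        feasible = [n for n in nodes if n.cpu >= demand]
        if not feasible:
            for n, d in touched:          # roll back partial placement
                n.cpu += d; n.load -= d
            return None                   # request rejected
        best = min(feasible, key=lambda n: alpha * n.cost + beta * n.load)
        best.cpu -= demand; best.load += demand
        touched.append((best, demand)); placement.append(best.name)
    return placement

nodes = [Node("a", cpu=8, cost=1.0), Node("b", cpu=4, cost=0.5)]
print(greedy_place_chain([2.0, 2.0, 1.0], nodes))  # e.g. ['b', 'a', 'b']
```

A survivable variant would additionally reserve a backup placement in a different failure region for each VNF, which is where the multi-failure objectives above come in.
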
3

Uncertainty-aware path planning on aerial imagery and unknown environments

Moore, Charles Alan 10 May 2024 (has links) (PDF)
Off-road autonomous navigation faces a significant challenge due to the lack of maps or road markings for planning paths. Classical path planning methods assume a perfectly known environment, neglecting the inherent perception and sensing uncertainty that comes from detecting terrain and obstacles in off-road environments. This research proposes an uncertainty-aware path planning method, URA*, that uses aerial images for autonomous navigation in off-road environments. An ensemble convolutional neural network model performs pixel-level traversability estimation from aerial images of the region of interest. Traversability predictions are represented as a grid of traversal probability values. An uncertainty-aware planner is applied to compute the best path from a start point to a goal point, considering these noisy traversal probability estimates. The proposed planner also incorporates techniques for rapid replanning during online robot operation. The method is evaluated on the Massachusetts Road Dataset, the DeepGlobe dataset, and aerial images from the CAVS proving grounds at MSU.
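
A hedged sketch of planning over a grid of traversal probabilities of the kind URA* consumes: A* search where each cell's step cost grows as its traversal probability drops. The `1 - log(p)` cost transform and all names are illustrative assumptions, not the URA* algorithm itself:

```python
import heapq, math

def plan_on_probability_grid(prob, start, goal, min_prob=0.05):
    """A* over a 2D grid of traversal probabilities in [0, 1]. Step cost
    1 - log(p) makes low-probability (uncertain) cells expensive, so the
    planner prefers terrain the perception model is confident about."""
    rows, cols = len(prob), len(prob[0])
    heur = lambda c: math.hypot(c[0] - goal[0], c[1] - goal[1])
    g = {start: 0.0}
    came_from = {start: None}
    open_set = [(heur(start), start)]
    closed = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (r + dr, c + dc)
            if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                continue
            p = prob[nbr[0]][nbr[1]]
            if p <= min_prob or nbr in closed:
                continue
            ncost = g[cell] + 1.0 - math.log(p)  # uncertainty-penalized step
            if ncost < g.get(nbr, float("inf")):
                g[nbr] = ncost
                came_from[nbr] = cell
                heapq.heappush(open_set, (ncost + heur(nbr), nbr))
    return None  # no path above the probability threshold
```

Since every step costs at least 1 and the Euclidean heuristic never exceeds the number of remaining steps, the heuristic stays admissible; online replanning would rerun the search from the robot's current cell as new imagery updates the grid.
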
4

Trajectory Planning for Autonomous Underwater Vehicles: A Stochastic Optimization Approach

Albarakati, Sultan 30 August 2020 (has links)
In this dissertation, we develop a new framework for 3D trajectory planning of Autonomous Underwater Vehicles (AUVs) in realistic ocean scenarios. The work is divided into three parts. In the first part, we provide a new approach for deterministic trajectory planning in a steady current, described using Ocean General Circulation Model (OGCM) data. We apply Non-Linear Programming (NLP) to the time-optimal trajectory planning problem. To demonstrate the effectiveness of the resulting model, we consider the time-optimal trajectory planning of an AUV operating in the Red Sea and the Gulf of Aden. In the second part, we generalize our 3D trajectory planning framework to time-dependent ocean currents. We also extend the framework to accommodate multi-objective criteria, focusing specifically on the Pareto front between time and energy. To assess the effectiveness of the extended framework, we initially test the methodology in idealized settings. The scheme is then demonstrated on time-energy trajectory planning problems in the Gulf of Aden. In the last part, we account for uncertainty in the ocean current field, which is described by an ensemble of flow realizations. The proposed approach is based on a non-linear stochastic programming methodology that uses a risk-aware objective function accounting for the full variability of the flow ensemble. We formulate stochastic problems that aim to minimize a risk measure of the travel time or energy consumption, using a flexible methodology that enables the user to explore various objectives, ranging seamlessly from risk-neutral to risk-averse. The capabilities of the approach are demonstrated using steady and transient currents. Advanced visualization tools have further been designed to present the results.
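
The risk-aware objective over a flow ensemble can be made concrete with a sample-based conditional value-at-risk (CVaR). A minimal sketch under assumed names (`travel_time` evaluates one candidate trajectory against one ensemble member; nothing here reproduces the dissertation's NLP formulation):

```python
import numpy as np

def cvar(samples, alpha=0.9):
    """Sample-based CVaR_alpha of a cost: mean of the worst (1 - alpha) tail.
    alpha = 0 recovers the risk-neutral mean; alpha -> 1 approaches worst case."""
    s = np.sort(np.asarray(samples))
    tail = s[int(np.ceil(alpha * len(s))):]
    return tail.mean() if len(tail) else s[-1]

def risk_aware_cost(trajectory, flow_ensemble, travel_time, alpha=0.9):
    """Evaluate a candidate AUV trajectory against every flow realization and
    score it by the CVaR of the resulting travel times (illustrative)."""
    times = [travel_time(trajectory, flow) for flow in flow_ensemble]
    return cvar(times, alpha)

# A planner would then minimize risk_aware_cost over trajectory parameters,
# sweeping alpha to move seamlessly from risk-neutral to risk-averse designs.
```
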
5

Parsimonious, Risk-Aware, and Resilient Multi-Robot Coordination

Zhou, Lifeng 28 May 2020 (has links)
In this dissertation, we study multi-robot coordination in the context of multi-target tracking. Specifically, we are interested in the coordination achieved by means of submodular function optimization. Submodularity encodes the diminishing returns property that arises in multi-robot coordination. For example, the marginal gain of assigning an additional robot to track the same target diminishes as the number of robots assigned increases. The advantage of formulating coordination problems as submodular optimization is that a simple greedy algorithm is guaranteed to give good performance. However, this often comes at the expense of unrealistic models and assumptions. For example, the standard formulation does not take into account the fact that robots may fail, either randomly or due to adversarial attacks. When operating in uncertain conditions, we typically seek to optimize the expected performance. However, this gives a user no flexibility to seek conservative or aggressive behaviors from the team of robots. Furthermore, most coordination algorithms force robots to communicate at each time step, even though they may not need to. Our goal in this dissertation is to overcome these limitations by devising coordination algorithms that are parsimonious in communication, allow a user to manage the risk in robot performance, and are resilient to worst-case robot failures and attacks. In the first part of this dissertation, we focus on designing parsimonious communication strategies for target tracking. Specifically, we investigate the problem of determining when to communicate and whom to communicate with. When the robots use range sensors, the tracking performance is a function of the relative positions of the robots and the targets. We propose a self-triggered communication strategy in which a robot communicates its own position to its neighbors only when a certain set of conditions is violated. We prove that this strategy converges to the optimal robot positions for tracking a single target and, in practice, reduces the number of communication messages by 30%. When tracking multiple targets, we can reduce communication by forming subsets of robots and assigning one subset to track each target. We investigate a number of measures of tracking quality based on the observability matrix and show which ones are submodular and which ones are not. For non-submodular measures, we show that a greedy algorithm gives a 1/(n+1) approximation if we restrict the subset to n robots. In optimizing submodular functions, a common assumption is that the function value is deterministic, which may not hold in practice. For example, sensor performance may depend on environmental conditions that are not known exactly. In the second part of the dissertation, we design an algorithm for stochastic submodular optimization. The standard formulation for stochastic optimization optimizes the expected performance. However, the expectation is a risk-neutral measure. Instead, we optimize the Conditional Value-at-Risk (CVaR), which gives the user the flexibility of choosing a risk level. We present an algorithm, based on the greedy algorithm, and prove that its performance has bounded suboptimality and improves with running time. We also present an online version of the algorithm to adapt to real-time scenarios. In the third part of this dissertation, we focus on scenarios where a set of robots may fail naturally or due to adversarial attacks.
Our objective is to track as many targets as possible, a submodular measure, assuming worst-case robot failures. We present both centralized and distributed resilient tracking algorithms to cope with centralized and distributed communication settings. We prove that these algorithms give a constant-factor approximation of the optimal in polynomial running time. / Doctor of Philosophy / Today, robotics and autonomous systems are increasingly used in areas such as manufacturing, military operations, agriculture, medical sciences, and environmental monitoring. However, most of these systems are fragile and vulnerable to adversarial attacks and uncertain environmental conditions. In most cases, even if a part of the system fails, the entire system's performance can be significantly undermined. As robots start to coexist with humans, we need algorithms that can be trusted under real-world, not just ideal, conditions. Thus, this dissertation focuses on enabling security, trustworthiness, and long-term autonomy in robotics and autonomous systems. In particular, we devise coordination algorithms that are resilient to attacks, trustworthy in the face of uncertain conditions, and allow the long-term operation of multi-robot systems. We evaluate our algorithms through extensive simulations and proof-of-concept experiments. Generally speaking, multi-robot systems form the "physical" layer of Cyber-Physical Systems (CPS), the Internet of Things (IoT), and smart cities. Thus, our research can find applications in connected and autonomous vehicles, intelligent transportation, communications and sensor networks, and environmental monitoring in smart cities.
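
A hedged sketch of the CVaR-flavored greedy selection described in the second part: each candidate robot-target assignment is scored by the CVaR of sampled marginal gains given the current selection. This greedy-on-CVaR-of-marginal-gains rule is a simplification for illustration; the dissertation's algorithm optimizes the CVaR of the whole set value with an auxiliary threshold variable. `sample_gain` is an assumed callback returning one stochastic marginal-gain sample:

```python
import numpy as np

def cvar_of_gain(samples, alpha=0.9):
    """CVaR of a reward: mean of the worst (1 - alpha) fraction of samples.
    alpha = 0 is the risk-neutral mean; larger alpha is more risk-averse."""
    s = np.sort(np.asarray(samples))              # ascending: worst first
    k = max(1, int(np.ceil((1 - alpha) * len(s))))
    return s[:k].mean()

def greedy_cvar_selection(candidates, sample_gain, budget, alpha=0.9,
                          n_samples=100):
    """Greedily build a set of assignments, scoring each candidate by the
    CVaR of its sampled marginal gain given the current selection."""
    selected, remaining = [], list(candidates)
    for _ in range(min(budget, len(remaining))):
        scored = [(cvar_of_gain([sample_gain(c, selected)
                                 for _ in range(n_samples)], alpha), c)
                  for c in remaining]
        _, best = max(scored, key=lambda t: t[0])
        selected.append(best)
        remaining.remove(best)
    return selected
```
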
6

R-BPM: A Methodology for Risk Management in BPM Initiatives

FERREIRA, Fabio da Silva 01 July 2016 (has links)
In the search for agility, economy and quality in their processes, an increasing number of companies have adopted Business Process Management (BPM) techniques, since BPM allows an organization to make its processes more efficient, with greater precision, speed, flexibility and quality. However, even managed processes may face risks that can strongly impact the organization's goals if they are not handled appropriately. As risk management requires resources and the execution of many activities (interviews, analysis, meetings and so on) that are also demanded by BPM, the integration of these two fields has been a frequent research theme in recent years.
A problem with existing works, however, is that the proposed risk management activities apply only to some phases of the BPM lifecycle. This work aimed to construct and evaluate a methodology for managing risks in business processes in an integrated manner with the BPM lifecycle. The methodology, called R-BPM, is composed of a set of phases and a supporting tool. It is inspired by the COSO (Committee of Sponsoring Organizations of the Treadway Commission) risk management framework and was built through a Design Science Research approach, which involves an iterative cycle of construction and evaluation. To evaluate the methodology and the software tool built to support it, a case study was conducted in a public organization. The artifacts were assessed through focus groups and surveys with the organization's experts. The results showed that R-BPM not only allows risk management activities to be executed together with the BPM lifecycle, but also shares responsibility for the risks and gives analysts and process owners better conditions to evaluate them. As the methodology was used to solve a real-world problem in the organization studied, this research also contributed to disseminating academic knowledge to the market.
7

Managing Climate Overshoot Risk with Reinforcement Learning: Carbon Dioxide Removal, Tipping Points and Risk-constrained RL

Kerakos, Emil January 2024 (has links)
In order to study how to reach different climate targets, scientists and policymakers rely on results from computer models known as Integrated Assessment Models (IAMs). These models are used to quantitatively study different ways of achieving warming targets, such as the Paris goal of limiting warming to 1.5-2.0 °C, deriving climate mitigation pathways that are optimal in some sense. However, when applied to the Paris goal, many IAMs derive pathways that overshoot the temperature target: global temperature temporarily exceeds the warming target for a period of time before decreasing and stabilizing at the target. Although little is known with certainty about the impacts of overshooting, recent studies indicate that it may entail major risks. This thesis explores two different ways of including overshoot risk in a simple IAM by introducing stochastic elements to it. Algorithms from Reinforcement Learning (RL) are then applied to the model in order to find pathways that take overshoot risk into consideration. In one experiment we apply standard risk-neutral RL to the DICE model extended with a probabilistic damage function and carbon dioxide removal technologies. In the other experiment, the model is further augmented with a probabilistic tipping element model. Using risk-constrained RL, we then train an agent to optimally control this model while keeping the conditional value-at-risk (CVaR) of triggering tipping elements below a user-specified threshold. Although some instability and convergence issues are present during training, in both experiments the agents are able to learn policies that outperform a simple baseline. Furthermore, in the second experiment the risk-constrained agent is also able to (approximately) keep the tipping risk metric below the desired threshold. The final policies are analysed for domain insights, indicating that carbon removal via temporary carbon storage solutions could be a sizeable contributor to negative emissions on a time horizon relevant for overshooting. Finally, recommended next steps for future work are discussed.
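
A hedged sketch of the risk-constrained pattern described above: REINFORCE with a Lagrangian penalty whose multiplier performs dual ascent on the gap between the sampled CVaR of a per-episode risk signal (here, a binary "tipping" indicator) and a user-set budget. The environment interface, the `info["tipped"]` flag, and the plain policy-gradient update are illustrative assumptions, not the thesis's algorithm:

```python
import torch

def rollout(env, policy):
    """Run one episode in a Gymnasium-style env; return summed log-probs,
    the return, and a risk cost (1.0 if a tipping event was reported)."""
    obs, _ = env.reset()
    logps, total_reward, risk, done = [], 0.0, 0.0, False
    while not done:
        dist = policy(torch.as_tensor(obs, dtype=torch.float32))
        action = dist.sample()
        obs, reward, terminated, truncated, info = env.step(action.numpy())
        logps.append(dist.log_prob(action).sum())
        total_reward += reward
        risk = max(risk, float(info.get("tipped", 0.0)))  # assumed flag
        done = terminated or truncated
    return torch.stack(logps).sum(), total_reward, risk

def cvar(costs, alpha=0.95):
    """Sample CVaR of a cost: mean of its worst (largest) (1 - alpha) tail."""
    v, _ = torch.sort(costs)
    k = max(1, int(round((1 - alpha) * len(v))))
    return v[-k:].mean()

def train(env, policy, iters=200, batch=32, alpha=0.95, budget=0.1,
          lr=1e-3, lr_lam=5e-2):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    lam = 0.0  # Lagrange multiplier for the CVaR constraint
    for _ in range(iters):
        logps, rets, risks = zip(*(rollout(env, policy) for _ in range(batch)))
        logps = torch.stack(logps)
        rets = torch.tensor(rets)
        risks = torch.tensor(risks)
        # Penalized REINFORCE update: reward minus lam * risk, mean-centered.
        adv = (rets - rets.mean()) - lam * (risks - risks.mean())
        loss = -(logps * adv).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        # Dual ascent: lam grows while CVaR(risk) exceeds the budget.
        lam = max(0.0, lam + lr_lam * float(cvar(risks, alpha) - budget))
    return policy
```

The multiplier tilts the policy toward risk-averse behavior only while the constraint is violated and decays back toward zero otherwise, which matches the approximate constraint satisfaction reported in the second experiment.
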
