41.
Risk-Aware Planning by Extracting Uncertainty from Deep Learning-Based Perception
Toubeh, Maymoonah I., 07 December 2018
The integration of deep learning models and classical techniques in robotics is constantly creating solutions to problems once thought out of reach. However, most working models face a gap between experimentation and reality, creating a need for strategies that assess the risk of applying different models in real-world, safety-critical situations. This work proposes the use of Bayesian approximations of uncertainty from deep learning in a robot planner, showing that this produces more cautious actions in safety-critical scenarios. The case study investigated is motivated by a setup where an aerial robot acts as a "scout" for a ground robot when the area below is unknown or dangerous, with applications in space exploration, military operations, or search-and-rescue. Images taken from the aerial view are used to provide a less obstructed map to guide the navigation of the robot on the ground. Experiments are conducted using a deep learning semantic image segmentation model, followed by a path planner based on the resulting cost map, to provide an empirical analysis of the proposed method. The analysis assesses the impact of variations in the uncertainty extraction, as well as the absence of an uncertainty metric, on the overall system, using a defined factor that measures surprise to the planner. The analysis is performed on multiple datasets, showing a similar trend of lower surprise when uncertainty information is incorporated in the planning, provided threshold values of the hyperparameters in the uncertainty extraction are met. / Master of Science / Deep learning (DL) refers to the use of large hierarchical structures, often called neural networks, to approximate semantic information from data input of various forms. DL has shown superior performance at many tasks, such as several forms of image understanding, often referred to as computer vision problems. Deep learning techniques are trained using large amounts of data to map input data to output interpretation; the method should then perform correct input-output mappings on new data, different from the data it was trained on.
Robots often carry various sensors from which it is possible to make interpretations about the environment. Inputs from a sensor can be high dimensional, such as pixels given by a camera, and processing these inputs can be quite tedious and inefficient for a human interpreter. Deep learning has recently been adopted by roboticists as a means of automatically interpreting and representing sensor inputs, like images. The issue that arises with the traditional use of deep learning is twofold: it forces an interpretation of the inputs even when an interpretation is not applicable, and it does not provide a measure of certainty with its outputs. Many techniques have been developed to address these issues. They aim to produce a measure of uncertainty associated with DL outputs, such that even when an incorrect or inapplicable output is produced, it is accompanied by a high level of uncertainty.
To explore the efficacy and applicability of these uncertainty extraction techniques, this thesis looks at their use within a robot planning system. Specifically, the input to the robot planner is an overhead image taken by an unmanned aerial vehicle (UAV), and the output is a path between set start and goal positions to be taken by an unmanned ground vehicle (UGV) below. The image is passed through a deep learning portion of the system that performs semantic segmentation on the image, mapping each pixel to a meaningful class. Based on the segmentation, each pixel is given a cost proportional to the perceived level of safety associated with its class. A cost map is thus formed over the entire image, from which traditional robotics techniques are used to plan a path from start to goal.
A comparison is performed between the risk-neutral case, which uses the conventional DL method, and the risk-aware case, which uses the uncertainty information accompanying the modified DL technique. The overall effects on the robot system are assessed by observing a metric called the surprise factor, where a high surprise factor signifies a poor prediction of the actual cost associated with a path. The risk-neutral case is shown to have a higher surprise factor than the proposed risk-aware setup, both on average and in safety-critical case studies.
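As a rough illustration of the pipeline above, consider the following sketch. It assumes Monte Carlo dropout, a common Bayesian approximation of uncertainty in deep networks; the function names, the entropy-based uncertainty penalty, and the exact surprise definition are hypothetical stand-ins, not the thesis's published code.

```python
import numpy as np

def mc_dropout_predictions(forward_pass, image, n_samples=20):
    """Run several stochastic forward passes (dropout kept active at test
    time); return per-pixel mean class probabilities and predictive entropy."""
    samples = np.stack([forward_pass(image) for _ in range(n_samples)])  # (T, H, W, C)
    mean_probs = samples.mean(axis=0)                                    # (H, W, C)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)  # (H, W)
    return mean_probs, entropy

def risk_aware_cost_map(mean_probs, entropy, class_costs, uncertainty_weight=1.0):
    """class_costs: np.ndarray of per-class traversal costs. Base cost comes
    from the most likely class, inflated where the model is uncertain;
    uncertainty_weight = 0 recovers the risk-neutral case."""
    labels = mean_probs.argmax(axis=-1)  # (H, W) class indices
    return class_costs[labels] + uncertainty_weight * entropy

def surprise_factor(path, predicted_cost, true_cost):
    """One plausible surprise measure: the relative gap between the cost the
    planner predicted along the path and the cost actually incurred."""
    pred = sum(predicted_cost[r, c] for r, c in path)
    true = sum(true_cost[r, c] for r, c in path)
    return abs(true - pred) / max(true, 1e-9)
```

Planning on the risk-aware map steers paths away from uncertain regions, which is what drives the lower surprise factor reported above.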
42.
The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide
Ketchum, Devin Kyle, 29 January 2020
Today's technology is becoming more interactive with voice assistants like Siri. However, interactive systems such as Siri make mistakes. The purpose of this thesis is to explore using affect as an implicit feedback channel so that such mistakes can be easily corrected in real time. The CAfFEINE Framework, created by Dr. Saha, is a context-aware affective feedback loop in an intelligent environment. The research described in this thesis focuses on analyzing a user's physiological response to the service provided by an intelligent environment. To test this feedback loop, an experiment was constructed using an on-screen, step-by-step assembly guide for a Tangram puzzle. To categorize each user's response to the experiment, baseline readings were gathered for the user's stressed and non-stressed states. The Paced Stroop Test and two other baseline tests were conducted to capture these two states. The data gathered in the baseline tests were then used to train a support vector machine to predict the user's response to the Tangram experiment.
During the data analysis phase of the research, the predictions on the Tangram experiment were not as expected. Multiple trials of training data for the support vector machine were explored, but the data gathered throughout this research were not enough to draw proper conclusions. More focus was then given to analyzing the pre-processed data of the baseline tests, in an attempt to find a factor, or group of factors, for determining whether a user's physiological responses would be useful for training the support vector machine. Trends were found when comparing the areas under the curves of the Paced Stroop Test phasic driver plots. These comparison factors might be a useful approach for differentiating users based on their physiological responses during the Paced Stroop Test. / Master of Science / The purpose of this thesis was to use the CAfFEINE Framework, proposed by Dr. Saha, in a real-world environment. Dr. Saha's framework utilizes a user's physiological responses, e.g., heart rate, in a smart environment to give information to the smart devices. For example, if Siri gave a user directions to someone's home and told that user to turn right when the user knew they needed to turn left, that user would have a physical reaction: their heart rate would increase. If the user were wearing a smart watch, Siri would be able to see the heart rate increase and realize, from past experiences with that user, that the information she gave was incorrect. She could then correct herself.
My research focused on measuring user reaction to a smart service provided in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly task. Users were asked to follow on-screen instructions to assemble the Tangram puzzle. Their reactions were recorded through a smart watch and analyzed post-experiment. Based on the results of a Paced Stroop Test taken before the experiment, a computer algorithm predicted their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected, so the rest of the research focused on why the results did not support Dr. Saha's previous framework results.
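To make the classification step concrete, here is a minimal sketch of a baseline-trained stress classifier, assuming features such as mean heart rate and the area under the phasic (EDA) driver curve; the feature set, window values, and labels are invented for illustration, not the thesis's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-window features from the smart watch: [mean heart rate,
# area under the phasic driver curve]. Labels: 1 = stressed (Stroop phase),
# 0 = relaxed baseline.
X_baseline = np.array([[72, 0.8], [70, 0.6], [95, 3.1],
                       [98, 2.7], [68, 0.5], [101, 3.4]])
y_baseline = np.array([0, 0, 1, 1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_baseline, y_baseline)

# Predict the stress state for windows recorded during the Tangram task.
X_tangram = np.array([[88, 2.2], [71, 0.7]])
print(clf.predict(X_tangram))
```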
43.
Efficient and Portable Middleware for Application-level Adaptation
Rao, Deepak, 23 May 2001
Software-intensive systems operating in a shared environment must utilize a "request, acquire and release" protocol. In the popular client-server architecture, resource-poor clients rely on servers for the needed capabilities. With mobile clients using wireless connectivity, the disparity in resource needs can force clients to consider adaptation, leading to a strategy of self-reliance. Achieving self-reliance through adaptation becomes even more attractive in environments that are dynamic and continually changing. A more comprehensive strategy is for the mobile client to recognize changing resource levels and plan for any degradation; that is, the applications in the mobile client need to adapt to the changing environment and availability of resources.
Portable adaptation middleware that is sensitive to architecture and context changes in network operations is designed and implemented. The Adaptation Middleware provides the flexibility for client applications to adapt not only to changing resources around them, but also to changing resource levels within the applications themselves. Further, the Adaptation Middleware imposes few changes on the structure of the client application: the middleware creates the adaptations, and the client remains unaware of and unconcerned with them.
The Adaptation Middleware in this study also enables more informative cost estimation for applications such as mobile agents. A sample application developed using the Adaptation Middleware shows performance improvements in the range of 31% to 54%. A limited set of experiments shows an average response time of 68 milliseconds, which seems acceptable for most applications. Further, the Adaptation Middleware permits increased stability for applications whose demand levels are subject to high uncertainty. / Master of Science
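The thesis's API is not reproduced here, so the following is only a minimal sketch of the pattern described above: a middleware layer that monitors resource levels and fires registered adaptations transparently, leaving the client application's structure untouched. All names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AdaptationRule:
    """Fire `adapt` when the named resource drops below `threshold`."""
    resource: str
    threshold: float
    adapt: Callable[[float], None]

class AdaptationMiddleware:
    def __init__(self, probe: Callable[[str], float]):
        self.probe = probe  # reads the current level of a named resource
        self.rules: List[AdaptationRule] = []

    def register(self, rule: AdaptationRule) -> None:
        self.rules.append(rule)

    def poll_once(self) -> None:
        """Check every registered resource and trigger adaptations; the
        client application itself never sees this happen."""
        for rule in self.rules:
            level = self.probe(rule.resource)
            if level < rule.threshold:
                rule.adapt(level)

# Example: degrade image quality when bandwidth falls below 100 kbps.
mw = AdaptationMiddleware(probe=lambda name: 80.0)  # stub resource probe
mw.register(AdaptationRule("bandwidth_kbps", 100.0,
            lambda lvl: print(f"low bandwidth ({lvl} kbps); lowering quality")))
mw.poll_once()
```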
44.
Policy-based approach for context-aware systems
Al-Sammarraie, Mohammed, January 2011
Pervasive (ubiquitous) computing is a paradigm in which computers are submerged into the background of everyday life. One important aspect of pervasive systems is context-awareness. Context-aware systems are those that can adapt their behaviours according to the current context. Context-aware applications are being integrated into everyday activities such as health care, smart homes and transportation, and they span a wide range of systems such as mobile phones, learning systems and smart vehicles. Some context-aware systems are critical, since the consequence of failing to identify a given context may be catastrophic. For example, an auto-pilot system is a critical context-aware system; it senses the humidity, clouds and wind speed, and accordingly adjusts the altitude, throttle and other parameters. A critical context-aware system has to be provably correct. Policy-based approaches have been used in many applications, but not in context-aware systems.

In this research, we want to discover the anatomy (i.e. architecture, structure and operational behaviour) of policy-based management as applied to context-aware systems, and how policies are managed within such a dynamic system. We propose a novel computational model, and its formalisation is presented using the Calculus of Context-aware Ambients (CCA). CCA has been proposed as a suitable mathematical notation to model mobile and context-aware systems. We chose CCA for three reasons: (i) in CCA, mobility and context-awareness are primitive constructs and are treated as first-class citizens; (ii) properties of a system can be formally analysed; (iii) CCA specifications are executable, leading to rapid prototyping and early validation of system properties. We then show how policies can be expressed in CCA. For illustration, the event-condition-action (ECA) conceptual policy model is specified in CCA in a natural fashion. We also propose a policy-based architecture for context-aware systems, showing its different components and how they interact. Furthermore, we give the CCA specification of the policy enforcement mechanism used in our proposed architecture.

To evaluate our approach, a real-world case study of an infostation-based mobile learning (mLearning) system is chosen. This mLearning system is deployed across a university campus to enable mobile users to access mobile services (mServices), namely course materials (lectures, tests and tutorials) and communication services (intelligent message notification and VoIP). Users can access the mServices through their mobile devices (handset phones, PDAs and laptops) regardless of their device type or location within the campus. We have specified the mLearning system in CCA (i.e. a specification based on the policies of the mServices), and the specification is then simulated using the CCA interpreter tool. We have also developed an animation tool specially designed for the mLearning system, which provides a graphical representation of the CCA processes. In terms of safety and liveness, some important properties of the mLearning system have been validated as a proof of concept.
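The thesis expresses its policies in the CCA process calculus; purely as an illustration of the ECA concept (not of CCA notation), a minimal event-condition-action enforcer might look like the sketch below, with all names and the infostation example invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, str]  # e.g., {"location": "library", "device": "PDA"}

@dataclass
class ECAPolicy:
    event: str                             # e.g., "user_entered_zone"
    condition: Callable[[Context], bool]   # guard over the current context
    action: Callable[[Context], None]      # service adaptation to perform

class PolicyEnforcer:
    def __init__(self) -> None:
        self.policies: List[ECAPolicy] = []

    def add(self, policy: ECAPolicy) -> None:
        self.policies.append(policy)

    def handle(self, event: str, ctx: Context) -> None:
        """On each event, run the action of every policy whose event matches
        and whose condition holds in the current context."""
        for p in self.policies:
            if p.event == event and p.condition(ctx):
                p.action(ctx)

# Example: deliver course material to any supported device entering the zone.
enforcer = PolicyEnforcer()
enforcer.add(ECAPolicy(
    event="user_entered_zone",
    condition=lambda c: c.get("device") in {"PDA", "phone", "laptop"},
    action=lambda c: print(f"delivering course material to {c['device']}"),
))
enforcer.handle("user_entered_zone", {"device": "PDA", "location": "library"})
```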
45.
Foundations of Human-Aware Planning -- A Tale of Three Models
January 2018
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model into the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter goes beyond traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
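As a toy illustration of deliberating over both a robot model and a human mental model (the dissertation's treatment of explanations and explicability is far richer), the sketch below scores candidate plans by their cost in the agent's own model plus a penalty for diverging from what the human's model expects; the models, costs, and weighting are invented.

```python
def plan_cost(plan, model):
    """Sum of action costs under a given model (the agent's or the human's)."""
    return sum(model.get(action, float("inf")) for action in plan)

def choose_plan(candidates, robot_model, human_model, explicability_weight=0.5):
    """Trade off optimality in the robot's own model against the gap from the
    human's expectation; when the gap is large, an explanation is needed."""
    def score(plan):
        own = plan_cost(plan, robot_model)
        expected = plan_cost(plan, human_model)
        return own + explicability_weight * abs(own - expected)
    return min(candidates, key=score)

robot_model = {"unlock": 1, "open": 1, "crawl_under": 5}
human_model = {"unlock": 1, "open": 1, "crawl_under": 1}  # human thinks crawling is easy
plans = [["unlock", "open"], ["crawl_under"]]
print(choose_plan(plans, robot_model, human_model))  # ['unlock', 'open']
```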
46.
Aware as a Theory of Japanese Aesthetics
Flowers, Johnathan Charles, 01 December 2011
Aware, as generally conceived in Japanese aesthetics, refers to the felt content within a particular work of art that drives the aesthetic value of that work. This thesis presents a theory of art that places aware as central to aesthetic experience in the Japanese context, as derived from Shinto and Buddhist ontology and from the aesthetic theories of Motoori Norinaga. This theory is then contrasted with the aesthetic theory of Susanne K. Langer as presented in Philosophy in a New Key, Feeling and Form, and Problems of Art, to provide a full explication of what it means to have an aesthetic experience or create art in the Japanese context.
47.
Contention-Aware and Power-Constrained Scheduling for Chip Multicore Processors
Kundan, Shivam, 01 December 2019
The parallel nature of process execution on chip multiprocessors (CMPs) has considerably boosted application performance in the past decade. Generally, a number of computing resources are shared among the cores of a CMP, such as shared last-level caches, buses, and memory. This ensures architectural simplicity while also boosting performance for multi-threaded applications. However, a consequence of sharing computing resources is that concurrently executing applications may suffer performance degradation if their collective resource requirements exceed the total resources available. If resource allocation is not carefully considered, the potential performance gain from having multiple cores may be outweighed by the losses due to contention among processes for shared resources. Furthermore, CMPs with inbuilt dynamic voltage-frequency scaling (DVFS) may try to compensate for the performance loss by scaling to a higher frequency. When the degradation is due to shared-resource contention, this does not necessarily improve performance, but it guarantees a significant penalty on power consumption because of the quadratic relation between electrical power and voltage (P ∝ V²f).
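To see why the quadratic term dominates, a back-of-the-envelope computation with the standard dynamic CMOS power model P = C·V²·f is shown below; since voltage must rise roughly in step with frequency, a modest frequency boost costs disproportionate power. The capacitance, voltage, and frequency values are illustrative, not the thesis's.

```python
def dynamic_power(capacitance, voltage, frequency):
    """Standard dynamic CMOS power model: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

base = dynamic_power(capacitance=1.0, voltage=0.9, frequency=2.0e9)
boost = dynamic_power(capacitance=1.0, voltage=1.1, frequency=2.6e9)
print(f"{boost / base:.2f}x power for a {2.6 / 2.0:.2f}x frequency boost")
# ~1.94x power for a 1.30x frequency increase
```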
48.
AUTOMATED LAYOUT-INCLUSIVE SYNTHESIS OF ANALOG CIRCUITS USING SYMBOLIC PERFORMANCE MODELS
RANJAN, MUKESH, January 2005
No description available.
49.
Designing Energy-Aware Optimization Techniques through Program Behaviour Analysis
Kommaraju, Ananda Varadhan, January 2014
Green computing techniques aim to reduce the power footprint of modern embedded devices, with particular emphasis on processors, the power hot-spots of these devices. In this thesis we propose compiler-driven and profile-driven optimizations that reduce power consumption in a modern embedded processor. We show that these optimizations reduce power consumption in functional units and memory subsystems with very low performance loss. We present three new techniques: transition-aware scheduling, leakage reduction in data caches using criticality analysis, and dynamic power reduction in data caches using locality analysis of data regions.
A novel instruction scheduling technique is proposed to address leakage power consumption in functional units. This technique, transition-aware scheduling, is motivated by the idle periods that arise in the utilization of functional units during program execution. A sufficiently long idle period in a functional unit can be exploited to place the unit in a low-power state. The scheduling algorithm increases the duration of idle periods without hampering performance and drives power gating in these periods. A power model with idle cycles as a parameter shows that this technique saves up to 25% of leakage power with very low performance impact.
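As a toy model of the power-gating opportunity (the thesis's actual power model is not reproduced here; the break-even threshold and energy units below are invented), the point is that a schedule which clusters idle cycles into long gaps saves leakage where a scattered schedule cannot:

```python
BREAK_EVEN_CYCLES = 10   # gating overhead must be amortized over the idle gap
LEAKAGE_PER_CYCLE = 1.0  # arbitrary energy units

def leakage_saved(idle_gaps):
    """Energy saved by power-gating every idle gap longer than break-even."""
    return sum((gap - BREAK_EVEN_CYCLES) * LEAKAGE_PER_CYCLE
               for gap in idle_gaps if gap > BREAK_EVEN_CYCLES)

# The same 18 idle cycles: scattered gaps are too short to gate (first call),
# while a transition-aware schedule that clusters them can be gated (second).
print(leakage_saved([4, 5, 3, 6]))  # 0.0
print(leakage_saved([18]))          # 8.0
```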
In modern embedded programs, data regions can be classified as critical or non-critical, where critical data regions significantly impact performance. A new technique is proposed to identify such data regions through profiling. This technique, along with a new criticality-based cache policy, is used to control the power state of the data cache. The scheme allocates non-critical data regions to low-power cache regions, thereby reducing leakage power consumption by up to 40% without compromising performance.
This profiling technique is extended to identify data regions with low locality, as opposed to regions with high data reuse. A locality-based cache policy, driven by cache parameters such as size and associativity, is proposed. This scheme reduces dynamic as well as static power consumption in the cache subsystem, cutting 25% of the total power consumption in the data caches without hampering execution time.
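A minimal sketch of how the two profile-driven policies could steer cache allocation follows; the partition names, region tags, and rules are hypothetical, not the thesis's mechanism.

```python
def assign_partition(region):
    """region: profile-derived flags for one data region."""
    if not region["critical"]:
        return "drowsy"  # low-power state; slower wake-up is tolerable here
    if not region["reuse"]:
        return "bypass"  # low locality: avoid polluting the main cache
    return "main"        # critical, high-reuse data stays at full power

profile = [
    {"name": "hot_loop_array", "critical": True,  "reuse": True},
    {"name": "init_table",     "critical": False, "reuse": False},
    {"name": "stream_buffer",  "critical": True,  "reuse": False},
]
for region in profile:
    print(region["name"], "->", assign_partition(region))
```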
In this thesis, the problem of the power consumption of a program is decoupled from the number of processor cores. The underlying architecture model is simplified to abstract away a variety of processor scenarios. This simplified model can be scaled up for various multi-core architectures, such as Chip Multi-Processors, Simultaneous Multi-Threaded processors, and Chip Multi-Threaded processors.
The three techniques proposed in this thesis leverage underlying hardware features such as low-power functional units, drowsy caches and split data caches. They reduce the power consumption of a wide range of benchmarks with low performance loss.
50.
Task Oriented Privacy-preserving (TOP) Technologies Using Automatic Feature Selection
Jafer, Yasser, January 2016
A large amount of digital information collected and stored in datasets creates vast opportunities for knowledge discovery and data mining. These datasets, however, may contain sensitive information about individuals and, therefore, it is imperative to ensure that their privacy is protected.
Most research in the area of privacy-preserving data publishing does not make any assumptions about the intended analysis task applied to the dataset. In many domains, however, such as healthcare and finance, it is possible to identify the analysis task beforehand. Incorporating knowledge of the ultimate analysis task may improve the quality of the anonymized data while protecting the privacy of individuals. Furthermore, the existing research that does consider the ultimate analysis task (e.g., classification) is not suitable for high-dimensional data.
We show that automatic feature selection, a well-known dimensionality reduction technique, can be utilized to consider both privacy and utility simultaneously. In doing so, we show that feature selection can enhance existing privacy-preserving techniques addressing k-anonymity and differential privacy, protecting privacy while reducing the amount of modification applied to the dataset and hence, in most cases, achieving higher utility.
We consider incorporating the concept of privacy-by-design within the feature selection process. We propose techniques that turn filter-based and wrapper-based feature selection into privacy-aware processes. To this end, we build a layer of privacy on top of the regular feature selection process and obtain a privacy-preserving feature selection that is guided not only by accuracy but also by the amount of private information protected.
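A minimal sketch of a privacy-aware wrapper is shown below: each candidate attribute subset is scored by classifier accuracy minus a penalty for the privacy risk of the attributes it exposes. The per-attribute risk values and the penalty weight are invented for illustration, not the thesis's formulation.

```python
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
risk = [0.1, 0.1, 0.6, 0.6]  # assumed disclosure risk per attribute

def wrapper_score(features, alpha=0.3):
    """Accuracy of a classifier on the subset, penalized by its mean risk."""
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, list(features)], y, cv=3).mean()
    return acc - alpha * sum(risk[f] for f in features) / len(features)

best = max((c for r in range(1, 5) for c in combinations(range(4), r)),
           key=wrapper_score)
print("selected attribute indices:", best)
```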
In addition to considering privacy after feature selection, we introduce a framework for a privacy-aware feature selection evaluation measure. That is, we incorporate privacy during feature selection and obtain a list of candidate privacy-aware attribute subsets that consider (and satisfy) both efficacy and privacy requirements simultaneously.
Finally, we propose a multi-dimensional, privacy-aware evaluation function that incorporates efficacy, privacy, and dimensionality weights and enables the data holder to obtain the best attribute subset according to its preferences.
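One plausible form of such a function is sketched below as a weighted combination of the three terms; the weights, the linear form, and the candidate scores are invented, since the thesis defines its own measure.

```python
def evaluate_subset(efficacy, privacy, n_selected, n_total,
                    w_efficacy=0.5, w_privacy=0.3, w_dim=0.2):
    """Score an attribute subset; all three terms are normalized to [0, 1]."""
    dimensionality = 1.0 - n_selected / n_total  # reward smaller subsets
    return w_efficacy * efficacy + w_privacy * privacy + w_dim * dimensionality

# The data holder tunes the weights to its preferences, then takes the argmax.
candidates = {
    ("age", "zip"):           evaluate_subset(0.90, 0.40, 2, 10),
    ("age", "diagnosis"):     evaluate_subset(0.85, 0.75, 2, 10),
    ("age", "zip", "gender"): evaluate_subset(0.92, 0.30, 3, 10),
}
print(max(candidates, key=candidates.get))  # ('age', 'diagnosis')
```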