41

The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide

Ketchum, Devin Kyle 29 January 2020 (has links)
Today's technology is becoming more interactive with voice assistants like Siri. However, interactive systems such as Siri make mistakes. The purpose of this thesis is to explore using affect as an implicit feedback channel so that such mistakes can be corrected easily in real time. The CAfFEINE Framework, created by Dr. Saha, is a context-aware affective feedback loop in an intelligent environment. The research described in this thesis focuses on analyzing a user's physiological response to the service provided by an intelligent environment. To test this feedback loop, an experiment was constructed using an on-screen, step-by-step assembly guide for a Tangram puzzle. To categorize the user's response to the experiment, baseline readings were gathered for a user's stressed and non-stressed states. The Paced Stroop Test and two other baseline tests were conducted to capture these two states. The data gathered in the baseline tests was then used to train a support vector machine to predict the user's response to the Tangram experiment. During the data analysis phase of the research, the predictions on the Tangram experiment were not as expected. Multiple trials of training data for the support vector machine were explored, but the data gathered throughout this research was not enough to draw proper conclusions. More focus was then given to analyzing the pre-processed data of the baseline tests in an attempt to find a factor, or group of factors, that would determine whether a user's physiological responses would be useful for training the support vector machine. Trends were found when comparing the areas under the curves of the Paced Stroop Test phasic driver plots, suggesting that these comparison factors might be a useful approach for differentiating users based upon their physiological responses during the Paced Stroop Test. / Master of Science / The purpose of this thesis was to use the CAfFEINE Framework, proposed by Dr. Saha, in a real-world environment. Dr. Saha's framework utilizes a user's physical responses, e.g. heart rate, in a smart environment to give information to the smart devices. For example, suppose Siri gave a user directions to someone's home and told that user to turn right when the user knew they needed to turn left. That user would have a physical reaction: their heart rate would increase. If the user were wearing a smart watch, Siri would be able to see the heart rate increase, realize from past experiences with that user that the information she gave was incorrect, and then correct herself. My research focused on measuring user reaction to a smart service provided in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly situation. The users were asked to follow on-screen instructions to assemble the Tangram puzzle. Their reactions were recorded through a smart watch and analyzed post-experiment. Based on the results of a Paced Stroop Test they took before the experiment, a computer algorithm would predict their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected. Therefore, the rest of the research focused on why the results did not support Dr. Saha's previous Framework results.
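A minimal sketch of the baseline-then-predict pipeline described in this abstract, not the author's code: features from the Paced Stroop Test baselines train a support vector machine that labels each Tangram step as stressed or non-stressed. The feature names and values below are hypothetical illustrations.

```python
# Hedged sketch: hypothetical per-window features (mean heart rate, heart-rate
# variability, area under the phasic skin-conductance driver curve) drawn from
# baseline readings; none of these numbers come from the thesis.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X_baseline = np.array([
    [72.0, 55.0, 0.12],   # non-stressed baseline window
    [71.5, 58.0, 0.10],
    [88.0, 31.0, 0.45],   # stressed window (Paced Stroop Test)
    [91.0, 28.0, 0.52],
])
y_baseline = np.array([0, 0, 1, 1])  # 0 = non-stressed, 1 = stressed

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_baseline, y_baseline)

# Predict the user's state for each step of the Tangram assembly guide.
X_tangram_steps = np.array([[75.0, 50.0, 0.15], [86.0, 33.0, 0.40]])
print(model.predict(X_tangram_steps))  # e.g. [0 1]
```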
42

Efficient and Portable Middleware for Application-level Adaptation

Rao, Deepak 23 May 2001 (has links)
Software-intensive systems operating in a shared environment must utilize a "request, acquire and release" protocol. In the popular client-server architecture, resource-poor clients rely on servers for the needed capabilities. With mobile clients using wireless connectivity, the disparity in resource needs can force the consideration of adaptation by clients, leading to a strategy of self-reliance. Achieving self-reliance through adaptation becomes even more attractive in environments that are dynamic and continually changing. A more comprehensive strategy is for the mobile client to recognize the changing resource levels and plan for any such degradation; that is, the applications in the mobile client need to adapt to the changing environment and availability of resources. Portable adaptation middleware that is sensitive to architecture and context changes in network operations is designed and implemented. The Adaptation Middleware provides the flexibility for client applications to adapt not only to changing resources around them, but also to changing resource levels within the client applications themselves. Further, the Adaptation Middleware imposes few changes on the structure of the client application: the middleware creates the adaptations, and the client remains unaware of and unconcerned with them. The Adaptation Middleware in this study also enables more informative cost estimation for applications such as mobile agents. A sample application developed using the Adaptation Middleware shows performance improvements in the range of 31% to 54%. A limited set of experiments shows an average response time of 68 milliseconds, which seems acceptable for most applications. Further, the Adaptation Middleware permits increased stability for applications whose demand levels are subject to high uncertainty. / Master of Science
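As a rough illustration of the adaptation idea described above (the middleware, not the application, reacts to changing resource levels), the following sketch uses invented names and thresholds; it is not the thesis implementation.

```python
# Hedged sketch: the application registers behaviours once; the middleware
# observes resource snapshots and selects a behaviour, keeping the client
# unaware of when or why adaptation happens.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ResourceSnapshot:
    bandwidth_kbps: float
    battery_pct: float


class AdaptationMiddleware:
    def __init__(self) -> None:
        # Map a predicate over observed resources to a registered behaviour.
        self._rules: Dict[Callable[[ResourceSnapshot], bool], Callable[[], None]] = {}

    def register(self, predicate: Callable[[ResourceSnapshot], bool],
                 behaviour: Callable[[], None]) -> None:
        self._rules[predicate] = behaviour

    def on_resource_change(self, snapshot: ResourceSnapshot) -> None:
        # The middleware, not the application, picks the first matching behaviour.
        for predicate, behaviour in self._rules.items():
            if predicate(snapshot):
                behaviour()
                return


middleware = AdaptationMiddleware()
middleware.register(lambda s: s.bandwidth_kbps < 64, lambda: print("fetch text-only content"))
middleware.register(lambda s: True, lambda: print("fetch full content"))
middleware.on_resource_change(ResourceSnapshot(bandwidth_kbps=32, battery_pct=80))
```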
43

Policy-based approach for context-aware systems

Al-Sammarraie, Mohammed January 2011 (has links)
Pervasive (ubiquitous) computing is a new paradigm where computers are submerged into the background of everyday life. One important aspect of pervasive systems is context-awareness. Context-aware systems are those that can adapt their behaviours according to the current context. Context-aware applications are being integrated into aspects of our everyday activities such as health care, smart homes and transportation. There exists a wide range of context-aware applications, such as mobile phones, learning systems and smart vehicles. Some context-aware systems are critical, since the consequence of failing to identify a given context may be catastrophic. For example, an auto-pilot system is a critical context-aware system; it senses the humidity, clouds and wind speed and accordingly adjusts the altitude, throttle and other parameters. A critical context-aware system therefore has to be provably correct. Policy-based approaches have been used in many applications, but not in context-aware systems. In this research, we want to discover the anatomy (i.e. architecture, structure and operational behaviour) of policy-based management as applied to context-aware systems, and how policies are managed within such a dynamic system. We propose a novel computational model and present its formalisation using the Calculus of Context-aware Ambients (CCA). CCA has been proposed as a suitable mathematical notation to model mobile and context-aware systems. We decided to use CCA for three reasons: (i) in CCA, mobility and context-awareness are primitive constructs and are treated as first-class citizens; (ii) properties of a system can be formally analysed; (iii) CCA specifications are executable, leading to rapid prototyping and early validation of system properties. We then show how policies can be expressed in CCA. For illustration, the event-condition-action (ECA) conceptual policy model is specified in CCA in a natural fashion. We also propose a policy-based architecture for context-aware systems, showing its different components and how they interact. Furthermore, we give the specification in CCA of the policy enforcement mechanism used in our proposed architecture. To evaluate our approach, a real-world case study of an infostation-based mobile learning (mLearning) system is chosen. This mLearning system is deployed across a university campus to enable mobile users to access mobile services (mServices) represented by course materials (lectures, tests and tutorials) and communication services (intelligent message notification and VoIP). Users can access the mServices through their mobile devices (handset phones, PDAs and laptops) regardless of their device type or location within the university campus. We have specified the mLearning system in CCA (i.e. a specification based on the policies of the mServices); afterwards, the specification is simulated using the CCA interpreter tool. We have also developed an animation tool specially designed for the mLearning system; it provides a graphical representation of the CCA processes. In terms of safety and liveness, some important properties of the mLearning system have been validated as a proof of concept.
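The event-condition-action (ECA) policy model mentioned in this abstract can be sketched outside of CCA roughly as follows; the class and field names are illustrative assumptions, not the thesis specification.

```python
# Hedged sketch of ECA enforcement: when an event fires, each policy's
# condition is evaluated against the current context, and the action of every
# matching policy is executed.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Policy:
    event: str                          # e.g. "user_enters_lecture_room" (invented name)
    condition: Callable[[Dict], bool]   # evaluated against the current context
    action: Callable[[Dict], None]      # enforced when the event fires and the condition holds


def enforce(policies: List[Policy], event: str, context: Dict) -> None:
    for policy in policies:
        if policy.event == event and policy.condition(context):
            policy.action(context)


policies = [
    Policy(
        event="user_enters_lecture_room",
        condition=lambda ctx: ctx.get("lecture_in_progress", False),
        action=lambda ctx: print(f"Silence phone of {ctx['user']}"),
    )
]
enforce(policies, "user_enters_lecture_room", {"user": "alice", "lecture_in_progress": True})
```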
44

Foundations of Human-Aware Planning -- A Tale of Three Models

January 2018 (has links)
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning which typically use the human task model alone and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
45

Aware as a Theory of Japanese Aesthetics

Flowers, Johnathan Charles 01 December 2011 (has links)
Aware, as generally conceived in Japanese aesthetics, refers to the felt content within a particular work of art that drives the aesthetic value of that work. This thesis presents a theory of art that places aware as central to the aesthetic experience in the Japanese context, derived from Shinto and Buddhist ontology as well as the aesthetic theories of Motoori Norinaga. This theory is then contrasted with the aesthetic theory of Susanne K. Langer as presented in Philosophy in a New Key, Feeling and Form, and Problems of Art, to provide a full explication of what it means to have an aesthetic experience or create art in the Japanese context.
46

Contention-Aware and Power-Constrained Scheduling for Chip Multicore Processors

Kundan, Shivam 01 December 2019 (has links)
The parallel nature of process execution on chip multiprocessors (CMPs) has considerably boosted application performance in the past decade. Generally, a number of computing resources are shared among the cores of a CMP, such as the last-level cache, buses, and memory. This ensures architectural simplicity while also boosting performance for multi-threaded applications. However, a consequence of sharing computing resources is that concurrently executing applications may suffer performance degradation if their collective resource requirements exceed the total amount of resources available. If resource allocation is not carefully considered, the potential performance gain from having multiple cores may be outweighed by the losses due to contention among processes for shared resources. Furthermore, CMPs with built-in dynamic voltage-frequency scaling (DVFS) may try to compensate for the performance loss by scaling to a higher frequency. When the degradation is due to shared-resource contention, this does not necessarily improve performance, but it does guarantee a significant penalty in power consumption because of the quadratic relation between electrical power and voltage (P ∝ V²·f).
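A back-of-the-envelope illustration of the power argument above, under the assumption that supply voltage must rise with frequency on real parts; the operating points are hypothetical and not taken from the dissertation.

```python
# Hedged sketch: dynamic power scales roughly as P ∝ V^2 · f, so a frequency
# bump that also requires a voltage bump costs more power than it returns in
# raw clock speed; with contention-bound workloads it may return nothing.
def relative_dynamic_power(v: float, f: float, v0: float = 1.0, f0: float = 1.0) -> float:
    """Dynamic power at (v, f) relative to a (v0, f0) baseline operating point."""
    return (v / v0) ** 2 * (f / f0)


# Hypothetical operating points: +25% frequency assumed to need +10% voltage.
print(relative_dynamic_power(v=1.1, f=1.25))  # ~1.51x baseline power for 1.25x frequency
```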
47

AUTOMATED LAYOUT-INCLUSIVE SYNTHESIS OF ANALOG CIRCUITS USING SYMBOLIC PERFORMANCE MODELS

RANJAN, MUKESH January 2005 (has links)
No description available.
48

Designing Energy-Aware Optimization Techniques through Program Behaviour Analysis

Kommaraju, Ananda Varadhan January 2014 (has links) (PDF)
Green computing techniques aim to reduce the power footprint of modern embedded devices, with particular emphasis on processors, the power hot-spots of these devices. In this thesis we propose compiler-driven and profile-driven optimizations that reduce power consumption in a modern embedded processor. We show that these optimizations reduce power consumption in functional units and memory subsystems with very low performance loss. We present three new techniques to reduce power consumption in processors, namely, transition-aware scheduling, leakage reduction in data caches using criticality analysis, and dynamic power reduction in data caches using locality analysis of data regions. A novel instruction scheduling technique to address leakage power consumption in functional units is proposed. This scheduling technique, transition-aware scheduling, is motivated by the idle periods that arise in the utilization of functional units during program execution. A sufficiently long idle period in a functional unit can be exploited to place the unit in a low-power state. This novel scheduling algorithm increases the duration of idle periods without hampering performance and drives power gating in these periods. A power model defined with idle cycles as a parameter shows that this technique saves up to 25% of leakage power with very low performance impact. In modern embedded programs, data regions can be classified as critical and non-critical. Critical data regions significantly impact performance. A new technique to identify such data regions through profiling is proposed. This technique, along with a new criticality-based cache policy, is used to control the power state of the data cache. This scheme allocates non-critical data regions to low-power cache regions, thereby reducing leakage power consumption by up to 40% without compromising performance. The profiling technique is extended to identify data regions that have low locality, as opposed to data regions with high data reuse. A locality-based cache policy, driven by cache parameters such as size and associativity, is proposed. This scheme reduces both dynamic and static power consumption in the cache subsystem, cutting total power consumption in the data caches by 25% without hampering execution time. In this thesis, the problem of power consumption of a program is decoupled from the number of processor cores. The underlying architecture model is simplified to abstract away a variety of processor scenarios. This simplified model can be scaled up to various multi-core architecture models such as chip multi-processors, simultaneous multi-threaded processors, and chip multi-threaded processors, to name a few. The three techniques proposed in this thesis leverage underlying hardware features such as low-power functional units, drowsy caches and split data caches. These techniques reduce the power consumption of a wide range of benchmarks with low performance loss.
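A minimal sketch of the criticality-driven placement idea described above, assuming a simple profile-derived criticality score; the data structures, score, and threshold are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: profiled data regions whose accesses contribute little to
# stalls are steered to a low-power (e.g. drowsy) cache region, the rest to the
# normal-power region. Region names and numbers are invented for illustration.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DataRegion:
    name: str
    accesses: int
    miss_penalty_cycles: int   # hypothetical profile-derived cost per miss


def classify_regions(regions: List[DataRegion], criticality_threshold: float) -> Dict[str, str]:
    placement = {}
    for r in regions:
        # A simple criticality score: total stall contribution of the region.
        score = r.accesses * r.miss_penalty_cycles
        placement[r.name] = "normal_power" if score >= criticality_threshold else "low_power"
    return placement


profile = [DataRegion("hot_matrix", 1_000_000, 12), DataRegion("log_buffer", 5_000, 12)]
print(classify_regions(profile, criticality_threshold=1_000_000))
```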
49

Task Oriented Privacy-preserving (TOP) Technologies Using Automatic Feature Selection

Jafer, Yasser January 2016 (has links)
A large amount of digital information collected and stored in datasets creates vast opportunities for knowledge discovery and data mining. These datasets, however, may contain sensitive information about individuals and, therefore, it is imperative to ensure that their privacy is protected. Most research in the area of privacy-preserving data publishing does not make any assumptions about the intended analysis task applied to the dataset. In many domains, however, such as healthcare and finance, it is possible to identify the analysis task beforehand. Incorporating such knowledge of the ultimate analysis task may improve the quality of the anonymized data while protecting the privacy of individuals. Furthermore, existing research that considers the ultimate analysis task (e.g., classification) is not suitable for high-dimensional data. We show that automatic feature selection (a well-known dimensionality reduction technique) can be utilized to consider both privacy and utility simultaneously. In doing so, we show that feature selection can enhance existing privacy-preserving techniques addressing k-anonymity and differential privacy, protecting privacy while reducing the amount of modification applied to the dataset and hence, in most cases, achieving higher utility. We consider incorporating the concept of privacy-by-design within the feature selection process. We propose techniques that turn filter-based and wrapper-based feature selection into privacy-aware processes. To this end, we build a layer of privacy on top of the regular feature selection process and obtain a privacy-preserving feature selection that is guided not only by accuracy but also by the amount of protected private information. In addition to considering privacy after feature selection, we introduce a framework for a privacy-aware feature selection evaluation measure. That is, we incorporate privacy during feature selection and obtain a list of candidate privacy-aware attribute subsets that consider (and satisfy) both efficacy and privacy requirements simultaneously. Finally, we propose a multi-dimensional, privacy-aware evaluation function which incorporates efficacy, privacy, and dimensionality weights and enables the data holder to obtain the best attribute subset according to its preferences.
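A minimal sketch of a multi-dimensional, privacy-aware evaluation function of the kind described above, with an assumed linear weighting over efficacy, privacy and dimensionality; the weighting form and values are illustrative only and not taken from the thesis.

```python
# Hedged sketch: score a candidate attribute subset by combining classification
# accuracy (efficacy), a normalised privacy-protection measure, and a reward
# for smaller subsets, weighted by the data holder's preferences.
def subset_score(accuracy: float, privacy: float, n_selected: int, n_total: int,
                 w_efficacy: float, w_privacy: float, w_dimensionality: float) -> float:
    """All inputs normalised to [0, 1]; higher is better."""
    dimensionality = 1.0 - n_selected / n_total   # reward smaller attribute subsets
    return w_efficacy * accuracy + w_privacy * privacy + w_dimensionality * dimensionality


# A privacy-first weighting chosen by a hypothetical data holder.
print(subset_score(accuracy=0.86, privacy=0.9, n_selected=8, n_total=40,
                   w_efficacy=0.3, w_privacy=0.5, w_dimensionality=0.2))  # ~0.868
```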
50

Context-aware and adaptive usage control model

Almutairi, Abdulgader January 2013 (has links)
Information protection is a key issue for the acceptance and adoption of pervasive computing systems, where various portable devices such as smart phones, Personal Digital Assistants (PDAs) and laptop computers are used to share information and to access digital resources via wireless connections to the Internet. Because these devices are resource-constrained and highly mobile, changes in the environmental context or device context can affect the security of the system a great deal. A proper security mechanism must be put in place which is able to cope with changing environmental and system context. The Usage CONtrol (UCON) model is the latest major enhancement of the traditional access control models; it enables mutability of subject and object attributes, and continuity of control over the usage of resources. In UCON, the access permission decision is based on three factors: authorisations, obligations and conditions. While authorisations and obligations are requirements that must be fulfilled by the subject and the object, conditions are subject- and object-independent requirements that must be satisfied by the environment. As a consequence, access permission may be revoked (and the access stopped) as a result of changes in the environment, regardless of whether the authorisation and obligation requirements are met. This constitutes a major shortcoming of the UCON model in pervasive computing systems, which constantly strive to adapt to environmental changes so as to minimise disruptions to the user. We propose a Context-Aware and Adaptive Usage Control (CA-UCON) model which extends the traditional UCON model to enable adaptation to environmental changes, with the aim of preserving continuity of access. Indeed, when the authorisation and obligation requirements are fulfilled by the subject and object but the condition requirements fail due to changes in the environmental or system context, our proposed CA-UCON model triggers specific actions in order to adapt to the new situation, so as to ensure continuity of usage. We then propose an architecture for the CA-UCON model, presenting its various components. In this architecture, the adaptation decision is integrated with the usage decision; a comprehensive definition of each component is given and the functions performed by each component are presented. We also propose a novel computational model of our CA-UCON architecture, formally specified as a finite state machine. It demonstrates how the access request of the subject is handled in the CA-UCON model, including details regarding revocation of access and the actions undertaken due to context changes. The extension of the original UCON architecture can be understood from this model. The formal specification of CA-UCON is presented using the Calculus of Context-aware Ambients (CCA). This mathematical notation is considered suitable for modelling mobile and context-aware systems and has been preferred over alternatives for the following reasons: (i) mobility and context awareness are primitive constructs in CCA; (ii) a system's properties can be formally analysed; (iii) most importantly, CCA specifications are executable, allowing early validation of system properties and accelerated development of prototypes. For the evaluation of the CA-UCON model, a real-world case study of a ubiquitous learning (u-learning) system is selected, and we propose a CA-UCON model for the u-learning system.
This model is then formalised in CCA and the resultant specification is executed and analysed using an execution environment of CCA. Finally, we investigate enforcement approaches for the CA-UCON model. We present the CA-UCON reference monitor architecture with its components, and then demonstrate three types of enforcement architectures for the CA-UCON model: a centralised architecture, a distributed architecture and a hybrid architecture. These are discussed in detail, including an analysis of their merits and drawbacks.
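A minimal sketch of the CA-UCON usage decision described above (the thesis itself formalises this in CCA, not in code): access requires authorisations, obligations and conditions to hold, but a failing condition first triggers an adaptation attempt rather than immediate revocation. The names and callbacks below are illustrative assumptions.

```python
# Hedged sketch of the CA-UCON decision flow; the predicates are placeholders
# supplied by the caller, not the thesis's formal definitions.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class UsageRequest:
    subject: str
    obj: str
    context: Dict[str, str]


def usage_decision(req: UsageRequest,
                   authorised: Callable[[UsageRequest], bool],
                   obligations_met: Callable[[UsageRequest], bool],
                   conditions_hold: Callable[[UsageRequest], bool],
                   adapt: Callable[[UsageRequest], bool]) -> str:
    if not (authorised(req) and obligations_met(req)):
        return "deny"
    if conditions_hold(req):
        return "permit"
    # CA-UCON extension: attempt to adapt to the new context before revoking.
    return "permit" if adapt(req) else "revoke"


decision = usage_decision(
    UsageRequest("alice", "course_material", {"location": "off_campus"}),
    authorised=lambda r: True,
    obligations_met=lambda r: True,
    conditions_hold=lambda r: r.context["location"] == "on_campus",
    adapt=lambda r: True,   # e.g. switch to a lower-quality content stream
)
print(decision)  # "permit" after adaptation
```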
