161
Occupancy Sensor System: For Context-aware Computing. Hübinette, Daniel. January 2007.
This master's thesis project, "Occupancy Sensor System", was conducted at Kungliga Tekniska Högskolan (KTH), Stockholm, Sweden, during the period 2007-04-24 – 2007-12-17. The goal of the project was to design an occupancy sensor system that determines whether more than one person is present in a defined region. The output of this system is for use in a context-aware system at the KTH Center for Wireless Systems (Wireless@KTH). The system is important because context-aware systems need specific input concerning the occupancy of spaces, and because this thesis focused on a problem that enables new, complex, and interesting services. Additionally, the specific problem of determining not only whether a space is occupied, but whether the occupancy is zero, one, or many, has not been widely examined previously. The significance of zero occupants, indicating an empty room, has already been recognized as having economic and environmental value in terms of heating, ventilation, air-conditioning, and lighting. However, little effort has been made to differentiate between one person being present and more than one person being present. A context-aware system might be able to use this latter information to infer that a meeting is taking place in a meeting room, that a class is taking place in a classroom, or that an individual is alone in a conference room, classroom, etc., thus enabling context-aware services to change their behavior based upon the differences in these situations. An occupancy sensor system prototype was designed to monitor a boundary using a thermal detector, a Gumstix computer, an analog-to-digital converter prototype board, a laptop computer, and a context broker. Testing and evaluation proved the system to be sound. However, there are still further improvements and tests to be made. These improvements include dynamic configuration of the system, communication between the different system entities, detection algorithms, and code improvements. Tests measuring the accuracy of a detection algorithm and determining optimal detector placement still need to be performed. The next step is to design applications that use the context information provided by the occupancy sensor system and to expand the system to use multiple detectors.
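A minimal sketch of the zero/one/many bookkeeping such a boundary-monitoring system needs, written in Python; the event encoding (+1 entry, -1 exit) and all names are illustrative assumptions, not taken from the thesis:

    # Track region occupancy from directional boundary-crossing events, as a
    # thermal detector watching a doorway might report them.
    class OccupancyTracker:
        def __init__(self):
            self.count = 0

        def observe(self, crossing):
            """crossing: +1 for an entry, -1 for an exit."""
            self.count = max(0, self.count + crossing)

        def state(self):
            # The three-way distinction the thesis is interested in.
            if self.count == 0:
                return "empty"
            if self.count == 1:
                return "single"
            return "multiple"

    tracker = OccupancyTracker()
    for event in (+1, +1, -1):          # two entries, one exit
        tracker.observe(event)
    print(tracker.state())              # -> "single"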
162
Context dependency analysis in ubiquitous computing. Baloch, Raheel Ali. 17 February 2012.
To provide users with personalized adaptive services using only the computing resources accessible in a cloud environment, context-aware applications need to assimilate both accessed and derived context, i.e., combinations of more than one piece of sensed data and information from the environment. Context data dependency, the dependency that arises between a context data producer and a context data consumer, may be introduced into a system for numerous reasons. As the number of context dependencies for a service increases, the system becomes more complex to manage. The thesis addresses how to identify context dependencies, how to represent them, and how to reduce them in a system. In the first part of the thesis, we present two efficient approaches for determining context dependency relations among various services in a ubiquitous computing environment, to help better analyse pervasive services. One approach is based on graph theory and uses topological sorting to determine the context dependencies. The second approach is based on solving constraint networks and determines whether an entity is affected when some other entity changes its state, i.e., it captures the dynamic nature of context dependency. In the second part of the thesis, we present a model for representing context dependencies within a system. Our model is based on set theory and first-order predicate logic. The context dependency representation model also represents alternative sources for context acquisition that can be utilized when the preferred context producers are no longer available to serve the desired context to the relevant context consumer. Further, we try to reduce context dependencies by presenting the idea of a profile context, which is based on the proposal of an open framework for context acquisition, management, and distribution. This heuristic approach is based on the idea of utilizing mobile nodes in an ad hoc overlay network, with more resources than the context producer itself, to store various contextual information under the banner of a profile context and, further, to provide the profile context instead of each piece of context individually, based on the queries the nodes receive from context consumers. By bringing together context information and context updates from various sources, support for context-aware decisions can be implemented efficiently in a mobile environment by addressing the issues of context dependency using the profile context.
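As an illustration of the graph-theoretic approach mentioned above, the sketch below orders context services with a topological sort over a producer-to-consumer dependency graph. The example services and the choice of Kahn's algorithm are assumptions made for illustration, not details taken from the thesis:

    from collections import deque

    def topological_order(dependencies):
        """dependencies: dict mapping each service to the services it depends on."""
        nodes = set(dependencies) | {d for deps in dependencies.values() for d in deps}
        indegree = {n: 0 for n in nodes}
        consumers = {n: [] for n in nodes}
        for service, deps in dependencies.items():
            for dep in deps:
                indegree[service] += 1
                consumers[dep].append(service)
        ready = deque(n for n in nodes if indegree[n] == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for c in consumers[n]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    ready.append(c)
        if len(order) != len(nodes):
            raise ValueError("cyclic context dependency")
        return order

    deps = {"meeting_detector": ["location", "calendar"], "location": ["gps"], "calendar": []}
    print(topological_order(deps))   # e.g. ['gps', 'calendar', 'location', 'meeting_detector']

Producers appear before the services that consume them, so context can be resolved in that order; a cycle signals a dependency that cannot be satisfied.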
163
Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments. Gargesa, Padmashri. 27 March 2013.
Reservoir Computing (RC) is an emerging machine learning paradigm in which a fixed kernel, built from a randomly connected "reservoir" with sufficiently rich dynamics, is capable of expanding the problem space in a non-linear fashion to a higher-dimensional feature space. These features can then be interpreted by a linear readout layer that is trained by a gradient descent method. In comparison to traditional neural networks, only the output layer needs to be trained, which leads to a significant computational advantage. In addition, the short-term memory of the reservoir dynamics has the ability to transform a complex temporal input state space into a simple non-temporal representation. Adaptive real-time systems pose multi-stage decision problems in which an agent is trained to achieve a preset goal by performing an optimal action at each timestep. In such problems, the agent learns through continuous interactions with its environment. Conventional techniques for solving such problems become computationally expensive or may not converge if the state space being considered is large or partially observable, or if short-term memory is required for optimal decision making. The objective of this thesis is to use reservoir computers to solve such goal-driven tasks, where no error signal can be readily calculated to apply gradient descent methodologies. To address this challenge, we propose a novel reinforcement learning approach in combination with reservoir computers built from simple Boolean components. Such reservoirs are of interest because they have the potential to be fabricated by self-assembly techniques. We evaluate the performance of our approach in both Markovian and non-Markovian environments and compare it with that of an agent trained through traditional Q-learning. We find that the reservoir-based agent performs successfully in these problem contexts and even performs marginally better than Q-learning agents in certain cases. Our proposed approach retains the advantage of traditional parameterized dynamic systems in successfully modeling embedded state-space representations while eliminating the complexity involved in training traditional neural networks. To the best of our knowledge, our method of training a reservoir readout layer through an on-policy bootstrapping approach is unique in the field of random Boolean network reservoirs.
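A minimal sketch of the reservoir idea described above, assuming a random Boolean network with fixed random wiring and a trainable linear readout; for brevity the readout here is fit by least squares on a toy target, standing in for the reward-driven, on-policy training the thesis actually proposes:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 64, 2                                   # reservoir size, inputs per node
    wiring = rng.integers(0, N, size=(N, K))       # fixed random connections
    rules = rng.integers(0, 2, size=(N, 2 ** K))   # fixed random Boolean update rules
    state = rng.integers(0, 2, size=N)

    def step(state, u):
        """Advance the reservoir one tick; u is a single binary input bit."""
        idx = (state[wiring] * (2 ** np.arange(K))).sum(axis=1)
        new = rules[np.arange(N), idx]
        new[0] = u                                 # input injected at node 0 (an assumption)
        return new

    # Drive the reservoir and collect its states; only the linear readout is trained.
    inputs = rng.integers(0, 2, size=200)
    states = []
    for u in inputs:
        state = step(state, u)
        states.append(state.copy())
    X = np.array(states, dtype=float)
    y = inputs.astype(float)                       # toy target: echo the input bit
    readout, *_ = np.linalg.lstsq(X, y, rcond=None)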
164
COFFEE: Context Observer for Fast Enthralling Entertainment. Lenz, Anthony M. 01 June 2014.
Desktops, laptops, smartphones, tablets, and the Kinect, oh my! With so many devices available to the average consumer, the limitations and pitfalls of each interface are becoming more apparent. Swimming in devices, users often have to stop and think about how to interact with each device to accomplish the task at hand. The goal of this thesis is to minimize users' cognitive effort in handling multiple devices by creating a context-aware hybrid interface. The context-aware system is explored through the hybridization of gesture and touch interfaces using a multi-touch coffee table and the next-generation Microsoft Kinect. Coupling gesture and touch interfaces creates a novel multimodal interface that can leverage the benefits of both gestures and touch. The hybrid interface is able to exploit the more intuitive and dynamic use of gestures while maintaining the precision of a tactile touch interface. Joining these two interfaces in an intuitive and context-aware way opens up a new avenue for design and innovation.
165
Requirements Analysis for a Context-Aware Multi-Agency Emergency Response System. Way, Steven C. 10 1900.
Society faces many natural and man-made disasters which can have a large impact in terms of deaths, injuries, monetary losses, psychological distress, and economic effects. Society needs to find ways to prevent or reduce the negative impact of these disasters as much as possible. Information systems have been used to assist emergency response to a certain degree in some cases. However, there is still a lack of understanding of how to build an effective emergency response system. To identify the basic requirements of such systems, a grounded theory research method was used for data collection and analysis. Data from firsthand interviews and observations was combined with the literature and analyzed to discover several emergent issues and concepts regarding disaster response. The issues and concepts were organized into four categories: i) context-awareness; ii) multi-party relationships; iii) task-based coordination; and iv) information technology support, which together identify the needs of disaster response coordination. Using evidence from the data, these factors were related to one another to develop a framework for context-aware multi-party coordination systems (CAMPCS). This study contributes to the field of emergency management, as the framework represents a comprehensive theory of disaster response coordination that can guide future research on emergency management coordination. / Doctor of Philosophy (PhD)
166
Context-aware Learning from Partial Observations. Gligorijevic, Jelena. January 2018.
The Big Data revolution has brought an increasing availability of data sets of unprecedented scale, enabling researchers in the machine learning and data mining communities to learn from such data and to provide data-driven insights, decisions, and predictions. However, they face numerous challenges, including dealing with missing observations while learning from such data and making predictions for previously unobserved or rare ("tail") examples; these challenges arise in a large span of domains, including climate, medical, social-network, consumer, and computational-advertising domains. In this thesis, we address this important problem and propose tools for handling partially observed or completely unobserved data by exploiting information from its context. Here, we assume that the context is available in the form of a network or sequence structure, or as additional information accompanying individual data examples. First, we propose two structured regression methods for dealing with missing values in partially observed temporal attributed graphs, based on the Gaussian Conditional Random Fields (GCRF) model, which draw power from the network/graph structure (context) of the unobserved instances. The Marginalized Gaussian Conditional Random Fields (m-GCRF) model is designed for dealing with missing response-variable values (labels) at graph nodes, whereas the Deep Feature Learning GCRF model is able to deal with missing values in explanatory variables, learning a feature representation jointly with the complex interactions of nodes in a graph and with the overall GCRF objective. Next, we consider unsupervised and supervised shallow and deep neural models for monetizing web search. We focus on two sponsored-search tasks: (i) query-to-ad matching, where we propose a novel shallow neural embedding model, worLd2vec, with improved utilization of local query context (location), and (ii) click-through-rate prediction for ads and queries, where the Deeply Supervised Semantic Match model is introduced to address the click-through-rate prediction problem for unobserved and tail queries, while jointly learning the semantic embeddings of a query and an ad as well as their corresponding click-through rate. Finally, we propose a deep learning approach for ranking investigators based on their expected enrollment performance in new clinical trials. It learns from heterogeneous (structured and free-text) investigator- and trial-related data sources, is applicable to matching investigators to new trials from partial observations, and supports recruitment of experienced investigators as well as new investigators with no previous experience in enrolling patients in clinical trials. Experimental evaluation of the proposed methods on a number of synthetic and diverse real-world data sets shows that they outperform their alternatives. / Computer and Information Science
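For reference, the association-plus-interaction form commonly used for GCRF models such as those above can be written as follows (a sketch in our own notation; the exact objectives used in the thesis may differ):

    P(\mathbf{y} \mid X) = \frac{1}{Z(X)} \exp\Big( -\sum_{i} \alpha \big(y_i - R_i(X)\big)^2 \;-\; \sum_{i,j} \beta \, S_{ij}(X) \big(y_i - y_j\big)^2 \Big)

The first term ties each node's output to an unstructured predictor R_i(X); the second smooths the outputs of linked nodes using graph weights S_ij(X). Because the exponent is quadratic in y, the distribution is multivariate Gaussian, so inference is closed-form and missing labels can be marginalized out, which is what a marginalized variant such as m-GCRF can exploit.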
167
Mobility Management Scheme for Context-Aware Transactions in Pervasive and Mobile Cyberspace. Younas, M.; Awan, Irfan U. January 2013.
Rapid advances in software systems, wireless networks, and embedded devices have led to the development of a pervasive and mobile cyberspace that provides an infrastructure for anywhere/anytime service provisioning in different domains such as engineering, commerce, education, and entertainment. This style of service provisioning enables users to move freely between geographical areas and to continuously access information and conduct online transactions. However, such high mobility may cause performance and reliability problems during the execution of transactions. For example, the unavailability of sufficient bandwidth can result in the failure of transactions when users move from one area (cell) to another. We present a context-aware transaction model that dynamically adapts to the users' needs and execution environments. Accordingly, we develop a new mobility management scheme that ensures seamless connectivity and reliable execution of context-aware transactions during user mobility. The proposed scheme is designed and developed using a combination of different queuing models. We conduct various experiments in order to show that the proposed scheme optimizes the mobility management process and increases the throughput of context-aware transactions.
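The abstract does not name the specific queuing models that are combined; as a hedged illustration of the kind of analysis such a scheme rests on, the toy M/M/1 calculation below (all numbers invented) estimates the delay a cell adds to transaction traffic as offered load approaches capacity:

    def mm1_metrics(arrival_rate, service_rate):
        """Return (utilization, mean number in system, mean time in system) for M/M/1."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate must be below service rate")
        rho = arrival_rate / service_rate
        mean_in_system = rho / (1 - rho)
        mean_time = 1.0 / (service_rate - arrival_rate)
        return rho, mean_in_system, mean_time

    # Transactions arriving at 40/s in a cell that can serve 50/s:
    print(mm1_metrics(40.0, 50.0))   # -> (0.8, 4.0, 0.1), i.e. 100 ms mean time in system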
168
Service-Oriented Sensor-Actuator Networks. Rezgui, Abdelmounaam. 09 January 2008.
In this dissertation, we propose service-oriented sensor-actuator networks (SOSANETs) as a new paradigm for building the next generation of customizable, open, interoperable sensor-actuator networks. In SOSANETs, nodes expose their capabilities to applications in the form of service profiles. A node's service profile consists of a set of services (i.e., sensing and actuation capabilities) that it provides and the quality of service (QoS) parameters associated with those services (delay, accuracy, freshness, etc.). SOSANETs provide the benefits of both application-specific SANETs and generic SANETs. We first define a query model and an architecture for SOSANETs. The proposed query model offers a simple, uniform query interface whereby applications specify sensing and actuation queries independently of any specific deployment of the underlying SOSANET. We then present μRACER (Reliable Adaptive serviCe-driven Efficient Routing), a routing protocol suite for SOSANETs. μRACER consists of three routing protocols, namely SARP (Service-Aware Routing Protocol), CARP (Context-Aware Routing Protocol), and TARP (Trust-Aware Routing Protocol). SARP uses an efficient service-aware routing approach that aggressively reduces downstream traffic by translating service profiles into efficient paths. CARP supports QoS by dynamically adapting each node's routing behavior and service profile according to the current context of that node, i.e., the number of pending queries and the number and type of messages to be routed. Finally, TARP achieves high end-to-end reliability through a scalable reputation-based approach in which each node is able to locally estimate the next hop of the most reliable path to the sink. We also propose query optimization techniques that contribute to the efficient execution of queries in SOSANETs. To evaluate the proposed service-oriented architecture, we implemented TinySOA, a prototype SOSANET built on top of TinyOS with μRACER as its routing mechanism. TinySOA is designed as a set of layers with a loose interaction model that enables several cross-layer optimization options. We conducted an evaluation of TinySOA that included a comparison with TinyDB. The obtained empirical results show that TinySOA achieves significant improvements in many aspects, including energy consumption, scalability, reliability, and response time. / Ph. D.
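A hedged sketch of the service-profile idea in Python; the field names, QoS attributes, and matching rule below are assumptions made for illustration, not the dissertation's actual interfaces:

    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str            # e.g. "temperature" or "actuate_valve"
        delay_ms: float      # QoS: worst-case reporting delay
        accuracy: float      # QoS: e.g. +/- degrees
        freshness_s: float   # QoS: maximum age of a reading

    @dataclass
    class ServiceProfile:
        node_id: str
        services: list

    def matching_nodes(profiles, wanted, max_delay_ms):
        """Return nodes whose profile offers `wanted` within the delay bound."""
        return [p.node_id for p in profiles
                if any(s.name == wanted and s.delay_ms <= max_delay_ms for s in p.services)]

    profiles = [ServiceProfile("n1", [Service("temperature", 50, 0.5, 10)]),
                ServiceProfile("n2", [Service("temperature", 200, 0.1, 5)])]
    print(matching_nodes(profiles, "temperature", 100))   # -> ['n1']

A query layer like the one described above would resolve sensing and actuation queries against such profiles rather than against fixed node addresses.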
169
Deep Learning Models for Context-Aware Object Detection. Arefiyan Khalilabad, Seyyed Mostafa. 15 September 2017.
In this thesis, we present ContextNet, a novel general object detection framework for incorporating context cues into a detection pipeline. Current deep learning methods for object detection exploit state-of-the-art image recognition networks to classify a given region of interest (ROI) into predefined classes and regress a bounding box around it, without using any information about the corresponding scene. ContextNet is based on the intuitive idea that cues about the general scene (e.g., kitchen or library) change the priors on the presence or absence of certain object classes. We provide a general means of integrating this notion into the decision process for a given ROI by using a network pretrained on scene-recognition datasets in parallel with a pretrained network that extracts object-level features for the corresponding ROI. Using comprehensive experiments on PASCAL VOC 2007, we demonstrate the effectiveness of our design choices: the resulting system outperforms the baseline for most object classes and reaches 57.5 mAP (mean average precision) on the PASCAL VOC 2007 test set, compared with 55.6 mAP for the baseline. / MS / The object detection problem is to find objects of interest in a given image and draw boxes around them with object labels. With the emergence of deep learning in recent years, current object detection methods rely on deep learning technologies. The detection process is based solely on features extracted from several thousand regions in the given image. We propose a novel framework for incorporating scene information in the detection process. For example, if we know an image was taken in a kitchen, the probability of seeing a cow or an airplane decreases and the probability of observing plates and people increases. Our new detection network uses this intuition to improve detection accuracy. Using extensive experiments, we show that the proposed methods outperform the baseline for almost all object types.
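A schematic sketch of how scene context can shift detection decisions; the thesis integrates features from a pretrained scene network into the decision process, whereas this toy example simply reweights one ROI's class scores by invented scene-conditioned priors:

    import numpy as np

    classes = ["cow", "plate", "person"]
    roi_scores = np.array([0.30, 0.35, 0.35])                 # softmax output for one ROI (made up)
    scene_prior = {"kitchen": np.array([0.05, 0.60, 0.35])}   # made-up class priors per scene

    def rescore(roi_scores, scene):
        combined = roi_scores * scene_prior[scene]     # one simple way to inject scene context
        return combined / combined.sum()

    print(dict(zip(classes, rescore(roi_scores, "kitchen").round(3))))
    # 'cow' is suppressed and 'plate' boosted once the scene is known to be a kitchen.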
170
Malleable Contextual Partitioning and Computational Dreaming. Brar, Gurkanwal Singh. 20 January 2015.
Computer architecture is entering an era in which hundreds of processing elements (PEs) can be integrated onto a single chip, even as decades of steady advances in instruction- and thread-level parallelism are coming to an end. And yet conventional methods of parallelism fail to scale beyond 4-5 PEs, well short of the levels of parallelism found in the human brain. The human brain is able to maintain constant real-time performance as cognitive complexity grows virtually unbounded throughout our lifetime. Our underlying thesis is that contextual categorization leading to simplified algorithmic processing is crucial to the brain's performance efficiency. But since the overheads of such reorganization are unaffordable in real time, we also observe the critical role of sleep and dreaming in the lives of all intelligent beings. Based on the importance of dream sleep in memory consolidation, we propose that it is also responsible for contextual reorganization. We target mobile-device applications that can be personalized to the user, including speech, image, and gesture recognition, as well as other kinds of personalized classification, which are arguably the foundation of intelligence. These algorithms rely on a knowledge database of symbols, where the database size determines the level of intelligence. Essential to achieving intelligence and a seamless user interface, however, is that real-time performance be maintained. Observing this, we define our chief performance goal as maintaining constant real-time performance against ever-increasing algorithmic and architectural complexity. Our solution is a method for Malleable Contextual Partitioning (MCP) that enables closer personalization to user behavior. We conceptualize a novel architectural framework, the Dream Architecture for Lateral Intelligence (DALI), that demonstrates the MCP approach. The DALI implements a dream phase to execute MCP in ideal MISD parallelism and reorganize its architecture to enable contextually simplified real-time operation. With speech recognition as an example application, we show that the DALI succeeds in achieving the performance goal, as it maintains constant real-time recognition, scaling almost ideally, with up to 16 PEs and a vocabulary of up to 220 words. / Master of Science
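A toy sketch of the contextual-partitioning intuition: an offline "dream" phase regroups the knowledge base by observed context so that the real-time phase searches only the active partition. The contexts, vocabulary, and partitioning rule are invented for illustration and are not taken from the thesis:

    usage_log = [("home", "lights"), ("home", "music"), ("car", "navigate"),
                 ("car", "music"), ("home", "thermostat")]

    def dream_repartition(log):
        """Offline phase: group known symbols by the context they were used in."""
        partitions = {}
        for context, word in log:
            partitions.setdefault(context, set()).add(word)
        return partitions

    def recognize(utterance, context, partitions):
        """Real-time phase: match only against the (much smaller) active partition."""
        candidates = partitions.get(context, set())
        return utterance if utterance in candidates else None

    partitions = dream_repartition(usage_log)
    print(recognize("music", "car", partitions))   # -> 'music'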