121
Towards Internet of Things Interaction Framework Using Geometric Annotated Multimedia Objects. Rahman, Abu Saleh Md Ma, January 2017 (has links)
The prevalent visions of ambient intelligence leverage natural interactions between users and available services in a smart space. In recent years, we have seen huge interest from industry and academia in using handheld devices to interact with things, places and people in the real world. To facilitate such interactions, things are usually annotated with RFID tags or visual markers. These tags or markers are read by a handheld device equipped with an integrated RFID reader or a camera, in order to fetch related information and initiate further actions. Interacting with the Internet of Things (IoT) in a real environment has become increasingly desirable and feasible. This thesis contributes to the domain of physical interactions with IoT; however, we use a spatial-geometric approach instead of RFID- or marker-based solutions. Using this approach, a user can, for example, point his/her handheld device at an annotated thing from a distance for the purpose of interaction. The pointing direction and location are determined by fusing the mobile device's position with its accelerometer data. To annotate things, their geometric coordinates are specified and related information or services are associated with them. In this thesis, we present a comprehensive and extensible framework to integrate various physical interactions with IoT into multimedia applications. The framework supports the implementation of pointMe, touchMe, and context-aware interactions with geometrically annotated IoT. We define specific methods and practices that can be incorporated in order to build the interactions. We realize smart home, atlas learning, presentation interaction, smart haptic interaction, and learning-based video interaction game prototypes in order to perform experiments and demonstrate the applicability and potential of the proposed geometric-based annotation approach. In the analysis of the interaction techniques of the prototypes, we present the advantages and disadvantages of the geometric-based annotation of IoT as seen by potential users, in comparison to approaches based on RFID tags or visual markers.
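As a rough illustration of how a pointMe-style selection could be resolved from the fused position and orientation data, the following sketch picks the annotated thing whose bearing lies closest to the pointing direction. The object names, coordinates, angular threshold, and the 2-D simplification are assumptions for illustration, not details taken from the thesis.

import math

# Hypothetical geometrically annotated things: coordinates and an attached service.
# All values are illustrative; a real deployment would use the framework's own
# annotation store and a full 3-D position/orientation model.
ANNOTATED_THINGS = {
    "lamp":      {"pos": (2.0, 3.5), "service": "toggle_light"},
    "projector": {"pos": (5.0, 1.0), "service": "start_presentation"},
}

def resolve_pointme(device_pos, heading_deg, max_angle_deg=10.0):
    """Return the annotated thing the handheld is pointing at, or None.

    device_pos  -- (x, y) position of the handheld, e.g. from indoor localisation
    heading_deg -- pointing direction derived from accelerometer/compass fusion
    """
    best, best_angle = None, max_angle_deg
    for name, thing in ANNOTATED_THINGS.items():
        dx = thing["pos"][0] - device_pos[0]
        dy = thing["pos"][1] - device_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest absolute angular difference between pointing direction and bearing.
        diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if diff < best_angle:
            best, best_angle = name, diff
    return best

print(resolve_pointme((0.0, 0.0), 60.0))  # -> 'lamp'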
122
Design Automation Flow using Library Adaptation for Variation Aware Logic Synthesis. Atluri, Lava Kumar, 03 June 2014 (has links)
No description available.
123
Delay-Aware Multi-Path Routing in a Multi-Hop Network: Algorithms and Applications. Liu, Qingyu, 21 June 2019 (has links)
Delay is known to be a critical performance metric for various real-world routing applications, including multimedia communication and freight delivery. Provisioning delay-minimal (or at least delay-bounded) routing services for all traffic of an application is highly important. As a basic paradigm of networking, multi-path routing has been proven to achieve lower delay than single-path routing, since traffic congestion can be avoided. However, to the best of our knowledge, (i) many existing delay-aware multi-path routing studies only consider the aggregate traffic delay. Considering that even the solution achieving the optimal aggregate traffic delay may have an unbounded delay for certain individual traffic units, those studies may be insufficient in practice; besides, (ii) most existing studies which optimize or bound the delays of all traffic are best-effort, and the achieved solutions have no theoretical performance guarantee.
In this dissertation, we study four delay-aware multi-path routing problems, taking the delay performance of all traffic into account. Three of them are in communication and one is in transportation. To the best of our knowledge, our study differs from all related ones in that we are the first to study these four fundamental problems. Although we prove that the studied problems are all NP-hard, we design approximation algorithms with theoretical performance guarantees for each of them. Specifically, we claim the following contributions.
Minimize maximum delay and average delay. First, we consider a single-unicast setting where, in a multi-hop network, a sender needs to use multiple paths to stream a flow at a fixed rate to a receiver. Two important delay metrics are the average sender-to-receiver delay and the maximum sender-to-receiver delay. Existing results say that the two delay metrics of a flow cannot both be within bounded-ratio gaps to the optimal. In comparison, we design three different flow solutions, each of which can minimize the two delay metrics simultaneously within a $(1/\epsilon)$-ratio gap to the optimal, at the cost of only delivering a $(1-\epsilon)$-fraction of the flow, for any user-defined $\epsilon\in(0,1)$. The gap $(1/\epsilon)$ is proven to be at least near-tight, and we further show that our solutions can be extended to the multiple-unicast setting.
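As an illustration of the two metrics (notation assumed here, not taken verbatim from the dissertation): for a flow decomposed into paths $P\in\mathcal{P}$ with rates $x_P\ge 0$ and path delays $d_P$, the average delay is $D_{avg}=\sum_{P}x_P d_P/\sum_{P}x_P$ and the maximum delay is $D_{max}=\max\{d_P : x_P>0\}$; a solution optimizing one metric alone can be arbitrarily far from optimal on the other, which is what makes the simultaneous guarantee above non-trivial.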
Minimize Age-of-Information (AoI). Second, we consider a single-unicast setting where, in a multi-hop network, a sender needs to use multiple paths to periodically send a batch of data to a receiver. We study a newly proposed delay-sensitive networking performance metric, AoI, defined as the elapsed time since the generation of the last received data. We consider the problem of minimizing AoI subject to throughput requirements, which we prove is NP-hard. We note that our AoI problem differs from existing ones in that we are the first to consider the batch generation of data and multi-path communication. We develop both an optimal algorithm with a pseudo-polynomial time complexity and an approximation framework with a polynomial time complexity. Our framework can build upon any polynomial-time $\alpha$-approximation algorithm of the maximum delay minimization problem to construct an $(\alpha+c)$-approximate solution for minimizing AoI. Here $c$ is a constant dependent on the throughput requirements.
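In the usual formulation (notation assumed for illustration), the age at time $t$ is $\mathrm{AoI}(t)=t-g(t)$, where $g(t)$ is the generation time of the most recently received batch; the age therefore grows linearly between receptions and drops to the sender-to-receiver delay of a batch at the moment that batch is fully received.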
Maximize network utility. Third, we consider a multiple-unicast setting where, in a multi-hop network, there exist many network users. Each user requires a sender to use multiple paths to stream a flow to a receiver, incurring a utility that is a function of the experienced maximum delay or the achieved throughput. Our objective is to maximize the aggregate utility of all users under throughput requirements and maximum delay constraints. We observe that it is NP-complete either to construct an optimal solution under relaxed maximum delay constraints or relaxed throughput requirements, or to figure out a feasible solution with all constraints satisfied. Hence it is non-trivial even to obtain approximate solutions satisfying relaxed constraints in polynomial time. We develop a polynomial-time approximation algorithm. Our algorithm obtains solutions with constant approximation ratios under realistic conditions, at the cost of violating constraints by up to constant ratios.
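Schematically (symbols assumed here for illustration), the problem reads as maximizing $\sum_k U_k$, where each user $k$'s utility $U_k$ is a function of its maximum delay $D_k$ or its throughput $R_k$, subject to $R_k\ge r_k$ and $D_k\le d_k$ for given throughput requirements $r_k$ and delay bounds $d_k$.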
Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a common truck operation scenario where a truck is driving in a national highway network to fulfill multiple transportation tasks in order. We study an NP-hard timely eco-routing problem of minimizing total fuel consumption under task pickup and delivery time window constraints. We note that optimizing task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design a fast and efficient heuristic. We characterize conditions under which the solution of our heuristic must be optimal, and further prove its optimality gap in case the conditions are not met. We simulate a heavy-duty truck driving across the US national highway system, and empirically observe that the fuel consumption achieved by our heuristic can be 22% less than that achieved by the fastest-/shortest-path baselines. Furthermore, the fuel saving of our heuristic as compared to the baselines is robust to the number of tasks. / Doctor of Philosophy / We consider a network modeled as a directed graph, where it takes time for data to traverse each link in the network. This models many critical applications both in the communication area and in the transportation field. For example, both the European education network and the US national highway network can be modeled as directed graphs. We consider a scenario where a source node is required to send multiple (a set of) data packets to a destination node through the network as fast as possible, possibly using multiple source-to-destination paths. In this dissertation we study four problems, all of which figure out routing solutions to send the set of data packets, with an objective of minimizing the experienced travel time or subject to travel time constraints. Although all four problems are NP-hard, we design approximation algorithms to solve them and obtain solutions with theoretically bounded gaps as compared to the optimal. The first three problems are in the communication area, and the last problem is in the transportation field. We claim the following specific contributions.
Minimize maximum delay and average delay. First, we consider the setting of simultaneously minimizing the average travel time and the worst (largest) travel time of sending the set of data packets from source to destination. Existing results say that the two travel time metrics cannot both be minimized within bounded-ratio gaps to the optimal. In comparison, we design three different routing solutions, each of which can minimize the two travel time metrics simultaneously within a constant-bounded ratio gap to the optimal, but at the cost of only delivering a portion of the data.
Minimize Age-of-Information (AoI). Second, we consider the problem of minimizing a newly proposed travel-time-sensitive performance metric, i.e., AoI, which is the elapsed time since the generation of the last received data. Our AoI study differs from existing ones in that we are the first to consider a set of data and multi-path routing. We develop both an optimal algorithm with a pseudo-polynomial time complexity and an approximation framework with a polynomial time complexity.
Maximize network utility. Third, we consider a more general setting with multiple source-destination pairs.
Each source incurs a utility that is a function of the experienced travel time or the achieved throughput of sending data to its destination. Our objective is to maximize the aggregate utility under throughput requirements and travel time constraints. We develop a polynomial-time approximation algorithm, at the cost of violating constraints by up to constant ratios. It is non-trivial to design such algorithms, as we prove that it is NP-complete either to construct an optimal solution under relaxed delay constraints or relaxed throughput requirements, or to figure out a feasible solution with all constraints satisfied.
Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a truck and multiple transportation tasks in order, where each task requires the truck to pick up cargo at a source and deliver it to a destination, each in a timely manner. The need to coordinate task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design an efficient heuristic. We characterize conditions under which the solution of our heuristic must be optimal, and further prove its performance gap as compared to the optimal in case the conditions are not met.
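In schematic form (symbols assumed for illustration), the timely eco-routing problem selects a route and the task execution times so as to minimize $\sum_e F_e$, the total fuel burned over the traversed road segments $e$, subject to every task's pickup and delivery occurring within its time window.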
124
Resource Optimized Scheduling For Enhanced Power Efficiency And Throughput On Chip Multi Processor Platforms. Kundan, Shivam, 01 May 2024 (has links) (PDF)
The parallel nature of process execution on Chip Multi-Processors (CMPs) has boosted levels of application performance far beyond the capabilities of erstwhile single-core designs. Generally, CMPs offer improved performance by integrating multiple simpler cores onto a single die that share certain computing resources, such as last-level caches, data buses, and main memory. This ensures architectural simplicity while also boosting performance for multi-threaded applications. However, a major trade-off associated with this approach is that concurrently executing applications incur performance degradation if their collective resource requirements exceed the total amount of resources available to the system. If dynamic resource allocation is not carefully considered, the potential performance gain from having multiple cores may be outweighed by the losses due to contention for allocation of shared resources. Additionally, CMPs with inbuilt dynamic voltage-frequency scaling (DVFS) mechanisms may try to compensate for the performance bottleneck by scaling to higher clock frequencies. For performance degradation due to shared-resource contention, this does not necessarily improve performance but does incur a significant penalty in power consumption, due to the quadratic relation between electrical power and voltage (P_dynamic ∝ V^2 * f).
This dissertation presents novel methodologies for balancing the competing requirements of high performance, fairness of execution, and enforcement of priority, while also ensuring the overall power efficiency of CMPs. Specifically, we (1) analyze the problem of resource interference during concurrent process execution and propose two fine-grained scheduling methodologies for improving overall performance and fairness, (2) develop an approach for enforcement of priority (i.e., minimum performance) for specific processes while avoiding resource starvation for others, and (3) present a machine-learning approach for maximizing the power efficiency (performance-per-Watt) of CMPs through estimation of a workload's performance and power consumption limits at different clock frequencies.
As modern computing workloads become increasingly dynamic, and computers themselves become increasingly ubiquitous, the problem of finding the ideal balance between performance and power consumption of CMPs is of particular relevance today, especially given the unprecedented proliferation of embedded devices for use in Internet-of-Things, edge computing, smart wearables, and even exotic experiments such as space probes comprised entirely of a CMP, sensors, and an antenna ("space chips"). Additionally, reducing power consumption while maintaining constant performance can contribute to addressing the growing problem of dark silicon.
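A minimal sketch of the kind of frequency selection such estimation enables is given below; the P ∝ V^2 * f relation is taken from the abstract, while the operating points, the synthetic workload, and the simple selection rule are illustrative assumptions rather than the dissertation's actual machine-learning method.

def dynamic_power(v, f, c_eff=1.0):
    """Dynamic CMOS power: P = C_eff * V^2 * f (arbitrary units)."""
    return c_eff * v * v * f

# Hypothetical DVFS operating points: frequency (GHz) -> required voltage (V).
OPERATING_POINTS = {1.0: 0.80, 1.5: 0.90, 2.0: 1.00, 2.5: 1.15}

def best_operating_point(throughput_at):
    """Pick the frequency maximising estimated throughput per watt.

    throughput_at -- function mapping frequency to estimated workload throughput;
                     for memory-bound workloads it flattens at higher frequencies.
    """
    return max(OPERATING_POINTS,
               key=lambda f: throughput_at(f) / dynamic_power(OPERATING_POINTS[f], f))

# A memory-bound workload whose throughput saturates around 1.5 GHz.
saturating = lambda f: min(f, 1.5)
print(best_operating_point(saturating))  # -> 1.0: higher clocks only add power here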
125
Verification of Data-aware Business Processes in the Presence of Ontologies. Santoso, Ario, 14 November 2016 (has links) (PDF)
The interplay of data, processes and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Much research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed throughout the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This situation can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging.
In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that is able to elegantly accommodate a plethora of inconsistency-aware semantics based on the well-known notion of repair, and this leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs that take into account contextual information during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis of the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables us to have a high-level conceptual view of the evolution of the underlying system. We provide not only theoretical results but also an implementation of the SEDAP concept.
We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established framework for data-aware processes), under suitable boundedness assumptions on the number of objects freshly introduced into the system as it evolves. Notably, all of the proposed GKAB extensions have no negative impact on computational complexity.
126
Analys av metoder för att beräkna livsmedels vattenfotavtryck. Lundmark, Lina, January 2019 (has links)
Water is a necessary resource for all life on Earth. With a growing population, freshwater use is also expected to increase, which requires that existing water resources be managed sustainably. The agricultural sector is currently the largest consumer of water, so it is important to make consumers aware of the water use involved in food production and thereby increase knowledge of how water is used today. One tool for assessing the environmental impact of water use is the so-called water footprint. In recent years, several calculation methods for water footprints have emerged, and they take different aspects into account. The purpose of this study was to evaluate three such methods and use them to calculate the water footprint of a number of foods, compare the results, and finally provide a recommendation on which method or methods are suitable for consumer guidance. The methods examined were TOTAL, a method from the Water Footprint Network (WFN), the WSI method, and the AWARE method. The results showed that certain nuts received a particularly high water footprint regardless of the method used; for almonds, for example, the respective methods gave a water footprint of 15 m3 water/kg, 3.3 m3 WSI-H2O-equivalents/kg, and 165 m3 AWARE-H2O-equivalents/kg. That the results have different units and orders of magnitude is due to the different structure of the methods. In general, legumes, cereals, and fruit and vegetables received low results, though the results varied somewhat depending on the method used. This is partly because only WSI and AWARE take into account the local water situation where the water is used. When comparing the methods, both TOTAL and AWARE were considered suitable for consumer guidance, since the former is well proven and easy to understand, while the latter is an updated indicator that takes local water scarcity into account. / Water is a vital resource for all life on earth. With an increasing population, the use of freshwater is also expected to increase, which requires sustainable management of existing water resources. The agricultural sector is currently the largest consumer of water, and it is important to make consumers aware of the water use involved in food production so that knowledge of how water is used today increases. The so-called water footprint is a tool for assessing the amount of water used to produce a good or a service. In recent years, several calculation methods have emerged for calculating water footprints, and these take various aspects into account. The purpose of this study was to evaluate three such methods and use them to calculate the water footprint for a number of foods, compare the results and finally give a recommendation on which method or methods are best suited for consumer guidance. The assessed methods were TOTAL by the Water Footprint Network (WFN), the WSI method and the AWARE method. The results showed that some nuts had a particularly high water footprint regardless of the method used. Almonds, for example, obtained with each method a water footprint corresponding to 15 m3 water/kg, 3.3 m3 WSI-H2O-equivalents/kg and 165 m3 AWARE-H2O-equivalents/kg. The fact that the results have different units and orders of magnitude is due to the different structure of the methods. Generally, legumes, cereals and fruits and vegetables had low water footprints, but the results varied somewhat depending on the method used.
This is partly because only WSI and AWARE take into account the local water situation where the water is used. When comparing the methods, both TOTAL and AWARE were considered suitable for use for consumer guidance, since the former is well proven and easily understood while the latter is an updated indicator that takes local water scarcity into account.
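As a schematic illustration of why the WSI and AWARE results differ from the volumetric TOTAL figures (general form only; the regional factor values are not those used in this study): a scarcity-weighted footprint is computed as $WF=\sum_i WC_i \cdot CF_i$, where $WC_i$ is the water consumed in region $i$ and $CF_i$ is the method's characterisation factor for that region, so identical physical water consumption can yield very different WSI- and AWARE-equivalent footprints depending on where the water is drawn.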
127
Entity-based coherence in statistical machine translation: a modelling and evaluation perspective. Wetzel, Dominikus Emanuel, January 2018 (has links)
Natural language documents exhibit coherence and cohesion by means of interrelated structures both within and across sentences. Sentences do not stand in isolation from each other, and only a coherent structure makes them understandable and sound natural to humans. In Statistical Machine Translation (SMT), little research exists on translating a document from a source language into a coherent document in the target language. The dominant paradigm is still one that considers sentences independently of each other. There is a need both for a deeper understanding of how to handle specific discourse phenomena and for automatic evaluation of how well these phenomena are handled in SMT. In this thesis we explore an approach to treating sentences as dependent on each other by focussing on the problem of pronoun translation as an instance of a discourse-related non-local phenomenon. We direct our attention to pronoun translation in the form of cross-lingual pronoun prediction (CLPP) and develop a model to tackle this problem. We obtain state-of-the-art results, exhibiting the benefit of having access to the antecedent of a pronoun for predicting the right translation of that pronoun. Experiments also showed that features from the target side are more informative than features from the source side, confirming the linguistic knowledge that referential pronouns need to agree in gender and number with their target-side antecedent. We show our approach to be applicable across the two language pairs English-French and English-German. The experimental setting for CLPP is artificially restricted, both to enable automatic evaluation and to provide a controlled environment. This is a limitation which does not yet allow us to test the full potential of CLPP systems within a more realistic setting that is closer to a full SMT scenario. We provide an annotation scheme, a tool and a corpus that enable evaluation of pronoun prediction in a more realistic setting. The annotated corpus consists of parallel documents translated by a state-of-the-art neural machine translation (NMT) system, where the appropriate target-side pronouns have been chosen by annotators. With this corpus, we exhibit a weakness of our current CLPP systems in that they are outperformed by a state-of-the-art NMT system in this more realistic context. This corpus provides a basis for future CLPP shared tasks and allows the research community to further understand and test their methods. The lack of appropriate evaluation metrics that explicitly capture non-local phenomena is one of the main reasons why handling non-local phenomena has not yet been widely adopted in SMT. To overcome this obstacle and evaluate the coherence of translated documents, we define a bilingual model of entity-based coherence, inspired by work on monolingual coherence modelling, and frame it as a learning-to-rank problem. We first evaluate this model on a corpus where we artificially introduce coherence errors based on typical errors CLPP systems make. This allows us to assess the quality of the model in a controlled environment with automatically provided gold coherence rankings.
Results show that this model can distinguish with high accuracy between a human-authored translation and one with coherence errors, that it can also distinguish between document pairs from two corpora with different degrees of coherence errors, and that the learnt model can be successfully applied when the distribution of errors in the test set differs from that in the training data, showing its generalization potential. To test our bilingual model of coherence as a discourse-aware SMT evaluation metric, we apply it to more realistic data. We use it to evaluate a state-of-the-art NMT system against post-editing systems with pronouns corrected by our CLPP systems. To verify our metric, we reuse our annotated parallel corpus and consider the pronoun annotations as a proxy for human document-level coherence judgements. Experiments show far lower accuracy in ranking translations according to their entity-based coherence than on the artificial corpus, suggesting that the metric has difficulties generalizing to a more realistic setting. Analysis reveals that the system translations in our test corpus do not differ in their pronoun translations in almost half of the document pairs. To circumvent this data sparsity issue, and to remove the need for parameter learning, we define a score-based SMT evaluation metric which directly uses features from our bilingual coherence model.
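Monolingual entity-based coherence models of the kind this work builds on are typically driven by entity-grid transition statistics; the toy sketch below shows such features, but it is only an illustration of the underlying idea and not the bilingual model or feature set defined in the thesis.

from collections import Counter
from itertools import product

# Entity-grid sketch: rows are sentences, columns are entities, cells are the
# grammatical role of the entity in that sentence ('S', 'O', 'X') or '-' if absent.
GRID = [
    # sent:  e1   e2   e3
    ["S", "-", "O"],
    ["O", "X", "-"],
    ["-", "X", "S"],
]

def transition_features(grid, roles=("S", "O", "X", "-")):
    """Relative frequency of each role-to-role transition between adjacent sentences."""
    counts = Counter()
    for prev, cur in zip(grid, grid[1:]):
        for a, b in zip(prev, cur):
            counts[(a, b)] += 1
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in product(roles, repeat=2)}

feats = transition_features(GRID)
print(feats[("S", "O")], feats[("-", "X")])  # 1/6 each for this toy grid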
128
Compact variation-aware standard cells for statistical static timing analysis. Aftabjahani, Seyed-Abdollah, 09 June 2011 (has links)
This dissertation reports on a new methodology to characterize and simulate a standard cell library to be used for statistical static timing analysis. A compact variation-aware timing model for a standard cell in a cell library has been developed. The model incorporates variations in the input waveform and loading, process parameters, and the environment into the cell timing model. Principal component analysis (PCA) has been used to form a compact model of a set of waveforms impacted by these sources of variation. Cell characterization involves determining equations describing how waveforms are transformed by a cell as a function of the input waveforms, process parameters, and the environment. Different versions of factorial designs and Latin hypercube sampling have been explored to model cells, and their complexity and accuracy have been compared. The models have been evaluated by calculating the delay of paths. The results demonstrate improved accuracy in comparison with table-based static timing analysis at comparable computational cost. Our methodology has been expanded to adapt to interconnect-dominant circuits by including a resistive-capacitive load model. The results show the feasibility of using the new load model in our methodology. We have explored comprehensive accuracy improvement methods to tune the methodology for the best possible results.
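A minimal sketch of the core idea behind such a compact waveform model, i.e., representing each waveform by a few principal-component coefficients, is given below; the waveform family is synthetic, and the dissertation's actual characterization flow, sampling plans, and fitted equations are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64)                  # normalised time points
# Synthetic family of saturating transitions with varying slew (a stand-in for
# waveforms collected across process/voltage/temperature/load variation).
slews = rng.uniform(2.0, 12.0, size=200)
waveforms = 1.0 - np.exp(-np.outer(slews, t))  # shape (200, 64)

mean = waveforms.mean(axis=0)
centered = waveforms - mean
# Principal components of the waveform ensemble via SVD.
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 3                                          # keep a handful of components
coeffs = centered @ vt[:k].T                   # compact per-waveform representation
reconstructed = mean + coeffs @ vt[:k]

err = np.abs(reconstructed - waveforms).max()
print(f"{k} components, max reconstruction error: {err:.4f}")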
The following is a summary of the main contributions of this work to statistical static timing analysis:
(a) accurate waveform modeling for standard cells using statistical waveform models based on principal components;
(b) compact performance modeling of standard cells using experimental design statistical techniques;
(c) variation-aware performance modeling of standard cells considering the effect of variation parameters on performance, where variation parameters include loading, waveform shape, process parameters (gate length and threshold voltage of NMOS and PMOS transistors), and environmental parameters (supply voltage and temperature);
(d) extending our methodology to support resistive-capacitive loads so that it is applicable to interconnect-dominant circuits;
(e) classifying the sources of error for our variational waveform model and cell models and introducing the related accuracy improvement methods; and
(f) introducing our fast block-based variation-aware statistical dynamic timing analysis framework and showing that (i) using compiler-compiler techniques, we can generate our timing models, test benches, and data analysis for each circuit, which are compiled to machine code to reduce the overhead of dynamic timing simulation, and (ii) using the simulation engine, we can perform statistical timing analysis to measure the performance distribution of a circuit using a high-level model for gate delay changes, which can be linked to their parameter variation.
129
A Distributed Architecture for Computing Context in Mobile Devices. Dargie, Waltenegus, 27 May 2006 (has links) (PDF)
Context-aware computing aims at making mobile devices sensitive to the social and physical settings in which they are used. A necessary requirement to achieve this goal is to enable those devices to establish a shared understanding of the desired settings. Establishing a shared understanding entails the need to manipulate sensed data in order to capture a real-world situation wholly, conceptually, and meaningfully. Quite often, however, the data acquired from sensors can be inexact, incomplete, and/or uncertain. Inexact sensing arises mostly due to the inherent limitation of sensors to capture a real-world phenomenon precisely. Incompleteness is caused by the absence of a mechanism to capture certain real-world aspects; and uncertainty stems from the lack of knowledge about the reliability of the sensing sources, such as their sensing range, accuracy, and resolution.
The thesis identifies a set of criteria for a context-aware system to capture dynamic real-world situations. On the basis of these criteria, a distributed architecture is designed, implemented and tested. The architecture consists of Primitive Context Servers, which abstract the acquisition of primitive contexts from physical sensors; Aggregators, to minimise error caused by inconsistent sensing and to gather correlated primitive contexts pertaining to a particular entity or situation; a Knowledge Base and an Empirical Ambient Knowledge Component, to model dynamic properties of entities with facts and beliefs; and a Composer, to reason about dynamic real-world situations on the basis of sensed data. Two additional components, namely the Event Handler and the Rule Organiser, are responsible for dynamically generating context rules by associating decision events (signifying a user's activity) with the context in which those decision events are produced. Context rules are essential elements with which the behaviour of mobile devices can be controlled and useful services can be provided.
Four estimation and recognition schemes, namely Fuzzy Logic, Hidden Markov Models, Dempster-Shafer Theory of Evidence, and Bayesian Networks, are investigated, and their suitability for the implementation of the components of the architecture of the thesis is studied. Subsequently, fuzzy sets are chosen to model dynamic properties of entities; Dempster-Shafer's combination theory is chosen for aggregating primitive contexts; and Bayesian Networks are chosen to reason about a higher-level context, which is an abstraction of a real-world situation.
A Bayesian Composer is implemented to demonstrate the capability of the architecture in dealing with uncertainty, in revising the belief of the Empirical Ambient Knowledge Component, in dealing with the dynamics of primitive contexts and in dynamically defining contextual states. The Composer was able to reason about the whereabouts of a person in the absence of any localisation sensor. Thermal, relative humidity, and light intensity properties of a place, as well as time information, were employed to model and reason about the place. Consequently, depending on the variety and reliability of the sensors employed, the Composer could discriminate between rooms, corridors, a building, or an outdoor place with different degrees of uncertainty.
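Since Dempster-Shafer's combination theory is the mechanism chosen for aggregating primitive contexts, the toy sketch below shows the combination rule applied to two invented sensor reports; the frame of discernment, the mass values, and the sensor interpretation are illustrative assumptions, not figures from the thesis.

from itertools import product

FRAME = frozenset({"room", "corridor", "outdoor"})

# Mass functions from two sensing sources (e.g. light intensity and temperature).
m_light = {frozenset({"outdoor"}): 0.6, frozenset({"room", "corridor"}): 0.3, FRAME: 0.1}
m_temp  = {frozenset({"room"}): 0.5, frozenset({"room", "corridor"}): 0.4, FRAME: 0.1}

def combine(m1, m2):
    """Dempster's rule: normalised sum of products over intersecting focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m = combine(m_light, m_temp)
print(max(m, key=m.get))  # most supported hypothesis set after fusion: {'room'}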
The Context-Aware E-Pad (CAEP) application is designed and implemented to demonstrate how applications can employ a higher-level context without the need to directly deal with its composition, and how a context rule can be generated by associating the activities (decision events) of a mobile user with the context in which the decision events are produced.
130
Collaborative review and analysis of science literature. Bayat, Samaneh, Unknown Date
No description available.