291. Infrastructuring Knowledge as Participatory Intervention Technologically Enhanced Environment. Macchia, Teresa, January 2016.
Over the last twenty years, Digital Interactive Technology (DIT) has been extensively introduced into museum contexts. By providing guidelines for the informed introduction of DIT in museums, this thesis aims to encourage a participatory experience of knowledge creation in museums. Believing that creating knowledge is a complex process that simultaneously involves multiple actions, actors, actants, and situations, I adopt the concept of Infrastructuring Knowledge to describe the participatory dynamics between humans and technology in creating knowledge, and to frame museums as places for memories, amusement, and shared experience. Ethnography in the museum environment provides first-hand information for understanding the museum visiting experience. This understanding yields three key stimuli for DIT to support Infrastructuring Knowledge: first, stimulating people's dialogue; second, supporting people's cooperation and participation by providing occasions for sharing information and creating knowledge; third, enabling people to re-frame the use of DIT when needed. Designing DIT for museums can follow specific lines and principles for addressing the challenges of overwhelming and overstimulating spaces, and for promoting a sustainable future with respect to the production (or not) of new technology. Following such lines and principles as a reaction to the overwhelming and overstimulating adoption of technology in public spaces, I propose an "-ing" approach to design. This approach aims to stimulate designs that let people adopt a subjective and participatory interpretation of Digital Interactive Technology.
292. A Programmable Enforcement Framework for Security Policies. Ngo, Nguyen Nhat Minh, January 2016.
This thesis proposes MAP-REDUCE, a programmable framework that can be used to construct enforcement mechanisms for different security policies. The framework is based on the idea of secure multi-execution, in which multiple copies of the controlled program are executed. To construct an enforcement mechanism for a policy, users write a MAP program and a REDUCE program to control the inputs and outputs of the executions of these copies. The thesis illustrates the framework by presenting enforcement mechanisms for non-interference (from Devriese and Piessens), non-deducibility (from Sutherland) and deletion of inputs (a policy proposed by Mantel), and formally demonstrates the soundness and precision of these mechanisms. The thesis also investigates sufficient conditions on policies for enforceability by the framework, focusing on reactive programs that are input-total and must finish processing one input item before handling the next. For reactive programs that always terminate on any input, non-empty testable hypersafety policies can be enforced. For reactive programs that might diverge, non-empty policies that are downward-closed with respect to termination can be enforced.
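As a rough illustration of the secure multi-execution idea underlying the framework (not the thesis's actual MAP and REDUCE programs), the following Python sketch runs two copies of a controlled program at levels LOW and HIGH: a map step hides HIGH inputs from the LOW copy, and a reduce step releases each copy's outputs only on the channel of its own level. All names (run_copy, map_input, reduce_output, DEFAULT) are invented for illustration.

```python
DEFAULT = 0  # dummy value fed to the LOW copy in place of secret input

def map_input(level, channel, value):
    """MAP: the LOW copy never observes HIGH input; it gets a default instead."""
    if level == "LOW" and channel == "HIGH":
        return DEFAULT
    return value

def reduce_output(level, channel, value, outputs):
    """REDUCE: only the copy at a channel's own level may write to it."""
    if level == channel:
        outputs.append((channel, value))

def run_copy(program, level, inputs):
    """Run one copy of the controlled program at a given security level."""
    outputs = []
    for channel, value in inputs:
        seen = map_input(level, channel, value)
        produced = program(channel, seen)   # this toy program emits one output per input
        reduce_output(level, channel, produced, outputs)
    return outputs

# Example controlled program: echoes every input on the channel it arrived on.
def echo(channel, value):
    return value

inputs = [("LOW", 1), ("HIGH", 42), ("LOW", 2)]
low_out = run_copy(echo, "LOW", inputs)    # [('LOW', 1), ('LOW', 2)] -- no secret leaks
high_out = run_copy(echo, "HIGH", inputs)  # [('HIGH', 42)]
print(low_out, high_out)
```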
293. Requirements-based Software System Adaptation. Silva Souza, Vitor Estevao, January 2012.
Nowadays, there are more and more software systems operating in highly open, dynamic and unpredictable environments. Moreover, as technology advances, requirements for these systems become ever more ambitious. We have reached a point where system complexity and environmental uncertainty are major challenges for the Information Technology industry. A solution proposed to deal with this challenge is to make systems (self-)adaptive, meaning they would evaluate their own behavior and performance, in order to re-plan and reconfigure their operations when needed.
In order to develop an adaptive system, one needs to account for some kind of feedback loop. A feedback loop constitutes an architectural prosthetic to the system proper, introducing monitoring and adaptation functionalities into the overall system. Even if the loop is implicit or hidden in the system's architecture, adaptive systems must have a feedback loop among their components in order to evaluate their behavior and act accordingly. In this thesis, we take a Requirements Engineering perspective on the design of adaptive software systems and, given that feedback loops constitute an (architectural) solution for adaptation, we ask the question: what is the requirements problem this solution is intended to solve?
To answer this question, we define two new classes of requirements: Awareness Requirements prescribe the indicators of requirements convergence that the system must strive to achieve, whereas Evolution Requirements represent adaptation strategies in terms of changes in the requirements models themselves. Moreover, we propose that System Identification be conducted to elicit parameters and analyze how changes in these parameters affect the monitored indicators, representing such effect using differential relations.
These new elements represent the requirements for adaptation, making feedback loops a first-class citizen in the requirements specification. Not only do they assist requirements engineers in the tasks of eliciting and communicating adaptation requirements, but with proper machine-readable representations they can also serve as input to a framework that implements the generic functionalities of a feedback loop, reasoning about requirements at runtime. We have developed one such framework, called Zansin, and validated our proposals through experiments based on a well-known case study adopted from the literature.
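A minimal sketch of the intuition behind Awareness Requirements, with a toy indicator and adaptation strategy that are not the thesis's actual specification language: a sliding-window success rate of an ordinary requirement is monitored, and a breach triggers the kind of parameter change an Evolution Requirement would prescribe.

```python
from collections import deque

class AwarenessRequirement:
    """Toy indicator: 'requirement R succeeds at least target_rate of the time'."""
    def __init__(self, target_rate, window=100):
        self.target_rate = target_rate
        self.outcomes = deque(maxlen=window)    # sliding window of True/False outcomes

    def record(self, success):
        self.outcomes.append(success)

    def violated(self):
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.target_rate

def adapt(parameters):
    """Toy adaptation: relax a timeout when the indicator fails. A real controller
    would use the differential relations obtained by system identification to
    decide which parameter to change and by how much."""
    parameters["timeout_s"] *= 1.5
    return parameters

awreq = AwarenessRequirement(target_rate=0.95)
params = {"timeout_s": 2.0}
for outcome in [True] * 90 + [False] * 10:      # simulated run: 90% success
    awreq.record(outcome)
if awreq.violated():
    params = adapt(params)
print(params)   # {'timeout_s': 3.0}
```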
294. Testing Techniques for Software Agents. Nguyen, Duy Cu, January 2009.
Software agents and multiagent systems are a promising technology for today's complex, distributed systems. Methodologies and techniques that address testing and reliability of these systems are increasingly demanded, in particular to support systematic verification/validation and automated test generation and execution.
This work deals with two major research problems: the lack of a structured testing process for engineering software agents, and the need for adequate testing techniques to deal with the peculiar nature of software agents, e.g., their being autonomous, decentralized, and collaborative.
To address the first problem, we propose a goal-oriented testing methodology, aimed at defining a systematic and comprehensive testing process for engineering software agents. It encompasses the development process from early requirements analysis to deployment. We investigate how to derive test artefacts, i.e., inputs, scenarios, and so on, from agent requirements specifications and designs, and how to use these artefacts to refine the analysis and design in order to detect problems early. More importantly, the artefacts are executed afterwards to find defects in the implementation and build confidence in the operation of the agents under development.
Concerning the second problem, the peculiar properties of software agents make testing them troublesome. We developed a number of techniques to generate test cases automatically or semi-automatically, including goal-oriented, ontology-based, random, and evolutionary generation techniques. Our experiments have shown that each technique has different strengths: for instance, while the random technique is effective in revealing crashes or exceptions, the ontology-based one is strong in detecting communication faults. Combining these techniques helps detect different types of faults, making software agents more reliable.
Altogether, the generation, evaluation, and monitoring techniques form a bigger picture: our novel continuous testing method. In this method, test execution can proceed unattended and independently of any other human-intensive activity; test cases are generated or evolved continuously using the proposed generation techniques; and test results are observed and evaluated by our monitoring and evaluation approaches to provide feedback to the generation step. The aim of continuous testing is to exercise and stress the agents under test as much as possible, with the final goal of revealing as-yet-unknown faults.
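The continuous testing loop can be pictured with a small, hypothetical Python sketch (random generation plus a naive evolutionary step; the thesis's actual generation, monitoring and evaluation components are far richer). All names here are invented for illustration.

```python
import random

def random_message():
    return {"performative": random.choice(["request", "inform", "query"]),
            "content": random.randint(-1000, 1000)}

def mutate(message):
    m = dict(message)
    m["content"] += random.randint(-50, 50)
    return m

def evaluate(agent, message):
    """Monitor: return a 'badness' score -- 1.0 means a crash/exception was revealed."""
    try:
        agent(message)
        return 0.0
    except Exception:
        return 1.0

def continuous_testing(agent, rounds=20, population=10):
    tests = [random_message() for _ in range(population)]
    for _ in range(rounds):
        scored = sorted(tests, key=lambda t: evaluate(agent, t), reverse=True)
        survivors = scored[: population // 2]               # keep most fault-revealing tests
        tests = survivors + [mutate(t) for t in survivors]  # evolve the next generation
    return [t for t in tests if evaluate(agent, t) > 0]

# Toy agent under test: fails on negative content of an 'inform' message.
def toy_agent(msg):
    if msg["performative"] == "inform" and msg["content"] < 0:
        raise ValueError("unhandled negative content")

print(continuous_testing(toy_agent)[:3])
```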
We used a case study to illustrate the proposed methodology and performed three experiments to evaluate the performance of the proposed techniques. The results obtained are promising.
295. A robotic walking assistant for localisation and guidance of older adults in large public spaces. Moro, Federico, January 2015.
Ageing is often associated with reduced mobility, which is the consequence of a combination of physical, sensory and cognitive decline. Reduced mobility may weaken older adults' confidence in going out alone and traveling autonomously in large spaces. We have developed a robotic walking assistant that compensates for sensory and cognitive impairments and supports the user's navigation across complex spaces. The device, which we named c-Walker, is a walker with cognitive abilities built around a common walker for elderly people. We show the difficulties that arise when building such a robotic platform, focusing on the hardware and software architecture for the basic functionalities and on the integration of high-level software components. We developed an Extended Kalman Filter in such a way that we can select a configuration of sensors that meets our requirements for cost, accuracy, and robustness. We describe the technological and scientific foundations of different guidance systems and their implementation in the device. Some of them are "active", meaning that the system is allowed to "force a turn" in a specified direction; the others are "passive", meaning that they merely produce directions that the user is expected to follow of her own will. We present a comparison of the different guidance systems together with the results of experiments with a group of volunteers.
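As an illustration of the sensor-fusion building block mentioned above, here is a minimal Extended Kalman Filter sketch for a wheeled platform, with an odometry-based prediction step and an absolute position update. The motion model, the measurement and the noise values are assumptions made for the example, not the c-Walker's calibrated configuration.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Nonlinear unicycle motion model x' = f(x, u); F is its Jacobian w.r.t. the state."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta),
                           v * dt * np.sin(theta),
                           w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(theta)],
                  [0, 1,  v * dt * np.cos(theta)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Linear absolute position measurement z = [x, y] + noise."""
    H = np.array([[1, 0, 0],
                  [0, 1, 0]])
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x = np.array([0.0, 0.0, 0.0])               # initial pose (x, y, heading)
P = np.eye(3) * 0.1
Q = np.diag([0.01, 0.01, 0.005])            # process noise (illustrative)
R = np.diag([0.05, 0.05])                   # measurement noise (illustrative)
x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=1.0, Q=Q)
x, P = ekf_update(x, P, z=np.array([0.52, 0.03]), R=R)
print(x)
```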
296. Exploiting Contextual and Social Variability for Software Adaptation. Dalpiaz, Fabiano, January 2011.
Self-adaptive software systems are systems that monitor their environment and compensate for deviations from their requirements. Self-adaptivity is gaining prominence as an approach to lowering software costs by reducing the need for manual system maintenance. It is particularly important for distributed systems that involve both software and human/organizational actors, because of the volatility and uncertainty that permeate their operational environments. We refer to such systems as Socio-Technical Systems (STSs).
The thesis proposes a comprehensive framework for designing self-adaptive software that operates within a socio-technical system. The framework is founded upon the notions of contextual and social variability. A key ingredient of our approach is to rely on high-level abstractions to represent the purpose of the system (requirements model), to explicitly represent the commitments that exist among participating actors in an STS, and also to consider how operational context influences requirements. The proposed framework consists of (i) modelling and analysis techniques for representing and reasoning about contextual and social variability; (ii) a conceptual architecture for self-adaptive STSs; and (iii) a set of algorithms to diagnose a failure and to compute and select a new variant that addresses the failure. To evaluate our proposal, we developed two prototype implementations of our architecture to demonstrate different features of our framework, and successfully applied them to two case studies. In addition, the thesis reports encouraging results on experiments we conducted with our implementations in order to check for scalability.
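A hypothetical sketch of the kind of reconfiguration step described in point (iii), with invented variant structures and scoring (the thesis's algorithms operate on richer goal and commitment models): applicable variants are filtered by the current context and by the commitments that have just failed, and the best remaining one is selected.

```python
def select_variant(variants, context, failed_commitments):
    """Pick the best variant whose context preconditions hold and that does not
    rely on an actor whose commitment has just been violated."""
    applicable = [v for v in variants
                  if v["requires_context"] <= context
                  and not (v["relies_on"] & failed_commitments)]
    if not applicable:
        return None
    return max(applicable, key=lambda v: v["utility"])

variants = [
    {"name": "notify_by_sms",   "requires_context": {"has_phone_number"},
     "relies_on": {"telecom_provider"}, "utility": 0.9},
    {"name": "notify_by_email", "requires_context": set(),
     "relies_on": {"mail_server"},      "utility": 0.7},
]
context = {"has_phone_number"}
failed = {"telecom_provider"}           # the SMS gateway just violated its commitment
print(select_variant(variants, context, failed))   # falls back to the e-mail variant
```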
297. Recognizing and Discovering Activities of Daily Living in Smart Environments. Avci, Umut, January 2013.
Identifying human activities is a key task for the development of advanced and effective ubiquitous applications in fields like Ambient Assisted Living. Depending on the availability of labeled data, recognition methods can be categorized as either supervised or unsupervised. Designing a comprehensive activity recognition system that works in a real-world setting is extremely challenging because of the difficulty computers have in processing the complex nature of human behavior.
In the first part of this thesis we present a novel supervised approach that improves activity recognition performance based on sequential pattern mining. The method searches for patterns characterizing time segments during which the same activity is performed. A probabilistic model is learned to represent the distribution of pattern matches along sequences, trying to maximize the coverage of an activity segment by a pattern match. The model is integrated into a segmental labeling algorithm and applied to novel sequences. Experimental evaluations show that the pattern-based segmental labeling algorithm improves over sequential and segmental labeling algorithms in most cases. An analysis of the discovered patterns highlights non-trivial interactions spanning a significant time horizon. In addition, we show that pattern usage allows incorporating long-range dependencies between distant time instants without incurring a substantial increase in the computational complexity of inference.
In the second part of the thesis we propose an unsupervised activity discovery framework that aims at identifying activities within data streams in the absence of data annotation. The process starts by dividing the full sensor stream into segments, identifying differences in sensor activations that characterize potential activity changes. The extracted segments are then clustered in order to find groups of similar segments, each representing a candidate activity. Lastly, the parameters of a sequential labeling algorithm are estimated using the segment clusters found in the previous step, and the learned model is used to smooth the initial segmentation. We present experimental evaluations on two real-world datasets. The results show that our segmentation approaches perform almost as well as the true segmentation and that activities are discovered with high accuracy in most cases. We demonstrate the effectiveness of our model by comparing it with a technique that uses substantial domain knowledge. Our ongoing work is presented at the end of the section, in which we combine the pattern-based method introduced in the first part of the thesis with the activity discovery framework. Preliminary experiments indicate that the combined method is better at discovering similar activities than the base framework.
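The first two steps of the discovery pipeline (segmentation at changes in sensor activations, then grouping of similar segments) can be sketched as follows; the thresholds and the greedy Jaccard-based clustering are illustrative stand-ins for the techniques used in the thesis.

```python
def segment(stream, change_threshold=0.5):
    """stream: list of sets of active sensor ids, one per time step.
    Start a new segment whenever the active-sensor set changes substantially."""
    segments, current = [], [stream[0]]
    for frame in stream[1:]:
        prev = current[-1]
        union = prev | frame
        jaccard = len(prev & frame) / len(union) if union else 1.0
        if jaccard < change_threshold:
            segments.append(current)
            current = []
        current.append(frame)
    segments.append(current)
    return segments

def cluster(segments, similarity_threshold=0.5):
    """Greedy clustering of segments by the sensors they activate; each cluster
    is a candidate activity."""
    signatures = [set().union(*seg) for seg in segments]
    clusters = []                               # list of (signature, [segment indices])
    for i, sig in enumerate(signatures):
        for csig, members in clusters:
            union = csig | sig
            if union and len(csig & sig) / len(union) >= similarity_threshold:
                members.append(i)
                csig |= sig
                break
        else:
            clusters.append((set(sig), [i]))
    return clusters

stream = [{"kettle"}, {"kettle", "cup"}, {"kettle", "cup"},
          {"tv"}, {"tv", "sofa"}, {"kettle", "cup"}]
segs = segment(stream)
print([sorted(set().union(*s)) for s in segs])
print(cluster(segs))
```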
298. Improving the security of the Android ecosystem. Zhauniarovich, Yury, January 2014.
During the last few years mobile phones have been replaced by new devices called smartphones. A more "intelligent" version of the mobile phone, the smartphone combines the usual "phoning" facilities with the functionality and performance of a personal computer. Moreover, smartphones are equipped with various sensors, such as a camera and GPS, and are open to third-party applications. Since they accompany their users almost all the time, it is not surprising that smartphones have access to very sensitive private data. Unfortunately, these data are of particular interest not only to the device owners. Developers of third-party applications embed data collection functionality either to feed advertising frameworks or for their own purposes. Moreover, there are also adversaries aiming at gathering personal user information or performing malicious actions. In this situation users have a strong motivation to safeguard their devices from misuse and to protect their privacy. Among all operating systems for mobile platforms, Android, developed by Google, is the recognized leader: it is installed on four out of five new devices. In this thesis we propose a set of improvements to enhance the security of the Android ecosystem and ensure the trustworthiness of the applications installed on a device. In particular, we focus on application ecosystem security and research the following key aspects: identification of suspicious applications; application code analysis for malicious functionality; distribution of verified applications to end-user devices; and enforcing security on the device itself. It was previously shown that adversaries often rely on app repackaging to boost the proliferation of malicious applications. As the first contribution, this dissertation proposes a fast approach to detect repackaged Android applications. If a repackaged application is detected, it is necessary to understand whether it is malicious or not. Today, Android malware conceals its malicious nature using dynamic code update techniques, so static analyzers cannot detect this behavior. The second contribution of this work is a static-dynamic analysis approach to discover and analyse apps in the presence of dynamic code update routines. To increase the user's confidence in the installed apps, as the third contribution we propose the concept of trusted stores for Android. Our approach ensures that a user can install only applications vetted and attested by trusted stores. Finally, the fourth contribution is the design and implementation of a policy-based framework for enforcing software isolation of applications and data, which may help to improve the security of end-user devices.
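As an illustration of the general idea of repackaging detection (not the dissertation's actual detector), the following sketch reduces each APK to a set of resource-file hashes and flags pairs that are highly similar yet signed by different developers; the function names and the threshold are assumptions made for the example.

```python
import hashlib
import zipfile

def fingerprints(apk_path):
    """Hash every resource/asset entry of the APK; returns a set of digests."""
    digests = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.startswith(("res/", "assets/")):
                digests.add(hashlib.sha256(apk.read(name)).hexdigest())
    return digests

def similarity(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def looks_repackaged(apk_a, apk_b, signer_a, signer_b, threshold=0.7):
    if signer_a == signer_b:        # same developer key: a legitimate update, not repackaging
        return False
    return similarity(fingerprints(apk_a), fingerprints(apk_b)) >= threshold

# Hypothetical usage:
# print(looks_repackaged("game.apk", "game_clone.apk", "CN=dev-a", "CN=dev-b"))
```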
299. Semantic Aware Representing and Intelligent Processing of Information in an Experimental domain: the Seismic Engineering Research Case. Hasan, Md. Rashedul, January 2015.
Seismic Engineering research projects and experiments generate an enormous amount of data that would benefit researchers and experimentalists of the community if it could be shared together with its semantics. Semantics is the meaning of a data element or of a term; for example, the semantics of the term 'experiment' is a scientific procedure performed to conduct a controlled test or investigation. Ontology is a key technique by which one can annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Developing a domain ontology requires expertise both in the domain to be modeled and in ontology development, which means that people from very different backgrounds, such as Seismic Engineering and Computer Science, should be involved in the process of creating the ontology. With the invention of the Semantic Web, the computing paradigm is experiencing a shift from databases to Knowledge Bases (KBs), in which ontologies play a major role in enabling reasoning power that can make implicit facts explicit and produce better results for users. To enable automatic exploration of relevant ontologies and datasets from external sources, an ontology and a dataset can be linked to the Linked Open Data (LOD) cloud, an online repository of a large number of interconnected datasets published in RDF. Throughout the past few decades, database technologies have advanced continuously and shown their potential in dealing with large collections of data, but they were not originally designed to deal with the semantics of data. Managing data with Semantic Web tools offers a number of advantages over database tools, including classifying, matching, mapping and querying data. Hence we translate our database-based system, which managed the data of Seismic Engineering research projects and experiments, into a KB-based system, and we also link our ontology and datasets to the LOD cloud. In this thesis, we address the following issues. To the best of our knowledge, the Semantic Web still lacks an ontology that can be used for representing information related to Seismic Engineering research projects and experiments. Publishing a vocabulary in this domain has largely been overlooked, and no suitable vocabulary has yet been developed for modeling such data in RDF; a vocabulary is an essential component that supports a data engineer when modeling data in RDF for inclusion in the LOD cloud. Ontology integration is another challenge we had to tackle: to manage the data of a specific field of interest, domain-specific ontologies provide essential support, but they alone are hardly sufficient to assign meaning to the generic terms that often appear in a data source, which necessitates integrating the knowledge of a generic ontology with that of the domain-specific one. To address these issues, this thesis presents the development of a Seismic Engineering Research Projects and Experiments Ontology (SEPREMO), with a focus on the management of research projects and experiments. We used the DERA methodology for ontology development, and the resulting ontology was evaluated by a number of domain experts. Data originating from scientific experiments such as cyclic and pseudodynamic tests were also published in RDF. We exploited Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively.
Finally, a system was developed with the full integration of the ontology, the experimental data and the tools, in order to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes. For ontology integration with WordNet, we implemented a semi-automatic facet-based algorithm. We also present an approach for publishing both the ontology and the experimental data to the LOD cloud. In order to model the concepts complementing the vocabulary needed for experimental data representation, we suitably extended the SEPREMO ontology. Moreover, the work addresses techniques for interlinking RDF datasets by aligning concepts and entities scattered over the cloud.
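For a flavor of what publishing one experiment as RDF looks like, here is a small sketch using the rdflib Python library; the namespace, class and property names are invented placeholders rather than the actual SEPREMO vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

SEPREMO = Namespace("http://example.org/sepremo#")   # placeholder namespace, not the real one

g = Graph()
g.bind("sepremo", SEPREMO)

# One hypothetical pseudodynamic test, described with invented properties.
experiment = URIRef("http://example.org/data/experiment/42")
g.add((experiment, RDF.type, SEPREMO.PseudodynamicTest))
g.add((experiment, SEPREMO.partOfProject, URIRef("http://example.org/data/project/retrofit")))
g.add((experiment, SEPREMO.specimen, Literal("RC frame, 2 storeys")))
g.add((experiment, SEPREMO.peakDisplacement, Literal(0.034, datatype=XSD.double)))

print(g.serialize(format="turtle"))   # Turtle serialization ready for publication
```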
300. End-to-End Discourse Parsing with Cascaded Structured Prediction. Ghosh, Sucheta, January 2012.
Parsing discourse is a challenging natural language processing task.
In this research work, we first take a data-driven approach to identifying the arguments of explicit discourse connectives.
In contrast to previous work, we do not make any assumptions on the span of arguments and treat parsing as a token-level sequence labeling task. We design the argument segmentation task as a cascade of decisions based on conditional random fields (CRFs). We train the CRFs on lexical, syntactic and semantic features extracted from the Penn Discourse Treebank and evaluate feature combinations on the commonly used test split. We show that the best combination of features includes syntactic and semantic features. A comparative error analysis investigates performance variability over connective types and argument positions. We also compare the results of the cascaded pipeline with a non-cascaded structured prediction setting, which shows that cascaded structured prediction performs better for discourse parsing.
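The token-level formulation can be illustrated with a toy sketch using the sklearn-crfsuite library: tokens carry BIO-style argument labels and simple lexical/syntactic features. The features, labels and the single training sentence are stand-ins; the thesis's feature set, extracted from the Penn Discourse Treebank, is much richer.

```python
import sklearn_crfsuite

def token_features(sent, i):
    word, pos = sent[i]
    feats = {"word.lower": word.lower(),
             "pos": pos,
             "is_connective": word.lower() in {"because", "but", "so"}}
    if i > 0:
        feats["prev.pos"] = sent[i - 1][1]
    if i < len(sent) - 1:
        feats["next.pos"] = sent[i + 1][1]
    return feats

# One toy training sentence: "I stayed home because it rained"
sent = [("I", "PRP"), ("stayed", "VBD"), ("home", "NN"),
        ("because", "IN"), ("it", "PRP"), ("rained", "VBD")]
labels = ["B-Arg1", "I-Arg1", "I-Arg1", "O", "B-Arg2", "I-Arg2"]

X = [[token_features(sent, i) for i in range(len(sent))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))   # with a single training sentence this simply memorizes it
```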
We present a novel end-to-end discourse parser that, given a plain text document as input, identifies the discourse relations in the text, assigns them a semantic label and detects discourse argument spans. The parsing architecture is based on a cascade of decisions supported by Conditional Random Fields (CRFs). We train and evaluate three different parsers using the PDTB corpus. The three system versions are compared to evaluate their robustness with respect to deep/shallow and automatically extracted syntactic features.
Next, we describe two constraint-based methods that can be used to improve the recall of a shallow discourse parser based on conditional random field chunking.
These methods use a set of natural structural constraints, as well as constraints that follow from the annotation guidelines of the Penn Discourse Treebank.
We evaluated the resulting systems on the standard test set of the PDTB and achieved a rebalancing of precision and recall with improved F-measures across the board. This was especially notable when we used evaluation metrics taking partial matches into account; for these measures, we achieved F-measure improvements of several points.
Finally, we address the problem of optimization in discourse parsing.
A good model for discourse structure analysis needs to account both for local dependencies at the token level and for global dependencies and statistics. We present techniques for using inter-sentential, or sentence-level (global), data-driven, non-grammatical features in the task of parsing discourse.
The parser model builds on the previous approach of using token-level (local) features with conditional random fields for shallow discourse parsing, which lacks structural knowledge of discourse.
The parser adopts a two-stage approach: first the local constraints are applied, and then global constraints are used on a reduced, weighted search space (n-best). In the latter stage we experiment with different rerankers trained on the first-stage n-best parses, which are generated using lexico-syntactic local features. The two-stage parser yields significant improvements over the best-performing discourse parser model on the PDTB corpus.
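A minimal sketch of the two-stage idea, with invented feature names and weights: a reranker adds a weighted global-feature score to each first-stage n-best candidate's local score and picks the best candidate.

```python
def rerank(n_best, global_features, weights):
    """n_best: list of (candidate, local_score); global_features(candidate) -> dict."""
    def total(candidate, local_score):
        feats = global_features(candidate)
        return local_score + sum(weights.get(name, 0.0) * value
                                 for name, value in feats.items())
    return max(n_best, key=lambda pair: total(*pair))

# Hypothetical global features computed over a whole candidate discourse parse.
def global_features(candidate):
    return {"num_relations": len(candidate["relations"]),
            "crossing_args": candidate["crossing_args"]}

weights = {"num_relations": 0.1, "crossing_args": -1.0}
n_best = [({"relations": ["Contrast"], "crossing_args": 0}, 2.0),
          ({"relations": ["Contrast", "Cause"], "crossing_args": 1}, 2.3)]
print(rerank(n_best, global_features, weights))   # the first candidate wins after reranking
```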