121 |
Using Formal Methods for Building more Reliable and Secure e-voting Systems. Weldemariam, Komminist Sisai. January 2010.
Deploying a system in a safe and secure manner requires ensuring technical and procedural levels of assurance, also with respect to social and regulatory frameworks. This is because threats and attacks may derive not only from pitfalls in complex security-critical systems, but also from ill-designed procedures. However, existing methodologies are not mature enough to embrace procedural implications and the need for a multidisciplinary approach to the safe and secure operation of systems. This is particularly true of electronic voting (e-voting) systems.
This dissertation proceeds along two lines. First, we propose an approach to guarantee reasonable security of the overall system by performing formal procedural security analysis. We apply existing techniques and define novel methodologies and approaches for the analysis and verification of procedure-rich systems. This includes not only the definition of adequate modeling conventions, but also of general techniques for the injection of attacks and for the transformation of process models into representations that can be given as input to model checkers. With this it is possible to understand and highlight how the switch to a new technological solution changes security, with the ultimate goal of defining the procedures regulating the system and its processes so that they ensure a sufficient level of security for the system as well as for its procedures.
Second, we investigate the use of formal methods to study and analyze the strengths and weaknesses of currently deployed (e-voting) systems in order to build the next generation of (e-voting) systems. More specifically, we show how formal verification techniques can be used to model and reason about the security of an existing e-voting system. To do so, we reuse the methodology proposed for procedural security analysis. The practical applicability of the approaches is demonstrated in several case studies from the domain of public administration in general and e-voting systems in particular. With this it becomes possible to build more secure, reliable, and trustworthy e-voting systems.
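To make the attack-injection idea concrete, here is a purely illustrative sketch (not the thesis' actual modeling convention or tool chain; all names are hypothetical): a procedure is encoded as a small labelled transition system, an attack transition is injected, and a plain reachability check stands in for the model checker.

    from collections import deque

    # Nominal ballot-handling procedure as a labelled transition system.
    nominal = {
        "start":   [("seal",  "sealed")],
        "sealed":  [("count", "counted")],
        "counted": [],
    }

    def inject_attack(model):
        # Attack injection: an extra transition that tampers with sealed ballots.
        attacked = {state: list(steps) for state, steps in model.items()}
        attacked["sealed"] = attacked["sealed"] + [("tamper", "tampered")]
        attacked["tampered"] = [("count", "counted_tampered")]
        return attacked

    def violates(state):
        # Safety property: tampered ballots must never be counted.
        return state == "counted_tampered"

    def reachable_violation(model, init="start"):
        # Breadth-first reachability check standing in for a model checker.
        seen, frontier = {init}, deque([init])
        while frontier:
            state = frontier.popleft()
            if violates(state):
                return True
            for _action, nxt in model.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    print(reachable_violation(nominal))                  # False: nominal procedure is safe
    print(reachable_violation(inject_attack(nominal)))   # True: the attack exposes a violation

In the dissertation the analogous check is performed symbolically by a model checker over much richer process models; the sketch only shows why injecting attack transitions turns procedural security into a reachability question.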
|
122 |
STaRS.sys: designing and building a commonsense-knowledge enriched wordnet for therapeutic purposes. Lebani, Gianluca E. January 2012.
This thesis investigates the possibility of exploiting human language resources and knowledge extraction techniques to build STaRS.sys, a software system designed to support therapists in the rehabilitation of Italian anomic patients.
After an introductory section reviewing classification, assessment, and remediation methods for naming disorders, we analyze the current trends in the exploitation of computers for the rehabilitation of language disorders. Starting from an analysis of the needs of speech therapists in their daily work with aphasic patients, the requirements for the STaRS.sys application are defined, and a number of possible uses identified.
To be able to implement these functionalities, STaRS.sys needs to be based on a lexical knowledge base encoding, in an explicit and computationally tractable way, at least the kind of semantic knowledge contained in the so-called feature norms. As a backbone for the development of this semantic resource we chose to exploit the Italian MultiWordNet lexicon, derived from the original Princeton WordNet. We show that the WordNet model is relatively well suited for our needs, but that an extension of its semantic model is nevertheless needed.
Starting from the assumption that the feature types used to classify feature norms can be mapped onto semantic relations in a WordNet-like semantic network, we identified a set of 25 semantic relations that can cover all the information contained in these datasets.
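As a purely illustrative sketch of what encoding feature norms as typed relations could look like (the relation names, synset identifiers, and strength/negation fields below are hypothetical examples, not the thesis' actual 25-relation sMWN schema):

    from collections import defaultdict

    class MiniSemanticNet:
        def __init__(self):
            self.relations = defaultdict(list)   # (source, relation) -> [targets]

        def add(self, source, relation, target, strength=None, negated=False):
            # 'strength' and 'negated' mirror the extra information the thesis argues
            # feature-norm encoding needs (e.g. "a penguin does NOT fly").
            self.relations[(source, relation)].append(
                {"target": target, "strength": strength, "negated": negated})

        def features_of(self, source):
            return {rel: targets for (src, rel), targets in self.relations.items()
                    if src == source}

    net = MiniSemanticNet()
    net.add("cane_n_1", "has_part", "coda_n_1", strength=0.9)        # "a dog has a tail"
    net.add("cane_n_1", "used_for", "guardia_n_1", strength=0.4)     # "a dog is used for guarding"
    net.add("pinguino_n_1", "is_able_to", "volare_v_1", negated=True)
    print(net.features_of("cane_n_1"))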
To demonstrate the feasibility of our proposal, we first asked a group of therapists to use our feature types classification to classify a set of 300 features. The analysis of the inter-coder agreement shows that the proposed classification can be used in a reliable way by speech therapists.
Subsequently, we collected a new set of Italian feature norms for 50 concrete concepts and analyzed the issues raised by the attempt to encode them in a version of MultiWordNet extended to include the new set of relations. This analysis shows that, in addition to extending the relation set, a number of further modifications are needed, for instance to encode negation, quantification, or the strength of a relation: information that, as we show, is not well represented in the existing feature norms either.
After defining an extended version of MultiWordNet (sMWN) suitable for encoding the information contained in feature norms, we deal with the issue of automatically extracting such semantic information from corpora. We applied to an Italian corpus a state-of-the-art machine-learning-based method for the extraction of commonsense conceptual knowledge, previously applied to English. We experimented with a number of modifications and extensions of the original algorithm, with the aim of improving its accuracy. Results and limitations are presented and analyzed, and possible future improvements are discussed.
|
123 |
Collecting Common Sense from Text and People. Herdagdelen, Amac. January 2011.
In order to display human-like intelligence, advanced computational systems should have access to the vast network of generic facts about the world that humans possess and that is known as commonsense knowledge (books have pages, groceries have a price, ...). Developers of AI applications have long been aware of this, and, for decades, they have invested in the laborious and expensive manual creation of commonsense knowledge repositories. An automated, high-throughput and low-noise method for commonsense collection still remains the holy grail of AI.
Two relatively recent developments in computer science and computational linguistics that may provide an answer to the commonsense collection problem are text mining from large amounts of data, something that has become possible with the massive availability of text on the Web, and human computation, a workaround technique implemented by outsourcing the 'hard' sub-steps of a problem to people. Text mining has been very successful in extracting huge amounts of commonsense knowledge from data, but the extracted knowledge tends to be extremely noisy. Human computation is also a challenging problem because people can provide unreliable data and may lack motivation to solve problems on behalf of researchers and engineers. A clever, and recently popularized, technique to motivate people to contribute to such projects is to pose the problems as entertaining games and let people solve those problems while they play. This technique, commonly known as the games-with-a-purpose approach, has proved a very powerful way of recruiting laypeople on the Web.
The focus of this thesis is to study methods to collect common sense from people via human computation and from text via text mining, and to explore the opportunities in bringing these two types of methods together. The first contribution of my study is the introduction of a novel text miner trained on a set of known commonsense facts. The text miner is called BagPack and it is based on a vector-space representation of concept pairs that also captures the relation between the concepts in each pair. BagPack harvests a large number of facts from Web-based corpora, and these facts constitute a possibly noisy set of candidate facts.
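A minimal sketch of the general idea behind a pair-based vector space (this is not BagPack's actual feature set or corpus; the toy sentences and the simple cosine measure are only illustrative): each concept pair is represented by the words that co-occur with both concepts, so pairs standing in similar relations end up with similar vectors.

    from collections import Counter

    corpus = [
        "the book has many pages and a hard cover",
        "every book contains pages that readers turn",
        "the grocery store lists a price for each item",
        "each item in the grocery has a price tag",
    ]

    def pair_vector(c1, c2, sentences):
        # Bag of context words from sentences mentioning both concepts.
        vec = Counter()
        for s in sentences:
            tokens = s.split()
            if c1 in tokens and c2 in tokens:
                vec.update(t for t in tokens if t not in (c1, c2))
        return vec

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        norm = lambda w: sum(x * x for x in w.values()) ** 0.5
        return dot / (norm(u) * norm(v) or 1.0)

    v_book = pair_vector("book", "pages", corpus)
    v_groc = pair_vector("grocery", "price", corpus)
    print(cosine(v_book, v_groc))   # low similarity: the two pairs share little context here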
The second contribution of the thesis is Concept Game, a game with a purpose: a simple slot-machine game that presents the candidate facts mined by BagPack to the players. Players are asked to recognize the meaningful facts and discard the meaningless ones in order to score points. As a result, laypeople verify the candidate set and we obtain a refined, high-quality dataset of commonsense facts.
The evaluation of both systems suggests that text mining and human computation can work very efficiently in tandem. BagPack acts as an almost-endless source of candidate facts which are likely to be true, and Concept Game taps laypeople to verify these candidates. Using Web-based text as a source of commonsense knowledge has several advantages with respect to a purely human-computation system which relies on people as the source of information. Most importantly, we can tap domains that people do not talk about when they are directly asked. Also, relying on people just as a source of verification makes it possible to design fast-paced games with a low cognitive burden.
The third issue that I addressed in this thesis is the subjective and stereotypical knowledge which constitutes an important part of our commonsense repository. Regardless of whether one would like to keep such knowledge in an AI system, being able to identify the subjectivity and detect the stereotypical knowledge is an important problem. As a case study, I focused on stereotypical gender expectations about actions. For this purpose, I created a gold standard of actions (e.g., pay bill, become nurse) rated by human judges on whether they are masculine or feminine actions. After that, I extracted, combined, and evaluated two different types of data to predict the gold standard. The first type of data depends on the metadata provided by social media (in particular, the genders of users in a microblogging site like Twitter) and the second one depends on Web-corpus-based pronoun/name gender heuristics. The metadata about the Twitter users helps us to identify which actions are mentioned more frequently by which gender. The Web-corpus-based score helps us to identify which gender is more frequently reported to be carrying out a given action. The evaluation of both methods suggests that 1) it is possible to predict the human gold standard with considerable success, 2) the two methods capture different aspects of stereotypical knowledge, and 3) they work best when combined together.
|
124 |
Concept Search: Semantics Enabled Information Retrieval. Kharkevich, Uladzimir. January 2010.
The goal of information retrieval (IR) is to map a natural language query, which specifies the user's information needs, to a set of objects in a given collection which meet these needs. Historically, there have been two major approaches to IR, which we call syntactic IR and semantic IR. In syntactic IR, search engines use words or multi-word phrases that occur in document and query representations, and the search procedure is principally based on the syntactic matching of these representations. The precision and recall achieved by these search engines can be negatively affected by the problems of (i) polysemy, (ii) synonymy, (iii) complex concepts, and (iv) related concepts. Semantic IR is based on building document and query representations through a semantic analysis of their contents using natural language processing techniques, and then retrieving documents by matching these semantic representations. Semantic IR approaches were developed to improve the quality of syntactic approaches but, in practice, the results of semantic IR are often inferior to those of syntactic approaches. In this thesis, we propose a novel approach to IR which extends syntactic IR with semantics, thus addressing the problem of low precision and low recall of syntactic IR. The main idea is to keep the same machinery which has made syntactic IR so successful, but to modify it so that, whenever possible (and useful), syntactic IR is substituted by semantic IR, thus improving system performance. As instances of the general approach, we describe semantics-enabled approaches to: (i) document retrieval, (ii) document classification, and (iii) peer-to-peer search.
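A minimal sketch of the "semantics where possible, syntax otherwise" idea (not the actual Concept Search implementation; the toy lexicon and documents are hypothetical): index terms are replaced by concept identifiers when a lexicon knows them, and left as plain words otherwise, so synonyms match while unknown words still get ordinary syntactic matching.

    concepts = {"dog": "C_canine", "hound": "C_canine", "car": "C_auto", "automobile": "C_auto"}

    def analyze(text):
        # Map each token to a concept when possible, else fall back to the token itself.
        return {concepts.get(tok, tok) for tok in text.lower().split()}

    docs = {1: "a small dog barks", 2: "the hound chased a cat", 3: "an automobile engine"}
    index = {doc_id: analyze(text) for doc_id, text in docs.items()}

    def search(query):
        q = analyze(query)
        return [doc_id for doc_id, terms in index.items() if q & terms]

    print(search("dog"))    # [1, 2]: 'hound' matches through the shared concept C_canine
    print(search("barks"))  # [1]: unknown word, plain syntactic matching still works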
|
125 |
Design and Characterization of a Current Assisted Photo Mixing Demodulator for TOF Based 3D CMOS Image Sensor. Hossain, Quazi Delwar. January 2010.
Due to the increasing demand for 3D vision systems, many recent efforts have concentrated on obtaining complete 3D information, analogous to human vision. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian safety, biomedical appliances, robotics and industrial control. To this end, several approaches to producing 3D images have been reported, including stereo vision, measurement of object distance from the vision system, and structured light sources, targeting high frame rate, accuracy, wide dynamic range, low power consumption and low cost. Several types of optical techniques for 3D range measurement are available in the literature; among them, one of the most important and intensively investigated is the time-of-flight (TOF) principle. The third dimension, i.e. depth information, can be determined by correlating the modulated light signal reflected from the scene with a reference signal synchronous with the light-source modulation signal.
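The abstract does not spell out the distance computation; as background, the standard continuous-wave TOF relation (not specific to this sensor) derives distance from the measured phase shift between the emitted and reflected modulated light. The snippet below evaluates it at the 20 MHz modulation frequency reported later for the prototype camera.

    # d = c * phi / (4 * pi * f_mod), with unambiguous range d_max = c / (2 * f_mod)
    import math

    C = 299_792_458.0          # speed of light [m/s]

    def distance_from_phase(phi_rad, f_mod_hz):
        return C * phi_rad / (4.0 * math.pi * f_mod_hz)

    f_mod = 20e6               # 20 MHz modulation, as used by the prototype camera
    print(C / (2 * f_mod))                          # ~7.49 m unambiguous range
    print(distance_from_phase(math.pi / 2, f_mod))  # ~1.87 m for a 90-degree phase shift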
CMOS image sensors are capable of integrating the image processing circuitry on the same chip as the light-sensitive elements. Compared to other imaging technologies, they have the advantages of lower power consumption and potentially lower price. These merits make the technology well suited for next-generation solid-state imaging applications. However, CMOS process technologies are developed primarily for high-performance digital circuits.
Different types of photodetectors have been proposed for three-dimensional imaging. A major performance improvement has come from the adoption of inherently mixing detectors that combine detection and demodulation in a single device. Basically, these devices use a modulated electric field to guide the photo-generated charge carriers to different collection sites in phase with a modulation signal. One very promising CMOS photonic demodulator, based on substrate current modulation, has recently been proposed. In this device the electric field penetrates deeper into the substrate, thus enhancing the charge separation and collection mechanism, and a very good sensitivity and high demodulation efficiency can be achieved.
The objective of this thesis has been the design and characterization of a Current Assisted Photo mixing Demodulator (CAPD) to be applied in a TOF-based 3D CMOS sensing system. First, an experimental investigation of the CAPD device is carried out. As a test vehicle, 10×10 pixel arrays have been fabricated in 0.18 µm CMOS technology with a 10×10 µm² pixel size. The main properties of CAPD devices, such as the charge transfer characteristic, modulation contrast, noise performance and non-linearity, have been simulated and experimentally evaluated. Experimental results demonstrate good DC charge separation efficiency and good dynamic demodulation capabilities up to 45 MHz. The influence of parameters such as wavelength, modulation frequency and voltage on this device is also discussed. This test device represents the first step towards a high-resolution TOF-based 3D CMOS image sensor.
The demodulator structure, featuring a remarkably small pixel size of 10×10 µm², is then used to realize a 120×160 pixel ranging sensor array fabricated in standard 0.18 µm CMOS technology. Initial results demonstrate that the demodulator structure is suitable for a real-time 3D image sensor. The prototype camera system is capable of providing real-time distance measurements of a scene through modulated-wave TOF measurements with a modulation frequency of 20 MHz. In the distance measurement, the sensor array provides a linear distance range from 1.2 m to 3.7 m, with a maximum accuracy error of 3.3% and maximum pixel noise of 8.5% at 3.7 m distance. Extensive testing of the device and prototype camera system has been carried out to gain insight into the characteristics of this device, which is a good candidate for integration in large arrays for time-of-flight based 3D CMOS image sensors in the near future.
|
126 |
Optimal Adaptations over Multi-Dimensional Adaptation Spaces with a Spice of Control Theory. Angelopoulos, Konstantinos. January 2016.
(Self-)Adaptive software systems monitor the status of their requirements and adapt when some of these requirements are failing. The baseline for much of the research on adaptive software systems is the concept of a feedback loop mechanism that monitors the performance of a system relative to its requirements, determines root causes when there is a failure, selects an adaptation, and carries it out. The degree of adaptivity of a software system critically depends on the space of possible adaptations supported (and implemented) by the system: the larger the space, the more adaptations a system is capable of. This thesis tackles the following questions: (a) How can we define multi-dimensional adaptation spaces that subsume proposals for requirements- and architecture-based adaptation spaces? (b) Given one or more failures, how can we select an optimal adaptation with respect to one or more objective functions? To answer the first question, we propose a design process for three-dimensional adaptation spaces, named the Three-Peaks Process, that iteratively elicits control and environmental parameters from requirements, architectures and behaviours for the system-to-be. For the second question, we propose three adaptation mechanisms. The first mechanism is founded on the assumption that only qualitative information is available about the impact of changes of the system's control parameters on its goals. The absence of quantitative information is mitigated by a new class of requirements, namely Adaptation Requirements, which impose constraints on the adaptation process itself and dictate policies about how conflicts among failing requirements must be handled. The second mechanism assumes that quantitative information about the impact of changes of control parameters on the system's goals is available, and formulates the problem of finding an adaptation as a constrained multi-objective optimization problem: the mechanism measures the degree of failure of each requirement and selects an adaptation that minimizes it along with other objective functions, such as cost. Optimal solutions are derived by exploiting OMT/SMT (Optimization Modulo Theories/Satisfiability Modulo Theories) solvers. The third mechanism operates under the assumption that the environment changes dynamically over time and the chosen adaptation has to take such changes into account. In this direction, we apply Model Predictive Control, a well-developed theory with myriad successful applications. In our work, we rely on state-of-the-art system identification techniques to derive the dynamic relationship between requirements and possible adaptations, and then propose the use of a controller that exploits this relationship to optimize the satisfaction of requirements relative to a cost function. This adaptation mechanism can guarantee a certain level of requirements satisfaction over time by dynamically composing adaptation strategies when necessary. Finally, each piece of our work is evaluated through experimentation using variations of the Meeting-Scheduler exemplar.
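As a hedged sketch of the flavour of the second mechanism (selecting an adaptation by constrained multi-objective optimization over an SMT/OMT engine), the toy example below uses the z3 Optimize API; the control parameters, the linear failure model, the bounds and the objective order are all invented for illustration and are not taken from the thesis.

    from z3 import Optimize, Real, sat

    opt = Optimize()
    workers   = Real("workers")     # hypothetical control parameter: servers to run
    timeout_s = Real("timeout_s")   # hypothetical control parameter: request timeout

    # Degree of failure of a response-time requirement (toy linear model) and cost.
    failure = Real("failure")
    cost    = Real("cost")
    opt.add(workers >= 1, workers <= 10, timeout_s >= 1, timeout_s <= 30)
    opt.add(failure == 100 - 8 * workers - timeout_s)    # more workers -> less failure
    opt.add(cost == 5 * workers)                         # more workers -> higher cost
    opt.add(failure >= 0)

    # Minimize failure first, then cost (z3 treats multiple objectives lexicographically).
    opt.minimize(failure)
    opt.minimize(cost)
    if opt.check() == sat:
        m = opt.model()
        print(m[workers], m[timeout_s], m[failure], m[cost])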
|
127 |
Sensing Social Interactions Using Non-Visual and Non-Auditory Mobile Sources, Maximizing Privacy and Minimizing Obtrusiveness. Matic, Aleksandar. January 2012.
Social interaction is one of the basic components of human life, impacting thoughts, emotions, decisions, and the overall wellbeing of individuals. In this regard, monitoring social activity constitutes an important factor for a number of disciplines, particularly those related to the social and health sciences. Sensor-based social interaction data collection has been seen as a groundbreaking tool which has the potential to overcome the drawbacks of traditional self-reporting methods and to revolutionize social behavior analysis. However, monitoring social interactions typically implies a trade-off between the quality of the collected data and the levels of unobtrusiveness and privacy protection, aspects which can affect spontaneity in subjects' behavior. Despite the substantial research in the area of automatic recording of social interactions, the existing solutions remain limited: they either capture audio/video data, which may raise privacy concerns in monitored subjects and may restrict the application to very specific areas, or provide low accuracy in detecting social interactions that occur on a small spatio-temporal scale.
The objective of this thesis is to provide and evaluate a solution for mobile monitoring of face-to-face social interactions which maximizes privacy and minimizes obtrusiveness. In order to reliably detect social interactions that occur on a small spatio-temporal scale, the proposed solution infers two types of information, namely the spatial settings between subjects and their speech activity status. The challenge was to select appropriate sources that do not restrict application scenarios to certain areas and do not capture privacy-sensitive data, which are the drawbacks of video/audio systems. The second stage was to interpret the data acquired from non-visual and non-auditory sources and to model social interactions on small space and time scales. The work in this thesis assesses the reliability of the proposed approach in several scenarios, demonstrating an accuracy of approximately 90% in detecting the occurrence of face-to-face social interactions.
The feasibility of using the proposed approach for social interaction data collection is further evaluated with respect to the study of social psychology, which serves as the guideline for extracting the relevant features of social interactions. The evaluation has demonstrated the possibility of extracting various nonverbal behavioral cues related to the spatial organization between individuals and their vocal behavior in social interactions. By modeling social context using the extracted features, it is possible to achieve an accuracy of 81% in the automatic classification of formal versus informal social interactions. In addition, the proposed approach was applied to gather daily patterns of social activity in order to investigate their correlation with mood changes in individuals, which so far has been explored only using traditional self-reporting methods. The findings are consistent with previous studies, thus indicating the possibility of using the proposed method of collecting social interaction data to investigate the psychological effects of social activities.
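A purely illustrative sketch of the final classification step (the feature set, the synthetic data points and the choice of logistic regression below are assumptions, not the thesis' actual model or dataset): nonverbal cues such as interpersonal distance, relative orientation and speaking-time balance are fed to a standard classifier to separate formal from informal interactions.

    from sklearn.linear_model import LogisticRegression

    # Each row: [mean distance (m), mean |orientation difference| (deg), speaking-time ratio]
    X = [
        [1.8, 160, 0.20],   # far apart, face-to-face, one person dominates -> formal
        [2.0, 170, 0.15],
        [0.8,  90, 0.55],   # close, side-by-side, balanced turns -> informal
        [0.7, 100, 0.60],
        [1.6, 150, 0.25],
        [0.9,  80, 0.50],
    ]
    y = ["formal", "formal", "informal", "informal", "formal", "informal"]

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[1.7, 155, 0.22], [0.8, 95, 0.58]]))   # expected: ['formal' 'informal']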
|
128 |
Dynamic Biological Modelling: a language-based approach. Romanel, Alessandro. January 2010.
Systems biology investigates the interactions and relationships among the components of biological systems to understand how they work globally. The metaphor "cells as computations", introduced by Regev and Shapiro, opened the realm of biological modelling to concurrent languages. Their peculiar characteristics led to the development of many different bio-inspired languages that make it possible to abstract and study specific aspects of biological systems. In this thesis we present a language based on the process calculi paradigm and specifically designed to account for the complexity of signalling networks. We explore a new design space for bio-inspired languages, with the aim of capturing in an intuitive and simple way the fundamental mechanisms governing protein-protein interactions. We develop a formal framework for modelling, simulating and analysing biological systems, and provide an implementation of the framework to enable in-silico experimentation.
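As a hedged illustration of the kind of in-silico experiment such a language is meant to drive (this is a plain Gillespie-style stochastic simulation of a single binding reaction, not the process calculus defined in the thesis):

    import random

    def gillespie_binding(n_a, n_b, rate=0.001, t_end=10.0, seed=42):
        # Stochastic simulation of the protein-protein binding reaction A + B -> AB.
        random.seed(seed)
        t, n_ab = 0.0, 0
        while t < t_end and n_a > 0 and n_b > 0:
            propensity = rate * n_a * n_b
            t += random.expovariate(propensity)   # time to the next binding event
            if t >= t_end:
                break
            n_a, n_b, n_ab = n_a - 1, n_b - 1, n_ab + 1
        return n_a, n_b, n_ab

    print(gillespie_binding(100, 80))   # remaining A, remaining B, complexes formed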
|
129 |
Parametric Real-Time System Feasibility Analysis Using Parametric Timed Automata. Ramadian, Yusi. January 2012.
Real-time applications are playing an increasingly significant role in our life. The cost and risk involved in their design lead to the need for a correct and robust modelling of the system before its deployment. Many approaches have been proposed to verify the schedulability of real-time task systems. A frequent limitation is that they force the task activation to restrictive patterns (e.g. periodic). Furthermore, the type of analysis carried out by real-time scheduling theory relies on restrictive assumptions that could make designers miss important optimization opportunities. On the other hand, the application of formal methods for the verification of timed systems typically produces a yes/no answer that does not suggest any corrective action or robustness margins for a given design.
This work proposes an approach that combines the benefits of formal methods, in terms of flexibility, with the production of clear feedback for the designers. The key idea is to use parametric timed automata to enable the definition of flexible task activation patterns. The Parametric Verification of Temporal Properties (PTVP) algorithm proposed in this work produces a region of feasible parameters for a real-time system. Every parameter valuation within this region is guaranteed to make the system respect the desired temporal behaviour. In this way developers are provided with richer information than the simple feasibility of a given design choice.
The method uses symbolic model checking techniques to produce the result, which is a union of polyhedral regions in the parameter space associated with feasible parameters. It is implemented in the tool Quinq, which is based on NuSMV3. The tool also implements optimizations to speed up the search, such as using a non-parametric model checker to find counterexamples (i.e. traces) related to unfeasible choices of parameters.
Two applications of the tool and of the underlying method to several real-time system examples are presented in this dissertation: periodic real-time tasks with offsets and heterogeneous distributed real-time systems. A work that applies the tool in combination with another real-time system analysis tool, the Modular Performance Analysis Toolbox, is also presented to show one of the many possible applications of the method.
In this work we also compare our approach to the state of the art in the field of sensitivity analysis of real-time systems. Compared to the other tools and approaches in this field, the method offered in this work presents unique advantages in the generality of the system modelling approach and the possibility to analyse the entire feasibility region of any desired parameter in the system.
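As a much simpler stand-in for the symbolic PTVP algorithm (which computes the feasible region exactly, as a union of polyhedra), the sketch below scans a two-parameter space by brute force using classic fixed-priority response-time analysis; the task set and parameter ranges are invented for illustration only.

    def response_time(c_i, higher_prio, deadline):
        # Iterative response-time analysis: R = C_i + sum_j ceil(R / T_j) * C_j
        r = c_i
        while True:
            r_next = c_i + sum(((r + t - 1) // t) * c for c, t in higher_prio)
            if r_next == r or r_next > deadline:
                return r_next
            r = r_next

    def feasible(task_set):
        # task_set: list of (wcet, period), implicit deadline = period, priority = list order
        return all(response_time(c, task_set[:i], t) <= t
                   for i, (c, t) in enumerate(task_set))

    # Fixed high-priority task (wcet=2, period=10); scan the parameters of a second task.
    region = [(c2, t2)
              for c2 in range(1, 15)
              for t2 in range(5, 40, 5)
              if feasible([(2, 10), (c2, t2)])]
    print(region)   # explicit list of feasible (wcet, period) pairs for the second task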
|
130 |
Closing the Gap between Business Process Analysis and Service Workflow Design with the BPM-SIC Methodology. Vairetti, Carla Marina. January 2016.
Nowadays companies and organizations are challenged to integrate and automate their business processes. A business process is a set of logically related tasks carried out to produce a product or service. Business processes are typically implemented using Web services, programmable interfaces that can be invoked through standard communication protocols. In general, the need to outsource parts of a business process results in a large number of Web services, which are generally heterogeneous and distributed among various organizations and platforms. The ability to select and integrate these Web services at runtime is desirable, as it would enable Web service platforms to react quickly to changing business needs and failures, reducing implementation costs and minimizing losses due to poor availability. The goal of dynamic and automatic Web service composition is to generate a composition plan (workflow) at runtime that meets a certain business goal. Semantics-based techniques exploit specialized service annotations to facilitate the discovery of simple or composed services (matchmaking) that form part of the composition plan. Usually, the matchmaking process pays more attention to the selection of services and much less to the behavior of the composed service (workflow), which tends to be very simple. In industry, on the contrary, composed services or workflows are manually defined and typically follow complex control-flow patterns that implement elaborate business processes. Although a dynamic and automatic service composition technique produces an executable workflow that implements a business process, the workflow must still be validated against the business goal. This high-level analysis is usually performed by domain experts (BPA: Business Process Analyst) who must coordinate the implementation of the business processes with technical experts (SA: System Architect). The conversation between BPA and SA is a fundamental requirement for the cycle of creation of an executable business process. The lack of communication between the two participants not only causes delays in development time, but also generates product failures and unnecessary cycles, often leading to increases in production costs and large monetary losses for organizations. In this thesis, we have developed three approaches that narrow the gap between BPA and SA and make their collaboration more effective. On one hand, we present a Web service composition technique that is dynamic and automatic and is based on services' semantic descriptions; the composed service corresponds to an executable workflow with complex control flow, facilitating the SA's implementation task. On the other hand, we provide a tool that allows BPAs to verify and analyze the performance of their business processes. Finally, we exploit both tools to propose a methodology that integrates both perspectives, allowing knowledge transfer in both directions. We obtained promising results that reveal inconsistencies in the development and design of business processes and provide recommendations for best practices in both directions.
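A minimal sketch of the matchmaking and chaining idea behind dynamic composition (not the BPM-SIC methodology itself; the service names and input/output concepts are hypothetical): services are annotated with input and output concepts, and a plan is built by forward-chaining services whose inputs are already satisfied until the goal concept is produced.

    services = {
        "GeocodeAddress":  ({"address"}, {"coordinates"}),
        "FindNearbyStore": ({"coordinates"}, {"store_id"}),
        "CheckStock":      ({"store_id", "product_id"}, {"availability"}),
    }

    def compose(available, goal):
        plan, known = [], set(available)
        while goal - known:
            candidates = [name for name, (ins, outs) in services.items()
                          if ins <= known and not outs <= known]
            if not candidates:
                return None                     # no service can make progress
            chosen = candidates[0]
            plan.append(chosen)
            known |= services[chosen][1]
        return plan

    print(compose({"address", "product_id"}, {"availability"}))
    # -> ['GeocodeAddress', 'FindNearbyStore', 'CheckStock']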
|