21

Corrective Evolution of Adaptable Process Models

Sirbu, Adina Iulia January 2013 (has links)
Modeling business processes is a complex and time-consuming task, which can be simplified by allowing process instances to be structurally adapted at runtime, based on context (e.g., by adding or deleting activities). The process model then no longer needs to include a handling procedure for every exception that can occur. Instead, it only needs to include the assumptions under which a successful execution is guaranteed. If a design-time assumption is violated, the exception handling procedure matching the context is selected at runtime. However, if runtime structural adaptation is allowed, the process model may later need to be updated based on the logs of adapted process instances. Evolving the process model is necessary if adapting at runtime is too costly, or if certain adaptations fail and should be avoided. An issue that is insufficiently addressed in previous work on process evolution is how to evolve a process model while ensuring that the evolved model continues to achieve the goal of the original model. We refer to the problem of evolving a process model based on selected instance adaptations, such that the evolved model satisfies the goal of the original model, as corrective evolution. Automated techniques for solving the corrective evolution problem are necessary for two reasons. First, the more complex a process model is, the more difficult it is to change manually. Second, there is a need to verify that the evolved model satisfies the original goal. To develop automated techniques, we first formalize the problem of corrective evolution. Since we use a graph-based representation of processes, a key element in our formal model is the notion of trace. When plugging an instance adaptation in at a particular point in the process model, there can be multiple paths in the model for reaching this point. Each of these paths is uniquely identified by a trace, i.e., a recording of the activities executed up to that point. Depending on traces, an instance adaptation can be used to correct the process model in three different ways. A correction is strict if the adaptation should be plugged in on a precise trace, relaxed if on all traces, and relaxed with conditions if on a subset of all traces. The choice is driven by competing concerns: the evolved model should not introduce untested behavior, but it should also remain understandable. Using our formal model, we develop automated techniques for solving the corrective evolution problem in two cases. The first case is also the most restrictive, when all corrections are strict. This case does not require verification, since the process model and adaptations are assumed to satisfy the goal, as long as the adaptations are applied on the corresponding traces. The second case is when corrections are either strict or relaxed. This second case requires verification, and for this reason we develop an automated technique based on planning. We implemented the two automated techniques as tools, which are integrated into a common toolkit. We used this toolkit to evaluate the tradeoffs between applying strict and relaxed corrections on a scenario built on a real event log.
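To make the role of traces concrete, the sketch below (Python, with invented names; it is not the thesis toolkit) shows how one might check whether a correction applies on a given trace, distinguishing the strict, relaxed, and conditional corrections described above.

```python
from dataclasses import dataclass
from typing import Callable, Optional

Trace = tuple[str, ...]  # activities executed up to the adaptation point

@dataclass
class Correction:
    """Hypothetical representation of an instance adaptation plugged into a model."""
    adaptation: list[str]                     # activities inserted at the adaptation point
    mode: str                                 # "strict", "relaxed", or "conditional"
    anchor_trace: Optional[Trace] = None      # required for strict corrections
    condition: Optional[Callable[[Trace], bool]] = None  # required for conditional ones

    def applies_to(self, trace: Trace) -> bool:
        if self.mode == "strict":
            return trace == self.anchor_trace   # only the exact recorded trace
        if self.mode == "relaxed":
            return True                          # every path reaching the point
        if self.mode == "conditional":
            return self.condition is not None and self.condition(trace)
        raise ValueError(f"unknown correction mode: {self.mode}")

# Usage: a strict correction tested against two traces reaching the same point.
fix = Correction(adaptation=["NotifyCustomer"], mode="strict",
                 anchor_trace=("Receive", "CheckStock", "Reject"))
print(fix.applies_to(("Receive", "CheckStock", "Reject")))   # True
print(fix.applies_to(("Receive", "Reject")))                 # False: untested path
```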
22

Distributed Contact and Identity Management

Hume Llamosas, Alethia Graciela January 2014 (has links)
Contact management is a twofold problem involving a local and a global level, where the separation between them is rather fuzzy. Locally, users need to deal with contact management, which refers to the need to store, organize, keep up to date, and find information that will allow them to contact or reach other people, organizations, etc. Globally, users deal with identity management, which refers to peers having multiple identities (i.e., profiles) and the need to stay in control of them. In other words, they should be able to manage what information is shared and with whom. We believe many existing applications try to deal with this problem by looking only at the data level, without analyzing the underlying complexity. Our approach focuses on the complex social relations and interactions between users, identifying three main subproblems: (i) management of identity, (ii) search, and (iii) privacy. The solution we propose concentrates on the models that are needed to address these problems. In particular, we propose a Distributed Contact Management System (DCM System) that: models and represents the knowledge of peers about physical or abstract objects through the notion of entities, which can be of different types (e.g., locations, people, events, facilities, organizations, etc.) and are described by a set of attributes; by representing contacts as entities, allows peers to locally organize their contacts taking into consideration the semantics of the contacts' characteristics; and, by describing peers as entities, allows them to manage their different identities in the network by sharing different views of themselves (showing possibly different information) with different people. The contributions of this thesis are: (i) the definition of a reference architecture that allows dealing with the diversity arising from the partial views that peers have of the world, (ii) an approach to search entities based on identifiers, (iii) an approach to search entities based on descriptions, and (iv) the definition of the DCM System that instantiates the previously mentioned approaches and architecture to address concrete usage scenarios.
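As a rough illustration of the entity-centric view (a hypothetical sketch, not the DCM System's actual data model), a peer can be represented as an entity whose attributes are filtered per audience, yielding the different identity views mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A typed entity described by a set of attributes (people, locations, events, ...)."""
    etype: str
    attributes: dict[str, str] = field(default_factory=dict)

@dataclass
class Peer(Entity):
    # Which attributes each audience is allowed to see (hypothetical policy format).
    views: dict[str, set[str]] = field(default_factory=dict)

    def view_for(self, audience: str) -> dict[str, str]:
        """Return the partial identity this peer exposes to a given audience."""
        allowed = self.views.get(audience, set())
        return {k: v for k, v in self.attributes.items() if k in allowed}

alice = Peer(
    etype="person",
    attributes={"name": "Alice", "work_email": "a@example.org", "home_phone": "555-0100"},
    views={"colleagues": {"name", "work_email"}, "family": {"name", "home_phone"}},
)
print(alice.view_for("colleagues"))  # {'name': 'Alice', 'work_email': 'a@example.org'}
print(alice.view_for("family"))      # {'name': 'Alice', 'home_phone': '555-0100'}
```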
23

Multimodal Recognition of Social Behaviors and Personality Traits in Small Group Interaction

Lepri, Bruno January 2009 (has links)
In recent years, the automatic analysis of human behaviour has been attracting an increasing amount of attention from researchers because of its important applicative aspects and its intrinsic scientific interest. In many technological fields (pervasive and ubiquitous computing, multimodal interaction, ambient assisted living and assisted cognition, computer supported collaborative work, user modelling, automatic visual surveillance, etc.) the awareness is emerging that systems can provide better and more appropriate services to people only if they can understand much more than they presently do about users' attitudes, preferences, personality, etc., as well as about what people are doing, the activities they have been engaged in in the past, etc. At the same time, progress on sensors, sensor networking, computer vision, audio analysis and speech recognition is making available the building blocks for automatic behavioural analysis. Multimodal analysis, that is, the joint consideration of several perceptual channels, is a powerful tool to extract large and varied amounts of information from the acoustical and visual scene and from other sensing devices (e.g., RFIDs, on-body accelerometers, etc.). In this thesis, we consider small group meetings as a challenging example and case study of real-life situations in which the multimodal analysis of social signals can be used to extract relevant information about the group and about individuals. In particular, we show how the same type of social signals can be used to reconstruct apparently disparate and diverse aspects of social and individual life, ranging from the functional roles played by the participants in a meeting, to static characteristics of individuals (personality traits) and behavioural outcomes (task performance).
24

Distributed Identity Management

Pane Fernandez, Juan Ignacio January 2012 (has links)
Semantics is a local and a global problem at the same time. Local, because it is in the minds of people, who have personal interpretations; and global, because we need to reach a common understanding by sharing and aligning these personal interpretations. As opposed to current state-of-the-art approaches based on a two-layer architecture (local and global), we deal with this problem by designing a general three-layer architecture rooted in the personal, social, and universal levels. The new intermediate social level acts as a global level for the personal level, where semantics is managed around communities focusing on specific domains, and as a local level for the universal level, as it only deals with one part of universal knowledge. For each of these layers there are three main components of knowledge that help us encode the semantics at the right granularity. These are: i) Concrete knowledge, which allows us to achieve semantic compatibility at the level of entities, the things we want to talk about; ii) Schematic knowledge, which defines the structure and methods of the entities; and iii) Background knowledge, which enables compatibility at the level of the language used to describe and structure entities. The contribution of this work is threefold: i) the definition of a general architecture for managing the semantics of entities, ii) the development of the components of the system based on this architecture, namely structure-preserving semantic matching and sense induction algorithms, and iii) the evaluation of these components with the creation of new gold standard datasets.
25

Energy Adaptive Infrastructure for Sustainable Cloud Data Centres

Dupont, Corentin January 2016 (has links)
With rising concerns about the environment, ICT equipment has been pointed out as a major and ever-rising source of energy consumption and pollution. Among such equipment, data centres obviously play a major role with the rise of the Cloud computing paradigm. In recent years, researchers have focused on reducing the energy consumption of data centres. Furthermore, future environmentally friendly data centres are also expected to prioritize the usage of renewable energy over brown energy. However, managing the energy consumption within a data centre is challenging because data centres are complex facilities which support a huge variety of hardware, computing styles and SLAs. These may evolve over time, as user requirements can change rapidly. Furthermore, unlike non-renewable energy sources, the availability of renewable energy is very volatile and time-dependent: e.g., solar power is obtainable only during the day and is subject to variations due to meteorological conditions. The goal in this case is to shift the workload of running applications according to the forecasted availability of renewable energy. In this thesis we propose a flexible framework called Plug4Green that is able to reduce the energy consumption of a Cloud data centre. Plug4Green is based on the Constraint Programming paradigm, allowing it to take into account a great number of constraints regarding energy, hardware and SLAs in data centres. We also propose the concept of an energy adaptive software controller (EASC), able to increase the usage of renewable energy in data centres. The EASC supports two kinds of applications, service-oriented and task-oriented, and two kinds of computing environments, Infrastructure as a Service and Platform as a Service. We evaluated our solutions in several trials executed on the testbeds of Milan and Trento, Italy. Results show that Plug4Green was able to reduce the power consumption by 27% in the Milan trial, while the EASC was able to increase the renewable energy percentage by 7.07pp in the Trento trial.
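To illustrate the kind of problem Plug4Green solves, the toy sketch below (plain Python with invented figures; Plug4Green itself relies on a constraint programming engine and a far richer model of hardware, SLAs and energy) searches for a VM placement that respects server capacities while minimizing power:

```python
from itertools import product

# Toy data (invented): VMs with CPU demand, servers with capacity and a simple power model.
vms = {"vm1": 2, "vm2": 3, "vm3": 1}                       # CPU cores demanded
servers = {"s1": {"cores": 4, "idle_w": 80, "w_per_core": 30},
           "s2": {"cores": 6, "idle_w": 120, "w_per_core": 25}}

def power(assignment):
    """Total power: idle cost for every switched-on server plus a per-core cost."""
    total = 0
    for s, spec in servers.items():
        load = sum(vms[v] for v, host in assignment.items() if host == s)
        if load > spec["cores"]:
            return None                                     # capacity constraint violated
        if load > 0:
            total += spec["idle_w"] + spec["w_per_core"] * load
    return total

# Exhaustive search over placements; a real engine would use constraint programming instead.
best = min((p for p in (dict(zip(vms, hosts))
                        for hosts in product(servers, repeat=len(vms)))
            if power(p) is not None),
           key=power)
print(best, power(best), "W")   # here: consolidate all VMs onto s2 (270 W)
```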
26

Effective Analysis, Characterization, and Detection of Malicious Activities on the Web

Eshete, Birhanu Mekuria January 2013 (has links)
The Web has evolved from a handful of static web pages to billions of dynamic and interactive web pages. This evolution has positively transformed the paradigm of communication, trading, and collaboration for the benefit of humanity. However, these invaluable benefits of the Web are shadowed by cyber-criminals who use the Web as a medium to perform malicious activities motivated by illegitimate benefits. Cyber-criminals often lure victims to visit malicious web pages, exploit vulnerabilities on victims' devices, and then launch attacks that could lead to: stealing invaluable credentials of victims, downloading and installing malware on victims' devices, or completely compromising victims' devices to mount future attacks. While the current state of the art in detecting malicious web pages is promising, it is still limited in addressing the following three problems. First, for the sake of focused detection of certain classes of malicious web pages, existing techniques are limited to partial analysis and characterization of attack payloads. Second, attacker-motivated and benign evolution of web page artifacts has challenged the resilience of existing detection techniques. The third problem is the prevalence and evolution of Exploit Kits used in spreading web-borne malware. In this dissertation, we present the approaches and the tools we developed to address these problems. To address the partial analysis and characterization of attack payloads, we propose a holistic and lightweight approach that combines static analysis and minimalistic emulation to analyze and detect malicious web pages. This approach leverages features from the URL structure, the HTML content, the JavaScript executed on the client, and the reputation of URLs on social networking websites to train multiple models, which are then used in a confidence-weighted majority vote classifier to detect unknown web pages. Evaluation of the approach on a large corpus of web pages shows that it is not only precise enough to detect malicious web pages with very low false signals but also does so with a minimal performance penalty. To address the evolution of web page artifacts, we propose an evolution-aware approach that tunes detection models in line with the evolution of web page artifacts. Our approach takes advantage of evolutionary search and optimization using a Genetic Algorithm to decide the best combination of features and learning algorithms, i.e., models, as a function of detection accuracy and false signals. Evaluation of our approach suggests that it reduces false negatives by about 10% on a fairly large testing corpus of web pages. To tackle the prevalence of Exploit Kits on the Web, we first analyze the source code and runtime behavior of several Exploit Kits in a contained setting. In addition, we analyze the behavior of live Exploit Kits on the Web in a contained environment. Combining the analysis results, we characterize Exploit Kits in terms of their attack-centric and self-defense behaviors. Based on these behaviors, we draw distinguishing features to train classifiers used to detect URLs that are hosted by Exploit Kits. The evaluation of our classifiers on an independent testing dataset shows that our approach is effective in precisely detecting malicious URLs linked with Exploit Kits with very low false positives.
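As an illustration of the combination step (a simplified sketch with invented model names and weights, not the dissertation's implementation), per-model predictions and confidences could be merged in a weighted majority vote as follows:

```python
def weighted_majority(predictions, weights):
    """Combine per-model votes into one label.

    predictions: {model_name: (label, confidence)} with label in {"malicious", "benign"}
    weights:     {model_name: weight, e.g. learned from each model's validation accuracy}
    """
    score = 0.0
    for model, (label, confidence) in predictions.items():
        vote = 1.0 if label == "malicious" else -1.0
        score += weights.get(model, 1.0) * confidence * vote
    return ("malicious" if score > 0 else "benign"), score

# Hypothetical per-feature-group models voting on one page.
preds = {"url_model": ("benign", 0.55),
         "html_model": ("malicious", 0.80),
         "js_model": ("malicious", 0.70),
         "reputation_model": ("benign", 0.60)}
weights = {"url_model": 0.8, "html_model": 1.2, "js_model": 1.1, "reputation_model": 0.9}

label, score = weighted_majority(preds, weights)
print(label, round(score, 3))   # the HTML and JavaScript models outweigh the other two
```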
27

Efficient Automated Security Analysis of Complex Authorization Policies

Truong, Anh January 2015 (has links)
Access Control is becoming increasingly important for today's ubiquitous systems. Sophisticated security requirements need to be ensured by authorization policies for increasingly complex and large applications. As a consequence, designers need to understand such policies and ensure that they meet the desired security constraints, while administrators must also maintain them so as to comply with the evolving needs of systems and applications. These tasks are greatly complicated by the expressiveness and the dimensions of the authorization policies. It is thus necessary to provide policy designers and administrators with automated analysis techniques that are capable of foreseeing if, and under what conditions, security properties may be violated. For example, some analysis techniques have already been proposed in the literature for Role-Based Access Control (RBAC) policies. RBAC is a security model for access control that has been widely adopted in real-world applications. Although RBAC simplifies the design and management of policies, modifications of RBAC policies in complex organizations are difficult and error-prone activities due to the limited expressiveness of the basic RBAC model. For this reason, RBAC has been extended in several directions to accommodate various needs arising in the real world, such as Administrative RBAC (ARBAC) and Temporal RBAC (TRBAC). This dissertation presents our research efforts to find the best trade-off between scalability and expressiveness for the design and benchmarking of analysis techniques for authorization policies. We review the state of the art of automated analysis for authorization policies, identify limitations of the available techniques, and then describe our approach, which relies on recently developed symbolic model checking techniques based on Satisfiability Modulo Theories (SMT) solving (for expressiveness) and carefully tuned heuristics (for scalability). In particular, we present the implementation of these techniques for the automated analysis of ARBAC and ATRBAC policies and discuss extensive experiments showing that the proposed approach is superior to other state-of-the-art analysis techniques. Finally, we discuss directions for extensions.
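To give a flavour of the underlying analysis question (a deliberately simplified, explicit-state sketch; the techniques in the dissertation are symbolic and SMT-based), the following code checks whether a user can ever reach a sensitive role under hypothetical ARBAC can-assign/can-revoke rules:

```python
from collections import deque

# Hypothetical ARBAC rules governing a single user's role set.
# can_assign: (precondition roles, forbidden roles, target role)
can_assign = [({"Employee"}, {"Contractor"}, "Manager"),
              ({"Manager"}, set(), "Auditor")]
can_revoke = {"Manager", "Contractor"}          # roles an administrator may revoke

def reachable(initial_roles, goal_role):
    """Breadth-first search over role configurations (explicit-state, not symbolic)."""
    start = frozenset(initial_roles)
    seen, queue = {start}, deque([start])
    while queue:
        roles = queue.popleft()
        if goal_role in roles:
            return True
        successors = [roles | {t} for pre, neg, t in can_assign
                      if pre <= roles and not (neg & roles)]
        successors += [roles - {r} for r in can_revoke if r in roles]
        for nxt in map(frozenset, successors):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A safety question: can a plain Employee ever become an Auditor?
print(reachable({"Employee"}, "Auditor"))   # True: Employee -> Manager -> Auditor
```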
28

On Efficient Algorithms for Stochastic Simulation of Biochemical Reaction Systems

Vo, Hong Thanh January 2013 (has links)
Computational techniques provide invaluable tools for developing a quantitative understanding of the complexity of biological systems. The knowledge of the biological system under study is formalized in a precise form by a model. A simulation algorithm will realize the dynamic interactions encoded in the model. The simulation can uncover biological implications and derive further predictive experiments. Several successful approaches with different levels of detail have been introduced to deal with various biological pathways, including regulatory networks, metabolic pathways and signaling pathways. The Stochastic Simulation Algorithm (SSA), in particular, is an exact method to realize the time evolution of a well-mixed biochemical reaction network. It takes the inherent randomness in biological reactions and the discrete nature of the involved molecular species as the main source of randomness in sampling a reaction event. SSA is useful for reaction networks with low populations of molecular species, especially key species. The macroscopic response can be significantly affected, both quantitatively and qualitatively, when these species are involved in the reactions. Even though the underlying assumptions of SSA are obviously simplified for real biological networks, it has been proved capable of reproducing the stochastic effects in biological behaviour. Essentially, SSA uses a Monte Carlo simulation technique to realize the temporal behaviour of a biochemical network. A reaction is randomly selected to fire at a time according to its propensity by conducting a search procedure. The fired reaction leads the system to a new configuration. At this new configuration, reactions have to update their propensities to reflect the changes. In this thesis we investigate new algorithms for improving the performance of SSA. First, we study the application of tree-based search for improving the search for the next reaction firing, and devise a solution to optimize the average search length. We prove that with a tree-based search the performance of SSA can be considerably improved, moving the search from linear time complexity to logarithmic complexity. We combine this idea with others from the literature, and compare the performance of our algorithm with previous ones. Our experiments show that our algorithm is faster, especially on large models. Second, we focus on reducing the cost of propensity updates. Although the computational cost for evaluating one reaction propensity is small, the cumulative cost for a large number of reactions contributes a significant portion of the simulation time. Typical experiments show that propensity updates contribute 65% to 85%, and in some special cases up to 99%, of the total simulation time, even when a dependency graph is applied. Moreover, sometimes one models the kinetics using a complex propensity formula, further increasing the cost of propensity updates. We study and propose a new exact simulation algorithm, called RSSA (Rejection-based SSA), to reduce the cost of propensity updates. The principle of RSSA is to use an over-approximation of the propensities to select a reaction firing. The exact propensity value is evaluated only as needed. Thus, propensity updates are postponed and collapsed as much as possible. We show through experiments that the propensity updates performed by our algorithm are significantly reduced, substantially improving the simulation time. Third, we extend our study to reaction-diffusion processes.
The simulation should explicitly account for the diffusion of species in space. The compartment-based reaction-diffusion simulation is based on dividing the space into subvolumes so that each subvolume is well-mixed. The diffusion of a species between subvolumes is modelled as an additional unimolecular reaction. We propose a new algorithm, called Rejection-based Reaction Diffusion (RRD), to efficiently simulate such reaction-diffusion systems. RRD combines the tree-based search and the idea of RSSA to select the next reaction firing in a subvolume. The highlight of RRD compared with previous algorithms is that the selection of both the subvolume and the reaction uses only the over-approximation of propensities. We prove the correctness of RRD and experimentally show its performance improvement over other compartment-based approaches in the literature. Finally, we focus on performing a statistical analysis of a targeted event by stochastic simulation. A direct application of SSA is generating trajectories and then counting the number of successful ones. Rare events, which occur only with a very small probability, however, make this approach infeasible, since a prohibitively large number of trajectories would need to be generated before the estimation becomes reasonably accurate. We propose a new method, called splitting SSA (sSSA), to improve the accuracy and efficiency of stochastic simulation when applied to this problem. Essentially, sSSA is a kind of biased simulation that encourages the evolution of the system towards the target event, making it more likely, yet in such a way that allows one to recover an unbiased estimate of the probability. We compare both the performance and the accuracy of sSSA and SSA by experimenting with some concrete scenarios. Experimental results show that sSSA is more efficient than the naive SSA approach.
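For readers unfamiliar with SSA, the following minimal sketch of Gillespie's direct method on a toy two-reaction network shows the selection and update steps that the tree-based search and RSSA are designed to accelerate (the network and rates are invented for illustration):

```python
import random

# Toy network (invented): R1: A + B -> C with rate c1; R2: C -> A + B with rate c2.
state = {"A": 100, "B": 80, "C": 0}
reactions = [
    {"propensity": lambda s: 0.002 * s["A"] * s["B"],          # c1 = 0.002
     "update": {"A": -1, "B": -1, "C": +1}},
    {"propensity": lambda s: 0.1 * s["C"],                      # c2 = 0.1
     "update": {"A": +1, "B": +1, "C": -1}},
]

def ssa_direct(state, reactions, t_end):
    """Gillespie's direct method: sample the waiting time, pick a reaction, update the state."""
    t = 0.0
    while t < t_end:
        props = [r["propensity"](state) for r in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                                   # no reaction can fire any more
        t += random.expovariate(a0)                 # exponentially distributed waiting time
        target, acc = random.random() * a0, 0.0
        for reaction, a in zip(reactions, props):   # linear search over propensities;
            acc += a                                # this is the step RSSA / tree search speed up
            if acc >= target:
                break
        for species, delta in reaction["update"].items():
            state[species] += delta
    return state

print(ssa_direct(state, reactions, t_end=10.0))
```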
29

Information Retrieval from Neurophysiological Signals

Ghaemmaghami, Pouya January 2017 (has links)
One of the ultimate goals of neuroscience is decoding someone's intentions directly from his/her brain activity. In this thesis, we pursue this goal in different scenarios. Firstly, we show the possibility of creating a user-centric music/movie recommender system by employing neurophysiological signals. To this end, we employed a brain decoding paradigm in order to classify the features extracted from the brain signals of participants watching movie/music video clips into our target classes (two broad music genres and four broad movie genres). Our results provide preliminary experimental evidence towards user-centric music/movie content retrieval by exploiting brain signals. Secondly, we addressed one of the main issues in the application of brain decoding algorithms. Generally, the performance of such algorithms suffers from the constraint of having few and noisy samples, which is the case in most neuroimaging datasets. In order to overcome this limitation, we employed an adaptation paradigm to transfer knowledge from another domain (e.g., the large-scale image domain) to the brain domain. We experimentally show that such an adaptation procedure leads to improved results. We applied this adaptation pipeline to different tasks (i.e., object recognition and genre classification) using different neuroimaging modalities (i.e., fMRI, EEG, and MEG). Thirdly, we aimed at one of the fundamental goals in brain decoding, which is reconstructing the external stimuli using only the brain features. Under this scenario, we show the possibility of regressing the stimulus spectrogram using time-frequency analysis of the brain signals. Finally, we conclude the thesis by summarizing our contributions and discussing the future directions and applications of our research.
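A typical brain decoding baseline of the kind used in such studies can be sketched as follows (synthetic stand-in features, not the thesis data or code): standardize the features, train a linear classifier, and estimate accuracy with stratified cross-validation to cope with the small sample sizes mentioned above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Stand-in data: 80 trials x 300 features (e.g., band-power values per channel);
# real features would come from EEG/MEG/fMRI preprocessing, not random numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 300))
y = rng.integers(0, 2, size=80)          # two broad genre classes

# Standardize, then fit a regularized linear classifier; evaluate with stratified 5-fold CV.
clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")  # ~chance on random data
```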
30

Formal failure analyses for effective fault management: an aerospace perspective

Bittner, Benjamin January 2016 (has links)
The possibility of failures is a reality that all modern complex engineering systems need to deal with. In this dissertation we consider two techniques to analyze the nature and impact of faults on system dynamics, which is fundamental to reliably manage them. Timed failure propagation analysis studies how and how fast faults propagate through physical and logical parts of a system. We develop formal techniques to validate and automatically generate representations of such behavior from a more detailed model of the system under analysis. Diagnosability analysis studies the impact of faults on observable parameters and tries to understand whether the presence of faults can be inferred from the observations within a useful time frame. We extend a recently developed framework for specifying diagnosis requirements, develop efficient algorithms to assess diagnosability under a fixed set of observables, and propose an automated technique to select optimal subsets of observables. The techniques have been implemented and evaluated on realistic models and case studies developed in collaboration with engineers from the European Space Agency, demonstrating the practicality of the contributions.
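As a small illustration of what timed failure propagation analysis computes (a simplified sketch with an invented graph, not the formal techniques developed in the dissertation), one can propagate minimum and maximum delays over a failure propagation graph to bound when a fault's effects become observable:

```python
# Timed failure propagation graph (invented): edges carry (min, max) propagation delays.
edges = {
    "pump_fault":   [("low_pressure", 1, 3)],
    "low_pressure": [("engine_off", 2, 5), ("alarm", 0, 1)],
    "engine_off":   [("alarm", 1, 2)],
}

def propagation_bounds(source):
    """Earliest/latest arrival time of a failure effect at every reachable node."""
    bounds = {source: (0, 0)}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        lo, hi = bounds[node]
        for succ, dmin, dmax in edges.get(node, []):
            cand = (lo + dmin, hi + dmax)
            old = bounds.get(succ)
            # Keep the widest interval seen so far (earliest lower bound, latest upper bound).
            new = cand if old is None else (min(old[0], cand[0]), max(old[1], cand[1]))
            if new != old:
                bounds[succ] = new
                frontier.append(succ)
    return bounds

print(propagation_bounds("pump_fault"))
# The alarm bound merges the direct path (1..4) with the path via engine_off (4..10),
# so the alarm may fire anywhere between 1 and 10 time units after the pump fault.
```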
