251 |
A theory of constructive and predictable runtime enforcement mechanisms. Bielova, Nataliia. January 2012.
Nowadays, owners and users of software systems want their executions to be reliable and secure. Runtime enforcement is a common mechanism for ensuring that system or program executions adhere to constraints specified by a security policy. It is based on two properties: the enforcement mechanism should leave legal executions unchanged (transparency) and make sure that illegal executions are amended (soundness). On the theory side, the literature provides a precise characterization of the legal executions that represent a security policy and can thus be enforced by mechanisms such as security automata or edit automata. Unfortunately, transparency and soundness do not distinguish what happens when an execution is actually illegal (the practical case). They only require that the outcome of an enforcement mechanism be "legal", but say nothing about how far the illegal execution may be changed. In this thesis we address the gap between the theory of runtime enforcement and the practical case. First, we explore a set of policies that represent legal executions in terms of repeated legal iterations and propose a constructive enforcement mechanism that can deal with illegal executions by eliminating illegal iterations. Second, we introduce a new notion of predictability, which restricts the way illegal executions are modified by an enforcement mechanism. Third, we propose an automatic construction of enforcement mechanisms that can tolerate some insignificant user errors, and we prove that it achieves a sufficient degree of predictability. The main case study of this thesis is a business process from a medical organization. A number of discussions with partners from this organization show the validity of the approaches described in this thesis in practical cases.
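For illustration only (not taken from the thesis; the policy, action names, and suppression strategy are invented), the following minimal Python sketch captures the edit-automaton idea described above: an execution is split into iterations, legal iterations are emitted unchanged (transparency), and illegal iterations are suppressed (soundness).

```python
# Minimal edit-automaton-style enforcer: executions are sequences of
# iterations, and illegal iterations are suppressed (all names illustrative).

def iteration_is_legal(iteration):
    """An iteration is legal iff it opens, does some work, and closes."""
    return (len(iteration) >= 2
            and iteration[0] == "open"
            and iteration[-1] == "close"
            and all(a == "work" for a in iteration[1:-1]))

def enforce(trace):
    """Split the trace into iterations (delimited by 'close') and output
    only the legal ones, unchanged; illegal iterations are dropped."""
    output, current = [], []
    for action in trace:
        current.append(action)
        if action == "close":            # iteration boundary
            if iteration_is_legal(current):
                output.extend(current)   # emit legal iteration as-is
            current = []                 # drop illegal iteration
    return output                        # trailing partial iteration is withheld

if __name__ == "__main__":
    trace = ["open", "work", "close",           # legal
             "open", "delete_all", "close",     # illegal -> suppressed
             "open", "work", "work", "close"]   # legal
    print(enforce(trace))
    # ['open', 'work', 'close', 'open', 'work', 'work', 'close']
```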
|
252 |
Learning with Shared Information for Image and Video Analysis. Liu, Gaowen. January 2017.
Image and video recognition is a fundamental and challenging problem in computer vision, one that has progressed tremendously in recent years. In the real world, a realistic setting for image or video recognition is that a few classes contain plenty of training data while many classes contain only a small amount. Therefore, how to use the frequent classes to help learn the rare classes is an open question. Learning with shared information is an emerging topic that can address this problem. Different components can be shared during concept modeling and the machine learning procedure, such as generic object parts, attributes, transformations, regularization parameters, and training examples. For example, representations based on attributes define a finite vocabulary that is common to all categories, with each category using a subset of the attributes. Therefore, sharing some common attributes among multiple classes benefits the final recognition system. In this thesis, we investigate some challenging image and video recognition problems under the framework of learning with shared information. My Ph.D. research comprises two parts. The first part focuses on two-domain (source and target) problems, where the emphasis is on boosting recognition performance on the target domain by utilizing useful knowledge from the source domain. The second part focuses on multi-domain problems, where all domains are considered equally important; here we want to improve performance for all domains by exploiting the useful information shared across them. In particular, we investigate three topics to achieve this goal: active domain adaptation, multi-task learning, and dictionary learning.
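As a toy illustration of the attribute-sharing idea mentioned above (the attribute vocabulary, class signatures, and scoring rule are invented, not from the thesis), the sketch below shows how a class with little training data can still be recognised from attributes shared with data-rich classes.

```python
# Toy sketch of attribute sharing: each class is described by a subset of a
# shared attribute vocabulary, so a rare class can be recognised from
# attribute predictions (all names and signatures are illustrative).

SHARED_ATTRIBUTES = ["furry", "striped", "four_legged", "has_wings"]

CLASS_SIGNATURES = {                     # which attributes each class uses
    "cat":   {"furry", "four_legged"},
    "zebra": {"striped", "four_legged"},
    "bird":  {"has_wings"},
}

def classify(predicted_attributes):
    """Pick the class whose attribute signature best matches the attributes
    predicted for an image (e.g. by per-attribute classifiers trained on
    the data-rich classes)."""
    def overlap(cls):
        signature = CLASS_SIGNATURES[cls]
        hits = len(signature & predicted_attributes)
        misses = len(signature ^ predicted_attributes)
        return hits - misses             # simple matching score
    return max(CLASS_SIGNATURES, key=overlap)

if __name__ == "__main__":
    # Attribute detectors fire on a test image of a rare class:
    print(classify({"striped", "four_legged"}))   # -> 'zebra'
```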
|
253 |
Security Policy Enforcement in Service-Oriented Middleware. Gheorghe, Gabriela. January 2011.
Policy enforcement, or making sure that software behaves in line with a set of rules, is a problem of interest for developers and users alike. In a single-machine environment, the reference monitor has been a well-researched model for enforcing policies. However, applying the same model in distributed applications is complicated by the presence of multiple users and concerns, and by the dynamism of the system and its policies.
This thesis deals with building, assessing and configuring a tool for distributed policy enforcement that acts at application runtime. In a service-oriented architecture setting, the thesis proposes a set of adaptive middleware controls able to enact policies across applications. A core contribution of this thesis is the first message-level enforcement mechanism for access and usage control policies across services. In line with the idea that no security mechanism can be perfect from the beginning, the thesis also proposes a method to assess, and amend, how correctly a security mechanism acts across a distributed system. Another contribution is the first method to configure an authorisation system to satisfy conflicting security and performance requirements. This approach is based on the observation that policy violations can be caused by inappropriately fitting the enforcement mechanisms onto a target system. Putting these three contributions together gives a set of middleware tools to enforce cross-service policies in a dynamic environment. These tools put the user in control of continuous and improvable security policy enforcement.
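A minimal sketch of the general message-level enforcement idea (the policy table, roles, and operations below are purely illustrative and not the thesis's mechanism): an interceptor sits between caller and service and forwards a message only if the caller is authorised for the requested operation.

```python
# Minimal sketch of a message-level policy interceptor sitting between a
# caller and a service (policy, roles and operations are illustrative).

POLICY = {
    # operation -> roles allowed to invoke it
    "read_record":   {"doctor", "nurse"},
    "delete_record": {"admin"},
}

class PolicyViolation(Exception):
    pass

def intercept(message, forward):
    """Check the message against the access-control policy; forward it to
    the service only if the caller's role is authorised."""
    allowed_roles = POLICY.get(message["operation"], set())
    if message["caller_role"] not in allowed_roles:
        raise PolicyViolation(
            f"{message['caller_role']} may not call {message['operation']}")
    return forward(message)

if __name__ == "__main__":
    service = lambda msg: f"executed {msg['operation']}"
    print(intercept({"operation": "read_record", "caller_role": "nurse"}, service))
    try:
        intercept({"operation": "delete_record", "caller_role": "nurse"}, service)
    except PolicyViolation as err:
        print("blocked:", err)
```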
|
254 |
Scaling up Systems Biology: Model Construction, Simulation and Visualization. Demattè, Lorenzo. January 2010.
Being a multi-disciplinary field of research, Systems Biology struggles to have a common view and a common vocabulary, and inevitably people coming from different backgrounds see and care about different aspects. Scientists have to work hard to comprehend each other and to take advantage of each other's work, but on the other hand they can provide unexpected and beautiful new insights into the problems we have to face, enabling cross-fertilization among different disciplines.
However, Systems Biology scientists all share one main goal in the end: to comprehend how a system as complex as a living creature can work and exist. Once we really understand how and why a biological system works, we can answer other important questions: can we fix it when it breaks down; can we enhance it, make it more resistant, and correct its flaws; can we reproduce its behaviour and take it as inspiration for new works of engineering; can we copy it to make our everyday work easier and our human-created systems more reliable.
The contribution of this thesis is to push ahead the current state of the art in different areas of information technology and computer science as applied to Systems Biology, in a way that could lead, one day, to the understanding of a whole, complex biological system. In particular, this thesis builds upon the current state of the art of different disciplines, such as programming language theory and implementation, parallel computing, software engineering and visualization. Work done in these areas is applied to Systems Biology, in an effort to scale up the size of the problems that it is possible to tackle with current tools and techniques.
|
255 |
Sentential Representations in Distributional Semantics. Pham, The Nghia. January 2016.
This thesis is about the problem of representing sentential meaning in distributional semantics. Distributional semantics obtains the meanings of words from their usage, based on the hypothesis that words occurring in similar contexts have similar meanings. In this framework, words are modeled as distributions over contexts and are represented as vectors in a high-dimensional space. Compositional distributional semantics attempts to extend this approach to higher linguistic structures. Some basic composition models proposed in the literature to obtain the meaning of phrases, or possibly sentences, show promising results in modeling simple phrases. The goal of the thesis is to further extend these composition models to obtain sentence meaning representations. The thesis puts particular focus on unsupervised methods that make use of the context of phrases and sentences to optimize the parameters of a model. Three different methods are presented. The first is the PLF model, a practical and linguistically motivated composition model based on the lexical function model introduced by Baroni and Zamparelli (2010) and Coecke et al. (2010). The second is the Chunk-based Smoothed Tree Kernels (CSTKs) model, which extends Smoothed Tree Kernels (Mehdad et al., 2010) by utilizing vector representations of chunks. The final model is the C-PHRASE model, a neural network-based approach that jointly optimizes the vector representations of words and phrases using a context-prediction objective. The thesis makes three principal contributions to the field of compositional distributional semantics. The first is to propose a general framework to estimate the parameters of, and evaluate, the basic composition models; this provides a fair way to compare the models using a set of phrasal datasets. The second is to extend these basic models to the sentence level, using syntactic information to build up the sentence vectors. The third contribution is to evaluate all the proposed models, showing that they perform on par with or outperform competing models presented in the literature.
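As a toy illustration of two of the composition strategies the abstract refers to (the vectors and the adjective matrix below are made up; real models estimate them from corpora), the sketch contrasts simple additive composition with lexical-function composition, where a functor word is a matrix applied to its argument's vector.

```python
# Toy sketch of two composition strategies from compositional distributional
# semantics: vector addition and lexical-function composition (Baroni and
# Zamparelli style), with made-up vectors and matrices for illustration.
import numpy as np

# Distributional vectors for nouns (dimensions are arbitrary toy contexts).
NOUNS = {
    "dog":   np.array([0.9, 0.1, 0.3]),
    "house": np.array([0.1, 0.8, 0.4]),
}

# A lexical-function representation of the adjective "old": a matrix that
# maps a noun vector to the vector of the phrase "old <noun>".
OLD = np.array([[0.8, 0.0, 0.1],
                [0.0, 0.9, 0.0],
                [0.2, 0.1, 0.7]])

def additive(u, v):
    """Basic additive composition: phrase = word1 + word2."""
    return u + v

def lexical_function(matrix, v):
    """Lexical-function composition: phrase = functor matrix . argument."""
    return matrix @ v

if __name__ == "__main__":
    print("dog + house :", additive(NOUNS["dog"], NOUNS["house"]))
    print("old dog     :", lexical_function(OLD, NOUNS["dog"]))
```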
|
256 |
Risk-Based Vulnerability Management. Exploiting the economic nature of the attacker to build sound and measurable vulnerability mitigation strategies. Allodi, Luca. January 2015.
Vulnerability bulletins and feeds report hundreds of vulnerabilities a month that a system administrator or a Chief Information Officer working for an organisation has to take care of. Because of this workload, vulnerability prioritisation is a must in any complex-enough organisation. Currently, the industry employs the Common Vulnerability Scoring System (CVSS for short) as a metric to prioritise vulnerability risk. However, the CVSS base score is a technical measure of severity, not of risk. By using a severity measure to estimate risk, current practices assume that every vulnerability is characterised by the same exploitation likelihood, and that vulnerability risk can be assessed through a technical analysis of the vulnerability alone. In this thesis we argue that this is not the case, and that the economic forces that drive the attacker are a key factor in understanding vulnerability risk. In particular, we argue that the attacker's rationality and the economic infrastructure supporting cybercrime activities play a major role in determining which vulnerabilities attackers will massively exploit, and therefore which vulnerabilities will represent a substantially higher risk than the rest. Our ultimate goal is to show that 'risk-based' vulnerability management policies, as opposed to the currently employed 'criticality-based' ones, are possible and can outperform current practices in terms of patching efficiency without losing effectiveness (i.e. reduction of risk in the wild). To this aim we perform an extensive data-collection effort on vulnerabilities, proof-of-concept exploits, exploits traded in the cybercrime markets, and exploits detected in the wild. We further collaborated with Symantec to collect actual records of attacks in the wild delivered against about 1M machines worldwide. A good part of our data-collection effort has also been dedicated to infiltrating and analysing the cybercrime markets. We used this data collection to evaluate four 'running hypotheses' underlying our main thesis: vulnerability risk is influenced by the attacker's rationality (1), and the underground markets are credible sources of risk that provide technically proficient attack tools (2) and are mature (3) and sound (4) from an economic perspective. We then put this into practice and evaluate the effectiveness of criticality-based and risk-based vulnerability management policies (based on the aforementioned findings) in mitigating real attacks in the wild. We compare the policies in terms of the 'risk reduction' they entail, i.e. the gap between the 'risk' addressed by the policy and the residual risk. Our results show that risk-based policies entail a significantly higher risk reduction than criticality-based ones, and thwart the majority of risk in the wild by addressing only a small fraction of the patching work prescribed by current practices.
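As a toy contrast between the two policy families discussed above (all vulnerability identifiers, scores, and exploitation flags below are invented for illustration, not drawn from the thesis's datasets), the sketch prioritises patching either by CVSS score alone or by evidence of exploitation in the wild first.

```python
# Toy contrast between a criticality-based policy (patch by CVSS score) and a
# risk-based policy (patch exploited-in-the-wild vulnerabilities first).
# All vulnerability entries below are invented examples.

VULNS = [
    # (id, cvss_base_score, exploited_in_the_wild)
    ("CVE-A", 9.8, False),
    ("CVE-B", 6.5, True),
    ("CVE-C", 7.2, True),
    ("CVE-D", 8.1, False),
]

def criticality_based(vulns, budget):
    """Patch the 'budget' vulnerabilities with the highest CVSS score."""
    ranked = sorted(vulns, key=lambda v: v[1], reverse=True)
    return [v[0] for v in ranked[:budget]]

def risk_based(vulns, budget):
    """Patch exploited-in-the-wild vulnerabilities first, then by CVSS."""
    ranked = sorted(vulns, key=lambda v: (v[2], v[1]), reverse=True)
    return [v[0] for v in ranked[:budget]]

if __name__ == "__main__":
    print("criticality-based:", criticality_based(VULNS, budget=2))  # ['CVE-A', 'CVE-D']
    print("risk-based:       ", risk_based(VULNS, budget=2))         # ['CVE-C', 'CVE-B']
```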
|
257 |
Effective Recommendations for Leisure Activities. Valeri, Beatrice. January 2015.
People nowadays find it difficult to identify the best places to spend their leisure time performing different activities. Some services have been created to give a complete list of the opportunities offered by the city in which people live, but they overload people with information and make it difficult to identify what is most interesting. Personalized recommendations partially solve this overload problem, but they need a deeper understanding of people's personal tastes and of the different ways in which people want to spend their leisure time. In this thesis we identify the requirements for a recommender system for leisure activities, study which data are needed, and determine which algorithm best identifies the most interesting options for each requester. We explore the effects of data quality on recommendations, identifying which kind of information is needed to better understand user needs and who can provide better-quality opinions. We analyse the possibility of using crowdsourcing as a means for collecting ratings when volunteering does not provide the needed amount of ratings, or when a new dataset of ratings is needed to answer some interesting research questions. Finally, we show how the lessons learned can be applied in practice, presenting a prototype of a personalized restaurant recommendation service.
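For illustration of the personalization step mentioned above (the users, places, ratings, and similarity function are invented and are not the thesis's algorithm), a minimal user-based collaborative-filtering sketch:

```python
# Minimal user-based collaborative-filtering sketch for restaurant
# recommendation (ratings and names below are invented for illustration).

RATINGS = {
    "alice": {"trattoria": 5, "sushi_bar": 2, "pizzeria": 4},
    "bob":   {"trattoria": 4, "sushi_bar": 1, "bistro": 5},
    "carla": {"sushi_bar": 5, "bistro": 2, "pizzeria": 1},
}

def similarity(u, v):
    """Similarity from agreement over co-rated places: small average rating
    differences mean high similarity."""
    common = set(RATINGS[u]) & set(RATINGS[v])
    if not common:
        return 0.0
    diff = sum(abs(RATINGS[u][p] - RATINGS[v][p]) for p in common)
    return 1.0 / (1.0 + diff / len(common))

def recommend(user):
    """Suggest unrated places, weighting other users' ratings by similarity."""
    scores = {}
    for other in RATINGS:
        if other == user:
            continue
        w = similarity(user, other)
        for place, rating in RATINGS[other].items():
            if place not in RATINGS[user]:
                scores[place] = scores.get(place, 0.0) + w * rating
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    print(recommend("alice"))   # -> ['bistro']
```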
|
258 |
Measuring, Understanding, and Estimating the Influence of the Environment on Low-Power Wireless Networks. Marfievici, Ramona. January 2015.
After a decade and a half of research in academia and industry, wireless sensor networks (WSNs) are seen as a key infrastructure able to monitor the environment in which they are immersed, thanks to their miniaturization, autonomy, and flexibility. Still, outdoor deployments of WSNs (e.g., in forests) are notoriously difficult to get right, partly because their low-power wireless communication is greatly affected by the characteristics of the target environment (e.g., temperature, humidity, foliage). In the absence of quantitative evidence about the target application environments, the asset that drives a successful and reliable outdoor deployment is the experience gained from previous deployments, lab-like testbeds, or simulators, which however often do not resemble real-world environments. The general goal of this dissertation is to support the principled design and deployment of WSNs by improving the understanding of how the natural outdoor environment affects the network stack, and by providing tools and modeling techniques to address this impact. This constitutes the premise for WSNs to become a credible tool for domain experts (e.g., biologists) operating in this field. Our own practical need to design and deploy a reliable WSN system for wildlife monitoring in the mountains near Trento, Italy, pushed our goals towards a deployment- and application-oriented perspective, whose ultimate objectives are: supporting the WSN deployment; informing the selection or design of protocols, to ensure they are well-suited to the target environment; and deriving models to push the envelope of what can be predicted or simulated beforehand. To achieve these goals we must start from the first step: assessing quantitatively the characteristics of low-power wireless links in-field, i.e., in the environment where the WSN must be deployed. To this end, we contribute Trident and Harpoon, tools for in-field connectivity and routing performance assessment that support principled, repeatable, automated, and flexible collection of measurements in the target environment, without the need for a tethered infrastructure and without requiring coding from the end user. Then, using these tools, we collect a large set of data traces from six campaigns across different years, environments and seasons, whose analysis quantified the impact of environmental factors on the network stack, focusing primarily on the physical and routing layers. Finally, we exploit the data traces to create models both for estimating link quality at run time and for reproducing realistic network conditions in simulators. We argue that the tools we expressly designed for gathering in-field empirical traces, the understanding and quantitative characterization of data traces from real environments, and the modeling together significantly advance the state of the art by rendering the process of designing and deploying a WSN more repeatable and predictable.
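A minimal sketch of the kind of in-field link-quality assessment described above (all temperature and probe counts below are invented; the thesis's tools and models are far richer): compute the packet reception ratio (PRR) of a link per measurement window and relate it to the temperature logged alongside.

```python
# Minimal sketch of in-field link-quality assessment: compute the packet
# reception ratio (PRR) of a low-power wireless link from a probe trace and
# relate it to the temperature logged alongside (all readings are invented).

# Each entry: (temperature_celsius, probes_sent, probes_received)
TRACE = [
    (10, 100, 97),
    (20, 100, 92),
    (30, 100, 81),
    (40, 100, 64),
]

def prr(sent, received):
    """Packet reception ratio of a link over a measurement window."""
    return received / sent if sent else 0.0

def summarize(trace):
    """Report PRR per temperature window and flag low-quality windows."""
    for temp, sent, received in trace:
        quality = prr(sent, received)
        label = "good" if quality >= 0.9 else "poor"
        print(f"{temp:>3} C  PRR = {quality:.2f}  ({label})")

if __name__ == "__main__":
    summarize(TRACE)
```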
|
259 |
Adaptive Quality Estimation for Machine Translation and Automatic Speech Recognition. Camargo de Souza, José Guilherme. January 2016.
Quality estimation (QE) approaches aim to predict the quality of an automatically generated output without relying on manually crafted references. Having access only to the system's input and output, a QE module assigns a score or a label to each (input, output) pair. In this thesis we develop approaches to predict the quality of the outputs of two types of natural language processing (NLP) systems: machine translation (MT) and automatic speech recognition (ASR). The work presented here can be divided into three parts. The first part presents advances on the standard approaches to MT QE. We describe a general feature extraction framework, several quality indicators dependent on and independent from the MT systems that generate the translations, and new quality indicators that approximate the cross-lingual mapping between the meaning of source and translated sentences. These advances result in state-of-the-art performance in two official evaluation campaigns on the MT QE problem. In the second part we show that standard MT QE approaches suffer from domain-drift problems due to the high specificity of the labeled data currently available. In the standard MT QE framework, models are trained on data from a specific text type, with translations produced by one MT system and with labels obtained over the work of specific individual translators. Such models perform poorly when one or more of these conditions change. The ability of a system to adapt and cope with such changes is a facet of the QE problem that has so far been disregarded. To address these issues and deal with the noisy conditions of real-world translation workflows, we propose adaptive approaches to QE that are robust to both the diverse nature of translation jobs and the differences between training and test data. In the third part, we propose and define an ASR QE framework. We identify useful quality indicators and show that ASR QE can be performed without having access to the ASR system, by exploiting only information about its inputs and outputs. We apply a subset of the adaptive techniques developed for MT QE and show that the ASR QE setting can also benefit from robust adaptive learning algorithms.
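For illustration of the system-independent QE setting described above (the features, sentences, and quality scores below are invented; real QE systems use far richer indicators and labelled post-editing data), a minimal sketch extracts shallow features from (source, translation) pairs and fits a linear regressor to predict a quality score.

```python
# Minimal sketch of system-independent quality estimation: extract shallow
# features from (source, translation) pairs and fit a linear regressor to
# predict a quality score. Sentences and scores are invented for illustration.
import numpy as np

def features(source, translation):
    """A few black-box features computable from input/output alone."""
    src_tokens, tgt_tokens = source.split(), translation.split()
    length_ratio = len(tgt_tokens) / max(len(src_tokens), 1)
    avg_word_len = sum(map(len, tgt_tokens)) / max(len(tgt_tokens), 1)
    punct = sum(ch in ".,;:!?" for ch in translation)
    return [1.0, length_ratio, avg_word_len, punct]   # 1.0 = bias term

# Tiny invented training set: (source, translation, quality in [0, 1]).
TRAIN = [
    ("the cat sat on the mat", "il gatto si sedette sul tappeto", 0.9),
    ("the cat sat on the mat", "gatto tappeto", 0.2),
    ("we signed the contract yesterday", "abbiamo firmato il contratto ieri", 0.95),
    ("we signed the contract yesterday", "noi firma contratto il il ieri", 0.3),
]

X = np.array([features(s, t) for s, t, _ in TRAIN])
y = np.array([q for _, _, q in TRAIN])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

if __name__ == "__main__":
    score = np.array(features("the dog barked", "il cane ha abbaiato")) @ weights
    print(f"predicted quality: {score:.2f}")
```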
|
260 |
Service Composition in Dynamic Environments: From Theory to Practice. Raik, Heorhi. January 2012.
In recent years, service-oriented architecture (SOA) has become one of the leading paradigms in software design. Among the key advantages of SOA is service composition, the ability to create new services by reusing the functionality of pre-existing ones. Despite the availability of standard languages and related design and development tools, "manual" service composition remains an extremely error-prone and time-consuming task. Not surprisingly, the automation of the service composition process has been, and still is, a hot topic in the area of service computing.
In addition to high complexity, modern service-based systems tend to be dynamic. The most common examples of dynamic factors are a constantly evolving set of available services, a volatile execution context, and frequent revision of business policies, regulations and goals. Since dynamic changes in the execution environment can invalidate the service compositions predefined within a service-based system, the cost of software maintenance may increase dramatically. Unfortunately, the existing automated service composition approaches are not of much help here. Being design-time by nature, they heavily involve IT experts, especially for analysing the changes and re-specifying formal composition requirements under new conditions, which is still a considerable effort. To make service-based systems more agile, a new composition approach is needed that can automatically perform all composition-related tasks at run time, from deriving composition requirements to generating new compositions to deploying them.
In this dissertation, we propose a novel service composition framework that (i) handles stateful and nondeterministic services that interact asynchronously, (ii) allows for rich control- and data-flow composition requirements that are independent from the details of service implementations, (iii) exploits advanced planning techniques for automated reasoning, and (iv) adopts a modeling methodology that is applicable in dynamic environments.
The cornerstone of the framework is an explicit context model that abstracts composition requirements and constraints away from the details of service implementations. By linking services to the context model on the one side, and by expressing composition requirements and constraints in terms of the context model on the other, we create a formal setting in which abstract requirements and constraints, though implementation-independent, can always be grounded to available service implementations. Consequently, we show that in such a framework it is possible to move most human activities to design time, so that the run-time management of the composition life cycle is completely automated. To the best of our knowledge, this is the first composition approach to achieve this goal.
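For illustration of the general idea of planning over an explicit context model (the context facts, service annotations, and breadth-first planner below are invented and much simpler than the framework described above), the sketch searches for a sequence of services whose cumulative effects on the context satisfy a goal.

```python
# Minimal sketch of composing services against an explicit context model:
# the context is a set of facts, each service is annotated with preconditions
# and effects over that context, and a breadth-first planner searches for a
# sequence of services that reaches the goal (all names are illustrative).
from collections import deque

SERVICES = {
    # name: (preconditions, effects_added)
    "reserve_item": (frozenset(), frozenset({"item_reserved"})),
    "charge_card":  (frozenset({"item_reserved"}), frozenset({"payment_done"})),
    "ship_item":    (frozenset({"item_reserved", "payment_done"}),
                     frozenset({"item_shipped"})),
}

def compose(initial_context, goal):
    """Breadth-first search over context states for a service sequence whose
    cumulative effects satisfy the goal facts."""
    queue = deque([(frozenset(initial_context), [])])
    seen = {frozenset(initial_context)}
    while queue:
        context, plan = queue.popleft()
        if goal <= context:
            return plan
        for name, (pre, effects) in SERVICES.items():
            if pre <= context:
                new_context = context | effects
                if new_context not in seen:
                    seen.add(new_context)
                    queue.append((new_context, plan + [name]))
    return None

if __name__ == "__main__":
    print(compose(set(), {"item_shipped"}))
    # ['reserve_item', 'charge_card', 'ship_item']
```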
A significant contribution of the dissertation is the investigation of the problem of dynamic adaptation of service-based business processes. Here, our solution is based on the proposed composition approach. Within the thesis, the problem of process adaptation plays the role of the key motivator and evaluation use case for our composition-related research.
Most of the ideas discussed in the thesis are implemented and evaluated to prove their practical applicability.
|