
Towards an Ontology of Software

Wang, Xiaowei January 2016
Software is permeating every aspect of our personal and social life. Yet the cluster of concepts around the notion of software, such as software product, software requirements, and software specification, is still poorly understood, with no consensus on the horizon. For many, software is just code, something intangible best defined in contrast with hardware, but this view is not particularly illuminating. This erroneous notion, that software is just code, appears both in the literature on the ontology of software and in software maintenance tools. The notion is obviously wrong because it does not account for the fact that whenever someone fixes a bug the code of a software system changes, yet nobody believes the result is a different software system. Several researchers have attempted to understand the core nature of software and programs in terms of concepts such as code, copy, medium and execution. More recently, Irmak proposed to consider software as an abstract artifact, distinct from code, precisely because code may change while the software remains the same. We share many of his intuitions, as well as the methodology he adopts to motivate his conclusions, based on an analysis of the conditions under which software maintains its identity despite change. However, he leaves open the question of what the identity of software is, and we answer that question here. The main objective of this dissertation is therefore to lay the foundations for an ontology of software, grounded in the foundational ontology DOLCE. This new ontology of software is intended to facilitate communication within the community by reducing terminological ambiguities and resolving inconsistencies. With a better answer to the question ‘What is software?’, we would be in a position to build better tools for maintaining and managing a software system throughout its lifetime. The thesis makes three contributions. First, we examine the ontological nature of software, recognizing it as an abstract information artifact. This proposal is supported along three dimensions: (1) we distinguish software (a non-physical object) from hardware (a physical object), and argue that the rapid pace of software change is enabled by the easy changeability of its hardware medium; (2) we discuss the artifactual nature of software, addressing the erroneous notion that software is just code; (3) we recognize software as an information artifact, which ensures that software inherits all the properties of information artifacts, so that existing studies of information artifacts can be directly reused for software. Second, we propose an ontology founded on concepts adopted from Requirements Engineering (RE), such as the notions of World and Machine phenomena. In this ontology, we make a sharp distinction between different kinds of software artifacts (software program, software system, and software product), and describe the ways they are interconnected in the context of a software engineering process. Additionally, we study software from a social perspective, explaining the concepts of licensable software product and licensed software product. We also discuss the possibility of adopting our ontology of software in software configuration management systems, to provide a better understanding and control of software changes. Third, we note the important role played by assumptions in getting software to fulfill its requirements. The requirements for most software systems (the intended states-of-affairs these systems are supposed to bring about) concern their operational environment, usually a social world. But these systems have no direct means to change that environment in order to bring about the intended states-of-affairs. In what sense, then, can we say that such systems fulfill their requirements? One of the main contributions of this dissertation is to account for this paradox. We do so by proposing a preliminary ontology of the assumptions that are implicitly used in software engineering practice to establish that a system specification S fulfills its requirements R given a set of assumptions A; the proposal is illustrated with a meeting scheduling example.
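The fulfillment claim at the end of this abstract can be written compactly in the style of Zave and Jackson's requirements entailment, which the S, R, A notation echoes. A minimal sketch follows; the meeting-scheduler reading is our own illustrative instantiation, not taken from the thesis.

% The specification, together with the environment assumptions,
% must entail the requirements:
\[
  A, S \vdash R
\]
% A hypothetical meeting-scheduler reading:
%   A: participants actually attend the meetings they confirm
%   S: the scheduler books a slot confirmed by every participant
%   R: scheduled meetings take place with all participants present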

Knowledge Based Open Entity Matching

Bortoli, Stefano January 2013
In this work we argue for the definition of a knowledge-based entity matching framework for the implementation of a reliable and incrementally scalable solution. Such a knowledge base is formed by an ontology and a set of entity matching rules suitable to be applied as a reliable equational theory in the context of the Semantic Web. In particular, we show that, relying on the existence of a set of contextual mappings to mitigate the semantic heterogeneity characterizing descriptions on the Web, a knowledge-based solution can perform comparably to, and sometimes better than, existing state-of-the-art solutions. We further argue that a knowledge-based solution to the open entity matching problem ought to be considered under the open-world assumption, as in some cases the descriptions to be matched may not contain the information necessary to make an accurate matching decision. The main goal of this work is to show that the proposed framework is suitable for pursuing a reliable solution to the entity matching problem, regardless of the set of rules defined for the adopted ontology. In fact, we believe that the structural and syntactic heterogeneity affecting data on the Web undermines the definition of a unique global solution. However, we argue that a knowledge-driven approach, considering the semantics and meta-properties of compared attributes, can provide important benefits and lead to more reliable solutions. To achieve this goal, we run several experiments to evaluate different sets of rules, testing our thesis and learning important lessons for future developments. The sets of rules that we consider to bootstrap the proposed solution are the result of two complementary processes: first, we investigate whether capturing the matching knowledge employed by people in making entity matching decisions, by relying on machine learning techniques, can produce an effective set of rules (bottom-up strategy); second, we investigate the application of formal ontology tools to analyze the features defined in the ontology and support the definition of entity matching rules (top-down strategy). Moreover, we argue that by merging the rules resulting from these complementary processes, we can define a set of rules that can reliably support entity matching decisions in an open context.
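To make the notion of entity matching rules concrete, here is a minimal Python sketch of a knowledge-based matcher in the spirit of the abstract: rules fire on jointly identifying attributes, and missing evidence yields "unknown" rather than "non-match", reflecting the open-world assumption. The attribute names, rules, and threshold are illustrative assumptions, not taken from the thesis.

from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Crude string similarity, standing in for value comparison."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Each rule lists attributes that, jointly, identify a person entity.
# The rules and attribute names are hypothetical.
MATCH_RULES = [
    ("name", "birth_date"),   # same name, born on the same day
    ("name", "email"),        # same name, same email address
]

def match(desc_a: dict, desc_b: dict):
    """Return True (match) or None (unknown, open-world assumption)."""
    for rule in MATCH_RULES:
        pairs = [(desc_a.get(k), desc_b.get(k)) for k in rule]
        if any(v is None or w is None for v, w in pairs):
            continue  # missing evidence: this rule cannot decide
        if all(similar(str(v), str(w)) for v, w in pairs):
            return True
    # No rule fired; under the open-world assumption we answer
    # "unknown" rather than asserting a non-match.
    return None

print(match({"name": "J. Doe", "email": "jd@example.org"},
            {"name": "J. Doe", "email": "jd@example.org"}))  # True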

A Tag Contract Framework for Modeling Heterogeneous Systems

Le, Thi Thieu Hoa January 2014
In the distributed development of modern IT systems, contracts play a vital role in ensuring interoperability of components and adherence to specifications. The design of embedded systems, however, is made more complex by the heterogeneous nature of components, which are often described using different models and interaction mechanisms. Composing such components is generally not well-defined, making design and verification difficult. Several denotational frameworks have been proposed to handle heterogeneity using a variety of approaches. However, the application of heterogeneous modeling frameworks to contract-based design has not yet been investigated. In this work, we develop an operational model with precise heterogeneous denotational semantics, based on tag machines, that can represent heterogeneous composition, and provide conditions under which composition can be captured soundly and completely. The operational framework is implemented in a prototype tool which we use for experimental evaluation. We then construct a full contract model and introduce heterogeneous composition, refinement, dominance, and compatibility between contracts, altogether enabling a formalized and rigorous design process for heterogeneous systems. In addition, we develop a generic algebraic method to synthesize or refine a set of contracts so that their composition satisfies a given contract.
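As background for the contract operations named above, the following toy sketch shows a generic assume-guarantee contract over finite behaviour sets, with the standard refinement check (tolerate more environments, allow fewer behaviours). It is a generic illustration under our own simplifying assumptions, not the tag-machine contract model developed in the thesis.

from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    assumptions: frozenset  # environment behaviours the contract accepts
    guarantees: frozenset   # component behaviours the contract allows

def refines(c1: Contract, c2: Contract) -> bool:
    """c1 refines c2: c1 accepts at least the environments c2 accepts,
    and allows at most the component behaviours c2 allows."""
    return (c2.assumptions <= c1.assumptions
            and c1.guarantees <= c2.guarantees)

weak   = Contract(frozenset({"fast_env"}),
                  frozenset({"reply", "reply_late"}))
strong = Contract(frozenset({"fast_env", "slow_env"}),
                  frozenset({"reply"}))
print(refines(strong, weak))  # True: strong can replace weak anywhere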

Protein-dependent prediction of messenger RNA binding using Support Vector Machines

Livi, Carmen Maria January 2013
RNA-binding proteins interact specifically with RNA strands to regulate important cellular processes. Knowing the binding partners of a protein is a crucial issue in biology and is essential to understanding the protein's function and its involvement in diseases. These interactions can currently be identified only through in vivo and in vitro experiments, which may not detect all binding partners. Computational methods that capture the protein-dependent nature of the binding phenomena could help to predict the binding in silico and would be resistant to experimental biases. This thesis addresses the creation of models based on support vector machines and trained on experimental data. The goal is the identification of RNAs that bind specifically to a regulatory protein. Starting from a case study on the protein CELF1, we extend our approach and propose three methods to predict whether an RNA strand can be bound by a particular RNA-binding protein. The methods use support vector machines and different features based on the sequence (method Oli), the motif score (method OliMo) and the secondary structure (method OliMoSS). We apply them to different experimentally derived datasets and compare the predictions with two methods: RNAcontext and RPISeq. Oli outperforms OliMoSS and RPISeq, affirming our protein-specific approach and suggesting that oligonucleotide frequencies are good discriminative features. Oli and RNAcontext are the most competitive methods in terms of AUC. A precision-recall analysis reveals better performance for Oli. On a second experimental dataset, where negative binding information is available, Oli outperforms RNAcontext with a precision of 0.73 vs. 0.59. Our experiments show that features based on primary sequence information are highly discriminative for predicting binding between protein and RNA. Sequence motifs can improve the prediction only for some RNA-binding proteins. Finally, we conclude that experimental data on RNA binding can be effectively used to train protein-specific models for in silico predictions.
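A minimal sketch of the idea behind method Oli: k-mer (oligonucleotide) frequencies as SVM features. The choice k = 4, the RBF kernel, and the toy sequences and labels are illustrative assumptions, not parameters reported in the thesis.

from itertools import product
from sklearn.svm import SVC

K = 4
KMERS = ["".join(p) for p in product("ACGU", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_frequencies(seq: str) -> list:
    """Relative frequency of every k-mer in an RNA sequence."""
    counts = [0.0] * len(KMERS)
    for i in range(len(seq) - K + 1):
        kmer = seq[i:i + K]
        if kmer in INDEX:
            counts[INDEX[kmer]] += 1.0
    total = max(1, len(seq) - K + 1)
    return [c / total for c in counts]

# Toy training data: RNA sequences labelled 1 (bound by the protein) or 0.
sequences = ["AUGGCUACGUAGCU", "GCGCGCGCAUAUAU",
             "UUUUAAAACCCCGG", "ACGUACGUACGUAC"]
labels = [1, 0, 1, 0]

model = SVC(kernel="rbf")
model.fit([kmer_frequencies(s) for s in sequences], labels)
print(model.predict([kmer_frequencies("AUGGCAUCGUAGGC")]))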

Automatic Creation of Evocative Witty Expressions

Gatti, Lorenzo January 2018
In linguistic creativity, the pragmatic effects of the message are often as important as its aesthetic properties. The productions of creative humans are often based both on a generic intent (such as amusing) and on a specific one, for example attracting the attention of the audience, provoking thought, driving the message home, or influencing other people and changing their attitudes and beliefs. In computational linguistic creativity, however, these pragmatic effects are rarely accounted for. Most work on automated linguistic creativity is limited to the production of syntactically and semantically correct output that is also pleasing, but in applied scenarios it would be important to also validate the effectiveness of the output. This thesis aims at demonstrating that automatic systems can create productions that are attractive, pleasant and memorable, based on variations of well-known expressions, using the optimal innovation hypothesis as a frame of reference. In particular, these witty expressions can be used for evoking a given concept, improving its memorability, or for other pragmatic goals.

Efficient Reasoning with Constrained Goal Models

Nguyen, Chi Mai January 2017
Goal models have been widely used in Computer Science to represent software requirements, business objectives, and design qualities. Existing goal modelling techniques, however, have shown limitations of expressiveness and/or tractability in coping with complex real-world problems. In this work, we exploit advances in automated reasoning technologies, notably Satisfiability and Optimization Modulo Theories (SMT/OMT), and we propose and formalize: (i) an extended modelling language for goals, namely the Constrained Goal Model (CGM), which makes explicit the notions of goal refinement and domain assumption, allows for expressing preferences between goals and refinements, and allows for associating numerical attributes with goals and refinements for defining constraints and optimization goals over multiple objective functions, refinements and their numerical attributes; (ii) a novel set of automated reasoning functionalities over CGMs, allowing for automatically generating suitable realizations of input CGMs, under user-specified assumptions and constraints, that also maximize preferences and optimize given objective functions. We are also interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionalities or performance enhancements to cope with a growing user population (requirements evolution). Alternatively, vendors of a system may want to minimize the costs of implementing requirements changes (evolution requirements). We propose to use CGMs to represent the requirements of a system and to capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or maximize customer value. We can then exploit reasoning techniques to derive optimal new specifications for an evolving software system. We have implemented these modelling and reasoning functionalities in a tool, named CGM-Tool, using the OMT solver OptiMathSAT as the automated reasoning backend. Moreover, we have conducted an experimental evaluation on large CGMs to support the claim that our proposal scales well for goal models with thousands of elements. To assess the usability of our framework, we have carried out a user-oriented evaluation using an enquiry-based evaluation method.
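The following toy sketch shows how a small CGM can be encoded for an OMT-style solver: goal refinements become Boolean constraints and a cost objective is minimized over the chosen realization. We use Z3's optimization API purely for illustration (the thesis uses OptiMathSAT as its backend), and the goals, refinements, and costs are invented.

from z3 import Bool, Optimize, Implies, And, Or, If, is_true, sat

root      = Bool("ScheduleMeeting")   # root goal to realize
collect   = Bool("CollectTimetables")
choose    = Bool("ChooseSlot")
by_email  = Bool("ByEmail")           # alternative refinements of collect
by_system = Bool("BySystem")

opt = Optimize()
opt.add(Implies(root, And(collect, choose)))        # AND-refinement
opt.add(Implies(collect, Or(by_email, by_system)))  # OR-refinement
opt.add(root)                                       # realize the root goal

# Numerical attributes: every operational choice carries a cost.
cost = If(by_email, 8, 0) + If(by_system, 3, 0) + If(choose, 1, 0)
opt.minimize(cost)

if opt.check() == sat:
    m = opt.model()
    realization = [str(b) for b in (collect, choose, by_email, by_system)
                   if is_true(m.evaluate(b, model_completion=True))]
    print("cheapest realization:", realization)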

Security-by-Contract using Automata Modulo Theory

Siahaan, Ida Sri Rejeki January 2010
Trust without control is a precarious solution, given human nature. This belief has led to many approaches for guaranteeing secure software, such as statically analyzing programs to check that they comply with their intended specifications, which results in software certification. One problem with this approach is that current systems can only accept software on an all-or-nothing basis, without knowing what the software is doing. A complementary approach is run-time monitoring, whereby programs are checked during execution for compliance with the security policy defined by the system. The problem with this approach is the significant overhead it introduces, which may not be acceptable for some applications. This thesis describes a formalism, called Automata Modulo Theory, that allows us to model what programs do in more precise detail, thus giving semantics to certification. Automata Modulo Theory allows us to define very expressive policies with infinitely many cases while keeping the task of matching computationally tractable. This representation is suitable for formalizing systems with finitely many states but infinitely many transitions. Automata Modulo Theory consists of a formal model, two algorithms for matching the claims on the security behavior of a midlet (the contract, for short) against the desired security behavior of a platform (the policy, for short), and an algorithm for optimizing policies. Prototype implementations of Automata Modulo Theory matching, using language inclusion and simulation, have been built, and the results of our experience with these prototypes are also evaluated in this thesis.
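The key idea behind matching in Automata Modulo Theory can be illustrated as follows: automaton edges carry theory guards instead of concrete symbols, so finitely many edges describe infinitely many transitions, and a contract edge is covered by a policy edge when its guard entails the policy's guard. The sketch below checks one such entailment with Z3; the SMS-style guards are invented for illustration, and the thesis's actual algorithms work over whole automata, not single edges.

from z3 import Int, And, Not, Solver, unsat

n = Int("n")  # e.g. number of SMS messages a midlet tries to send

contract_guard = And(n >= 1, n <= 3)  # contract: sends between 1 and 3
policy_guard   = n <= 5               # policy: allows at most 5

def edge_covered(contract_g, policy_g) -> bool:
    """The contract edge is simulated by the policy edge iff
    contract_g -> policy_g is valid, i.e. contract_g & ~policy_g is unsat."""
    s = Solver()
    s.add(And(contract_g, Not(policy_g)))
    return s.check() == unsat

print(edge_covered(contract_guard, policy_guard))  # True: the edge complies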

Communication Priorities and Stochastic Measures for Web Services Modeling

Cappello, Igor January 2012
Service Oriented Architecture is an important trend in the development of services composed of loosely coupled, heterogeneous and interacting components. We first consider COWS, a language tailored to model the behavioural aspects of these systems. We analyse the peculiarities of the communication mechanism of the language, a key ingredient in its modelling capabilities, presenting a separation result between a fragment of CCS equipped with global priorities and the fragment of COWS relevant to its communication mechanism. We then consider a stochastic approach to model the quantitative aspects of Web Services through SCOWS, a stochastic extension of COWS. We present a prototype tool, named SCOWS_LTS, for deriving the complete representation of the behaviour of a SCOWS model as a Continuous Time Markov Chain. In order to validate the approach, a number of case studies, drawn from both the SOA and the concurrency literature, are modelled in SCOWS. Using PRISM as a model checker, the results of the simulation phase are analysed against properties that check the probabilistic behaviour of the considered systems.
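To make the quantitative analysis step concrete, here is a toy continuous-time Markov chain of the kind SCOWS_LTS derives, with its transient distribution computed numerically. The three states and the rates are invented for illustration; the thesis analyses such chains with PRISM rather than in this ad-hoc way.

import numpy as np
from scipy.linalg import expm

# States: 0 = waiting, 1 = serving, 2 = done.
# Q[i][j] is the transition rate from state i to state j (i != j);
# diagonal entries make each row sum to zero, as required for a CTMC.
Q = np.array([
    [-2.0,  2.0,  0.0],   # waiting -> serving at rate 2.0
    [ 0.5, -1.5,  1.0],   # serving -> waiting (0.5) or done (1.0)
    [ 0.0,  0.0,  0.0],   # done is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])   # start in "waiting"
for t in (0.5, 1.0, 5.0):
    pt = p0 @ expm(Q * t)        # transient distribution at time t
    print(f"t={t}: P(done) = {pt[2]:.3f}")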

Three Case Studies for Understanding, Measuring and Using a Compound Notion of Data Quality with Emphasis on the Data Staleness Dimension

Chayka, Oleksiy January 2012
By its nature, the term “data quality”, with its generic meaning “fitness for use”, has both subjective and objective aspects. There are numerous methodologies and techniques to evaluate its subjective parts and to measure its objective parts. However, none of them is uniform enough for exploitation in diverse real-world applications. None, in fact, can be created as such, since data quality penetrates too deeply into business operations for “a silver bullet” to exist: it normally extends from the representation of real-world entities or their properties as data in an information system, to data processing and delivery to consumers. In this work, we considered three real-world use cases which entirely or partially cover these areas of the data quality scope. In particular, we study the following problems: 1) how the quality of data can be defined and propagated to customers in a business intelligence application for quality-aware decision making; 2) how data quality can be defined, measured and used in a web-based system operating on semi-structured data originating from, and destined for, both humans and machines; 3) how a data-driven (vs. system-driven), time-related data quality notion of staleness can be defined, efficiently measured and monitored in a generic information system. Thus, we expand the corresponding state of the art with the Application, System and Dimension aspects of data quality. In the Application context, we propose a quality-aware architecture for a typical business intelligence application in a healthcare environment. We demonstrate the implications of potential quality issues, including intra- and inter-dimensional quality dependencies, that affect data from early processing stages up to the reporting level. In the part dedicated to the System, we demonstrate an approach to understanding, measuring and disseminating data quality measurement results in the context of a web-based system called the Entity Name System (ENS). On the Dimension side, we propose a definition of data staleness in accordance with the requirements of key time-related quality metrics, relying on similar notions elaborated by researchers before us. We demonstrate an approach to measuring data staleness with different statistical methods, including exponential smoothing. In our experiments, we compare their space efficiency and their accuracy in predicting data update instants, using the update history of representative sample articles from Wikipedia.
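A small sketch of the staleness idea: exponentially smooth the history of update intervals to predict the next update instant, and flag an item as probably stale once that instant has passed. The smoothing factor and the Wikipedia-like update history below are illustrative assumptions.

def predict_next_update(update_times, alpha=0.3):
    """Exponentially smoothed estimate of the next update time.
    update_times must be sorted observation timestamps."""
    intervals = [b - a for a, b in zip(update_times, update_times[1:])]
    smoothed = intervals[0]
    for interval in intervals[1:]:
        smoothed = alpha * interval + (1 - alpha) * smoothed
    return update_times[-1] + smoothed

# Update history of one data item, in hours since first observation.
history = [0.0, 10.0, 22.0, 31.0, 44.0]
expected = predict_next_update(history)
now = 60.0
print(f"next update expected at t={expected:.1f}h; "
      f"stale now (t={now})? {now > expected}")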

STS: A Security Requirements Engineering Methodology for Socio-Technical Systems

Paja, Elda January 2014
Today’s software systems are situated within larger socio-technical systems, wherein they interact, by exchanging data and delegating tasks, with other technical components, humans, and organisations. The components (actors) of a socio-technical system are autonomous and only loosely controllable. Therefore, when interacting, they may endanger security by, for example, disclosing confidential information, breaking the integrity of others’ data, or relying on untrusted third parties. The design of a secure software system cannot disregard its collocation within a socio-technical context, where security is threatened not only by technical attacks, but also by social and organisational threats. This thesis proposes a tool-supported model-driven methodology, namely STS, for conducting security requirements engineering for socio-technical systems. In STS, security requirements are specified, using the STS-ml requirements modelling language, as social contracts that constrain the social interactions and the responsibilities of the actors in the socio-technical system. A particular feature of STS-ml is that it clearly distinguishes information from its representation in terms of documents, and separates information flow from the permissions or prohibitions actors grant to others over their interactions. This separation allows STS-ml to support a rich set of security requirements. The requirements models of STS-ml have a formal semantics, which enables automated reasoning for detecting possible conflicts among security requirements, as well as conflicts between security requirements and actors’ business policies, that is, how actors intend to achieve their objectives. Importantly, automated reasoning techniques are proposed to calculate the impact of social threats on actors’ information and objectives. Modelling and reasoning capabilities are supported by STS-Tool. The effectiveness of the STS methodology in modelling, and ultimately specifying, security requirements for various socio-technical systems is validated with the help of case studies from different domains. We assess the scalability of the implementation of the conflict identification algorithms by conducting a scalability study using data from one of the case studies. Finally, we report the results of user-oriented empirical evaluations of the STS methodology, the STS-ml modelling language, and the STS-Tool. These studies have been conducted over the past three years, starting from the initial proposal of the methodology, language, and tool, in order to improve them after each evaluation.
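As an illustration of the kind of conflict the automated reasoning detects, the toy check below flags cases where the same actor is both permitted and prohibited the same operation on the same information. The actors, operations, and authorisation tuples are invented for illustration; STS-ml's actual semantics and conflict-identification algorithms are considerably richer.

from typing import NamedTuple

class Authorisation(NamedTuple):
    actor: str
    operation: str       # e.g. "read", "modify", "transmit"
    information: str
    permitted: bool      # True = permission, False = prohibition

auths = [
    Authorisation("TravelAgency", "read",     "CreditCardData", True),
    Authorisation("TravelAgency", "transmit", "CreditCardData", False),
    Authorisation("TravelAgency", "transmit", "CreditCardData", True),
]

def conflicts(authorisations):
    """Yield pairs that permit and prohibit the same
    actor/operation/information triple."""
    seen = {}
    for a in authorisations:
        key = (a.actor, a.operation, a.information)
        if key in seen and seen[key].permitted != a.permitted:
            yield seen[key], a
        seen[key] = a

for p, q in conflicts(auths):
    print("conflict:", p, "vs", q)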
