About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Ranking Aggregation Based on Belief Function Theory

Argentini, Andrea January 2012 (has links)
The ranking aggregation problem is that of establishing a new aggregate ranking given a set of rankings of a finite set of items. This problem arises in various applications, such as the combination of user preferences, the combination of lists of documents retrieved by search engines, and the combination of ranked gene lists. In the literature, the ranking aggregation problem has been solved as an optimization of some distance between the rankings, overlooking the existence of a true ranking. In this thesis we address the ranking aggregation problem assuming the existence of a true ranking on the set of items: the goal is to estimate an unknown, true ranking given a set of input rankings provided by experts of different approximation quality. We propose a novel solution called the Belief Ranking Estimator (BRE) that takes into account two aspects still unexplored in ranking combination: the approximation quality of the experts and, for the first time, the uncertainty related to each item's position in the ranking. BRE estimates in an unsupervised way the true ranking given a set of rankings that are quality-diverse estimations of the unknown true ranking. The uncertainty on the items' positions in each ranking is modeled within the framework of Belief Function Theory, which allows for the combination of subjective knowledge in a non-Bayesian way. This innovative application of belief functions to rankings allows us to encode different sources of a priori knowledge about the correctness of the ranking positions and also to weigh the reliability of the experts involved in the combination. We assessed the performance of our solution on synthetic and real data against state-of-the-art methods. The tests comprise the aggregation of total and partial rankings in different empirical settings aimed at representing the varying quality of the input rankings with respect to the true ranking. The results show that BRE provides an effective solution when the input rankings are heterogeneous in terms of approximation quality with respect to the unknown true ranking.
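As a minimal illustration of the belief-function machinery this kind of combination builds on (a sketch only, not the BRE algorithm itself), the snippet below combines two experts' mass functions over the possible rank of a single item using Dempster's rule; the focal sets and masses are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule of combination; illustrative only, not the BRE method."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two "experts" express uncertain beliefs about the position of one item
# among ranks {1, 2, 3}; the numbers are hypothetical.
m_expert1 = {frozenset({1}): 0.6, frozenset({1, 2}): 0.3, frozenset({1, 2, 3}): 0.1}
m_expert2 = {frozenset({2}): 0.5, frozenset({1, 2}): 0.4, frozenset({1, 2, 3}): 0.1}
print(dempster_combine(m_expert1, m_expert2))
```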
82

Music signal processing for automatic extraction of harmonic and rhythmic information

Khadkevich, Maksim January 2011 (has links)
This thesis is concerned with the problem of automatically extracting harmonic and rhythmic information from music audio signals using a statistical framework and advanced signal processing methods. Among different research directions, the automatic extraction of chords and key has always been of great interest to the Music Information Retrieval (MIR) community. Chord progressions and key information can serve as a robust mid-level representation for a variety of MIR tasks. We propose statistical approaches to the automatic extraction of chord progressions using a framework based on Hidden Markov Models (HMMs). The general ideas we rely on have already proven effective in speech recognition. We propose novel probabilistic approaches that include an acoustic modeling layer and a language modeling layer. We investigate the usage of standard N-grams and Factored Language Models (FLMs) for automatic chord recognition. Another central topic of this work is feature extraction. We develop a set of new features that belong to the chroma family. A set of novel chroma features based on the application of a Pseudo-Quadrature Mirror Filter (PQMF) bank is introduced. We show the advantage of using the Time-Frequency Reassignment (TFR) technique to derive better acoustic features. Tempo estimation and beat structure extraction are amongst the most challenging tasks in the MIR community. We develop a novel method for beat/downbeat estimation from audio. It is based on the same statistical approach, consisting of two hierarchical levels: acoustic modeling and beat sequence modeling. We propose a very specific beat duration model that exploits an HMM structure without self-transitions. A new feature set that exploits the advantages of the harmonic-impulsive component separation technique is introduced. The proposed methods are compared to numerous state-of-the-art approaches through participation in the MIREX competition, which is currently the best impartial assessment of MIR systems.
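The decoding step of such an HMM-based chord recognizer can be sketched as a standard Viterbi search over frame-wise chord scores. The NumPy sketch below assumes the acoustic log-likelihoods (e.g. derived from chroma features) and the chord-transition language model are already given; it illustrates the general technique, not the system developed in the thesis.

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_init):
    """Viterbi decoding of the most likely chord sequence.
    log_emissions: (T, N) frame-wise chord log-likelihoods (acoustic layer)
    log_trans:     (N, N) chord transition log-probabilities (language layer)
    log_init:      (N,)   initial chord log-probabilities
    Returns the best chord-index path of length T."""
    T, N = log_emissions.shape
    delta = np.full((T, N), -np.inf)       # best score ending in each chord
    psi = np.zeros((T, N), dtype=int)      # backpointers
    delta[0] = log_init + log_emissions[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (from, to) scores
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):          # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path
```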
83

Towards an Ontology of Software

Wang, Xiaowei January 2016 (has links)
Software is permeating every aspect of our personal and social life. And yet, the cluster of concepts around the notion of software, such as the notions of a software product, software requirements, and software specifications, is still poorly understood, with no consensus on the horizon. For many, software is just code, something intangible best defined in contrast with hardware, but this view is not particularly illuminating. This erroneous notion, that software is just code, is present both in the literature on the ontology of software and in software maintenance tools. It is obviously wrong because it doesn't account for the fact that whenever someone fixes a bug the code of a software system changes, yet nobody considers the result a different software system. Several researchers have attempted to understand the core nature of software and programs in terms of concepts such as code, copy, medium and execution. More recently, a proposal was made by Irmak to consider software as an abstract artifact, distinct from code, precisely because code may change while the software remains the same. We share many of his intuitions, as well as the methodology he adopts to motivate his conclusions, based on an analysis of the conditions under which software maintains its identity despite change. However, he leaves the question of what the identity of software is open, and we answer this question here. Taking up the question left open by Irmak, the main objective of this dissertation is to lay the foundations for an ontology of software, grounded in the foundational ontology DOLCE. This new ontology of software is intended to facilitate communication within the community by reducing terminological ambiguities and by resolving inconsistencies. With a better answer to the question 'What is software?', we would be in a position to build better tools for maintaining and managing a software system throughout its lifetime. The research content of the thesis consists of three results. Firstly, we examine the ontological nature of software, recognizing it as an abstract information artifact. To support this proposal, the first main contribution of the dissertation is developed along three dimensions: (1) we distinguish software (a non-physical object) from hardware (a physical object), and show that the rapid pace at which software changes is supported by the easy changeability of its hardware medium; (2) we discuss the artifactual nature of software, addressing the erroneous notion that software is just code, present both in the literature on the ontology of software and in software maintenance tools; (3) finally, we recognize software as an information artifact; this ensures that software inherits all the properties of information artifacts, so that existing studies of information artifacts can be directly reused for software. Secondly, we propose an ontology founded on concepts adopted from Requirements Engineering (RE), such as the notions of World and Machine phenomena. In this ontology, we make a sharp distinction between different kinds of software artifacts (software program, software system, and software product), and describe the ways they are interconnected in the context of a software engineering process. Additionally, we study software from a social perspective, explaining the concepts of licensable software product and licensed software product.
We also discuss the possibility of adopting our ontology of software in software configuration management systems to provide a better understanding and control of software changes. Thirdly, we note the important role played by assumptions in getting software to fulfill its requirements. The requirements for most software systems -- the intended states-of-affairs these systems are supposed to bring about -- concern their operational environment, usually a social world. But these systems don't have any direct means to change that environment in order to bring about the intended states-of-affairs. In what sense, then, can we say that such systems fulfill their requirements? One of the main contributions of this dissertation is to account for this paradox. We do so by proposing a preliminary ontology of assumptions that are implicitly used in software engineering practice to establish that a system specification S fulfills its requirements R given a set of assumptions A; our proposal is illustrated with a meeting scheduling example.
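The fulfillment relation discussed above can be stated compactly in the classical Zave-Jackson style; the reading below is a hypothetical instantiation on a meeting-scheduling scenario, given only for illustration.

```latex
% Sketch: the specification S fulfills the requirements R only together
% with environment assumptions A (hypothetical meeting-scheduling reading).
%   S: the scheduler emails every participant the agreed date and room
%   A: participants read their email and attend meetings they have accepted
%   R: all required participants attend the scheduled meeting
\[ A \wedge S \models R \]
```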
84

Knowledge Based Open Entity Matching

Bortoli, Stefano January 2013 (has links)
In this work we argue for the definition of a knowledge-based entity matching framework for the implementation of a reliable and incrementally scalable solution. Such a knowledge base is formed by an ontology and a set of entity matching rules suitable to be applied as a reliable equational theory in the context of the Semantic Web. In particular, we show that, by relying on a set of contextual mappings to ease the semantic heterogeneity characterizing descriptions on the Web, a knowledge-based solution can perform comparably to, and sometimes better than, existing state-of-the-art solutions. We further argue that a knowledge-based solution to the open entity matching problem ought to be considered under the open world assumption, as in some cases the descriptions to be matched may not contain the information necessary to make an accurate matching decision. The main goal of this work is to show that the proposed framework is suitable for pursuing a reliable solution of the entity matching problem, regardless of the set of rules or the ontology adopted. In fact, we believe that the structural and syntactic heterogeneity affecting data on the Web undermines the definition of a single global solution. However, we argue that a knowledge-driven approach, considering the semantics and meta-properties of the compared attributes, can provide important benefits and lead to more reliable solutions. To achieve this goal, we carry out several experiments to evaluate different sets of rules, testing our thesis and learning important lessons for future developments. The sets of rules that we consider to bootstrap the proposed solution are the result of diverse complementary processes: first, we investigate whether capturing the matching knowledge employed by people in making entity matching decisions, by relying on machine learning techniques, can produce an effective set of rules (bottom-up strategy); second, we investigate the application of formal ontology tools to analyze the features defined in the ontology and support the definition of entity matching rules (top-down strategy). Moreover, we argue that by merging the rules resulting from these complementary processes, we can define a set of rules that can reliably support entity matching decisions in an open context.
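As an illustration of what such matching rules look like operationally, the sketch below applies simple knowledge-based rules to two entity descriptions and returns a three-valued, open-world decision; the attribute names and rules are hypothetical, not the rules learned or defined in the thesis.

```python
from enum import Enum

class Decision(Enum):
    MATCH = "match"
    NO_MATCH = "no-match"
    UNKNOWN = "unknown"   # open-world: not enough evidence either way

def match_persons(d1, d2):
    """Toy knowledge-based matching rules for person descriptions
    (dicts of attribute -> value). Attributes and thresholds are hypothetical."""
    # A strongly identifying attribute decides alone when present in both.
    if d1.get("tax_code") and d2.get("tax_code"):
        return Decision.MATCH if d1["tax_code"] == d2["tax_code"] else Decision.NO_MATCH
    # Weaker attributes only support a decision in combination.
    shared = [a for a in ("name", "birth_date", "birth_place") if d1.get(a) and d2.get(a)]
    if len(shared) >= 2:
        agree = all(d1[a].lower() == d2[a].lower() for a in shared)
        return Decision.MATCH if agree else Decision.NO_MATCH
    return Decision.UNKNOWN

print(match_persons({"name": "A. Rossi", "birth_date": "1980-05-01"},
                    {"name": "a. rossi", "birth_date": "1980-05-01"}))
```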
85

A Tag Contract Framework for Modeling Heterogeneous Systems

Le, Thi Thieu Hoa January 2014 (has links)
In the distributed development of modern IT systems, contracts play a vital role in ensuring interoperability of components and adherence to specifications. The design of embedded systems, however, is made more complex by the heterogeneous nature of components, which are often described using different models and interaction mechanisms. Composing such components is generally not well-defined, making design and verification difficult. Several denotational frameworks have been proposed to handle heterogeneity using a variety of approaches. However, the application of heterogeneous modeling frameworks to contract-based design has not yet been investigated. In this work, we develop an operational model with precise heterogeneous denotational semantics, based on tag machines, that can represent heterogeneous composition, and we provide conditions under which composition can be captured soundly and completely. The operational framework is implemented in a prototype tool which we use for experimental evaluation. We then construct a full contract model and introduce heterogeneous composition, refinement, dominance, and compatibility between contracts, altogether enabling a formalized and rigorous design process for heterogeneous systems. In addition, we develop a generic algebraic method to synthesize or refine a set of contracts so that their composition satisfies a given contract.
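For orientation, the sketch below checks refinement for plain assume/guarantee contracts represented as finite sets of behaviours (assumed to be in saturated form); it is a generic contract-theory illustration, not the tag-machine formalism developed in the thesis.

```python
def refines(c_impl, c_spec):
    """c_impl = (A1, G1) refines c_spec = (A2, G2) iff it accepts more
    environments (A2 is a subset of A1) and allows fewer implementation
    behaviours (G1 is a subset of G2). Contracts are pairs of finite sets
    of behaviours, assumed to be in saturated form."""
    (a1, g1), (a2, g2) = c_impl, c_spec
    return a2 <= a1 and g1 <= g2

# Hypothetical behaviours named by strings, for illustration only.
spec = ({"env_slow"}, {"out_ok", "out_late"})
impl = ({"env_slow", "env_fast"}, {"out_ok"})
print(refines(impl, spec))   # True: weaker assumption, stronger guarantee
```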
86

Protein-dependent prediction of messenger RNA binding using Support Vector Machines

Livi, Carmen Maria January 2013 (has links)
RNA-binding proteins interact specifically with RNA strands to regulate important cellular processes. Knowing the binding partners of a protein is a crucial issue in biology and is essential to understand the protein's function and its involvement in diseases. The identification of these interactions is currently possible only through in vivo and in vitro experiments, which may not detect all binding partners. Computational methods that capture the protein-dependent nature of the binding phenomenon could help predict the binding in silico and could be resistant to experimental biases. This thesis addresses the creation of models based on support vector machines and trained on experimental data. The goal is the identification of RNAs that bind specifically to a regulatory protein. Starting from a case study on the protein CELF1, we extend our approach and propose three methods to predict whether an RNA strand can be bound by a particular RNA-binding protein. The methods use support vector machines and different features based on the sequence (method Oli), the motif score (method OliMo) and the secondary structure (method OliMoSS). We apply them to different experimentally derived datasets and compare the predictions with two methods: RNAcontext and RPISeq. Oli outperforms OliMoSS and RPISeq, supporting our protein-specific prediction and suggesting that oligo frequencies are good discriminative features. Oli and RNAcontext are the most competitive methods in terms of AUC. A precision-recall analysis reveals a better performance for Oli. On a second experimental dataset, where negative binding information is available, Oli outperforms RNAcontext with a precision of 0.73 vs. 0.59. Our experiments show that features based on primary sequence information are highly discriminative for predicting the binding between protein and RNA. Sequence motifs can improve the prediction only for some RNA-binding proteins. Finally, we conclude that experimental data on RNA binding can be effectively used to train protein-specific models for in silico predictions.
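A minimal sketch of the kind of sequence-based classifier underlying a method like Oli: k-mer (oligo) frequencies as features and a support vector machine from scikit-learn. The sequences, labels and parameters are toy values for illustration, not the thesis datasets or settings.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

def oligo_frequencies(seq, k=4, alphabet="ACGU"):
    """Normalized k-mer (oligo) frequency vector of an RNA sequence;
    a simplified stand-in for sequence features like those used by Oli."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:
            counts[index[kmer]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

# Hypothetical toy data: sequences bound (1) / not bound (0) by one protein.
seqs = ["UGUGUGUAUU", "UUGUUUGUUU", "CCCGGGCCCG", "GGGCCCGGGC"]
labels = [1, 1, 0, 0]
X = np.array([oligo_frequencies(s) for s in seqs])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict([oligo_frequencies("UGUUUGUAUG")]))
```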
87

Automatic Creation of Evocative Witty Expressions

Gatti, Lorenzo January 2018 (has links)
In linguistic creativity, the pragmatic effects of the message are often as important as its aesthetic properties. The productions of creative humans are often based both on a generic intent (such as amusing) and on a specific one, for example to attract the attention of the audience, to provoke thoughts, to get the message across, or to influence other people and change their attitudes and beliefs. In computational linguistic creativity, however, these pragmatic effects are rarely accounted for. Most works on automated linguistic creativity are limited to the production of a syntactically and semantically correct output that is also pleasing, but in applied scenarios it would be important to also validate the effectiveness of the output. This thesis aims at demonstrating that automatic systems can create productions that are attractive, pleasant and memorable, based on variations of well-known expressions, using the optimal innovation hypothesis as a frame of reference. In particular, these witty expressions can be used for evoking a given concept, improving its memorability, or for other pragmatic goals.
88

Two-phase modelling of debris flow over composite topography: theoretical and numerical aspects

Zugliani, Daniel January 2015 (has links)
In mountain territories, the majority of the population and of the productive activities are concentrated in the proximity of torrents or on alluvial fans. Here, when intense rainfall occurs, debris flows or hyper-concentrated flows can cause serious problems for the population, with possible casualties. On the other hand, the majority of these problems could be mitigated with accurate hazard mapping, disaster prevention planning and mitigation structures (e.g. silt check dams, paved channels, weirs, ...). Good and reliable mathematical and numerical models, able to accurately describe these phenomena, are therefore necessary. Debris flows and hyper-concentrated flows can be adequately represented by means of a mixture of a fluid (usually water) and a solid phase (granular sediment, e.g. sand, gravel, ...), flowing over complex and composite topography. The topography is complex in the sense that the bed elevation varies considerably, with slopes, channels, human artifacts and so on. On the other hand, the topography is composite because every type of flow can encounter two different bed behaviors: mobile bed and fixed bed. In the first case, mass can be exchanged between the bed and the flow, so the bottom elevation can change in time. In the second case (fixed bed), this mass transfer is inhibited by the presence of a rigid bottom, such as bedrock or concrete, and the bottom elevation cannot change in time. The first objective of the work presented in this thesis is the development of a new type of hyperbolic mathematical model for free-surface two-phase hyper-concentrated flows, able to describe in a unified way the fixed bed, the mobile bed and the transition between them. The second objective, strictly connected with the first, is the development of a numerical scheme that implements this mathematical model in an accurate and efficient way. In the framework of finite-volume methods with the Godunov approach, the fluxes are evaluated by solving a Riemann Problem (RP). An RP is an initial value problem for a system of PDEs in which, at a certain point, a discontinuity separates two different constant initial states (left and right). However, if the topography is composite, a new type of Riemann problem, called the Composite Riemann Problem (CRP), occurs. In a CRP, not only the initial constant states but also the relevant PDE systems change across the discontinuity. This additional complexity makes the general solution of the CRP quite challenging to obtain. The first part of the work is devoted to the derivation of the PDE systems describing the fixed- and mobile-bed behaviors. Starting from the 3D discrete equations valid for each phase (continuous fluid and solid granular) and using suitable averaging processes, the 3D continuous equations are obtained. Introducing the shallow water approximation and performing the depth-averaging process, the 2D fully two-phase models for free-surface flow over fixed and mobile bed are derived. The isokinetic approximation, which states the equality between the velocity of the solid phase and that of the liquid phase, is then used, leading to the so-called two-phase isokinetic models. Finally, an exhaustive comparison between the fixed- and mobile-bed fully two-phase models, the two-phase isokinetic models and other models proposed in the literature is presented. The second part of the work concerns the definition and, mainly, the solution of the CRP from a mathematical point of view.
Firstly, a general strategy for the CRP solution is developed. It allows the coupling of different hyperbolic systems that are physically compatible (e.g. fixed-bed with mobile-bed systems, or free-surface flow with pressurized flow), even if they have a different number of equations. The resulting CRP solution is composed of a single PDE system, called the Composite PDE system, whose properties, under some assumptions, degenerate to the properties of the original PDE systems. The general strategy is developed using the simplest 1D isokinetic models for the fixed bed and the mobile bed (i.e. PDE systems valid only for low concentrations). Consistently with the generality of the CRP solution method, the low-concentration constraint is then relaxed, ending up with a Composite PDE system that also describes highly concentrated flows. From the numerical point of view, all the developed Composite systems are integrated using the finite-volume method with Godunov fluxes. These fluxes are evaluated using three different approximate Riemann solvers: the Generalized Roe solver, the LHLL solver and the Universal Osher solver. All the solvers are analyzed and an exhaustive comparison between them is performed, highlighting pros and cons. The schemes are second-order accurate in space and time, achieved by means of the MUSCL approach. Finally, the numerical schemes have been parallelized using the OpenMP standard. All the models are then tested by comparing analytical and numerical solutions. The results are satisfactory, with close agreement between the two solutions in the majority of the physically based test cases. Only minor issues arise when the simulations are performed in a few resonant cases; however, these problems occur in unrealistic configurations, so they are not expected to be encountered in real situations. A realistic application is also presented (the evolution of a trench over a partially paved channel), demonstrating the capabilities of both the mathematical approach and the numerical scheme.
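For reference, a classical Riemann Problem and the composite variant described above can be written schematically as follows; the notation is generic and not taken from the thesis.

```latex
% Classical RP: one hyperbolic system, piecewise-constant initial data.
\[
\partial_t \mathbf{U} + \partial_x \mathbf{F}(\mathbf{U}) = \mathbf{0},
\qquad
\mathbf{U}(x,0) =
\begin{cases}
\mathbf{U}_L, & x < 0,\\
\mathbf{U}_R, & x > 0.
\end{cases}
\]
% Composite RP (sketch): the governing system itself also changes at x = 0,
% e.g. a fixed-bed system on the left and a mobile-bed system on the right.
\[
\partial_t \mathbf{U} + \partial_x \mathbf{F}_{\mathrm{fix}}(\mathbf{U}) = \mathbf{0} \ \ (x<0),
\qquad
\partial_t \mathbf{W} + \partial_x \mathbf{F}_{\mathrm{mob}}(\mathbf{W}) = \mathbf{0} \ \ (x>0).
\]
```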
89

Efficient Reasoning with Constrained Goal Models

Nguyen, Chi Mai January 2017 (has links)
Goal models have been widely used in computer science to represent software requirements, business objectives, and design qualities. Existing goal modelling techniques, however, have shown limitations of expressiveness and/or tractability in coping with complex real-world problems. In this work, we exploit advances in automated reasoning technologies, notably Satisfiability and Optimization Modulo Theories (SMT/OMT), and we propose and formalize: (i) an extended modelling language for goals, namely the Constrained Goal Model (CGM), which makes explicit the notions of goal refinement and domain assumption, allows for expressing preferences between goals and refinements, and allows for associating numerical attributes to goals and refinements for defining constraints and optimization goals over multiple objective functions, refinements and their numerical attributes; (ii) a novel set of automated reasoning functionalities over CGMs, allowing for automatically generating suitable realizations of input CGMs, under user-specified assumptions and constraints, that also maximize preferences and optimize given objective functions. We are also interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionalities or performance enhancements to cope with a growing user population (requirements evolution). Alternatively, vendors of a system may want to minimize the costs of implementing requirements changes (evolution requirements). We propose to use CGMs to represent the requirements of a system and to capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or maximize customer value. We can then exploit reasoning techniques to derive optimal new specifications for an evolving software system. We have implemented these modelling and reasoning functionalities in a tool, named CGM-Tool, using the OMT solver OptiMathSAT as automated reasoning backend. Moreover, we have conducted an experimental evaluation on large CGMs to support the claim that our proposal scales well for goal models with thousands of elements. To assess the usability of our framework, we have employed a user-oriented evaluation based on the enquiry evaluation method.
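A toy version of the kind of OMT encoding involved can be sketched with an off-the-shelf optimizing solver. The snippet below uses Z3's Python API purely for illustration (the thesis tool relies on OptiMathSAT); the goal names, refinements and costs are hypothetical.

```python
from z3 import Bool, Implies, And, Or, If, Optimize, Sum, sat, is_true

# Toy constrained goal model: root goal G is refined either by {G1, G2}
# (refinement R_a) or by {G3} (refinement R_b); costs are hypothetical.
G, G1, G2, G3, R_a, R_b = (Bool(n) for n in ("G", "G1", "G2", "G3", "R_a", "R_b"))

opt = Optimize()
opt.add(G)                              # the root goal must be realized
opt.add(Implies(G, Or(R_a, R_b)))       # G needs at least one refinement
opt.add(Implies(R_a, And(G1, G2)))      # refinement a requires G1 and G2
opt.add(Implies(R_b, G3))               # refinement b requires G3
cost = Sum(If(G1, 3, 0), If(G2, 2, 0), If(G3, 7, 0))
opt.minimize(cost)                      # prefer the cheapest realization

if opt.check() == sat:
    m = opt.model()
    print([g for g in (G1, G2, G3) if is_true(m.evaluate(g))])  # -> [G1, G2]
```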
90

Optimal Codes and Entropy Extractors

Meneghetti, Alessio January 2017 (has links)
In this work we deal with both coding theory and entropy extraction for random number generators to be used for cryptographic purposes. We start from a thorough analysis of known bounds on code parameters and a study of the properties of Hadamard codes. Of particular interest is the Griesmer bound, a strong result known to be true only for linear codes. We try to extend it to all codes, and we determine many parameters for which the Griesmer bound holds also for nonlinear codes. In the case of systematic codes, a class of codes that includes linear codes, we derive stronger results on the relationship between the Griesmer bound and optimal codes. We also construct a family of optimal binary systematic codes contradicting the Griesmer bound. Finally, we obtain new bounds on the size of optimal codes. Regarding the study of random number generation, we analyse linear extractors and their connection with linear codes. The main result on this topic is a link between code parameters and the entropy rate obtained by a processed random number generator. More precisely, to any linear extractor we can associate the generator matrix of a linear code. We then link the total variation distance between the uniform distribution and the probability mass function of a random number generator with the weight distribution of the linear code associated with the linear extractor. Finally, we present a collection of results derived while pursuing a way to classify optimal codes, such as a probabilistic algorithm to compute the weight distribution of linear codes and a new bound on the size of codes.
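As a small illustration of the link between linear extractors and linear codes, the sketch below treats one matrix both as the generator matrix of a binary code (whose weight distribution is enumerated) and as a linear extractor applied to a block of raw bits; the matrix is a standard [7,4] Hamming generator used only as an example, not a construction from the thesis.

```python
from itertools import product
import numpy as np

# Generator matrix of a [7,4] binary Hamming code, in systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def weight_distribution(G):
    """Number of codewords of each Hamming weight, by enumerating messages."""
    k, n = G.shape
    dist = [0] * (n + 1)
    for msg in product((0, 1), repeat=k):
        codeword = (np.array(msg, dtype=np.uint8) @ G) % 2
        dist[int(codeword.sum())] += 1
    return dist

def linear_extractor(raw_bits, G):
    """Apply the linear extractor x -> G x (mod 2) to a block of raw bits."""
    return (G @ np.array(raw_bits, dtype=np.uint8)) % 2

print(weight_distribution(G))                 # [1, 0, 0, 7, 7, 0, 0, 1]
print(linear_extractor([1, 0, 1, 1, 0, 0, 1], G))
```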
