
Managing the empirical hardness of the ontology reasoning using the predictive modelling / Modélisation prédictive et apprentissage automatique pour une meilleure gestion de la complexité empirique du raisonnement autour des ontologies

Alaya Mili, Nourhene 13 October 2016 (has links)
De multiples techniques d'optimisation ont été implémentées afin de surmonter le compromis entre la complexité des algorithmes de raisonnement et l'expressivité du langage de formulation des ontologies. Cependant, les campagnes d'évaluation des raisonneurs continuent de confirmer l'aspect imprévisible et aléatoire des performances de ces logiciels à l'égard des ontologies issues du monde réel. Partant de ces observations, l'objectif principal de cette thèse est d'assurer une meilleure compréhension du comportement empirique des raisonneurs en fouillant davantage le contenu des ontologies. Nous avons déployé des techniques d'apprentissage supervisé afin d'anticiper les comportements futurs des raisonneurs. Nos propositions sont établies sous forme d'un système d'assistance aux utilisateurs d'ontologies, appelé "ADSOR". Quatre composantes principales ont été proposées. La première est un profileur d'ontologies. La deuxième est un module d'apprentissage capable d'établir des modèles prédictifs de la robustesse des raisonneurs et de la difficulté empirique des ontologies. La troisième composante est un module d'ordonnancement par apprentissage, pour la sélection du raisonneur le plus robuste étant donnée une ontologie. Nous avons proposé deux approches d'ordonnancement : la première fondée sur la prédiction mono-label et la seconde sur la prédiction multi-label. La dernière composante offre la possibilité d'extraire les parties potentiellement les plus complexes d'une ontologie. L'identification de ces parties est guidée par notre modèle de prédiction du niveau de difficulté d'une ontologie. Chacune de nos approches a été validée grâce à une large palette d'expérimentations. / Highly optimized reasoning algorithms have been developed to allow inference tasks on expressive ontology languages such as OWL (DL). Nevertheless, reasoning remains a challenge in practice. Overall, a reasoner may be optimized for some, but not all, ontologies.
Given these observations, the main purpose of this thesis is to investigate ways to cope with this reasoner performance variability. We adopted supervised learning as the core methodology guiding the design of our solution. Our main claim is that the output quality of a reasoner depends closely on the quality of the ontology. Accordingly, we first introduced a novel collection of features which characterise the design quality of an OWL ontology. Afterwards, we modelled a generic learning framework to help predict the overall empirical hardness of an ontology, and to anticipate a reasoner's robustness under online usage constraints. We then addressed the issue of automatic reasoner selection for ontology-based applications and introduced a novel reasoner ranking framework, with correctness and efficiency as our main ranking criteria. We proposed two distinct methods: i) ranking based on single-label prediction, and ii) a multi-label ranking method. Finally, we proposed extracting the ontology sub-parts that are the most computationally demanding. Our method relies on the atomic decomposition and locality-based module extraction techniques and employs our predictive model of ontology hardness. Extensive experiments were carried out to demonstrate the merit of our approaches. All of our proposals were gathered in a user assistance system called "ADSOR".
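The single-label selection idea described above can be illustrated with a deliberately small sketch: learn, from ontologies whose best reasoner is already known, which reasoner to pick for a new ontology. The feature names, reasoner names, and training values below are invented for illustration, and a 1-nearest-neighbour rule stands in for the thesis's actual learning models:

```python
# A minimal sketch (not ADSOR's actual algorithm) of single-label reasoner
# selection: given feature vectors describing ontologies and the reasoner
# that handled each one best, predict a reasoner for a new ontology.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_reasoner(train, new_features):
    """train: list of (features, best_reasoner); returns predicted reasoner."""
    _, best = min(train, key=lambda t: euclidean(t[0], new_features))
    return best

# Hypothetical training data: (axiom_count, max_class_depth, property_count)
train = [
    ((120, 4, 10), "HermiT"),
    ((50000, 12, 300), "Konclude"),
    ((900, 6, 40), "Pellet"),
]

print(select_reasoner(train, (45000, 11, 280)))  # -> Konclude
```

Any off-the-shelf classifier could replace the nearest-neighbour rule here; the point is only that reasoner selection reduces to supervised prediction over ontology features.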

Consolidação do estudo e análise da robustez de operadores fuzzy considerando a abordagem intuicionista / Consolidating the intuitionistic approach regarding the study and analysis of fuzzy operators robustness

Zanotelli, Rosana Medina 28 April 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Esta dissertação contribui com a análise da robustez na Lógica Fuzzy, como uma importante fundamentação para a modelagem e o desenvolvimento de sistemas robustos, estendendo esta abordagem para a lógica intuicionista de Atanassov. Primeiramente, apresenta-se uma introdução à lógica fuzzy, discutindo as negações, funções de agregação, implicações e coimplicações fuzzy, incluindo também os conectivos Xor e derivações. O trabalho também considera a análise da sensibilidade destes conectivos fuzzy e suas construções duais, essencialmente focada em propriedades algébricas e projeções. Começando com a avaliação da sensibilidade de conectivos fuzzy, a proposta estende os resultados para classes de conectivos fuzzy intuicionistas. Como principal contribuição, formalmente estabelece-se que a robustez preserva as construções duais e as funções de projeção relacionadas a conectivos fuzzy intuicionistas representáveis. Mostra-se que a extensão do trabalho científico proposto por Y.
Li e colaboradores, 2005, em "An Approach to Measure the Robustness of Fuzzy Reasoning", para a classe de conectivos fuzzy intuicionistas é preservada pelas construções duais. A presente pesquisa mostra que a análise de robustez pode ser diretamente verificada a partir de operadores fuzzy usando duas estratégias: (i) a sensibilidade de operadores fuzzy baseada na análise da monotonicidade de seus argumentos (negações, agregações, implicações e coimplicações); e ainda (ii) a avaliação do comportamento dos operadores fuzzy nos pontos terminais do intervalo unitário, onde a monotonicidade não pode ser aplicada (conectivos Xor, XNor, bi-implicações e bi-coimplicações). / This dissertation contributes to robustness analysis in fuzzy logic as an important foundation for modelling and developing robust systems, extending the approach to Atanassov's intuitionistic fuzzy sets. It begins with an introduction to fuzzy logic, discussing negations, aggregation functions, fuzzy implications and coimplications, also including Xor connectives and their derivations. It then considers the sensitivity analysis of these fuzzy connectives and their dual constructions, focused essentially on algebraic properties and projections. Starting with an evaluation of the sensitivity of representable fuzzy connectives, the results are extended to the intuitionistic connective classes. The main result formally states that robustness preserves the dual constructions and projection functions of representable intuitionistic fuzzy connectives. It extends the work of Y. Li et al., 2005, "An Approach to Measure the Robustness of Fuzzy Reasoning", to the class of intuitionistic fuzzy connectives, showing that the robustness of intuitionistic fuzzy sensitivity is preserved by dual constructions.
The present research shows that the robustness analysis of intuitionistic fuzzy operators can be directly verified from fuzzy operators using two strategies: (i) the sensitivity analysis of fuzzy connectives based on the monotonicity of their arguments (negations, aggregations, implications and coimplications); and (ii) the evaluation of the behaviour of fuzzy connectives at the endpoints of the unit interval, where the monotonicity property does not apply (Xor, XNor, bi-implication and bi-coimplication fuzzy operators).
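The sensitivity of a fuzzy connective can be approximated numerically: perturb each argument by at most delta inside the unit interval and measure the largest change in the output. The grid search below is a rough illustrative sketch, not the analytic monotonicity-based method of the dissertation:

```python
# A rough numerical sketch of the pointwise sensitivity of a fuzzy
# connective F: the largest change in F(x, y) when each argument is
# perturbed by at most delta and clamped back into [0, 1].

def clamp(v):
    return max(0.0, min(1.0, v))

def sensitivity(F, x, y, delta, steps=50):
    base = F(x, y)
    worst = 0.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            dx = -delta + 2 * delta * i / steps
            dy = -delta + 2 * delta * j / steps
            worst = max(worst, abs(F(clamp(x + dx), clamp(y + dy)) - base))
    return worst

t_norm_min = lambda x, y: min(x, y)  # Goedel t-norm
print(round(sensitivity(t_norm_min, 0.5, 0.5, 0.1), 3))  # -> 0.1
```

For monotone connectives such as the minimum t-norm, the extreme deviations occur at the corner perturbations, which is the property that strategy (i) above exploits; strategy (ii) handles connectives like Xor, where monotonicity is unavailable.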

Envisager la vigilance crues comme système organisationnel : les conditions de sa robustesse en territoires inondés dans le bassin Adour-Garonne (Sud-Ouest de la France) / Flood warning as an organisational system : the conditions of its robustness in flooded territories analysed in the Adour-Garonne basin (South-West of France)

Daupras, France 18 December 2015 (has links)
Malgré les améliorations portées à la détection des crues, à leurs prévisions et au perfectionnement des technologies de communication ces vingt dernières années, les systèmes d’alerte aux inondations restent soumis à des vulnérabilités et des incertitudes inhérentes à leur fonctionnement. En s’intéressant plus particulièrement au dispositif de Vigilance crues, ce travail questionne la manière dont les acteurs impliqués dans ce dispositif sociotechnique s’adaptent aux incertitudes et vulnérabilités auxquelles ils sont soumis. Ce travail s’appuie notamment sur le développement d’un modèle centré sur la notion de robustesse. D’une part, celui-ci permet de mieux saisir les processus qui conditionnent l’atteinte de l’objectif d’anticipation et d’amélioration de l’action collective organisée au cours d’une inondation. D’autre part, cette approche, en combinant les capacités à faire face des acteurs et les vulnérabilités du système, questionne les conditions socio-spatiales de la robustesse de la Vigilance crues au quotidien, i.e. en dehors des périodes de crues. Notre méthodologie repose sur plus de cent cinquante entretiens auprès des acteurs du système de vigilance dans le bassin Adour-Garonne. Il est ainsi démontré que la robustesse de ce système dépend (1) de la capacité des acteurs à faire face aux incertitudes et de leurs connaissances territoriales ; (2) de la mise en œuvre d’une approche intégrée qui tient compte des savoirs vernaculaires et des savoirs techniques ; (3) de rencontres régulières entre maires, services de gestion de crise et prévisionnistes, en particulier dans le cadre d’exercices inondation. Ainsi, se développent la confiance entre acteurs, l’apprentissage collectif et le renforcement de l’action collective en situations de crise. / Improving flood forecasting has become a technological race with major advances over the last 20 years. 
Moreover, improvements in communication technologies have significantly increased the speed of warning dissemination. However, flood warning systems present inherent uncertainties and vulnerabilities. The present thesis questions how stakeholders involved in the French Flood Warning System (FFWS) deal with those uncertainties and vulnerabilities to achieve the aim of anticipation. Our approach is based on a conceptual model built around the concept of robustness. We applied this model to several flooded territories in the Adour-Garonne basin (France). Taking into account both vulnerability and coping capacities, we analyse the socio-spatial conditions that allow the robustness of the FFWS. A qualitative research methodology (150 semi-structured interviews) was adopted for the case studies. We demonstrate that (1) some vulnerabilities of the institutional warning can be overcome by the coping capacities and territorial knowledge of people at risk; (2) the improvement of the FFWS can be achieved by combining vernacular and scientific knowledge and by adapting to the local context; (3) the reinforcement of the FFWS's robustness depends on the upholding and development of collective action, integrating at-risk people, crisis management services and forecasters through regular meetings and flood training exercises outside flooding periods. Such actions reinforce collective action during crisis situations through the development of trust.

New Results in Stability, Control, and Estimation of Fractional Order Systems

Koh, Bong Su 2011 May 1900 (has links)
A review of recent literature and the research effort underlying this dissertation indicate that fractional order differential equations have significant potential to advance dynamical system methods broadly. Particular promise exists in the area of control and estimation, even for systems where fractional order models do not arise “naturally”. This dissertation is aimed at further building the base methodology, with a focus on robust feedback control and state estimation. By setting the mathematical foundation on the Caputo definition of the fractional derivative, we can expand the concepts of fractional order calculus in a way that enables us to build the corresponding controllers and estimators in state-space form. For robust eigenstructure assignment, we first examine the conditioning problem of the closed-loop eigenvalues and the stability robustness criteria for the fractional order system, and we find a unique application of an n-dimensional rotation algorithm developed by Mortari to solve the robust eigenstructure assignment problem in a novel way. In contradistinction to the existing fractional Kalman filter developed using the Grünwald-Letnikov definition, the new fractional Kalman filter that we establish by utilizing the Caputo definition, together with our algorithms, provides a powerful means of solving practical state estimation problems for fractional order systems.
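The contrast drawn above between the Grünwald-Letnikov and Caputo definitions can be made concrete with the discrete Grünwald-Letnikov approximation, whose weights are signed generalized binomial coefficients. This is a generic textbook sketch, not the dissertation's filter:

```python
# Illustrative sketch: the Gruenwald-Letnikov approximation of a fractional
# derivative of order alpha uses weights c_j = (-1)^j * C(alpha, j),
# computed below by the recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j).
# For alpha = 1 the weights reduce to the first difference [1, -1, 0, ...].

def gl_weights(alpha, n):
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(samples, alpha, h):
    """Approximate the GL fractional derivative at the last sample point."""
    w = gl_weights(alpha, len(samples))
    return sum(wj * x for wj, x in zip(w, reversed(samples))) / h ** alpha

print(gl_derivative([0.0, 1.0, 2.0, 3.0], 1.0, 1.0))  # -> 1.0
```

With alpha = 1 on linearly increasing samples, the scheme recovers the ordinary first derivative (slope 1.0), a quick sanity check that the weight recurrence is right.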

Effective Sampling Design for Groundwater Transport Models

Nordqvist, Rune January 2001 (has links)
Model reliability is important when groundwater models are used for evaluation of environmental impact and water resource management. Model attributes such as geohydrologic units and parameter values need to be quantified in order to obtain reliable results. A primary objective of sampling design for groundwater models is to increase the reliability of modelling results by selecting effective measurement locations and times. It is advantageous to employ simulation models to guide measurement strategies from the early investigation stages onwards. Normally, optimal design is only possible when model attributes are known prior to constructing a design. This is not a meaningful requirement, as the model attributes are the final result of the analysis and are not known beforehand. Thus, robust design methods are required that are effective for ranges of parameter values, measurement error types and for alternative conceptual models. Parameter sensitivity is the fundamental model property that is used in this thesis to create effective designs. For conceptual model uncertainty, large-scale sensitivity analysis is used to devise networks that capture sufficient information to determine which model best describes the system with a minimum of measurement points. In fixed conceptual models, effective parameter- and error-robust designs are based on criteria that minimise the size of the parameter covariance matrix (D-optimality). Optimal designs do not necessarily have observations with the highest parameter sensitivities, because D-optimality reduces parameter estimation errors by balancing high sensitivity and low correlation between parameters. Ignoring correlation in sparse designs may result in considerably inefficient designs. Different measurement error assumptions may also give widely different optimal designs. Early stage design often involves simple homogenous models, for which the design effectiveness may be seriously offset by significant aquifer heterogeneity.
Simple automatic and manual methods are possible for design generation. While none of these guarantee globally optimal designs, they do generate designs that are more effective than those normally used for measurement programs. Effective designs are seldom intuitively obvious, indicating that this methodology is quite useful. A general benefit of this type of analysis, in addition to the actual generation of designs, is insight into the relative importance of model attributes and their relation to different measurement strategies.
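A D-optimality criterion of the kind described above can be sketched in a few lines: enumerate candidate measurement subsets and keep the one maximising det(JᵀJ), the determinant that balances high parameter sensitivity against low correlation between parameters. The two-parameter sensitivity rows below are invented numbers, not output of a transport model:

```python
# Toy sketch of D-optimal sampling design: among candidate measurement
# locations, pick the subset whose sensitivity matrix J maximises
# det(J^T J), i.e. minimises the volume of the parameter covariance.
from itertools import combinations

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def jtj(rows):
    return [[sum(r[i] * r[j] for r in rows) for j in range(2)] for i in range(2)]

def d_optimal(candidates, k):
    """Return the k sensitivity rows maximising det(J^T J)."""
    return max(combinations(candidates, k), key=lambda s: det2(jtj(s)))

# Each row: sensitivity of one observation to (parameter_1, parameter_2).
candidates = [(1.0, 0.9), (1.0, -1.0), (0.1, 0.1), (0.8, 1.0)]
best = d_optimal(candidates, 2)
print(best)  # -> ((1.0, 0.9), (1.0, -1.0))
```

Note that the winning pair combines high sensitivity with nearly uncorrelated rows; the pair of the two most correlated high-sensitivity rows, (1.0, 0.9) and (0.8, 1.0), scores far lower, which is exactly the point made above about balancing sensitivity against correlation.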

Efficient algorithms for the identification of miRNA motifs in DNA sequences

Mendes, Nuno D 06 June 2011 (has links) (PDF)
Unravelling biological processes depends on the adequate modelling of the regulatory mechanisms that determine the timing and spatial patterns of gene expression. In the last decade, a novel regulatory mechanism has been discovered and its biological importance increasingly recognised. This mechanism is mediated by RNA molecules named miRNAs, which are the product of the maturation of non-coding gene transcripts and act post-transcriptionally, usually to dampen or abolish the expression of protein-coding genes. Despite having eluded detection for so long, it is now clear that the expression pattern of many genes cannot be elucidated without incorporating the effects of miRNA-mediated regulation. The technical difficulties entailed by the experimental detection of these regulators prompted the development of increasingly sophisticated computational approaches. Gene finding strategies originally developed for coding genes cannot be applied, since these non-coding molecules are subject to very different sequence constraints and are too short to exhibit statistical properties easily distinguished from the background. As a result, computational tools came to rely heavily on the identification of conserved sequences, distant homologs and machine learning techniques. Recent developments in sequencing technology have overcome some of the limitations of earlier experimental approaches, but pose new computational challenges. At present, the identification of new miRNA genes is therefore the result of several approaches, both computational and experimental. In spite of the advances this research field has seen in recent years, we are still not able to formally and rigorously characterise miRNA genes, that is, to identify the sequence, structure or contextual requirements needed to turn a DNA sequence into a functional miRNA.
Efforts to use computational algorithms to enumerate the full set of miRNAs of an organism have been limited by a strong reliance on arguments of precursor conservation and feature similarity. However, miRNA precursors may arise anew or be lost across the evolutionary history of a species, and a newly-sequenced genome may be evolutionarily too distant from other genomes for an adequate comparative analysis. In addition, learning intricate classification rules based purely on features shared by currently known miRNA precursors may perpetuate an identification bias rather than provide a sound means of telling true miRNAs from other genomic stem-loops. In this thesis, we present a strategy to sieve through the vast number of stem-loops found in metazoan genomes in search of pre-miRNAs, significantly reducing the set of candidates while retaining most known miRNA precursors. Our approach relies on precursor properties derived from the current knowledge of miRNA biogenesis, analysis of the precursor structure, and incorporation of information about the transcription potential of each candidate. Our approach has been applied to the genomes of Drosophila melanogaster and Anopheles gambiae, which has allowed us to show that there is a strong bias amongst annotated pre-miRNAs towards robust stem-loops in these genomes, and to propose a scoring scheme for precursor candidates which combines four robustness measures. Additionally, we have identified several known pre-miRNA homologs in the newly-sequenced Anopheles darlingi and shown that most are found amongst the top-scoring precursor candidates for that organism, with respect to the combined score.
The structural analysis of our candidates and the identification of the region of the structural space where known precursors are usually found allowed us to eliminate several candidates, but also showed that there is a staggering number of genomic stem-loops which seem to fulfil the stability, robustness and structural requirements, indicating that additional evidence is needed to identify functional precursors. To this end, we have introduced different strategies to evaluate the transcription potential of the remaining candidates, varying according to the information available for the dataset under study.
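As a toy illustration of the stem-loop sieving idea, the sketch below reports positions in a DNA sequence where a short stem is followed, after a loop, by its exact reverse complement. Real pre-miRNA screens rely on thermodynamic folding and the robustness measures discussed above; exact base pairing and the fixed stem/loop lengths here are simplifying assumptions:

```python
# Highly simplified stem-loop (hairpin) screen: report positions where a
# stem of `stem` bases is followed, after a `loop`-base gap, by its
# reverse complement.

COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(s):
    return "".join(COMP[b] for b in reversed(s))

def find_hairpins(seq, stem=4, loop=3):
    hits = []
    for i in range(len(seq) - 2 * stem - loop + 1):
        left = seq[i:i + stem]
        right = seq[i + stem + loop:i + 2 * stem + loop]
        if right == revcomp(left):
            hits.append(i)
    return hits

print(find_hairpins("TTGCGCAAAGCGCTT"))  # -> [2]
```

Even on real genomes such an exact-pairing scan would flag an enormous number of stem-loops, which is precisely the "staggering number" problem the thesis addresses with additional stability, robustness and transcription-potential evidence.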

An Energy-Efficient Distributed Algorithm for k-Coverage Problem in Wireless Sensor Networks

Vu, Chinh Trung 03 May 2007 (has links)
Wireless sensor networks (WSNs) have recently attracted a great deal of attention due to their numerous attractive applications in many different fields. Sensors and WSNs possess a number of special characteristics that make them very promising in many applications, but that also impose constraints which make issues in sensor networks particularly difficult. These issues include topology control, routing, coverage, security, and data management. In this thesis, we focus our attention on the coverage problem. First, we define the Sensor Energy-efficient Scheduling for k-coverage (SESK) problem. We then solve it by proposing a novel, completely localized and distributed scheduling approach, named Distributed Energy-efficient Scheduling for k-coverage (DESK), such that the energy consumption among all the sensors is balanced and the network lifetime is maximized while still satisfying the k-coverage requirement. Finally, in the related work section we conduct an extensive survey of the existing literature on the coverage problem.
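The k-coverage requirement that a scheduler like DESK must preserve can be stated compactly: every target point lies within sensing range of at least k active sensors. The sketch below only checks this property on invented coordinates; the distributed, energy-balancing scheduling itself is not shown:

```python
# Sketch of the k-coverage property: each target point must be inside the
# sensing disk of at least k active sensors. Coordinates and the sensing
# range are illustrative.

def covers(sensor, point, sensing_range):
    (sx, sy), (px, py) = sensor, point
    return (sx - px) ** 2 + (sy - py) ** 2 <= sensing_range ** 2

def is_k_covered(points, active_sensors, k, sensing_range):
    return all(
        sum(covers(s, p, sensing_range) for s in active_sensors) >= k
        for p in points
    )

sensors = [(0, 0), (1, 0), (0, 1), (5, 5)]
points = [(0.5, 0.5)]
print(is_k_covered(points, sensors, 3, 1.0))  # -> True
print(is_k_covered(points, sensors, 4, 1.0))  # -> False
```

A scheduler can then put a sensor to sleep only if `is_k_covered` still holds for the remaining active set, which is the invariant behind trading redundancy for network lifetime.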

Robustness in Automatic Physical Database Design

El Gebaly, Kareem January 2007 (has links)
Automatic physical database design tools rely on "what-if" interfaces to the query optimizer to estimate the execution time of the training query workload under different candidate physical designs. The tools use these what-if interfaces to recommend physical designs that minimize the estimated execution time of the input training workload. Minimizing estimated execution time alone can lead to designs that are not robust to query optimizer errors and workload changes. In particular, if the optimizer makes errors in estimating the execution time of the workload queries, the recommended physical design may actually degrade the performance of these queries. In this sense, the physical design is risky. Furthermore, if the production queries are slightly different from the training queries, the recommended physical design may not benefit them at all. In this sense, the physical design is not general. We define Risk and Generality as two new measures aimed at evaluating the robustness of a proposed physical database design, and we show how to extend the objective function optimized by a generic physical design tool to take these measures into account. We have implemented a physical design advisor in PostgreSQL, and we use it to experimentally demonstrate the usefulness of our approach. We show that our two new metrics result in physical designs that are more robust, meaning that the user can implement them with a higher degree of confidence. This is particularly important as we move towards truly zero-administration database systems, in which a DBA has no opportunity to vet the recommendations of the physical design tool before applying them.
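The extended objective function can be caricatured in a few lines: penalise a design not only by its estimated cost but also by how much worse it gets under optimizer error (risk) and under a drifted workload (lack of generality). The weights, cost figures, and field names below are invented for illustration and do not reproduce the thesis's definitions of Risk and Generality:

```python
# Schematic sketch of a robustness-aware physical design objective:
# estimated cost plus penalties for optimizer-error exposure (risk) and
# for degradation on a perturbed, production-like workload (generality).

def score(design, w_risk=0.5, w_gen=0.5):
    risk = design["worst_case_cost"] - design["estimated_cost"]
    generality_penalty = design["perturbed_cost"] - design["estimated_cost"]
    return design["estimated_cost"] + w_risk * risk + w_gen * generality_penalty

designs = {
    "aggressive_indexes": {"estimated_cost": 100, "worst_case_cost": 400, "perturbed_cost": 300},
    "conservative_indexes": {"estimated_cost": 140, "worst_case_cost": 180, "perturbed_cost": 160},
}
best = min(designs, key=lambda name: score(designs[name]))
print(best)  # -> conservative_indexes
```

Under the plain estimated-cost objective the aggressive design would win (100 vs 140); the robustness terms reverse the choice, which is the behaviour the Risk and Generality measures are meant to induce.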

Physiologically Motivated Methods For Audio Pattern Classification

Ravindran, Sourabh 20 November 2006 (has links)
Human-like performance by machines in tasks of speech and audio processing has remained an elusive goal. In an attempt to bridge the gap in performance between humans and machines, there has been an increased effort to study and model physiological processes. However, the widespread use of biologically inspired features proposed in the past has been hampered mainly by either a lack of robustness across a range of signal-to-noise ratios or formidable computational costs. In physiological systems, sensor processing occurs in several stages. It is likely that signal features and biological processing techniques evolved together and are complementary or well matched. It is precisely for this reason that modeling the feature extraction processes should go hand in hand with modeling of the processes that use these features. This research presents a front-end feature extraction method for audio signals inspired by the human peripheral auditory system. New developments in the field of machine learning are leveraged to build classifiers that maximize the performance gains afforded by these features. The structure of the classification system is similar to what might be expected in physiological processing. Further, the feature extraction and classification algorithms can be efficiently implemented using the low-power cooperative analog-digital signal processing platform. The usefulness of the features is demonstrated for tasks of audio classification, speech versus non-speech discrimination, and speech recognition. The low-power nature of the classification system makes it ideal for use in applications such as hearing aids, hand-held devices, and surveillance through acoustic scene monitoring.
