71

Default Reasoning about Actions

Straß, Hannes 21 June 2012 (has links)
Action theories are versatile and well-studied knowledge representation formalisms for modelling dynamic domains. However, traditional action theories allow only the specification of definite world knowledge, that is, universal rules for which there are no exceptions. When modelling a complex domain for which no complete knowledge can be obtained, axiomatisers face an unpleasant choice: either they cautiously restrict themselves to the available definite knowledge and accept the limited usefulness of the axiomatisation, or they bravely model some general, defeasible rules as definite knowledge and risk inconsistency whenever such a rule has an exception. This thesis presents a framework for default reasoning in action theories that overcomes these problems and offers useful default assumptions while retaining a correct treatment of default violations. The framework allows action theories to be extended with defeasible statements that express how the domain usually behaves. Normality of the world is then assumed by default and can be used to conclude what holds in the domain under normal circumstances. In the case of an exception, the default assumption is retracted, whereby consistency of the domain axiomatisation is preserved.
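The assume-by-default, retract-on-exception behaviour can be illustrated with a minimal Python sketch; the predicates, the domain and the function below are invented for illustration and do not reproduce the action-theory formalism developed in the thesis.

```python
# Minimal sketch of default conclusions that are withdrawn when an exception
# becomes known; hypothetical predicates, not the thesis's formalism.

def conclude(definite_facts, defaults, known_exceptions):
    """Adopt each default assumption unless an exception to it is definitely known."""
    conclusions = set(definite_facts)
    for assumption in defaults:
        if assumption not in known_exceptions:   # normality is assumed by default
            conclusions.add(assumption)
        # otherwise the assumption is retracted and no inconsistency arises
    return conclusions

# A gripper usually holds an object after picking it up (defeasible rule).
facts = {"picked_up(block)"}
defaults = {"holding(block)"}

# Normal circumstances: the default is concluded alongside the definite facts.
print(conclude(facts, defaults, known_exceptions=set()))
# Exception observed (the block slipped): only the definite facts remain.
print(conclude(facts, defaults, known_exceptions={"holding(block)"}))
```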
72

Development and Evaluation of Data Processing Techniques in Magnetoencephalography

Schönherr, Margit 12 July 2012 (has links)
With MEG, the tiny magnetic fields produced by neuronal currents within the brain can be measured completely non-invasively. However, the signals are very small (~100 fT) and often obscured by spontaneous brain activity and external noise. A recurrent issue in MEG data analysis is therefore the identification and elimination of this unwanted interference within the recordings. Various strategies exist to address this problem. In this thesis, two of these strategies are scrutinized in detail. The first is the commonly used procedure of averaging over trials, a data reduction method applied successfully in many neurocognitive studies. However, the brain does not always respond identically to repeated stimuli, so averaging can eliminate valuable information. Alternative approaches aiming at single-trial analysis are difficult to realize, and many of them focus on temporal patterns. Here, a compromise involving random subaveraging of trials and repeated source localization is presented. A simulation study with numerous examples demonstrates the applicability of the new method. As a result, inferences about the generators of single trials can be drawn, which allows deeper insight into neuronal processes of the human brain. The second technique examined in this thesis is a preprocessing tool termed Signal Space Separation (SSS). It is widely used for preprocessing of MEG data, including noise reduction by suppression of external interference as well as movement correction. Here, the mathematical principles of the SSS series expansion and the rules for its application are investigated. The most important mathematical precondition is a source-free sensor space. Using three data sets, the influence of a violation of this convergence criterion on source localization accuracy is demonstrated. The analysis reveals that the SSS method works reliably even when the convergence criterion is not fully obeyed. This motivates utilizing the SSS method for the transformation of MEG data to virtual sensors on the scalp surface. Having MEG data directly on the individual scalp surface would facilitate sensor-space analysis across subjects and comparability with EEG. A comparison study of the transformation results obtained with SSS and those produced by inverse and subsequent forward computation is performed. It shows a strong dependence on the relative position of sources and sensors. In addition, the latter approach yields superior results for the intended purpose of data transformation.
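As a rough illustration of the random-subaveraging idea (not the thesis's exact pipeline), the following Python/NumPy sketch repeatedly draws random subsets of trials and averages them; each sub-average would then be fed to an arbitrary source-localization routine, and the scatter of the resulting source estimates across repetitions would be analysed. The array shape, subset size and repetition count are assumptions.

```python
# Sketch of random subaveraging of MEG trials, assuming `trials` has shape
# (n_trials, n_channels, n_times); the localization step itself is left out.
import numpy as np

rng = np.random.default_rng(0)

def random_subaverages(trials, subset_size, n_repetitions):
    """Yield averages over randomly drawn subsets of trials."""
    n_trials = trials.shape[0]
    for _ in range(n_repetitions):
        idx = rng.choice(n_trials, size=subset_size, replace=False)
        # Sub-averaging improves SNR while retaining some trial-to-trial variability.
        yield trials[idx].mean(axis=0)

# Example with synthetic data: 100 trials, 248 channels, 300 time samples.
trials = rng.standard_normal((100, 248, 300))
sub_averages = list(random_subaverages(trials, subset_size=20, n_repetitions=50))
# Each sub-average would now be passed to a source-localization routine, and the
# distribution of the estimated sources across repetitions would be examined.
print(len(sub_averages), sub_averages[0].shape)
```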
73

High-throughput sequencing and small non-coding RNAs

Langenberger, David 22 April 2013 (has links)
In this thesis, the processing mechanisms of short non-coding RNAs (ncRNAs) are investigated using data generated by the current method of high-throughput sequencing (HTS). The recently adapted short RNA-seq protocol allows the sequencing of RNA fragments of microRNA-like length (∼18-28 nt). Thus, after mapping the data back to a reference genome, it is possible not only to measure but also to visualize the expression of all ncRNAs that are processed to fragments of this specific length. Short RNA-seq data was used to show that a highly abundant class of small RNAs, called microRNA-offset-RNAs (moRNAs), which was formerly detected in a basal chordate, is also produced from human microRNA precursors. To simplify this search, the tool blockbuster was developed, which automatically recognizes blocks of reads in order to detect specific expression patterns. Using blockbuster, blocks from moRNAs were detected directly next to the miR or miR* blocks and could thus easily be registered in an automated way. Further investigation of the short RNA-seq data revealed that not only microRNAs give rise to short ∼22 nt RNA pieces, but so do almost all other classes of ncRNAs, such as tRNAs, snoRNAs, snRNAs, rRNAs, Y-RNAs, and vault RNAs. The read patterns that arise after mapping these RNAs back to a reference genome seem to reflect the processing of each class and are thus specific for the RNA transcripts from which they are derived. The potential of these patterns for the classification and identification of non-coding RNAs was explored. Using a random forest classifier trained on a set of characteristic features of the individual ncRNA classes, it was possible to distinguish three types of ncRNAs, namely microRNAs, tRNAs, and snoRNAs. To make the classification available to the research community, the free web service DARIO was developed, which allows short read data from small RNA-seq experiments to be studied. The classification has shown that read patterns are specific for different classes of ncRNAs. To make use of this feature, the tool deepBlockAlign was developed. deepBlockAlign introduces a two-step approach to align read patterns with the aim of quickly identifying RNAs that share similar processing footprints. In order to find possible exceptions to the well-known microRNA maturation by Dicer and to identify additional substrates for Dicer processing, the small RNA sequencing data of a Dicer knockdown experiment in MCF-7 cells was re-evaluated. Several Dicer-independent microRNAs were found, among them the important tumor suppressor mir-663a. Many aspects of RNA maturation leave traces in RNA sequencing data in the form of mismatches to the reference genome. It is possible to recover many well-known modified sites in tRNAs, providing evidence that modified nucleotides are a pervasive phenomenon in these data sets.
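The read-pattern classification step can be sketched with scikit-learn as follows; the feature values and class labels are synthetic, and the feature names in the comments are only examples of block-based descriptors — the thesis's actual feature set and the DARIO training data are not reproduced here.

```python
# Illustrative random-forest classification of ncRNA read profiles on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical per-locus features derived from mapped read blocks, e.g. number of
# blocks, block length, read-length distribution, spacing between blocks.
X = rng.random((300, 4))
y = rng.choice(["miRNA", "tRNA", "snoRNA"], size=300)   # synthetic class labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)               # cross-validated accuracy
print(scores.mean())                                    # ~chance level on random data
```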
74

Hybridization biases of microarray expression data - A model-based analysis of RNA quality and sequence effects

Fasold, Mario 06 November 2013 (has links)
Modern high-throughput technologies like DNA microarrays are powerful tools that are widely used in biomedical research. They target a variety of genomics applications, ranging from gene expression profiling over DNA genotyping to gene regulation studies. However, the recent discovery of false positives among prominent research findings indicates a lack of awareness or understanding of the non-biological factors negatively affecting the accuracy of data produced using these technologies. The aim of this thesis is to study the origins, effects and potential correction methods for selected methodical biases in microarray data. The two-species Langmuir model serves as the basal physicochemical model of microarray hybridization, describing the fluorescence signal response of oligonucleotide probes. The so-called hook method allows the estimation of essential model parameters and the computation of summary parameters characterizing a particular microarray sample. We show that this method can be applied successfully to various types of microarrays which share the same basic mechanism of multiplexed nucleic acid hybridization. Using appropriate modifications of the model, we study RNA quality and sequence effects using publicly available data from Affymetrix GeneChip expression arrays. Varying amounts of hybridized RNA result in systematic changes of raw intensity signals and of appropriate indicator variables computed from them. Varying RNA quality strongly affects intensity signals of probes which are located at the 3' end of transcripts. We develop new methods that help assess the RNA quality of a particular microarray sample. A new metric for determining RNA quality, the degradation index, is proposed, which improves on previous RNA quality metrics. Furthermore, we present a method for the correction of the 3' intensity bias. These functionalities have been implemented in the freely available program package AffyRNADegradation. We show that microarray probe signals are affected by sequence effects, which are studied systematically using positional-dependent nearest-neighbor models. Analysis of the resulting sensitivity profiles reveals that specific sequence patterns, such as runs of guanines at the solution end of the probes, have a strong impact on the probe signals. The sequence effects differ for different chip and target types, probe types and hybridization modes. Theoretical and practical solutions for the correction of the introduced sequence bias are provided. Assessment of RNA quality and sequence biases in a representative ensemble of over 8000 available microarray samples reveals that RNA quality issues are prevalent: about 10% of the samples have critically low RNA quality. Sequence effects exhibit considerable variation within the investigated samples but have limited impact on the most common patterns in the expression space. Variations in RNA quality and quantity, in contrast, have a significant impact on the obtained expression measurements. These hybridization biases should be considered and controlled in every microarray experiment to ensure reliable results. Application of rigorous quality control and signal correction methods is strongly advised to avoid erroneous findings. Furthermore, incremental refinement of physicochemical models is a promising way to improve signal calibration, paralleled with the opportunity to better understand the fundamental processes in microarray hybridization.
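For orientation, a common way to write the two-species Langmuir isotherm (specific and non-specific binding contributing to probe occupancy) is sketched below in Python; the parameter values are purely illustrative and are not fitted constants from the thesis.

```python
# Sketch of a two-species Langmuir hybridization signal: specific (S) and
# non-specific (N) targets compete for probe sites; illustrative values only.
def langmuir_signal(x_s, x_n, k_s, k_n, m=1e4, background=50.0):
    """Probe fluorescence as a function of specific/non-specific transcript
    concentrations x_s, x_n and their binding constants k_s, k_n."""
    occupancy = (x_s * k_s + x_n * k_n) / (1.0 + x_s * k_s + x_n * k_n)
    return m * occupancy + background   # grows linearly, then saturates at m + background

# The transition from the linear to the saturated regime is the non-linearity
# that summary methods such as the hook method exploit.
for x in (1e-3, 1e-2, 1e-1, 1.0, 10.0):
    print(x, round(langmuir_signal(x_s=x, x_n=1e-2, k_s=1.0, k_n=0.05), 1))
```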
75

The Orthology Road: Theory and Methods in Orthology Analysis

Hernandez Rosales, Maribel 09 June 2013 (has links)
The evolution of biological species depends on changes in genes. Among these changes are the gradual accumulation of DNA mutations, insertions and deletions, duplication of genes, movements of genes within and between chromosomes, gene losses and gene transfer. As two populations of the same species evolve independently, they will eventually become reproductively isolated and become two distinct species. The evolutionary history of a set of related species through the repeated occurrence of this speciation process can be represented as a tree-like structure, called a phylogenetic tree or a species tree. Since duplicated genes in a single species also independently accumulate point mutations, insertions and deletions, they drift apart in composition in the same way as genes in two related species. The divergence of all the genes descended from a single gene in an ancestral species can also be represented as a tree, a gene tree that takes into account both speciation and duplication events. In order to reconstruct the evolutionary history from the study of extant species, we use sets of similar genes, with relatively high degree of DNA similarity and usually with some functional resemblance, that appear to have been derived from a common ancestor. The degree of similarity among different instances of the “same gene” in different species can be used to explore their evolutionary history via the reconstruction of gene family histories, namely gene trees. Orthology refers specifically to the relationship between two genes that arose by a speciation event, recent or remote, rather than duplication. Comparing orthologous genes is essential to the correct reconstruction of species trees, so that detecting and identifying orthologous genes is an important problem, and a longstanding challenge, in comparative and evolutionary genomics as well as phylogenetics. A variety of orthology detection methods have been devised in recent years. Although many of these methods are dependent on generating gene and/or species trees, it has been shown that orthology can be estimated at acceptable levels of accuracy without having to infer gene trees and/or reconciling gene trees with species trees. Therefore, there is good reason to look at the connection of trees and orthology from a different angle: How much information about the gene tree, the species tree, and their reconciliation is already contained in the orthology relation among genes? Intriguingly, a solution to the first part of this question has already been given by Boecker and Dress [Boecker and Dress, 1998] in a different context. In particular, they completely characterized certain maps which they called symbolic ultrametrics. Semple and Steel [Semple and Steel, 2003] then presented an algorithm that can be used to reconstruct a phylogenetic tree from any given symbolic ultrametric. In this thesis we investigate a new characterization of orthology relations, based on symbolic ultrametrics, for recovering the gene tree. According to Fitch’s definition [Fitch, 2000], two genes are (co-)orthologous if their last common ancestor in the gene tree represents a speciation event. On the other hand, when their last common ancestor is a duplication event, the genes are paralogs. The orthology relation on a set of genes is therefore determined by the gene tree and an “event labeling” that identifies each interior vertex of that tree as either a duplication or a speciation event.
In the context of analyzing orthology data, the problem of reconciling event-labeled gene trees with a species tree appears as a variant of the reconciliation problem in which gene trees have no labels on their internal vertices. When reconciling a gene tree with a species tree, it can be assumed that the species tree is correct or, in the case of an unknown species tree, it can be inferred. It is therefore crucial to know for a given gene tree whether a species tree even exists. In this thesis we characterize event-labelled gene trees for which a species tree exists, and species trees to which event-labelled gene trees can be mapped. Reconciliation methods are not always the best option for detecting orthology. A fundamental problem is that, aside from multicellular eukaryotes, evolution does not seem to have conformed to the descent-with-modification model that gives rise to tree-like phylogenies. Examples include many cases of prokaryotes and viruses whose evolution involved horizontal gene transfer. To treat the problem of distinguishing orthology and paralogy within a more general framework, graph-based methods have been proposed to detect and differentiate among evolutionary relationships of genes in those organisms. In this work we introduce a measure of orthology that can be used to test graph-based methods and reconciliation methods that detect orthology. Using these results, a new algorithm, BOTTOM-UP, is devised that determines whether a map from the set of vertices of a tree to a set of events is a symbolic ultrametric. Additionally, a simulation environment is presented that is designed to generate large gene families with complex duplication histories, on which reconstruction algorithms can be tested and software tools can be benchmarked.
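Fitch's rule, which underlies the orthology relation used here, can be made concrete with a small Python sketch: the event label at the last common ancestor of two genes decides whether they are orthologs or paralogs. The toy tree below is invented for illustration and is not the BOTTOM-UP algorithm itself.

```python
# Orthology vs. paralogy from an event-labeled gene tree (Fitch's rule).
from itertools import combinations

# Each internal node: (event, children); leaves are gene names (strings).
gene_tree = ("speciation", [
    ("duplication", ["a1", "a2"]),   # a1, a2: duplicates within species A
    "b1",                            # b1: gene in species B
])

def leaves(node):
    return [node] if isinstance(node, str) else [l for c in node[1] for l in leaves(c)]

def lca_event(node, x, y):
    """Return the event label at the last common ancestor of leaves x and y."""
    for child in node[1]:
        child_leaves = leaves(child)
        if x in child_leaves and y in child_leaves:
            return lca_event(child, x, y)      # both genes lie below the same child
    return node[0]                              # the genes split here: this is the LCA

for x, y in combinations(leaves(gene_tree), 2):
    relation = "ortholog" if lca_event(gene_tree, x, y) == "speciation" else "paralog"
    print(x, y, relation)    # a1/a2 -> paralog; a1/b1 and a2/b1 -> ortholog
```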
76

Efficient Extraction and Query Benchmarking of Wikipedia Data

Morsey, Mohamed 12 April 2013 (has links)
Knowledge bases are playing an increasingly important role for integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost-intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia has had, and continues to have, a great effect on the Web of Data and became a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches. However, the DBpedia release process is heavyweight and the releases are sometimes based on data that is several months old. Hence, a strategy to keep DBpedia always in synchronization with Wikipedia is highly desirable. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it on the fly to obtain RDF data, updating the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added and deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changeset publication. Knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their respective data. Furthermore, triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission-critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and thus settled on measuring performance against a relational database which had been converted to RDF, using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful for comparing existing triplestores and provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is far less homogeneous than suggested by previous benchmarks. Finally, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data.
This task includes several subtasks, and in this thesis we address two of these major subtasks, namely fact validation and provenance, and data quality. The subtask of fact validation and provenance aims at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming, as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The subtask of data quality maintenance, on the other hand, aims at evaluating and continuously improving the quality of the data in knowledge bases. We present a methodology for assessing the quality of knowledge bases’ data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. The second phase of the manual process comprises the evaluation of a large number of individual resources according to the quality problem taxonomy via crowdsourcing. This process is accompanied by a tool with which a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
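For readers unfamiliar with SPARQL, the following Python sketch issues a simple query against the public DBpedia endpoint using the SPARQLWrapper package; it is only an illustration of the kind of query a triplestore answers, not one of the benchmark queries mined from the DBpedia query logs.

```python
# Illustrative query against the public DBpedia SPARQL endpoint; requires the
# SPARQLWrapper package (pip install sparqlwrapper) and network access.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?person ?birthPlace WHERE {
        ?person a dbo:Scientist ;
                dbo:birthPlace ?birthPlace .
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["person"]["value"], binding["birthPlace"]["value"])
```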
77

Amenable groups and a geometric view on unitarisability

Schlicht, Peter 29 January 2014 (has links)
We investigate unitarisability of groups by looking at induced actions on the cone of positive operators.
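For context, a standard formulation of the notion at stake is sketched below in LaTeX; the notation, and possibly the precise setting, may differ from the thesis.

```latex
% Standard definition, stated for context only.
% A group $G$ is unitarisable if for every uniformly bounded representation
% $\pi \colon G \to B(H)$ on a Hilbert space $H$ there exists an invertible
% $S \in B(H)$ such that
\[
  g \;\longmapsto\; S\,\pi(g)\,S^{-1}
\]
% is a unitary representation. By Dixmier's theorem every amenable group is
% unitarisable; whether the converse holds is a long-standing open question.
```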
78

Ein matrizielles finites Momentenproblem vom Stieltjes-Typ

Makarevich, Tatsiana 13 April 2014 (has links)
This thesis deals with finite matricial moment problems of Stieltjes type and, using the method of the fundamental matrix inequalities, describes the solution set by means of linear fractional transformations.
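A sketch of the problem class in question, in commonly used notation that is not taken from the thesis: given finitely many matrix moments, one seeks all nonnegative Hermitian measures on a half-line that reproduce them.

```latex
% Illustrative statement (notation not taken from the thesis): given matrices
% $s_0,\dots,s_m \in \mathbb{C}^{q\times q}$, describe all nonnegative Hermitian
% $q\times q$ measures $\sigma$ on $[\alpha,\infty)$ satisfying
\[
  \int_{[\alpha,\infty)} t^{\,j}\, \sigma(\mathrm{d}t) \;=\; s_j ,
  \qquad j = 0, 1, \dots, m .
\]
% The solution set is then parametrised by linear fractional transformations
% derived from the fundamental matrix inequalities.
```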
79

Starkregenereignisse von 1961 bis 2015: Analyse von Starkregenereignissen von 1961 bis 2015 für den Freistaat Sachsen

Bernhofer, Christian, Schaller, Andrea, Pluntke, Thomas 16 October 2017 (has links)
The heavy-rainfall analysis for Saxony 1961-2015 was carried out on the basis of 1 km gridded data of daily precipitation totals. Heavy-rainfall events were defined here by means of the local 90th and 95th percentiles in the reference period; that is, events were included whose rainfall amount belonged to the largest 10% or 5% of locally occurring amounts in the period 1961-1990. On an annual basis, both the frequency of occurrence and the mean intensity of heavy-rainfall events increased in the period 1991-2015 compared with 1961-1990, with the frequency of occurrence showing the stronger signal. These increases are driven largely by the strong increases in the summer months. The analysis provided clear indications of an intensification of convective heavy rainfall. The publication is aimed at regional stakeholders as well as planning offices, educational institutions and companies.
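The percentile-based event definition can be sketched in Python/NumPy as follows; the gridded precipitation data are synthetic, and the 1 mm wet-day threshold and grid size are assumptions not stated in the abstract.

```python
# Sketch of percentile-based heavy-rain thresholds on a daily precipitation grid
# of shape (n_days, ny, nx); synthetic data, illustrative thresholds.
import numpy as np

rng = np.random.default_rng(2)
p_ref = rng.gamma(shape=0.6, scale=5.0, size=(10950, 20, 20))   # ~1961-1990, mm/day
p_new = rng.gamma(shape=0.6, scale=5.5, size=(9125, 20, 20))    # ~1991-2015, mm/day

wet = np.where(p_ref >= 1.0, p_ref, np.nan)        # consider wet days only (assumed 1 mm)
q90 = np.nanpercentile(wet, 90, axis=0)            # local 90th-percentile threshold per cell
q95 = np.nanpercentile(wet, 95, axis=0)            # local 95th-percentile threshold per cell

events_ref = (p_ref >= q90).sum(axis=0) / 30.0     # events per year, reference period
events_new = (p_new >= q90).sum(axis=0) / 25.0     # events per year, later period
print("mean change in event frequency per cell:", (events_new - events_ref).mean())
```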
80

Modellierung abgesetzter Niederschläge: Entwicklung und Anwendung eines Verfahrens zur Berücksichtigung abgesetzter Niederschläge bei der Korrektur von Niederschlagsmessungen

Bernhofer, Christian, Körner, Philipp, Schwarze, Robert 30 October 2017 (has links)
Using a newly developed method, 1 km gridded data of daily and monthly fog precipitation from 1967 to 2014 were generated for Saxony. In addition to the wind-induced measurement error, a further loss affecting precipitation measurements can thus be compensated for in water-balance-relevant investigations. The publication is aimed at regional stakeholders, planning offices, educational institutions and companies.
