411

Über kurz oder lang. Ein Schlichtungsversuch zur Debatte über Micro- und Macrolearning

Lorenz, Anja January 2010 (has links)
The advantages of microlearning over lengthy (online) training courses are obvious: it produces quick learning successes, interrupts the daily routine only briefly, and matches the culture of the "Net Generation". In a blog post, Gabi Reinmann [Rei10a] openly opposed these arguments and argued for the necessity of long learning phases for a comprehensive education. Yet the two learning concepts do not contradict each other; rather, they can and must complement one another. The learning materials they require should likewise not be created in isolation from each other: with a suitable structuring, components of more comprehensive training courses can be reused for microlearning units.
412

OpenStreetMap-Karten selbst gebaut

Pönisch, Jens 07 May 2012 (has links)
The talk explains how maps can be generated from the data of the OpenStreetMap project. Besides the assembly of simple maps from precomputed tiles, the main part explains how the vector data can be used for drawing. The talk closes with the rendering of elevation data.
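The tile-assembly step described in the talk builds on the standard slippy-map scheme, in which each zoom level z partitions the world into 2^z by 2^z tiles. A minimal sketch of the coordinate conversion (the sample coordinates are illustrative, not taken from the talk):

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """WGS84 coordinates -> slippy-map tile indices (x, y) at the
    given zoom level, following the OSM tile-naming convention."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile containing Chemnitz (approximate coordinates) at zoom 12:
print(deg2tile(50.83, 12.92, 12))
```

Assembling a map then amounts to fetching the tiles in a rectangle of such indices and placing them on a grid.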
413

Understanding Inconsistency -- A Contribution to the Field of Non-monotonic Reasoning

Ulbricht, Markus 24 July 2019 (has links)
Conflicting information in an agent's knowledge base may lead to a semantical defect, that is, a situation where it is impossible to draw any plausible conclusion. Finding out the reasons for the observed inconsistency and restoring consistency in a certain minimal way are frequently occurring issues in the research area of knowledge representation and reasoning. In a seminal paper, Raymond Reiter proved a duality between maximal consistent subsets of a propositional knowledge base and minimal hitting sets of the set of minimal conflicts -- the famous hitting set duality. We extend Reiter's result to arbitrary non-monotonic logics. To this end, we develop a refined notion of inconsistency, called strong inconsistency. We show that minimal strongly inconsistent subsets play a role similar to that of minimal inconsistent subsets in propositional logic. In particular, the duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics if the stronger notion of inconsistency is used. We cover various notions of repairs and characterize them using analogous hitting set dualities. Our analysis also includes an investigation of structural properties of knowledge bases with respect to our notions. Minimal inconsistent subsets of knowledge bases in monotonic logics play an important role when investigating the reasons for conflicts and trying to handle them, but also for inconsistency measurement. Our notion of strong inconsistency thus allows us to extend existing results to non-monotonic logics. While measuring inconsistency in propositional logic has been investigated for some time now, taking non-monotonicity into account poses new challenges. In order to tackle them, we focus on the structure of minimal strongly inconsistent subsets of a knowledge base.
We propose measures based on this notion and investigate their behavior in a non-monotonic setting by revisiting existing rationality postulates, and analyzing the compliance of the proposed measures with these postulates. We provide a series of first results in the context of inconsistency in abstract argumentation theory regarding the two most important reasoning modes, namely credulous as well as skeptical acceptance. Our analysis includes the following problems regarding minimal repairs: existence, verification, computation of one and characterization of all solutions. The latter will be tackled with our previously obtained duality results. Finally, we investigate the complexity of various related reasoning problems and compare our results to existing ones for monotonic logics.
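The hitting set duality at the core of the thesis can be illustrated by brute force in a toy monotonic setting; the literal-based consistency oracle below is a stand-in for illustration, not the thesis's formal framework:

```python
from itertools import combinations

def consistent(subset):
    # Toy oracle: a set of integer literals is inconsistent iff it
    # contains some literal together with its negation.
    return not any(-l in subset for l in subset)

def minimal_inconsistent_subsets(kb):
    mis = []
    for r in range(1, len(kb) + 1):
        for s in combinations(kb, r):
            fs = frozenset(s)
            if not consistent(fs) and not any(m < fs for m in mis):
                mis.append(fs)
    return mis

def maximal_consistent_subsets(kb):
    subs = [frozenset(s) for r in range(len(kb) + 1)
            for s in combinations(kb, r) if consistent(frozenset(s))]
    return [s for s in subs if not any(s < t for t in subs)]

kb = [1, -1, 2, -2]
mis = minimal_inconsistent_subsets(kb)
mcs = maximal_consistent_subsets(kb)

# Reiter's duality: the complement of each maximal consistent subset
# hits every minimal inconsistent subset.
for m in mcs:
    assert all((frozenset(kb) - m) & s for s in mis)
```

In the non-monotonic case the thesis replaces `consistent` subsets by subsets that are not strongly inconsistent, which restores the same duality.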
414

Implementing Bayesian Inference with Neural Networks

Sokoloski, Sacha 26 July 2019 (has links)
Embodied agents, be they animals or robots, acquire information about the world through their senses. Embodied agents, however, do not simply lose this information once it passes by, but rather process and store it for future use. The most general theory of how an agent can combine stored knowledge with new observations is Bayesian inference. In this dissertation I present a theory of how embodied agents can learn to implement Bayesian inference with neural networks, by which I mean both artificial and biological neural networks; in my dissertation I address both kinds. On one hand, I develop theory for implementing Bayesian inference in deep generative models, and I show how to train multilayer perceptrons to compute approximate predictions for Bayesian filtering. On the other hand, I show that several models in computational neuroscience are special cases of the general theory that I develop in this dissertation, and I use this theory to model and explain several phenomena in neuroscience. The key contributions of this dissertation can be summarized as follows: - I develop a class of graphical models called nth-order harmoniums. An nth-order harmonium is an n-tuple of random variables, where the conditional distribution of each variable given all the others is always an element of the same exponential family. I show that harmoniums have a recursive structure which allows them to be analyzed at coarser and finer levels of detail. - I define a class of harmoniums called rectified harmoniums, which are constrained to have priors that are conjugate to their posteriors. As a consequence of this, rectified harmoniums afford efficient sampling and learning. - I develop deep harmoniums, which are harmoniums that can be represented by hierarchical, undirected graphs. I develop the theory of rectification for deep harmoniums, together with a novel algorithm for training deep generative models.
- I show how to implement a variety of optimal and near-optimal Bayes filters by combining the solution to Bayes' rule provided by rectified harmoniums, with predictions computed by a recurrent neural network. I then show how to train a neural network to implement Bayesian filtering when the transition and emission distributions are unknown. - I show how some well-established models of neural activity are special cases of the theory I present in this dissertation, and how these models can be generalized with the theory of rectification. - I show how the theory that I present can model several neural phenomena including proprioception and gain-field modulation of tuning curves. - I introduce a library for the programming language Haskell, within which I have implemented all the simulations presented in this dissertation. This library uses concepts from Riemannian geometry to provide a rigorous and efficient environment for implementing complex numerical simulations. I also use the results presented in this dissertation to argue for the fundamental role of neural computation in embodied cognition. I argue, in other words, that before we will be able to build truly intelligent robots, we will need to truly understand biological brains.
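As a rough illustration of the harmonium structure: a 2nd-order harmonium with Bernoulli units on both layers is a restricted Boltzmann machine, whose factorized conditionals support block Gibbs sampling. A minimal sketch with hypothetical dimensions (this is background on harmoniums generally, not the dissertation's rectified training algorithm):

```python
import numpy as np

# In a 2nd-order Bernoulli harmonium (a restricted Boltzmann machine),
# each conditional lies in the same exponential family: a product of
# Bernoullis. Block Gibbs sampling alternates between the two
# factorized conditionals.
rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gibbs_step(x, W, b, c, rng):
    """Sample hidden given visible, then visible given hidden."""
    pz = sigmoid(c + x @ W)                 # p(z_j = 1 | x)
    z = (rng.random(pz.shape) < pz).astype(float)
    px = sigmoid(b + z @ W.T)               # p(x_i = 1 | z)
    return (rng.random(px.shape) < px).astype(float), z

W = rng.normal(0.0, 0.1, size=(6, 4))       # visible-by-hidden weights
b, c = np.zeros(6), np.zeros(4)             # biases
x = rng.integers(0, 2, size=6).astype(float)
for _ in range(100):
    x, z = gibbs_step(x, W, b, c, rng)
print(x.shape, z.shape)
```

Rectification, as developed in the dissertation, constrains such models so that the prior over the hidden layer is conjugate to the posterior, which makes this kind of sampling exact rather than approximate.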
415

Statistical and Computational Models for Whole Word Morphology

Janicki, Maciej 09 September 2019 (has links)
The goal of this thesis is to formulate an approach to the machine learning of language morphology in which the latter is modeled as string transformations on whole words rather than as a segmentation of words into smaller structural units. The contribution consists of two main parts. First, a computational model is formulated in which morphological rules are defined as functions on strings. Such functions translate readily into finite-state transducers, which provides a solid algorithmic foundation for the approach. Second, a statistical model for graphs of word derivations is introduced. Inference in this model is carried out with the Monte Carlo Expectation Maximization algorithm, and expectations over graphs are approximated with a Metropolis-Hastings sampler. The model is evaluated on a series of practical tasks: clustering of inflected forms, learning of lemmatization, part-of-speech prediction for unknown words, and generation of new words.
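The idea of a morphological rule as a function on whole words can be sketched as a pattern-based string rewrite; the participle rule below is a simplified, hypothetical example for illustration, not a rule learned by the system:

```python
import re

# A morphological rule as a function on whole words: rewrite a fixed
# affix pattern and copy the rest of the word unchanged. This maps
# directly onto a finite-state transducer (an identity loop plus
# affix arcs). Hypothetical toy rule: German weak-verb infinitive
# "X-en" -> past participle "ge-X-t" (ignoring real exceptions).
def participle_rule(word):
    m = re.fullmatch(r"(.+)en", word)
    if m is None:
        return None                    # rule not applicable
    return "ge" + m.group(1) + "t"

print(participle_rule("machen"))       # -> gemacht
print(participle_rule("Haus"))         # -> None
```

Because the rule never inspects an internal segmentation of the word, it fits the whole-word view of morphology taken in the thesis.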
416

From Best Match Graphs to Gene Trees: A new perspective on graph-based orthology inference

Geiß, Manuela 11 November 2019 (has links)
Orthology detection is an important task within the context of genome annotation, gene nomenclature, and the understanding of gene evolution. With the rapidly accelerating pace at which new genomes become available, highly efficient methods are urgently required. As demonstrated in a large body of literature, reciprocal best match (RBH) methods are reasonably accurate and scale to large data sets. Nevertheless, they are far from perfect and prone to both false positive and false negative orthology calls. This work gives a complete characterization of best match as well as reciprocal best match graphs (BMGs and RBMGs), which arise in the first step of RBH methods. While BMGs as well as RBMGs with at most three species can be recognized in polynomial time, RBMGs with more than three species have a surprisingly complicated structure, and it remains an open problem whether there exist polynomial time algorithms for the recognition of these RBMGs. In contrast to RBMGs, for which many (often mutually inconsistent) least resolved trees may exist, there is a unique least resolved tree for BMGs. This tree is a homeomorphic image of the true, but typically unknown, gene tree. Furthermore, in the absence of horizontal gene transfer (HGT), the reciprocal best match graph contains the orthology relation, suggesting that RBMGs can only contain false positive but no false negative orthology assignments. Simulation scenarios reveal that so-called good quartets, a certain graph pattern on four vertices in BMGs, can be used to successfully identify almost all false positive edges in RBMGs. Together with the existence of a unique least resolved tree, this suggests that BMGs contain a lot of valuable information for orthology inference that would be lost by exclusively considering RBMGs. These insights motivate the inclusion of additional BMG and RBMG editing steps in orthology detection pipelines, based on the presented theoretical results.
Moreover, a workflow is introduced to infer best matches from sequence data by retrieving quartet structures from local information instead of reconstructing the whole gene tree. A crucial prerequisite for this pipeline is the choice of suitable outgroups. However, the empirical simulations also reveal that HGT events cause strong deviations of the orthology relation from the RBMG, as well as good quartets that are no longer associated with false positive orthologs, suggesting the need for further investigation of the xenology relation. The directed Fitch xenology relation is characterized in terms of forbidden 3-vertex subgraphs, and a polynomial time algorithm for its recognition and for the reconstruction of a unique least resolved tree is presented. The undirected Fitch relation, in contrast, is shown to be a complete multipartite graph, which does not provide any interesting phylogenetic information. In summary, the results of this work can be used to develop new methods for inferring orthology, paralogy, and HGT. They promise major improvements in the accuracy and the computational performance of RBH-based approaches.
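The best-match construction underlying BMGs and RBMGs can be sketched with a toy distance matrix; the genes, species, and distances below are invented for illustration:

```python
# Hypothetical genes, their species, and evolutionary distances:
genes = ["a1", "a2", "b1", "c1"]
species = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}
dist = {frozenset(p): d for p, d in [
    (("a1", "b1"), 0.3), (("a2", "b1"), 0.5), (("a1", "c1"), 0.4),
    (("a2", "c1"), 0.4), (("b1", "c1"), 0.2)]}

def best_matches(g):
    """All genes from every other species at minimal distance to g."""
    bm = set()
    for s in set(species.values()) - {species[g]}:
        cand = [h for h in genes if species[h] == s]
        d_min = min(dist[frozenset((g, h))] for h in cand)
        bm |= {h for h in cand if dist[frozenset((g, h))] == d_min}
    return bm

# The RBMG keeps only reciprocal best matches:
rbmg = {frozenset((g, h)) for g in genes
        for h in best_matches(g) if g in best_matches(h)}
print(sorted(sorted(e) for e in rbmg))
```

Note that the best-match relation is not symmetric (here a2's best match in species B is b1, but not vice versa), which is exactly why the BMG retains information that the RBMG discards.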
417

Comparative Genomics in Distant Taxa: Generating Total Orders of Digraphs

Gärtner, Fabian 11 March 2020 (has links)
No description available.
418

Towards Dynamic Programming on Generalized Data Structures: and Applications of Dynamic Programming in Bioinformatics

Berkemer, Sarah Juliane 11 March 2020 (has links)
Dynamic programming (DP) is a method for solving optimization problems. The problem is decomposed into overlapping subproblems, an optimal solution is computed for each subproblem, and these partial solutions are then combined into an overall solution. Partial solutions are stored in a table so that each subproblem has to be solved only once. In this way a search space of exponential size can be explored, and an optimal solution found, in polynomial time. Dynamic programming was developed by Bellman in 1952, and one of its first applications was the detection of typing errors in programming. DP algorithms are applied widely and in many forms in bioinformatics, for example in the comparison of gene sequences, known as sequence alignment, or in the prediction of molecular structures. The amount of data, and with it the scale of its analysis, grows steadily, which makes new and more complex data structures increasingly important. One goal is therefore to develop DP algorithms that can be applied to data structures more complex than strings. Through the principle of algebraic dynamic programming (ADP), DP algorithms can be decomposed into smaller components that can then be developed further and modified independently of one another. The thesis is divided into two parts. The first part contains the theoretical work on the development of dynamic programming algorithms; principles and definitions of dynamic programming are presented first (Chapter 2) to ensure a better understanding of the subsequent chapters. The second part of the thesis presents various bioinformatics applications of DP algorithms to biological data. An initial chapter (Chapter 5) introduces fundamentals of biological data and algorithms that are then used in the following chapters.
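A classic instance of the tabular DP scheme described above is sequence comparison by edit distance, a minimal sketch:

```python
def edit_distance(a, b):
    """Tabular DP: D[i][j] is the minimal edit cost of a[:i] vs b[:j].
    Each overlapping subproblem is solved exactly once and stored."""
    D = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        D[i][0] = i                    # delete all of a[:i]
    for j in range(len(b) + 1):
        D[0][j] = j                    # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = min(D[i-1][j] + 1,                       # deletion
                          D[i][j-1] + 1,                       # insertion
                          D[i-1][j-1] + (a[i-1] != b[j-1]))    # (mis)match
    return D[len(a)][len(b)]

print(edit_distance("kitten", "sitting"))  # -> 3
```

ADP, as used in the thesis, separates the recursion scheme of such an algorithm (its grammar) from the scoring (its algebra), so that either can be swapped independently.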
419

Studying Evolutionary Change: Transdisciplinary Advances in Understanding and Measuring Evolution

Retzlaff, Nancy 20 April 2020 (has links)
Evolutionary processes can be found in almost any historical, i.e. evolving, system that copies from its past with errors. Well-studied examples originate not only in evolutionary biology but also in historical linguistics. Yet an approach that would bind together studies of such evolving systems is still elusive. This thesis is an attempt at narrowing this gap to some extent. An evolving system can be described using characters that identify its changing features. While the problem of a proper choice of characters is beyond the scope of this thesis and remains in the hands of experts, we concern ourselves with some theoretical as well as data-driven approaches. Having a well-chosen set of characters describing a system of different entities, such as homologous genes, i.e. genes of the same origin in different species, we can build a phylogenetic tree. Consider the special case of gene clusters containing paralogous genes, i.e. genes of the same origin within a species, usually located close together, such as the well-known HOX cluster. These are formed by stepwise duplication of their members, often involving unequal crossing over that forms hybrid genes. Gene conversion and possibly other mechanisms of concerted evolution further obfuscate phylogenetic relationships. Hence, it is very difficult or even impossible to disentangle the detailed history of gene duplications in gene clusters. Expanding gene clusters through unequal crossing over, as proposed by Walter Gehring, leads to distinctive patterns of genetic distances. We show that this special class of distances still helps in extracting phylogenetic information from the data. Disregarding genome rearrangements, we find that the shortest Hamiltonian path then coincides with the ordering of paralogous genes in a cluster.
This observation can be used to detect ancient genomic rearrangements of gene clusters and to distinguish gene clusters whose evolution was dominated by unequal crossing over within genes from those that expanded through other mechanisms. While the evolution of DNA or protein sequences is well studied and can be formally described, we find that this does not hold for other systems, such as language evolution. This is due to a lack of detectable mechanisms that drive the evolutionary processes in other fields. Hence, it is hard to quantify distances between entities, e.g. languages, and therefore between the characters describing them. Starting out with distortions of distances, we first see that poor choices of the distance measure can lead to incorrect phylogenies. Given that phylogenetic inference requires additive metrics, we can infer the correct phylogeny from a distance matrix D if there is a monotonic, subadditive function ζ such that ζ^−1(D) is additive. We compute the metric-preserving transformation ζ as the solution of an optimization problem. This result shows that the problem of phylogeny reconstruction is well defined even if a detailed mechanistic model of the evolutionary process is missing. Yet, this does not hinder studies of language evolution using automated tools. As the amount of available large digital corpora increased, so did the possibilities to study them automatically. The obvious parallels between historical linguistics and phylogenetics have led to many studies adapting bioinformatics tools to linguistic needs. Here, we use jAlign to calculate bigram alignments, i.e. alignments produced by an algorithm that takes the adjacency of letters into account. Its performance is tested in different cognate recognition tasks. With pairwise alignments, one major obstacle is the systematic errors they make, such as the underestimation and misplacement of gaps.
Applying multiple sequence alignments instead of a pairwise algorithm implicitly includes more evolutionary information and can thus overcome the problem of correct gap placement. Multiple alignments can be seen as a generalization of the string-to-string edit problem to more than two strings. With the steady increase in computational power, exact dynamic programming solutions have become feasible in practice also for 3- and 4-way alignments. For the pairwise (2-way) case, there is a clear distinction between local and global alignments. As more sequences are considered, this distinction, which can in fact be made independently for both ends of each sequence, gives rise to a rich set of partially local alignment problems. So far these have remained largely unexplored. Thus, a general formal framework that gives rise to a classification of partially local alignment problems is introduced. It leads to a generic scheme that guides the principled design of exact dynamic programming solutions for particular partially local alignment problems.
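The additivity requirement mentioned above is the classical four-point condition; the sketch below checks it on a hypothetical tree metric and shows how a monotonic distortion (here, simple squaring) destroys additivity:

```python
from itertools import combinations

# A distance matrix is additive (realizable by an edge-weighted tree)
# iff, for every quadruple of taxa, the two largest of the three
# pairwise sums coincide. The distances below are hypothetical,
# read off a small tree with unequal branch lengths.
dist = {frozenset(p): d for p, d in [
    (("a", "b"), 3), (("a", "c"), 3), (("a", "d"), 5),
    (("b", "c"), 4), (("b", "d"), 6), (("c", "d"), 4)]}

def is_additive(dist, taxa):
    for x, y, u, v in combinations(taxa, 4):
        s = sorted([dist[frozenset((x, y))] + dist[frozenset((u, v))],
                    dist[frozenset((x, u))] + dist[frozenset((y, v))],
                    dist[frozenset((x, v))] + dist[frozenset((y, u))]])
        if s[1] != s[2]:
            return False
    return True

print(is_additive(dist, "abcd"))                         # -> True
squared = {k: v * v for k, v in dist.items()}            # distorted
print(is_additive(squared, "abcd"))                      # -> False
```

Recovering the correct phylogeny from distorted data, as in the thesis, amounts to finding a transformation ζ that maps the observed matrix back into the additive cone.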
420

Neural Networks for Collaborative Filtering

Feigl, Josef 10 July 2020 (has links)
Recommender systems are an integral part of almost all modern e-commerce companies. They contribute significantly to the overall customer satisfaction by helping the user discover new and relevant items, which consequently leads to higher sales and stronger customer retention. It is, therefore, not surprising that large e-commerce shops like Amazon or streaming platforms like Netflix and Spotify even use multiple recommender systems to further increase user engagement. Finding the most relevant items for each user is a difficult task that is critically dependent on the available user feedback information. However, most users typically interact with products only through noisy implicit feedback, such as clicks or purchases, rather than providing explicit information about their preferences, such as product ratings. This usually makes large amounts of behavioural user data necessary to infer accurate user preferences. One popular approach to make the most use of both forms of feedback is called collaborative filtering. Here, the main idea is to compare individual user behaviour with the behaviour of all known users. Although there are many different collaborative filtering techniques, matrix factorization models are among the most successful ones. In contrast, while neural networks are nowadays the state-of-the-art method for tasks such as image recognition or natural language processing, they are still not very popular for collaborative filtering tasks. Therefore, the main focus of this thesis is the derivation of multiple wide neural network architectures to mimic and extend matrix factorization models for various collaborative filtering problems and to gain insights into the connection between these models. The basics of the proposed architecture are wide and shallow feedforward neural networks, which will be established for rating prediction tasks on explicit feedback datasets. 
These networks consist of large input and output layers, which allow them to capture user and item representations similar to matrix factorization models. By deriving all weight updates and comparing the structure of both models, it is proven that a simplified version of the proposed network can mimic common matrix factorization models: a result that, as far as we know, has not been shown in this form before. Additionally, various extensions are thoroughly evaluated. The new findings of this evaluation can also easily be transferred to other matrix factorization models. This neural network architecture can be extended for personalized ranking tasks on implicit feedback datasets. For these problems, it is necessary to rank products according to individual preferences using only the provided implicit feedback. One of the most successful and influential approaches for personalized ranking tasks is Bayesian Personalized Ranking, which attempts to learn pairwise item rankings and can also be used in combination with matrix factorization models. It is shown how the introduction of an additional ranking layer forces the network to learn pairwise item rankings. In addition, similarities between this novel neural network architecture and a matrix factorization model trained with Bayesian Personalized Ranking are proven. To the best of our knowledge, this is the first time that these connections have been shown. The state-of-the-art performance of this network is demonstrated in a detailed evaluation. The most comprehensive feedback datasets consist of a mixture of explicit as well as implicit feedback information. Here, the goal is to predict if a user will like an item, similar to rating prediction tasks, even if this user has never given any explicit feedback at all: a problem that has not yet been covered in the collaborative filtering literature.
The network to solve this task is composed of two networks: one for the explicit and one for the implicit feedback. Additional item features are learned from the implicit feedback, capturing all information necessary to rank items. These features are then used to improve the explicit feedback prediction. Both parts of the combined network have different optimization goals, are trained simultaneously, and therefore influence each other. A detailed evaluation shows that this approach improves the network's overall predictive performance, especially on ranking metrics.
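The correspondence between matrix factorization and a wide, shallow network can be sketched with a toy SGD implementation; data, dimensions, and hyperparameters below are illustrative:

```python
import random

random.seed(0)
K = 4                                  # latent dimension
# Hypothetical (user, item, rating) triples:
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0)]
U = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(2)]  # users
V = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(3)]  # items

def predict(u, i):
    # The dot product that a wide, shallow network with one-hot user
    # and item inputs computes in its single hidden layer.
    return sum(U[u][k] * V[i][k] for k in range(K))

lr, reg = 0.05, 0.01
for _ in range(500):                   # plain SGD on squared error
    for u, i, r in ratings:
        e = r - predict(u, i)
        for k in range(K):
            U[u][k], V[i][k] = (U[u][k] + lr * (e * V[i][k] - reg * U[u][k]),
                                V[i][k] + lr * (e * U[u][k] - reg * V[i][k]))

print(round(predict(0, 0), 2), round(predict(0, 1), 2))
```

The thesis's contribution is to make this equivalence precise for full network architectures, including the ranking-layer extension for Bayesian Personalized Ranking; the sketch above only shows the shared embedding-dot-product core.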
