About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want it to be added, details can be found on the NDLTD website.
11

The Infimum Problem as a Generalization of the Inclusion Problem for Automata

Borgwardt, Stefan 03 January 2024
This thesis is concerned with automata over infinite trees. Such an automaton is given a labeled infinite tree and accepts or rejects this tree based on its labels. Weighted automata generalize these automata with binary decisions: they do not just decide 'yes' or 'no', but rather compute a value from a given algebraic structure, e.g., a semiring or a lattice. When passing from unweighted to weighted formalisms, many decision problems can be translated accordingly. The purpose of this work is to determine the feasibility of solving the inclusion problem for automata on infinite trees and its generalization to weighted automata, the infimum aggregation problem.
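To make the weighted setting concrete, here is a small self-contained sketch (our own toy construction on finite trees to keep it short; the automaton, states, and weights are invented, not taken from the thesis) of a weighted tree automaton over the min-plus (tropical) semiring: instead of accepting or rejecting, it assigns each tree the minimum weight over its accepting runs.

```python
from itertools import product

# Toy weighted tree automaton over the min-plus (tropical) semiring.
# Trees: ('a', left, right) for a binary symbol, ('c',) for a leaf.
# delta maps (symbol, child_states) -> {state: weight}; a run's weight
# is the sum of its transition weights; the automaton's weight for a
# tree is the minimum over all runs ending in a final state.

INF = float('inf')

delta = {
    ('c', ()): {'q0': 0.0, 'q1': 1.0},
    ('a', ('q0', 'q0')): {'q0': 2.0},
    ('a', ('q0', 'q1')): {'q1': 0.0},
    ('a', ('q1', 'q0')): {'q1': 0.0},
}
final = {'q1'}

def weights(tree):
    """Map each state to the cheapest run weight reaching it on `tree`."""
    sym, children = tree[0], tree[1:]
    child_maps = [weights(c) for c in children]
    result = {}
    for states in product(*[m.keys() for m in child_maps]):
        base = sum(m[q] for m, q in zip(child_maps, states))
        for q, w in delta.get((sym, states), {}).items():
            result[q] = min(result.get(q, INF), base + w)
    return result

def automaton_weight(tree):
    m = weights(tree)
    return min((m[q] for q in final if q in m), default=INF)

print(automaton_weight(('a', ('c',), ('c',))))  # cheapest accepting run: 1.0
```

The inclusion problem asks whether one automaton's language is contained in another's; the infimum aggregation problem analogously asks about the infimum of such weights over all inputs.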
12

Individual and age-related differences in face-cognition

Hildebrandt, Andrea 01 September 2010
Cognitive-experimental and neuropsychological studies have provided strong evidence for the specificity of face cognition. In individual differences research, face tasks are used within a broader variety of tasks, usually with the intention of measuring social skills. Contemporary individual differences research still focuses on the distinction between social-emotional and academic intelligence rather than establishing measurement models with a solid basis in experimental and neuropsychological work. Building upon recent efforts to establish such measurement models, this dissertation aims to extend the available models and assess their robustness across age. Furthermore, it investigates mean age differences at the level of latent factors, critically examines phenomena of dedifferentiation with novel analytic methods, and attempts to provide more evidence on the uniqueness and commonalities of face cognition throughout adulthood. In a first, primarily methodological manuscript, we propose, for the first time in the literature, an implementation of the observation-weighting functions used in nonparametric regression within a structural equation modeling context, which can fruitfully complement the traditionally used multiple-group approaches to investigating factorial dedifferentiation. In the following four manuscripts, we investigated individual and age differences in face cognition. The results show that: 1) face perception, face memory, and the speed of face cognition remain differentiable throughout adulthood; 2) the speed of face cognition is not differentiable from the speed of perceiving emotional expressions in faces and complex objects, such as houses; 3) face perception and face memory are clearly differentiable from abstract cognition throughout adulthood; and 4) a slight dedifferentiation occurs between face and object cognition at the level of accuracy measures. Implications are discussed in the manuscripts and summarized in the epilogue.
13

Optimale Partner offener Systeme

Sürmeli, Jan 05 May 2015
Nowadays, a complex software system usually consists of loosely coupled, interacting components. Such a component is an open system that is developed independently and later composed with other open systems. The composition L+R of two open systems L and R can be faulty: for instance, the components may deadlock (i.e., mutually wait for each other), run into an infinite loop, or require an unbounded amount of memory. If L+R is a correct system, L and R are called partners of each other. Formal methods for modeling, analysis, and synthesis yield a systematic approach to constructing a correct system by composing partners. The costs of executing a given open system L vary with the chosen partner. Therefore, it is desirable to compose L only with partners that bound or even minimize the costs of executing L. A partner R that minimizes the costs of executing L is an optimal partner of L. The goal of this thesis is to develop techniques that guarantee that L is composed only with optimal partners. To this end, we develop formal methods for the modeling, analysis, and synthesis of cost-aware open systems and their optimal partners. We present a formalism to model functional aspects of behavior (i.e., states and transitions) and non-functional aspects (i.e., costs). In this formalism, we define cost boundedness and cost optimality of partners. Building on this, we develop formal methods to decide cost-bounded controllability (i.e., the existence of cost-bounded partners), to synthesize optimal partners, and to finitely represent the set of all optimal partners.
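The notion of a deadlocking composition can be illustrated with a small reachability check. The sketch below is our own invention (the transition-system encoding and all names are assumptions, not the thesis's formalism): it composes two components that synchronise on shared action labels and reports reachable states in which neither side can proceed.

```python
# Hedged sketch: deadlock detection in the synchronous composition of two
# tiny labelled transition systems. The components are invented examples.

def compose_and_find_deadlocks(trans_l, trans_r, init):
    """Explore the composition, synchronising on shared action labels.
    trans_*: dict state -> {action: next_state}.
    Returns reachable joint states with no enabled joint action."""
    seen, stack, deadlocks = set(), [init], []
    while stack:
        sl, sr = stack.pop()
        if (sl, sr) in seen:
            continue
        seen.add((sl, sr))
        shared = set(trans_l.get(sl, {})) & set(trans_r.get(sr, {}))
        if not shared:
            deadlocks.append((sl, sr))      # both components are stuck here
        for a in shared:
            stack.append((trans_l[sl][a], trans_r[sr][a]))
    return deadlocks

L_sys = {'l0': {'a': 'l1'}, 'l1': {'b': 'l0'}}
R_bad = {'r0': {'b': 'r1'}, 'r1': {'a': 'r0'}}   # wrong order: mutual waiting
R_good = {'r0': {'a': 'r1'}, 'r1': {'b': 'r0'}}  # matches L step by step

print(compose_and_find_deadlocks(L_sys, R_bad, ('l0', 'r0')))   # deadlock at start
print(compose_and_find_deadlocks(L_sys, R_good, ('l0', 'r0')))  # no deadlock: a partner
```

In the thesis's terminology, R_good would qualify as a partner of L_sys (the composition is deadlock-free), while R_bad would not; cost optimality then selects among such partners.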
14

A Combinatorial Algorithm for Minimizing the Maximum Laplacian Eigenvalue of Weighted Bipartite Graphs

Helmberg, Christoph, Rocha, Israel, Schwerdtfeger, Uwe 13 November 2015
We give a strongly polynomial time combinatorial algorithm to minimise the largest eigenvalue of the weighted Laplacian of a bipartite graph. This is accomplished by solving the dual graph embedding problem which arises from a semidefinite programming formulation. In particular, the problem for trees can be solved in time cubic in the number of vertices.
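To see the objective concretely, the following pure-Python sketch computes the quantity being minimised, the largest eigenvalue of a weighted graph Laplacian, via plain power iteration. This only illustrates the objective, not the combinatorial algorithm of the paper; the example graph and all helper names are our own.

```python
# Hedged illustration: the largest eigenvalue of a weighted Laplacian,
# computed by power iteration (the Laplacian is symmetric positive
# semidefinite, so power iteration converges to its largest eigenvalue).

def laplacian(n, weighted_edges):
    """Dense Laplacian L = D - W of an undirected weighted graph."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in weighted_edges:
        L[i][i] += w
        L[j][j] += w
        L[i][j] -= w
        L[j][i] -= w
    return L

def max_eigenvalue(L, iters=500):
    """Power iteration with infinity-norm normalisation."""
    n = len(L)
    v = [1.0 + k for k in range(n)]  # generic start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Star K_{1,2} (a bipartite tree) with unit weights: spectrum {0, 1, 3}.
L = laplacian(3, [(0, 1, 1.0), (0, 2, 1.0)])
print(max_eigenvalue(L))  # ~3.0
```

The paper's algorithm chooses the edge weights (subject to constraints) so as to make this eigenvalue as small as possible.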
15

Algebraic decoder specification: coupling formal-language theory and statistical machine translation

Büchse, Matthias 28 January 2015 (has links) (PDF)
The specification of a decoder, i.e., a program that translates sentences from one natural language into another, is an intricate process, driven by the application and lacking a canonical methodology. The practical nature of decoder development inhibits the transfer of knowledge between theory and application, which is unfortunate because many contemporary decoders are in fact related to formal-language theory. This thesis proposes an algebraic framework in which a decoder is specified by an expression built from a fixed set of operations. As it stands, this framework accommodates contemporary syntax-based decoders; it spans two levels of abstraction; and, most importantly, it encourages mutual stimulation between the theory of weighted tree automata and the application.
16

A Formal View on Training of Weighted Tree Automata by Likelihood-Driven State Splitting and Merging

Dietze, Toni 03 June 2019
The use of computers and algorithms to deal with human language, in both spoken and written form, is summarized by the term natural language processing (NLP). Modeling language in a way that is suitable for computers plays an important role in NLP. One idea is to use formalisms from theoretical computer science for that purpose; for example, one can try to find an automaton that captures the valid written sentences of a language. Finding such an automaton from examples is called training. In this work, we also consider the structure of sentences by making use of trees, and we use weighted tree automata (WTAs) to deal with such tree structures. These devices assign weights to trees in order to, for example, distinguish between good and bad structures. The well-known expectation-maximization (EM) algorithm can be used to train the weights of a WTA while its state behavior stays fixed. To adapt the state behavior of a WTA, state splitting (dividing a state into several new states) and state merging (replacing several states by a single new state) can be used. State splitting, state merging, and the EM algorithm were previously combined into the state splitting and merging algorithm, which has been successfully applied in practice. In this work, we formalize this approach in order to prove properties of the algorithm. We also examine a new approach, the count-based state merging algorithm, which relies exclusively on state merging. When dealing with trees, another important tool is binarization: a strategy to encode arbitrary trees as binary trees. For each of three different binarizations we show that WTAs together with the binarization are as powerful as weighted unranked tree automata (WUTAs). We also show that this remains true if only probabilistic WTAs and probabilistic WUTAs are considered.

Table of contents:
How to Read This Thesis
1. Introduction; 1.1. The Contributions and the Structure of This Work
2. Preliminaries; 2.1. Sets, Relations, Functions, Families, and Extrema; 2.2. Algebraic Structures; 2.3. Formal Languages
3. Language Formalisms; 3.1. Context-Free Grammars (CFGs); 3.2. Context-Free Grammars with Latent Annotations (CFG-LAs); 3.3. Weighted Tree Automata (WTAs); 3.4. Equivalences of WCFG-LAs and WTAs
4. Training of WTAs; 4.1. Probability Distributions; 4.2. Maximum Likelihood Estimation; 4.3. Probabilities and WTAs; 4.4. The EM Algorithm for WTAs; 4.5. Inside and Outside Weights; 4.6. Adaption of the Estimation of Corazza and Satta [CS07] to WTAs
5. State Splitting and Merging; 5.1. State Splitting and Merging for Weighted Tree Automata; 5.1.1. Splitting Weights and Probabilities; 5.1.2. Merging Probabilities; 5.2. The State Splitting and Merging Algorithm; 5.2.1. Finding a Good π-Distributor; 5.2.2. Notes About the Berkeley Parser; 5.3. Conclusion and Further Research
6. Count-Based State Merging; 6.1. Preliminaries; 6.2. The Likelihood of the Maximum Likelihood Estimate and Its Behavior While Merging; 6.3. The Count-Based State Merging Algorithm; 6.3.1. Further Adjustments for Practical Implementations; 6.4. Implementation of Count-Based State Merging; 6.5. Experiments with Artificial Automata and Corpora; 6.5.1. The Artificial Automata; 6.5.2. Results; 6.6. Experiments with the Penn Treebank; 6.7. Comparison to the Approach of Carrasco, Oncina, and Calera-Rubio [COC01]; 6.8. Conclusion and Further Research
7. Binarization; 7.1. Preliminaries; 7.2. Relating WSTAs and WUTAs via Binarizations; 7.2.1. Left-Branching Binarization; 7.2.2. Right-Branching Binarization; 7.2.3. Mixed Binarization; 7.3. The Probabilistic Case; 7.3.1. Additional Preliminaries About WSAs; 7.3.2. Constructing an Out-Probabilistic WSA from a Converging WSA; 7.3.3. Binarization and Probabilistic Tree Automata; 7.4. Connection to the Training Methods in Previous Chapters; 7.5. Conclusion and Further Research
A. Proofs for Preliminaries; B. Proofs for Training of WTAs; C. Proofs for State Splitting and Merging; D. Proofs for Count-Based State Merging
Bibliography; List of Algorithms; List of Figures; List of Tables; Index; Table of Variable Names
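One building block named above, maximum likelihood estimation, can be sketched as relative-frequency counting over a tree corpus. The corpus, symbols, and function names below are invented for illustration, and this is a PCFG-style simplification rather than the thesis's WTA formalism.

```python
# Hedged sketch: maximum likelihood (relative-frequency) estimation of
# rule probabilities from a corpus of trees, and the probability a tree
# receives afterwards. Trees: (symbol, child, ...); leaves: (symbol,).

from collections import Counter
from fractions import Fraction

def rules(tree):
    """Yield every rule (symbol -> child symbols) occurring in the tree."""
    sym, children = tree[0], tree[1:]
    yield (sym, tuple(c[0] for c in children))
    for c in children:
        yield from rules(c)

def estimate(corpus):
    """Relative-frequency estimate: count(rule) / count(rules with same lhs)."""
    counts = Counter(r for t in corpus for r in rules(t))
    totals = Counter()
    for (sym, _), c in counts.items():
        totals[sym] += c
    return {r: Fraction(c, totals[r[0]]) for r, c in counts.items()}

def prob(tree, p):
    """Probability of a tree: product of its rule probabilities."""
    result = Fraction(1)
    for r in rules(tree):
        result *= p.get(r, Fraction(0))
    return result

t1 = ('S', ('a',), ('b',))
t2 = ('S', ('a',), ('a',))
p = estimate([t1, t2])
print(prob(t1, p))  # 1/2: the rule S -> a b covers half the S-observations
```

The EM algorithm generalises this counting to the case where the states (here: the symbols on the left-hand sides) are not directly observed.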
18

Characterisation Theorems for Weighted Tree Automaton Models

Dörband, Frederic 15 November 2022
In this thesis, we investigate different theoretical questions concerning weighted automaton models over tree-like input structures. First, we study exact and approximated determinisation; then we turn to Kleene-like and Büchi-like characterisations. We consider multiple weighted automaton models, including weighted tree automata over semirings (Chapters 3 and 4), weighted forest automata over M-monoids (Chapter 5), and rational weighted tree languages with storage (Chapter 6). For an explanation as to why the last class can be considered a weighted automaton model, we refer to page 188 of the thesis. We now summarise the main contributions. In Chapter 3, we focus on the determinisation of weighted tree automata and present our determinisation framework, called M-sequentialisation, which can model different notions of determinisation from the existing literature. We then provide a positive M-sequentialisation result for the case of additively idempotent semirings or finitely M-ambiguous weighted tree automata. Another important contribution of Chapter 3 is Theorem 77, a blueprint theorem that makes it easy to obtain determinisation results for further classes of semirings and weighted tree automata: instead of repeating an entire determinisation construction, Theorem 77 allows us to prove a determinisation result by finding certain finite equivalence relations. This is a potent tool for future research on determinisation. In Chapter 4, we move from exact determinisation to approximated determinisation. We lift the formalisms and the main results of one approach from the literature from the word case to the tree case, which yields an approximated determinisation construction for weighted tree automata over the tropical semiring. We provide a formal mathematical description of the approximated determinisation construction, rather than the algorithmic description found in the related approach from the literature. In Chapter 5, we turn away from determinisation and instead consider Kleene-like and Büchi-like characterisations of weighted recognisability. We introduce weighted forest automata over M-monoids, which generalise both weighted tree automata over M-monoids and weighted forest automata over semirings. We prove that our recognisable weighted forest languages can be decomposed into a finite product of recognisable weighted tree languages. We also prove that the initial algebra semantics and the run semantics for weighted forest automata are equivalent under certain conditions. Lastly, we define rational forest expressions and forest M-expressions and prove that the classes of languages generated by these formalisms coincide with the recognisable weighted forest languages under certain conditions. In Chapter 6, we consider rational weighted tree languages with storage, where storage is introduced by composing rational weighted tree languages without storage with a storage map. It has been proven in the literature that rational weighted tree languages with storage are closed under the rational operations; in Chapter 6, we provide alternative proofs of these closure properties.
In fact, we prove that our way of introducing storage to rational weighted tree languages preserves the closure properties of rational weighted tree languages without storage.

Table of contents:
1 Introduction
2 Preliminaries; 2.1 Languages; 2.2 Weighted Languages; 2.3 Weighted Tree Automata
3 A Unifying Framework for the Determinisation of Weighted Tree Automata; 3.1 Introduction; 3.2 Preliminaries; 3.3 Factorisation in Monoids; 3.3.1 Ordering Multisets over Monoids; 3.3.2 Cayley Graph and Cayley Distance; 3.3.3 Divisors and Rests; 3.3.4 Factorisation Properties; 3.4 Weighted Tree Automata over M_fin(M) and the Twinning Property; 3.4.1 Weighted Tree Automata over M_fin(M); 3.4.2 The Twinning Property; 3.5 Sequentialisation of Weighted Tree Automata over M_fin(M); 3.5.1 The Sequentialisation Construction; 3.5.2 The Finitely R-Ambiguous Case; 3.6 Relating WTA over M_fin(M) and WTA over S; 3.7 M-Sequentialisation of Weighted Tree Automata; 3.7.1 Accumulation of D_B; 3.7.2 M-Sequentialisation Results; 3.8 Comparison of our Results to the Literature; 3.8.1 Determinisation of Unweighted Tree Automata; 3.8.2 The Free Monoid Case; 3.8.3 The Group Case; 3.8.4 The Extremal Case; 3.9 Conclusion
4 Approximated Determinisation of Weighted Tree Automata; 4.1 Introduction; 4.2 Preliminaries; 4.3 Approximated Determinisation; 4.3.1 The Approximated Determinisation Construction; 4.3.2 Correctness of the Construction; 4.4 The Approximated Twinning Property; 4.4.1 Implications for Approximated Determinisability; 4.4.2 Decidability of the Twinning Property; 4.5 Conclusion
5 Kleene and Büchi Theorems for Weighted Forest Languages over M-Monoids; 5.1 Introduction; 5.2 Preliminaries; 5.3 Weighted Forest Automata; 5.3.1 Forests; 5.3.2 Weighted Forest Automata; 5.3.3 Rectangularity; 5.3.4 I-recognisable is R-recognisable; 5.4 Kleene's Theorem; 5.4.1 Kleene's Theorem for Trees; 5.4.2 Kleene's Theorem for Forests; 5.4.3 An Inductive Approach; 5.5 Büchi's Theorem; 5.5.1 Büchi's Theorem for Trees; 5.5.2 Büchi's Theorem for Forests; 5.6 Conclusion
6 Rational Weighted Tree Languages with Storage; 6.1 Introduction; 6.2 Preliminaries; 6.3 Rational Weighted Tree Languages with Storage; 6.4 The Kleene-Goldstine Theorem; 6.5 Closure of Rat(S¢,Σ,S) under Rational Operations; 6.5.1 Top-Concatenation, Scalar Multiplication, and Sum; 6.5.2 α-Concatenation; 6.5.3 α-Kleene Star; 6.6 Conclusion
7 Outlook
References
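The positive M-sequentialisation result above assumes, among other cases, an additively idempotent semiring. As a tiny sanity check (our own illustration; the encoding of semiring addition as a Python function is an assumption, not the thesis's), the snippet below confirms that tropical addition (min) is idempotent while ordinary addition is not.

```python
# Hedged illustration: additive idempotency means a + a = a for all a.
# The tropical semiring uses min as addition, so it is additively
# idempotent; the semiring of reals with ordinary + is not.

def additively_idempotent(add, samples):
    """Check a + a == a on a finite sample of semiring elements."""
    return all(add(a, a) == a for a in samples)

tropical_add = min                      # addition of the tropical semiring
numeric_add = lambda a, b: a + b        # ordinary addition

samples = [0.0, 1.5, -2.0, 7.0]
print(additively_idempotent(tropical_add, samples))  # True
print(additively_idempotent(numeric_add, samples))   # False
```

Idempotency of addition is exactly what makes a determinised automaton's "keep only the best weight per state" bookkeeping lossless in the tropical case.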
19

MR-tomographische Darstellung intracerebraler Blutungen mit und ohne Therapie / Different magnetic resonance imaging of experimentally induced intracerebral hemorrhages with and without therapy

Meddour, Miriam 02 February 2011
No description available.
20

On the numerical analysis of eigenvalue problems

Gedicke, Joscha Micha 05 November 2013
This thesis on the numerical analysis of eigenvalue problems covers five major aspects of the numerical analysis of adaptive finite element methods for eigenvalue problems. The first part presents a combined adaptive finite element method with an iterative algebraic eigenvalue solver for a symmetric eigenvalue problem, of asymptotically quasi-optimal computational complexity. The second part introduces fully computable two-sided bounds on the eigenvalues of the Laplace operator on arbitrarily coarse meshes, based on an approximation of the corresponding eigenfunction in the nonconforming Crouzeix-Raviart finite element space plus some postprocessing. The efficiency of the guaranteed error bounds involves the global mesh size and is proven for the large class of graded meshes. The third part presents an adaptive finite element method (AFEM) based on nodal-patch refinement that leads to an asymptotic error reduction property for the adaptive sequence of simple eigenvalues and eigenfunctions of the Laplace operator. The saturation property of the eigenvalue error, proven here for the first time, yields reliability and efficiency for a class of hierarchical a posteriori error estimators. The fourth part considers a posteriori error estimators for convection-diffusion eigenvalue problems, as discussed by Heuveline and Rannacher (2001) in the context of the dual-weighted residual method (DWR); two new dual-weighted a posteriori error estimators are presented. The last part presents three adaptive algorithms for eigenvalue problems associated with non-selfadjoint partial differential operators. The basis of these algorithms is a homotopy method that departs from a well-understood selfadjoint problem. Apart from the adaptive grid refinement, the progress of the homotopy as well as the number of iterations of the algebraic solver are controlled adaptively in order to balance the contributions of the different error sources.
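One idea behind computable eigenvalue bounds can be illustrated in a few lines: by the Rayleigh-quotient minimum principle, every admissible trial vector yields a guaranteed upper bound on the smallest eigenvalue. The sketch below is our own illustration with a 1-D finite-difference Laplacian, not the thesis's Crouzeix-Raviart construction; the exact smallest Dirichlet eigenvalue on (0,1) is pi^2 ≈ 9.8696.

```python
# Hedged illustration: Rayleigh-quotient upper bounds for the smallest
# eigenvalue of the 1-D Dirichlet Laplacian, discretised as the matrix
# A = tridiag(-1, 2, -1) / h^2 with zero boundary values.

import math

def rayleigh_upper_bound(v, h):
    """R(v) = (v^T A v) / (v^T v) >= smallest eigenvalue of A."""
    n = len(v)
    num = 0.0
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0       # Dirichlet boundary: 0
        right = v[i + 1] if i < n - 1 else 0.0
        num += v[i] * (2.0 * v[i] - left - right) / (h * h)
    den = sum(x * x for x in v)
    return num / den

n = 99                                   # interior grid points
h = 1.0 / (n + 1)
hat = [min(i + 1, n - i) * h for i in range(n)]        # piecewise-linear tent
sine = [math.sin(math.pi * (i + 1) * h) for i in range(n)]  # exact eigenvector

print(rayleigh_upper_bound(hat, h))    # crude trial: bound near 12
print(rayleigh_upper_bound(sine, h))   # optimal trial: just below pi^2
```

The better the trial vector approximates the true eigenfunction, the tighter the upper bound; guaranteed lower bounds, as in the second part of the thesis, require additional structure beyond this minimum principle.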
