1

Ein Sequenzdesign-Algorithmus für verzweigte DNA-Strukturen / A Sequence Design Algorithm for Branched DNA Structures

Seiffert, Jan (28 November 2008)
Due to its self-recognition abilities, DNA has great potential to open up new bottom-up routes towards nanofabrication. DNA allows well-defined arrangements of building blocks at distances of only a few nanometres. For example, a DNA network with regularly attached metal beads or proteins can be placed on a surface to act as a catalyst or a sensor. DNA can also be used as a template for nanowires and might therefore play a major role in future nanoelectronics. DNA structures mostly assemble themselves through hybridization of single-stranded DNA molecules. The self-assembly process is controlled by the base sequences of the single strands: the sequence configuration defines the shape of the resulting structure. This thesis introduces rules for sequence configurations that DNA strands must fulfil to produce a desired target structure in a hybridization process. The basic principle of these rules is the minimization of the length of potential base mispairings (mismatch minimization). An algorithm is presented which generates suitable sequence configurations according to the introduced rules. The algorithm can handle arbitrary target structures, works fully automatically and, for most structure dimensions, is very fast. A Java implementation of the algorithm called Seed is freely available at http://nano.tu-dresden.de/~jseiffert/Seed/. Finally, this work describes a structure-building experiment in which a number of double-crossover (DX) molecules were concatenated into a long chain. The sequence configuration for this experiment was generated with Seed, demonstrating the applicability of the presented algorithm.
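The mismatch-minimization principle mentioned in the abstract can be pictured with a small sketch. This is not the Seed algorithm; it only illustrates the kind of metric such design rules target: among candidate strands that are not meant to bind one another, prefer the set whose longest potentially complementary stretch is shortest. All function names and the random search are illustrative assumptions.

```python
# Illustrative sketch only -- not the Seed algorithm from the thesis.
# Assumes the strands are meant to stay independent, so any complementary
# stretch between them (or within one of them) counts as unintended.
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def longest_common_substring(a: str, b: str) -> int:
    # Simple O(len(a)*len(b)) dynamic program.
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def worst_unintended_pairing(strands: list[str]) -> int:
    # Longest stretch of any strand that is complementary to a stretch
    # of any strand (including itself); long stretches favour misassembly.
    return max(
        longest_common_substring(s, reverse_complement(t))
        for s in strands for t in strands
    )

def random_design(lengths: list[int]) -> list[str]:
    return ["".join(random.choice("ATGC") for _ in range(n)) for n in lengths]

# Keep the best of many random candidates (Seed itself is deterministic and
# constraint-driven; this random search only illustrates the scoring idea).
best = min((random_design([20, 20, 20]) for _ in range(200)),
           key=worst_unintended_pairing)
print(best, worst_unintended_pairing(best))
```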
2

To and Fro Between Tableaus and Automata for Description Logics

Hladik, Jan (31 January 2008)
Description Logics (DLs) are a family of knowledge representation languages with well-defined, logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, automata allow for a theoretically elegant handling of infinite structures, but they are far less suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to combine the positive properties of both. For this purpose, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
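As a generic illustration of the tableau-based decision procedures the abstract refers to (not code from the thesis), the following sketch decides satisfiability of concepts of the basic description logic ALC in negation normal form, without a TBox; the tuple encoding of concepts and the function names are assumptions made for the example.

```python
# A textbook-style satisfiability tableau for basic ALC (concepts in
# negation normal form, no TBox). Generic illustration only.
# Concepts: ("atom", A), ("not", A), ("and", C, D), ("or", C, D),
#           ("exists", r, C), ("forall", r, C)

def saturate(label):
    """Exhaustively apply the AND rule and branch on the OR rule,
    yielding propositionally expanded label sets."""
    label = set(label)
    for c in list(label):
        if c[0] == "and" and not (c[1] in label and c[2] in label):
            yield from saturate(label | {c[1], c[2]})
            return
        if c[0] == "or" and c[1] not in label and c[2] not in label:
            yield from saturate(label | {c[1]})
            yield from saturate(label | {c[2]})
            return
    yield label

def has_clash(label):
    atoms = {c[1] for c in label if c[0] == "atom"}
    negated = {c[1] for c in label if c[0] == "not"}
    return bool(atoms & negated)

def satisfiable(concepts):
    """Is the conjunction of the given NNF concepts satisfiable?"""
    for label in saturate(set(concepts)):
        if has_clash(label):
            continue
        # EXISTS rule: each existential restriction gets a fresh successor,
        # into which all matching universal restrictions propagate.
        if all(
            satisfiable({c[2]} | {d[2] for d in label
                                  if d[0] == "forall" and d[1] == c[1]})
            for c in label if c[0] == "exists"
        ):
            return True
    return False

print(satisfiable({("exists", "r", ("and", ("atom", "A"), ("not", "A")))}))  # False
print(satisfiable({("or", ("atom", "A"), ("not", "A"))}))                    # True
```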
3

Nondeterminism and Language Design in Deep Inference

Kahramanogullari, Ozan (13 April 2007)
This thesis studies the design of deep-inference deductive systems. In systems with deep inference, in contrast to traditional proof-theoretic systems, inference rules can be applied at any depth inside logical expressions. Deep applicability of inference rules provides a rich combinatorial analysis of proofs. Deep inference also makes it possible to design deductive systems that are tailored for computer-science applications and provably not expressible otherwise. By applying the inference rules deeply, logical expressions can be manipulated starting from their subexpressions. In this way, we can simulate analytic proofs of traditional deductive formalisms, and we can also construct analytic proofs that are much shorter than in those formalisms. However, deep applicability of inference rules causes much greater nondeterminism in proof construction. This thesis attacks the problem of dealing with nondeterminism in proof search while preserving the shorter proofs that are available thanks to deep inference. By redesigning the deep-inference deductive systems, some redundant applications of the inference rules are prevented. By introducing a new technique which reduces nondeterminism, it becomes possible to obtain more immediate access to shorter proofs without breaking proof-theoretic properties such as cut elimination. Different implementations presented in this thesis make it possible to experiment with the developed techniques and to observe the performance improvements. Within a computation-as-proof-search perspective, we use deep-inference deductive systems to develop a common proof-theoretic language for the two fields of planning and concurrency.
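The extra nondeterminism that deep inference introduces can be made concrete with a small sketch (not from the thesis): a single schematic rule, applied at every position of a formula tree rather than only at the root, already yields several distinct successor formulas in one step. The formula encoding and the switch-style rule shown here are simplified assumptions.

```python
# Illustration of the basic deep-inference idea: a rule may be applied to
# any subformula, not only at the top level.
# Formulas: ("and", A, B), ("or", A, B), or atom strings.
# Schematic example rule (switch-like):  (A or B) and C  -->  (A and C) or B

def subpositions(f, path=()):
    """Yield every position (path of child indices) in the formula tree."""
    yield path
    if isinstance(f, tuple):
        for i, child in enumerate(f[1:], start=1):
            yield from subpositions(child, path + (i,))

def get(f, path):
    for i in path:
        f = f[i]
    return f

def replace(f, path, new):
    if not path:
        return new
    i = path[0]
    return f[:i] + (replace(f[i], path[1:], new),) + f[i + 1:]

def switch(sub):
    """If sub matches (A or B) and C, return (A and C) or B, else None."""
    if isinstance(sub, tuple) and sub[0] == "and" \
            and isinstance(sub[1], tuple) and sub[1][0] == "or":
        a, b, c = sub[1][1], sub[1][2], sub[2]
        return ("or", ("and", a, c), b)
    return None

def deep_applications(f, rule):
    """All formulas reachable by one deep application of the rule -- this
    set of choices is the source of the extra nondeterminism in proof search."""
    results = []
    for path in subpositions(f):
        rewritten = rule(get(f, path))
        if rewritten is not None:
            results.append(replace(f, path, rewritten))
    return results

formula = ("and", ("or", "a", "b"), ("and", ("or", "c", "d"), "e"))
for g in deep_applications(formula, switch):
    print(g)   # two distinct results: one rewrite at the root, one deep inside
```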
4

Compositional Synthesis and Most General Controllers

Klein, Joachim (18 December 2013)
Given a formal model of the behavior of a system, an objective and some notion of control, the goal of controller synthesis is to construct a (finite-state) controller that ensures that the system always satisfies the objective. Often, the controller can base its decisions only on limited observations of the system. This notion of limited observability induces a partial-information game between the controller and the uncontrollable part of the system. A successful controller then realizes an observation-based strategy that enforces the objective. In this thesis we consider the controller synthesis problem in the linear-time setting, where the behavior of the system is given as a nondeterministic, labeled transition system A and the controller can only partially observe and control the behavior of A. The goal of the thesis is to develop a compositional approach for constructing controllers, suitable for treating conjunctive cascades of linear-time objectives P_1, P_2, ..., P_k in an online manner. We iteratively construct a controller C_1 for system A enforcing P_1, then a controller C_2 enforcing P_2 for the parallel composition of the first controller with the system, and so on. It is crucial for this approach that each controller C_i enforces P_i in a most general manner, being as permissive as possible; otherwise, behavior that is needed to enforce subsequent objectives could be prematurely removed. Standard notions of strategies and controllers allow such a most general treatment only for the limited class of safety objectives. We introduce a novel concept of most general strategies and controllers suited for the compositional treatment of objectives beyond safety. We demonstrate the existence of most general controllers for all enforceable, observation-based omega-regular objectives and provide algorithms for the construction of such most general controllers, with specialized variants for the subclasses of safety and co-safety objectives. We furthermore adapt and apply our general framework for the compositional synthesis of most general controllers to the setting of exogenous coordination in the context of the channel-based coordination language Reo and the constraint automata framework, and report on our implementation in the verification toolset Vereofy. The construction of most general controllers in Vereofy for omega-regular objectives relies on our tool ltl2dstar for generating deterministic omega-automata from Linear Temporal Logic (LTL) formulas. We introduce a generic improvement for exploiting insensitivity to stuttering during the determinization construction and evaluate its effectiveness in practice. We further investigate the practical performance of recently proposed variants of Safra's determinization construction.
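For the special case the abstract notes as already covered by standard notions, full observation and a pure safety objective, the "most general" (maximally permissive) controller has a simple fixed-point characterisation. The sketch below illustrates only that case; it is not the thesis's construction for observation-based omega-regular objectives, and all identifiers are assumptions.

```python
# Generic illustration: a maximally permissive safety controller on a
# finite labeled transition system. Compute the greatest set of states
# from which no uncontrollable move can leave the safe region, then allow
# every controllable move that stays inside it.

def most_permissive_safety_controller(states, trans, uncontrollable, bad):
    """trans: set of (source, action, target); returns (winning, allowed)."""
    winning = set(states) - set(bad)
    changed = True
    while changed:
        changed = False
        for (s, a, t) in trans:
            # An uncontrollable step that can escape the winning region
            # makes its source state losing.
            if a in uncontrollable and s in winning and t not in winning:
                winning.discard(s)
                changed = True
    # Most general controller: permit every controllable transition whose
    # target stays winning; anything stricter would be less permissive.
    allowed = {(s, a, t) for (s, a, t) in trans
               if s in winning and t in winning and a not in uncontrollable}
    return winning, allowed

states = {"s0", "s1", "s2", "bad"}
trans = {("s0", "go", "s1"), ("s0", "risk", "s2"),
         ("s2", "fail", "bad"), ("s1", "loop", "s1")}
print(most_permissive_safety_controller(states, trans, {"fail"}, {"bad"}))
# "risk" is disallowed: from s2 the uncontrollable "fail" could reach "bad".
```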
5

Efficient Reorganisation of Hybrid Index Structures Supporting Multimedia Search Criteria

Kropf, Carsten (11 February 2017)
This thesis describes the development and setup of hybrid index structures. They are access methods for retrieval in hybrid data spaces, which are formed by one or more relational or normalised columns in conjunction with one non-relational or non-normalised column. Examples of such hybrid data spaces are, among others, textual data combined with geographical data, or data from enterprise content management systems; other non-relational data types, such as image feature vectors, may be stored as well. Hybrid index structures are known to perform efficiently for retrieval operations. Unfortunately, little information is available about the reorganisation operations which insert or update the row tuples; the fundamental research has mainly been carried out in simulation-based environments. This work follows on from a previous thesis that implemented hybrid access structures in a realistic database environment. During that implementation it became obvious that retrieval works efficiently, yet the restructuring approaches require too much effort to be usable, e.g., in web search engine environments where several thousand documents are inserted or modified every day. These search engines rely on relational database systems as storage backends; hence, such access methods for hybrid data spaces need to work inside real-world database management systems. This thesis applies a systematic approach to optimising the rearrangement algorithms in realistic scenarios: a measurement and evaluation scheme is created and repeatedly applied to an evolving implementation and a model of hybrid index structures, in order to optimise the reorganisation algorithms and make the setup of hybrid index structures in real-world information systems possible. For this purpose, a set of input corpora is selected and applied to the test suite together with the evaluation scheme. In summary, this thesis describes input sets, a test suite including an evaluation scheme, and optimisation iterations on reorganisation algorithms, backed by a theoretical model framework, to provide efficient reorganisation of hybrid index structures supporting multimedia search criteria.
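As a toy picture of what a hybrid index combines (not the access method developed in the thesis), the sketch below answers a query that mixes a relational range predicate with a keyword predicate from a single structure; the class and method names are assumptions.

```python
# Toy hybrid index: an inverted keyword index whose posting lists are kept
# sorted by a relational attribute (here, a year), so one structure serves
# a combined keyword + range query.
from collections import defaultdict
import bisect

class HybridIndex:
    def __init__(self):
        self.by_term = defaultdict(list)   # term -> sorted [(year, row_id)]

    def insert(self, row_id, year, text):
        for term in set(text.lower().split()):
            bisect.insort(self.by_term[term], (year, row_id))

    def search(self, term, year_from, year_to):
        postings = self.by_term.get(term.lower(), [])
        lo = bisect.bisect_left(postings, (year_from, ""))
        hi = bisect.bisect_right(postings, (year_to, chr(0x10FFFF)))
        return [row_id for _, row_id in postings[lo:hi]]

idx = HybridIndex()
idx.insert("r1", 2008, "branched DNA structures")
idx.insert("r2", 2013, "probabilistic concurrent systems")
idx.insert("r3", 2007, "deep inference systems")
print(idx.search("systems", 2010, 2015))   # ['r2']
```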
6

Static Partial Order Reduction for Probabilistic Concurrent Systems

Fernández-Díaz, Álvaro; Baier, Christel; Benac-Earle, Clara; Fredlund, Lars-Åke (10 September 2013)
Sound criteria for partial order reduction for probabilistic concurrent systems have been presented in the literature. Their realization relies on a depth-first search-based approach for generating the reduced model. The drawback of this dynamic approach is that it can hardly be combined with other techniques to tackle the state explosion problem, e.g., symbolic probabilistic model checking with multi-terminal variants of binary decision diagrams. Following the approach presented by Kurshan et al. for non-probabilistic systems, we study partial order reduction techniques for probabilistic concurrent systems that can be realized by a static analysis. The idea is to inject the reduction criteria into the control flow graphs of the processes of the system to be analyzed. We provide the theoretical foundations of static partial order reduction for probabilistic concurrent systems and present algorithms to realize them. Finally, we report on some experimental results.
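A minimal sketch of the kind of static independence check that partial order reduction builds on (not the probabilistic criteria or algorithms from the paper): two actions of different processes whose variable accesses do not conflict commute, so only one of their interleaving orders needs to be explored. The action encoding below is an assumption for illustration.

```python
# Static sufficient condition for independence of two actions, as used to
# justify exploring only one interleaving order during model checking.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    process: str
    reads: frozenset = field(default_factory=frozenset)
    writes: frozenset = field(default_factory=frozenset)

def independent(a: Action, b: Action) -> bool:
    if a.process == b.process:
        return False                      # same control flow graph
    conflict = (a.writes & (b.reads | b.writes)) or \
               (b.writes & (a.reads | a.writes))
    return not conflict

inc_x = Action("P1", reads=frozenset({"x"}), writes=frozenset({"x"}))
inc_y = Action("P2", reads=frozenset({"y"}), writes=frozenset({"y"}))
read_x = Action("P2", reads=frozenset({"x"}))
print(independent(inc_x, inc_y))   # True  -> one order suffices
print(independent(inc_x, read_x))  # False -> both orders must be kept
```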
