  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Skeleton-based visualization of massive voxel objects with network-like architecture

Prohaska, Steffen January 2007 (has links)
This work introduces novel internal and external memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise linear geometry, that is, triangle meshes and piecewise straight lines. The presented techniques help to tackle the challenge of visualizing and analyzing 3D images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research. Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. Characterizing the pairs of cells that may be deleted is much simpler than previous characterizations of simple voxels. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations that cause, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology. Section 2.3.2 introduces a noise-insensitive measure based on the geodesic distance along the boundary to compute two-dimensional skeletons. The measure is able to retain thin object structures if they are geometrically important while ignoring noise on the object's boundary; no other measure is known to combine these properties. The measure is also used to guide erosion in a thinning process from the boundary towards lines centered within plate-like structures. Geodesic distance based quantities seem to be well suited to robustly identify one- and two-dimensional skeletons, although a theoretical justification for this observation is still pending. Chapter 6 applies the method to the visualization of bone micro-architecture. Chapter 3 describes a novel geometry generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than a representation of voxel skeletons using closed surfaces such as small cubes or iso-surfaces. A conclusion is that reasoning specifically about voxel skeleton configurations instead of generic voxel configurations helps to deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6. Chapter 5 presents a novel external memory algorithm for distance-ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning so that they execute I/O-efficiently when the input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple, yet it was necessary to carefully analyze the effects of block boundaries to devise globally correct external memory variants of the known algorithms. In general, doing so is superior to naive block-wise processing that ignores boundary effects. Chapter 6 applies the algorithms in a novel method, based on confocal microscopy, for the quantitative study of micro-vascular networks in the field of microcirculation.
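As a point of reference for the distance-ordered thinning discussed above, the following is a minimal sketch of a two-pass chamfer (3-4) distance transform on a 2D grid. It is a generic textbook-style illustration, not the I/O-efficient, block-wise 3D variant developed in the thesis; the function name and array layout are assumptions made for this example.

```python
import numpy as np

def chamfer_distance_transform(mask):
    """Two-pass chamfer (3-4) distance transform on a 2D grid.

    mask: boolean array, True for object pixels. Returns, for every object
    pixel, the chamfer distance to the nearest background pixel; such a
    distance map is what distance-ordered homotopic thinning erodes by.
    """
    INF = 10**9
    h, w = mask.shape
    dist = np.where(mask, INF, 0).astype(np.int64)
    # Forward pass: propagate distances from the top-left neighbours.
    for y in range(h):
        for x in range(w):
            if dist[y, x] == 0:
                continue
            best = dist[y, x]
            if x > 0:
                best = min(best, dist[y, x - 1] + 3)
            if y > 0:
                best = min(best, dist[y - 1, x] + 3)
                if x > 0:
                    best = min(best, dist[y - 1, x - 1] + 4)
                if x < w - 1:
                    best = min(best, dist[y - 1, x + 1] + 4)
            dist[y, x] = best
    # Backward pass: propagate distances from the bottom-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            best = dist[y, x]
            if x < w - 1:
                best = min(best, dist[y, x + 1] + 3)
            if y < h - 1:
                best = min(best, dist[y + 1, x] + 3)
                if x < w - 1:
                    best = min(best, dist[y + 1, x + 1] + 4)
                if x > 0:
                    best = min(best, dist[y + 1, x - 1] + 4)
            dist[y, x] = best
    return dist
```

An external-memory variant in the spirit of Chapter 5 would process the volume block by block and re-propagate distances across block boundaries until the values stabilise.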
42

Dynamische Rissdetektion mittels photogrammetrischer Verfahren – Entwicklung und Anwendung optimierter Algorithmen

Hampel, Uwe, Maas, Hans-Gerd 03 June 2009 (has links) (PDF)
Digital close-range photogrammetry enables the efficient acquisition of three-dimensional object surfaces in experimental investigations. Photogrammetric methods are in principle well suited, provided the relevant boundary conditions are observed, in particular for the full-field measurement of deformations and for crack detection. Drawing on current investigations of textile-reinforced concrete specimens, this contribution addresses the problem of crack detection and gives an overview of the state of development and the achievable accuracy potential. With regard to the practical application of the presented methods, several options for optimization are finally discussed.
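As a rough illustration of how crack detection from photogrammetrically measured surface points can work, the sketch below flags intervals between neighbouring points whose apparent strain exceeds a threshold. It is a simplified 1D example with assumed variable names and threshold, not the algorithms developed by the authors.

```python
import numpy as np

def detect_cracks_1d(x_ref, x_def, strain_threshold=0.002):
    """Flag likely cracks along a measured 1D profile of surface points.

    x_ref: reference coordinates of the tracked points (sorted, in metres)
    x_def: coordinates of the same points in the deformed state
    A crack between two neighbouring points shows up as a locally
    concentrated elongation, i.e. an apparent strain far above the
    tensile capacity of the concrete matrix.
    """
    gauge = np.diff(np.asarray(x_ref, dtype=float))    # initial point spacing
    elong = np.diff(np.asarray(x_def, dtype=float)) - gauge
    strain = elong / gauge                              # apparent strain per interval
    crack_idx = np.nonzero(strain > strain_threshold)[0]
    return crack_idx, elong[crack_idx]                  # indices and rough crack widths

# Example: a 0.1 mm crack opening between the second and third point.
idx, widths = detect_cracks_1d([0.0, 0.01, 0.02, 0.03],
                               [0.0, 0.01, 0.0201, 0.0301])
```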
43

To and Fro Between Tableaus and Automata for Description Logics

Hladik, Jan 31 January 2008 (has links) (PDF)
Description Logics (DLs) are a family of knowledge representation languages with well-defined, logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, automata allow for an elegant handling of infinite structures, but they are not suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both. For this purpose, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
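To make the tableau side concrete, here is a minimal sketch of a tableau-style satisfiability test for ALC concepts in negation normal form without a TBox; the concept encoding and function names are assumptions for this example, and the thesis's actual algorithms and the automata constructions are considerably more involved.

```python
# Concepts in negation normal form, encoded as tuples:
#   ('atom', 'A'), ('neg', 'A'), ('and', C, D), ('or', C, D),
#   ('some', 'r', C), ('all', 'r', C)

def satisfiable(concept):
    """Tableau-style satisfiability test for an ALC concept (empty TBox)."""
    return _expand({concept})

def _expand(label):
    label = set(label)
    # And-rule: saturate conjunctions.
    changed = True
    while changed:
        changed = False
        for c in list(label):
            if c[0] == 'and':
                for d in c[1:]:
                    if d not in label:
                        label.add(d)
                        changed = True
    # Clash: some atom occurs both positively and negated.
    if {c[1] for c in label if c[0] == 'atom'} & {c[1] for c in label if c[0] == 'neg'}:
        return False
    # Or-rule: branch non-deterministically on one disjunction.
    for c in label:
        if c[0] == 'or':
            rest = label - {c}
            return any(_expand(rest | {d}) for d in c[1:])
    # Some-rule: each existential creates a fresh successor, which must also
    # satisfy every value restriction on the same role.
    for c in label:
        if c[0] == 'some':
            succ = {c[2]} | {d[2] for d in label if d[0] == 'all' and d[1] == c[1]}
            if not _expand(succ):
                return False
    return True

# Example: (some r. A) and (all r. not A) is unsatisfiable.
print(satisfiable(('and', ('some', 'r', ('atom', 'A')),
                          ('all', 'r', ('neg', 'A')))))   # -> False
```

Without a TBox every branch terminates; with general axioms the tableau needs blocking, while the automata view yields EXPTIME upper bounds more directly, which is the kind of gap between the two approaches that the thesis studies.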
44

Beitrag zur Energieeinsatzoptimierung mit evolutionären Algorithmen in lokalen Energiesystemen mit kombinierter Nutzung von Wärme- und Elektroenergie

Hable, Matthias 06 March 2005 (has links) (PDF)
Decentralised power systems with a high share of power generated from renewable energy sources and cogeneration units (CHP) are emerging worldwide. Optimising the energy usage of such systems is a difficult task, as the stochastic fluctuations of generation from renewable sources, the coupling of electrical and thermal power generation by CHP and the time dependence of the necessary storage devices require new approaches. Evolutionary algorithms are able to solve the optimisation task of the energy management. They use the principles of erroneous replication and cumulative selection, which can also be observed in biological processes; very often recombination is included in the optimisation process as well. Using these quite simple principles, the algorithm is able to explore difficult, large and high-dimensional solution spaces. In most cases it converges to the optimal solution quite fast compared with other types of optimisation algorithms. Using the example of a one-dimensional replicator, it is derived that the convergence speed in optimising convex functions increases by several orders of magnitude after only a few cycles compared with a Monte Carlo simulation. Models for several types of equipment are developed in this work. The cost of operating a given power system over a given time span is chosen as the objective function. There is a variety of parameters (more than 15) that can be set in the algorithm. Quite extensive investigations showed that the product of the number of replicators and the number of calculated cycles has the strongest influence on the quality of the solution, but the calculation time is also proportional to this product. If reasonable values are chosen for the remaining parameters, the algorithm finds appropriate solutions in adequate time in most cases. Although a pure evolutionary algorithm will converge to a solution, the convergence speed can be greatly enhanced by extending it to a hybrid algorithm. Grouping the replicators of the first cycle in promising regions of the solution space by an intelligent initialisation algorithm and repairing bad solutions by a Lamarckian repair algorithm make the optimisation converge fast to good optima. The algorithm was tested using data of several existing energy systems of different structure. For a power system with 15 different types of units, the computation time required to optimise the energy usage is in the range of 15 minutes. The results of this work show that extended hybrid evolutionary algorithms are suitable for the integrated optimisation of energy usage in combined local energy systems. They reach better results with the same or less effort than many other optimisation methods. The developed method for the optimisation of energy usage can be applied to energy systems of small and large size and complexity, as optimisation computations of energy systems on the island of Cape Clear, at FH Offenburg and in the Allgäu demonstrate.
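As a minimal sketch of the principles named above (erroneous replication and cumulative selection), the following evolutionary loop minimises a generic operating-cost function. It is a toy (mu + lambda) strategy with assumed names and parameters, without the intelligent initialisation and Lamarckian repair extensions of the hybrid algorithm described in the thesis.

```python
import random

def evolve(cost, dim, bounds, replicators=30, cycles=200, sigma=0.1):
    """Minimal (mu + lambda) evolutionary search minimising `cost`."""
    low, high = bounds
    pop = [[random.uniform(low, high) for _ in range(dim)] for _ in range(replicators)]
    for _ in range(cycles):
        offspring = []
        for parent in pop:
            # Erroneous replication: copy the parent with Gaussian mutation,
            # clipped to the box constraints.
            child = [min(high, max(low, g + random.gauss(0.0, sigma * (high - low))))
                     for g in parent]
            offspring.append(child)
        # Cumulative selection: keep the best replicators of parents + offspring.
        pop = sorted(pop + offspring, key=cost)[:replicators]
    return pop[0]

# Toy dispatch example: match the generation of three units to a demand profile.
demand = [3.0, 5.0, 4.0]
schedule = evolve(lambda x: sum((g - d) ** 2 for g, d in zip(x, demand)),
                  dim=3, bounds=(0.0, 10.0))
```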
45

Advanced visualization and modeling of tetrahedral meshes

Frank, Tobias 17 July 2009 (has links) (PDF)
Tetrahedral meshes are becoming more and more important for geo-modeling applications. The presented work introduces new algorithms for the efficient visualization and modeling of tetrahedral meshes. Visualization is addressed by a generic framework that includes the extraction of geological information such as stratigraphic columns and fault block boundaries, simultaneous co-rendering of different attributes, and Boolean Constructive Solid Geometry operations with constant complexity. Modeling can be classified into geometric and implicit modeling. Geometric modeling addresses local mesh refinement to increase the numerical resolution of a given mesh. Implicit modeling covers the definition and manipulation of implicitly defined models. A new surface reconstruction method was developed to reconstruct complex, multi-valued surfaces from the noisy and sparse data sets that occur in geological applications. The surface can be bounded and may have discontinuities. Furthermore, this work proposes a novel algorithm for the rapid editing of implicitly defined shapes such as horizons, based on the GeoChron parametrization. The editing is performed interactively on the 3D volumetric model, and geological constraints are respected automatically.
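Implicitly defined models on tetrahedral meshes are typically level sets of a scalar field stored at the vertices and interpolated linearly inside each tetrahedron. The sketch below shows only this basic building block (barycentric interpolation in one tetrahedron) under assumed function names; the thesis's GeoChron-based editing and reconstruction machinery goes far beyond it.

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p with respect to tetrahedron tet (4 x 3)."""
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in tet)
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    l1, l2, l3 = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    return np.array([1.0 - l1 - l2 - l3, l1, l2, l3])

def implicit_value(p, tet, node_values):
    """Linearly interpolate a per-vertex scalar field (e.g. an implicit horizon
    function) at point p; the modelled surface is the field's zero level set."""
    return float(barycentric(p, tet) @ np.asarray(node_values, dtype=float))

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(implicit_value((0.25, 0.25, 0.25), tet, [-1.0, 1.0, 1.0, 1.0]))  # -> 0.5
```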
46

Profillinie 6: Modellierung, Simulation, Hochleistungsrechnen

Rehm, Wolfgang, Hofmann, Bernd, Meyer, Arnd, Steinhorst, Peter, Weinelt, Wilfried, Rünger, Gudula, Platzer, Bernd, Urbaneck, Thorsten, Lorenz, Mario, Thießen, Friedrich, Kroha, Petr, Benner, Peter, Radons, Günter, Seeger, Steffen, Auer, Alexander A., Schreiber, Michael, John, Klaus Dieter, Radehaus, Christian, Farschtschi, Abbas, Baumgartl, Robert, Mehlan, Torsten, Heinrich, Bernd 11 November 2005 (has links) (PDF)
At TU Chemnitz, the fields of computational science and of parallel and distributed high-performance computing have developed with increasing interconnection for more than two decades. Coordinating and bundling the corresponding research activities in Profillinie 6, "Modellierung, Simulation, Hochleistungsrechnen", will make it possible to keep pace in the international competition of knowledge.
47

Hybride Indexstrukturen

Kropf, Carsten 10 October 2014 (has links) (PDF)
The following describes a doctoral project on the implementation and optimization of hybrid index structures. In hybrid index structures, the improved search performance is achieved through a higher precomputation effort during insert operations. In contrast to approaches that combine several separate index structures or execute separate search queries, this results in reorganization costs for hybrid index structures that are prohibitive for their use in most applications. These reorganization operations are to be optimized within the doctoral project in order to ensure applicability in realistic scenarios.
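A toy sketch of the trade-off described above (an illustration with assumed names, not one of the index structures investigated in the project): one combined structure per keyword answers a combined keyword-plus-range query with a single lookup, at the price of extra work on every insert.

```python
from collections import defaultdict
from bisect import insort, bisect_left, bisect_right

class HybridIndex:
    """Toy hybrid index: per keyword, records are kept sorted by a numeric key."""

    def __init__(self):
        self.by_term = defaultdict(list)   # term -> sorted list of (key, record_id)

    def insert(self, record_id, terms, key):
        # The precomputation: every insert updates one sorted list per term.
        for t in terms:
            insort(self.by_term[t], (key, record_id))

    def query(self, term, lo, hi):
        # A combined query uses a single structure instead of intersecting the
        # results of a separate text index and a separate range index.
        entries = self.by_term.get(term, [])
        start = bisect_left(entries, (lo, float('-inf')))
        end = bisect_right(entries, (hi, float('inf')))
        return [rid for _, rid in entries[start:end]]

idx = HybridIndex()
idx.insert(1, ["hotel"], 13.7)
idx.insert(2, ["hotel", "spa"], 51.0)
print(idx.query("hotel", 10.0, 20.0))   # -> [1]
```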
48

Mehrzieloptimierung betriebswirtschaftlicher Probleme durch evolutionäre Algorithmen /

Garen, Joost. January 2005 (has links) (PDF)
Univ., Diss.--Osnabrück, 2004.
49

Dynamics of Driven Quantum Systems:

Baghery, Mehrdad 15 January 2018 (has links) (PDF)
This thesis explores the possibility of using parallel algorithms to calculate the dynamics of driven quantum systems prevalent in atomic physics. In this process, new as well as existing algorithms are considered. The thesis is split into three parts. In the first part, an attempt is made to develop a new formalism of the time-dependent Schrödinger equation (TDSE) in the hope that the new formalism could lead to a parallel algorithm. The TDSE is written as an eigenvalue problem, the ground state of which represents the solution to the original TDSE. Even though mathematically sound and correct, it turns out that the ground state of this eigenvalue problem cannot easily be found numerically, rendering the original hope a false one. In the second part, we borrow a Bayesian global optimisation method from the machine-learning community in an effort to find the optimum conditions in different systems more quickly than with textbook optimisation algorithms. This algorithm is specifically designed to find the optimum of expensive functions, and is used in this thesis to 1. maximise the electron yield of hydrogen, 2. maximise the asymmetry in the photo-electron angular distribution of hydrogen, 3. maximise the higher-harmonic generation yield within a certain frequency range, and 4. generate short pulses by combining higher harmonics generated by hydrogen. In the last part, the phenomenon of dynamic interference (the temporal equivalent of the double-slit experiment) is discussed. The necessary conditions are derived from first principles and it is shown where some of the previous analytical and numerical studies have gone wrong; it turns out that the choice of gauge plays a crucial role. Furthermore, a number of different scenarios are presented where interference in the photo-electron spectrum is expected to occur.
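The Bayesian global optimisation mentioned in the second part can be sketched with a Gaussian-process surrogate and an expected-improvement acquisition function. The following is a generic one-dimensional illustration with assumed names and a toy objective standing in for an expensive simulation; it is not the thesis's actual setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(expensive_f, bounds, n_init=5, n_iter=20, seed=0):
    """Maximise an expensive black-box function of one variable."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([expensive_f(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    grid = np.linspace(lo, hi, 500).reshape(-1, 1)
    for _ in range(n_iter):
        gp.fit(X, y)                                    # surrogate model of f
        mu, sigma = gp.predict(grid, return_std=True)
        best = y.max()
        # Expected improvement: prefer points that are either promising (high mu)
        # or poorly explored (high sigma).
        with np.errstate(divide="ignore", invalid="ignore"):
            z = (mu - best) / sigma
            ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0.0] = 0.0
        x_next = grid[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_f(x_next[0]))
    return X[np.argmax(y), 0], y.max()

# Toy stand-in for an expensive calculation (e.g. one TDSE run per pulse setting).
x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2 + 1.0, bounds=(0.0, 1.0))
```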
50

Graph-based Analysis of Dynamic Systems

Schiller, Benjamin 23 November 2017 (has links) (PDF)
The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. They are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields great insights into the system's underlying structure, its characteristics, as well as properties of single components. The interpretation of these results can help us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones. The main issue in this scenario is the performance of analyzing the dynamic graph to obtain relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for the analysis of a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well known, their runtime is hard to predict when they are used for the representation of dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures can be selected. In this thesis, we present four contributions to tackle these problems: a benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the great insights provided by their graph-based analysis. We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis. We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to snapshot-based algorithms. Our results show great performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043x for synthetic and 2,882x for real-world datasets. We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers. The workers compute the properties of their respective subgraphs using standard algorithms. The collator component then merges their results into the properties of the original graph.
We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves great speedups, especially for the analysis of complex graph measures. We introduce two novel approaches for the selection of efficient graph data structures. The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4x over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during the analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3x for the analysis of a synthetic workload. Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between the different factors that influence the performance.
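The stream-based idea behind algorithms such as StreaM_k can be illustrated with a much simpler property: instead of recounting triangles for every snapshot, each edge update adjusts the count by the number of common neighbours of its endpoints. This is a generic sketch with assumed names, not an implementation from DNA.

```python
from collections import defaultdict

class StreamTriangleCounter:
    """Maintain the global triangle count of an undirected graph under a
    stream of edge insertions and removals."""

    def __init__(self):
        self.adj = defaultdict(set)
        self.triangles = 0

    def add_edge(self, u, v):
        if u == v or v in self.adj[u]:
            return
        # Every common neighbour closes one new triangle with (u, v).
        self.triangles += len(self.adj[u] & self.adj[v])
        self.adj[u].add(v)
        self.adj[v].add(u)

    def remove_edge(self, u, v):
        if v not in self.adj[u]:
            return
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # Every remaining common neighbour loses one triangle with (u, v).
        self.triangles -= len(self.adj[u] & self.adj[v])

counter = StreamTriangleCounter()
for e in [(1, 2), (2, 3), (1, 3), (3, 4)]:
    counter.add_edge(*e)
print(counter.triangles)   # -> 1
```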
