About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Digitalisierung der Pflanzenproduktion: Anforderungen an ein Farm Management- und Informationssystem (FMIS)

Leithold, Peer 21 April 2017 (has links)
No description available.
52

Graph-based Analysis of Dynamic Systems

Schiller, Benjamin 15 December 2016 (has links)
The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. They are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields great insights into the system's underlying structure, its characteristics, as well as properties of single components. The interpretation of these results can help us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones.

The main issue in this scenario is the performance of analyzing the dynamic graph to obtain relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for the analysis of a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well known, their runtime is hard to predict when they are used to represent dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures can be selected.

In this thesis, we present four contributions to tackle these problems: a benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the insights provided by their graph-based analysis.

We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis.

We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to snapshot-based algorithms. Our results show great performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043× on synthetic and 2,882× on real-world datasets.

We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers. The workers compute the properties of their respective subgraphs using standard algorithms, and a collator component merges their results into the properties of the original graph. We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves great speedups, especially for the analysis of complex graph measures.

We introduce two novel approaches for the selection of efficient graph data structures. The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4× over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during the analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3× for the analysis of a synthetic workload.

Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between the different factors that influence performance.

Contents: 1 Introduction; 2 Notation and Terminology; 3 Related Work; 4 DNA - Dynamic Network Analyzer; 5 Algorithms; 6 Parallel Dynamic Network Analysis; 7 Selection of Efficient Graph Data Structures; 8 Use Cases; 9 Conclusion. Appendices: A DNA - Dynamic Network Analyzer; B Algorithms; C Selection of Efficient Graph Data Structures; D Parallel Dynamic Network Analysis; E Graph-based Intrusion Detection System; F Molecular Dynamics
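To make the stream-based idea concrete, the following is a minimal sketch (not the thesis' DNA or StreaM_k implementation) of how a graph metric can be maintained incrementally under a stream of edge updates instead of being recomputed from each snapshot; incremental triangle counting stands in for the more general k-vertex motif counting.

```python
from collections import defaultdict

class StreamingTriangleCounter:
    """Maintains the global triangle count of an undirected graph
    under a stream of edge insertions and removals."""

    def __init__(self):
        self.adj = defaultdict(set)  # adjacency sets
        self.triangles = 0           # current global triangle count

    def add_edge(self, u, v):
        if u == v or v in self.adj[u]:
            return
        # every common neighbour of u and v closes one new triangle
        self.triangles += len(self.adj[u] & self.adj[v])
        self.adj[u].add(v)
        self.adj[v].add(u)

    def remove_edge(self, u, v):
        if v not in self.adj[u]:
            return
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # every remaining common neighbour of u and v loses one triangle
        self.triangles -= len(self.adj[u] & self.adj[v])

# usage: process a stream of update batches and read the metric after each one
counter = StreamingTriangleCounter()
for batch in [[("a", "b"), ("b", "c"), ("a", "c")], [("c", "d"), ("b", "d")]]:
    for u, v in batch:
        counter.add_edge(u, v)
    print(counter.triangles)  # 1, then 2
```

Each update costs work proportional to the degrees of the two endpoints, whereas a snapshot-based recomputation would touch the whole graph after every batch; this gap is where the reported stream-based speedups come from.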
53

Dynamics of Driven Quantum Systems: A Search for Parallel Algorithms

Baghery, Mehrdad 24 November 2017 (has links)
This thesis explores the possibility of using parallel algorithms to calculate the dynamics of driven quantum systems prevalent in atomic physics. In this process, new as well as existing algorithms are considered. The thesis is split into three parts. In the first part, an attempt is made to develop a new formalism of the time-dependent Schrödinger equation (TDSE) in the hope that the new formalism could lead to a parallel algorithm. The TDSE is written as an eigenvalue problem, the ground state of which represents the solution to the original TDSE. Although mathematically sound and correct, it turns out that the ground state of this eigenvalue problem cannot be easily found numerically, rendering the original hope a false one. In the second part, we borrow a Bayesian global optimisation method from the machine learning community in an effort to find the optimum conditions in different systems more quickly than with textbook optimisation algorithms. This algorithm is specifically designed to find the optimum of expensive functions, and is used in this thesis to (1) maximise the electron yield of hydrogen, (2) maximise the asymmetry in the photo-electron angular distribution of hydrogen, (3) maximise the higher harmonic generation yield within a certain frequency range, and (4) generate short pulses by combining higher harmonics generated by hydrogen. In the last part, the phenomenon of dynamic interference (the temporal equivalent of the double-slit experiment) is discussed. The necessary conditions are derived from first principles and it is shown where some of the previous analytical and numerical studies have gone wrong; it turns out that the choice of gauge plays a crucial role. Furthermore, a number of different scenarios are presented where interference in the photo-electron spectrum is expected to occur.
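As an illustration of the Bayesian global optimisation loop described above, the following is a minimal sketch assuming the scikit-optimize library; the objective function here is a placeholder for an expensive simulation (e.g. a full TDSE propagation returning the negative electron yield), not the actual code used in the thesis.

```python
from skopt import gp_minimize

# Placeholder for an expensive objective, e.g. the negative electron yield
# returned by a TDSE propagation for a given pulse intensity and duration
# (the real evaluation would be a full quantum simulation).
def negative_yield(params):
    intensity, duration = params
    # ... run the expensive simulation here ...
    return -(intensity * duration) / (1.0 + intensity ** 2)  # dummy stand-in

result = gp_minimize(
    negative_yield,                 # expensive black-box function to minimise
    [(0.1, 10.0), (1.0, 50.0)],     # search ranges for the two pulse parameters
    n_calls=25,                     # total number of expensive evaluations
    n_initial_points=5,             # random evaluations before fitting the surrogate
    random_state=0,
)
print("best parameters:", result.x, "best value:", result.fun)
```

The Gaussian-process surrogate decides where to evaluate next, which is why far fewer simulation runs are needed than with grid or gradient-free textbook searches.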
54

Advanced visualization and modeling of tetrahedral meshes

Frank, Tobias 07 April 2006 (has links)
Tetrahedral meshes are becoming more and more important for geo-modeling applications. The presented work introduces new algorithms for the efficient visualization and modeling of tetrahedral meshes. Visualization is provided by a generic framework that includes the extraction of geological information such as stratigraphic columns and fault block boundaries, the simultaneous co-rendering of different attributes, and Constructive Solid Geometry boolean operations with constant complexity. Modeling can be classified into geometric and implicit modeling. Geometric modeling addresses local mesh refinement to increase the numerical resolution of a given mesh. Implicit modeling covers the definition and manipulation of implicitly defined models. A new surface reconstruction method was developed to reconstruct complex, multi-valued surfaces from the noisy and sparse data sets that occur in geological applications. The surfaces can be bounded and may have discontinuities. Further, this work proposes a new algorithm for the rapid editing of implicitly defined shapes such as horizons, based on the GeoChron parametrization. The editing is performed interactively on the 3D volumetric model, and geological constraints are respected automatically.
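As a rough illustration of boolean operations on implicitly defined shapes (not the thesis' algorithms or the GeoChron-based editing), the following sketch represents shapes as scalar fields that are negative inside the shape and combines them with min/max, so each boolean operation costs a constant amount of work per evaluated point.

```python
import numpy as np

# Implicit shapes are scalar fields with f(x, y, z) <= 0 inside the shape.
def sphere(center, radius):
    cx, cy, cz = center
    return lambda x, y, z: np.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - radius

def halfspace_below(z0):
    return lambda x, y, z: z - z0   # inside where z <= z0

# Boolean operations on implicit fields, each O(1) per evaluated point:
def union(f, g):        return lambda x, y, z: np.minimum(f(x, y, z), g(x, y, z))
def intersection(f, g): return lambda x, y, z: np.maximum(f(x, y, z), g(x, y, z))
def difference(f, g):   return lambda x, y, z: np.maximum(f(x, y, z), -g(x, y, z))

# usage: clip a spherical body against a horizon at z = 0.5 and test sample points
body = difference(sphere((0.0, 0.0, 0.0), 1.0), halfspace_below(0.5))
print(body(0.0, 0.0, 0.8) <= 0.0)  # True: inside the sphere, above the clipping horizon
print(body(0.0, 0.0, 0.0) <= 0.0)  # False: removed by the difference operation
```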
55

Fast algorithms for material specific process chain design and analysis in metal forming - final report DFG Priority Programme SPP 1204

Kawalla, Rudolf January 2016 (has links)
The book summarises the results of the DFG-funded coordinated priority programme "Fast Algorithms for Material Specific Process Chain Design and Analysis in Metal Forming". The first part contains articles that provide a general introduction to and overview of the field of process modeling in metal forming. The second part collates the reports from all projects included in the priority programme.
56

Beitrag zur Energieeinsatzoptimierung mit evolutionären Algorithmen in lokalen Energiesystemen mit kombinierter Nutzung von Wärme- und Elektroenergie

Hable, Matthias 27 October 2004 (has links)
Decentralised power systems with a high share of power generated from renewable energy sources and cogeneration (CHP) units are emerging worldwide. Optimising the energy usage of such systems is a difficult task: the stochastic fluctuations of generation from renewable sources, the coupling of electrical and thermal power generation by CHP, and the time dependence of the necessary storage devices require new approaches.

Evolutionary algorithms are able to solve the optimisation task of the energy management. They use the principles of erroneous replication and cumulative selection, which can also be observed in biological processes; very often recombination is included in the optimisation process as well. Using these quite simple principles, the algorithm is able to explore difficult, large, and high-dimensional solution spaces, and in most cases it converges to the optimal solution quite fast compared to other types of optimisation algorithms. Using the example of a one-dimensional replicator, it is shown that the convergence speed when optimising convex functions increases by several orders of magnitude after only a few cycles compared to a Monte Carlo simulation.

Models for several types of equipment are developed in this work. The cost of operating a given power system for a given time span is chosen as the objective function. There is a variety of parameters (more than 15) that can be set in the algorithm. Extensive investigations show that the product of the number of replicators and the number of calculated cycles has the strongest influence on the quality of the solution, but the calculation time is also proportional to this number. If reasonable values are chosen for the remaining parameters, the algorithm finds appropriate solutions in adequate time in most cases.

Although a pure evolutionary algorithm will converge to a solution, the convergence speed can be greatly enhanced by extending it to a hybrid algorithm. Grouping the replicators of the first cycle in promising regions of the solution space by an intelligent initialisation algorithm and repairing bad solutions with a Lamarckian repair algorithm make the optimisation converge quickly to good optima. The algorithm was tested using data from several existing energy systems of different structure. To optimise the energy usage in a power system with 15 different types of units, the required computation time is in the range of 15 minutes.

The results of this work show that extended hybrid evolutionary algorithms are suitable for the integrated optimisation of energy usage in combined local energy systems. They reach better results with the same or less effort than many other optimisation methods. The developed method can be applied to energy systems of small and large size and complexity, as optimisation computations for energy systems on the island of Cape Clear, at FH Offenburg, and in the Allgäu demonstrate.
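The following is a minimal, self-contained sketch of the evolutionary loop described above, namely erroneous replication (mutation), recombination, and cumulative selection, applied to a toy objective; the actual unit models, cost function, intelligent initialisation, and Lamarckian repair of the thesis are not reproduced here.

```python
import random

def evolve(cost, dim, replicators=30, cycles=200, mutation=0.1):
    """Minimise `cost` over vectors in [0, 1]^dim with a simple
    evolutionary algorithm: selection, recombination, mutation."""
    population = [[random.random() for _ in range(dim)] for _ in range(replicators)]
    for _ in range(cycles):
        # cumulative selection: keep the better half of the population
        population.sort(key=cost)
        parents = population[: replicators // 2]
        offspring = []
        while len(parents) + len(offspring) < replicators:
            a, b = random.sample(parents, 2)
            # recombination: uniform crossover of two parents
            child = [random.choice(pair) for pair in zip(a, b)]
            # erroneous replication: small Gaussian mutation on each gene, clamped to [0, 1]
            child = [min(1.0, max(0.0, g + random.gauss(0.0, mutation))) for g in child]
            offspring.append(child)
        population = parents + offspring
    return min(population, key=cost)

# Toy objective standing in for the operating cost of a dispatch schedule:
# each gene is the output share of one unit, and deviations from a target profile are penalised.
target = [0.2, 0.8, 0.5, 0.3]
best = evolve(lambda x: sum((g - t) ** 2 for g, t in zip(x, target)), dim=4)
print([round(g, 2) for g in best])
```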
57

Dynamische Rissdetektion mittels photogrammetrischer Verfahren – Entwicklung und Anwendung optimierter Algorithmen

Hampel, Uwe, Maas, Hans-Gerd 03 June 2009 (has links)
Digital close-range photogrammetry enables the efficient acquisition of three-dimensional object surfaces in experimental investigations. Provided that appropriate boundary conditions are observed, photogrammetric methods are in principle well suited, in particular, to the full-field measurement of deformations and to crack detection. Drawing on current investigations of textile-reinforced concrete specimens, this contribution addresses the problem of crack detection and gives an overview of the state of development and the achievable accuracy. With regard to the practical application of the presented methods, various options for optimisation are finally discussed.
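As a highly simplified illustration of crack detection from a measured displacement field (not the photogrammetric processing chain described in the contribution), the following sketch flags positions along a measurement line where the local strain exceeds an assumed threshold, which is how a crack shows up as a jump in the displacement field.

```python
import numpy as np

def detect_cracks(positions, displacements, strain_threshold=5e-3):
    """Flag crack candidates along a measurement line: a crack appears as a
    jump in the displacement field, i.e. a local strain far above the
    elastic range (the threshold here is an assumed, illustrative value)."""
    strain = np.diff(displacements) / np.diff(positions)
    crack_indices = np.where(strain > strain_threshold)[0]
    # report the midpoint of each segment whose strain exceeds the threshold
    return [(positions[i] + positions[i + 1]) / 2.0 for i in crack_indices]

# synthetic example: smooth elongation plus one 0.05 mm jump (a crack) near x = 50 mm
x = np.linspace(0.0, 100.0, 201)      # positions in mm
u = 1e-4 * x + 0.05 * (x > 50.0)      # displacements in mm
print(detect_cracks(x, u))            # -> one candidate near 50 mm
```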
58

Column-specific Context Extraction for Web Tables

Braunschweig, Katrin, Thiele, Maik, Eberius, Julian, Lehner, Wolfgang 14 June 2022 (has links)
Relational Web tables have become an important resource for applications such as factual search and entity augmentation. A major challenge for an automatic identification of relevant tables on the Web is the fact that many of these tables have missing or non-informative column labels. Research has focused largely on recovering the meaning of columns by inferring class labels from the instances using external knowledge bases. The table context, which often contains additional information on the table's content, is frequently considered as an indicator for the general content of a table, but not as a source for column-specific details. In this paper, we propose a novel approach to identify and extract column-specific information from the context of Web tables. In our extraction framework, we consider different techniques to extract directly as well as indirectly related phrases. We perform a number of experiments on Web tables extracted from Wikipedia. The results show that the column-specific information extracted using our simple heuristic significantly boosts precision and recall for table and column search.
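To give a flavour of a simple column-specific heuristic (illustrative only, not the extraction framework proposed in the paper), the following sketch assigns sentences from a table's surrounding text to the columns whose labels they mention.

```python
import re

def column_context(column_labels, context_text):
    """Assign sentences from a table's surrounding text to the columns
    whose labels they mention (a naive keyword-matching heuristic)."""
    sentences = re.split(r"(?<=[.!?])\s+", context_text)
    matches = {label: [] for label in column_labels}
    for sentence in sentences:
        lowered = sentence.lower()
        for label in column_labels:
            if label.lower() in lowered:
                matches[label].append(sentence.strip())
    return matches

table_columns = ["Population", "Area"]
page_text = ("The table lists European capitals. Population figures are 2020 "
             "census estimates. Area is given in square kilometres.")
print(column_context(table_columns, page_text))
# {'Population': ['Population figures are 2020 census estimates.'],
#  'Area': ['Area is given in square kilometres.']}
```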
59

Sample synopses for approximate answering of group-by queries

Lehner, Wolfgang, Rösch, Philipp 22 April 2022 (has links)
With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. Typically, those analytical queries partition the data into groups and aggregate the values within the groups. Further, with the commonly used roll-up and drill-down operations a broad range of group-by queries is posed to the system, which makes the construction of highly specialized synopses difficult. In this paper, we propose a general-purpose sampling scheme that is biased in order to answer group-by queries with high accuracy. While existing techniques focus on the size of the group when computing its sample size, our technique is based on its standard deviation. The basic idea is that the more homogeneous a group is, the fewer representatives are required to give a good estimate. With an extensive set of experiments, we show that our approach reduces both the estimation error and the construction cost compared to existing techniques.
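The core allocation idea can be sketched as follows (an illustrative simplification, not the paper's actual sampling scheme or estimators): a fixed sampling budget is split across groups in proportion to each group's standard deviation rather than its size, so homogeneous groups receive fewer representatives.

```python
import random
import statistics

def allocate_sample(groups, total_sample_size):
    """Split a sampling budget across groups proportionally to their
    standard deviation: homogeneous groups need fewer representatives."""
    stdevs = {g: statistics.pstdev(values) for g, values in groups.items()}
    total = sum(stdevs.values()) or 1.0
    sample = {}
    for g, values in groups.items():
        # at least one representative per group, the rest proportional to spread
        n = max(1, round(total_sample_size * stdevs[g] / total))
        sample[g] = random.sample(values, min(n, len(values)))
    return sample

# usage: estimate per-group averages from the biased sample
groups = {
    "homogeneous": [100.0 + random.gauss(0, 1) for _ in range(1000)],
    "heterogeneous": [100.0 + random.gauss(0, 50) for _ in range(1000)],
}
sample = allocate_sample(groups, total_sample_size=100)
for g, values in sample.items():
    print(g, len(values), round(statistics.mean(values), 1))
```

The heterogeneous group receives most of the budget, while the homogeneous group is still estimated accurately from very few rows; this is the trade-off the paper exploits.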
60

To and Fro Between Tableaus and Automata for Description Logics

Hladik, Jan 14 November 2007 (has links)
Description Logics (DLs) are a family of knowledge representation languages with well-defined, logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, automata allow for an elegant handling of infinite structures, but they are not well suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both. For this purpose, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
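To illustrate the tableau-based approach for a small description logic (a sketch of a standard ALC concept satisfiability test without a TBox, not the algorithms developed in the thesis), the following applies the conjunction, disjunction, existential, and universal rules to concepts in negation normal form and reports a clash when an atom occurs together with its negation.

```python
# Concepts in negation normal form, as nested tuples:
#   ('atom', 'A'), ('not', ('atom', 'A')),
#   ('and', C, D), ('or', C, D), ('exists', 'R', C), ('forall', 'R', C)

def satisfiable(concepts):
    label = set(concepts)
    # conjunction rule: exhaustively add both conjuncts
    changed = True
    while changed:
        changed = False
        for c in list(label):
            if c[0] == 'and':
                for part in (c[1], c[2]):
                    if part not in label:
                        label.add(part)
                        changed = True
    # clash: an atom together with its negation
    for c in label:
        if c[0] == 'atom' and ('not', c) in label:
            return False
    # disjunction rule: branch on the first unexpanded disjunction
    for c in label:
        if c[0] == 'or' and c[1] not in label and c[2] not in label:
            return satisfiable(label | {c[1]}) or satisfiable(label | {c[2]})
    # existential rule: each successor must also satisfy the matching universal restrictions
    for c in label:
        if c[0] == 'exists':
            succ = {c[2]} | {d[2] for d in label if d[0] == 'forall' and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True

# "exists R.A and forall R.(not A)" is unsatisfiable; "exists R.A and forall R.B" is satisfiable
a, b = ('atom', 'A'), ('atom', 'B')
print(satisfiable([('and', ('exists', 'R', a), ('forall', 'R', ('not', a)))]))  # False
print(satisfiable([('and', ('exists', 'R', a), ('forall', 'R', b))]))           # True
```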
