111

Complexity of Normal Forms on Structures of Bounded Degree

Heimberg, Lucas 04 June 2018 (has links)
Normal forms express semantic properties of logics by means of syntactical restrictions. They allow algorithms to benefit from restrictions of the expressive power of a logic. An example is the locality of first-order logic (FO), which implies that graph properties like reachability or connectivity cannot be defined in FO. Gaifman's local normal form expresses the satisfaction conditions of an FO formula by a Boolean combination of local statements. Gaifman normal form serves as a first step in fixed-parameter model-checking algorithms, parameterised by the size of the formula, on sparse graph classes. However, it is known that in general there are non-elementary lower bounds on the cost of transforming a formula into Gaifman normal form. This leads to an enormous parameter dependency of the aforementioned algorithms. Similar non-elementary lower bounds also hold for Feferman-Vaught decompositions and for the preservation theorems of Lyndon, Łoś, and Tarski. This thesis investigates the complexity of these normal forms when attention is restricted to classes of structures of bounded degree, for which the non-elementary lower bounds are known to fail. Under this restriction, the thesis provides algorithms with elementary and even worst-case optimal running time for the construction of Gaifman normal form and Feferman-Vaught decompositions. For the preservation theorems, algorithmic versions with elementary running time and non-matching lower bounds are provided. Crucial for these results is the notion of Hanf normal form.
It is shown that an extension of FO by unary counting quantifiers admits Hanf normal forms if, and only if, all counting quantifiers are ultimately periodic, and furthermore, how Hanf normal form can be computed in elementary and worst-case optimal time in these cases. This leads to model-checking algorithms for such extensions of FO and also allows generalisations of the constructions for Feferman-Vaught decompositions and preservation theorems.
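As orientation for readers unfamiliar with the terminology, the following schematic (standard material, not taken from the thesis) recalls the shape of the basic local sentences from which Gaifman normal forms are built: they assert the existence of several pairwise far-apart elements whose r-neighbourhoods all satisfy the same local condition.

```latex
% Basic local sentence: k elements, pairwise at Gaifman distance > 2r,
% each of whose r-neighbourhoods satisfies the r-local formula \psi.
\exists x_1 \cdots \exists x_k \Big(
    \bigwedge_{1 \le i < j \le k} \mathrm{dist}(x_i, x_j) > 2r
    \;\wedge\; \bigwedge_{i=1}^{k} \psi^{(r)}(x_i)
\Big)
```

Gaifman's theorem states that every FO sentence is equivalent to a Boolean combination of such sentences; the question studied in the thesis is how expensive this translation is when the input structures have bounded degree.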
112

Scheduling algorithms for saving energy and balancing load

Antoniadis, Antonios 16 August 2012 (has links)
This thesis studies problems of scheduling tasks in computing environments. We consider both the modern objective function of minimizing energy consumption and the classical objective of balancing load across machines. We first investigate offline deadline-based scheduling in the setting of a single variable-speed processor that is equipped with a sleep state. The objective is to minimize the total energy consumption. Apart from settling the complexity of the problem by showing its NP-hardness, we provide a lower bound of 2 for general convex power functions and a particular natural class of schedules. We also present an algorithmic framework for designing good approximation algorithms. Furthermore, we give tight bounds for the aforementioned particular class of schedules. We then focus on the multiprocessor setting, where each processor has the ability to vary its speed. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. Regarding the online problem and a natural class of power functions, we extend the two well-known single-processor algorithms Optimal Available and Average Rate. We prove that Optimal Available has the same competitive ratio as in the single-processor case. For Average Rate we show a competitive factor that increases by an additive constant of one compared to the single-processor result. With respect to load balancing, we consider offline load balancing on identical machines, with the objective of minimizing the current load, for temporary unit-weight jobs.
The problem can be seen as coloring n intervals with k colors such that, for each point on the line, the maximal difference between the numbers of intervals of any two colors is minimal. We prove that a coloring with maximal difference of at most one is always possible, and we develop a fast polynomial-time algorithm for generating such a coloring. Lastly, we prove that two generalizations of the problem are NP-hard.
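To make the load-balancing objective concrete, here is a small sketch (not code from the thesis; the interval representation and example data are illustrative) that measures the imbalance of a given coloring: for every point on the line it compares how many intervals of each color are active and reports the largest difference. The result above says that a coloring with imbalance at most one always exists.

```python
def max_imbalance(intervals, colors, k):
    """Largest difference, over all points on the line, between the numbers of
    simultaneously active intervals of any two of the k colors.

    intervals: list of half-open (start, end) pairs, one per job
    colors:    list of color indices in range(k), one per interval
    """
    # The set of active intervals only changes at interval endpoints, so it
    # suffices to evaluate the imbalance at those points.
    events = sorted({p for interval in intervals for p in interval})
    worst = 0
    for p in events:
        active = [0] * k
        for (start, end), color in zip(intervals, colors):
            if start <= p < end:
                active[color] += 1
        worst = max(worst, max(active) - min(active))
    return worst

# Three overlapping unit-weight jobs colored with k = 2 colors (machines).
intervals = [(0, 4), (1, 3), (2, 5)]
print(max_imbalance(intervals, [0, 1, 0], k=2))  # 1, i.e. within the proven bound
```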
113

Parallele dynamische Adaption hybrider Netze für effizientes verteiltes Rechnen / Parallel dynamic adaptation of hybrid grids for efficient distributed computing

Alrutz, Thomas 17 September 2008 (has links)
No description available.
114

Computation with finitely L-presented groups / Algorithmen für endlich L-präsentierte Gruppen

Hartung, René 01 June 2012 (has links)
No description available.
115

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and it is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a full splitting: the resolvents arising in the iterative process are evaluated separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, reformulating the corresponding inclusion problem in such a way that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with numerical experiments in which we compare our methods against algorithms from the literature. The problems arising there are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
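As background for the splitting schemes discussed above, the following sketch (not code from the thesis; the toy problem, dimensions, and step size are illustrative choices) shows the classical forward-backward iteration on an l1-regularized least-squares problem: a forward gradient step on the smooth term followed by the proximal (soft-thresholding) step on the nonsmooth term.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward_lasso(A, b, lam, iterations=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward step:  gradient of the smooth term, A^T (Ax - b)
    Backward step: prox of the nonsmooth term, soft-thresholding
    """
    x = np.zeros(A.shape[1])
    # Constant step size 1/L, where L is the Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    gamma = 1.0 / L
    for _ in range(iterations):
        grad = A.T @ (A @ x - b)                           # forward (explicit) step
        x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (proximal) step
    return x

# Illustrative data: a sparse vector recovered from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(forward_backward_lasso(A, b, lam=0.1)[:8].round(3))
```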
116

Frequent itemset mining on multiprocessor systems

Schlegel, Benjamin 08 May 2014 (has links) (PDF)
Frequent itemset mining is an important building block in many data mining applications like market basket analysis, recommendation, web mining, fraud detection, and gene expression analysis. In many of them, the datasets being mined can easily grow up to hundreds of gigabytes or even terabytes of data. Hence, efficient algorithms are required to process such large amounts of data. In recent years, many frequent-itemset mining algorithms have been proposed, which however (1) often have high memory requirements and (2) do not exploit the large degrees of parallelism provided by modern multiprocessor systems. The high memory requirements arise mainly from inefficient data structures that have only been shown to be sufficient for small datasets. For large datasets, however, the use of these data structures forces the algorithms to go out-of-core, i.e., they have to access secondary memory, which leads to serious performance degradation. Exploiting the available parallelism is further required to mine large datasets because the serial performance of processors has almost stopped increasing. Algorithms should therefore exploit the large number of available threads as well as other kinds of parallelism (e.g., vector instruction sets) besides thread-level parallelism. In this work, we tackle the high memory requirements of frequent itemset mining in two ways: we (1) compress the datasets being mined, because they must be kept in main memory during several mining invocations, and (2) improve existing mining algorithms with memory-efficient data structures. For compressing the datasets, we employ efficient encodings that show good compression performance on a wide variety of realistic datasets, i.e., the size of the datasets is reduced by up to 6.4x. The encodings can further be applied directly while loading the dataset from disk or network. Since encoding and decoding are repeatedly required for loading and mining the datasets, we reduce their costs by providing parallel encodings that achieve high throughputs for both tasks. For a memory-efficient representation of the mining algorithms' intermediate data, we propose compact data structures and even employ explicit compression. Both methods together reduce the intermediate data's size by up to 25x. The smaller memory requirements avoid or delay expensive out-of-core computation when large datasets are mined. For coping with the high parallelism provided by current multiprocessor systems, we identify the performance hot spots and scalability issues of existing frequent-itemset mining algorithms. The hot spots, which form basic building blocks of these algorithms, cover (1) counting the frequency of fixed-length strings, (2) building prefix trees, (3) compressing integer values, and (4) intersecting lists of sorted integer values or bitmaps. For all of them, we discuss how to exploit the available parallelism and provide scalable solutions. Furthermore, almost all components of the mining algorithms must be parallelized to keep the sequential fraction of the algorithms as small as possible. We integrate the parallelized building blocks and components into three well-known mining algorithms and further analyze the impact of certain existing optimizations. Even when running single-threaded, our algorithms are often up to an order of magnitude faster than existing highly optimized algorithms, and they scale almost linearly on a large 32-core multiprocessor system.
Although our optimizations are intended for frequent-itemset mining algorithms, they can be applied with only minor changes to algorithms used for mining other types of itemsets.
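One of the building blocks named above, intersecting lists of sorted transaction ids, is the core of vertical mining algorithms such as Eclat. The following small sketch (not the thesis implementation; the dataset and threshold are illustrative) enumerates frequent itemsets by recursively intersecting per-item tid-sets; for simplicity it uses Python sets rather than sorted lists or bitmaps.

```python
def eclat(transactions, min_support):
    """Enumerate all frequent itemsets by recursive tid-set intersection (Eclat)."""
    # Vertical layout: item -> set of transaction ids containing it.
    tidsets = {}
    for tid, transaction in enumerate(transactions):
        for item in transaction:
            tidsets.setdefault(item, set()).add(tid)

    frequent = {}

    def expand(prefix, candidates):
        # candidates: list of (item, tidset) pairs that can extend `prefix`.
        for i, (item, tids) in enumerate(candidates):
            if len(tids) < min_support:
                continue
            itemset = prefix + (item,)
            frequent[itemset] = len(tids)
            # Extend with the remaining items; intersecting tid-sets directly
            # yields the support of the larger itemset.
            extensions = [(other, tids & other_tids)
                          for other, other_tids in candidates[i + 1:]]
            expand(itemset, extensions)

    expand((), sorted(tidsets.items()))
    return frequent

# Toy market-basket data, minimum support of 2 transactions.
baskets = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
print(eclat(baskets, min_support=2))
```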
117

Entwicklung eines Evolutionären Algorithmus zur Preisoptimierung für kleine und mittlere Handelsunternehmen / Development of an evolutionary algorithm for price optimization for small and medium sized enterprises

Lüders, Sören Oliver 20 April 2018 (has links)
No description available.
119

A 2TnC ferroelectric memory gain cell suitable for compute-in-memory and neuromorphic application

Slesazeck, Stefan, Ravsher, Taras, Havel, Viktor, Breyer, Evelyn T., Mulaosmanovic, Halid, Mikolajick, Thomas 20 June 2022 (has links)
A 2TnC ferroelectric memory gain cell consisting of two transistors and two or more ferroelectric capacitors (FeCAPs) is proposed. While a pre-charge transistor allows access to an individual cell in an array, the read transistor amplifies the small read signals from scaled-down FeCAPs, which can be operated either in FeRAM mode, by sensing the polarization reversal current, or in ferroelectric tunnel junction (FTJ) mode, by sensing the polarization-dependent leakage current. The simultaneous read or write operation of multiple FeCAPs is used to realize compute-in-memory (CiM) algorithms in which the processed data is represented both by non-volatile internally stored data and by externally applied data. The internal gain of the cell mitigates the need for 3D integration of the FeCAPs, making the concept very attractive especially for embedded memories. Here we discuss design constraints of the 2TnC cell and present a proof of concept for realizing versatile CiM approaches by means of electrical characterization results.
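As a purely illustrative aside (not taken from the paper, and not a model of the actual 2TnC circuit, device physics, or read scheme), the following toy sketch shows the general current-summing idea behind capacitor-based compute-in-memory: stored states act as one operand, applied read pulses as the other, and the summed read current encodes their dot product.

```python
def cim_dot_product(stored_states, applied_pulses, i_high=1.0, i_low=0.05):
    """Toy model of current-summing compute-in-memory (illustrative values only).

    stored_states:  0/1 values standing in for the non-volatile polarization
                    state written into each capacitor (the stored operand)
    applied_pulses: 0/1 values indicating which capacitors receive a read
                    pulse in this cycle (the externally applied operand)
    Returns the summed read current; with i_low << i_high it approximates the
    dot product of the two operand vectors.
    """
    total_current = 0.0
    for state, pulse in zip(stored_states, applied_pulses):
        if pulse:  # only pulsed capacitors contribute to the read current
            total_current += i_high if state else i_low
    return total_current

# Stored weight vector 1,0,1,1 combined with input vector 1,1,0,1.
print(cim_dot_product([1, 0, 1, 1], [1, 1, 0, 1]))  # ~2.05, i.e. dot product 2
```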
120

Evolving Complex Neuro-Controllers with Interactively Constrained Neuro-Evolution

Rempis, Christian Wilhelm 17 October 2012 (has links)
In the context of evolutionary robotics and neurorobotics, artificial neural networks, used as controllers for animats, are examined to identify principles of neuro-control, network organization, the interaction between body and control, and other related properties. Before such an examination can take place, suitable neuro-controllers have to be identified. A promising and widely used technique to search for such networks is the use of evolutionary algorithms specifically adapted for neural networks. These allow the search for neuro-controllers with various network topologies directly on physically grounded (simulated) animats. This neuro-evolution approach works well for small neuro-controllers and has led to interesting results. However, due to the exponentially increasing search space with respect to the number of involved neurons, this approach does not scale well with larger networks. This scaling problem makes it difficult to find non-trivial, larger networks that show interesting properties. In the context of this thesis, networks of this class are called mid-scale networks, having between 50 and 500 neurons. Searching for networks of this class involves very large search spaces, including all possible synaptic connections between the neurons, the bias terms of the neurons, and (optionally) parameters of the neuron model, such as the transfer function, activation function, or parameters of learning rules. In this domain, most evolutionary algorithms are not able to find suitable, non-trivial neuro-controllers in feasible time. To cope with this problem and to shift the frontier for evolvable network topologies a bit further, a novel evolutionary method has been developed in this thesis: the Interactively Constrained Neuro-Evolution method (ICONE). A way to approach the problem of increasing search spaces is the introduction of measures that reduce and restrict the search space back to a feasible domain. With ICONE, this restriction is realized with a unified, extensible, and highly adaptable concept: instead of evolving networks freely, networks are evolved within specifically designed constraint masks that define mandatory properties of the evolving networks. These constraint masks are defined primarily using so-called functional constraints, which actively modify a neural network to enforce adherence to all required limitations and assumptions. Consequently, independently of the mutations taking place during evolution, the constraint masks repair and readjust the networks so that constraint violations are not able to evolve. Such functional constraints can be very specific and can enforce various network properties, such as symmetries, structure reuse, connectivity patterns, connectivity density heuristics, synaptic pathways, local processing assemblies, and much more. Constraint masks therefore describe a narrow, user-defined subset of the parameter space, based on domain knowledge and user experience, that focuses the search on a smaller search space, leading to a higher success rate for the evolution. Due to the involved domain knowledge, such evolutions are strongly biased towards specific classes of networks, because only networks within the defined search space can evolve. This can also be used deliberately to steer the evolution towards specific solution approaches, allowing the experimenter not only to search for whatever solutions emerge, but also to confirm assumptions about possible solutions.
This makes it easier to investigate specific neuro-control principles, because the experimenter can systematically search for networks implementing the desired principles, simply by using suitable constraints to enforce them. Constraint masks in ICONE are built up from functional constraints working on so-called neuro-modules. These modules are used to structure the networks, to define the scope of constraints, and to simplify the reuse of (evolved) neural structures. The concept of functional, constrained neuro-modules provides a simple and flexible way to construct constraint masks and to inherit constraints when neuro-modules are reused or shared. A final cornerstone of the ICONE method is the interactive control of the evolution process, which allows the adaptation of the evolution parameters and the constraint masks to guide evolution towards promising domains and to counteract undesired developments. Due to the constraint masks, this interactive guidance is more effective than the adaptation of the evolution parameters alone, so that the identification of promising search space regions becomes easier. This thesis describes the ICONE method in detail and shows several applications of the method and the involved features. The examples demonstrate that the method can be used effectively for problems in the domain of mid-scale networks. Owing to the constraint masks and the correspondingly reduced complexity of the networks, the results, despite their size, are often easy to comprehend, well suited for analysis, and easy to reuse. Another benefit of constraint masks is the ability to deliberately search for very specific network configurations, which allows the effective and systematic exploration of distinct variations of an evolution experiment, simply by changing the constraint masks over the course of multiple evolution runs. The ICONE method is therefore a promising novel evolution method for tackling the problem of evolving mid-scale networks, pushing the frontier of evolvable networks further. This allows for novel evolution experiments in the domain of neurorobotics and evolutionary robotics and may lead to new insights into neuro-dynamical principles of animat control.
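To illustrate the idea of a functional constraint that repairs a network after every mutation, here is a schematic sketch (not the ICONE implementation; the network representation and the symmetry constraint are invented for this example): a mutation perturbs synaptic weights, and the constraint immediately copies each "left" weight onto its mirrored "right" edge, so a violation of the symmetry can never survive into the next generation.

```python
import random

def mutate(weights, rate=0.1, scale=0.2):
    """Randomly perturb some synaptic weights (a stand-in for evolutionary mutation)."""
    return {edge: w + random.gauss(0.0, scale) if random.random() < rate else w
            for edge, w in weights.items()}

def enforce_symmetry(weights, mirror):
    """Functional constraint: copy each 'left' weight onto its mirrored 'right' edge,
    repairing any violation introduced by mutation."""
    repaired = dict(weights)
    for left_edge, right_edge in mirror.items():
        repaired[right_edge] = repaired[left_edge]
    return repaired

# A tiny controller with two mirrored synapses; the constraint mask here consists
# of just the symmetry constraint.
weights = {("sensor_L", "motor_L"): 0.5, ("sensor_R", "motor_R"): 0.5}
mirror = {("sensor_L", "motor_L"): ("sensor_R", "motor_R")}

for generation in range(3):
    weights = mutate(weights)
    weights = enforce_symmetry(weights, mirror)  # constraint applied after every mutation
    print(generation, weights)
```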
