31

Evolving Ensemble-Clustering to a Feedback-Driven Process

Lehner, Wolfgang, Habich, Dirk, Hahmann, Martin 01 November 2022 (has links)
Data clustering is a widely used knowledge extraction technique that is applied in more and more application domains. In recent years, many algorithms have been proposed that are often complicated and/or tailored to specific scenarios. As a result, clustering has become a hardly accessible domain for non-expert users, who face major obstacles such as algorithm selection and parameterization. To overcome this issue, we develop a novel feedback-driven clustering process based on a new perspective on clustering. By substituting parameterization with user-friendly feedback and providing support for result interpretation, clustering becomes accessible and allows the step-by-step construction of a satisfying result through iterative refinement.
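The iterative refinement loop described above can be sketched as follows; the merge/split feedback operators and the naive initial clustering are illustrative assumptions, not the authors' actual algorithm:

```python
# Hypothetical sketch of a feedback-driven clustering loop: instead of tuning
# algorithm parameters, the user issues "merge" and "split" feedback on an
# initial coarse result. All names and the toy rules here are assumptions.

def initial_clustering(points, threshold=2.0):
    """Naive single-pass clustering: assign each point to the first cluster
    whose representative is within `threshold`, else open a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if abs(c[0] - p) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def apply_feedback(clusters, op, i, j=None):
    """Refine the result with user feedback instead of re-parameterization."""
    if op == "merge":                      # user: clusters i and j belong together
        clusters[i].extend(clusters.pop(j))
    elif op == "split":                    # user: cluster i is too coarse
        c = sorted(clusters.pop(i))
        mid = len(c) // 2
        clusters.extend([c[:mid], c[mid:]])
    return clusters

points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
clusters = initial_clustering(points)                # two coarse groups
clusters = apply_feedback(clusters, "merge", 0, 1)   # one refinement step
```

Each feedback step replaces a parameter change: the user never touches the threshold again after the initial pass.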
32

Adaptive Index Buffer

Lehner, Wolfgang, Voigt, Hannes, Jaekel, Tobias, Kissinger, Thomas 03 November 2022 (has links)
With rapidly growing datasets and increasingly dynamic workloads, adaptive partial indexing becomes an important way to keep indexing efficient. While the workload changes, query performance suffers from inefficient table scans until the index tuning mechanism has adapted the partial index. In this paper, we present the Adaptive Index Buffer, which reduces the cost of table scans by quickly indexing tuples in memory until the partial index has adapted to the workload again. We explain the basic operating mode of an Index Buffer, discuss how it adapts to changing workload situations, and present three experiments that show the Index Buffer at work.
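A minimal sketch of the idea, assuming a simple key-value table; the class and method names are illustrative, not the paper's implementation:

```python
# Sketch (assumed names): while the partial index is still being rebuilt for a
# new workload, tuples touched by queries are indexed in a small in-memory
# buffer, so repeated lookups on the same key avoid full table scans.

class AdaptiveIndexBuffer:
    def __init__(self, table):
        self.table = table          # list of (key, payload) tuples
        self.buffer = {}            # key -> row positions, filled on demand
        self.scans = 0              # full scans performed (for illustration)

    def lookup(self, key):
        if key in self.buffer:                 # buffered: no scan needed
            return [self.table[i] for i in self.buffer[key]]
        self.scans += 1                        # fall back to a table scan ...
        hits = []
        for pos, row in enumerate(self.table):
            if row[0] == key:
                hits.append(row)
                self.buffer.setdefault(key, []).append(pos)  # ... and buffer it
        return hits

table = [(k, f"row-{k}") for k in [1, 2, 3, 2, 1]]
buf = AdaptiveIndexBuffer(table)
buf.lookup(2)       # first access: one scan, result positions get buffered
buf.lookup(2)       # second access: served from the buffer, no scan
```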
33

A database accelerator for energy-efficient query processing and optimization

Lehner, Wolfgang, Haas, Sebastian, Arnold, Oliver, Scholze, Stefan, Höppner, Sebastian, Ellguth, Georg, Dixius, Andreas, Ungethüm, Annett, Mier, Eric, Nöthen, Benedikt, Matúš, Emil, Schiefer, Stefan, Cederstroem, Love, Pilz, Fabian, Mayr, Christian, Schüffny, Renè, Fettweis, Gerhard P. 12 January 2023 (has links)
Processing a continuously growing amount of information under increasing power restrictions has become a ubiquitous challenge in our world today. Besides parallel computing, a promising approach to improving the energy efficiency of current systems is to integrate specialized hardware. This paper presents a Tensilica RISC processor extended with an instruction set that accelerates basic database operators frequently used in modern database systems. The core was taped out in a 28 nm SLP CMOS technology and enables energy-efficient query processing as well as query optimization by applying selectivity estimation techniques. Our chip measurements show a 1000x energy improvement on selected database operators compared to state-of-the-art systems.
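The selectivity estimation used for query optimization can be illustrated in software; sampling-based estimation is one standard technique, assumed here for illustration rather than taken from the chip's design:

```python
# Sketch of selectivity estimation: estimate the fraction of rows a predicate
# selects from a random sample, instead of scanning the whole column. The
# function name and the sampling approach are illustrative assumptions.

import random

def estimate_selectivity(column, predicate, sample_size=100, seed=42):
    """Estimate the fraction of values matching `predicate` from a sample."""
    rng = random.Random(seed)
    sample = rng.sample(column, min(sample_size, len(column)))
    return sum(1 for v in sample if predicate(v)) / len(sample)

column = list(range(1000))                               # toy column: 0..999
est = estimate_selectivity(column, lambda v: v < 250)    # estimate near 0.25
true = sum(1 for v in column if v < 250) / len(column)   # exact value: 0.25
```

A query optimizer would compare such estimates across operators to pick the cheaper execution plan.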
34

Topology-aware optimization of big sparse matrices and matrix multiplications on main-memory systems

Lehner, Wolfgang, Kernert, David, Köhler, Frank 12 January 2023 (has links)
Since the data sizes of analytical applications are continuously growing, many data scientists are switching from customized micro-solutions to scalable alternatives such as statistical and scientific databases. However, many algorithms in data mining and science are expressed in terms of linear algebra, which is barely supported by major database vendors and big data solutions. On the other hand, conventional linear algebra algorithms and legacy matrix representations are often not suitable for very large matrices. We propose a strategy for large matrix processing on modern multicore systems that is based on a novel, adaptive tile matrix representation (AT MATRIX). Our solution applies multiple techniques inspired by database technology, such as multidimensional data partitioning, cardinality estimation, indexing, and dynamic rewrites, to optimize the execution time. On this basis, we present a matrix multiplication operator, ATMULT, which outperforms alternative approaches. Our aim is to relieve data scientists of the burden of selecting appropriate algorithms and matrix storage representations. We evaluated AT MATRIX together with ATMULT on several real-world and synthetic random matrices.
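The adaptive-tile idea can be sketched as follows; the fill-ratio heuristic, the two tile formats, and all names are illustrative assumptions rather than the actual AT MATRIX design:

```python
# Sketch: each tile stores its entries either densely or as a sparse dict,
# chosen by the tile's fill ratio, and multiplication handles both formats.

def to_tile(entries, size, sparse_threshold=0.75):
    """entries: dict (i, j) -> value for one size x size tile."""
    if len(entries) / (size * size) >= sparse_threshold:
        dense = [[0.0] * size for _ in range(size)]
        for (i, j), v in entries.items():
            dense[i][j] = v
        return ("dense", dense)
    return ("sparse", dict(entries))

def tile_matmul(a, b, size):
    """Multiply two tiles, iterating only nonzero entries of either format."""
    def items(t):
        kind, data = t
        if kind == "sparse":
            return data.items()
        return (((i, j), data[i][j]) for i in range(size)
                for j in range(size) if data[i][j] != 0.0)
    b_by_row = {}
    for (k, j), v in items(b):
        b_by_row.setdefault(k, []).append((j, v))
    out = {}
    for (i, k), v in items(a):
        for j, w in b_by_row.get(k, []):
            out[(i, j)] = out.get((i, j), 0.0) + v * w
    return out

a = to_tile({(0, 0): 1.0, (1, 1): 2.0}, size=2)   # half-full -> sparse tile
b = to_tile({(0, 0): 3.0, (0, 1): 4.0,
             (1, 0): 5.0, (1, 1): 6.0}, size=2)   # full -> dense tile
c = tile_matmul(a, b, size=2)                      # diag(1, 2) @ [[3,4],[5,6]]
```

A full tiled multiply would loop this kernel over all tile pairs and could pick more specialized kernels per format combination.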
35

Model-oriented Programming with Bigraphical Reactive Systems: Theory and Implementation

Grzelak, Dominik 25 April 2024 (has links)
It is well-recognized among computer scientists that prospective informatics systems will become more and more complex and will increasingly accumulate non-linear behaviour that is difficult to orchestrate, to configure, and to reason about. Certainly, large-scale mobile computing systems will challenge our understanding, similar to climate systems, bio-chemical systems, physical systems, or social networks. This suggests a rigorous formalization and study of non-trivial computational systems. In this regard, bigraphs are a groundbreaking novel theory for distributed and parallel systems, treating mobile locality and mobile interaction as first-class citizens. The theory was devised by Robin Milner as a process algebra with rewrite capabilities based on graph equations and an algebraic type system. Bigraphs have been the subject of extensive investigation from various perspectives, including agent-based modelling, cyber-physical games, language construction, graph rewriting, and as a unifying metamodel for other rewrite theories and process calculi, but particularly in the context of the categorical reactive system approach. In this approach, a labelled transition system, over which bisimilarity is a congruent equivalence relation, is generated from reduction semantics that can be freely specified. The bigraph theory treats two-dimensional graphs as arrows and their interfaces as objects while category theory provides the underlying mathematical framework for their axiomatization. Fortunately, category theory makes the theory future-proof, competitive and extensible. The recently developed categorical concepts of relative and idempotent pushouts facilitate the categorical construction of minimal context labels enabling the development of a behavioural theory, where bisimulation is a congruence. 
The metamodel character of bigraphs enables the comparison of other formalisms and algebraic theories of concurrent computation at a very abstract level, thus regaining their behavioural theory and computational notion, with the ultimate goal of exploiting synergies. Indeed, bigraphs are much more than a computational model for understanding, analysing, and verifying systems. They provide both a formal and practical foundation for context-oriented reactive system modelling and programming languages. Consequently, the development of software solutions based on the bigraph theory necessitates suitable tools, frameworks, and languages for putting bigraphs into practice. These tools are essential for evaluating the model's effectiveness in both academic research and the software industry; only this permits rigorous testing of the theory. Moreover, pursuing this goal is motivated by the desire to lower the barrier to entry for model-driven context-adaptive programming with the bigraph theory. So far, several tools and libraries have been developed to model and simulate bigraphical reactive systems. These tools can roughly be referred to as bigraphical calculators and are meant for experimentation with and comprehension of the theory. Without them, this work could not have been written. However, we elevate these initial efforts to a level that enables advanced bigraphical software engineering practices. Therefore, the Bigraph Toolkit Suite was developed: a collection of tools and methods for the research and development of reactive systems for real-world applications. The suite consists of model-based integration frameworks, architectural guidelines, integrated development environments, command-line tools, and an uncomplicated language engineering workbench with an extensible grammar and interpreter.
Each product of the Bigraph Toolkit Suite serves a distinct function, ranging from the manipulation and simulation of bigraphs to bigraphical language engineering and the distributed storage of bigraph models. The tools are finely tuned to each other via a common metamodel, which facilitates the implementation of novel bigraphical tool chains as well as the integration of arbitrary tools and public programming interfaces. The following overarching question guided this research: Is there a formalism or theory that supports context modelling, computation and verification, and that can be used as a programming language? This work shows that bigraphs can lay such a foundation for regaining the understanding of new informatics systems, with model-driven engineering contributing to this. Accordingly, the underlying objective of this work is the cross-fertilization of model-driven engineering and bigraphical reactive systems on a practical basis. A discussion is developed of whether there is evidence that MDE-related model operations and practices can be related to bigraph operations and vice versa. A relation can be established, on one axis, by a consistent and complete canonical mapping based on a systematic four-layer metamodelling framework, and, on the other, between three different yet interoperable technical spaces. The result of this thesis is thus developed along two axes from a strict software engineering perspective. On the one hand, practical observations about the bigraph theory are provided in relation to other graph structures, categories and model-driven operations. On the other, a novel software ecosystem for bigraphical reactive systems is provided, together with several generic experimental approaches such as the event-based execution of sub-bigraphical reactive systems.
The theory has already stimulated much further research, thereby advancing fundamental computer science; this work may additionally, as the reader will hopefully agree, solidify and advance bigraphical research.

Table of Contents:
1 Introduction
  1.1 Background and Motivation: 1.1.1 Typical Application Scenarios; 1.1.2 The Dilemma of Complex Reactive Systems
  1.2 Field of Work: 1.2.1 Reactive Systems; 1.2.2 Model-driven Engineering; 1.2.3 Context Adoption in Software: A Novel Taxonomy
  1.3 Research Project: 1.3.1 Hypothesis; 1.3.2 Research Aim; 1.3.3 Research Objectives; 1.3.4 Contributions
  1.4 Outline
2 The Theory of Bigraphical Reactive Systems for Software Engineers
  2.1 Graph Theory: Basic Notation
  2.2 Categories for Context-adaptive Software: 2.2.1 Elementary Category Theory; 2.2.2 Reactive System Categories: s-category and spm category; 2.2.3 Type Graphs and Type Morphisms; 2.2.4 Observations
  2.3 On the Static Structure of Bigraphs: 2.3.1 Signatures; 2.3.2 Pure Bigraphs: Place Graphs and Link Graphs; 2.3.3 Compositional Structures: Interfaces and Operators; 2.3.4 Algebra of Bigraphs; 2.3.5 Graphical s-categories
  2.4 Type Systems and Sortings: 2.4.1 Basic Terminology; 2.4.2 Sortings and Bigraphs
  2.5 Dynamics of Reactive Systems: 2.5.1 Operational Semantics of Reactive Systems; 2.5.2 Reactive System Theory: General Categories; 2.5.3 Bigraphical Reactive Systems; 2.5.4 Labelled Transition Systems; 2.5.5 Behavioural Equivalences; 2.5.6 Observations
  2.6 Formal Verification: 2.6.1 Model Checking in Detail; 2.6.2 Properties of Sequential and Parallel Programs; 2.6.3 State-Space Explosion Problem
3 Model-driven Concepts in Bigraphs
  3.1 A Canonical Mapping: From Bigraphs to Ecore: 3.1.1 The Four-layer Metamodelling Framework Revisited; 3.1.2 Formal Relations between Bigraphs, Type Graphs and Ecore; 3.1.3 Model Constraints at the 𝑀1 and 𝑀0 layer; 3.1.4 Design Level Variability and Extensibility
  3.2 Bigraphical Models: Specification and Generation: 3.2.1 Typing and Subtyping via (Un-)Sorted Signatures and their Instantiation from Metamodels; 3.2.2 Bigraphs and their Instantiation from Metamodels; 3.2.3 Observations
  3.3 Modelling Techniques: A Bigraphical Perspective: 3.3.1 Bigraphs and UML Class Diagrams; 3.3.2 Signature Operations; 3.3.3 Abstraction; 3.3.4 View Modelling with Place Graphs
  3.4 Design Patterns for Implementing Variation Points
  3.5 Summary
4 Bigraph Toolkit Suite: A Complete Software Development Ecosystem
  4.1 The Bigraphical Tool Landscape: 4.1.1 High-level Architecture; 4.1.2 Overview of the Constituents; 4.1.3 Design Qualities; 4.1.4 Project Organisation
  4.2 Modelling and Visualization: 4.2.1 Programmatic Approach: Builders and Operators; 4.2.2 Domain-specific Language; 4.2.3 Converters: Model Translations; 4.2.4 Visual Modelling: Bigellor
  4.3 Simulation and Verification: 4.3.1 Specification of BRSs; 4.3.2 Implementation Aspects: Entity Classes; 4.3.3 Implementation Aspects: Business Classes; 4.3.4 Model Checking Algorithm; 4.3.5 Coordination of BRSs: Higher-order Execution Strategies; 4.3.6 Error Handling: Chain of Responsibility and Exceptions
  4.4 Bigraphical Domain-specific Language: 4.4.1 Overview of BDSL’s Grammar; 4.4.2 Language Features; 4.4.3 Interpreter: Decoupling the Grammar from Application-Specific Code; 4.4.4 BDSL-CLI: A Command-line Interpreter Tool for BDSL; 4.4.5 Theia: An Integrated Development Environment for BDSL
  4.5 Persistence: Distributed Model Storage: 4.5.1 Basic Filesystem Storage Facilities; 4.5.2 Spring Data CDO: Spring Data and Connected Data Objects; 4.5.3 Arbitrary Hierarchical Layouts for Bigraphical Models; 4.5.4 Event Listeners
  4.6 Performance and Quality Analysis: 4.6.1 Functional Tests; 4.6.2 Dependency Analysis; 4.6.3 Runtime Analysis
  4.7 Summary
5 Related Work: The Bigraphical Tool Landscape
  5.1 A Lightweight Qualitative Comparison Framework: 5.1.1 Conceptual Foundations; 5.1.2 Considerations
  5.2 Method and Tool Candidates: 5.2.1 Selection Process; 5.2.2 Excluded Tool Candidates; 5.2.3 Tool Overview
  5.3 Results: 5.3.1 jLibBig; 5.3.2 bigraphspace; 5.3.3 BigRED; 5.3.4 BigM; 5.3.5 BigraphER; 5.3.6 BigMC; 5.3.7 BPL Tool; 5.3.8 BiGMTE; 5.3.9 DBtk
  5.4 Evaluation and Discussion: 5.4.1 Assessment Criteria; 5.4.2 Comparison of Non-Functional Aspects; 5.4.3 Comparison of Functional Aspects; 5.4.4 Term Language; 5.4.5 Interoperability; 5.4.6 Accessibility
  5.5 Summary
6 Conclusion
List of Figures; List of Tables; List of Listings; Bibliography; Online Resources
Appendices: A Theoretical Addendum; B Design Patterns, Techniques and Technologies; C Code Listings: Related Work
Abbreviations
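A bigraph pairs a place graph (the nesting of nodes) with a link graph (hyperedges connecting nodes). A minimal data-structure sketch follows; all names are illustrative assumptions and unrelated to the Bigraph Toolkit Suite API:

```python
# Sketch of the two orthogonal structures of a bigraph: where nodes are
# (place graph) and how they are connected (link graph). Ports, controls,
# and interfaces of the real theory are omitted for brevity.

class Bigraph:
    def __init__(self):
        self.parent = {}    # place graph: node -> parent node or region
        self.links = {}     # link graph: link name -> set of nodes it connects

    def add_node(self, node, parent):
        self.parent[node] = parent

    def connect(self, link, *nodes):
        self.links.setdefault(link, set()).update(nodes)

    def children(self, place):
        return {n for n, p in self.parent.items() if p == place}

# A room containing an agent and a computer (locality), with the agent
# linked to the computer over a shared channel (connectivity):
bg = Bigraph()
bg.add_node("room", "region0")
bg.add_node("agent", "room")
bg.add_node("computer", "room")
bg.connect("channel", "agent", "computer")
```

The point of the split is that mobility (changing a parent) and interaction (changing a link) can be rewritten independently by reaction rules.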
36

A Novel, User-Friendly Indoor Mapping Approach for OpenStreetMap

Graichen, Thomas, Quinger, Sven, Heinkel, Ulrich, Strassenburg-Kleciak, Marek 29 March 2017 (has links) (PDF)
The community project OpenStreetMap (OSM), well known for its open geographic data, still lacks a commonly accepted mapping scheme for indoor data. Most previous approaches suffer from an inconvenient mapping workflow, which hurts the mapper's motivation. In this paper, we present an easy-to-use data scheme for OSM indoor mapping. Finally, using several rendering examples from our Android application, we show that the new data scheme is suitable for real-world scenarios.
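The flavor of tag-based indoor mapping can be sketched as follows; the concrete tags are illustrative assumptions in the general OSM key-value style, not necessarily the scheme proposed in the paper:

```python
# Sketch: an indoor feature as an OSM-style tag dictionary, plus the kind of
# per-level filtering a renderer performs. Tag names here are assumptions.

room = {
    "indoor": "room",       # the feature is an indoor room
    "level": "2",           # the floor the room is on
    "name": "Lab 2.07",     # human-readable label for rendering
}

def rooms_on_level(features, level):
    """Filter indoor room features by floor, as a per-level renderer might."""
    return [f for f in features if f.get("indoor") == "room"
            and f.get("level") == level]
```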
37

Prototypische Entwicklung eines mandantenfähigen dezentralen Austauschsystems für hochsensible Daten

Stockhaus, Christian 01 March 2017 (has links) (PDF)
This thesis describes the development of a prototype for transferring highly sensitive data between different companies. It covers every step of the development process, from the requirements analysis through the evaluation of a suitable technology and the actual implementation to testing and administration.
38

Multi Criteria Mapping Based on SVM and Clustering Methods

Diddikadi, Abhishek 09 November 2015 (has links) (PDF)
There are several ways to automate the application process, such as the commercial software used in large organizations to scan bills and forms; however, such tools only handle static layouts. Our application targets non-static layouts, because the study certificates we receive come from different countries and universities, and every university has its own certificate format. We therefore develop a new application that works across all of these formats. Since many applicants come from the same university, and thus share a common certificate format, such a tool can analyze these certificates simply and in very little time. To make the process more accurate, we apply SVM and clustering methods, which allow us to reliably map the courses on a certificate either to the ASE study path or to an exclude list. For courses mapped to the ASE list, a grade calculation is performed that treats lecture and lab components separately. Finally, points are awarded for ASE-related courses, work experience, specialization certificates, and German language skills; these points are provided to the chair to select applicants for the ASE master's course.
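The course-mapping step can be sketched as follows; as a lightweight stand-in for the trained SVM classifier, this uses token-overlap matching against labelled examples (an assumption for illustration, not the system's actual model):

```python
# Sketch: map a course title extracted from a certificate to an ASE category
# by similarity to labelled example titles; titles with no overlap fall
# through to the exclude list. All names and data here are illustrative.

def tokens(title):
    return set(title.lower().split())

def map_course(title, labelled_examples):
    """Return the category whose example best overlaps the title,
    or 'exclude' if nothing overlaps at all."""
    best, best_score = "exclude", 0
    for example, category in labelled_examples:
        score = len(tokens(title) & tokens(example))
        if score > best_score:
            best, best_score = category, score
    return best

examples = [
    ("Embedded Systems Design", "ASE core"),
    ("Software Engineering", "ASE core"),
    ("Art History", "exclude"),
]
map_course("Advanced Embedded Systems", examples)   # overlaps "ASE core"
```

An SVM would replace the overlap score with a learned decision function over text features, but the mapping interface stays the same.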
39

Spezifikation und Implementierung eines Plug-ins für JOSM zur semiautomatisierten Kartografierung von Innenraumdaten für OpenStreetMap

Gruschka, Erik 15 January 2016 (has links) (PDF)
The map service OpenStreetMap is one of the most popular providers of open-data maps. These maps currently focus on outdoor environments, as existing approaches to indoor mapping have failed to gain acceptance; a lack of support in the widely used map editors is considered one of the main reasons. This bachelor thesis therefore covers the implementation of a plug-in for creating indoor maps in the editor "JOSM", and compares the effort of creating indoor maps with and without this tool.
40

Learning Vector Symbolic Architectures for Reactive Robot Behaviours

Neubert, Peer, Schubert, Stefan, Protzel, Peter 08 August 2017 (has links) (PDF)
Vector Symbolic Architectures (VSA) combine a hypervector space and a set of operations on these vectors. Hypervectors provide powerful and noise-robust representations and VSAs are associated with promising theoretical properties for approaching high-level cognitive tasks. However, a major drawback of VSAs is the lack of opportunities to learn them from training data. Their power is merely an effect of good (and elaborate) design rather than learning. We exploit high-level knowledge about the structure of reactive robot problems to learn a VSA based on training data. We demonstrate preliminary results on a simple navigation task. Given a successful demonstration of a navigation run by pairs of sensor input and actuator output, the system learns a single hypervector that encodes this reactive behaviour. When executing (and combining) such VSA-based behaviours, the advantages of hypervectors (i.e. the representational power and robustness to noise) are preserved. Moreover, a particular beauty of this approach is that it can learn encodings for behaviours that have exactly the same form (a hypervector) no matter how complex the sensor input or the behaviours are.
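The learning scheme can be sketched with bipolar hypervectors, elementwise multiplication as binding, and majority-vote bundling; these are common VSA choices assumed here for illustration, not necessarily the authors' exact encoding:

```python
# Sketch: a reactive behaviour is stored as ONE hypervector by bundling
# bind(sensor, action) pairs from a demonstration; at run time the action is
# recovered by unbinding the current sensor vector and cleaning up against
# the known action vectors. All vectors and names here are illustrative.

import random

DIM = 2048
rng = random.Random(0)

def hypervector():
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):                    # elementwise multiply; self-inverse
    return [x * y for x, y in zip(a, b)]

def bundle(vectors):               # elementwise majority vote
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def cleanup(noisy, candidates):    # nearest known vector by dot product
    return max(candidates, key=lambda name: sum(
        x * y for x, y in zip(noisy, candidates[name])))

sensors = {"wall_left": hypervector(), "wall_right": hypervector()}
actions = {"turn_right": hypervector(), "turn_left": hypervector()}

# Learn one behaviour vector from demonstrated (sensor, action) pairs:
behaviour = bundle([bind(sensors["wall_left"], actions["turn_right"]),
                    bind(sensors["wall_right"], actions["turn_left"])])

# Execute: unbind the current sensor reading, then clean up to an action.
recalled = cleanup(bind(behaviour, sensors["wall_left"]), actions)
```

Because binding is self-inverse, unbinding with a sensor vector yields its paired action plus crosstalk noise, which the cleanup step removes; the behaviour stays a single fixed-size vector however many pairs are bundled.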
