  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Intuitive Visualisierung universitätsinterner Publikationsdaten zur Unterstützung von Entscheidungsprozessen

Bolte, Fabian 29 September 2016 (has links)
This thesis uses publication data from the TU Chemnitz to visualize the development of cooperations between institutes and faculties over time. It shows that attempting to extend commonly used graph-based network analyses by a temporal dimension is insufficient for this task. Instead, an application based on a streamgraph is presented, which not only enables the user to compare the development of any combination of institutes and faculties, but also gives specific information about cooperation types and their shifts over time. To this end, two extensions to the streamgraph are proposed, which increase the amount of information it can display and equip it to satisfy the stated requirements.
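The core layout idea behind a streamgraph can be sketched in a few lines. The following is a minimal, generic ThemeRiver-style layout (layers stacked and centred around zero); it is a hedged illustration of the base technique only, not the thesis's two extensions, and all names are this sketch's own.

```python
import numpy as np

def stream_layout(series):
    # series: one row per layer, one column per time step.
    # Stack the layers and centre the whole stack around zero so the
    # stream is symmetric (baseline g_0 = -total/2), as in ThemeRiver.
    series = np.asarray(series, dtype=float)
    totals = series.sum(axis=0)
    baseline = -totals / 2.0
    tops = baseline + np.cumsum(series, axis=0)      # upper edge of each layer
    bottoms = np.vstack([baseline, tops[:-1]])       # lower edge of each layer
    return bottoms, tops

# Two cooperation "layers" over two time steps.
bottoms, tops = stream_layout([[1, 2], [3, 4]])
```

Each layer's band at time t is the area between `bottoms[i, t]` and `tops[i, t]`; more elaborate baselines (e.g. wiggle-minimizing ones) only change how `baseline` is computed.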
22

Learning Vector Symbolic Architectures for Reactive Robot Behaviours

Neubert, Peer, Schubert, Stefan, Protzel, Peter 08 August 2017 (has links)
Vector Symbolic Architectures (VSA) combine a hypervector space and a set of operations on these vectors. Hypervectors provide powerful and noise-robust representations and VSAs are associated with promising theoretical properties for approaching high-level cognitive tasks. However, a major drawback of VSAs is the lack of opportunities to learn them from training data. Their power is merely an effect of good (and elaborate) design rather than learning. We exploit high-level knowledge about the structure of reactive robot problems to learn a VSA based on training data. We demonstrate preliminary results on a simple navigation task. Given a successful demonstration of a navigation run by pairs of sensor input and actuator output, the system learns a single hypervector that encodes this reactive behaviour. When executing (and combining) such VSA-based behaviours, the advantages of hypervectors (i.e. the representational power and robustness to noise) are preserved. Moreover, a particular beauty of this approach is that it can learn encodings for behaviours that have exactly the same form (a hypervector) no matter how complex the sensor input or the behaviours are.
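The hypervector operations the abstract relies on can be illustrated with a toy example. This is a hedged sketch of standard bipolar VSA operations (binding by elementwise multiplication, bundling by majority vote) and of encoding sensor-actuator pairs into a single behaviour vector; the authors' actual encoding may differ, and all names here are this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random vectors are near-orthogonal

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D.
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Elementwise multiplication: a self-inverse binding for bipolar VSAs.
    return a * b

def bundle(vectors):
    # Elementwise majority vote superposes several hypervectors into one.
    return np.sign(np.sum(vectors, axis=0))

# Toy "demonstration run": three (sensor, actuator) pairs, all encoded
# into ONE behaviour hypervector, mirroring the idea in the abstract.
sensors = [random_hv() for _ in range(3)]
actions = [random_hv() for _ in range(3)]
behaviour = bundle([bind(s, a) for s, a in zip(sensors, actions)])

# Query: unbinding with a sensor yields a noisy copy of its actuator,
# recovered by nearest-neighbour search over the known action vectors.
noisy = bind(behaviour, sensors[0])
best = max(range(3), key=lambda i: int(np.dot(noisy, actions[i])))
```

Whatever the complexity of the sensor input, the learned behaviour always has the same form: one D-dimensional hypervector.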
23

A Novel, User-Friendly Indoor Mapping Approach for OpenStreetMap

Graichen, Thomas, Quinger, Sven, Heinkel, Ulrich, Strassenburg-Kleciak, Marek 29 March 2017 (has links)
The community project OpenStreetMap (OSM), well known for its open geographic data, still lacks a commonly accepted mapping scheme for indoor data. Most previous approaches suffer from inconvenient mapping workflows, which harms mappers' motivation. In this paper, an easy-to-use data scheme for OSM indoor mapping is presented. Finally, by means of several rendering examples from our Android application, we show that the new data scheme is suitable for real-world scenarios.
24

MATLAB scripts for MMC Pareto optimization

Lopez, Mario, Fehr, Hendrik 22 October 2020 (has links)
Calculate the Pareto frontier with minimum arm energy ripple and conduction loss of an MMC, using the second and fourth harmonics of the circulating current as free parameters. ParetoMMC attempts to solve min_X F(X, lambda), where F(X, lambda) = E_ripple(X)*lambda + P_loss(X)*(1 - lambda). X denotes the amplitudes and phases of the second and fourth harmonics of the circulating current, and lambda is a weighting scalar in the range 0 <= lambda <= 1. The MMC dc side is connected to a dc voltage source, while the ac side is a symmetric three-phase voltage with isolated star point. A third harmonic in the common-mode voltage is assumed. Files: ParetoMMC.m, F_eval.m, LICENSE.GNU_AGPLv3
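The weighted-sum scan behind this scalarisation can be sketched as follows. The real ParetoMMC evaluates the MMC's arm energy ripple E_ripple(X) and conduction loss P_loss(X); here two simple stand-in objectives (this sketch's own assumption) keep the example self-contained, while the sweep over lambda is the same idea.

```python
import numpy as np

# Hypothetical stand-ins for the two competing objectives; in ParetoMMC
# these would be the arm energy ripple and the conduction loss as
# functions of the 2nd/4th circulating-current harmonic parameters X.
def e_ripple(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def p_loss(x):
    return x[0] ** 2 + x[1] ** 2

def pareto_frontier(n_weights=11):
    # Brute-force grid over X; a real script would use a proper optimizer.
    grid = np.linspace(-2.0, 2.0, 201)
    xs = [(a, b) for a in grid for b in grid]
    E = np.array([e_ripple(x) for x in xs])
    P = np.array([p_loss(x) for x in xs])
    front = []
    for lam in np.linspace(0.0, 1.0, n_weights):
        # Scalarised objective F(X, lambda) = E_ripple(X)*lambda + P_loss(X)*(1 - lambda)
        i = int(np.argmin(E * lam + P * (1.0 - lam)))
        front.append((E[i], P[i]))
    return front

front = pareto_frontier()
```

Each lambda yields one point on the trade-off curve: lambda = 0 minimizes only the loss term, lambda = 1 only the ripple term, and intermediate weights trace the frontier between them.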
25

Quality of Service and Predictability in DBMS

Sattler, Kai-Uwe, Lehner, Wolfgang 03 May 2022 (has links)
DBMS are a ubiquitous building block of the software stack in many complex applications. Middleware technologies, application servers and mapping approaches hide the core database technology just like power, networking infrastructure and operating system services. Furthermore, many enterprise-critical applications demand a certain degree of quality of service (QoS) or guarantees, e.g. with respect to response time, transaction throughput and latency, but also completeness or, more generally, quality of results. Examples of such applications are billing systems in telecommunications, where each telephone call has to be monitored and registered in a database; e-commerce applications, where orders have to be accepted even under heavy load and customers' waiting times should not exceed a few seconds; ERP systems processing a large number of transactions in parallel; and systems processing streaming or sensor data in real time, e.g. in process automation or traffic control. As part of a complex multilevel software stack, database systems have to share or contribute to these QoS requirements, which means that guarantees have to be given by the DBMS, too, and that the processing of database requests must be predictable. Today's mainstream DBMS typically follow a best-effort approach: requests are processed as fast as possible, without any guarantees; the optimization goal of query optimizers and tuning approaches is to minimize resource consumption rather than to fulfill given service-level agreements. However, motivated by the situation described above, there is an emerging need for database services that provide guarantees, or simply behave in a predictable manner, and at the same time interact with the other components of the software stack in order to fulfill the overall requirements. This need is also driven by the paradigm of service-oriented architectures, widely discussed in industry. Currently, it is addressed only by very specialized solutions.
Nevertheless, database researchers have developed several techniques contributing to the goal of QoS-aware database systems. The purpose of the tutorial is to introduce database researchers and practitioners to the scope, the challenges and the available techniques to the problem of predictability and QoS agreements in DBMS.
26

Multi Criteria Mapping Based on SVM and Clustering Methods

Diddikadi, Abhishek 09 November 2015 (has links)
There are several ways to automate application processing, e.g. commercial software used in large organizations to scan bills and forms, but such tools only handle static frames or formats. Our application targets non-static formats: the study certificates we receive come from different countries and universities, and every university has its own certificate format. We therefore develop a new application that works across all these frames and formats. Since many applicants come from the same university, and thus share a common certificate format, such a tool can analyze these certificates in a simple way and in very little time. To make the process more accurate, we employ SVM and clustering methods. With these methods we can accurately map the courses in a certificate either to the ASE study path or to an exclusion list. A grade calculation is performed for the courses mapped to the ASE list, separating the data for labs and courses. Finally, points are awarded, including points for ASE-related courses, work experience, specialization certificates and German language skills. These points are provided to the chair to select applicants for the ASE master's course.
27

Spezifikation und Implementierung eines Plug-ins für JOSM zur semiautomatisierten Kartografierung von Innenraumdaten für OpenStreetMap

Gruschka, Erik 15 January 2016 (has links)
The map service OpenStreetMap is one of the most popular providers of open-data maps. These maps, however, currently focus on outdoor environments, as previously existing approaches to indoor mapping have failed to gain acceptance. One of the main reasons is considered to be the lack of support in the widely used map editors. This bachelor's thesis therefore deals with the implementation of a plug-in for creating indoor maps in the editor "JOSM", and with a comparison of the effort required to create indoor maps with and without this tool.
28

A database accelerator for energy-efficient query processing and optimization

Lehner, Wolfgang, Haas, Sebastian, Arnold, Oliver, Scholze, Stefan, Höppner, Sebastian, Ellguth, Georg, Dixius, Andreas, Ungethüm, Annett, Mier, Eric, Nöthen, Benedikt, Matúš, Emil, Schiefer, Stefan, Cederstroem, Love, Pilz, Fabian, Mayr, Christian, Schüffny, Renè, Fettweis, Gerhard P. 12 January 2023 (has links)
Data processing on a continuously growing amount of information, combined with increasing power restrictions, has become a ubiquitous challenge in our world today. Besides parallel computing, a promising approach to improving the energy efficiency of current systems is to integrate specialized hardware. This paper presents a Tensilica RISC processor extended with an instruction set to accelerate basic database operators frequently used in modern database systems. The core was taped out in a 28 nm SLP CMOS technology and allows energy-efficient query processing as well as query optimization by applying selectivity estimation techniques. Our chip measurements show a 1000x energy improvement on selected database operators compared to state-of-the-art systems.
29

Topology-aware optimization of big sparse matrices and matrix multiplications on main-memory systems

Lehner, Wolfgang, Kernert, David, Köhler, Frank 12 January 2023 (has links)
Since the data sizes of analytical applications are continuously growing, many data scientists are switching from customized micro-solutions to scalable alternatives, such as statistical and scientific databases. However, many algorithms in data mining and science are expressed in terms of linear algebra, which is barely supported by major database vendors and big data solutions. At the same time, conventional linear algebra algorithms and legacy matrix representations are often not suitable for very large matrices. We propose a strategy for large matrix processing on modern multicore systems that is based on a novel, adaptive tile matrix representation (AT MATRIX). Our solution utilizes multiple techniques inspired by database technology, such as multidimensional data partitioning, cardinality estimation, indexing and dynamic rewrites, in order to optimize the execution time. Based thereon, we present a matrix multiplication operator ATMULT, which outperforms alternative approaches. The aim of our solution is to relieve data scientists of the burden of selecting appropriate algorithms and matrix storage representations. We evaluated AT MATRIX together with ATMULT on several real-world and synthetic random matrices.
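The tiling idea underlying such a representation can be illustrated with a plain blocked matrix multiplication. This is a hedged sketch of generic dense tiling only: AT MATRIX additionally adapts the storage and kernel per tile (e.g. sparse vs. dense), which is not modelled here, and the function names are this sketch's own.

```python
import numpy as np

def tiled_matmul(A, B, tile=64):
    # Blocked multiplication: process square tiles so that each working
    # set fits into cache. Python slicing handles the ragged edge tiles.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Accumulate the (i, j) output tile from one tile pair.
                C[i:i + tile, j:j + tile] += (
                    A[i:i + tile, p:p + tile] @ B[p:p + tile, j:j + tile]
                )
    return C

rng = np.random.default_rng(1)
A = rng.random((100, 80))
B = rng.random((80, 120))
C = tiled_matmul(A, B, tile=32)
```

A tile-adaptive operator would choose, per tile pair, between a kernel like this dense one and a sparse kernel, based on estimated tile density.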
30

Conflict Detection-Based Run-Length Encoding: AVX-512 CD Instruction Set in Action

Lehner, Wolfgang, Ungethüm, Annett, Pietrzyk, Johannes, Damme, Patrick, Habich, Dirk 18 January 2023 (has links)
Data as well as hardware characteristics are two key aspects of efficient data management. This holds in particular for the field of in-memory data processing. Aside from increasing main memory capacities, efficient in-memory processing benefits from novel processing concepts based on lightweight compressed data. Thus, an active research field deals with the adaptation of new hardware features, such as vectorization using SIMD instructions, to speed up lightweight data compression algorithms. Following this trend, we propose a novel approach for run-length encoding, a well-known and often applied lightweight compression technique. Our novel approach is based on the newly introduced conflict detection (CD) instructions in Intel's AVX-512 instruction set extension. As we are going to show, our CD-based approach has unique properties and outperforms the state-of-the-art RLE approach for data sets with small run lengths.
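The compression scheme itself is easy to state as a scalar reference implementation. This sketch only illustrates what run-length encoding computes; the paper's contribution is accelerating the run-boundary detection with AVX-512 CD instructions, which a plain Python loop does not model.

```python
def rle_encode(values):
    # Run-length encoding: each run of equal adjacent values is stored
    # as a single (value, run_length) pair.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # a run boundary starts a new pair
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    # Inverse transform: expand each (value, run_length) pair.
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

data = [7, 7, 7, 3, 3, 9]
encoded = rle_encode(data)
```

Data sets with small run lengths produce many pairs and thus many run boundaries, which is exactly the case where vectorized boundary detection pays off.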
