  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

On-Board Memory Extension on Reconfigurable Integrated Circuits using External DDR3 Memory

Lodaya, Bhaveen 08 February 2018 (has links)
User-programmable integrated circuits (ICs) such as Field Programmable Gate Arrays (FPGAs) are increasingly popular for embedded, high-performance data exploitation. They combine the parallelization capability and processing power of application-specific integrated circuits (ASICs) with the flexibility, scalability, and adaptability of software-based processing solutions. FPGAs provide powerful processing resources due to an optimal adaptation to the target application and a well-balanced ratio of performance, efficiency, and parallelization. One drawback of FPGA-based data exploitation is the limited memory capacity of reconfigurable integrated circuits. Large-scale digital signal processing (DSP) FPGAs provide approximately 4 MB of on-board random access memory (RAM), which is not sufficient to buffer broadband sensor and result data. Hence, additional external memory is connected to the FPGA to increase on-board storage capacity. External memory devices such as double data rate type three synchronous dynamic random access memory (DDR3 SDRAM) provide very fast, wide interfaces; nevertheless, a single such interface becomes a bottleneck in highly parallelized processing architectures, where independent processing modules demand concurrent read and write access. In this master's thesis, a concept for the integration of an external DDR3 SDRAM into an FPGA-based parallelized processing architecture is developed and implemented. The solution realizes time division multiple access (TDMA) to the external memory and a virtual, low-latency memory extension of the on-board buffer capabilities. The integration of the external RAM does not change how the on-board buffers are used (control, data flow).
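The TDMA arbitration described in the abstract can be illustrated with a minimal round-robin slot scheduler. This is a behavioral Python sketch under assumed module names and a plain round-robin policy; the actual design is implemented in FPGA logic, not software:

```python
from collections import deque

def tdma_schedule(requests, slots):
    """Grant one memory access per time slot, round-robin over modules.

    requests: dict mapping a module name to a deque of pending memory
    operations. Returns the list of (module, operation) grants; idle
    slots (no pending request anywhere) are recorded as (None, None).
    """
    order = list(requests)
    granted = []
    i = 0  # index of the module that gets priority in the next slot
    for _ in range(slots):
        for k in range(len(order)):
            m = order[(i + k) % len(order)]
            if requests[m]:
                granted.append((m, requests[m].popleft()))
                i = (i + k + 1) % len(order)  # rotate priority past m
                break
        else:
            granted.append((None, None))  # no module had a pending request
    return granted
```

With two modules, one holding two reads and the other one write, the grants interleave: the arbiter never starves a module as long as it has pending requests.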
62

Tupel von TVL als Datenstruktur für Boolesche Funktionen

Kempe, Galina 20 June 2003 (has links)
This thesis presents a data structure for representing Boolean functions, the "TVL tuple", which results from combining two well-known data structures: the decision diagram and the ternary vector list (TVL). First, it is investigated how well local phase lists are suited as elements of the tuple. Furthermore, a new kind of decomposition ("tuple decomposition") of a Boolean function into three or four subfunctions is introduced. The distinctive property of the decomposition's subfunctions is their mutual orthogonality. For functions with a large number of conjunctions, the advantage of the decomposition is its lower memory footprint. In addition, algorithms were developed to realize the operations required for handling the decomposed functions. A detailed comparison of the computation times for these operations demonstrates that a reduction in runtime can be expected as a consequence of the decomposition. The decomposition also offers a starting point for the design of algorithms that permit parallel processing on distributed computing infrastructure. The findings on tuple decomposition, including the use of distributed processing, can be applied, for example, to the search for the variable sets of the OR-bi-decomposition.
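A ternary vector list represents a Boolean function as a set of cubes over {0, 1, -}; the orthogonality of subfunctions mentioned above means their cube sets share no minterm. A minimal Python sketch of that check (an illustration of the concept, not the thesis's data structure):

```python
def cubes_intersect(c1, c2):
    """Two cubes (strings over '0', '1', '-') share a minterm iff no
    position pits a fixed 0 against a fixed 1."""
    return all(a == b or a == '-' or b == '-' for a, b in zip(c1, c2))

def tvls_orthogonal(t1, t2):
    """Two ternary vector lists are orthogonal iff no cube of one
    intersects any cube of the other."""
    return not any(cubes_intersect(c1, c2) for c1 in t1 for c2 in t2)
```

For example, the lists ['0--'] and ['1--'] (functions that differ in the first variable's polarity) are orthogonal, whereas ['1-0'] and ['11-'] overlap in the minterm 110.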
63

Consistency of Probabilistic Context-Free Grammars

Stüber, Torsten 10 May 2012 (has links)
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity $\mathcal{O}(n^3)$ in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
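The consistency question can also be approximated numerically by iterating the termination-probability fixed-point equations. The sketch below illustrates that idea; it is not the paper's exact $\mathcal{O}(n^3)$ decision procedure:

```python
from math import prod

def termination_probs(grammar, iters=10_000):
    """grammar: dict mapping each nonterminal to a list of (probability, rhs)
    rules, where rhs lists only the nonterminals on the right-hand side
    (terminals always terminate, so they are omitted). Iterates the
    equations q_A = sum_r p(r) * prod_{B in rhs(r)} q_B starting from 0;
    the iteration converges to the least fixed point, i.e. the
    probability that a derivation from each nonterminal terminates."""
    q = {nt: 0.0 for nt in grammar}
    for _ in range(iters):
        q = {nt: sum(p * prod(q[b] for b in rhs) for p, rhs in rules)
             for nt, rules in grammar.items()}
    return q

def is_consistent(grammar, start, eps=1e-6):
    """A proper PCFG is consistent iff the start symbol's derivations
    terminate with probability one (up to numerical tolerance)."""
    return termination_probs(grammar)[start] > 1.0 - eps

# Example: S -> S S with probability p, S -> a with probability 1 - p.
# For p < 1/2 the grammar is consistent; for p > 1/2 it is not
# (the termination probability is then (1 - p) / p < 1).
subcritical = {'S': [(0.4, ['S', 'S']), (0.6, [])]}
supercritical = {'S': [(0.6, ['S', 'S']), (0.4, [])]}
```

The example grammar pair makes the dichotomy concrete: the same rule shapes, with only the probabilities swapped, flip the grammar between consistent and inconsistent.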
64

Cardinality Estimation with Local Deep Learning Models

Woltmann, Lucas, Hartmann, Claudio, Thiele, Maik, Habich, Dirk, Lehner, Wolfgang 14 June 2022 (has links)
Cardinality estimation is a fundamental task in database query processing and optimization. Unfortunately, the accuracy of traditional estimation techniques is poor, resulting in non-optimal query execution plans. With the recent expansion of machine learning into the field of data management, there is a general notion that learned models, especially neural networks, can lead to better estimation accuracy. Up to now, all proposed neural network approaches for cardinality estimation follow a global approach that considers the whole database schema at once. These global models are prone to sparse training data, leading to misestimates for queries that were not represented in the sample space used for generating training queries. To overcome this issue, we introduce a novel local-oriented approach in this paper, in which each local context is a specific sub-part of the schema. As we will show, this leads to a better representation of data correlation and thus better estimation accuracy. Compared to global approaches, our novel approach achieves an improvement of two orders of magnitude in accuracy and a factor of four in training time for local models.
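The routing of a query to the local model responsible for its sub-schema can be sketched as follows. All class and function names here are hypothetical, and the paper's local models are neural networks; this stub replaces them with a lookup table purely to show the local-vs-global dispatch structure:

```python
class LocalEstimator:
    """Hypothetical per-sub-schema cardinality estimator. A stand-in that
    memorizes training cardinalities instead of learning a neural model."""

    def __init__(self):
        self.samples = {}

    def train(self, query_key, cardinality):
        self.samples[query_key] = cardinality

    def estimate(self, query_key):
        # default estimate of 1 for queries never seen in training
        return self.samples.get(query_key, 1)

def route(query_tables, local_models):
    """Pick the local model whose sub-schema (a frozenset of table names)
    covers all tables referenced by the query."""
    key = frozenset(query_tables)
    for schema_part, model in local_models.items():
        if key <= schema_part:
            return model
    raise KeyError("no local model covers this query")
```

The dispatch step is the essential difference from a global model: each estimator only ever sees queries over its own sub-part of the schema, so its training data is dense within that sub-part.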
65

Influence of bilayer resist processing on p-i-n OLEDs: Towards multicolor photolithographic structuring of organic displays

Krotkus, Simonas, Nehm, Frederik, Janneck, Robby, Kalkura, Shrujan, Zakhidov, Alex A., Schober, Matthias, Hild, Olaf R., Kasemann, Daniel, Hofmann, Simone, Leo, Karl, Reineke, Sebastian 14 August 2019 (has links)
Recently, bilayer resist processing combined with development in hydrofluoroether (HFE) solvents has been shown to enable single-color structuring of vacuum-deposited state-of-the-art organic light-emitting diodes (OLEDs). In this work, we focus on the further steps required to achieve multicolor structuring of p-i-n OLEDs using a bilayer resist approach. We show that the green phosphorescent OLED stack is undamaged after lift-off in HFEs, a necessary step towards an RGB pixel array structured by means of photolithography. Furthermore, we investigate the influence of both double resist processing and exposure to ambient conditions on red OLEDs, on the basis of the electrical, optical, and lifetime parameters of the devices. Additionally, water vapor transmission rates of the single and bilayer systems are evaluated with a thin Ca film conductance test. We conclude that diffusion of propylene glycol methyl ether acetate (PGMEA) through the fluoropolymer film is the main mechanism behind the OLED degradation observed after bilayer processing.
66

On limestone mining in the Nossen-Wilsdruffer Schiefergebirge, from Blankenstein to Grumbach / Braunsdorf

23 September 2019 (has links)
No description available.
67

Sample synopses for approximate answering of group-by queries

Lehner, Wolfgang, Rösch, Philipp 22 April 2022 (has links)
With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. Typically, such analytical queries partition the data into groups and aggregate the values within the groups. Further, with the commonly used roll-up and drill-down operations, a broad range of group-by queries is posed to the system, which makes the construction of highly specialized synopses difficult. In this paper, we propose a general-purpose sampling scheme that is biased in order to answer group-by queries with high accuracy. While existing techniques focus on the size of a group when computing its sample size, our technique is based on its standard deviation. The basic idea is that the more homogeneous a group is, the fewer representatives are required to give a good estimate. With an extensive set of experiments, we show that our approach reduces both the estimation error and the construction cost compared to existing techniques.
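The allocation idea, that more heterogeneous groups deserve more sample rows, can be sketched as follows. This is a simplified illustration of standard-deviation-proportional allocation, not the paper's exact synopsis construction:

```python
import math

def allocate_sample(groups, total_sample):
    """Distribute a sampling budget over groups proportionally to each
    group's (population) standard deviation, keeping at least one
    representative per group so no group vanishes from the synopsis.

    groups: dict mapping a group name to its list of aggregate values.
    Returns a dict mapping each group name to its allotted sample size.
    """
    stds = {}
    for g, vals in groups.items():
        mean = sum(vals) / len(vals)
        stds[g] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    total_std = sum(stds.values()) or 1.0  # guard against all-constant groups
    return {g: max(1, round(total_sample * s / total_std))
            for g, s in stds.items()}
```

A perfectly homogeneous group (standard deviation zero) thus receives only the single mandatory representative, freeing the rest of the budget for groups whose values actually vary.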
68

Social Semantic Product Idea Mining: Konzeption und Evaluierung

Häusl, Martin 11 January 2022 (has links)
Today, customers expect shorter product and service development cycles than ever before. Companies that want to keep up with this trend must therefore rely on innovation and develop their innovation capability into a core competence. An innovation process, which provides a procedural model for increasing innovation capability, begins with the idea generation phase. In the classical innovation process, this phase draws predominantly on company-internal sources to generate ideas. In practice, however, products and services developed on this basis increasingly miss actual customer needs. With the open innovation approach, a company's innovation capability can be improved by integrating external sources into the innovation process. The social web, an important external source, produces large amounts of information that could be used for the innovation process, yet current innovation approaches make little or no use of it. This thesis makes several contributions to addressing this problem. Among other things, established innovation processes and current idea generation methods are examined and compared. A study further investigates the data structures, characteristics, and acquisition options of social web data. It confirms the thesis that current approaches take available social web data into account only rudimentarily. Based on these findings, a generic data model is developed that captures the fundamental entities and relations of various kinds of social web data. In this context, it is shown that semantic technologies are highly useful for generating new product innovation knowledge.
The focus of this research therefore lies on the use of semantic technologies to improve the innovation process, in particular the ideation step. The product, idea, and social web domains are formally described in a novel generic ontology that makes it possible to derive new product innovation knowledge from the social web axiomatically, on the basis of the Web Ontology Language (OWL), and to provide it in machine-interpretable form to downstream innovation management systems. A prototype implementation demonstrates the feasibility of the approach; it also shows that the proposed solution exceeds the current state of the art in terms of idea detection rate.
69

Al-3.5Cu-1.5Mg-1Si alloy and related materials produced by selective laser melting

Wang, Pei 06 October 2018 (has links)
Selective laser melting (SLM) is an additive manufacturing technology. In this thesis, a heat-treatable Al-3.5Cu-1.5Mg-1Si alloy and related materials (composites and hybrid materials) have been successfully fabricated by selective laser melting and characterized in terms of densification, microstructure, heat treatment, mechanical properties, and tribological and corrosion behavior. First, the fully dense Al-Cu-Mg-Si alloy was successfully fabricated by SLM. The alloy shows a higher yield strength than the SLM Al-12Si alloy, and lower wear resistance and a lower corrosion rate than the commercial 2024 alloy, both before and after T6 heat treatment. Second, with the aim of designing new alloy compositions, examining the phases and microstructures of SLM Al-Cu alloys, and correlating their microstructures with the observed mechanical properties, Al-xCu (x = 4.5, 6, 20, 33 and 40 wt.%) alloys were synthesized in situ by SLM from mixtures of Al-4.5Cu and Cu powders. The results indicate that insufficient Cu solute diffusion during the layer-by-layer processing leads to an inhomogeneous microstructure around the introduced Cu powders. With increasing Cu content, the amount of the Al2Cu phase increases, improving the strength of the material. These results show that powder mixtures can be used for the synthesis of SLM composites, but the reaction between the matrix and the second phase has to be considered carefully. Third, a TiB2/Al-Cu-Mg-Si composite was designed and successfully fabricated by SLM; it shows a higher strength than the unreinforced SLM alloy before and after T6 heat treatment. Finally, an Al-12Si/Al-3.5Cu-1.5Mg-1Si hybrid with a good interface was fabricated successfully. This hybrid alloy shows good yield strength and elongation at room temperature, indicating the potential of selective laser melting in the field of hybrid manufacturing.
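In-situ alloying from blended Al-4.5Cu and elemental Cu powders implies a simple mass-balance (lever-rule) calculation for the required fraction of Cu powder. The helper below is my own arithmetic, not a procedure from the thesis:

```python
def cu_powder_fraction(target_cu_wt, base_cu_wt=4.5):
    """Mass fraction of pure Cu powder to blend with an Al-Cu base powder
    (base_cu_wt percent Cu) so that the mixture reaches target_cu_wt
    percent Cu overall.  From the mass balance
        f * 100 + (1 - f) * base_cu_wt = target_cu_wt
    it follows that f = (target - base) / (100 - base).
    """
    if not base_cu_wt <= target_cu_wt <= 100:
        raise ValueError("target must lie between the base alloy and pure Cu")
    return (target_cu_wt - base_cu_wt) / (100 - base_cu_wt)
```

For example, reaching Al-33Cu from an Al-4.5Cu base requires blending roughly 30 wt.% elemental Cu powder; the Al-4.5Cu composition itself needs no added Cu at all.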
70

A Formal View on Training of Weighted Tree Automata by Likelihood-Driven State Splitting and Merging

Dietze, Toni 03 June 2019 (has links)
The use of computers and algorithms to deal with human language, in both spoken and written form, is summarized by the term natural language processing (NLP). Modeling language in a way that is suitable for computers plays an important role in NLP. One idea is to use formalisms from theoretical computer science for that purpose. For example, one can try to find an automaton to capture the valid written sentences of a language. Finding such an automaton by way of examples is called training. In this work, we also consider the structure of sentences by making use of trees. We use weighted tree automata (WTA) in order to deal with such tree structures. Those devices assign weights to trees in order to, for example, distinguish between good and bad structures. The well-known expectation-maximization algorithm can be used to train the weights for a WTA while the state behavior stays fixed. As a way to adapt the state behavior of a WTA, state splitting, i.e. dividing a state into several new states, and state merging, i.e. replacing several states by a single new state, can be used. State splitting, state merging, and the expectation-maximization algorithm were already combined into the state splitting and merging algorithm, which was successfully applied in practice. In our work, we formalized this approach in order to show properties of the algorithm. We also examined a new approach, the count-based state merging algorithm, which exclusively relies on state merging. When dealing with trees, another important tool is binarization. A binarization is a strategy to code arbitrary trees by binary trees. For each of three different binarizations we showed that WTA together with the binarization are as powerful as weighted unranked tree automata (WUTA). We also showed that this is still true if only probabilistic WTA and probabilistic WUTA are considered.

Table of contents:
How to Read This Thesis
1. Introduction: 1.1. The Contributions and the Structure of This Work
2. Preliminaries: 2.1. Sets, Relations, Functions, Families, and Extrema; 2.2. Algebraic Structures; 2.3. Formal Languages
3. Language Formalisms: 3.1. Context-Free Grammars (CFGs); 3.2. Context-Free Grammars with Latent Annotations (CFG-LAs); 3.3. Weighted Tree Automata (WTAs); 3.4. Equivalences of WCFG-LAs and WTAs
4. Training of WTAs: 4.1. Probability Distributions; 4.2. Maximum Likelihood Estimation; 4.3. Probabilities and WTAs; 4.4. The EM Algorithm for WTAs; 4.5. Inside and Outside Weights; 4.6. Adaption of the Estimation of Corazza and Satta [CS07] to WTAs
5. State Splitting and Merging: 5.1. State Splitting and Merging for Weighted Tree Automata (5.1.1. Splitting Weights and Probabilities; 5.1.2. Merging Probabilities); 5.2. The State Splitting and Merging Algorithm (5.2.1. Finding a Good π-Distributor; 5.2.2. Notes About the Berkeley Parser); 5.3. Conclusion and Further Research
6. Count-Based State Merging: 6.1. Preliminaries; 6.2. The Likelihood of the Maximum Likelihood Estimate and Its Behavior While Merging; 6.3. The Count-Based State Merging Algorithm (6.3.1. Further Adjustments for Practical Implementations); 6.4. Implementation of Count-Based State Merging; 6.5. Experiments with Artificial Automata and Corpora (6.5.1. The Artificial Automata; 6.5.2. Results); 6.6. Experiments with the Penn Treebank; 6.7. Comparison to the Approach of Carrasco, Oncina, and Calera-Rubio [COC01]; 6.8. Conclusion and Further Research
7. Binarization: 7.1. Preliminaries; 7.2. Relating WSTAs and WUTAs via Binarizations (7.2.1. Left-Branching Binarization; 7.2.2. Right-Branching Binarization; 7.2.3. Mixed Binarization); 7.3. The Probabilistic Case (7.3.1. Additional Preliminaries About WSAs; 7.3.2. Constructing an Out-Probabilistic WSA from a Converging WSA; 7.3.3. Binarization and Probabilistic Tree Automata); 7.4. Connection to the Training Methods in Previous Chapters; 7.5. Conclusion and Further Research
Appendices: A. Proofs for Preliminaries; B. Proofs for Training of WTAs; C. Proofs for State Splitting and Merging; D. Proofs for Count-Based State Merging
Bibliography; List of Algorithms; List of Figures; List of Tables; Index; Table of Variable Names
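One standard way to code arbitrary (unranked) trees as binary trees is the first-child/next-sibling encoding; a minimal Python sketch of the general idea (an illustration only, not the thesis's specific left-branching, right-branching, or mixed constructions):

```python
def fcns(tree):
    """First-child/next-sibling encoding of an unranked tree.

    tree: a pair (label, children) where children is a list of subtrees.
    Returns a binary tree (label, left, right) in which the left child
    encodes the node's first child and the right child encodes the
    node's next sibling; missing children/siblings become None.
    """
    def encode(nodes):
        if not nodes:
            return None
        label, kids = nodes[0]
        return (label, encode(kids), encode(nodes[1:]))
    return encode([tree])
```

The encoding is invertible, which is what makes equivalence results between automata on binary trees and automata on unranked trees possible: nothing about the original tree is lost.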
