
Privacy-Preserving Public Verification via Homomorphic Encryption

Becher, Kilian 07 February 2024
Sustainable and ethical sourcing and production are major challenges that arise from rapid climate change and our growing world population. The EU's Renewable Energy Directive II and the German Supply Chain Act are just two examples of the multitude of laws and regulations that define standards for sustainable and ethical sourcing and production. They imply a need for supply chain transparency, traceability, and verification. Public verification of supply chain transactions gives any third-party verifier the chance to evaluate compliance and the correctness of claims based on supply chain transaction details. Therefore, public verification can help customers, buyers, regulators, and non-governmental organizations uncover non-compliance and fraud committed by supply chain actors. This, in turn, can help increase the pressure to comply with applicable standards and regulations. Supply chain transactions often involve confidential data like amounts or prices. Transparency of such data could leak trade secrets and affect companies' competitive advantages. However, reconciling transparency with confidentiality seems contradictory at first glance. This thesis takes up the challenge of enabling privacy-preserving public verification of confidential supply chain transactions. Starting from two exemplary real-world use cases for supply chain verification, the thesis first investigates requirements for valid solutions and derives five research questions. It then designs a universal solution that reconciles transparency with confidentiality. The proposed system model achieves privacy-preserving public verification by employing the cryptographic techniques of fully homomorphic encryption (FHE) and proxy re-encryption (PRE). To demonstrate the suitability of the system model for a large variety of realistic supply chain verification scenarios, the thesis designs privacy-preserving protocols for different verification functions. This includes the verification of balances, using the trade in sustainable palm oil as an example, as well as the verification of ratios, motivated by different forms of cobalt sourcing. These protocols are evaluated both theoretically and empirically; the evaluation shows that they enable privacy-preserving public verification for the mentioned supply chain scenarios in practical time. Additionally, this thesis investigates the security implications of the proposed system model and protocols and formally analyzes the risk of leaking information through repeated similar verifications. Based on the identified vulnerability to such attacks in the case of probabilistically obfuscated protocol outputs, the thesis introduces and investigates the paradigm of data-dependent deterministic obfuscation (D3O). D3O is a universal concept that is independent of the field of supply chain verification; it can reduce the leakage of confidential information in a large class of privacy-preserving protocols.
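To make the core mechanism concrete, the following is a minimal sketch of homomorphic balance verification in Python. It uses the additively homomorphic Paillier scheme via the python-paillier (`phe`) package rather than the thesis's FHE/PRE construction, and the quantities and the single decrypting verifier are invented for illustration.

```python
# Minimal sketch of privacy-preserving balance verification with an
# additively homomorphic scheme (Paillier, via the python-paillier package).
# This is not the thesis's FHE/PRE protocol, only the core idea that a
# verifier can compute over ciphertexts without seeing the plaintexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A supply chain actor encrypts confidential transaction amounts.
inputs_t = [public_key.encrypt(a) for a in [120, 80, 50]]   # tonnes bought
outputs_t = [public_key.encrypt(a) for a in [100, 140]]     # tonnes sold

# Anyone holding only the public key can compute the encrypted balance:
# Paillier ciphertexts support homomorphic addition and subtraction.
encrypted_balance = sum(inputs_t[1:], inputs_t[0])
for c in outputs_t:
    encrypted_balance = encrypted_balance - c

# A designated decrypting party (in the thesis, reached via proxy
# re-encryption) reveals only the verification result, not the amounts.
balance = private_key.decrypt(encrypted_balance)
print("claim 'sold <= bought' holds:", balance >= 0)
```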

Understanding Immersive Environments for Visual Data Analysis

Satkowski, Marc 06 February 2024
Augmented Reality enables combining virtual data spaces with real-world environments through visual augmentations, transforming everyday environments into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven, which made head-mounted Mixed Reality devices more common in research, industrial, or personal use cases. However, such devices are always human-centered, making it increasingly important to closely investigate and understand human factors within such applications and environments. Augmented Reality usage can range from a simple information display to a dedicated device for presenting and analyzing information visualizations. The growing availability, amount, and complexity of data have amplified the need and wish to generate insights through such visualizations. These, in turn, can utilize human visual perception as well as Augmented Reality's natural interactions, its potential to display three-dimensional data, and its stereoscopic display. In my thesis, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally adhere to human factors and ergonomics, especially in the area of visual data analysis. To address this challenge, I ground my thesis on three research questions: (1) How can we design such applications in a human-centered way? (2) What influence does the real-world environment have within such applications? (3) How can AR applications be combined with existing systems and devices? To answer these research questions, I explore different human properties and real-world environments that can affect the augmentations of the same environment. For human factors, I investigate the competence in working with visualizations (visualization literacy), the visual perception of visualizations, and physical ergonomics like head movement. Regarding the environment, I examine two main factors: the visual background's influence on reading and working with immersive visualizations, and the possibility of using alternative placement areas in Augmented Reality. Lastly, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the different projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to understanding how developers can apply human and environmental parameters when designing and creating future AR applications, especially for visual data analysis.

Database System Acceleration on FPGAs

Moghaddamfar, Mehdi 30 May 2023
Relational database systems provide various services and applications with an efficient means for storing, processing, and retrieving their data. The performance of these systems has a direct impact on the quality of service of the applications that rely on them. Therefore, it is crucial that database systems are able to adapt and grow in tandem with the demands of these applications, ensuring that their performance scales accordingly. In the past, Moore's law and algorithmic advancements have been sufficient to meet these demands. However, with the slowdown of Moore's law, researchers have begun exploring alternative methods, such as application-specific technologies, to satisfy the more challenging performance requirements. One such technology is field-programmable gate arrays (FPGAs), which provide ideal platforms for developing and running custom architectures for accelerating database systems. The goal of this thesis is to develop a domain-specific architecture that can enhance the performance of in-memory database systems when executing analytical queries. Our research is guided by a combination of academic and industrial requirements that seek to strike a balance between generality and performance. The former ensures that our platform can be used to process a diverse range of workloads, while the latter makes it an attractive solution for high-performance use cases. Throughout this thesis, we present the development of a system-on-chip for database system acceleration that meets our requirements. The resulting architecture, called CbMSMK, is capable of processing the projection, sort, aggregation, and equi-join database operators and can also run some complex TPC-H queries. CbMSMK employs a shared sort-merge pipeline for executing all these operators, which results in an efficient use of FPGA resources. This approach enables the instantiation of multiple acceleration cores on the FPGA, allowing it to serve multiple clients simultaneously. CbMSMK can process both arbitrarily deep and wide tables efficiently. The former is achieved through the use of the sort-merge algorithm which utilizes the FPGA RAM for buffering intermediate sort results. The latter is achieved through the use of KeRRaS, a novel variant of the forward radix sort algorithm introduced in this thesis. KeRRaS allows CbMSMK to process a table a few columns at a time, incrementally generating the final result through multiple iterations. Given that acceleration is a key objective of our work, CbMSMK benefits from many performance optimizations. For instance, multi-way merging is employed to reduce the number of merge passes required for the execution of the sort-merge algorithm, thus improving the performance of all our pipeline-breaking operators. Another example is our in-depth analysis of early aggregation, which led to the development of a novel cache-based algorithm that significantly enhances aggregation performance. 
Our experiments demonstrate that CbMSMK performs on average 5 times faster than the state-of-the-art CPU-based database management system MonetDB.:I Database Systems & FPGAs 1 INTRODUCTION 1.1 Databases & the Importance of Performance 1.2 Accelerators & FPGAs 1.3 Requirements 1.4 Outline & Summary of Contributions 2 BACKGROUND ON DATABASE SYSTEMS 2.1 Databases 2.1.1 Storage Model 2.1.2 Storage Medium 2.2 Database Operators 2.2.1 Projection 2.2.2 Filter 2.2.3 Sort 2.2.4 Aggregation 2.2.5 Join 2.2.6 Operator Classification 2.3 Database Queries 2.4 Impact of Acceleration 3 BACKGROUND ON FPGAS 3.1 FPGA 3.1.1 Logic Element 3.1.2 Block RAM (BRAM) 3.1.3 Digital Signal Processor (DSP) 3.1.4 IO Element 3.1.5 Programmable Interconnect 3.2 FPGA Design Flow 3.2.1 Specifications 3.2.2 RTL Description 3.2.3 Verification 3.2.4 Synthesis, Mapping, Placement, and Routing 3.2.5 Timing Analysis 3.2.6 Bitstream Generation and FPGA Programming 3.3 Implementation Quality Metrics 3.4 FPGA Cards 3.5 Benefits of Using FPGAs 3.6 Challenges of Using FPGAs 4 RELATED WORK 4.1 Summary of Related Work 4.2 Platform Type 4.2.1 Accelerator Card 4.2.2 Coprocessor 4.2.3 Smart Storage 4.2.4 Network Processor 4.3 Implementation 4.3.1 Loop-based Implementation 4.3.2 Sort-based Implementation 4.3.3 Hash-based Implementation 4.3.4 Mixed Implementation 4.4 A Note on Quantitative Performance Comparisons II Cache-Based Morphing Sort-Merge with KeRRaS (CbMSMK) 5 OBJECTIVES AND ARCHITECTURE OVERVIEW 5.1 From Requirements to Objectives 5.2 Architecture Overview 5.3 Outline of Part II 6 COMPARATIVE ANALYSIS OF OPENCL AND RTL FOR SORT-MERGE PRIMITIVES ON FPGAS 6.1 Programming FPGAs 6.2 Related Work 6.3 Architecture 6.3.1 Global Architecture 6.3.2 Sorter Architecture 6.3.3 Merger Architecture 6.3.4 Scalability and Resource Adaptability 6.4 Experiments 6.4.1 OpenCL Sort-Merge Implementation 6.4.2 RTL Sorters 6.4.3 RTL Mergers 6.4.4 Hybrid OpenCL-RTL Sort-Merge Implementation 6.5 Summary & Discussion 7 RESOURCE-EFFICIENT ACCELERATION OF PIPELINE-BREAKING DATABASE OPERATORS ON FPGAS 7.1 The Case for Resource Efficiency 7.2 Related Work 7.3 Architecture 7.3.1 Sorters 7.3.2 Sort-Network 7.3.3 X:Y Mergers 7.3.4 Merge-Network 7.3.5 Join Materialiser (JoinMat) 7.4 Experiments 7.4.1 Experimental Setup 7.4.2 Implementation Description & Tuning 7.4.3 Sort Benchmarks 7.4.4 Aggregation Benchmarks 7.4.5 Join Benchmarks 7.5 Summary 8 KERRAS: COLUMN-ORIENTED WIDE TABLE PROCESSING ON FPGAS 8.1 The Scope of Database System Accelerators 8.2 Related Work 8.3 Key-Reduce Radix Sort (KeRRaS) 8.3.1 Time Complexity 8.3.2 Space Complexity (Memory Utilization) 8.3.3 Discussion and Optimizations 8.4 Architecture 8.4.1 MSM 8.4.2 MSMK: Extending MSM with KeRRaS 8.4.3 Payload, Aggregation and Join Processing 8.4.4 Limitations 8.5 Experiments 8.5.1 Experimental Setup 8.5.2 Datasets 8.5.3 MSMK vs. MSM 8.5.4 Payload-Less Benchmarks 8.5.5 Payload-Based Benchmarks 8.5.6 Flexibility 8.6 Summary 9 A STUDY OF EARLY AGGREGATION IN DATABASE QUERY PROCESSING ON FPGAS 9.1 Early Aggregation 9.2 Background & Related Work 9.2.1 Sort-Based Early Aggregation 9.2.2 Cache-Based Early Aggregation 9.3 Simulations 9.3.1 Datasets 9.3.2 Metrics 9.3.3 Sort-Based Versus Cache-Based Early Aggregation 9.3.4 Comparison of Set-Associative Caches 9.3.5 Comparison of Cache Structures 9.3.6 Comparison of Replacement Policies 9.3.7 Cache Selection Methodology 9.4 Cache System Architecture 9.4.1 Window Aggregator 9.4.2 Compressor & Hasher 9.4.3 Collision Detector 9.4.4 Collision Resolver 9.4.5 Cache 9.5 Experiments 9.5.1 Experimental Setup 9.5.2 Resource Utilization and Parameter Tuning 9.5.3 Datasets 9.5.4 Benchmarks on Synthetic Data 9.5.5 Benchmarks on Real Data 9.6 Summary 10 THE FULL PICTURE 10.1 System Architecture 10.2 Benchmarks 10.3 Meeting the Objectives III Conclusion 11 SUMMARY AND OUTLOOK ON FUTURE RESEARCH 11.1 Summary 11.2 Future Work BIBLIOGRAPHY LIST OF FIGURES LIST OF TABLES
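As a rough software illustration of the cache-based early aggregation idea, the following Python sketch aggregates repeated group keys in a small direct-mapped cache and evicts partial aggregates on collision; the cache size, the direct-mapped policy, and all data are illustrative stand-ins for the hardware design evaluated in the thesis.

```python
# Software sketch of cache-based early aggregation. A small fixed-size cache
# aggregates runs of repeated group keys before they reach the expensive
# full aggregation, shrinking the downstream data volume.
from collections import defaultdict

CACHE_SLOTS = 8  # tiny on purpose; BRAM-sized in hardware

def early_aggregate(rows):
    """rows: iterable of (group_key, value). Yields partial aggregates."""
    cache = {}  # slot -> (key, partial_sum), direct-mapped by hash
    for key, value in rows:
        slot = hash(key) % CACHE_SLOTS
        if slot in cache and cache[slot][0] == key:
            cache[slot] = (key, cache[slot][1] + value)  # hit: aggregate early
        else:
            if slot in cache:
                yield cache[slot]        # collision: evict partial aggregate
            cache[slot] = (key, value)
    yield from cache.values()            # flush remaining partials

# The final (exact) aggregation then merges far fewer tuples:
final = defaultdict(int)
for key, partial in early_aggregate([("a", 1), ("a", 2), ("b", 5), ("a", 3)]):
    final[key] += partial
print(dict(final))  # {'a': 6, 'b': 5}
```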

Entwicklungs- und Testunterstützung für Steuergeräte mit AUTOSAR Architektur

Englisch, Norbert 06 January 2023
The introduction of the AUTOSAR standard in software development for electronic control units enables the development of customer functionalities independent of the target platform. An AUTOSAR-compliant application is then configured for a specific target platform. This flexibility brings new challenges for testing electronic control units. This work presents an approach that supports the development and test process of AUTOSAR systems through both static analysis of configurations and source code and dynamic tests. The dynamic test checks the layers of the basic software and the RTE on the target platform and supports error localization. The presented approach is applicable to all versions of the AUTOSAR Classic Platform and uses only methods permitted by the AUTOSAR standard. For this work, a knowledge base was designed and implemented that holds the architectural knowledge of the AUTOSAR standard: layers, stacks, and basic software modules with their properties are stored in it. Using this approach, various projects with AUTOSAR architecture were checked, compared, and optimized.:1. Introduction 2. Fundamentals 3. State of the Art 4. Concept 5. Implementation 6. Results 7. Summary and Outlook A. AUTOSAR basic software modules in the knowledge base
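To illustrate the knowledge-base idea, the following Python sketch stores a handful of AUTOSAR basic software modules with their layer and stack and statically checks a project's module list against it; the module selection and the check are hypothetical simplifications of the actual knowledge base.

```python
# Hypothetical, heavily simplified sketch of the knowledge-base idea: store
# BSW modules with their layer/stack per the AUTOSAR Classic layered
# architecture and flag configured modules unknown to the model.
KNOWLEDGE_BASE = {
    "Can":   {"layer": "MCAL",            "stack": "Communication"},
    "CanIf": {"layer": "ECU Abstraction", "stack": "Communication"},
    "PduR":  {"layer": "Services",        "stack": "Communication"},
    "Com":   {"layer": "Services",        "stack": "Communication"},
    "NvM":   {"layer": "Services",        "stack": "Memory"},
}

def check_configuration(configured_modules):
    """Static check: report modules unknown to the architecture model."""
    findings = []
    for module in configured_modules:
        info = KNOWLEDGE_BASE.get(module)
        if info is None:
            findings.append(f"unknown module '{module}'")
        else:
            print(f"{module}: {info['layer']} layer, {info['stack']} stack")
    return findings

print(check_configuration(["Can", "CanIf", "PduR", "Com", "Foo"]))
```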

Design of a Robust and Flexible Grammar for Speech Control

Ludyga, Tomasz 28 May 2024
Voice interaction is an established automation and accessibility feature. While many satisfactory speech recognition solutions are available today, the interpretation of text semantics is difficult in some use cases. Two types of semantic text extraction models can be differentiated: probabilistic and purely rule-based. Rule-based reasoning is formalizable into grammars and enables fast language validation, transparent decision-making, and easy customization. In this thesis we develop a context-free ANTLR semantic grammar to control software by speech in a medical, smart-glasses-related domain. The implementation is preceded by research of the state of the art, a requirements consultation, and a thorough design of reusable system abstractions. The design includes definitions of the DSL, the meta grammar, a generic system architecture, and tool support. Additionally, we investigate trivial and experimental grammar improvement techniques. Due to the multifaceted flexibility and robustness of the designed framework, we indicate its usability in critical and adaptive systems. We determine 75% semantic recognition accuracy in the main medical use case. We compare it against semantic extraction using SpaCy and two fine-tuned AI classifiers. The evaluation reveals the high accuracy of BERT for sequence classification and the big potential of hybrid solutions with AI techniques on top of grammars, especially for the detection of alerts. The accuracy strongly depends on input quality, highlighting the importance of speech recognition tailored to specific vocabulary.:1 Introduction 1 1.1 Motivation 1 1.2 CAIS.ME Project 2 1.3 Problem Statement 2 1.4 Thesis Overview 3 2 Related Work 4 3 Foundational Concepts and Systems 6 3.1 Human-Computer Interaction in Speech 6 3.2 Speech Recognition 7 3.2.1 Open-source technologies 8 3.2.2 Other technologies 9 3.3 Language Recognition 9 3.3.1 Regular expressions 10 3.3.2 Lexical tokenization 10 3.3.3 Parsing 10 3.3.4 Domain Specific Languages 11 3.3.5 Formal grammars 11 3.3.6 Natural Language Processing 12 3.3.7 Model-Driven Engineering 14 4 State-of-the-Art: Grammars 15 4.1 Overview 15 4.2 Workbenches for Grammar Design 16 4.2.1 ANTLR 16 4.2.2 Xtext 17 4.2.3 JetBrains MPS 17 4.2.4 Other tools 18 4.3 Design Approaches 19 5 Problem Analysis 23 5.1 Methodology 23 5.2 Identification of Use-Cases 24 5.3 Requirements Analysis 26 5.3.1 Functional requirements 26 5.3.2 Qualitative requirements 26 5.3.3 Acceptance criteria 27 6 Design 29 6.1 Preprocessing 29 6.2 Underlying Domain Specific Modelling 31 6.2.1 Language model definition 31 6.2.2 Formalization 32 6.2.3 Constraints 32 6.3 Generic Grammar Syntax 33 6.4 Architecture 36 6.5 Integration of AI Techniques 38 6.6 Grammar Improvement 40 6.6.1 Identification of synonyms 40 6.6.2 Automatic addition of synonyms 42 6.6.3 Addition of same-meaning strings 42 6.6.4 Addition and modification of rules 43 6.7 Processing of unrecognized input 44 6.8 Summary 45 7 Implementation and Evaluation 47 7.1 Development Environment 47 7.2 Implementation 48 7.2.1 Grammar model transformation 48 7.2.2 Output construction 50 7.2.3 Testing 50 7.2.4 Reusability for similar use-cases 51 7.3 Limitations and Challenges 52 7.4 Comparison to NLP Solutions 54 8 Conclusion 58 8.1 Summary of Findings 58 8.2 Future Research and Development 60 Acronyms 62 Bibliography 63 List of Figures 73 List of Tables 74 List of Listings 75
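For a flavor of rule-based semantic extraction from recognized speech, the following Python sketch mimics the approach using the `lark` parsing library instead of ANTLR; the two-command vocabulary and the rule names are purely hypothetical and far smaller than the thesis's grammar.

```python
# Sketch of rule-based semantic extraction: a tiny context-free grammar
# validates an utterance and maps it to a transparent rule decision.
from lark import Lark

grammar = r"""
    command: alert | show
    alert:   "alert"i team                    -> raise_alert
    show:    ("show"i | "display"i) item
    team:    "nurse" | "doctor"
    item:    "vitals" | "checklist"
    %import common.WS
    %ignore WS
"""
parser = Lark(grammar, start="command")

for utterance in ["alert doctor", "show vitals", "display checklist"]:
    tree = parser.parse(utterance)                 # fails fast on invalid input
    print(utterance, "->", tree.children[0].data)  # matched rule = semantics
```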

Opto-Mechatronic Screening Module for 3D Tumour Model Engineering

Kahl, Melanie, Hutmacher, Dietmar, Friedrich, Oliver 28 May 2024
The integration of an opto-mechatronic screening module into the biomanufacturing workstation enables the automated, reproducible, and user-independent production and analysis of hydrogel-based 3D cell cultures.:I. Introduction II. Methods and Materials III. Results IV. Conclusion References

Leben mit Python

Piko Koch, Dorothea 28 May 2024
This is a brief overview of Python projects beyond workplace applications.:1. Introduction 2. Teaching Python 3. Doing a PhD with Python 4. Letting Python chat 4.1. Implementation 4.2. Background from literary studies 5. Living with Python 6. Tinkering with Python References

Toward Multimodal Sentiment Analysis of Historic Plays: A Case Study with Text and Audio for Lessing’s Emilia Galotti

Schmidt, Thomas, Burghardt, Manuel, Wolff, Christian 05 June 2024
We present a case study as part of a work-in-progress project about multimodal sentiment analysis on historic German plays, taking Emilia Galotti by G. E. Lessing as our initial use case. We analyze the textual version and an audio version (audiobook). We focus on ready-to-use sentiment analysis methods: for the textual component, we implement a naive lexicon-based approach and another approach that enhances the lexicon by means of several NLP methods. For the audio analysis, we use the free version of the Vokaturi tool. We compare the results of all approaches and evaluate them against the annotations of a human expert, which serve as a gold standard. For our use case, we can show that audio and text sentiment analysis behave very differently: textual sentiment analysis tends to predict sentiment as rather negative and audio sentiment as rather positive. Compared to the gold standard, the textual sentiment analysis achieves an accuracy of 56%, while the accuracy for audio sentiment analysis is only 32%. We discuss possible reasons for these mediocre results and give an outlook on further steps we want to pursue in the context of multimodal sentiment analysis on historic plays.
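To illustrate what the naive lexicon-based approach boils down to, here is a toy Python sketch; the four-entry lexicon and the example line are invented and stand in for the full German sentiment lexicons and preprocessing used in the study.

```python
# Toy illustration of naive lexicon-based sentiment analysis for the textual
# channel: sum the polarity values of all lexicon words found in a speech.
LEXICON = {"liebe": 1.0, "gut": 0.5, "tod": -1.0, "verrat": -0.8}

def sentiment(speech: str) -> float:
    """Sum of polarity values of lexicon words occurring in the speech."""
    tokens = speech.lower().split()
    return sum(LEXICON.get(token, 0.0) for token in tokens)

line = "verrat und tod"  # invented example line, not a Lessing quote
score = sentiment(line)
label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
print(score, label)      # -1.8 negative
```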

A Bigraphical Framework for Modeling and Simulation of UAV-based Inspection Scenarios

Grzelak, Dominik, Lindner, Martin 11 April 2024
We present a formal modeling approach for the design and simulation of Multi-Unmanned Aerial Vehicle (multi-UAV) inspection scenarios, where planning is based on model checking. As a demonstration, we formalize and simulate a compositional UAV inspection system for a solar park using bigraphical reactive systems, which introduce the notion of time-varying bigraphs. Specifically, the UAV system is modeled as a process-algebraic expression whose semantics is a bigraph state in a labeled transition system. The underlying Multi-Agent Path Finding problem is solved model-theoretically using Planning-by-Model-Checking. It solves the inherently connected collision-free path planning problem for multiple UAVs subject to contexts and local conditions. First, a bigraph is constructed algebraically, which can be decomposed systematically into separate parts with interfaces. The layered composite model accounts for location, navigation, UAVs, and contexts, which enables simple configuration and extension (changeability). Second, the executable operational semantics of our formal bigraph model is given by bigraphical reactive systems, where rules constitute the behavioral component of our model. Rules reconfigure the bigraph to simulate state changes, i.e., they allow altering the conditions under which UAVs are permitted to move. Properties can be attached to nodes of the bigraph and evaluated in a simulation over the traces of the transition system according to cost-based policies. In essence, the inherent multi-UAV path planning problem of our scenario is formulated as a reachability problem and solved by model checking the generated transition system. The bigraph-algebraic expression also allows us to reason about potential parallelization opportunities when moving UAVs. Moreover, we sketch how to directly simulate the bigraph specification in a ROS-based Gazebo simulation by examining the inspection and monitoring of a solar park as an application. The reactive system specification provides the blueprint for analysis, simulation, implementation, and execution. Thus, the same algorithm used for verification is also used for the simulation in ROS/Gazebo to execute plans.:1 Introduction 2 Overview: Scenario Description and Formal Modeling Approach 3 Background: Bigraphs and Model Checking 4 Construction of the UAV System via Composition 5 Making the Drones Fly: Executable Model Semantics 6 Collision-Free Path Planning Problem 7 Prototypical Implementation 8 Discussion 9 Related Work 10 Conclusion A UAV State Machine B Bigraphical Reactive Systems C RPO/IPO Semantics
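Stripped of the bigraphical machinery, the reachability framing can be illustrated with a small Python sketch: two UAVs on a grid, a transition system of joint moves, and breadth-first search as a stand-in for the model checker. The grid size and collision rules are illustrative assumptions, not the paper's model.

```python
# Much-simplified illustration of collision-free multi-UAV path planning as
# a reachability problem over a transition system, solved by exhaustive
# (BFS) state-space search. Contexts and cost policies are omitted.
from collections import deque

GRID_W, GRID_H = 4, 3
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or move 4-way

def neighbours(state):
    (ax, ay), (bx, by) = state
    for dax, day in MOVES:
        for dbx, dby in MOVES:
            a, b = (ax + dax, ay + day), (bx + dbx, by + dby)
            ok = all(0 <= x < GRID_W and 0 <= y < GRID_H for x, y in (a, b))
            # forbid same-cell collisions and position swaps (pass-through)
            if ok and a != b and (a, b) != ((bx, by), (ax, ay)):
                yield (a, b)

def plan(start, goal):
    """BFS: is the goal state reachable, and along which trace?"""
    queue, parent = deque([start]), {start: None}
    while queue:
        state = queue.popleft()
        if state == goal:
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return trace[::-1]  # the plan: one joint move per step
        for nxt in neighbours(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None  # goal unreachable

# Two UAVs must swap corners without colliding.
print(plan(((0, 0), (3, 2)), ((3, 2), (0, 0))))
```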

Biologically Inspired Hexagonal Deep Learning for Hexagonal Image Processing

Schlosser, Tobias 27 May 2024
While current approaches to digital image processing within the context of machine learning and deep learning are motivated by biological processes within the human brain, they are also limited by the current state of the art of input and output devices as well as the algorithms that process their data. The lattice formats underlying digital images generated from real-world scenes are predominantly based on rectangular or square structures. Yet the human visual perception system suggests an alternative approach, manifested in the hexagonal arrangement of the sensory cells of the human eye. As previous research demonstrates that hexagonal arrangements can benefit image processing systems in general, this contribution is concerned with the synthesis of both worlds in the form of biologically inspired hexagonal deep learning for hexagonal image processing. It covers the design, implementation, and evaluation of hexagonal solutions to currently developed approaches in the form of hexagonal deep neural networks. For this purpose, the hexagonal functionality had to be built from the ground up as a counterpart to conventional square-lattice image processing and deep learning systems. Furthermore, hexagonal equivalents of artificial neural network operations, layers, models, and architectures had to be realized, along with evaluation metrics that compare hexagonal-lattice representations of digital images with their conventional counterparts. The resulting hexagonal image processing and deep learning framework, Hexnet, serves as a first general application-oriented open-science framework for hexagonal image processing in the context of machine learning. To enable the evaluation of hexagonal approaches, a set of application areas and use cases within conventional and hexagonal image processing – astronomical, medical, and industrial image processing – is provided that allows an assessment of hexagonal deep neural networks in terms of their classification capabilities as well as their general performance. The obtained results demonstrate the possible benefits of hexagonal deep neural networks and their hexagonal representations for image processing systems. It is shown that hexagonal deep neural networks can yield increased classification capabilities for different basic geometric shapes and contours, which in turn partially translate into their real-world applications. This is indicated by a relative improvement in F1-score of the proposed hexagonal models over their square counterparts, ranging from 1.00 (industrial image processing) to 1.03 (geometric primitives), with single classes even reaching a relative improvement of over 1.05. However, possible disadvantages arise from the increased complexity of hexagonal algorithms, evident in the runtime optimization potential that has yet to be realized for certain hexagonal operations in comparison to their currently deployed square equivalents.:1 Introduction and Motivation 2 Fundamentals and Methods 3 Implementation 4 Test Results, Evaluation, and Discussion 5 Conclusion and Outlook
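The following Python sketch illustrates the hexagonal-lattice addressing that such networks build on, using axial coordinates and a six-cell neighborhood as a minimal "kernel"; it illustrates the representation only, not the Hexnet framework itself.

```python
# Sketch of hexagonal-lattice addressing with axial coordinates: every cell
# has exactly six equidistant neighbours, which is what a hexagonal
# convolution kernel slides over (vs. the 4/8-neighbourhood of square grids).
AXIAL_NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbourhood(q, r):
    """The six direct neighbours of axial cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_NEIGHBOURS]

def hex_blur(image, q, r):
    """Average over a cell and its 6-neighbourhood, a minimal hexagonal kernel.
    image: dict mapping axial (q, r) -> intensity; missing cells count as 0."""
    cells = [(q, r)] + hex_neighbourhood(q, r)
    return sum(image.get(c, 0.0) for c in cells) / len(cells)

image = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.25}  # sparse hexagonal image
print(hex_neighbourhood(0, 0))
print(hex_blur(image, 0, 0))  # (1.0 + 0.5 + 0.25) / 7
```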
