31

Visual Interactive Labeling of Large Multimedia News Corpora

Han, Qi, John, Markus, Kurzhals, Kuno, Messner, Johannes, Ertl, Thomas 25 January 2019 (has links)
The semantic annotation of large multimedia corpora is essential for numerous tasks. Be it for the training of classification algorithms, efficient content retrieval, or analytical reasoning, appropriate labels are often a prerequisite before automatic processing becomes efficient. However, manual labeling of large datasets is time-consuming and tedious. Hence, we present a new visual approach for labeling and retrieval of reports in multimedia news corpora. It combines automatic classifier training based on caption text from news reports with human interpretation to ease the annotation process. In our approach, users can initialize labels with keyword queries and iteratively annotate examples to train a classifier. The proposed visualization displays representative results in an overview that lets users follow different annotation strategies (e.g., active learning) and assess the quality of the classifier. Based on a usage scenario, we demonstrate the successful application of our approach, in which users label several topics of interest and retrieve related documents with high confidence from three years of news reports.
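A minimal sketch of the iterative label-train-query loop described above, using uncertainty sampling as the active-learning strategy; the captions, labels, and scikit-learn components are illustrative stand-ins, not the authors' implementation:

```python
# Illustrative label-and-train loop: train on the labeled captions, then
# query the caption the classifier is least certain about.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

captions = ["stock markets fall", "election results announced",
            "championship final tonight", "new vaccine approved"]
labels = np.array([0, 1, -1, -1])           # -1 = not yet labeled by the user

X = TfidfVectorizer().fit_transform(captions)

for _ in range(2):                          # each round = one user interaction
    known = labels != -1
    clf = LogisticRegression().fit(X[known], labels[known])
    unknown = np.flatnonzero(~known)
    if unknown.size == 0:
        break
    proba = clf.predict_proba(X[unknown])
    # query the sample with predicted probability closest to 0.5 ...
    query = unknown[np.argmin(np.abs(proba[:, 1] - 0.5))]
    labels[query] = 1                       # ... and let the user label it
```

Picking the most uncertain caption is one common query strategy; the paper's overview visualization instead shows representative candidates and lets the user decide which annotation strategy to follow.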
32

Reducing Occlusion in Cinema Databases through Feature-Centric Visualizations

Bujack, Roxana, Rogers, David H., Ahrens, James 25 January 2019 (has links)
In modern supercomputer architectures, I/O capabilities do not keep up with computational speed. Image-based techniques are one very promising approach to a scalable output format for visual analysis: a reduced output that corresponds to the visible state of the simulation is rendered in situ and stored to disk. These techniques can support interactive exploration of the data through image compositing and other methods, but automatic ways of highlighting data and reducing clutter can make them more effective. In this paper, we suggest a method of assisted exploration that combines feature-centric analysis with image-space techniques and show, for a set of example applications, how reducing the data to features of interest reduces occlusion in the output.
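To make the core idea concrete, here is a hedged sketch (synthetic data, assumed feature criterion) of how restricting rendering to segmented features of interest reduces clutter in a projected view; actual Cinema databases operate on pre-rendered in-situ image layers rather than raw volumes:

```python
# Segment "features of interest" in a scalar field and project only those,
# so they are no longer occluded by the surrounding data.
import numpy as np
from scipy import ndimage

field = np.random.rand(64, 64, 64)            # stand-in for simulation output
mask = field > 0.995                          # feature criterion (assumed)
labeled, n = ndimage.label(mask)              # connected-component features

full_view = field.max(axis=2)                 # naive projection: cluttered
feature_view = np.where(mask, field, 0).max(axis=2)   # features only
print(f"{n} features; non-empty pixels drop from "
      f"{(full_view > 0).sum()} to {(feature_view > 0).sum()}")
```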
33

Towards Smarter Fluorescence Microscopy: Enabling Adaptive Acquisition Strategies With Optimized Photon Budget

Dibrov, Alexandr 12 August 2022 (has links)
Fluorescence microscopy is an invaluable technique for studying the intricate process of organism development. The acquisition process, however, involves a fundamental trade-off between the quality and reliability of the acquired data. On the one hand, the goal of capturing development in its entirety, often across multiple spatial and temporal scales, requires extended acquisition periods. On the other hand, the high doses of light required for such experiments are harmful to living samples and can introduce non-physiological artifacts into the normal course of development. Conventionally, a single set of acquisition parameters is chosen at the beginning of the acquisition and constitutes the experimenter's best guess of the overall optimal configuration within this trade-off. In the paradigm of adaptive microscopy, in turn, one aims to achieve a more efficient photon-budget distribution by dynamically adjusting the acquisition parameters to the changing properties of the sample. In this thesis, I explore the principles of adaptive microscopy and propose a range of improvements for two real imaging scenarios. Chapter 2 summarizes the design and implementation of an adaptive pipeline for efficient observation of the asymmetrically dividing neurogenic progenitors in the zebrafish retina. In the described approach, the fast and expensive acquisition mode is automatically activated only when mitotic cells are present in the field of view. The method illustrates the benefits of adaptive acquisition in the common scenario where individual events of interest are sparsely distributed throughout the duration of the acquisition. Chapter 3 focuses on computational aspects of segmentation-based adaptive schemes for efficient acquisition of the developing Drosophila pupal wing. Fast sample segmentation is shown to provide valuable output for accurately evaluating the sample's morphology and dynamics in real time. This knowledge proves instrumental for adjusting the acquisition parameters to the current properties of the sample and reducing the required photon budget with minimal effect on the quality of the acquired data. Chapter 4 addresses the generation of synthetic training data for learning-based methods in bioimage analysis, making them more practical and accessible for smart microscopy pipelines. State-of-the-art deep learning models trained exclusively on the generated synthetic data are shown to yield powerful predictions when applied to real microscopy images. Finally, an in-depth evaluation of the segmentation quality of models based on both real and synthetic data illustrates the important practical aspects of the approach and outlines directions for further research.
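As an illustration of the event-triggered strategy from Chapter 2, the following sketch alternates between a cheap default mode and a fast, expensive burst mode only while events are detected; acquire() and detect_mitosis() are hypothetical placeholders for the microscope interface and the detector, and the dose numbers are invented:

```python
# Event-triggered acquisition loop: image cheaply by default, switch to the
# fast/high-dose mode only while events of interest are present.
import random

def acquire(mode):                 # hypothetical microscope interface
    return {"mode": mode}

def detect_mitosis(frame):         # hypothetical event detector
    return random.random() < 0.1   # pretend 10% of frames contain an event

photon_budget = 0
for t in range(100):
    frame = acquire("slow")        # low light dose
    photon_budget += 1
    if detect_mitosis(frame):
        for _ in range(5):         # burst of fast, high-dose acquisition
            acquire("fast")
            photon_budget += 10
print("dose spent:", photon_budget, "vs. always-fast:", 100 * 10)
```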
34

Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility

Kreimeier, Julian 31 August 2022 (has links)
Access to digital content and information is becoming ever more important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and, consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. Adapting virtual reality (VR) technology involves a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards or reliance on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer adequate dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, empirical evidence is lacking. This thesis additionally provides a survey of the different interactions. After a consideration of human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed through a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation serve as classifiers. The following chapters are motivated by explorative approaches at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). They present empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped with haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities as well as challenges for further improvement are derived. Building on these findings, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
35

Role-based Context-sensitive Monitoring of Distributed Systems

Shmelkin, Ilja 08 March 2023 (has links)
Monitoring information technology (IT) systems during operation is one of the few methods that help administrators track the health of the monitored system, predict and detect faults, and assist in system repair and error prevention. However, current implementations impose architectural and functional constraints on monitored systems that result in less flexibility in deployment and operation. While excellent monitoring systems exist for some use cases, others are not adequately supported, and for some very specific use cases no monitoring system is available at all. In addition, most monitoring software specializes in specific data formats, protocols, data-collection mechanisms, etc., further limiting its flexibility. As a result, individuals and organizations struggle to find the right combination of features to support their monitoring needs in a single monitoring system, forcing them to use multiple monitoring systems instead in order to cover all of their use cases. The role-based approach to software modeling and implementation promises an intuitive way to increase flexibility in modeling and implementing IT systems. In conjunction with technology from the field of self-adaptive systems, this thesis describes a framework for context-sensitive control loops with roles that can be used to overcome these limitations. We present a novel approach to building a flexible role-based monitoring system based on that framework. Our approach allows for context-specific implementation of monitoring capabilities to support a variety of application domains, while maintaining a derived architecture of well-defined role-playing components that inherently support distribution and scalability. To this end, important background knowledge from the areas of self-adaptive systems, control loops, the role concept, and role-based modeling and implementation is first presented, together with related work from the areas of flexible system design and monitoring systems. Then, a framework for context-sensitive control loops with roles is introduced and applied to the monitoring application domain in modeling and implementation. Based on a common use case for monitoring systems (i.e., monitoring and autoscaling of a web service infrastructure), the resulting Role-based Monitoring Approach (RBMA) is compared to two state-of-the-art monitoring toolkits. This is followed by a qualitative and quantitative evaluation of RBMA, showing that it is more flexible and, at the same time, provides reasonable performance at no additional cost compared to the state-of-the-art tools. Finally, it is explained how this thesis' contributions can be applied to another monitoring use case (i.e., network device monitoring), to another application domain (i.e., embedded systems monitoring), and to its extension (i.e., the Internet of Things domain). The thesis concludes with a summary of the contributions and a presentation of important topics for future work.

Table of contents:
Preface (Statement of Authorship, Abstract, Acknowledgements, Publications, The RoSI Research Training Group)
1 Introduction
  1.1 Thesis Topic
  1.2 Thesis Contributions
  1.3 Research Questions
2 Background
  2.1 Principles of Self-adaptation
    2.1.1 The MAPE-K Control Loop
    2.1.2 MAPE-K Patterns for Distributed Self-adaptive Systems
    2.1.3 MAPE-K Control Loop in Monitoring Systems
  2.2 The Notion of Roles
  2.3 The Compartment Role Object Meta-Model
  2.4 The ObjectTeams Java Programming Model
  2.5 Conclusion
3 Related Work
  3.1 Design Patterns for Flexibility in Software
    3.1.1 Strategy Pattern
    3.1.2 Template Method Pattern
    3.1.3 Using Delegation
    3.1.4 Role-object Pattern
  3.2 Classifying Flexibility in Monitoring Systems
    3.2.1 Criteria for Flexibility in Monitoring Systems
    3.2.2 Classification of Flexibility in Monitoring Systems
  3.3 Conclusion
4 The Role-based Monitoring Approach
  4.1 Framework and Model for Context-sensitive Control Loops with Roles
  4.2 Evaluation Scenario: Autoscaling of Web Service Infrastructures
    4.2.1 Version 1: Role-based Monitoring Approach
    4.2.2 Version 2: Prometheus with Alertmanager
    4.2.3 Version 3: Elasticsearch with Kibana
  4.3 Conclusion
5 Evaluation
  5.1 Quantitative Analysis
    5.1.1 First Experiment (Correct Functionality)
    5.1.2 Second Experiment (Idle Performance)
    5.1.3 Third Experiment (Performance under Load)
  5.2 Qualitative Analysis
  5.3 Additional Use Cases
    5.3.1 Monitoring Network Devices
    5.3.2 Flexible Embedded Systems Management
    5.3.3 Managing Internet of Things Devices
6 Conclusion and Future Work
  6.1 Summary of Contributions
  6.2 Topics for Future Work
Bibliography; List of Figures; List of Tables; List of Listings; List of Abbreviations
A Implementation, Compilation, and Execution of RBMA
  A.1 Implementation of Base Classes
  A.2 Implementation of Team- and inner Role Classes
  A.3 Implementation of Auxiliary Classes
  A.4 Compilation of RBMA with Eclipse OT/J
  A.5 Execution of RBMA
B Additional Information: Autoscaling of Web Service Infrastructures
  B.1 Setup of the Slave-level Clusters (Versions 1, 2, and 3)
  B.2 RBMA: Setup of the Master-level Cluster (Version 1)
  B.3 Prometheus: Setup of the Master-level Cluster (Version 2)
  B.4 Elastic Stack: Setup of the Master-level Cluster (Version 3)
  B.5 Auxiliary Tutorials
C Large Figures
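As a loose illustration of the framework's core idea, the following toy models a MAPE-K control loop whose Monitor/Analyze/Plan/Execute stages are implemented as roles played over a base object; the real RBMA is written in ObjectTeams/Java on the CROM role model, so this Python sketch only mirrors the structure:

```python
# Toy MAPE-K loop for the autoscaling use case: roles are thin wrappers a
# base object plays depending on context. All thresholds are invented.
class Server:                       # the "base" object playing roles
    def __init__(self):
        self.load, self.instances = 0.0, 1

class MonitorRole:
    def sense(self, s):           return {"load": s.load / s.instances}
class AnalyzeRole:
    def analyze(self, k):         return "overload" if k["load"] > 0.8 else "ok"
class PlanRole:
    def plan(self, symptom):      return +1 if symptom == "overload" else 0
class ExecuteRole:
    def execute(self, s, delta):  s.instances = max(1, s.instances + delta)

server = Server()
m, a, p, e = MonitorRole(), AnalyzeRole(), PlanRole(), ExecuteRole()
for load in (0.5, 2.4, 2.4):        # incoming load over time (the "K"nowledge)
    server.load = load
    e.execute(server, p.plan(a.analyze(m.sense(server))))
    print(f"load={load} -> instances={server.instances}")
```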
36

Who is afraid of MT?

Schmitt, Peter A. 12 August 2022 (has links)
Machine translation (MT) is experiencing a renaissance. On the one hand, machine translation is becoming more common and is used at an ever larger scale; on the other hand, many translators have an almost hostile attitude towards machine translation programs and towards those translators who use MT as a tool. Either it is assumed that MT can never be as good as a human translation, or machine translation is viewed as the ultimate enemy of the translator and a job killer. The article discusses the limits and possibilities of machine translation using various examples. It demonstrates that machine translation can be better than human translations, even translations made by experienced professional translators. The paper also reports the results of a test showing that translation customers must expect even well-known and expensive translation service providers to deliver quality on par with poor MT. Overall, it is argued that machine translation programs are no more and no less than an additional tool with which the translation industry can satisfy certain requirements. Like the entire article, this abstract was automatically translated into English.
37

Mitigating Emergent Safety and Security Incidents of CPS by a Protective Shell

Wagner, Leonard 07 November 2023 (has links)
In today's world, Cyber-Physical Systems (CPS) have become widespread, offering tremendous benefits while also increasing society's dependence on them. Given the direct interaction of CPS with the physical environment, their malfunction or compromise can pose significant risks to human life, property, and the environment. However, as the complexity of CPS rises due to heightened expectations and expanded functional requirements, ensuring their trustworthy operation solely during the development process becomes increasingly challenging. This thesis introduces and delves into the novel concept of the 'Protective Shell', a real-time safeguard actively monitoring CPS during their operational phases. The protective shell serves as a last line of defence, designed to detect abnormal behaviour, conduct thorough analyses, and initiate countermeasures promptly, thereby mitigating unforeseen risks in real time. The primary objective of this research is to enhance the overall safety and security of CPS by refining, partly implementing, and evaluating the innovative protective shell concept. To provide context for collaborative systems working towards higher objectives, which are common within CPS as systems-of-systems (SoS), the thesis introduces the 'Emergence Matrix'. This matrix categorises outcomes of such collaboration into four quadrants based on their anticipated nature and desirability. Particularly concerning are outcomes that are both unexpected and undesirable, which frequently serve as the root cause of safety accidents and security incidents in CPS scenarios. The protective shell plays a critical role in mitigating these unfavourable outcomes, as conventional vulnerability-elimination procedures during the CPS design phase prove insufficient due to their inability to proactively anticipate and address such unforeseen situations. Employing the design science research methodology, the thesis is structured around its iterative cycles and the research questions posed, offering a systematic exploration of the topic. A detailed analysis of various safety accidents and security incidents involving CPS was conducted to identify the vulnerabilities that led to dangerous outcomes. By developing specific protective shells for each affected CPS and assessing their effectiveness in these hazardous scenarios, a generic core for the protective shell concept could be derived, indicating its general characteristics and overall applicability. Furthermore, the research presents a generic protective shell architecture, integrating advanced anomaly-detection techniques rooted in explainable artificial intelligence (XAI) and human-machine teaming. While the implementation of protective shells demonstrates substantial positive impacts on ensuring CPS safety and security, the thesis also articulates potential risks associated with their deployment that require careful consideration.
In conclusion, this thesis makes a significant contribution towards the safer and more secure integration of complex CPS into daily routines, critical infrastructures and other sectors by leveraging the capabilities of the generic protective shell framework.

Table of contents:
1 Introduction
  1.1 Background and Context
  1.2 Research Problem
  1.3 Purpose and Objectives
    1.3.1 Thesis Vision
    1.3.2 Thesis Mission
  1.4 Thesis Outline and Structure
2 Design Science Research Methodology
  2.1 Relevance-, Rigor- and Design Cycle
  2.2 Research Questions
3 Cyber-Physical Systems
  3.1 Explanation
  3.2 Safety- and Security-Critical Aspects
  3.3 Risk
    3.3.1 Quantitative Risk Assessment
    3.3.2 Qualitative Risk Assessment
    3.3.3 Risk Reduction Mechanisms
    3.3.4 Acceptable Residual Risk
  3.4 Engineering Principles
    3.4.1 Safety Principles
    3.4.2 Security Principles
  3.5 Cyber-Physical System of Systems (CPSoS)
    3.5.1 Emergence
4 Protective Shell
  4.1 Explanation
  4.2 System Architecture
  4.3 Run-Time Monitoring
  4.4 Definition
  4.5 Expectations / Goals
5 Specific Protective Shells (each case study: Introduction; Vulnerabilities; Specific Protective Shell Mitigation Mechanisms; Protective Shell Evaluation)
  5.1 Boeing 737 Max MCAS
  5.2 Therac-25
  5.3 Stuxnet
  5.4 Toyota 'Unintended Acceleration' ETCS
  5.5 Jeep Cherokee Hack
  5.6 Ukrainian Power Grid Cyber-Attack
  5.7 Airbus A400M FADEC
  5.8 Similarities between Specific Protective Shells
    5.8.1 Mitigation Mechanisms Categories
    5.8.2 Explanation
    5.8.3 Conclusion
6 AI
  6.1 Explainable AI (XAI) for Anomaly Detection
    6.1.1 Anomaly Detection
    6.1.2 Explainable Artificial Intelligence
  6.2 Intrinsic Explainable ML Models
    6.2.1 Linear Regression
    6.2.2 Decision Trees
    6.2.3 K-Nearest Neighbours
  6.3 Example Use Case - Predictive Maintenance
7 Generic Protective Shell
  7.1 Architecture
    7.1.1 MAPE-K
    7.1.2 Human Machine Teaming
    7.1.3 Protective Shell Plugin Catalogue
    7.1.4 Architecture and Design Principles
    7.1.5 Conclusion Architecture
  7.2 Implementation Details
  7.3 Evaluation
    7.3.1 Additional Vulnerabilities introduced by the Protective Shell
    7.3.2 Summary
8 Conclusion
  8.1 Summary
  8.2 Research Questions Evaluation
  8.3 Contribution
  8.4 Future Work
  8.5 Recommendation
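A hedged sketch of the protective-shell principle, reduced to its essence: a wrapper that monitors a controller's commands at run time, logs abnormal behaviour, and applies a countermeasure before the command reaches the actuator. The controller, safety envelope, and clamping countermeasure are invented for illustration:

```python
# Minimal protective shell: detect out-of-envelope commands, log the
# incident, and clamp the command as a countermeasure.
class ProtectiveShell:
    def __init__(self, controller, limit):
        self.controller, self.limit = controller, limit
        self.incidents = []

    def command(self, sensor_value):
        cmd = self.controller(sensor_value)
        if abs(cmd) > self.limit:                        # detect
            self.incidents.append((sensor_value, cmd))   # analyze/log
            cmd = max(-self.limit, min(self.limit, cmd)) # countermeasure
        return cmd

faulty_controller = lambda x: 50 * x       # misbehaves for large inputs
shell = ProtectiveShell(faulty_controller, limit=10.0)
print([shell.command(x) for x in (0.1, 0.5, 3.0)])   # -> [5.0, 10.0, 10.0]
print("incidents caught:", len(shell.incidents))      # -> 2
```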
38

Efficient and Scalable Simulations of Active Hydrodynamics in Three Dimensions

Singh, Abhinav 14 February 2024 (has links)
Active matter represents a unique class of non-equilibrium systems, with examples ranging from cellular structures to large-scale biological tissues. These systems exhibit intriguing spatiotemporal dynamics, driven by the constituent particles' continuous energy expenditure. Such active-matter systems, featuring complex hydrodynamics, are described by sophisticated mathematical models, typically partial differential equations (PDEs). PDEs modeling hydrodynamics, such as the Navier-Stokes equations, are analytically intractable and notoriously challenging to study computationally. The challenges include the need for consistent numerical methods along with their efficient and scalable high-performance computer implementation. However, when considering new theoretical PDE models, such as active hydrodynamics, conventional approaches often fall short because their numerical methods are specialized to certain specific models. The inherent complexity and nonlinearity of active-matter PDEs add to the challenge. Hence, the computational study of such active-matter PDE models requires rapidly evolving high-performance computer software that can easily incorporate new numerical methods to solve these equations in biologically realistic three-dimensional domains. This presents a rich yet underexplored territory demanding scalable computational frameworks that apply to a large class of PDEs. In this thesis, we introduce a computational framework that allows for using multiple numerical methods through a context-aware template expression system akin to an embedded domain-specific language. This framework primarily aims at solving lengthy PDEs associated with active hydrodynamics in complex domains while experimenting with new numerical methods. Existing PDE-solving codes often lack this flexibility, as they are closely tied to a particular PDE, domain geometry, and numerical method. We overcome these limitations with an object-oriented implementation design and show experiments with an adaptive and numerically consistent particle-based approach called Discretization-Corrected Particle Strength Exchange (DC-PSE). DC-PSE allows for the higher-order discretization of differential operators on arbitrary particle distributions, making it possible to solve active hydrodynamic PDEs in complex domains. However, the curse of dimensionality makes it difficult to numerically solve three-dimensional equations on single-core architectures and warrants the use of parallel and distributed computers. We design a novel template-expression system and implement it in the scalable scientific computing library OpenFPM. Our methodology offers an expression-based embedded language, enabling PDE codes to be written in a form that closely mirrors mathematical notation. Leveraging OpenFPM, this approach also ensures parallel scalability. To further enhance our framework's versatility, we employ a separation-of-concerns abstraction, segregating the model equations from the numerics and the domain geometry. This allows codes to be rapidly rewritten for agile numerical experiments across different model equations in various geometries. Supplementing this framework, we develop a distributed algebra system compatible with OpenFPM and Boost Odeint. This algebra system opens avenues for a multitude of explicit adaptive time-integration schemes, which can be selected by modifying a single line of code while maintaining parallel scalability.
Motivated by symmetry-preserving theories of active hydrodynamics, and as a first benchmark of our template-expression system, we present a high-order numerically convergent scheme to study active polar fluids in arbitrary three-dimensional domains. We derive analytical solutions in simple Cartesian geometries and use them to show the numerical convergence of our algorithm. Further, we showcase the scalability of the computer code written using our expression system on distributed computing systems. To cater to the need for solving PDEs on curved surfaces, we present a novel meshfree numerical scheme, the Surface DC-PSE method. Upon implementation in our scalable framework, we benchmark Surface DC-PSE for both explicit and implicit Laplace-Beltrami operators and show applications to computing mean and Gauss curvature. Finally, we apply our computational framework to exploring the three-dimensional active hydrodynamics of biological flowing matter, a prominent model system for studying the active dynamics of cytoskeletal networks, cellular migration, and tissue mechanics. Our software framework effectively tackles the challenges associated with numerically solving such non-equilibrium spatiotemporal PDEs. We perform a linear perturbation analysis of the three-dimensional Ericksen-Leslie model and find an analytical expression for the critical active potential or, equivalently, a critical length of the system above which a spontaneous flow transition occurs. This spontaneous flow transition is a first realization of a three-dimensional active Fréedericksz transition. With our expression system, we successfully simulate 3D active fluids, finding phases of spontaneous flow transitions, traveling waves, and spatiotemporal chaos with increasing active stress. We numerically find a topological phase transition, similar to the Berezinskii-Kosterlitz-Thouless (BKT) transition of the two-dimensional XY model, that occurs in active polar fluids after the spontaneous flow transition. We then proceed to non-Cartesian geometries and show the application of our software framework to solving the active polar fluid equations in spherical domains. We find spontaneous flows in agreement with recent experimental observations. We further showcase the framework by solving the equations in 3D annular domains and in a 'peanut' geometry that resembles a dividing cell. Our simulations also recapitulate the actin flows observed in egg extracts within spherical-shell geometries, showcasing the framework's versatility in handling complex geometrical modifications of model equations. Looking ahead, we hope our framework will serve as a foundation for further advancements in computational morphogenesis, fostering collaboration and the use of the present techniques in biophysical modeling.
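The thesis' expression system is built with C++ templates on OpenFPM; as a language-level analogy only, the following Python sketch shows how operator overloading can turn PDE right-hand sides into lazily evaluated expression trees that read like the mathematics (with a simple periodic finite-difference Laplacian standing in for DC-PSE):

```python
# Toy embedded expression system: build the right-hand side of
# du/dt = D * Lap(u) + v0 as an expression tree, evaluate it later.
import numpy as np

class Expr:
    def __add__(self, o): return Op(np.add, self, o)
    def __mul__(self, o): return Op(np.multiply, self, o)

class Field(Expr):
    def __init__(self, data): self.data = np.asarray(data, float)
    def eval(self):           return self.data

class Op(Expr):
    def __init__(self, f, a, b): self.f, self.a, self.b = f, a, b
    def eval(self):              return self.f(self.a.eval(), self.b.eval())

class Lap(Expr):                 # 1D Laplacian, periodic, grid spacing h = 1
    def __init__(self, u): self.u = u
    def eval(self):
        d = self.u.eval()
        return np.roll(d, 1) + np.roll(d, -1) - 2 * d

u = Field(np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)))
D, v0 = Field(np.full(64, 0.1)), Field(np.zeros(64))
rhs = D * Lap(u) + v0            # reads like the mathematical notation
u_new = Field(u.eval() + 0.1 * rhs.eval())   # one explicit Euler step
print("max update:", float(np.abs(u_new.eval() - u.eval()).max()))
```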
39

Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems

Ranjbar, Behnaz 06 December 2022 (has links)
Implementing complex systems that execute various applications with different levels of assurance is a growing trend in modern embedded real-time systems, driven by cost, timing, and power-consumption requirements. Medical devices and the automotive and avionics industries are the most common safety-critical application areas exploiting such systems, known as Mixed-Criticality (MC) systems. MC applications are real-time, and to ensure their correctness, it is essential to meet strict timing requirements as well as functional specifications. A failure or deadline miss in functions with different criticality levels has a different impact on the system, ranging from no effect to catastrophic consequences. Failure in the execution of tasks with higher criticality levels (HC tasks) may lead to system failure and cause irreparable damage, while Low-Criticality (LC) tasks assist the system in carrying out its mission successfully, but their failure has less impact on the system's functionality and does not cause the system itself to fail. To guarantee MC system safety, tasks are analyzed under different assumptions to obtain different Worst-Case Execution Times (WCETs) corresponding to the multiple criticality levels and the operation mode of the system. If the execution time of at least one HC task exceeds its low WCET, the system switches from low-criticality mode (LO mode) to high-criticality mode (HI mode). All HC tasks then continue executing under their high WCETs to guarantee the system's safety. In the HI mode, all or some LC tasks are dropped or degraded in favor of HC tasks to ensure the HC tasks' correct execution. Determining an appropriate low WCET for each HC task is crucial for designing efficient MC systems and maximizing QoS. However, even when the low WCETs are set correctly, dropping or degrading LC tasks in the HI mode is undesirable because of its negative impact on other functions or on the entire system's ability to accomplish its mission correctly. Therefore, how to analyze task dropping in the HI mode is a significant challenge in designing efficient MC systems: all HC tasks must be guaranteed to execute successfully, preventing catastrophic damage, while the QoS is improved. Due to the continuous rise in computational demand of MC tasks in safety-critical applications, such as controlling autonomous driving, designers are motivated to deploy MC applications on multi-core platforms. Although the parallel execution offered by multi-core platforms helps improve QoS and meet real-time constraints, high power consumption and core temperatures may make the system more susceptible to failures and instability, which is not desirable in MC applications. Therefore, improving QoS while managing power consumption and guaranteeing real-time constraints is the critical issue in designing such MC systems on multi-core platforms. This thesis addresses the challenges associated with efficient MC system design. We first focus on application analysis, proposing a novel approach to determine appropriate WCETs that provides a reasonable trade-off between the number of LC tasks scheduled at design time and the probability of mode switching at run time, thereby improving system utilization and QoS.
The approach presents an analytical scheme, based on the Chebyshev theorem, to obtain low WCETs at design time. We also show the relationship between the low WCETs and the mode-switching probability, and formulate and solve the problem of improving resource utilization while reducing the mode-switching probability. Further, we analyze LC task dropping in the HI mode to improve QoS. We first propose a heuristic built on a new metric that determines the number of allowable drops in the HI mode, and develop a task schedulability analysis based on this metric. Since the occurrence of the worst-case scenario at run time is a rare event, we then propose a learning-based drop-aware task scheduling mechanism, which carefully monitors alterations in the behavior of the MC system at run time to exploit dynamic slack for improving QoS. Another critical design challenge is how to improve QoS using the parallelism of multi-core platforms while managing their power consumption and temperature. We develop a tree of possible task mappings and schedules at design time to cover all scenarios of task overruns and reduce the LC task drop rate in the HI mode while managing power and temperature in each scheduling scenario. Since dynamic slack is generated when tasks finish early at run time, we propose an online approach that reduces power consumption and maximum temperature by using low-power techniques such as DVFS and task re-mapping while preserving QoS. Specifically, our approach looks several tasks ahead to determine the task whose slack assignment has the most significant effect on power consumption and temperature. However, changing the frequency, selecting a proper task for slack assignment, and choosing a suitable core for task re-mapping at run time can be time-consuming and may cause deadline violations; therefore, we analyze and optimize the run-time scheduler.

Table of contents:
1 Introduction
  1.1 Mixed-Criticality Application Design
  1.2 Mixed-Criticality Hardware Design
  1.3 Certain Challenges and Questions
  1.4 Thesis Key Contributions
    1.4.1 Application Analysis and Modeling
    1.4.2 Multi-Core Mixed-Criticality System Design
  1.5 Thesis Overview
2 Preliminaries and Literature Reviews
  2.1 Preliminaries
    2.1.1 Mixed-Criticality Systems
    2.1.2 Fault-Tolerance, Fault Model and Safety Requirements
    2.1.3 Hardware Architectural Modeling
    2.1.4 Low-Power Techniques and Power Consumption Model
  2.2 Related Works
    2.2.1 Mixed-Criticality Task Scheduling Mechanisms
    2.2.2 QoS Improvement Methods in Mixed-Criticality Systems
    2.2.3 QoS-Aware Power and Thermal Management in Multi-Core Mixed-Criticality Systems
  2.3 Conclusion
3 Bounding Time in Mixed-Criticality Systems
  3.1 BOT-MICS: A Design-Time WCET Adjustment Approach
    3.1.1 Motivational Example
    3.1.2 BOT-MICS in Detail
    3.1.3 Evaluation
  3.2 A Run-Time WCET Adjustment Approach
    3.2.1 Motivational Example
    3.2.2 ADAPTIVE in Detail
    3.2.3 Evaluation
  3.3 Conclusion
4 Safety- and Task-Drop-Aware Mixed-Criticality Task Scheduling
  4.1 Problem Objectives and Motivational Example
  4.2 FANTOM in Detail
    4.2.1 Safety Quantification
    4.2.2 MC Tasks Utilization Bounds Definition
    4.2.3 Scheduling Analysis
    4.2.4 System Upper Bound Utilization
    4.2.5 A General Design Time Scheduling Algorithm
  4.3 Evaluation
    4.3.1 Evaluation with Real-Life Benchmarks
    4.3.2 Evaluation with Synthetic Task Sets
  4.4 Conclusion
5 Learning-Based Drop-Aware Mixed-Criticality Task Scheduling
  5.1 Motivational Example and Problem Statement
  5.2 Proposed Method in Detail
    5.2.1 An Overview of the Design-Time Approach
    5.2.2 Run-Time Approach: Employment of SOLID
    5.2.3 LIQUID Approach
  5.3 Evaluation
    5.3.1 Evaluation with Real-Life Benchmarks
    5.3.2 Evaluation with Synthetic Task Sets
    5.3.3 Investigating the Timing and Memory Overheads of ML Technique
  5.4 Conclusion
6 Fault-Tolerance and Power-Aware Multi-Core Mixed-Criticality System Design
  6.1 Problem Objectives and Motivational Example
  6.2 Design Methodology
  6.3 Tree Generation and Fault-Tolerant Scheduling and Mapping
    6.3.1 Making Scheduling Tree
    6.3.2 Mapping and Scheduling
    6.3.3 Time Complexity Analysis
    6.3.4 Memory Space Analysis
  6.4 Evaluation
    6.4.1 Experimental Setup
    6.4.2 Analyzing the Tree Construction Time
    6.4.3 Analyzing the Run-Time Timing Overhead
    6.4.4 Peak Power Management and Thermal Distribution for Real-Life and Synthetic Applications
    6.4.5 Analyzing the QoS of LC Tasks
    6.4.6 Analyzing the Peak Power Consumption and Maximum Temperature
    6.4.7 Effect of Varying Different Parameters on Acceptance Ratio
    6.4.8 Investigating Different Approaches at Run-Time
  6.5 Conclusion
7 QoS- and Power-Aware Run-Time Scheduler for Multi-Core Mixed-Criticality Systems
  7.1 Research Questions, Objectives and Motivational Example
  7.2 Design-Time Approach
  7.3 Run-Time Mixed-Criticality Scheduler
    7.3.1 Selecting the Appropriate Task to Assign Slack
    7.3.2 Re-Mapping Technique
    7.3.3 Run-Time Management Algorithm
    7.3.4 DVFS Governor in Clustered Multi-Core Platforms
  7.4 Run-Time Scheduler Algorithm Optimization
  7.5 Evaluation
    7.5.1 Experimental Setup
    7.5.2 Analyzing the Relevance Between a Core Temperature and Energy Consumption
    7.5.3 The Effect of Varying Parameters of Cost Functions
    7.5.4 The Optimum Number of Tasks to Look-Ahead and the Effect of Task Re-mapping
    7.5.5 The Analysis of Scheduler Timings Overhead on Different Real Platforms
    7.5.6 The Latency of Changing Frequency in Real Platform
    7.5.7 The Effect of Latency on System Schedulability
    7.5.8 The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement
    7.5.9 The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement in a Multi-Core Platform Based on the ODROID-XU3 Architecture
    7.5.10 Evaluation of Running Real MC Task Graph Model (Unmanned Air Vehicle) on Real Platform
  7.6 Conclusion
8 Conclusion and Future Work
  8.1 Conclusions
  8.2 Future Work
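For intuition on the Chebyshev-based design-time analysis, here is one way such a bound can be derived, using the one-sided Chebyshev (Cantelli) inequality P(C >= mu + a) <= s^2 / (s^2 + a^2); solving for a target mode-switch probability p gives WCET_LO = mu + s * sqrt((1 - p) / p). The gamma-distributed execution times are synthetic, and the thesis' exact formulation may differ:

```python
# Distribution-free low-WCET bound from measured execution times via
# Cantelli's inequality; p is the tolerated mode-switch probability.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=2.5, size=10_000)  # measured exec times

mu, s = samples.mean(), samples.std()
for p in (0.10, 0.01):
    wcet_lo = mu + s * np.sqrt((1 - p) / p)
    empirical = (samples > wcet_lo).mean()   # observed overrun frequency
    print(f"target p={p}: WCET_LO={wcet_lo:.1f}, "
          f"observed switch rate={empirical:.4f}")
```

Because Cantelli's bound is distribution-free, the observed switch rate is typically well below the target p, which is consistent with treating the worst-case overrun as a rare event at run time.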
40

Local Learning Strategies for Data Management Components

Woltmann, Lucas 18 December 2023 (has links)
In a world with an ever-increasing amount of data to process, providing tools for high-quality and fast data processing is imperative. Database Management Systems (DBMSs) are complex adaptive systems supplying reliable and fast data analysis and storage capabilities. To boost the usability of DBMSs even further, a core research area of databases is performance optimization, especially for query processing. With the successful application of Artificial Intelligence (AI) and Machine Learning (ML) in other research areas, the question arises in the database community whether ML can also be beneficial for better data processing in DBMSs. This question has spawned various works successfully replacing DBMS components with ML models. However, these global models share four drawbacks stemming from their large, complex, and inflexible one-size-fits-all structures: high model-architecture complexity, lower prediction quality, slow training, and slow forward passes. All four drawbacks stem from the core expectation of solving a certain problem with one large model at once. The full potential of ML models as DBMS components cannot be reached with a global model because the model's complexity is outmatched by the problem's complexity. Therefore, we present a novel general strategy for using ML models to solve data management problems and to replace DBMS components. The novel strategy is based on four advantages derived from the four disadvantages of global learning strategies. In essence, our local learning strategy uses divide-and-conquer to place less complex but more expressive models that specialize in sub-problems of a data management problem. It splits the problem space into less complex parts that can be solved with lightweight models, circumventing the one-size-fits-all characteristics and drawbacks of global models. We show that this approach, and the lower complexity of the specialized local models, leads to better problem-solving quality and DBMS performance. The local learning strategy is applied and evaluated in three crucial use cases in which DBMS components are replaced with ML models: cardinality estimation, query optimizer hinting, and integer algorithm selection. In all three applications, the benefits of the local learning strategy are demonstrated and compared to related work. We also generalize the strategy for broader application and formulate best practices with instructions for others.
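A minimal sketch of the local-learning strategy on a toy regression task: the problem space is split into parts and a lightweight specialized model is fitted per part, then compared against a single global model; the 1D task and split rule are illustrative stand-ins for, e.g., per-query-template cardinality estimators:

```python
# Divide-and-conquer learning: one simple model per sub-problem beats a
# single one-size-fits-all model on a piecewise target.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.linspace(0, 10, 400).reshape(-1, 1)
y = np.where(X[:, 0] < 5, 2 * X[:, 0], 40 - 3 * X[:, 0])  # piecewise target

global_model = LinearRegression().fit(X, y)               # one-size-fits-all

parts = [X[:, 0] < 5, X[:, 0] >= 5]                       # divide ...
local_models = [LinearRegression().fit(X[m], y[m]) for m in parts]  # ... conquer

pred = np.empty_like(y)
for m, mdl in zip(parts, local_models):
    pred[m] = mdl.predict(X[m])
print("global MSE:", ((global_model.predict(X) - y) ** 2).mean())
print("local  MSE:", ((pred - y) ** 2).mean())
```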
