171

Numerical Evaluation of Pathline Predicates of the Benguela Upwelling System

Nardini, Pascal, Böttinger, Michael, Pogrzeba, Hans, Siegfried, Lydia, Schmidt, Martin, Scheuermann, Gerik 25 January 2019 (has links)
Using simulation data of a regional ocean model, Nardini et al. applied pathline predicates for a detailed post-hoc analysis of the Benguela upwelling system. In this work, we evaluate the accuracy of this technique. Using different temporal samplings, we aim to find minimum requirements for the temporal resolution of the flow data in the context of retroactive particle pathline techniques. Besides the flow field, our simulation data contains synthetic tracer fields for different tracer source regions. Using the flow data, dense trajectories are computed in order to derive "emulated tracer fields" based on the local ratio of pathline particles originating from the tracer source regions to those originating elsewhere, which can then be compared to the original tracer fields. We find that the emulated tracer concentrations are overestimated in comparison to the original ones. However, the shape of the regions with high tracer concentration can be reproduced.
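As a rough illustration of the idea behind these emulated tracer fields, the following minimal Python sketch computes, per grid cell, the fraction of pathline particles whose trajectories started in a tracer source region. It assumes a regular 2D analysis grid; the function names, grid layout, and example data are illustrative assumptions, not the authors' implementation.

import numpy as np

def emulated_tracer_field(particle_positions, from_source, grid_shape, bounds):
    """Estimate a tracer field from pathline particles.

    particle_positions : (N, 2) array of particle (x, y) end positions
    from_source        : (N,) boolean array, True if the particle's pathline
                         originated in the tracer source region
    grid_shape         : (ny, nx) of the analysis grid
    bounds             : (xmin, xmax, ymin, ymax) of the domain
    """
    ny, nx = grid_shape
    xmin, xmax, ymin, ymax = bounds
    # Map each particle to a grid cell.
    ix = np.clip(((particle_positions[:, 0] - xmin) / (xmax - xmin) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((particle_positions[:, 1] - ymin) / (ymax - ymin) * ny).astype(int), 0, ny - 1)
    cell = iy * nx + ix
    # Count particles from the source region and all particles per cell.
    total = np.bincount(cell, minlength=ny * nx).astype(float)
    source = np.bincount(cell[from_source], minlength=ny * nx).astype(float)
    # Local fraction of source particles (one reading of the "local ratio"); empty cells stay 0.
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(total > 0, source / total, 0.0)
    return ratio.reshape(ny, nx)

# Hypothetical example: 5 particles on a 2x2 grid over the unit square.
pos = np.array([[0.1, 0.1], [0.2, 0.3], [0.8, 0.2], [0.7, 0.9], [0.1, 0.8]])
src = np.array([True, False, True, True, False])
print(emulated_tracer_field(pos, src, (2, 2), (0, 1, 0, 1)))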
172

Towards an Image-based Indicator for Peripheral Artery Disease Classification and Localization

Gillmann, Christina, Matsuura, John M., Hagen, Hans, Wischgoll, Thomas 25 January 2019 (has links)
Peripheral Artery Disease (PAD) is a common condition caused by narrowed arteries, in which mostly the legs receive an insufficient supply of blood to sustain their function. This can result in the amputation of extremities or in strokes. In order to quantify the risk, doctors consult a classification table based on the pain response of a patient. This classification is subjective and does not indicate the exact origin of the PAD symptoms; as a result, complications can occur without warning. We present first results for an image-based indicator assisting medical doctors in estimating the stage of PAD and its location. For this purpose, a segmentation tree is utilized to compare the changes between a healthy and a diseased leg. We provide a highlighting mechanism that allows users to review the location of changes in selected structures. To show the effectiveness of the presented approach, we demonstrate a localization of the PAD and show how the presented technique can be utilized as a novel image-based indicator of PAD stages.
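The abstract does not describe how the segmentation trees are compared, so the following is only a hypothetical sketch of the general idea: traverse two segmentation trees (healthy vs. diseased leg) in parallel, record per-structure changes, and flag strongly changed structures for highlighting. All structure names, the volume attribute, and the 20% threshold are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SegNode:
    name: str                      # anatomical structure label (hypothetical)
    volume: float                  # segmented volume of the structure
    children: list = field(default_factory=list)

def compare_trees(healthy: SegNode, diseased: SegNode, changes=None, prefix=""):
    """Walk two segmentation trees in parallel and record relative volume changes."""
    if changes is None:
        changes = {}
    rel_change = (diseased.volume - healthy.volume) / healthy.volume
    changes[prefix + healthy.name] = rel_change
    for h_child, d_child in zip(healthy.children, diseased.children):
        compare_trees(h_child, d_child, changes, prefix + healthy.name + "/")
    return changes

# Hypothetical example: highlight structures whose volume shrank by more than 20%.
healthy = SegNode("leg", 100.0, [SegNode("femoral_artery", 10.0), SegNode("popliteal_artery", 5.0)])
diseased = SegNode("leg", 98.0, [SegNode("femoral_artery", 7.0), SegNode("popliteal_artery", 4.8)])
for structure, change in compare_trees(healthy, diseased).items():
    if change < -0.2:
        print(f"highlight {structure}: {change:.0%} volume change")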
173

Virtual Immersive Concerts

Haack, Lukas Adrian 02 March 2022 (has links)
This thesis is an approach to formalizing the dynamics of a physical concert experience and translating the individual psychological factors, by means of communication technology, to the computer-mediated context. It discusses socio-psychological factors of live concert attendance as well as socio-technical affordances of communication technologies to establish a three-dimensional framework. The derived temporal, spatial, and social dimensions are used to conduct a requirement analysis of suitable technological platforms in order to find common mechanics that foster the mediation of a virtual concert experience. Based on bigraphical reactive systems (BRS), a formalism is developed that models and simulates audience behaviour at distributed events. Finally, computational model checking is used for the model's validation as well as for formal exploration, evaluation, and iterative refinement of the derived mechanics. Thereby, this work contributes to the improvement of computer-based model checking in the context of bigraph analysis and affirms the potential of formal modelling of socio-technical phenomena such as virtual events.
174

Scalable Schedule-Aware Bundle Routing

De Jonckère, Olivier 09 August 2023 (has links)
This thesis introduces approaches providing scalable delay-/disruption-tolerant routing capabilities in scheduled space topologies. The solution is developed for requirements derived from use cases built according to predictions for future space topologies, such as the future Mars communications architecture report from the Interagency Operations Advisory Group. A novel routing algorithm is presented that provides optimized networking performance and avoids the scalability issues inherent to state-of-the-art approaches. This thesis also proposes a new recommendation to render volume management concerns generic and easily exchangeable, including a new, simple management technique that increases volume awareness accuracy while being adaptable to more particular use cases. Additionally, this thesis introduces a more robust and scalable approach for internetworking between subnetworks to increase throughput, reduce delays, and ease configuration thanks to its high flexibility.

1 Introduction
  1.1 Motivation
  1.2 Problem statement
  1.3 Objectives
  1.4 Outline
2 Requirements
  2.1 Use cases
  2.2 Requirements
    2.2.1 Requirement analysis
    2.2.2 Requirements relative to the routing algorithm
    2.2.3 Requirements relative to the volume management
    2.2.4 Requirements relative to interregional routing
3 Fundamentals
  3.1 Delay-/disruption-tolerant networking
    3.1.1 Architecture
    3.1.2 Opportunistic and deterministic DTNs
    3.1.3 DTN routing
    3.1.4 Contact plans
    3.1.5 Volume management
    3.1.6 Regions
  3.2 Contact graph routing
    3.2.1 A non-replication routing scheme
    3.2.2 Route construction
    3.2.3 Route selection
    3.2.4 Enhancements and main features
  3.3 Graph theory and DTN routing
    3.3.1 Mapping with DTN objects
    3.3.2 Shortest path algorithm
    3.3.3 Edge and vertex contraction
  3.4 Algorithmic determinism and predictability
4 Preliminary analysis
  4.1 Node and contact graphs
  4.2 Scenario
  4.3 Route construction in ION-CGR
  4.4 Alternative route search
    4.4.1 Yen’s algorithm scalability
    4.4.2 Blocking issues with Yen
    4.4.3 Limiting contact approaches
  4.5 CGR-multicast and shortest-path tree search
  4.6 Volume management
    4.6.1 Volume obstruction
    4.6.2 Contact sink
    4.6.3 Ghost queue
    4.6.4 Data rate variations
  4.7 Hierarchical interregional routing
  4.8 Other potential issues
5 State-of-the-art and related work
  5.1 Taxonomy
  5.2 Opportunistic and probabilistic approaches
    5.2.1 Flooding approaches
    5.2.2 PROPHET
    5.2.3 MaxProp
    5.2.4 Issues
  5.3 Deterministic approaches
    5.3.1 Movement-aware routing over interplanetary networks
    5.3.2 Delay-tolerant link state routing
    5.3.3 DTN routing for quasi-deterministic networks
    5.3.4 Issues
  5.4 CGR variants and enhancements
    5.4.1 CGR alternative routing table computation
    5.4.2 CGR-multicast
    5.4.3 CGR extensions
    5.4.4 RUCoP and CGR-hop
    5.4.5 Issues
  5.5 Interregional routing
    5.5.1 Border gateway protocol
    5.5.2 Hierarchical interregional routing
    5.5.3 Issues
  5.6 Further approaches
    5.6.1 Machine learning approaches
    5.6.2 Tropical geometry
6 Scalable schedule-aware bundle routing
  6.1 Overview
  6.2 Shortest-path tree routing for space networks
    6.2.1 Structure
    6.2.2 Tree construction
    6.2.3 Tree management
    6.2.4 Tree caching
  6.3 Contact segmentation
    6.3.1 Volume management interface
    6.3.2 Simple volume manager
    6.3.3 Enhanced volume manager
  6.4 Contact passageways
    6.4.1 Regional border definition
    6.4.2 Virtual nodes
    6.4.3 Pathfinding and administration
7 Evaluation
  7.1 Methodology
    7.1.1 Simulation tools
    7.1.2 Simulator extensions
    7.1.3 Algorithms and scenarios
  7.2 Offline analysis
  7.3 Eliminatory processing pressures
  7.4 Networking performance
    7.4.1 Intraregional unicast routing tests
    7.4.2 Intraregional multicast tests
    7.4.3 Interregional routing tests
    7.4.4 Behavior with congestion
  7.5 Requirement fulfillment
8 Summary and Outlook
  8.1 Conclusion
  8.2 Future works
    8.2.1 Next development steps
    8.2.2 Contact graph routing
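For readers unfamiliar with schedule-aware routing, the sketch below shows an earliest-arrival search over a contact plan, which is the basic idea that contact graph routing and the shortest-path tree routing developed in this thesis build on. It is a simplified illustration under stated assumptions: transmission and propagation delays, volume management, and interregional aspects are omitted, and the contact plan is hypothetical.

import heapq

def earliest_arrival(contacts, source, target, t0):
    """Earliest-arrival route through a contact plan (simplified, CGR-style search).

    contacts: list of (from_node, to_node, start, end) tuples describing when a
              link is available; transmission time is ignored in this sketch.
    Returns (arrival_time, path) or (None, None) if the target is unreachable.
    """
    best = {source: t0}
    queue = [(t0, source, [source])]
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == target:
            return t, path
        if t > best.get(node, float("inf")):
            continue
        for u, v, start, end in contacts:
            if u != node or end < t:
                continue                      # contact already over
            arrival = max(t, start)           # wait until the contact opens
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(queue, (arrival, v, path + [v]))
    return None, None

# Hypothetical contact plan: ground -> relay -> lander.
plan = [("ground", "relay", 10, 20), ("relay", "lander", 25, 40)]
print(earliest_arrival(plan, "ground", "lander", 0))   # (25, ['ground', 'relay', 'lander'])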
175

Fuzzy-Analysis in a Generic Polymorphic Uncertainty Quantification Framework

Richter, Bertram 30 November 2022 (has links)
In this thesis, a framework for generic uncertainty analysis is developed. The two basic uncertainty characteristics, aleatoric and epistemic uncertainty, are differentiated. Polymorphic uncertainty, as the combination of these two characteristics, is discussed. The main focus is on epistemic uncertainty, with fuzziness as an uncertainty model. Properties and classes of fuzzy quantities are discussed. Information reduction measures that reduce a fuzzy quantity to a characteristic value are briefly reviewed. Analysis approaches for aleatoric, epistemic, and polymorphic uncertainty are presented. For fuzzy analysis, α-level-based and α-level-free methods are described. As a hybridization of both methods, non-flat α-level optimization is proposed. For numerical uncertainty analysis, the framework PUQpy, which stands for "Polymorphic Uncertainty Quantification in Python", is introduced. The conception, structure, data structures, modules, and design principles of PUQpy are documented. Sequential Weighted Sampling (SWS) is presented as an optimization algorithm for general-purpose optimization as well as for fuzzy analysis. Slice Sampling as a component of SWS is shown. Routines to update Pareto fronts, which are required for the optimization, are benchmarked. Finally, PUQpy is used to analyze example problems as a proof of concept. In these problems, analytical functions with uncertain parameters, characterized by fuzzy and polymorphic uncertainty, are examined.
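To make the α-level-based approach concrete, here is a generic sketch that propagates triangular fuzzy parameters through a function by brute-force optimization on each α-cut. It is not PUQpy's API and does not use Sequential Weighted Sampling; the function names and the example problem are assumptions chosen for illustration.

import numpy as np

def alpha_cut_triangular(left, peak, right, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number at membership level alpha."""
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def fuzzy_propagate(func, fuzzy_inputs, alphas=np.linspace(0, 1, 11), samples=50):
    """Alpha-level-based fuzzy analysis via brute-force optimization on each alpha-cut.

    fuzzy_inputs: list of (left, peak, right) triangular fuzzy parameters.
    Returns a list of (alpha, min_output, max_output) triples describing the
    fuzzy output level by level.
    """
    result = []
    for a in alphas:
        # Cartesian grid over all input alpha-cut intervals (crude global search).
        grids = [np.linspace(*alpha_cut_triangular(left, peak, right, a), samples)
                 for (left, peak, right) in fuzzy_inputs]
        mesh = np.meshgrid(*grids)
        values = func(*mesh)
        result.append((a, float(values.min()), float(values.max())))
    return result

# Hypothetical example: y = x1 * x2 with two triangular fuzzy parameters.
for a, lo, hi in fuzzy_propagate(lambda x1, x2: x1 * x2, [(1, 2, 3), (4, 5, 7)]):
    print(f"alpha={a:.1f}: [{lo:.2f}, {hi:.2f}]")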
176

Methods and Algorithms for Efficient Programming of FPGA-based Heterogeneous Systems for Object Detection

Kalms, Lester 14 March 2023 (has links)
Nowadays, there is a high demand for computer vision applications in numerous application areas, such as autonomous driving or unmanned aerial vehicles. However, the application areas and scenarios are becoming increasingly complex, and their data requirements are growing. Meeting these requirements calls for increasingly powerful computing systems. FPGA-based heterogeneous systems offer an excellent solution in terms of energy efficiency, flexibility, and performance, especially in the field of computer vision. Due to complex applications and the use of FPGAs in combination with other architectures, efficient programming is becoming increasingly difficult. Thus, developers need a comprehensive framework with efficient automation, good usability, reasonable abstraction, and seamless integration of tools. It should provide an easy entry point and reduce the effort to learn new concepts, programming languages, and tools. Additionally, it needs optimized libraries so that users can focus on developing applications without getting involved with the underlying details. These libraries should be well integrated, easy to use, and cover a wide range of possible use cases. The framework also needs efficient algorithms to execute applications on heterogeneous architectures with maximum performance. These algorithms should distribute applications across various nodes with low fragmentation and communication overhead and find a near-optimal solution in a reasonable amount of time. This thesis addresses the research problem of an efficient implementation of object detection applications, their distribution across FPGA-based heterogeneous systems, and methods for automation and integration using toolchains. Within this, the three contributions are the HiFlipVX object detection library, the DECISION framework, and the APARMAP application distribution algorithm. HiFlipVX is an open-source HLS-based FPGA library optimized for performance and resource efficiency. It contains 66 highly parameterizable computer vision functions, including neural networks, and is well suited for design space exploration. It extends the OpenVX standard for feature extraction, which is challenging due to the unknown element size at design time. All functions are streaming capable to achieve maximum performance by increasing parallelism and reducing off-chip memory access. The library does not require external or vendor libraries, which eases project integration, device coverage, and vendor portability, as shown for Intel. It consumed on average 0.39% FFs and 0.32% LUTs for a set of image processing functions compared to a vendor library. A HiFlipVX implementation of the AKAZE feature detector computes between 3.56 and 4.13 times more pixels per second than the related work, while its resource consumption is comparable to optimized VHDL designs. Its neural network extension achieved a speedup of 3.23 for an AlexNet layer in comparison to a related work, while consuming 73% less on-chip memory. Furthermore, this thesis proposes an improved feature extraction implementation that achieves a repeatability of 72.57% when weighting complex cases, while the next best algorithm only achieves 62.99%. DECISION is a framework consisting of two toolchains for the efficient programming of FPGA-based heterogeneous systems. Both integrate HiFlipVX and use a joint OpenVX-based frontend to implement computer vision applications. It abstracts the underlying hardware and algorithm details while covering a wide range of architectures and applications.
The first toolchain targets x86-based systems consisting of CPUs, GPUs, and FPGAs using OpenCL (Open Computing Language). To create a heterogeneous schedule, it considers device profiles, kernel profiles and estimates, and FPGA dataflow characteristics. It manages synchronization, memory transfers, and data coherence at design time. It creates a runtime-optimized program which excels by its high parallelism and low overhead. Additionally, this thesis looks at the integration of OpenCL-based libraries, automatic OpenCL kernel generation, and OpenCL kernel optimization and comparison for different architectures. The second toolchain creates an application-specific and adaptive NoC-based architecture. The streaming-optimized architecture enables the reuse of vision functions by multiple applications to improve resource efficiency while maintaining high performance. For a set of example applications, the resource consumption was more than halved, while the performance overhead was only 0.015%. APARMAP is an application distribution algorithm for partition-based and mesh-like FPGA topologies. It uses a NoC (Network-on-Chip) as communication infrastructure to connect reconfigurable regions and generate an application-specific hardware architecture. The algorithm uses load balancing techniques to find reasonable solutions within a predictable and scalable amount of time. It optimizes solutions using various heuristics, such as Simulated Annealing and Tabu Search. It uses a multithreaded grid-based approach to prevent threads from calculating the same solution and getting stuck in local minima. Its constraints and objectives are the FPGA resource utilization, NoC bandwidth consumption, NoC hop count, and execution time of the proposed algorithm. The evaluation showed that the algorithm can deal with heterogeneous and irregular host graph topologies. The algorithm showed good scalability in terms of computation time for an increasing number of nodes and partitions. It was able to achieve an optimal placement for a set of example graphs of up to 196 nodes on host graphs of up to 49 partitions. For a real application with 271 nodes and 441 edges, it achieved a distribution with low resource fragmentation in an average time of 149 ms.
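As a loose illustration of this kind of application distribution, the sketch below maps a small task graph onto a mesh of reconfigurable partitions with plain simulated annealing, trading NoC hop count against partition capacity. It is not APARMAP, which combines several heuristics, a multithreaded grid-based search, and further constraints; all names, weights, and parameters here are invented for the example.

import math
import random

def manhattan(p, q, width):
    """Hop distance between two partitions on a mesh NoC, indexed row-major."""
    return abs(p % width - q % width) + abs(p // width - q // width)

def anneal_placement(tasks, edges, capacities, width, steps=20000, t_start=2.0):
    """Toy simulated-annealing mapping of a task graph onto mesh partitions.

    tasks:      {task: resource_cost}
    edges:      list of (task_a, task_b, traffic) communication edges
    capacities: list of per-partition resource capacities
    width:      mesh width (number of partitions per row)
    """
    placement = {t: random.randrange(len(capacities)) for t in tasks}

    def cost(pl):
        # Communication cost: traffic weighted by NoC hop count.
        comm = sum(w * manhattan(pl[a], pl[b], width) for a, b, w in edges)
        # Penalty for exceeding partition capacity (resource fragmentation proxy).
        load = [0.0] * len(capacities)
        for t, p in pl.items():
            load[p] += tasks[t]
        over = sum(max(0.0, l - c) for l, c in zip(load, capacities))
        return comm + 100.0 * over

    current = cost(placement)
    for step in range(steps):
        temp = t_start * (1.0 - step / steps) + 1e-9
        t = random.choice(list(tasks))
        old = placement[t]
        placement[t] = random.randrange(len(capacities))
        new = cost(placement)
        if new > current and random.random() > math.exp((current - new) / temp):
            placement[t] = old          # reject the move
        else:
            current = new               # accept the move
    return placement, current

# Hypothetical example: four tasks on a 2x2 mesh of partitions.
tasks = {"pre": 2.0, "feat": 3.0, "match": 2.5, "post": 1.0}
edges = [("pre", "feat", 4.0), ("feat", "match", 3.0), ("match", "post", 1.0)]
mapping, final_cost = anneal_placement(tasks, edges, capacities=[4.0] * 4, width=2)
print(mapping, final_cost)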
177

Post-Training Optimization of Cross-layer Approximate Computing for Edge Inference of Deep Learning Applications

De la Parra Aparicio, Cecilia Eugenia 07 February 2024 (has links)
Over the past decade, the rapid development of deep learning (DL) algorithms has enabled extraordinary advances in perception tasks throughout different fields, from computer vision to audio signal processing. Additionally, increasing computational resources available in supercomputers and graphics processor clusters have provided a suitable environment to train larger and deeper deep neural network (DNN) models for improved performance. However, the resulting memory bandwidth and computational requirements of such DNN models restrict their deployment in embedded systems with constrained hardware resources. To overcome this challenge, it is important to establish new paradigms that reduce the computational workload of such DL algorithms while maintaining their original accuracy. A key observation of previous research is that DL models are resilient to input noise and computational errors; therefore, a reasonable approach to decreasing the hardware requirements is to embrace DNN resiliency and utilize approximate computing techniques at different system design layers. This approach requires, however, constant monitoring as well as a careful combination of approximation techniques to avoid performance degradation while maximizing computational savings. Within this context, the focus of this thesis is the simulation of cross-layer approximate computing (AC) methods for DNN computation and the development of optimization methods to compensate for AC errors in approximated DNNs. The first part of this thesis proposes the simulation framework ProxSim. This framework enables accelerated approximate computational unit (ACU) simulation for the evaluation and training of approximated DNNs. ProxSim supports quantization and approximation of common neural layers such as fully connected (FC), convolutional, and recurrent layers. A performance evaluation using a variety of DNN architectures, as well as a comparison with the state of the art, is also presented. The author used ProxSim to implement and evaluate the methods presented in this work. The second part of this thesis introduces an approach to model the approximation error in DNN computation. First, the author thoroughly analyzes the error caused by approximate multipliers used to compute the multiply-and-accumulate (MAC) operations in DNN models. From this analysis, a statistical model of the approximation error is obtained. Through various experiments with DNNs for image classification, the proposed model is verified and compared with other methods from the literature. The results demonstrate the validity of the approximation error model and reinforce a general understanding of approximate computing in DNNs. In the third part of this thesis, the author presents a methodology for uniform systematic approximation of DNNs. This methodology focuses on the optimization of full DNN approximation with a single type of ACU to minimize power consumption without accuracy loss. The backbone of this methodology is the custom fine-tuning methods the author proposes to compensate for the approximation error. These methods enable the use of ACUs with large approximation errors, which results in significant power savings and negligible accuracy losses. This process is corroborated by extensive experiments, in which the estimated savings and the accuracy achieved after approximation are thoroughly examined using ProxSim.
In the last part of this thesis, the author proposes two different methodologies to further boost energy savings after applying uniform approximation. This increment in energy savings is achieved by computing more resilient DNN elements (neurons or layers) with increased approximation levels. The first methodology focuses on iterative kernel-wise approximation and quantization enabled by a custom approximate MAC unit. The second method is based on flexible layer-wise approximation and is applied to bit-decomposed in-memory computing (IMC) architectures as a case study to demonstrate the effectiveness of the proposed approach.
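The concrete approximate computational units and the error model used in ProxSim are not given in this abstract; the toy sketch below only illustrates how the error of an approximate multiplier inside a MAC operation can be simulated and characterized statistically. The truncation-based multiplier, the layer sizes, and the int8 quantization are assumptions made for this example.

import numpy as np

def approx_multiply(a, b, drop_bits=4):
    """Toy approximate multiplier: truncate the lowest `drop_bits` of the exact
    product, mimicking a lower-power hardware multiplier with reduced precision."""
    exact = a.astype(np.int64) * b.astype(np.int64)
    return (exact >> drop_bits) << drop_bits

def approx_mac(weights, activations, drop_bits=4):
    """Multiply-accumulate over an int8-quantized layer using the approximate multiplier."""
    return approx_multiply(weights, activations, drop_bits).sum(axis=-1)

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=(64, 256), dtype=np.int8)   # hypothetical FC layer weights
x = rng.integers(-128, 128, size=256, dtype=np.int8)         # hypothetical input activations

exact = (w.astype(np.int64) * x.astype(np.int64)).sum(axis=-1)
approx = approx_mac(w, x)
err = approx - exact
print("mean error:", err.mean(), "std:", err.std())           # statistics that an AC error model could use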
178

Reconfigurable Computing Systems for Robotics using a Component-Oriented Approach

Podlubne, Ariel 18 December 2023 (has links)
Robotic platforms are becoming more complex due to the wide range of modern applications, including multiple heterogeneous sensors and actuators. In order to comply with real-time and power-consumption constraints, these systems need to process a large amount of heterogeneous data from multiple sensors and take action (via actuators), which is a problem because these systems have limited memory storage, bandwidth, and computational power. Field Programmable Gate Arrays (FPGAs) are programmable logic devices that offer high-speed parallel processing. FPGAs are particularly well suited for applications that require real-time processing, high bandwidth, and low latency. One of the fundamental advantages of FPGAs is their flexibility in designing hardware tailored to specific needs, making them adaptable to a wide range of applications. They can be programmed to pre-process data close to the sensors, which reduces the amount of data that needs to be transferred to other computing resources, improving overall system efficiency. Additionally, the reprogrammability of FPGAs enables them to be repurposed for different applications, providing a cost-effective solution for systems that need to adapt quickly to changing demands. The performance per watt of FPGAs is close to that of Application-Specific Integrated Circuits (ASICs), with the added advantage of being reprogrammable. Despite all the advantages of FPGAs (e.g., energy efficiency, computing capabilities), the robotics community has so far not fully adopted them as part of its systems, for several reasons. First, designing FPGA-based solutions requires hardware knowledge and longer development times, as their programmability is more challenging than that of Central Processing Units (CPUs) or Graphics Processing Units (GPUs). Second, porting a robotics application (or parts of it) from software to an accelerator requires adequate interfaces between software and FPGAs. Third, the robotics workflow is already complex on its own, combining several fields such as mechanics, electronics, and software. There have been partial contributions in the state of the art towards FPGAs as part of robotics systems. However, a study of FPGAs as a whole for robotics systems is missing in the literature, which is the primary goal of this dissertation. Three main objectives have been established to accomplish this: (1) define all components required for an FPGA-based system for robotics applications as a whole; (2) establish how all the defined components are related; (3) with the help of Model-Driven Engineering (MDE) techniques, generate these components, deploy them, and integrate them into existing solutions. The component-oriented approach proposed in this dissertation provides a proper solution for designing and implementing FPGA-based designs for robotics applications. The modular architecture, the tool 'FPGA Interfaces for Robotics Middlewares' (FIRM), and the toolchain 'FPGA Architectures for Robotics' (FAR) provide a set of tools and a comprehensive design process that enable the development of complex FPGA-based designs in a more straightforward and efficient way. The component-oriented approach contributes significantly to the state of the art in FPGA-based designs for robotics applications and helps to promote their wider adoption and use by specialists with little FPGA knowledge.
179

Explainable Artificial Intelligence for Image Segmentation and for Estimation of Optical Aberrations

Vinogradova, Kira 18 December 2023 (has links)
State-of-the-art machine learning methods such as convolutional neural networks (CNNs) are frequently employed in computer vision. Despite their high performance on unseen data, CNNs are often criticized for lacking transparency, that is, for providing very limited, if any, information about the internal decision-making process. In some applications, especially in healthcare, such transparency of algorithms is crucial for end users, as trust in diagnosis and prognosis is important not only for the satisfaction and potential adherence of patients, but also for their health. Explainable artificial intelligence (XAI) aims to open up this “black box,” often perceived as a cryptic and inconceivable algorithm, to increase understanding of the machines’ reasoning. XAI is an emerging field, and techniques for making machine learning explainable are becoming increasingly available. XAI for computer vision mainly focuses on image classification, whereas interpretability in other tasks remains challenging. Here, I examine explainability in computer vision beyond image classification, namely in semantic segmentation and 3D multitarget image regression. This thesis consists of five chapters. In Chapter 1 (Introduction), the background of artificial intelligence (AI), XAI, computer vision, and optics is presented, and definitions of the XAI terminology are proposed. Chapter 2 is focused on explaining the predictions of U-Net, a CNN commonly used for semantic image segmentation, and variations of this architecture. To this end, I propose the gradient-weighted class activation mapping for segmentation (Seg-Grad-CAM) method, based on the well-known Grad-CAM method for explainable image classification. In Chapter 3, I present the application of deep learning to the estimation of optical aberrations in microscopy biodata by identifying the present Zernike aberration modes and their amplitudes. The CNN-based approach PhaseNet can accurately estimate monochromatic aberrations in images of point light sources; I extend this method to objects of complex shapes. In Chapter 4, an approach for explainable 3D multitarget image regression is reported. First, I visualize how the model differentiates the aberration modes using the local interpretable model-agnostic explanations (LIME) method adapted for 3D image classification. Then I “explain,” using LIME modified for multitarget 3D image regression (Image-Reg-LIME), the outputs of the regression model for estimation of the amplitudes. In Chapter 5, the results are discussed in a broader context. The contribution of this thesis is the development of explainability methods for semantic segmentation and 3D multitarget image regression of optical aberrations. The research opens the door for further enhancement of AI’s transparency.

Title Page
List of Figures
List of Tables
1 Introduction
  1.1 Essential Definitions
    1.1.1 Artificial intelligence
    1.1.2 Explainable
    1.1.3 Proposed definitions
  1.2 Explainable Artificial Intelligence
    1.2.1 Aims and applications
    1.2.2 Methods
  1.3 Computer Vision
    1.3.1 Applications
    1.3.2 Image classification
    1.3.3 Image regression
    1.3.4 Image segmentation
  1.4 Optics
    1.4.1 Aberrations
    1.4.2 Zernike polynomials
  1.5 Thesis Overview
    1.5.1 Motivation
    1.5.2 Dissertation outline
2 Explainable Image Segmentation
  2.1 Abstract
  2.2 Related Work
  2.3 Methods
    2.3.1 CAM
    2.3.2 Grad-CAM
    2.3.3 U-Net
    2.3.4 Seg-Grad-CAM
  2.4 Data
    2.4.1 Circles
    2.4.2 TextureMNIST
    2.4.3 Cityscapes
  2.5 Results
    2.5.1 Circles
    2.5.2 TextureMNIST
    2.5.3 Cityscapes
  2.6 Applications
  2.7 Conclusions
3 Estimation of Aberrations
  3.1 Abstract
  3.2 Related Work
  3.3 Methods
    3.3.1 PhaseNet
    3.3.2 PhaseNet data generator
    3.3.3 Retrieval of noise parameters
    3.3.4 Data generator with phantoms
    3.3.5 Restoration via deconvolution
    3.3.6 Convolution with the “zero” synthetic PSF
  3.4 Data
    3.4.1 Astrocytes (synthetic data)
    3.4.2 Fluorescent beads
    3.4.3 Drosophila embryo (live sample)
    3.4.4 Neurons (fixed sample)
  3.5 Results
    3.5.1 Astrocytes
    3.5.2 Conclusions on the results for astrocytes
    3.5.3 Fluorescent beads
    3.5.4 Conclusions on the results for fluorescent beads
    3.5.5 Drosophila embryo
    3.5.6 Conclusions on the results for Drosophila embryo
    3.5.7 Neurons
  3.6 Conclusions
4 Explainable Multitarget Image Regression
  4.1 Abstract
  4.2 Related Work
  4.3 Methods
    4.3.1 LIME
    4.3.2 Superpixel algorithms
    4.3.3 LIME for 3D image classification
    4.3.4 Image-Reg-LIME: LIME for 3D image regression
  4.4 Results: Classification of Aberrations
    4.4.1 Transforming the regression task into classification
    4.4.2 Data augmentation
    4.4.3 Parameter search
    4.4.4 Clustering of 3D images
    4.4.5 Explanations of classification
    4.4.6 Conclusions on the results for classification
  4.5 Results: Explainable Regression of Aberrations
    4.5.1 Explanations with a reference value
    4.5.2 Validation of explanations
  4.6 Conclusions
5 Conclusions and Outlook
References
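The weighting step at the core of the proposed Seg-Grad-CAM follows the Grad-CAM recipe: channel weights from globally averaged gradients, a weighted sum of feature maps, and a ReLU. The sketch below shows only this step with NumPy; the region-restricted score, the upsampling to input resolution, and the U-Net integration described in the thesis are omitted, and the gradients are assumed to come from a deep-learning framework's autograd.

import numpy as np

def seg_grad_cam(activations, gradients):
    """Grad-CAM-style heatmap from a convolutional layer of a segmentation CNN.

    activations: (C, H, W) feature maps of the chosen layer
    gradients:   (C, H, W) gradients of the target-class score, summed over the
                 pixels of interest, with respect to these feature maps
    """
    # Channel weights: global average pooling of the gradients (as in Grad-CAM).
    weights = gradients.mean(axis=(1, 2))                             # (C,)
    # Weighted sum of the feature maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization.
    return cam / cam.max() if cam.max() > 0 else cam

# Hypothetical tensors standing in for a U-Net layer and its autograd output.
acts = np.random.rand(64, 32, 32)
grads = np.random.randn(64, 32, 32)
heatmap = seg_grad_cam(acts, grads)
print(heatmap.shape, heatmap.min(), heatmap.max())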
180

Streaming-Based Progressive Enhancement of Websites for Slow and Error-Prone Networks

Vogel, Lucas Jacob 29 June 2023 (has links)
This thesis aims to improve the loading times of web pages by streaming their content in a non-render-blocking way. At the beginning of this thesis, a large-scale analysis was performed, spanning all downloadable pages of the top 10,000 web pages according to the Tranco list. This analysis gathered data about the render-blocking properties of web page resources, including HTML, JavaScript, and CSS. It further gathered data about code coverage, giving insight into how much of the render-blocking code is actually used. From this, the structural optimization potential could be determined. Less render-blocking code will, in turn, lead to faster loading times, since less data is required to display the page. The analysis showed that there is significant optimization potential left. On average, modern web pages are built with a combined 86.7% of JavaScript and CSS, the rest being HTML. Both JavaScript and CSS are loaded mostly render-blocking, with 91.8% of JavaScript and 89.47% of CSS loaded in this way. Furthermore, only 40.8% of JavaScript and 15.9% of CSS is used until render. This shows that, on average, web pages have significant room for improvement. The concept, which is then developed based on the results of this analysis, aims to load web pages in a new way by streaming all render-blocking content. The related work showed that multiple sub-techniques are required first, which were conceptualized next. First, an optimization and splitting tool for CSS is proposed, called Essential. This is followed by an optimization framework concept for JavaScript, consisting of Waiter and AUTRATAC. Lastly, a backward-compatible approach was developed, which allows for splitting HTML and streaming all content to a client. The evaluation showed that the streamed web page loads significantly faster when comparing FCP, content "above the fold," and the total transfer time of all render-blocking resources of the document. For example, the case study determined that the streamed page could reduce the time until FCP by 83.3% at 2 Mbps and the time until the last render-blocking data is transferred by up to 70.4% at 2 Mbps. Furthermore, existing streaming methods were compared, determining that WebSockets meet the requirements to stream web page content sufficiently well. Lastly, an anonymous online user questionnaire showed that 85% of users preferred this new style of loading pages.
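The thesis streams split HTML over WebSockets using the Essential, Waiter, and AUTRATAC tooling; none of that is reproduced here. As a generic, framework-free illustration of the underlying idea, the sketch below serves a minimal render-able shell (with only critical CSS) first and streams the remaining, render-blocking parts afterwards. Host, port, and the page content are placeholder assumptions.

from wsgiref.simple_server import make_server
import time

# Hypothetical split of a page: a minimal renderable shell first, the rest later.
SHELL = """<!doctype html>
<html><head><style>/* critical, above-the-fold CSS only */ body{font-family:sans-serif}</style></head>
<body><h1>Loads first</h1><div id="late"></div>
"""
DEFERRED = """<script>/* non-critical JS, delivered without blocking the first render */</script>
</body></html>
"""

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    def stream():
        yield SHELL.encode()        # the browser can start rendering immediately
        time.sleep(0.5)             # stand-in for fetching/generating the remaining content
        yield DEFERRED.encode()     # render-blocking parts arrive after the first paint
    return stream()

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, app) as srv:
        print("Serving on http://127.0.0.1:8000")
        srv.serve_forever()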
