  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A Narrow Band Level Set Method for Surface Extraction from Unstructured Point-based Volume Data

Rosenthal, Paul, Molchanov, Vladimir, Linsen, Lars 24 June 2011 (has links)
Level-set methods have become a valuable and well-established field of visualization over the last decades. Different implementations exist, addressing different design goals and data types. In particular, level sets can be used to extract isosurfaces from scalar volume data that fulfill certain smoothness criteria. Recently, such an approach has been generalized to operate on unstructured point-based volume data, where data points are neither arranged on a regular grid nor connected in the form of a mesh. Utilizing this new development, one can avoid interpolation to a regular grid, which inevitably introduces interpolation errors. However, the global processing of the level-set function can be slow when dealing with unstructured point-based volume data sets containing several million data points. We propose an improved level-set approach that performs the level-set process locally. Since for isosurface extraction we are only interested in the zero level set, values are updated only in regions close to it. In each iteration of the level-set process, the zero level set is extracted using direct isosurface extraction from unstructured point-based volume data, and a narrow band around the zero level set is constructed. The band consists of two parts: an inner and an outer band. The inner band contains all data points within a small area around the zero level set; these points are updated when executing the level-set step. The outer band encloses the inner band, providing all those neighbors of the inner-band points that are necessary to approximate gradients and mean curvature. Neighborhood information is obtained using an efficient kd-tree scheme; gradients and mean curvature are estimated using a four-dimensional least-squares fitting approach.
Compared to the global approach, we demonstrate that this local level-set approach for unstructured point-based volume data achieves a significant speed-up of one order of magnitude for data sets in the range of several million data points, with equivalent quality and robustness.
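The inner/outer band construction described in the abstract can be sketched as follows. This is an illustrative minimal version, not the authors' code: brute-force distance computations stand in for the kd-tree queries, and `delta` and `radius` are hypothetical band parameters.

```python
import numpy as np

def build_narrow_band(points, phi, delta=0.1, radius=0.3):
    """Split data points into an inner band (points close to the zero
    level set, which are updated in each level-set step) and an outer
    band (neighbors of inner-band points, needed only for gradient and
    mean-curvature estimation)."""
    inner = np.abs(phi) < delta                    # points near the zero level set
    # pairwise distances from every point to every inner-band point
    d = np.linalg.norm(points[:, None, :] - points[None, inner, :], axis=-1)
    near_inner = (d < radius).any(axis=1)          # within `radius` of the inner band
    outer = near_inner & ~inner                    # enclosing ring, excluding inner points
    return inner, outer
```

In the paper's setting the neighbor queries would be answered by the kd-tree scheme rather than a dense distance matrix, which is what makes the local approach scale to millions of points.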
52

Algorithmen der Bildanalyse und -synthese für große Bilder und Hologramme

Kienel, Enrico 27 November 2012 (has links)
This thesis addresses algorithms from the fields of image segmentation and of data synthesis for the so-called hologram printing principle. Motivated by an anatomical research project, active contours are employed for the semi-automatic segmentation of digitized histological sections. The particular challenge lies in developing approaches that adapt the method to very large images, which in this context can reach sizes of several hundred megapixels. Aiming for the greatest possible efficiency while restricting to consumer hardware, ideas are presented that make active-contour-based segmentation feasible at such image sizes for the first time and that contribute to acceleration and reduced memory consumption. In addition, the method was extended by an intuitive tool that allows interactive local correction of the final contour, considerably increasing its practical usability. The second part of the thesis deals with a printing principle for producing holograms of virtual objects. The hologram printer, whose name deliberately recalls the working principle of an inkjet printer, requires special discrete image data called elementary holograms. These carry the visual information of different viewing directions through a fixed geometric location on the hologram plane. A complete hologram composed of many elementary holograms produces a considerable data volume, which, depending on the parameters, can quickly reach the terabyte range.
Two independent algorithms for generating suitably prepared data under intensive use of standard graphics hardware are presented, compared with respect to their computational and memory complexity, and assessed with regard to quality aspects.
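One way to bound memory when running active contours on images of several hundred megapixels, as discussed above, is to load only the tiles that the evolving contour can currently touch. The following is a hypothetical sketch of that idea, not the thesis implementation; the tile size and the bounding-box heuristic are assumptions.

```python
import numpy as np

def tiles_touching_contour(image_shape, contour, tile=256):
    """Yield (row, col) origins of image tiles that intersect the
    axis-aligned bounding box of the current contour. Only these tiles
    need to be resident in memory for one evolution step; the rest of
    the image can stay on disk."""
    (r0, c0), (r1, c1) = contour.min(axis=0), contour.max(axis=0)
    rows = range(int(r0) // tile * tile, int(r1) + 1, tile)
    cols = range(int(c0) // tile * tile, int(c1) + 1, tile)
    for r in rows:
        for c in cols:
            if r < image_shape[0] and c < image_shape[1]:
                yield (r, c)
```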
53

Strategien zur Datenfusion beim Maschinellen Lernen

Schwalbe, Karsten, Groh, Alexander, Hertwig, Frank, Scheunert, Ulrich 25 November 2019 (has links)
Smart inspection systems will be a key building block for quality assurance in industrial manufacturing and production, particularly for complex inspection and evaluation processes. In recent years, learning-based methods have emerged as especially promising for such tasks. Their use typically brings considerable performance improvements over conventional, rule- or geometry-based methods. However, the black-box character of these algorithms means that interpretations of the computed prediction confidences must be questioned critically. Trust in the results of algorithms based on machine learning can be increased by employing several mutually independent methods. Data fusion strategies are then needed to combine the results of the different methods into a final result. Building on a brief overview of important approaches to object classification, this conference contribution presents corresponding fusion strategies and evaluates them on a case study. Based on the results, the potential of data fusion for machine learning is then discussed.
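Three classic fusion strategies for combining independent classifiers, of the general kind discussed above, can be sketched as follows. This is a generic illustration, not the strategies evaluated in the contribution; the strategy names are assumptions.

```python
import numpy as np

def fuse_probabilities(prob_list, strategy="mean"):
    """Fuse per-class probability vectors from independent classifiers.
    'mean' averages the scores, 'product' multiplies and renormalizes,
    'vote' takes a majority vote over the individual argmax decisions."""
    P = np.asarray(prob_list)                 # shape: (n_classifiers, n_classes)
    if strategy == "mean":
        fused = P.mean(axis=0)
    elif strategy == "product":
        fused = P.prod(axis=0)
        fused /= fused.sum()
    elif strategy == "vote":
        votes = np.bincount(P.argmax(axis=1), minlength=P.shape[1])
        fused = votes / votes.sum()
    return int(fused.argmax()), fused
```

Averaging is robust to a single overconfident classifier, while the product rule rewards agreement; which behaves better depends on how correlated the classifiers' errors are.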
54

Optische Methoden zur Positionsbestimmung auf Basis von Landmarken

Bilda, Sebastian 24 April 2017 (has links)
Indoor positioning is receiving more and more attention nowadays. Besides navigation through a building, location-based services, which provide additional information about specific objects, are of particular importance. Because GPS signals are too weak to penetrate buildings, other localization techniques must be found. Besides the commonly used positioning via evaluation of received radio signals, there are optical methods that localize with the help of landmarks. These camera-based procedures have the advantage that positioning with centimeter accuracy is often possible. In this master's thesis, the position in a building is determined through the detection of ArUco markers and door signs in camera images. The evaluation is done with the Microsoft Kinect v2 and the Lenovo Phab 2 Pro smartphone; besides color images, both devices provide depth data generated by time-of-flight sensors. The range to a detected landmark is calculated by comparing the landmark's corner points extracted from the image with the object's real geometric dimensions taken from a database. In addition, the position is determined from the depth data. Finally, both procedures are compared with each other, and a statement is made about the accuracy and reliability of the algorithm developed in this thesis.
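The optical range estimation described above rests on the pinhole camera model: a landmark of known physical size appears smaller in the image the farther away it is. A minimal sketch of that relation (not the thesis code; the focal length in pixels would come from camera calibration):

```python
def marker_distance(real_width_m, pixel_width, focal_length_px):
    """Estimate camera-to-landmark distance with the pinhole model:
    distance = real_width * focal_length / width_in_pixels.
    pixel_width is taken from the detected marker's corner points;
    real_width_m comes from the landmark database."""
    return real_width_m * focal_length_px / pixel_width
```

For example, a 10 cm wide ArUco marker imaged 100 px wide by a camera with a 1000 px focal length is estimated to be 1 m away.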
55

Data Visualization for Statistical Analysis and Discovery in Container Surface Characterization at the Nano-Scale and Micro-Scale

Wendelberger, James George, Smith, Paul Herrick 25 January 2019 (has links)
Visualization is used for stainless steel container wall and lid cross-section characterization. Two specific types of containers are examined: 3013 and SAVY. The container wall examined is from a sample of the inner container of a 3013 container; the inner lid cross section examined is from a SAVY container. Laser confocal microscope data and photographic data are used to determine features of the surfaces. The surface features are then characterized by various feature statistics, such as maximum depth, area, and eccentricity. The purpose of this pilot study is to demonstrate the effectiveness of using the methodology to detect potential corrosion events on the inner container surfaces. The features are used to quantify these corrosion events. An automatic image analysis system uses this methodology to classify images for possible further human analysis by flagging possible corrosion events. A manual image analysis methodology is used to determine the amount of MnS on the SAVY container lid cross section. Visualization is an integral component of the analysis methodology.
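Feature statistics of the kind named above (area, maximum depth, eccentricity) can be computed from a binary feature mask and a depth map, for example from confocal data. A hypothetical sketch, not the study's pipeline; the elongation ratio here is a covariance-based stand-in for eccentricity.

```python
import numpy as np

def feature_stats(depth_map, mask):
    """Compute simple statistics for one detected surface feature:
    its area (pixel count), its maximum depth, and an elongation ratio
    derived from the covariance of the masked pixel coordinates."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    max_depth = float(depth_map[mask].max())
    cov = np.cov(np.stack([ys, xs]))          # spatial spread of the feature
    evals = np.sort(np.linalg.eigvalsh(cov))
    elongation = float(np.sqrt(evals[1] / max(evals[0], 1e-12)))
    return {"area": area, "max_depth": max_depth, "elongation": elongation}
```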
56

Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

Prakash, Mangal 06 April 2022 (has links)
Understanding the processes of cellular development and the interplay of cell shape changes, division and migration requires investigation of developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as the available photon budget and sample sensitivity. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or imperfections in the camera sensor and internal electronics. The noisy nature of images as well as the artefacts prohibit accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks. Supervised DL based methods are, however, plagued by many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high quality images for training. Obtaining such image pairs can be very hard and virtually impossible in most biomedical imaging applications owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation. The first part of this thesis deals with unsupervised image denoising and artefact removal.
For unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications. In the second part of this thesis, the problem of cell/nucleus segmentation is addressed. The focus is especially on practical scenarios where ground truth annotations for training DL based segmentation methods are scarcely available. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations. Several training strategies are presented in this work to leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation from the perspective of solving a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input. In summary, this thesis seeks to introduce new unsupervised denoising and artefact removal methods as well as semi-supervised segmentation methods which can be easily deployed to directly and immediately benefit biomedical practitioners with their research.
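A common building block of unsupervised denoising from single noisy images is blind-spot masking: a few pixels are replaced by random neighbors, and a network is trained to predict the original noisy values at exactly those positions. The following is a simplified sketch of that masking step only (not the thesis' methods; a full scheme would also exclude the center pixel from the replacement choices and feed the pair into a training loop).

```python
import numpy as np

def blind_spot_batch(img, n_mask=64, rng=None):
    """Prepare one blind-spot training pair from a single noisy image:
    a copy in which n_mask random interior pixels are replaced by a
    randomly chosen neighboring pixel value, plus the coordinates at
    which the loss would be evaluated against the original values."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    ys = rng.integers(1, h - 1, n_mask)
    xs = rng.integers(1, w - 1, n_mask)
    dy = rng.integers(-1, 2, n_mask)         # neighbor offsets in {-1, 0, 1}
    dx = rng.integers(-1, 2, n_mask)
    masked = img.copy()
    masked[ys, xs] = img[ys + dy, xs + dx]   # blind-spot replacement
    return masked, (ys, xs)
```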
57

A computational framework for multidimensional parameter space screening of reaction-diffusion models in biology

Solomatina, Anastasia 16 March 2022 (has links)
Reaction-diffusion models have been widely successful in explaining a large variety of patterning phenomena in biology, ranging from embryonic development to cancer growth and angiogenesis. First proposed by Alan Turing in 1952 and applied to a simple two-component system, reaction-diffusion models describe spontaneous spatial pattern formation, driven purely by interactions of the system components and their diffusion in space. Today, access to unprecedented amounts of quantitative biological data allows us to build and test biochemically accurate reaction-diffusion models of intracellular processes. However, any increase in model complexity increases the number of unknown parameters and thus the computational cost of model analysis. Efficiently characterizing the behavior and robustness of models with many unknown parameters is therefore a key challenge in systems biology. Here, we propose a novel computational framework for efficient high-dimensional parameter space characterization of reaction-diffusion models. The method leverages the $L_p$-Adaptation algorithm, an adaptive-proposal statistical method for approximate high-dimensional design centering and robustness estimation. Our approach is based on an oracle function, which describes for each point in parameter space whether the corresponding model fulfills given specifications. We propose specific oracles to estimate four parameter-space characteristics: bistability, instability, capability of spontaneous pattern formation, and capability of pattern maintenance. We benchmark the method and demonstrate that it allows exploring the ability of a model to undergo pattern-forming instabilities and to quantify model robustness for model selection in time polynomial in the dimensionality. We present an application of the framework to reconstituted membrane domains bearing the small GTPase Rab5 and propose molecular mechanisms that potentially drive pattern formation.
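The kind of two-component system the abstract refers to can be made concrete with the Gray-Scott model, a standard example of Turing-style patterning (chosen here for illustration; it is not necessarily the model studied in the thesis). One explicit time step on a periodic grid:

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion
    system: two species diffuse at different rates and react via the
    term u*v^2, which for suitable (F, k) produces spontaneous spot
    and stripe patterns from near-homogeneous initial conditions."""
    def lap(a):  # 5-point Laplacian with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    uvv = u * v * v
    u_new = u + dt * (Du * lap(u) - uvv + F * (1 - u))
    v_new = v + dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u_new, v_new
```

An oracle of the kind described above would run such a step repeatedly for one parameter point (F, k, Du, Dv) and report whether a pattern forms and persists.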
58

Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

Nazari, Mahmood 17 March 2022 (has links)
Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level ("theranostics"), where radiolabeled targeted molecules are directly injected into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson's disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited into the organs-at-risk for patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in contrast, was analyzed on dopamine-transporter single-photon emission computed tomography (SPECT) images using a convolutional neural network and an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of images beyond the striatal regions. In current common diagnostic practice, however, only the striatum is the reference region, while extra-striatal regions are neglected.
We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support interpretation of this image modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
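The core of layer-wise relevance propagation, mentioned above, is redistributing a layer's output relevance back to its inputs in proportion to each input's contribution. A minimal sketch of the epsilon rule for a single linear layer (an illustration of the general algorithm, not the thesis implementation):

```python
import numpy as np

def lrp_linear(a, w, relevance_out, eps=1e-6):
    """Epsilon-rule LRP through one linear layer z = a @ w:
    each output neuron's relevance is redistributed to the inputs in
    proportion to their contribution a_i * w_ij, so that total
    relevance is (approximately) conserved across the layer."""
    z = a @ w                                    # pre-activations, shape (n_out,)
    s = relevance_out / (z + eps * np.sign(z))   # stabilized relevance per unit of z
    return a * (w @ s)                           # relevance per input, shape (n_in,)
```

Applied layer by layer from the network's diagnosis score back to the image, this yields the per-voxel relevance maps that highlight striatal and extra-striatal regions.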
59

Implementation of an Approach for 3D Vehicle Detection in Monocular Traffic Surveillance Videos

Mishra, Abhinav 19 February 2021 (has links)
Recent advancements in the field of Computer Vision are a by-product of breakthroughs in the domain of Artificial Intelligence. Object detection in monocular images is now realized by an amalgamation of Computer Vision and Deep Learning. While most approaches detect objects as a mere two-dimensional (2D) bounding box, a few exploit a richer representation of the 3D object. Such approaches detect an object either as a 3D bounding box or exploit its shape primitives using active shape models, which results in a wireframe-like detection. Such a wireframe detection is represented as a combination of detected keypoints (or landmarks) of the desired object. Apart from a more faithful retrieval of the object's true shape, wireframe-based approaches are relatively robust in handling occlusions. The central task of this thesis was to find such an approach and to implement it with the goal of evaluating its performance. The object of interest is the vehicle class (cars, minivans, trucks, etc.) and the evaluation data is monocular traffic surveillance videos collected by the supervising chair. A wireframe-type detection can aid several facets of traffic analysis through improved (compared to a 2D bounding box) estimation of the detected object's ground plane. The thesis encompasses the process of implementation of the chosen approach, called Occlusion-Net [40], including its design details and a qualitative evaluation on traffic surveillance videos. Occlusion-Net employs three instances of Graph Neural Networks for occlusion reasoning and localization. The implementation reproduces most of the published results across several occlusion categories, except the truncated car category; Occlusion-Net's erratic detections are mostly caused by incorrect detection of the initial region of interest.
The thesis also provides a didactic introduction to the field of Machine and Deep Learning, including intuitions of the mathematical concepts required to understand the two disciplines and the implemented approach.
Contents: 1 Introduction; 2 Technical Background (2.1 AI, Machine Learning and Deep Learning: 2.1.1 But what is AI?, 2.1.2 Representational composition by Deep Learning; 2.2 Essential Mathematics for ML: 2.2.1 Linear Algebra, 2.2.2 Probability and Statistics, 2.2.3 Calculus; 2.3 Mathematical Introduction to ML: 2.3.1 Ingredients of a Machine Learning Problem, 2.3.2 The Perceptron, 2.3.3 Feature Transformation, 2.3.4 Logistic Regression, 2.3.5 Artificial Neural Networks: ANN, 2.3.6 Convolutional Neural Network: CNN, 2.3.7 Graph Neural Networks; 2.4 Specific Topics in Computer Vision; 2.5 Previous work); 3 Design of Implemented Approach (3.1 Training Dataset; 3.2 Keypoint Detection: MaskRCNN; 3.3 Occluded Edge Prediction: 2D-KGNN Encoder; 3.4 Occluded Keypoint Localization: 2D-KGNN Decoder; 3.5 3D Shape Estimation: 3D-KGNN Encoder); 4 Implementation (4.1 Open-Source Tools and Libraries: 4.1.1 Code Packaging: NVIDIA-Docker, 4.1.2 Data Processing Libraries, 4.1.3 Libraries for Neural Networks, 4.1.4 Computer Vision Library; 4.2 Dataset Acquisition and Training: 4.2.1 Acquiring Dataset, 4.2.2 Training Occlusion-Net; 4.3 Refactoring: 4.3.1 Error in Docker File, 4.3.2 Image Directories as Input, 4.3.3 Frame Extraction in Parallel, 4.3.4 Video as Input; 4.4 Functional changes: 4.4.1 Keypoints In Output, 4.4.2 Mismatched BB and Keypoints, 4.4.3 Incorrect Class Labels, 4.4.4 Bounding Box Overlay); 5 Evaluation (5.1 Qualitative Evaluation: 5.1.1 Evaluation Across Occlusion Categories, 5.1.2 Performance on Moderate and Heavy Vehicles; 5.2 Verification of Failure Analysis: 5.2.1 Truncated Cars, 5.2.2 Overlapping Cars; 5.3 Analysis of Missing Frames; 5.4 Test Performance); 6 Conclusion; 7 Future Work; Bibliography
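The basic building block of the graph neural networks used for occlusion reasoning over vehicle keypoints can be illustrated with one round of degree-normalized message passing. This is a generic sketch, not Occlusion-Net's architecture; the normalization and the single linear map are simplifying assumptions.

```python
import numpy as np

def gnn_message_pass(H, A, W):
    """One message-passing round over a keypoint graph: each node's
    feature vector is replaced by the degree-normalized average of its
    own and its neighbors' features (self-loops added), followed by a
    linear map W and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # per-node degree for normalization
    return np.maximum(0, (A_hat / deg) @ H @ W)
```

Stacking such rounds lets information from visible keypoints flow to occluded ones, which is what enables localizing keypoints that are not directly observed.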
60

Improvement of network-based QoE estimation for TCP based streaming services

Knoll, Thomas Martin, Eckert, Marcus 12 November 2015 (has links)
Progressive download video services, such as YouTube and podcasts, are responsible for a major part of the transmitted data volume in the Internet, and it is expected that they will also strongly affect mobile networks. Streaming video quality mainly depends on the sustainable throughput achieved during transmission. To ensure acceptable video quality in mobile networks (with limited capacity resources), the quality perceived by the customer (QoE) needs to be monitored by estimation. For that, the streaming video quality needs to be measured and monitored permanently. For TCP based progressive download, we propose to extract the video timestamps encoded within the payload of the TCP segments by decoding the video within the payload. The actual estimation is then done by play-out buffer fill-level calculations based on the TCP segment timestamps and the internal play-out timestamps. The perceived quality for the user is derived from the number and duration of video stalls. Algorithms for decoding Flash Video, MP4 and WebM video have already been implemented. After deriving the play-out time, it is compared to the timestamp of the respective TCP segment. The result of this comparison is an estimate of the fill level of the play-out buffer, in terms of play-out time, within the client. This estimation is done without access to the end device. The same measurement procedure can be applied to any TCP based progressive download Internet service; video was simply taken as an example because of its current large share of traffic volume in operator networks.
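The stall estimation described above boils down to comparing, per frame, the wall-clock time at which its TCP segment arrived with its encoded play-out time. A simplified sketch of that comparison (not the authors' algorithm; the fixed start-up delay and the per-frame granularity are assumptions):

```python
def estimate_stalls(arrival_s, playout_s, startup_delay=2.0):
    """Estimate the number of play-out stalls from two per-frame
    timestamp series: the arrival time of the TCP segment carrying
    each frame, and the frame's encoded play-out time. A frame stalls
    the player when it arrives after its scheduled play-out moment;
    after a stall, the play-out clock is shifted to the arrival time."""
    stalls = 0
    clock_offset = startup_delay   # player starts after initial buffering
    for arr, pts in zip(arrival_s, playout_s):
        if arr > pts + clock_offset:          # frame is late: buffer ran empty
            stalls += 1
            clock_offset = arr - pts          # playback resumes from this frame
    return stalls
```

Because both timestamp series are observable on the network path, such an estimate needs no access to the end device, matching the monitoring setting described in the abstract.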
