51

Mitteilungen des URZ 4/2010

Schier, Thomas, Riedel, Wolfgang 10 December 2010 (has links)
Information for URZ users, in this issue focusing on the expansion of the campus network and the use of the teaching pools. Contents: campus network backbone 2010; teaching pools: availability in the 2011 summer semester; teaching pools: software requirements for the 2011 summer semester; short notes: root VPS with pre-installed operating system, WebVPN, extension of a VPSH cluster, renovation of the computer pools; software news: Microsoft software for private use, MSDNAA portal, software updates, new software manuals.
52

Interactive Image-space Point Cloud Rendering with Transparency and Shadows

Dobrev, Petar, Rosenthal, Paul, Linsen, Lars 24 June 2011 (has links)
Point-based rendering methods have proven to be effective for the display of large point cloud surface models. For a realistic visualization of the models, transparency and shadows are essential features. We propose a method for point cloud rendering with transparency and shadows at interactive rates. Our approach does not require any global or local surface reconstruction method, but operates directly on the point cloud. All passes are executed in image space and no pre-computation steps are required. The underlying technique for our approach is a depth peeling method for point cloud surface representations. Once a sorted sequence of surface layers has been detected, the layers can be blended front to back with given opacity values to obtain renderings with transparency. These computation steps achieve interactive frame rates. For renderings with shadows, we determine a point cloud shadow texture that stores for each point of a point cloud whether it is lit by a given light source. The layer of lit points is again extracted using the depth peeling technique. For the shadow texture computation, we also apply a Monte Carlo integration method to approximate light from an area light source, leading to soft shadows. Shadow computations for point light sources are executed at interactive frame rates. Shadow computations for area light sources are performed at interactive or near-interactive frame rates, depending on the approximation quality.
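A minimal sketch of the front-to-back blending step described above, assuming a prior depth-peeling pass has already produced per-pixel colors and opacities for each surface layer (NumPy only; function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def composite_front_to_back(layer_colors, layer_alphas):
    """Blend a sorted sequence of surface layers front to back.

    layer_colors : array of shape (L, H, W, 3), layers ordered front to back
    layer_alphas : array of shape (L, H, W), per-pixel opacity of each layer
    Returns the composited RGB image of shape (H, W, 3).
    """
    L, H, W, _ = layer_colors.shape
    out = np.zeros((H, W, 3))
    transmittance = np.ones((H, W, 1))   # how much light still passes through

    for i in range(L):
        a = layer_alphas[i][..., None]
        out += transmittance * a * layer_colors[i]
        transmittance *= (1.0 - a)        # light blocked by this layer

    return out

# toy usage: a 40% opaque red front layer over a fully opaque blue back layer
front = np.tile([1.0, 0.0, 0.0], (1, 4, 4, 1))
back = np.tile([0.0, 0.0, 1.0], (1, 4, 4, 1))
img = composite_front_to_back(
    np.concatenate([front, back]),
    np.stack([np.full((4, 4), 0.4), np.ones((4, 4))]),
)
```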
53

A Narrow Band Level Set Method for Surface Extraction from Unstructured Point-based Volume Data

Rosenthal, Paul, Molchanov, Vladimir, Linsen, Lars 24 June 2011 (has links)
Level-set methods have become a valuable and well-established field of visualization over the last decades. Different implementations addressing different design goals and data types exist. In particular, level sets can be used to extract isosurfaces from scalar volume data that fulfill certain smoothness criteria. Recently, such an approach has been generalized to operate on unstructured point-based volume data, where data points are neither arranged on a regular grid nor connected in the form of a mesh. Utilizing this development, one can avoid interpolation to a regular grid, which inevitably introduces interpolation errors. However, global processing of the level-set function can be slow when dealing with unstructured point-based volume data sets containing several million data points. We propose an improved level-set approach that processes the level-set function locally. Since for isosurface extraction we are only interested in the zero level set, values are only updated in regions close to it. In each iteration of the level-set process, the zero level set is extracted using direct isosurface extraction from unstructured point-based volume data, and a narrow band around the zero level set is constructed. The band consists of two parts: an inner and an outer band. The inner band contains all data points within a small area around the zero level set; these points are updated when executing the level-set step. The outer band encloses the inner band and provides all those neighbors of the inner-band points that are necessary to approximate gradients and mean curvature. Neighborhood information is obtained using an efficient kd-tree scheme; gradients and mean curvature are estimated using a four-dimensional least-squares fitting approach. Compared to the global approach, we demonstrate that this local level-set approach for unstructured point-based volume data achieves a significant speed-up of one order of magnitude for data sets in the range of several million data points, with equivalent quality and robustness.
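As an illustration of the narrow-band construction described above, the following small sketch uses a kd-tree to select the inner band of points near the zero level set and the outer band of their neighbors (a simplified stand-in for the authors' scheme; all names and parameter values are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def narrow_band(points, phi, band_width, neighbor_radius):
    """Select the inner and outer narrow band around the zero level set.

    points          : (N, 3) positions of the unstructured data points
    phi             : (N,) current level-set function values at the points
    band_width      : inner-band half-width around the zero level set
    neighbor_radius : radius used to gather the neighbors needed for
                      gradient / mean-curvature estimation (outer band)
    Returns boolean masks (inner, outer) over the N points.
    """
    inner = np.abs(phi) < band_width            # points updated by the level-set step
    tree = cKDTree(points)
    # all neighbors of inner-band points within the stencil radius
    neighbor_idx = tree.query_ball_point(points[inner], r=neighbor_radius)
    outer = np.zeros(len(points), dtype=bool)
    for idx in neighbor_idx:
        outer[idx] = True
    outer &= ~inner                             # outer band encloses, but excludes, the inner band
    return inner, outer

# toy usage: random points with a signed-distance-like phi to a small sphere
pts = np.random.rand(10000, 3)
phi = np.linalg.norm(pts - 0.5, axis=1) - 0.25
inner_mask, outer_mask = narrow_band(pts, phi, band_width=0.02, neighbor_radius=0.05)
```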
54

Algorithmen der Bildanalyse und -synthese für große Bilder und Hologramme

Kienel, Enrico 27 November 2012 (has links)
This thesis deals with algorithms from the fields of image segmentation and data synthesis for the so-called hologram printing principle. In the context of an anatomically motivated research project, active contours are used for the semi-automatic segmentation of digitized histological sections. The particular challenge lies in developing various approaches to adapt the method to very large images, which in this context can reach sizes of several hundred megapixels. Aiming for the highest possible efficiency, but restricted to consumer hardware, ideas are presented that make active-contour-based segmentation feasible for images of this size for the first time and that contribute to faster computation and reduced memory consumption. In addition, the method was extended with an intuitive tool that allows interactive local correction of the final contour and thus considerably increases the practical usability of the method. The second part of the thesis deals with a printing principle for producing holograms based on virtual objects. Hologram printing, whose name is meant to recall the operation of an inkjet printer, requires special discrete image data referred to as elementary holograms. These carry the visual information of different viewing directions through a fixed geometric location on the hologram plane. A complete hologram composed of many elementary holograms produces a considerable data volume that, depending on the parameters, can quickly reach the terabyte range. Two independent algorithms for generating suitably prepared data, making intensive use of standard graphics hardware, are presented, compared with respect to their computational and memory complexity, and evaluated with regard to quality aspects.
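The abstract does not detail the specific adaptations for very large images; as one illustration of the kind of tile-wise processing that images of several hundred megapixels require, the following sketch computes a gradient-magnitude edge map (a typical external energy for active contours) one tile at a time so that only a small block is held in memory (hypothetical function and parameters, not code from the thesis):

```python
import numpy as np

def tiled_gradient_magnitude(image, tile=2048, halo=1):
    """Compute a gradient-magnitude edge map of a very large image tile by tile,
    holding only one (tile + halo) block in memory at a time.

    image : 2D array-like supporting slicing (e.g. a numpy memmap of the
            full-resolution histological section)
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            # read the tile plus a one-pixel halo so gradients are valid at tile borders
            ys, ye = max(y0 - halo, 0), min(y0 + tile + halo, h)
            xs, xe = max(x0 - halo, 0), min(x0 + tile + halo, w)
            block = np.asarray(image[ys:ye, xs:xe], dtype=np.float32)
            gy, gx = np.gradient(block)
            mag = np.sqrt(gx**2 + gy**2)
            # write back only the interior of the block
            out[y0:min(y0 + tile, h), x0:min(x0 + tile, w)] = \
                mag[y0 - ys:y0 - ys + min(tile, h - y0),
                    x0 - xs:x0 - xs + min(tile, w - x0)]
    return out
```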
55

Strategien zur Datenfusion beim Maschinellen Lernen

Schwalbe, Karsten, Groh, Alexander, Hertwig, Frank, Scheunert, Ulrich 25 November 2019 (has links)
Smart inspection systems will be a key building block of quality assurance in industrial manufacturing and production. This applies in particular to complex inspection and evaluation processes. In recent years, learning-based methods have emerged as especially promising for these tasks. Their use typically yields considerable performance improvements over conventional rule- or geometry-based methods. The black-box character of these algorithms, however, means that interpretations of the computed prediction confidences must be critically questioned. Trust in the results of machine-learning-based algorithms can be increased when several mutually independent methods are employed. Data fusion strategies are then needed to combine the results of the different methods into a final result. Building on a brief overview of important approaches to object classification, this conference contribution presents corresponding fusion strategies and evaluates them on a case study. Based on the results, the potential of data fusion for machine learning is then discussed.
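Two elementary late-fusion strategies of the kind discussed here, combining class probabilities from independent classifiers by weighted averaging and by majority voting, are sketched below (a generic illustration; the concrete strategies evaluated in the contribution are not specified in the abstract):

```python
import numpy as np

def fuse_average(prob_list, weights=None):
    """Combine per-class probability vectors from independent classifiers
    by a (weighted) average -- one simple late-fusion strategy."""
    probs = np.stack(prob_list)                  # (n_classifiers, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    fused = np.average(probs, axis=0, weights=np.asarray(weights, dtype=float))
    return fused / fused.sum()

def fuse_majority_vote(prob_list):
    """Combine classifiers by majority voting on their hard decisions."""
    votes = [int(np.argmax(p)) for p in prob_list]
    return int(np.bincount(votes).argmax())

# toy usage: three classifiers scoring a part as {ok, defect}
cnn   = [0.2, 0.8]
svm   = [0.4, 0.6]
rules = [0.7, 0.3]
print(fuse_average([cnn, svm, rules], weights=[2, 1, 1]))  # soft fusion
print(fuse_majority_vote([cnn, svm, rules]))               # hard fusion -> class 1
```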
56

Optische Methoden zur Positionsbestimmung auf Basis von Landmarken

Bilda, Sebastian 24 April 2017 (has links)
Indoor positioning is receiving more and more attention these days. Besides navigation through a building, location-based services are of particular importance, providing additional information about specific objects in the environment. Since the GPS signal is too weak for indoor localization, other techniques must be used. Besides the commonly used positioning based on the evaluation of received radio signals, there are methods for optical localization using landmarks. Camera-based approaches have the advantage that positioning with centimeter accuracy is often possible. In this master's thesis, the position within a building is determined by detecting ArUco markers and door signs in image data. The Microsoft Kinect v2 and the Lenovo Phab 2 Pro smartphone were used as evaluation devices. In addition to color images, both devices provide depth data generated by time-of-flight sensors. By comparing the landmark corner points extracted from the image with the real geometric dimensions of the object taken from a database, the distance to a detected landmark can be determined. Besides this optical distance estimation, the position is also determined from the depth data. Finally, both approaches are compared with each other, and conclusions are drawn regarding the accuracy and reliability of the algorithm developed in this work.
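A minimal sketch of the optical distance estimation step, assuming an OpenCV build with the aruco module: detected marker corners are matched against the known real-world marker geometry via solvePnP, and the camera-to-marker range follows from the translation vector (camera intrinsics, marker size, and the image path are placeholders, not values from the thesis):

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics -- in practice these come from calibrating
# the Kinect v2 / Phab 2 Pro color camera.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)            # assume negligible lens distortion for this sketch
MARKER_SIZE = 0.10            # edge length of the printed ArUco marker in meters

# 3D corner coordinates of the marker in its own coordinate frame, taken from
# the known real-world geometry (the "database" mentioned in the abstract).
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * MARKER_SIZE / 2

img = cv2.imread("hallway.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image path
if img is None:
    raise SystemExit("no input image found")

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(img, aruco_dict)

for marker_corners, marker_id in zip(corners, ids.flatten() if ids is not None else []):
    # Solve the perspective-n-point problem: image corners vs. known 3D corners.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, marker_corners.reshape(4, 1, 2), K, dist)
    if ok:
        distance = float(np.linalg.norm(tvec))  # camera-to-marker range in meters
        print(f"marker {marker_id}: {distance:.2f} m away")
```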
57

Data Visualization for Statistical Analysis and Discovery in Container Surface Characterization at the Nano-Scale and Micro-Scale

Wendelberger, James George, Smith, Paul Herrick 25 January 2019 (has links)
Visualization is used for stainless steel container wall and lid cross section characterization. Two specific types of containers are examined: 3013 and SAVY. The container wall examined is from a sample of the inner container of a 3013 container. The inner lid cross section examined is from a SAVY container. Laser confocal microscope data and photographic data are used to determine features of the surfaces. The surface features are then characterized by various feature statistics, such as maximum depth, area, and eccentricity. The purpose of this pilot study is to demonstrate the effectiveness of using the methodology to detect potential corrosion events on the inner container surfaces. The features are used to quantify these corrosion events. An automatic image analysis system uses this methodology to classify images for possible further human analysis by flagging possible corrosion events. A manual image analysis methodology is used to determine the amount of MnS on the SAVY container lid cross section. Visualization is an integral component of the analysis methodology.
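A small sketch of the kind of per-feature statistics mentioned above (area, eccentricity, maximum depth), computed from a height map with scikit-image region properties (threshold and depth definition are illustrative assumptions, not taken from the study):

```python
import numpy as np
from skimage import measure

def characterize_pits(height_map, depth_threshold):
    """Label candidate corrosion pits in a confocal height map and compute
    simple per-feature statistics (area, eccentricity, maximum depth).

    height_map      : 2D array of surface heights (e.g. from a laser confocal scan)
    depth_threshold : heights below this value are treated as part of a pit
    """
    pits = height_map < depth_threshold
    labels = measure.label(pits)
    stats = []
    for region in measure.regionprops(labels):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        stats.append({
            "area_px": int(region.area),
            "eccentricity": float(region.eccentricity),
            "max_depth": float(depth_threshold - height_map[rows, cols].min()),
        })
    return stats

# toy usage: a flat surface with one artificial pit
surface = np.zeros((64, 64))
surface[20:30, 20:26] = -3.0
print(characterize_pits(surface, depth_threshold=-1.0))
```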
58

Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

Prakash, Mangal 06 April 2022 (has links)
Understanding the processes of cellular development and the interplay of cell shape changes, division and migration requires investigation of developmental processes at the spatial resolution of single cells. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as the available photon budget, sample sensitivity, etc. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or imperfections in the camera sensor and internal electronics. The noisy nature of images as well as the artefacts prohibit accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks. Supervised DL based methods are, however, plagued by many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high-quality images for training. Obtaining such image pairs can be very hard and virtually impossible in most biomedical imaging applications owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation. The first part of this thesis deals with unsupervised image denoising and artefact removal. For the unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications. In the second part of this thesis, the problem of cell/nucleus segmentation is addressed. The focus is especially on practical scenarios where ground truth annotations for training DL based segmentation methods are scarcely available. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations. Several training strategies are presented in this work to leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation from the perspective of solving a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input.
In summary, this thesis seeks to introduce new unsupervised denoising and artefact removal methods as well as semi-supervised segmentation methods which can be easily deployed to directly and immediately benefit biomedical practitioners with their research.
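As a generic illustration of training a denoiser without clean targets, the following PyTorch sketch masks random pixels and computes the loss only at the masked positions, so the network must predict a pixel from its surroundings. This is a simplified blind-spot-style scheme for illustration only, not the specific probabilistic or diversity denoising methods developed in the thesis:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A deliberately small fully convolutional denoiser for the sketch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def masked_training_step(model, optimizer, noisy, mask_frac=0.02):
    """One self-supervised step on a batch of noisy images (B, 1, H, W):
    mask a few pixels, replace them with noise, and penalize the prediction
    only at the masked positions -- no clean ground truth is needed."""
    mask = (torch.rand_like(noisy) < mask_frac).float()
    corrupted = noisy * (1 - mask) + torch.randn_like(noisy) * mask
    pred = model(corrupted)
    loss = (((pred - noisy) * mask) ** 2).sum() / mask.sum().clamp(min=1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy_batch = torch.randn(4, 1, 64, 64)        # stand-in for real microscopy crops
print(masked_training_step(model, opt, noisy_batch))
```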
59

A computational framework for multidimensional parameter space screening of reaction-diffusion models in biology

Solomatina, Anastasia 16 March 2022 (has links)
Reaction-diffusion models have been widely successful in explaining a large variety of patterning phenomena in biology, ranging from embryonic development to cancer growth and angiogenesis. First proposed by Alan Turing in 1952 and applied to a simple two-component system, reaction-diffusion models describe spontaneous spatial pattern formation, driven purely by the interactions of the system components and their diffusion in space. Today, access to unprecedented amounts of quantitative biological data allows us to build and test biochemically accurate reaction-diffusion models of intracellular processes. However, any increase in model complexity increases the number of unknown parameters and thus the computational cost of model analysis. Efficiently characterizing the behavior and robustness of models with many unknown parameters is therefore a key challenge in systems biology. Here, we propose a novel computational framework for efficient high-dimensional parameter space characterization of reaction-diffusion models. The method leverages the $L_p$-Adaptation algorithm, an adaptive-proposal statistical method for approximate high-dimensional design centering and robustness estimation. Our approach is based on an oracle function, which describes for each point in parameter space whether the corresponding model fulfills given specifications. We propose specific oracles to estimate four parameter-space characteristics: bistability, instability, capability of spontaneous pattern formation, and capability of pattern maintenance. We benchmark the method and demonstrate that it allows exploring the ability of a model to undergo pattern-forming instabilities and quantifying model robustness for model selection in time that scales polynomially with dimensionality. We present an application of the framework to reconstituted membrane domains bearing the small GTPase Rab5 and propose molecular mechanisms that potentially drive pattern formation.
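The oracle concept can be illustrated with a small example: a yes/no test for diffusion-driven (Turing) instability of a two-component system, evaluated point-wise in parameter space. This uses a generic textbook linear-stability condition for illustration; the thesis' own oracles for bistability, pattern formation, and pattern maintenance are not reproduced here:

```python
import numpy as np

def turing_oracle(jacobian, diffusion, k_range=np.linspace(0.01, 10, 500)):
    """Oracle: does this parameter point admit a diffusion-driven instability?

    jacobian  : 2x2 Jacobian of the reaction terms at the homogeneous steady state
    diffusion : (d1, d2) diffusion coefficients of the two species
    Returns True if the steady state is stable without diffusion but becomes
    unstable for some spatial wavenumber k -- the textbook condition for
    spontaneous pattern formation.
    """
    J = np.asarray(jacobian, dtype=float)
    D = np.diag(diffusion)
    # stability of the well-mixed system: both eigenvalues must have negative real part
    if np.trace(J) >= 0 or np.linalg.det(J) <= 0:
        return False
    # with diffusion, mode k grows if any eigenvalue of (J - k^2 D) has positive real part
    for k in k_range:
        eigs = np.linalg.eigvals(J - (k ** 2) * D)
        if np.max(eigs.real) > 0:
            return True
    return False

# toy usage: a parameter point of a generic activator-inhibitor system
J = [[0.5, -1.0],
     [1.0, -1.5]]           # activator self-enhances, inhibitor suppresses it
print(turing_oracle(J, diffusion=(1.0, 20.0)))   # fast-diffusing inhibitor -> True
```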
60

Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

Nazari, Mahmood 17 March 2022 (has links)
Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level (“theranostics”), where radiolabeled targeted molecules are directly injected into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson’s disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited into the organ-at-risk for patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, however, was analyzed on dopamine-transporter single-photon emission computed tomography images using a convolutional neural network and an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of the images beyond the striatal regions. In current common diagnostic practice, however, only the striatum is used as the reference region, while extra-striatal regions are neglected. We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support the interpretation of this image modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
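A much-simplified sketch of how a CNN organ segmentation can feed a dosimetry estimate: given a voxel-wise dose map and a binary organ mask, the mean absorbed dose in the organ-at-risk is the masked average. All quantities, units, and the uniform-density assumption are illustrative and do not reproduce the dosimetry workflow of the thesis:

```python
import numpy as np

def organ_mean_dose(dose_map, organ_mask, voxel_volume_ml, density_g_per_ml=1.0):
    """Mean absorbed dose in an organ-at-risk from a voxel-wise dose map.

    dose_map   : 3D array of absorbed dose per voxel (Gy)
    organ_mask : 3D boolean array, e.g. from a CNN organ segmentation
    The organ mass is derived from the segmented volume; a uniform tissue
    density is a simplifying assumption for this sketch.
    """
    n_voxels = int(organ_mask.sum())
    if n_voxels == 0:
        return 0.0
    mass_g = n_voxels * voxel_volume_ml * density_g_per_ml   # reported for context
    print(f"organ volume: {n_voxels * voxel_volume_ml:.1f} ml, mass ~{mass_g:.1f} g")
    return float(dose_map[organ_mask].mean())

# toy usage: a synthetic dose map and a spherical stand-in for a segmented organ
dose = np.random.rand(64, 64, 64) * 2.0                      # Gy
zz, yy, xx = np.mgrid[:64, :64, :64]
organ = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(organ_mean_dose(dose, organ, voxel_volume_ml=0.008))
```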
