
Quantification and Classification of Cortical Perfusion during Ischemic Strokes by Intraoperative Thermal Imaging

Hoffmann, Nico, Drache, Georg, Koch, Edmund, Steiner, Gerald, Kirsch, Matthias, Petersohn, Uwe 06 June 2018 (has links)
Thermal imaging is a non-invasive and marker-free approach for intraoperative measurement of small temperature variations. In this work, we demonstrate the abilities of active dynamic thermal imaging for analyzing the tissue perfusion state in the case of cerebral ischemia. For this purpose, a NaCl irrigation is applied to the exposed cortex during hemicraniectomy. The cortical temperature changes are measured by a thermal imaging system, and the thermal signal is recognized by a novel machine learning framework. Subsequent tissue heating is then approximated by a double exponential function to estimate tissue temperature decay constants. These constants allow us to characterize tissue with respect to its dynamic thermal properties. Using a Gaussian mixture model, we show the correlation of these estimated parameters with infarct demarcations in post-operative CT. This novel scheme yields a standardized representation of cortical thermodynamic properties and might guide further research regarding specific intraoperative diagnostics.
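A minimal illustrative sketch of the analysis described in this abstract: a double-exponential re-heating model is fitted to per-pixel cortical temperature curves after NaCl irrigation, and the estimated decay constants are clustered with a Gaussian mixture model. Function names, initial guesses and the number of mixture components are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.mixture import GaussianMixture

def double_exponential(t, a1, tau1, a2, tau2, offset):
    """Two-component exponential recovery of tissue temperature after cooling."""
    return offset - a1 * np.exp(-t / tau1) - a2 * np.exp(-t / tau2)

def fit_decay_constants(t, temperature_curves):
    """Fit the model per pixel and return the decay constants (tau1, tau2)."""
    constants = []
    for curve in temperature_curves:           # curve: temperature over time for one pixel
        p0 = [1.0, 5.0, 1.0, 50.0, curve[-1]]  # rough initial guess (assumed)
        params, _ = curve_fit(double_exponential, t, curve, p0=p0, maxfev=5000)
        constants.append([params[1], params[3]])
    return np.asarray(constants)

def classify_perfusion(constants, n_components=2):
    """Cluster pixels by their thermodynamic behaviour, e.g. perfused vs. infarcted."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    return gmm.fit_predict(constants)
```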

Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion

Hoffmann, Nico 09 December 2016 (has links)
Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and then visualized for the surgeon. In the case of tumor resections, this approach allows a more fine-grained differentiation of healthy and pathological tissue, which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we discuss several approaches to establish thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in the case of ischaemic stroke. Both applications require novel methods for data preprocessing, visualization, pattern recognition and regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework provides all necessary tools to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets such as 3D MR or CT imaging. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings is discussed throughout this thesis. It enables a standardized quantification of tissue perfusion by means of an approximated heating model. Classifying the parameters of these models yields a map of connected areas, which we have shown to correlate with the demarcation caused by an ischaemic stroke as segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative monitoring of neural activity in the somatosensory cortex by means of somatosensory evoked potentials. These results were correlated with neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis, we emphasize that thermal imaging represents a novel and valid tool for both intraoperative functional and structural neuroimaging.
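One common building block of the 2D-3D fusion mentioned above is to project points of the preoperative 3D volume (e.g. the exposed cortical surface from MR/CT) into the 2D intraoperative image plane and sample or overlay data there. The following pinhole-projection sketch only illustrates that generic step; the variable names, the camera model and the pose are assumptions, not the thesis framework.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project Nx3 preoperative points into a 2D image using a pinhole model.

    R: 3x3 rotation, t: 3-vector translation (preoperative frame -> camera frame),
    K: 3x3 intrinsic matrix of the intraoperative (e.g. thermal) camera.
    """
    cam = points_3d @ R.T + t      # transform points into the camera frame
    uv = cam @ K.T                 # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective division -> pixel coordinates
```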

Applications and extensions of Random Forests in genetic and environmental studies

Michaelson, Jacob 20 December 2010 (has links)
Transcriptional regulation refers to the molecular systems that control the concentration of mRNA species within the cell. Variation in these controlling systems is not only responsible for many diseases, but also contributes to the vast phenotypic diversity in the biological world. There are powerful experimental approaches to probe these regulatory systems, and the focus of my doctoral research has been to develop and apply effective computational methods that exploit these rich data sets more completely. First, I present a method for mapping genetic regulators of gene expression (expression quantitative trait loci, or eQTL) using Random Forests. This approach allows for flexible modeling and feature selection, and results in eQTL that are more biologically supportable than those mapped with competing methods. Next, I present a method that finds interactions between genes that in turn regulate the expression of other genes. This is accomplished by finding recurring decision motifs in the forest structure that represent dependencies between genetic loci. Third, I present a method to use distributional differences in eQTL data to establish the regulatory roles of genes relative to other disease-associated genes. Using this method, we found that genes that are master regulators of other disease genes are more likely to be consistently associated with the disease in genetic association studies. Finally, I present a novel application of Random Forests to determine the mode of regulation of toxin-perturbed genes, using time-resolved gene expression. The results demonstrate a novel approach to supervised weighted clustering of gene expression data.
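A hedged sketch of the eQTL-mapping idea described above: a gene's expression is regressed on genotype markers with a Random Forest, and loci are ranked by the forest's feature importance. Data shapes, the genotype coding and the forest size are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def map_eqtl(genotypes, expression, n_estimators=500):
    """genotypes: (n_samples, n_markers) coded 0/1/2; expression: (n_samples,) for one gene."""
    forest = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
    forest.fit(genotypes, expression)
    importance = forest.feature_importances_          # one score per marker
    ranking = np.argsort(importance)[::-1]            # markers ranked as candidate eQTL
    return ranking, importance
```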

Approaching Concept Drift by Context Feature Partitioning

Hoffmann, Nico, Kirmse, Matthias, Petersohn, Uwe 20 February 2012 (has links)
In this paper, we present a new approach to handling concept drift using domain-specific knowledge. More precisely, we capitalize on known context features to partition a domain into subdomains featuring static class distributions. Subsequently, we learn separate classifiers for each subdomain and classify new instances accordingly. To determine the optimal partitioning for a domain, we apply a search algorithm that aims to maximize the resulting accuracy. In practical domains such as fault detection, concept drift often occurs in combination with imbalanced data. As this issue becomes more important when learning models on smaller subdomains, we additionally use sampling methods to handle it. Comparative experiments with artificial data sets showed that our approach outperforms a plain SVM with respect to different performance measures. In summary, the partitioning concept drift approach (PCD) is a possible way to handle concept drift in domains where the causing context features are at least partly known.
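A minimal sketch of the partitioning idea: instances are split by a known context feature into subdomains, one classifier is trained per subdomain, and new instances are routed to the classifier of their subdomain. Naive minority-class oversampling stands in for the sampling methods mentioned above; the choice of SVC and all parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def train_partitioned(X, y, context):
    """context: array assigning each instance to a subdomain (a known context feature value)."""
    models = {}
    for value in np.unique(context):
        mask = context == value
        X_sub, y_sub = X[mask], y[mask]
        classes, counts = np.unique(y_sub, return_counts=True)
        if len(classes) > 1 and counts.min() < counts.max():
            # oversample the minority class of this subdomain to counter class imbalance
            minority = classes[np.argmin(counts)]
            X_min, y_min = resample(X_sub[y_sub == minority], y_sub[y_sub == minority],
                                    replace=True, n_samples=int(counts.max()), random_state=0)
            X_sub = np.vstack([X_sub[y_sub != minority], X_min])
            y_sub = np.concatenate([y_sub[y_sub != minority], y_min])
        models[value] = SVC().fit(X_sub, y_sub)
    return models

def predict_partitioned(models, X, context):
    # assumes every context value seen at prediction time also occurred during training
    return np.array([models[c].predict(x[None, :])[0] for x, c in zip(X, context)])
```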

Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

Pfeiffer, Micha 02 June 2022 (has links)
During laparoscopic liver resection, the limited access to the organ, the small field of view and lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their target (tumors) and risk-structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intra-operative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in one single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding setup of an external tracking system), various neural networks are used to quickly interpret the scene and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data. 
The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.

Table of contents:
1 Introduction; 1.1 Motivation; 1.1.1 Navigated Liver Surgery; 1.1.2 Laparoscopic Liver Registration; 1.2 Challenges in Laparoscopic Liver Registration; 1.2.1 Preoperative Model; 1.2.2 Intraoperative Data; 1.2.3 Fusion/Registration; 1.2.4 Data; 1.3 Scope and Goals of this Work; 1.3.1 Data-Driven, Biomechanical Model; 1.3.2 Data-Driven Non-Rigid Registration; 1.3.3 Building a Working Prototype
2 State of the Art; 2.1 Rigid Registration; 2.2 Non-Rigid Liver Registration; 2.3 Neural Networks for Simulation and Registration
3 Theoretical Background; 3.1 Liver; 3.2 Laparoscopic Liver Resection; 3.2.1 Staging Procedure; 3.3 Biomechanical Simulation; 3.3.1 Physical Balance Principles; 3.3.2 Material Models; 3.3.3 Numerical Solver: The Finite Element Method (FEM); 3.3.4 The Lagrangian Specification; 3.4 Variables and Data in Liver Registration; 3.4.1 Observable; 3.4.2 Unknowns
4 Generating Simulations of Deforming Organs; 4.1 Organ Volume; 4.2 Forces and Boundary Conditions; 4.2.1 Surface Forces; 4.2.2 Zero-Displacement Boundary Conditions; 4.2.3 Surrounding Tissues and Ligaments; 4.2.4 Gravity; 4.2.5 Pressure; 4.3 Simulation; 4.3.1 Static Simulation; 4.3.2 Dynamic Simulation; 4.4 Surface Extraction; 4.4.1 Partial Surface Extraction; 4.4.2 Surface Noise; 4.4.3 Partial Surface Displacement; 4.5 Voxelization; 4.5.1 Voxelizing the Liver Geometry; 4.5.2 Voxelizing the Displacement Field; 4.5.3 Voxelizing Boundary Conditions; 4.6 Pruning Dataset - Removing Unwanted Results; 4.7 Data Augmentation
5 Deep Neural Networks for Biomechanical Simulation; 5.1 Training Data; 5.2 Network Architecture; 5.3 Loss Functions and Training
6 Deep Neural Networks for Non-Rigid Registration; 6.1 Training Data; 6.2 Architecture; 6.3 Loss; 6.4 Training; 6.5 Mesh Deformation; 6.6 Example Application
7 Intraoperative Prototype; 7.1 Image Acquisition; 7.2 Stereo Calibration; 7.3 Image Rectification, Disparity- and Depth-Estimation; 7.4 Liver Segmentation; 7.4.1 Synthetic Image Generation; 7.4.2 Automatic Segmentation; 7.4.3 Manual Segmentation Modifier; 7.5 SLAM; 7.6 Dense Reconstruction; 7.7 Rigid Registration; 7.8 Non-Rigid Registration; 7.9 Rendering; 7.10 Robotic Operating System
8 Evaluation; 8.1 Evaluation Datasets; 8.1.1 In-Silico; 8.1.2 Phantom Torso and Liver; 8.1.3 In-Vivo, Human, Breathing Motion; 8.1.4 In-Vivo, Human, Laparoscopy; 8.2 Metrics; 8.2.1 Mean Displacement Error; 8.2.2 Target Registration Error (TRE); 8.2.3 Champfer Distance; 8.2.4 Volumetric Change; 8.3 Evaluation of the Synthetic Training Data; 8.4 Data-Driven Biomechanical Model (DDBM); 8.4.1 Amount of Intraoperative Surface; 8.4.2 Dynamic Simulation; 8.5 Volume to Surface Registration Network (V2S-Net); 8.5.1 Amount of Intraoperative Surface; 8.5.2 Dependency on Initial Rigid Alignment; 8.5.3 Registration Accuracy in Comparison to Surface Noise; 8.5.4 Registration Accuracy in Comparison to Material Stiffness; 8.5.5 Champfer-Distance vs. Mean Displacement Error; 8.5.6 In-vivo, Human Breathing Motion; 8.6 Full Intraoperative Pipeline; 8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map; 8.6.2 Full Pipeline on Laparoscopic Human Data; 8.7 Timing
9 Discussion; 9.1 Intraoperative Model; 9.2 Physical Accuracy; 9.3 Limitations in Training Data; 9.4 Limitations Caused by Difference in Pre- and Intraoperative Modalities; 9.5 Ambiguity; 9.6 Intraoperative Prototype
10 Conclusion
11 List of Publications
List of Figures
Bibliography
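A hedged sketch of the data-driven registration idea described in the abstract above: a small 3D convolutional network receives the voxelized preoperative liver and the voxelized partial intraoperative surface and regresses a dense displacement field. The architecture, grid size and channel counts are assumptions for illustration; the actual thesis networks are not reproduced here.

```python
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # three output channels: displacement in x, y and z per voxel
        self.head = nn.Conv3d(32, 3, kernel_size=3, padding=1)

    def forward(self, preop_volume, intraop_surface):
        # both inputs: (batch, 1, D, H, W) occupancy or signed-distance grids
        x = torch.cat([preop_volume, intraop_surface], dim=1)
        return self.head(self.encoder(x))  # (batch, 3, D, H, W) displacement field

# Training (not shown) would minimise the error against synthetic ground-truth
# displacement fields produced by biomechanical simulations of deforming livers.
net = DisplacementNet()
pred = net(torch.zeros(1, 1, 64, 64, 64), torch.zeros(1, 1, 64, 64, 64))
```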

Digitalization in Component Cleaning: Opportunities for Quality Assurance

Windisch, Markus 31 May 2019 (has links)
Quality control of cleaning processes requires the systematic acquisition of input, process and output variables, for which sensors for automatic measurement are only partially available. Since the input variables (contamination state) cannot be fully measured inline, and since the effect of residual contamination on the downstream process, which forms the basis for setting limit values, is not fully known, upstream and downstream processes must be included in the data acquisition. In this talk, Dipl.-Ing. Markus Windisch (head of the component cleaning team at Fraunhofer IVV Dresden) explains the development of a system solution for process data acquisition, highlights industry-specific challenges and the practical benefits of its use, and gives an outlook on the future integration of self-learning assistance systems.

Learning to Predict Dense Correspondences for 6D Pose Estimation

Brachmann, Eric 17 January 2018 (has links)
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines. In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independent of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specificities of following steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC which allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
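A minimal sketch of the expectation that DSAC minimizes, as described above: pose hypotheses are assumed to have already been sampled from predicted object coordinates and scored, a distribution over hypotheses is obtained via a softmax over the scores, and the expected pose loss becomes differentiable. Shapes, the scoring function and the scaling factor are illustrative assumptions rather than the thesis implementation.

```python
import torch

def expected_pose_loss(hypothesis_losses, hypothesis_scores, alpha=0.1):
    """hypothesis_losses: (N,) pose error of each sampled hypothesis.
    hypothesis_scores: (N,) differentiable scores, e.g. soft inlier counts."""
    probs = torch.softmax(alpha * hypothesis_scores, dim=0)  # soft hypothesis selection
    return (probs * hypothesis_losses).sum()                 # expectation of the final pose error

# During end-to-end training, gradients of this expectation flow back into the
# network that predicts the object coordinates from which hypotheses are sampled.
```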

Hypothesis Generation for Object Pose Estimation: From Local Sampling to Global Reasoning

Michel, Frank 14 February 2019 (has links)
Pose estimation has been studied since the early days of computer vision. The task of object pose estimation is to determine the transformation that maps an object from its inherent coordinate system into the camera-centric coordinate system. This transformation describes the translation of the object relative to the camera and the orientation of the object in three-dimensional space. Knowledge of an object's pose is a key ingredient in many application scenarios such as robotic grasping, augmented reality, autonomous navigation and surveillance. A general estimation pipeline consists of the following four steps: extraction of distinctive points, creation of a hypothesis pool, hypothesis verification and, finally, hypothesis refinement. In this work, we focus on the hypothesis generation process. We show that it is beneficial to utilize geometric knowledge in this process. We address the problem of hypothesis generation for articulated objects. Instead of considering each object part individually, we model the object as a kinematic chain. This enables us to use the inner-part relationships when sampling pose hypotheses, so that we need only K correspondences for objects consisting of K parts. We show that applying geometric knowledge about part relationships improves estimation accuracy under severe self-occlusion and low-quality correspondence predictions. In an extension, we employ global reasoning within the hypothesis generation process instead of sampling 6D pose hypotheses locally. To this end, we formulate a Conditional Random Field (CRF) that operates on the image as a whole and infers those pixels that are consistent with the 6D pose. Within the CRF, we use a strong geometric check that is able to assess the quality of correspondence pairs. We show that our global geometric check improves the accuracy of pose estimation under heavy occlusion.
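A hedged sketch in the spirit of the pairwise geometric check mentioned above: two correspondences (predicted object coordinate and observed 3D camera-space point) can only belong to the same rigid pose if they preserve pairwise distances. The tolerance and variable names are assumptions for illustration, not the thesis code.

```python
import numpy as np

def pair_is_consistent(obj_a, cam_a, obj_b, cam_b, tolerance=0.01):
    """obj_*: predicted 3D object coordinates; cam_*: observed 3D points (e.g. from depth)."""
    d_obj = np.linalg.norm(obj_a - obj_b)  # distance in the object's frame
    d_cam = np.linalg.norm(cam_a - cam_b)  # distance in the camera frame
    return abs(d_obj - d_cam) < tolerance  # a rigid transform preserves distances
```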

Binary Geometric Transformer Descriptor Based Machine Learning for Pattern Recognition in Design Layout

Treska, Fergo 13 September 2023 (has links)
This paper proposes a novel algorithm for pixel-based pattern recognition in design layouts that offers simplicity, speed and accuracy. The recognized patterns can later be used to detect problematic patterns in the lithography process so that they can be removed or improved earlier, at the design stage.

Table of contents:
Abstract; Content; List of Figure; List of Tables; List of Abbreviations
Chapter 1: Introduction; 1.1 Motivation; 1.2 Related Work; 1.3 Purpose and Research Question; 1.4 Approach and Methodology; 1.5 Scope and Limitation; 1.6 Target group; 1.7 Outline
Chapter 2: Theoretical Background; 2.1 Problematic Pattern in Computational Lithography; 2.2 Optical Proximity Effect; 2.3 Taxonomy of Pattern Recognition; 2.3.1 Feature Generation; 2.3.2 Classifier Model; 2.3.3 System evaluation; 2.4 Feature Selection Technique; 2.4.1 Wrapper-Based Methods; 2.4.2 Average-Based Methods; 2.4.3 Binary Geometrical Transformation; 2.4.3.1 Image Interpolation; 2.4.3.2 Geometric Transformation; 2.4.3.2.1 Forward Mapping; 2.4.3.2.2 Inverse Mapping; 2.4.3.3 Thresholding; 2.5 Machine Learning Algorithm; 2.5.1 Linear Classifier; 2.5.2 Linear Discriminant Analysis (LDA); 2.5.3 Maximum likelihood; 2.6 Scoring (Metrics to Measure Classifier Model Quality); 2.6.1 Accuracy; 2.6.2 Sensitivity; 2.6.3 Specifity; 2.6.4 Precision
Chapter 3: Method; 3.1 Problem Formulation; 3.1.1 T2T Pattern; 3.1.2 Iso-Dense Pattern; 3.1.3 Hypothetical Hotspot Pattern; 3.2 Classification System; 3.2.1 Wrapper and Average-based; 3.2.2 Binary Geometric Transformation Based; 3.3 Window-Based Raster Scan; 3.3.1 Scanning algorithm; 3.4 Classifier Design; 3.4.1 Training Phase; 3.4.2 Discriminant Coefficient Function; 3.4.3 SigmaDi; 3.4.4 Maximum Posterior Probability; 3.4.5 Classifier Model Block; 3.5 Weka 3.8; 3.6 Average-based Influence; 3.7 BGT Based Model
Chapter 4: Results; 4.1 Wrapper and Average-based LDA classifier; 4.2 BGT Based LDA with SigmaDi Classifier; 4.3 Estimation Output; 4.4 Probability Function
Chapter 5: Conclusion; 5.1 Conclusions; 5.2 Future Research
Bibliography; Selbstständigkeitserklärung
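A hedged sketch of the window-based classification scheme outlined in the abstract above: a fixed-size window is raster scanned over the binarized layout, each window is flattened into a feature vector, and a linear discriminant classifier separates problematic from unproblematic patterns. Window size, stride and labels are illustrative assumptions, not the thesis parameters.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extract_windows(layout, window=32, stride=16):
    """layout: 2D binary array of the design; returns flattened window features and positions."""
    features, positions = [], []
    for y in range(0, layout.shape[0] - window + 1, stride):
        for x in range(0, layout.shape[1] - window + 1, stride):
            features.append(layout[y:y + window, x:x + window].ravel())
            positions.append((y, x))
    return np.asarray(features), positions

# Training on windows with known hotspot / non-hotspot labels (labels assumed):
# lda = LinearDiscriminantAnalysis().fit(train_features, train_labels)
# Scanning a new layout:
# features, positions = extract_windows(new_layout)
# flagged = [pos for pos, label in zip(positions, lda.predict(features)) if label == 1]
```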

Machine Learning in Detecting Auditory Sequences in Magnetoencephalography Data: Research Project in Computational Modelling and Simulation

Shaikh, Mohd Faraz 17 November 2022 (has links)
Does your brain replay your recent life experiences while you are resting? An open question in neuroscience is which events our brain replays and whether there is any correlation between the replay and the duration of the event. In this study, I investigated this question using magnetoencephalography data from an active listening experiment. Magnetoencephalography (MEG) is a non-invasive neuroimaging technique used to study brain activity and to understand brain dynamics in perception and cognitive tasks, particularly in the fields of speech and hearing. It records the magnetic field generated in our brains to detect brain activity. I built a machine learning pipeline that uses part of the experiment data to learn the sound patterns and then predicts the presence of sound in the later part of the recordings, in which the participants sat idle and no sound was presented. The aim of the study was to test for replay of learned sound sequences in the post-listening period. I used a classification scheme to identify patterns in MEG responses to different sound sequences in the post-task period. The study concluded that the sound sequences can be identified and distinguished above the theoretical chance level, which establishes the validity of the classifier. Furthermore, the classifier could predict the sound sequences in the post-listening period with very high probability, but more evidence is needed to validate the model results on the post-listening period.
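A minimal sketch of the kind of decoding pipeline described above: a classifier is trained on MEG epochs recorded while known sound sequences were played and is then applied to epochs from the post-listening rest period. Array shapes and the choice of logistic regression are assumptions for illustration, not the study's exact setup.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_sequence_decoder(listening_epochs, sequence_labels):
    """listening_epochs: (n_epochs, n_channels, n_times) MEG data; sequence_labels: (n_epochs,)."""
    X = listening_epochs.reshape(len(listening_epochs), -1)  # flatten channels x time
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return model.fit(X, sequence_labels)

def detect_replay(model, rest_epochs):
    """Return per-epoch probabilities that a learned sound sequence reappears at rest."""
    X = rest_epochs.reshape(len(rest_epochs), -1)
    return model.predict_proba(X)
```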
