  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Image reconstruction in cardiac scintigraphic tomography by data fusion

Coutand, Frédérique 06 December 1996 (has links) (PDF)
The method proposed in this thesis uses the fusion of anatomical data to improve the quantification of images reconstructed in single-photon emission computed tomography (SPECT). The anatomical data (derived from other modalities) are used to obtain a parametric model (based on spline functions) of the organs in the slice to be reconstructed. In practice, two types of images are used: first, transmission images, which provide the contours of organs such as the thorax and the lungs; second, emission images, which give a pre-localisation of the activity and a first estimate of the contours of the left ventricle, refined during the reconstruction process. The geometric model allows a better characterisation of how the scintigraphic data are formed and thus improves the forward problem. The main original contributions of this work are to restrict the reconstruction to the active regions only and to use a mesh adapted to the contours of the regions to be reconstructed (to avoid partial-volume errors). Reducing the number of unknowns improves the conditioning of the inverse problem and thus reduces the number of projections required for reconstruction. The proposed reconstruction method rests on a double estimation: estimation of the radioactivity distribution inside the geometric model, and estimation of the optimal parameters of that model. 2D reconstructions from simulated and then phantom-acquired data validated the principle of the method and show a clear improvement in the quantification of scintigraphic images.
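The benefit of restricting the unknowns to the anatomically delimited active regions can be illustrated with a toy linear system (all sizes, the random projection matrix, and the mask below are hypothetical stand-ins; real SPECT reconstruction uses a physical projection model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "slice": 100 pixels, but only 20 lie inside the active region
# (e.g. a ventricle wall delimited by anatomical contours).
n_pixels, n_proj = 100, 40
support = np.zeros(n_pixels, dtype=bool)
support[40:60] = True                      # hypothetical anatomical mask

A = rng.normal(size=(n_proj, n_pixels))    # stand-in projection matrix
x_true = np.zeros(n_pixels)
x_true[support] = 1.0                      # uniform activity in the region
y = A @ x_true + 0.01 * rng.normal(size=n_proj)

# Restricting the unknowns to the support turns an underdetermined
# 40x100 system into an overdetermined, better-conditioned 40x20 one,
# solvable with fewer projections.
A_r = A[:, support]
x_hat, *_ = np.linalg.lstsq(A_r, y, rcond=None)

x_full = np.zeros(n_pixels)
x_full[support] = x_hat
print(np.abs(x_full - x_true).max())
```

The same projections could not determine all 100 unknowns, but they comfortably determine the 20 inside the anatomical support.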
12

λ-connectedness and its application to image segmentation, recognition and reconstruction

Chen, Li January 2001 (has links)
Seismic layer segmentation, oil-gas boundary surface recognition, and 3D volume data reconstruction are three important tasks in three-dimensional seismic image processing. Geophysical and geological parameters and properties are known to exhibit progressive changes within a layer; however, sudden changes can occur between two layers. λ-connectedness was proposed to describe such phenomena. Based on graph theory, λ-connectedness describes the relationship among pixels in an image. It is proved that λ-connectedness is an equivalence relation; that is, it can be used to partition an image into different classes and hence to perform image segmentation. Using random graph theory and the λ-connectivity of the image, the length of a path in a λ-connected set can be estimated. In addition, normal λ-connected subsets preserve every path that is λ-connected in the subsets. An O(n log n) time algorithm is designed for normal λ-connected segmentation. The techniques developed are used to find objects in 2D/3D seismic images. A frequent request is to find the interface between two layers or the boundary surfaces of an oil-gas reserve; this is equivalent to deciding whether a λ-connected set is an interface or a surface. The problem raised is how to recognize a surface in digital spaces. λ-connectedness is a natural and intuitive way to describe digital surfaces and digital manifolds. Fast algorithms are designed to recognize whether an arbitrary set is a digital surface. Furthermore, a classification theorem for simple surface points is deduced: there are only six classes of simple surface points in 3D digital spaces. Our definition has been proved equivalent to Morgenthaler-Rosenfeld's definition of digital surfaces under direct adjacency. The reconstruction of surfaces and data volumes is important in seismic data processing.
Given a set of guiding pixels, generating a λ-connected surface (a subset of the image) is the inverse problem of λ-connected segmentation. To simplify the fitting algorithm, gradual variation, a concept equivalent to λ-connectedness, is used to preserve the continuity of the fitted surface. The key theorem, the necessary and sufficient condition for gradually varied interpolation, has been proven mathematically. A random gradually varied surface fitting method is designed, and other theoretical aspects are investigated. The concepts are used to successfully reconstruct real 3D seismic data volumes. This thesis proposes λ-connectedness and its applications to seismic data processing. It has also been used for other problems such as ionogram scaling and object tracking, and has the potential to become a general technique in image processing and computer vision. Concepts and knowledge from several areas of mathematics, such as set theory, fuzzy set theory, graph theory, numerical analysis, topology, discrete geometry, computational complexity, and algorithm design and analysis, have been applied in this thesis.
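As a rough illustration of λ-connected segmentation (a simplified sketch, not the thesis' O(n log n) normal-segmentation algorithm), pixels can be grouped by breadth-first search over a similarity threshold λ; the similarity measure below is an illustrative assumption:

```python
from collections import deque
import numpy as np

def lambda_connected_labels(img, lam):
    """Partition a 2D image into lambda-connected components.

    Two 4-adjacent pixels are linked when their similarity
    1 - |a - b| / range is at least lam; breadth-first search over this
    linkage partitions the image, mirroring the equivalence-relation
    property described in the abstract.
    """
    img = np.asarray(img, dtype=float)
    span = float(img.max() - img.min()) or 1.0
    labels = -np.ones(img.shape, dtype=int)
    nxt = 0
    for start in np.ndindex(*img.shape):
        if labels[start] != -1:
            continue
        labels[start] = nxt
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and labels[nr, nc] == -1
                        and 1 - abs(img[r, c] - img[nr, nc]) / span >= lam):
                    labels[nr, nc] = nxt
                    queue.append((nr, nc))
        nxt += 1
    return labels

# Two "layers" separated by a sharp jump between rows 1 and 2:
layer = np.array([[0.0, 0.1, 0.1],
                  [0.1, 0.2, 0.1],
                  [0.9, 1.0, 0.9],
                  [1.0, 0.9, 1.0]])
print(lambda_connected_labels(layer, lam=0.7))
```

Gradual in-layer changes stay in one class, while the abrupt jump between the two layers produces a second class, which is exactly the layer-segmentation behaviour the abstract describes.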
13

A Comparative Evaluation Of Super

Erbay, Fulya 01 May 2011 (has links) (PDF)
In this thesis, high-definition color images are obtained using super-resolution algorithms. Resolution enhancement of images in the RGB, HSV and YIQ color domains is presented. Three methods are proposed to improve the resolution of HSV-domain images; they are designed to suppress color artifacts in the super-resolved image and to reduce the computational complexity of HSV-domain processing. PSNR values are measured and compared with the results of the other two color-domain experiments. In RGB space, the super-resolution algorithms are applied to the three color channels (R, G, B) separately and PSNR values are measured. In the YIQ domain, only the Y channel is processed, because Y is the luminance component of the image and the most important channel for improving resolution in that domain. Similarly, the third method proposed for the HSV domain applies the super-resolution algorithm to the value channel only, since the value channel carries the brightness data of the image; the results are compared with the YIQ-domain experiments. Four super-resolution algorithms are used in the experiments: Direct Addition, MAP, POCS and IBP. Although these methods are widely used for the reconstruction of monochrome images, here they are applied to resolution enhancement of color images, and their color super-resolution performance is tested.
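The luminance-only strategy described above can be sketched as follows. The RGB-to-YIQ matrix is the standard NTSC one; the 2x nearest-neighbour `upsample2x` is a deliberately trivial stand-in for the actual algorithms evaluated in the thesis (Direct Addition, MAP, POCS, IBP):

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix; Y is the luminance channel.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def upsample2x(ch):
    """Stand-in for a real SR algorithm: plain 2x nearest-neighbour."""
    return np.repeat(np.repeat(ch, 2, axis=0), 2, axis=1)

def sr_luminance_only(rgb):
    """Run the (expensive) SR step on Y only; I and Q, which carry
    chrominance, are upsampled cheaply. One SR pass instead of three."""
    yiq = rgb @ RGB2YIQ.T
    y_hr = upsample2x(yiq[..., 0])          # SR on the luminance channel
    i_hr = upsample2x(yiq[..., 1])          # cheap interpolation
    q_hr = upsample2x(yiq[..., 2])
    yiq_hr = np.stack([y_hr, i_hr, q_hr], axis=-1)
    return yiq_hr @ np.linalg.inv(RGB2YIQ).T

lr = np.random.default_rng(1).random((4, 4, 3))
hr = sr_luminance_only(lr)
print(hr.shape)
```

Because only one channel goes through the super-resolution algorithm, the cost is roughly a third of the per-channel RGB approach, which is the computational argument made in the abstract.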
14

Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction

Buchholz, Tim-Oliver 12 August 2022 (has links)
In this thesis I use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. I then present Noise2Void, a deep-learning-based self-supervised image denoising approach which is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). The following paragraphs briefly summarize the individual contributions. Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and used to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high-quality images. With SEM-CARE I present how this approach can be applied directly to SEM data, allowing us to scan samples faster and resulting in 40- to 50-fold imaging speedups for SEM imaging. In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, missing contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult and often manual.
To facilitate downstream analysis and manual data browsing of cryo tomograms, I present cryoCARE, a Noise2Noise-based (Lehtinen et al. 2018) denoising method able to restore high-contrast, low-noise tomograms from sparse-view low-dose tilt-series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin. Next, I discuss the problem of self-supervised image denoising. With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, so the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply for lack of access to the imaging system used. In such cases we have to fall back on self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012). In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform, where each prefix encodes the whole image at reduced resolution, which I call the Fourier Domain Encoding (FDE). I use FIT with FDEs and present proofs of concept for super-resolution and tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging, which keeps the total exposure of the imaged sample to a minimum by acquiring only a limited number of projection images. Tomographic reconstructions from such acquisitions are affected by missing wedge artefacts, characterized by missing wedges in Fourier space and visible as streaking artefacts in real image space.
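The blind-spot masking scheme behind Noise2Void, introduced above, can be sketched in a few lines: selected pixels are overwritten with a random neighbour's value, and the loss is evaluated only at those positions, so a network cannot learn the identity mapping. A minimal NumPy sketch (the helper name, mask count, and radius are illustrative assumptions; a real CNN replaces the placeholder prediction):

```python
import numpy as np

rng = np.random.default_rng(0)

def n2v_batch(noisy, n_mask=16, radius=2):
    """Build one Noise2Void-style training input from a single noisy
    image: masked pixels are replaced by a random neighbour's value,
    and the loss is later evaluated only at those blind-spot positions.
    """
    h, w = noisy.shape
    inp = noisy.copy()
    ys = rng.integers(radius, h - radius, n_mask)
    xs = rng.integers(radius, w - radius, n_mask)
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:          # never use the pixel itself
            dy, dx = rng.integers(-radius, radius + 1, 2)
        inp[y, x] = noisy[y + dy, x + dx]   # blind-spot substitution
    return inp, (ys, xs)

def masked_mse(pred, target, mask):
    """MSE restricted to the blind-spot positions."""
    ys, xs = mask
    return float(np.mean((pred[ys, xs] - target[ys, xs]) ** 2))

noisy = rng.normal(0.5, 0.1, (32, 32))
inp, mask = n2v_batch(noisy)
pred = inp                  # placeholder for a CNN forward pass
loss = masked_mse(pred, noisy, mask)
print(loss > 0.0)
```

Because the network never sees the original value at a masked position, predicting it forces the network to infer signal from context, which is what makes training on single noisy observations possible.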
I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients. Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.
Contents: 1 Introduction (Scanning Electron Microscopy; Cryo Transmission Electron Microscopy; Tomographic Reconstruction; Overview and Contributions) · 2 Denoising in Electron Microscopy (Image Denoising; Supervised Image Restoration; SEM-CARE; Noise2Noise; cryoCARE; Implementations and Availability; Discussion) · 3 Noise2Void: Self-Supervised Denoising (Probabilistic Image Formation; Receptive Field; Noise2Void Training; Experiments; Conclusion and Follow-up Work) · 4 Fourier Image Transformer (Transformers; Methods; FIT for Super-Resolution; FIT for Tomography; Discussion) · 5 Conclusions and Outlook
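The core property behind the Fourier Domain Encoding, that a low-frequency prefix of Fourier coefficients renders the whole image at reduced resolution, can be illustrated with a plain 2D FFT (a hedged sketch of the property only; the actual FDE unrolls coefficients into a 1D sequence for the Transformer):

```python
import numpy as np

def fourier_prefix_image(img, k):
    """Keep only the (2k+1) x (2k+1) lowest-frequency block of the
    centred 2D FFT and invert it: a low-frequency prefix of Fourier
    coefficients is a faithful low-resolution rendering of the image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = F.shape
    mask = np.zeros_like(F)
    cy, cx = h // 2, w // 2
    mask[cy - k:cy + k + 1, cx - k:cx + k + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def rms(a, b):
    """Root-mean-square difference between two images."""
    return float(np.sqrt(((a - b) ** 2).mean()))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                      # a bright square
err4 = rms(fourier_prefix_image(img, 4), img)
err12 = rms(fourier_prefix_image(img, 12), img)
print(err12 < err4)                        # more coefficients, less error
```

Each longer prefix refines the same image rather than describing a new region, which is what lets a Transformer predict coefficients autoregressively from coarse to fine.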
15

Towards functional imaging of cortical electrophysiology: Markovian modelling for magneto/electroencephalography source estimation and experimental evaluations

Baillet, Sylvain 08 July 1998 (has links) (PDF)
Magnetoencephalography (MEG) and electroencephalography (EEG) possess an exceptional temporal resolution that makes them natural tools for observing and tracking the underlying electrophysiological processes. However, no method for exploiting MEG and EEG signals yet lets them claim the status of true imaging modalities. Indeed, modelling the production of the magnetic fields and electric potential differences recorded at the scalp requires, a priori, taking into account the complex geometry of the head and the conductivity properties of the tissues. Moreover, estimating the generators is a problem that fundamentally admits no unique solution. The initial motivation of this work was the development of approaches yielding a cortical tomography of electrophysiology. We therefore implemented a Markovian model of the source intensity field, proposing spatio-temporal models adapted to MEEG, and in particular to the local morphological variations of the cortical anatomical structures. We also exploited this notion of local adjustment to propose a new method for fusing MEG and EEG data within a single inverse problem. We paid particular attention to the evaluation of the proposed methods: to go beyond numerical simulations, which are often too limited, we built a physical phantom suited to MEG and EEG that allowed us to study the estimators' performance in combination with head models of varying degrees of realism. Finally, we present a first application of these methods to the identification of epileptic networks in patients suffering from partial epilepsy.
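The role of a spatial prior in such a non-unique inverse problem can be illustrated with a toy linear source-estimation sketch (the lead-field matrix, sizes, and first-difference regulariser are illustrative assumptions, far simpler than the spatio-temporal Markov models of the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy M/EEG setup: 16 sensors, 60 cortical sources. The system is
# underdetermined, so the inverse problem has no unique solution
# without a prior on the sources.
n_sens, n_src = 16, 60
G = rng.normal(size=(n_sens, n_src))       # stand-in lead-field matrix

x_true = np.zeros(n_src)
x_true[20:30] = np.hanning(10)             # one smooth active patch
b = G @ x_true + 0.01 * rng.normal(size=n_sens)

# Smoothness prior in the spirit of a Markov neighbour coupling:
# penalise ||D x||^2, where D takes first differences between
# neighbouring sources.
D = np.eye(n_src) - np.eye(n_src, k=1)
lam = 1.0
x_hat = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ b)
print(x_hat.shape)
```

The regulariser selects, among the infinitely many source patterns that explain the sensor data, one that is spatially coherent; the thesis refines this idea with priors adapted to the local cortical morphology.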
16

Photogrammetric techniques for characterisation of anisotropic mechanical properties of Ti-6Al-4V

Arthington, Matthew Reginald January 2010 (has links)
The principal aims of this research have been the development of photogrammetric techniques for the measurement of anisotropic deformation in uniaxially loaded cylindrical specimens. This has been achieved through the use of calibrated cameras and the application of edge detection and multiple-view geometry. The techniques have been demonstrated at quasi-static strain rates, 10^-3 s^-1, using a screw-driven loading device, and at high strain rates, 10^3 s^-1, using Split Hopkinson Bars. The materials measured using the technique are nearly isotropic steel, anisotropic cross-rolled Ti-6Al-4V and anisotropic clock-rolled commercially pure Zr. These techniques allow the surface shapes of specimens that deform elliptically to be completely tracked and measured in situ during loading. This has allowed the measurement of properties that could not be recorded before, including true direct stress and the ratio of transverse strains in principal material directions, at quasi-static and elevated strain rates, in tension and compression. The techniques have been validated by measuring elliptical prisms of various aspect ratios and by independently measuring interrupted specimens using a coordinate measurement machine. A secondary aim of this research has been to improve the characterisation of the anisotropic mechanical properties of cross-rolled Ti-6Al-4V using the techniques developed. In particular, the uniaxial yield stresses, hardening properties and the associated anisotropic deformation behaviour along the principal material directions have all been recorded in a level of detail not seen before. Significant findings include: higher yield stresses in-plane than in the through-thickness direction in both tension and compression, and the near transverse-isotropy of the through-thickness direction for loading conditions other than quasi-static tension, where significant anisotropy was observed.
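As an illustration of how an elliptical cross-section, sampled as edge points from calibrated views, can be measured, the sketch below fits a general conic by least squares and extracts the semi-axes (a textbook algebraic fit, not the multiple-view pipeline developed in the thesis):

```python
import numpy as np

def fit_ellipse_axes(px, py):
    """Least-squares fit of the conic a x^2 + b xy + c y^2 + d x + e y = 1
    to edge points; returns [semi-minor, semi-major] axis lengths."""
    A = np.column_stack([px ** 2, px * py, py ** 2, px, py])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(px), rcond=None)[0]
    M = np.array([[a, b / 2], [b / 2, c]])
    cx, cy = np.linalg.solve(2 * M, [-d, -e])       # ellipse centre
    # Normalise the conic about its centre, then read the axes off the
    # eigenvalues of the quadratic form.
    scale = 1 - (a * cx**2 + b * cx * cy + c * cy**2 + d * cx + e * cy)
    evals = np.linalg.eigvalsh(M / scale)
    return np.sort(1 / np.sqrt(evals))

# Edge points of an ellipse with semi-axes 3.0 and 1.5, centre (1, -0.5):
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
px = 3.0 * np.cos(t) + 1.0
py = 1.5 * np.sin(t) - 0.5
axes = fit_ellipse_axes(px, py)
print(np.round(axes, 3))
```

Tracking the two semi-axes of the deforming cross-section over time is what gives the ratio of transverse strains in the principal material directions.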
17

Inverse problems in high angular resolution imaging

Mugnier, Laurent 18 October 2011 (has links) (PDF)
The work presented concerns high-resolution optical imaging techniques and, more specifically, the so-called inversion methods for processing the data associated with these techniques. It thus sits at the crossroads of optical imaging and signal and image processing. This work is applied to ground- and space-based astronomy, Earth observation, and retinal imaging. An introductory part recalls important characteristics of data inversion and essential elements of image formation (diffraction, turbulence, imaging techniques) and of aberration measurement (wavefront sensing). The first part of the work concerns instrument calibration, that is, the estimation of instrumental or turbulent aberrations. It essentially concerns the phase-diversity technique: methodological work, algorithmic work, and extensions to high-dynamic-range imaging for the detection and characterisation of exoplanets. This work also includes developments that use only a single image near the focal plane, in particular cases of proven practical interest. The second part concerns the development of processing methods (registration, restoration and reconstruction, detection) for high-resolution imaging. These developments were carried out for very diverse imaging modalities: imaging with or without adaptive-optics (AO) correction, single-telescope or interferometric, for space observation; coronagraphic imaging of exoplanets by AO from the ground or by interferometry from space; and 2D or 3D imaging of the human retina. A final part presents research perspectives.
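As a minimal example of the restoration methods mentioned (far simpler than the myopic or multi-frame deconvolution actually developed in this work), a classic Wiener deconvolution can be sketched as:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-4):
    """Wiener deconvolution in the Fourier domain: multiply by
    conj(H) / (|H|^2 + NSR), a regularised inverse of the blur H."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Synthetic test: blur a bright square with a centred Gaussian PSF.
n = 64
truth = np.zeros((n, n))
truth[24:40, 24:40] = 1.0
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)

err_blur = np.sqrt(((blurred - truth) ** 2).mean())
err_rest = np.sqrt(((restored - truth) ** 2).mean())
print(err_rest < err_blur)
```

The NSR term plays the regularising role that is central to all the inversion methods discussed: without it, dividing by near-zero Fourier coefficients of the point spread function would amplify noise without bound.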
