211

Least-squares optimal interpolation for direct image super-resolution : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand

Gilman, Andrew January 2009
Image super-resolution aims to produce a higher-resolution representation of a scene from an ensemble of low-resolution images that may be warped, aliased, blurred and degraded by noise. There are a variety of methods for performing super-resolution described in the literature, and in general they consist of three major steps: image registration, fusion and deblurring. This thesis proposes a novel method of performing the first two of these steps. The ultimate aim of image super-resolution is to produce a higher-quality image that is visually clearer, sharper and contains more detail than the individual input images. Machine algorithms cannot assess images qualitatively and typically use a quantitative error criterion, often least-squares. This thesis aims to optimise least-squares directly using a fast method, in particular one that can be implemented using linear filters; hence, a closed-form solution is required. The concepts of optimal interpolation and resampling are derived and demonstrated in practice. Optimal filters optimised on one image are shown to perform near-optimally on other images, suggesting that common image features, such as step-edges, can be used to optimise a near-optimal filter without requiring knowledge of the ground-truth output. This leads to the construction of a pulse model, which is used to derive filters for resampling the non-uniformly sampled images that result from the fusion of registered input images. An experimental comparison shows that a 10th-order pulse-model-based filter outperforms a number of methods common in the literature. The use of optimal interpolation for image registration linearises an otherwise non-linear problem, resulting in a direct solution. Experimental analysis shows that optimal interpolation-based registration outperforms a number of existing methods, both iterative and direct, at a range of noise levels and for both heavily aliased images and images with a limited degree of aliasing. The proposed method offers flexibility in the size of the region of support, giving a good trade-off between computational complexity and registration accuracy. Together, optimal interpolation-based registration and fusion are shown to perform fast, direct and effective super-resolution.
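
As a rough illustration of the closed-form, filter-based least-squares optimisation described above, the sketch below learns fixed interpolation filter weights from a single training image with a direct linear least-squares solve. The decimation model, neighbourhood size and function name are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def train_ls_interpolation_filters(hi, factor=2, support=4):
    """Learn one interpolation filter per sub-pixel phase by least squares.

    hi      : 2-D float array used as the 'ground truth' high-resolution image
    factor  : integer downsampling factor relating the low- and high-res grids
    support : side of the square low-resolution neighbourhood used to predict
              each high-resolution pixel

    The weights come from a single closed-form least-squares solve
    (np.linalg.lstsq), i.e. the criterion is optimised directly, not iteratively.
    """
    hi = np.asarray(hi, dtype=float)
    lo = hi[::factor, ::factor]            # toy observation model: plain decimation
    pad = support // 2
    lo_pad = np.pad(lo, pad, mode="reflect")

    filters = {}
    H, W = lo.shape
    for py in range(factor):               # one filter per sub-pixel offset
        for px in range(factor):
            rows, targets = [], []
            for y in range(pad, H - pad):
                for x in range(pad, W - pad):
                    patch = lo_pad[y:y + support, x:x + support].ravel()
                    rows.append(patch)
                    targets.append(hi[y * factor + py, x * factor + px])
            A = np.asarray(rows)
            b = np.asarray(targets)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)   # closed-form solution
            filters[(py, px)] = w.reshape(support, support)
    return filters
```

Applying filters trained on one image to a different low-resolution image, as ordinary linear filters (one per phase), is the simplest way to probe the "near-optimal on other images" behaviour the abstract reports.
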
214

Blur invariant pattern recognition and registration in the Fourier domain

Ojansivu, V. (Ville) 13 October 2009
Abstract: Pattern recognition and registration are integral elements of computer vision, which considers image patterns. This thesis presents novel blur-invariant, and combined blur- and geometric-invariant, features for pattern recognition and registration of images. These global or local features are based on the phase of the Fourier transform and are invariant or insensitive to image blurring with a centrally symmetric point spread function, which can result, for example, from linear motion or defocus. The global features are based on the even powers of the phase-only discrete Fourier spectrum or bispectrum of an image and are invariant to centrally symmetric blur. These global features are used for object recognition and image registration. The features are extended to geometric invariance up to similarity transformation: shift invariance is obtained using the bispectrum, and rotation-scale invariance using a log-polar mapping of bispectrum slices. Affine invariance can be achieved as well using rotated sets of the log-log mapped bispectrum slices. The novel invariants are shown to be more robust to additive noise than earlier blur, and combined blur- and geometric, invariants based on image moments. The local features are computed using the short-term Fourier transform in local windows around the points of interest. Only the lowest horizontal, vertical and diagonal frequency coefficients are used, the phase of which is insensitive to centrally symmetric blur. The phases of these four frequency coefficients are quantised and used to form a descriptor code for the local region. When these local descriptors are used for texture classification, they are computed for every pixel and accumulated into a histogram that describes the local pattern. No earlier texture features have been claimed to be invariant to blur. The proposed descriptors were superior in the classification of blurred textures compared with several non-blur-invariant state-of-the-art texture classification methods.
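
The local blur-insensitive descriptor described above can be sketched as follows: the phases of four low-frequency short-term Fourier transform coefficients are quantised into an 8-bit code per pixel and pooled into a 256-bin histogram. This is only a minimal approximation, close in spirit to Local Phase Quantization; the window size, frequency choice and lack of decorrelation are assumptions, not the thesis' exact formulation.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_insensitive_phase_descriptor(img, win=7):
    """8-bit local phase codes from four low-frequency STFT coefficients.

    Under a centrally symmetric point spread function the local Fourier phase
    is shifted by 0 or pi only, so the signs of the real and imaginary parts
    of low-frequency coefficients are insensitive to the blur.
    """
    img = np.asarray(img, dtype=float)
    n = np.arange(win) - win // 2
    f = 1.0 / win                                  # lowest non-zero frequency
    w_dc = np.ones(win)
    w_f = np.exp(-2j * np.pi * f * n)

    # separable STFT kernels for the frequency points (f,0), (0,f), (f,f), (f,-f)
    kernels = [np.outer(w_dc, w_f), np.outer(w_f, w_dc),
               np.outer(w_f, w_f), np.outer(w_f, np.conj(w_f))]

    code = np.zeros(img.shape, dtype=np.int32)
    bit = 0
    for k in kernels:
        re = convolve2d(img, k.real, mode="same", boundary="symm")
        im = convolve2d(img, k.imag, mode="same", boundary="symm")
        code += (re > 0).astype(np.int32) << bit
        code += (im > 0).astype(np.int32) << (bit + 1)
        bit += 2

    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()                       # normalised texture descriptor
```
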
215

Ensemble registration : combining groupwise registration and segmentation

Purwani, Sri January 2016
Registration of a group of images generally gives only a pointwise, dense correspondence defined over the whole image plane or volume, without any specific description of the common structure that exists in every image. Furthermore, identifying tissue classes and structures that are significant across the group is often required for analysis, as well as the correspondence. The overall aim is instead to perform registration, segmentation and modelling simultaneously, so that the registration can assist the segmentation, and vice versa. However, structural information does play a role in conventional registration, in that if the registration is successful, structures would be expected to be aligned to some extent. Hence, we perform initial experiments to investigate whether there is explicit structural information present in the shape of the registration objective function about the optimum. We perturbed one image locally with a diffeomorphism, and found interesting structure in the shape of the quality-of-fit function. We then proceed to add explicit structural information into the registration framework, using various types of structural information derived from the original intensity images. For MR brain images, we augment each intensity image with its own set of tissue-fraction images, plus intensity-gradient images, which together form an image ensemble for each example. We then perform groupwise registration using these ensembles of images. We apply the method to four different real-world datasets for which ground-truth annotation is available. The method gives a greater than 25% improvement on the three difficult datasets when compared with intensity-based registration alone. On the easier dataset, it improves upon intensity-based registration and achieves results comparable with the previous method.
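
A minimal sketch of the "image ensemble" idea above: each subject's intensity image is stacked with derived structural channels, and a toy groupwise cost is evaluated over all channels at once. The channel choice, normalisation, cost and function names (build_ensemble, groupwise_cost) are illustrative assumptions; the thesis' actual objective and optimisation are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def build_ensemble(intensity, tissue_fractions, sigma=1.0):
    """Stack the intensity image with gradient and tissue-fraction channels.

    intensity        : 2-D or 3-D MR image
    tissue_fractions : iterable of arrays of the same shape (e.g. soft
                       GM/WM/CSF segmentations, assumed given here)
    """
    img = np.asarray(intensity, dtype=float)
    channels = [img, gaussian_gradient_magnitude(img, sigma)]
    channels += [np.asarray(tf, dtype=float) for tf in tissue_fractions]
    # normalise each channel so no single one dominates the groupwise cost
    channels = [(c - c.mean()) / (c.std() + 1e-8) for c in channels]
    return np.stack(channels, axis=0)

def groupwise_cost(ensembles):
    """Toy groupwise dissimilarity: mean squared deviation from the group
    mean, summed over channels, for already-resampled ensembles."""
    stack = np.stack(ensembles, axis=0)            # (subjects, channels, ...)
    mean = stack.mean(axis=0, keepdims=True)
    return float(((stack - mean) ** 2).mean())
```
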
216

Génération d'images 3D HDR / Generation of 3D HDR images

Bonnard, Jennifer 11 December 2015
HDR imaging and 3D imaging are two areas whose simultaneous but separate development has kept growing in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the dynamic range of conventional images, called LDR (Low Dynamic Range). On the other hand, 3D imaging offers immersion in the projected film, with the impression of being part of the acquired scene. Recently, these two areas have been combined to provide 3D HDR images or videos, but few viable solutions exist and none of them is available to the general public. In this thesis, we propose a method for generating 3D HDR images for autostereoscopic displays by adapting a multi-viewpoint camera to multiple-exposure acquisition. To do this, neutral-density filters are fixed on the lenses of the camera. Pixel matching is then applied to aggregate the pixels that represent the same point in the acquired scene. Finally, a radiance value is computed for each pixel of the image set as a weighted average of the LDR values of the matched pixels. An additional step is necessary because some pixels have an erroneous radiance. We propose a method based on the colour of neighbouring pixels, and two methods based on correcting the disparity of the pixels whose radiance is erroneous: the first uses the disparity of the pixels in the neighbourhood, and the second uses the disparity computed independently on each colour channel. This pipeline generates one HDR image per viewpoint. A tone-mapping algorithm is then applied to each of these images so that they can be composed with the filters corresponding to the autostereoscopic screen in use, allowing the 3D HDR image to be visualised.
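
The radiance-fusion step described above (a weighted average of matched LDR values) can be sketched as follows, assuming a linear camera response and a hat-shaped weighting; both are common conventions and not necessarily those used in the thesis, and the function name is hypothetical.

```python
import numpy as np

def fuse_ldr_to_radiance(ldr_stack, exposure_times, nd_factors=None):
    """Per-pixel radiance as a weighted average of matched LDR values.

    ldr_stack      : array (n_views, H, W, 3), LDR values in [0, 1], already
                     matched so that pixel (y, x) corresponds to the same
                     scene point in every view (the pixel-matching step)
    exposure_times : effective exposure per view; with a single shutter speed
                     and neutral-density filters this is shutter * ND factor
    """
    ldr = np.asarray(ldr_stack, dtype=float)
    t = np.asarray(exposure_times, dtype=float)
    if nd_factors is not None:
        t = t * np.asarray(nd_factors, dtype=float)

    # hat weighting: near 0 for under/over-exposed pixels, 1 at mid-grey
    w = 1.0 - np.abs(2.0 * ldr - 1.0)
    w = np.clip(w, 1e-4, None)                     # avoid all-zero weights

    # linear response assumed: per-view radiance estimate = value / exposure
    est = ldr / t[:, None, None, None]
    return (w * est).sum(axis=0) / w.sum(axis=0)
```
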
217

Autoradiographie quantitative d'échantillons prélevés par biopsie guidée par TEP/TDM : méthode et applications cliniques / Quantitative autoradiography of biopsy specimens obtained under PET/CT guidance : method development and clinical applications

Fanchon, Louise 24 March 2016
During the last decade, positron emission tomography (PET) has found broader application in oncology. Some tumours that are not visible in standard anatomical imaging, such as computed tomography (CT) or ultrasound, can be detected by measuring the metabolic activity of the body in 3D using PET. PET images can also be used to guide the delivery of localised therapy, such as radiation therapy or thermal ablation. To deliver such therapy, the tumour border has to be delineated with very high accuracy; however, the poor spatial resolution of PET images makes the segmentation challenging. Studies have shown that manual segmentation introduces large inter- and intra-observer variability and is very time-consuming. For these reasons, many automatic segmentation algorithms have been developed, but few datasets with histopathological information are available to test and validate them, since such data are experimentally difficult to produce. The aim of the method developed in this thesis was to evaluate PET segmentation algorithms against the underlying histopathology. The method consists in acquiring quantitative autoradiographs of biopsy specimens extracted under PET/CT guidance; autoradiography images the radiotracer distribution in the specimen with very high spatial resolution. Histopathological sections of the specimen can then be obtained and examined under the microscope. The autoradiograph and the micrographs of the histological sections are registered to the PET image by first aligning them with the biopsy needle seen on the CT image and then transferring them onto the PET image. This dataset was used to test two automatic PET segmentation algorithms: the Fuzzy Locally Adaptive Bayesian (FLAB) method developed at the Laboratory of Medical Information Processing (LaTIM) in Brest, France, and a fixed-threshold segmentation method. The reliability of the dataset depends on the accuracy of the registration of the PET, autoradiography and micrograph images, and the main source of uncertainty is the PET/CT registration; a method was therefore developed to quantify the registration accuracy. The results showed that the registration error ranges from 1.1 to 10.9 mm for the patients included in this study. Based on these results, the data were screened and the datasets acquired from 4 patients were judged satisfactory for testing the segmentation algorithms. At the point of biopsy, the FLAB contour agrees more closely with the lesion border observed on the micrographs; however, the two segmentation methods give similar contours, as the lesions were fairly homogeneous.
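
For reference, the fixed-threshold baseline against which FLAB is compared can be sketched in a few lines. The function name is hypothetical, and the 40% of peak uptake used here is a common convention, only an assumption about the thesis' setting.

```python
import numpy as np

def threshold_segmentation(pet_volume, fraction=0.4, background=0.0):
    """Fixed-threshold PET lesion delineation.

    The lesion is every voxel whose uptake exceeds `fraction` of the peak
    value (optionally measured above a background level).
    """
    pet = np.asarray(pet_volume, dtype=float)
    thr = background + fraction * (pet.max() - background)
    return pet >= thr                      # boolean lesion mask
```
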
218

Mise en correspondance inter-individus pour la prédiction de la toxicité en radiothérapie du cancer de la prostate / Inter-individual mapping for the prediction of toxicity following prostate cancer radiotherapy

Dréan, Gaël 17 June 2014
This thesis deals with predicting toxicity in prostate-cancer radiotherapy. With the aim of analysing the spatial correlations between dose and side effects, the problem is addressed in a population-analysis framework. Inter-individual matching of both the anatomy and the planned dose distribution raises difficulties related to the high anatomical variability and the low contrast of the CT images involved. We considered different non-rigid registration strategies exploiting information on anatomical structures, intensity-structure combinations, or inter-structure relations. The proposed methods rely in particular on structural descriptors of the organs, such as Euclidean distance maps or the scalar field obtained as the solution of the Laplace equation. These methods significantly improved the accuracy of the matching, at both the anatomical and the dosimetric level. The best-performing strategies were used to analyse a population of 118 individuals. Statistical comparisons of the dose distributions between patients with and without rectal bleeding identified a rectal sub-region where the dose appears to be correlated with toxicity. The identified rectal sub-region appears potentially involved in side effects and highly predictive of the risk of bleeding. The proposed approach improves the performance of mathematical models for predicting toxicity.
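
One of the structural descriptors mentioned above, the Euclidean distance map of an organ delineation, can be sketched as follows; the signed-distance convention and the function name are assumptions, and the Laplace-equation scalar field is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def organ_distance_descriptor(organ_mask, voxel_spacing=None):
    """Signed Euclidean distance map of a binary organ mask.

    Positive values are distances outside the organ, negative values inside,
    so the zero level set follows the organ boundary.  Such maps give the
    non-rigid registration a smooth, contrast-independent target to align.
    """
    mask = np.asarray(organ_mask, dtype=bool)
    outside = distance_transform_edt(~mask, sampling=voxel_spacing)
    inside = distance_transform_edt(mask, sampling=voxel_spacing)
    return outside - inside
```
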
219

Using image-based large-eddy simulations to investigate the intracardiac flow and its turbulent nature / Utilisation de simulations aux grandes échelles à partir d'images médicales pour l'étude de l'écoulement intracardiaque et de sa nature turbulente

Chnafa, Christophe 21 November 2014
The first objective of this thesis is to generate and analyse CFD-based databases of the intracardiac flow in realistic geometries. To this end, an image-based CFD strategy is applied to both a pathological and a healthy human left heart. The second objective is to illustrate how such a numerical database can be analysed to gain insight into the intracardiac flow, focusing on its unsteady and turbulent features. A numerical framework giving access to the fluid dynamics inside patient-specific human hearts is first presented. The heart cavities and their wall motion are extracted from medical images with the help of an image-registration algorithm, in order to obtain a patient-specific moving computational domain. The flow equations are written on a conformal moving computational domain using an Arbitrary Lagrangian-Eulerian formulation, and the valves are modelled using immersed boundaries. The application of this framework to compute flow and turbulence statistics in a realistic pathological and a realistic healthy human left heart is then presented. The blood flow is characterised by its transitional nature, resulting in a complex cyclic flow. The flow dynamics is analysed to reveal the main fluid phenomena and to obtain insight into the physiological patterns commonly detected. It is demonstrated that the flow is neither laminar nor fully turbulent, thereby justifying a posteriori the use of large-eddy simulation. The unsteady development of turbulence is analysed through the phase-averaged flow, flow statistics, the turbulent stresses, the turbulent kinetic energy and its production, and spectral analysis. A Lagrangian analysis is also presented, using Lagrangian particles to gather statistical flow data. In addition to a number of classically reported features of left-heart flow, this work reveals how disturbed and transitional the flow is and describes the mechanisms of turbulence production.
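
The phase-averaged statistics mentioned above can be sketched as follows: velocities sampled at matching phases of several simulated cardiac cycles are ensemble-averaged, and the residual fluctuations yield the turbulent kinetic energy. The array layout, the restriction to resolved fluctuations and the function name are simplifying assumptions, not the thesis' actual post-processing.

```python
import numpy as np

def phase_averaged_tke(velocity_cycles):
    """Phase-averaged mean flow and turbulent kinetic energy.

    velocity_cycles : array (n_cycles, n_phases, ..., 3) of velocity samples
                      taken at the same phase of each simulated heart beat.

    The ensemble average over cycles gives the mean cyclic flow; the residual
    cycle-to-cycle fluctuations give k = 0.5 * <u'_i u'_i>.
    """
    u = np.asarray(velocity_cycles, dtype=float)
    u_mean = u.mean(axis=0, keepdims=True)         # phase (ensemble) average
    u_fluct = u - u_mean                           # cycle-to-cycle fluctuation
    k = 0.5 * (u_fluct ** 2).sum(axis=-1).mean(axis=0)
    return u_mean[0], k                            # mean flow, TKE per phase/point
```
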
220

Multimodal image registration in 2D and 3D correlative microscopy / Recalage d'images multimodales en microscopie corrélative 2D et 3D

Toledo Acosta, Bertha Mayela 23 May 2018
This thesis is concerned with the definition of an automated registration framework for 2D and 3D correlative microscopy images, in particular for correlative light and electron microscopy (CLEM) images. In recent years, CLEM has become an important and powerful tool in the bioimaging field: by using CLEM, complementary information can be collected from a biological sample. An overlay of the different microscopy images is commonly achieved using techniques that involve manual assistance at several steps, which is demanding and time-consuming for biologists. To facilitate and disseminate the CLEM process, this work focuses on creating automatic registration methods that are reliable, easy to use and do not require parameter tuning or complex knowledge. CLEM registration has to deal with many issues arising from the differences between electron-microscopy and light-microscopy images and their acquisition, in terms of pixel resolution, image size, content, field of view and appearance. We designed intensity-based methods to align CLEM images in 2D and 3D. They involve a common representation of the LM and EM images using the LoG transform, a pre-alignment step exploiting histogram-based similarities within an exhaustive search, and a fine mutual-information-based registration. In addition, we defined a robust motion-model selection method and a multiscale spot-detection method, which were exploited in the 2D CLEM registration. Our automated CLEM registration framework was successfully tested on several real 2D and 3D CLEM datasets, and the results were validated by biologists, offering an excellent perspective on the usefulness of our methods.
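
Two building blocks of the pipeline described above, the LoG common representation and the mutual-information similarity, can be sketched as follows. Parameter values and function names are illustrative assumptions, and the exhaustive pre-alignment search is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_representation(img, sigma=2.0):
    """Laplacian-of-Gaussian transform, used here as a common representation
    that makes LM and EM intensities comparable (a sketch of the idea only)."""
    return gaussian_laplace(np.asarray(img, dtype=float), sigma)

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two images of equal shape,
    the kind of similarity optimised in the fine registration step."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of image b
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

A typical use would compute log_representation of both modalities, then maximise mutual_information over candidate transformations.
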
