271 |
Echantillonnage compressif appliqué à la microscopie de fluorescence et à la microscopie de super résolution / Compressive fluorescence microscopy for biological imaging and super-resolution microscopy. Chahid, Makhlad, 19 December 2014
My PhD work deals with the application of Compressed Sensing (or Compressive Sampling, CS) to fluorescence microscopy, a constantly evolving field and a key tool of fundamental biological research. The recent mathematical theory of CS has demonstrated that, for a particular class of signals, called sparse, it is possible to reduce the sampling rate well below what the classical sampling theorem requires. Its central result states that a signal can be reconstructed without loss of information from highly incomplete and/or corrupted measurements, provided the original signal admits a sparse representation.

We developed a novel experimental implementation of CS in fluorescence microscopy, where most signals are naturally sparse. Our CS microscope combines dynamic structured wide-field illumination with fast and sensitive single-point fluorescence detection. In this scheme, compression is directly integrated into the measurement process. Additionally, we showed that introducing extra dimensions (2D + color) produces strong redundancy that CS fully exploits to reach very high compression ratios.

The second part of this thesis addresses another appealing application of CS: super-resolution microscopy based on single-molecule localization (e.g. PALM/STORM). These techniques have made it possible to break the diffraction barrier and reach nanometric resolution. We explored the possibility of using CS to drastically reduce acquisition and processing times.

Keywords: compressed sensing, fluorescence microscopy, sparsity, super-resolution microscopy, redundancy, signal processing, single-molecule localization, bio-imaging
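To illustrate the recovery principle this abstract relies on, the following minimal sketch (an illustration only, not the thesis's acquisition or reconstruction code; the Gaussian sensing matrix, sparsity level and ISTA solver are assumptions) recovers a sparse vector from far fewer random measurements than its length, via l1-regularized least squares:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5               # signal length, number of measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
    y = A @ x_true                              # compressed measurements (m << n)

    # ISTA: iterative soft-thresholding for  min 0.5*||Ax - y||^2 + lam*||x||_1
    lam = 0.01
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))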
|
272 |
Uncertainty analysis of a particle tracking algorithm developed for super-resolution particle image velocimetry. Joseph, Sujith, 11 August 2003
Particle Image Velocimetry (PIV) is a powerful technique for measuring the velocity at many points in a flow simultaneously by performing correlation analysis on images of particles being transported by the flow. These images are acquired by illuminating the flow with two light pulses so that each particle appears once on each image.

The spatial resolution is an important parameter of this measuring system since it determines the ability to resolve features of interest in the flow. The super-resolution technique maximises the spatial resolution by augmenting the PIV analysis with a second pass that identifies specific particles and measures the distance between them.

The accuracy of the procedure depends both on the success with which the proper pairings are identified and on the accuracy with which their centre-to-centre distance can be measured. This study presents an analysis of both the systematic and the random uncertainty associated with this process. The uncertainty is analysed as a function of several key parameters that define the quality of the image; the analysis is performed by preparing 4000-member ensembles of simulated images at specific setpoints of each parameter.

It is shown that the systematic uncertainty is negligible compared to the random uncertainty for all conditions tested. The image contrast and the selection of a threshold for the particle search are the most critical parameters influencing both success rate and uncertainty, and high image intensities still yield accurate results. The search radius used by the super-resolution algorithm is also a critical parameter: increasing it raises the success rate, although at the cost of increased random uncertainty.
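The ensemble approach described above separates bias (systematic uncertainty) from scatter (random uncertainty). A minimal sketch of that bookkeeping, assuming a hypothetical measure_displacement() stand-in for running the tracking algorithm on one synthetic image pair with a known true displacement:

    import numpy as np

    def measure_displacement(rng, true_dx, contrast, threshold):
        # Hypothetical stand-in: synthesize an image pair with a known particle
        # displacement, run the super-resolution pairing step, return measured dx.
        # Modelled here as the truth plus contrast/threshold-dependent error.
        noise = rng.normal(scale=0.1 / contrast)
        return true_dx + 0.002 * threshold + noise

    def ensemble_uncertainty(true_dx=2.5, contrast=0.8, threshold=0.3,
                             n_members=4000, seed=0):
        rng = np.random.default_rng(seed)
        measurements = np.array([measure_displacement(rng, true_dx, contrast, threshold)
                                 for _ in range(n_members)])
        systematic = measurements.mean() - true_dx   # bias of the ensemble
        random_unc = measurements.std(ddof=1)        # scatter about the ensemble mean
        return systematic, random_unc

    bias, scatter = ensemble_uncertainty()
    print(f"systematic: {bias:.4f} px, random: {scatter:.4f} px")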
|
274 |
Inverse Problems and Self-similarity in Imaging. Ebrahimi Kahrizsangi, Mehran, 28 July 2008
This thesis examines the concept of image self-similarity and provides solutions to various associated inverse problems such as resolution enhancement and missing fractal codes.
In general, many real-world inverse problems are ill-posed, mainly because they lack a unique solution. The procedure of providing acceptable unique solutions to such problems is known as regularization. The concept of an image prior, which has been of crucial importance in image modelling and processing, has also been important in solving inverse problems, since it translates algebraically into the regularization procedure.
Indeed, much recent progress in imaging has been due to advances in the formulation and practice of regularization. This, coupled with progress in optimization and numerical analysis, has yielded much improvement in computational methods of solving inverse imaging problems.
Historically, the idea of self-similarity was important in the development of fractal image coding. Here we show that the self-similarity properties of natural images may be used to construct image priors for the purpose of addressing certain inverse problems. Indeed, new trends in the area of non-local image processing have provided a rejuvenated appreciation of image self-similarity and opportunities to explore novel self-similarity-based priors.
We first revisit the concept of fractal-based methods and address some open theoretical problems in the area. This includes formulating a necessary and sufficient condition for the contractivity of the block fractal transform operator. We shall also provide some more generalized formulations of fractal-based self-similarity constraints of an image. These formulations can be developed algebraically and also in terms of the set-based method of Projection Onto Convex Sets (POCS).
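The standard sufficient condition for contractivity of a block fractal transform is that every grey-level scaling factor satisfies |s| < 1 (the thesis goes further and derives a necessary and sufficient condition). The sketch below is a textbook-style illustration under a simplified, co-located domain-block pairing, not the thesis's generalized operator: it fits the affine grey-level maps by least squares and reports the largest scaling.

    import numpy as np

    def downsample2(block):
        # Average 2x2 neighbourhoods: maps a 2N x 2N domain block to an N x N block.
        return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

    def fit_affine(domain, range_block):
        # Least-squares grey-level map  range ≈ s * shrink(domain) + o.
        d = downsample2(domain).ravel()
        r = range_block.ravel()
        s, o = np.polyfit(d, r, 1)
        return s, o

    def max_scaling(image, range_size=8):
        # Crude encoder: pair each range block with the co-located 2x-larger domain
        # block and record |s|; the classical sufficient condition for contractivity
        # of the resulting block transform is max |s| < 1.
        h, w = image.shape
        scalings = []
        for i in range(0, h - 2 * range_size + 1, range_size):
            for j in range(0, w - 2 * range_size + 1, range_size):
                domain = image[i:i + 2 * range_size, j:j + 2 * range_size]
                range_block = image[i:i + range_size, j:j + range_size]
                s, _ = fit_affine(domain, range_block)
                scalings.append(abs(s))
        return max(scalings)

    img = np.random.default_rng(1).random((64, 64))
    print("max |s| =", max_scaling(img), "(contractive if < 1)")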
We then revisit the traditional inverse problems of single-frame image zooming and multi-frame resolution enhancement, also known as super-resolution. Some ideas will be borrowed from newly developed non-local denoising algorithms in order to formulate self-similarity priors. Understanding the role of scale and the choice of examples/samples is also important in these proposed models. For this purpose, we perform an extensive series of numerical experiments and analyze the results. These ideas naturally lead to the method of self-examples, which relies on the regularity properties of natural images at different scales, as a means of solving the single-frame image zooming problem.
Furthermore, we propose and investigate a multi-frame super-resolution counterpart which does not require explicit motion estimation among video sequences.
|
276 |
Development of advanced methods for super-resolution microscopy data analysis and segmentation / Développement de méthodes avancées pour l'analyse et la segmentation de données de microscopie à super-résolution. Andronov, Leonid, 09 January 2018
Among super-resolution methods, single-molecule localization microscopy (SMLM) is remarkable not only for the best practically achievable resolution but also for direct access to the properties of individual molecules. The primary data of SMLM are the coordinates of individual fluorophores, a relatively rare data type in fluorescence microscopy, so specially adapted processing methods have to be developed. I developed the software SharpViSu and ClusterViSu, which cover the most important processing steps: correction of drift and chromatic aberrations, selection of localization events, reconstruction of the data as 2D images or 3D volumes with different visualization techniques, estimation of resolution by Fourier ring correlation, and segmentation using Ripley's K and L functions. Additionally, I developed a method for segmentation of 2D and 3D localization data based on Voronoi diagrams, which allows automatic and unambiguous cluster analysis thanks to noise modeling with Monte Carlo simulations. Using these advanced processing methods, I demonstrated clustering of CENP-A in the centromeric regions of the cell nucleus and structural transitions of these clusters upon CENP-A deposition in early G1 phase of the cell cycle.
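A minimal sketch of the Voronoi idea follows, assuming the simplest possible decision rule (a localization is "clustered" if its Voronoi cell area falls below the alpha-quantile of areas produced by uniformly distributed points); this is an illustration, not the actual ClusterViSu procedure:

    import numpy as np
    from scipy.spatial import Voronoi

    def cell_areas(points):
        # Area of each finite Voronoi cell (shoelace formula); open border cells -> NaN.
        vor = Voronoi(points)
        areas = np.full(len(points), np.nan)
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if -1 in region or len(region) == 0:
                continue
            poly = vor.vertices[region]
            x, y = poly[:, 0], poly[:, 1]
            areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        return areas

    def clustered_mask(points, n_sim=50, alpha=0.05, seed=0):
        # Monte-Carlo noise model: same number of uniformly distributed points in the
        # same bounding box; cells smaller than the alpha-quantile of the simulated
        # cell areas are flagged as belonging to clusters.
        rng = np.random.default_rng(seed)
        lo, hi = points.min(axis=0), points.max(axis=0)
        sim_areas = [cell_areas(rng.uniform(lo, hi, size=points.shape))
                     for _ in range(n_sim)]
        threshold = np.nanquantile(np.concatenate(sim_areas), alpha)
        return cell_areas(points) < threshold

    pts = np.vstack([np.random.default_rng(2).normal([5, 5], 0.2, (200, 2)),
                     np.random.default_rng(3).uniform(0, 10, (200, 2))])
    print("clustered localizations:", int(clustered_mask(pts).sum()))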
|
277 |
Super-resolution STED and two-photon microscopy of dendritic spine and microglial dynamics / Imagerie de la dynamique des microglies et des épines dendritiques par microscopie super-résolutive STED et bi-photonique. Pfeiffer, Thomas, 21 November 2017
Activity-dependent changes in neuronal connectivity are thought to underlie learning and memory. I developed and applied novel high-resolution imaging approaches to study (i) microglia-spine interactions and (ii) the turnover of dendritic spines in the mouse hippocampus, both of which are thought to contribute to the remodeling of synaptic circuits underlying memory formation. (i) Microglia have been implicated in a variety of novel tasks beyond their classic immune-defensive roles. I examined the effect of synaptic plasticity on microglial morphological dynamics and interactions with spines, using a combination of electrophysiology and two-photon microscopy in acute brain slices, and demonstrated that microglia intensify their physical interactions with spines after the induction of hippocampal synaptic plasticity. To study these interactions and their functional impact in greater detail, I optimized and applied time-lapse STED imaging in acute brain slices. (ii) Spine structural plasticity is thought to underpin memory formation, yet we know very little about it in the hippocampus in vivo, the archetypal memory center of the mammalian brain. I established chronic in vivo STED imaging of hippocampal spines in the living mouse using a modified cranial window technique. This super-resolution approach revealed a spine density twice as high as reported in the two-photon literature, and a spine turnover of 40% over 5 days, indicating a high level of structural remodeling of hippocampal synaptic circuits. The developed super-resolution imaging approaches enable the examination of microglia-synapse interactions and dendritic spines with unprecedented resolution in living brain tissue.
|
278 |
Uma abordagem híbrida baseada em Projeções sobre Conjuntos Convexos para Super-Resolução espacial e espectral / A hybrid approach based on projections onto convex sets for spatial and spectral super-resolution. Cunha, Bruno Aguilar, 10 November 2016
This work proposes the study and development of an algorithm for super-resolution of digital images using projections onto convex sets (POCS). The method builds on a classic algorithm for spatial super-resolution which, exploiting the subpixel information present in a set of lower-resolution images, generates an image of higher resolution and better visual quality. We propose the incorporation of a new constraint based on the Richardson-Lucy algorithm in order to restore part of the spatial frequencies lost during the degradation and decimation of the high-resolution images. In this way the algorithm provides a hybrid POCS approach capable of performing spatial and spectral super-resolution simultaneously. The proposed approach was compared first with the original algorithm of Sezan and Tekalp and then with a method based on a robust super-resolution framework considered among the most effective methods available today. The results, assessed both visually and by mean squared error, show that the proposed method has great potential for increasing the visual quality of the images studied.
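The classic POCS super-resolution machinery this work builds on alternates projections onto convex constraint sets. A minimal sketch, assuming a plain box-average decimation model with no subpixel shifts or PSF (so it only exercises the projection mechanics, not a genuine resolution gain), could look like:

    import numpy as np

    def decimate(hr, factor=2):
        # Observation model: box-average the HR image and subsample.
        h, w = hr.shape
        return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def project_onto_data_set(hr, lr, factor=2):
        # Orthogonal projection of the HR estimate onto the affine set of images whose
        # box-decimation equals the observed LR frame: back-project the LR residual
        # uniformly over each factor x factor block (exact for this box-average model).
        residual = lr - decimate(hr, factor)
        return hr + np.kron(residual, np.ones((factor, factor)))

    def pocs_super_resolution(lr_frames, factor=2, n_iter=25):
        hr = np.kron(lr_frames[0], np.ones((factor, factor)))   # initial guess
        for _ in range(n_iter):
            for lr in lr_frames:
                hr = project_onto_data_set(hr, lr, factor)      # data-consistency sets
            hr = np.clip(hr, 0.0, 1.0)                          # amplitude-constraint set
        return hr

    rng = np.random.default_rng(0)
    truth = rng.random((32, 32))
    frames = [decimate(truth) + rng.normal(scale=0.01, size=(16, 16)) for _ in range(4)]
    print("RMSE:", np.sqrt(np.mean((pocs_super_resolution(frames) - truth) ** 2)))

In a realistic setting each LR frame would carry its own subpixel warp and blur, and each would define a different convex data-consistency set.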
|
279 |
Restauração de imagens com precisão subpixel utilizando restrições convexas / Restoring images with subpixel precision using convex constraints. Antunes Filho, Amauri, 09 December 2016
Super-resolution aims to obtain a higher-resolution image using information from one or more low-resolution images, and has applications in areas such as medical and forensic imaging. This work proposes the study and development of algorithms, based on Tekalp and Sezan's algorithm and on the theory of projections onto convex sets, to perform super-resolution: obtaining a higher-resolution image from a set of low-resolution images carrying subpixel information. We also propose the addition of a convex constraint based on the Richardson-Lucy algorithm, modified to be weighted by the Canny filter, together with total-variation regularization, in order to restore frequencies lost during the decimation and degradation of the high-resolution images. The result is a hybrid approach, based on projections onto convex sets, that performs spatial and spectral super-resolution simultaneously. The results obtained with the proposed algorithms were compared with the baseline algorithm of Tekalp and Sezan, taking into account both visual analysis of the images and the mean squared error. Grayscale images were used during development, but the methods extend to color images. The results show improvement over the low-resolution inputs, with less noise, less blurring, and better-defined edges. We conclude that the approach has potential for medical applications and forensic computing.
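The following is a hedged sketch of the kind of restoration step described above: a Richardson-Lucy update whose correction is boosted on a Canny edge map, followed by a light total-variation smoothing pass. The PSF, the weighting scheme and all parameters are assumptions for illustration, not the thesis's implementation.

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.feature import canny
    from skimage.restoration import denoise_tv_chambolle

    def gaussian_psf(size=9, sigma=1.5):
        ax = np.arange(size) - size // 2
        k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def edge_weighted_rl(blurred, psf, n_iter=20, tv_weight=0.02, edge_gain=1.0):
        # Richardson-Lucy deconvolution whose multiplicative correction is amplified
        # on edges (Canny map), interleaved with total-variation smoothing.
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(blurred, blurred.mean())
        weight = 1.0 + edge_gain * canny(blurred, sigma=1.0).astype(float)
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same") + 1e-12
            correction = fftconvolve(blurred / reblurred, psf_mirror, mode="same")
            estimate = estimate * correction ** weight          # edge-weighted RL update
            estimate = np.clip(denoise_tv_chambolle(estimate, weight=tv_weight), 1e-6, None)
        return estimate

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
    psf = gaussian_psf()
    observed = fftconvolve(truth, psf, mode="same") + rng.normal(scale=0.01, size=truth.shape)
    restored = edge_weighted_rl(np.clip(observed, 1e-6, None), psf)
    print("RMSE:", np.sqrt(np.mean((restored - truth) ** 2)))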
|
280 |
Numérisation 3D de visages par une approche de super-résolution spatio-temporelle non-rigide / 3D face scanning using a non-rigid spatio-temporal super-resolution approach. Ouji, Karima, 28 June 2012
3D face measurement is increasingly in demand for applications such as biometrics, animation and facial surgery. Current solutions often employ a structured-light camera/projector device to overcome the relatively uniform appearance of skin: depth information is recovered by decoding the patterns of projected structured light. One of the most widely used structured-light codings is sinusoidal phase shifting, which yields pixel-dense 3D resolution. Current solutions mostly project more than three phase-shifted sinusoidal patterns to recover the depth information, which increases the acquisition time. They also require projector-camera calibration, whose accuracy is crucial for the phase-to-depth estimation step, and a phase-unwrapping stage that is sensitive to ambient light, especially when the number of patterns decreases. An alternative to projector-camera systems is to recover depth by stereovision with a multi-camera system: a stereo-matching step finds correspondences between the stereo images and the 3D information is obtained by optical triangulation. However, the model computed in this way is generally quite sparse. To upsample and denoise depth images, researchers have turned to super-resolution techniques, originally proposed for time-of-flight cameras, whose data are of low quality and corrupted by very high random noise.

This thesis proposes a low-cost 3D acquisition solution with a spatio-temporal non-rigid super-resolution capability, using a calibrated multi-camera system coupled with an uncalibrated projector, which is particularly suited to 3D face scanning, i.e. rapid and easily movable. The proposed solution is a hybrid stereovision and phase-shifting approach, using two shifted patterns and a texture image, which not only takes advantage of the assets of stereovision and structured light but also overcomes their weaknesses. The super-resolution scheme includes a 3D non-rigid registration that corrects 3D artifacts and completes the scanned view of the face in the presence of small non-rigid deformations such as facial expressions.
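Although this work uses only two shifted patterns plus a texture image, the underlying phase-shifting principle is easiest to state for the classic three-step case, where the wrapped phase has a closed form. The sketch below (an illustration of that textbook formula, not the thesis pipeline) verifies it on synthetic fringes:

    import numpy as np

    def wrapped_phase(i1, i2, i3):
        # Three-step phase shifting with patterns I_k = A + B*cos(phi + (k-2)*2*pi/3):
        #   phi = atan2( sqrt(3)*(I1 - I3), 2*I2 - I1 - I3 )
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

    # Synthetic check: build three shifted fringe signals and recover the phase ramp.
    x = np.linspace(0, 4 * np.pi, 512)              # true (unwrapped) phase along a line
    A, B = 0.5, 0.4                                 # ambient term and fringe modulation
    i1 = A + B * np.cos(x - 2 * np.pi / 3)
    i2 = A + B * np.cos(x)
    i3 = A + B * np.cos(x + 2 * np.pi / 3)
    phi = wrapped_phase(i1, i2, i3)
    err = np.angle(np.exp(1j * (phi - x)))          # compare modulo 2*pi
    print("max phase error:", np.abs(err).max())

The recovered phase is still wrapped to (-pi, pi]; converting it to depth requires the unwrapping and phase-to-depth steps discussed above.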
|