291

Imaging the bone cell network with nanoscale synchrotron computed tomography

Joita Pacureanu, Alexandra, 19 January 2012
Osteocytes are the most abundant and longest-living bone cells, embedded in the bone matrix. They are interconnected through dendrites located in slender canals called canaliculi. The osteocyte lacunae, the cavities in which the cells reside, together with the canaliculi form a communication network throughout the bone matrix, permitting transport of nutrients, waste and signals. These cells were initially considered passive, but their role as mechanosensory cells and orchestrators of bone remodeling has become increasingly clear. Despite recent advances in imaging techniques, none of the available methods provides an adequate 3D assessment of the lacuno-canalicular network (LCN). The aims of this thesis were to achieve 3D imaging of the LCN with synchrotron radiation X-ray computed tomography (SR-CT) and to develop tools for 3D detection and segmentation of this cell network, leading towards automatic quantification of this structure. We demonstrate the feasibility of parallel-beam SR-CT for 3D imaging of the LCN (voxel ~300 nm). This technique can provide data on both the morphology of the cell network and the composition of the bone matrix. Compared with other 3D imaging methods, it images tissue covering a number of cell lacunae three orders of magnitude greater, in a simpler and faster way, making it possible to study sets of specimens and reach biomedical conclusions. Furthermore, we propose the use of divergent holotomography to image the ultrastructure of bone tissue (voxel ~60 nm). The image reconstruction provides phase maps, obtained after applying a suitable phase retrieval algorithm. This technique permits assessment of the cell network with higher accuracy and enables the 3D organization of collagen fibres in the bone matrix to be visualized for the first time. To obtain quantitative parameters on the geometry of the cell network, the network has to be segmented. Due to limitations in spatial resolution, canaliculi appear as 3D tube-like structures measuring only 1-3 voxels in diameter. This, combined with the noise, the low contrast and the large size of each image (8 GB), makes segmentation a difficult task. We propose an image enhancement method based on a 3D line filter combined with bilateral filtering, which improves canaliculi detection, reduces background noise and preserves the cell lacunae. For the segmentation we developed a method based on variational region growing, with two proposed energy functionals to minimize in order to detect the desired structure, based on the 3D line filter map and the original image. Preliminary quantitative results on human femoral samples are obtained through connected-components analysis, and a few observations on the bone cell network and its relation to the bone matrix are presented.
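The enhancement step lends itself to a compact illustration. Below is a minimal sketch, shown on a 2D slice for brevity (the thesis works on full 3D volumes), of a multi-scale line filter combined with bilateral filtering; the scikit-image functions, parameter values and the random stand-in slice are assumptions made for illustration, not the author's implementation.

```python
# Hedged sketch of the enhancement idea: edge-preserving bilateral denoising
# followed by a multi-scale line (ridge) filter, on a 2D stand-in slice.
import numpy as np
from skimage import filters, restoration

def enhance_canaliculi(slice_img):
    """Enhance thin, bright tube-like structures in a normalized slice."""
    # Edge-preserving smoothing: suppress background noise while keeping
    # the sharp borders of the cell lacunae.
    smoothed = restoration.denoise_bilateral(slice_img,
                                             sigma_color=0.05,
                                             sigma_spatial=3)
    # Multi-scale ridge filter: responds to bright elongated structures a
    # few voxels wide, analogous to the thesis's 3D line filter map.
    return filters.sato(smoothed, sigmas=(0.5, 1.0, 1.5), black_ridges=False)

rng = np.random.default_rng(0)
demo = rng.random((128, 128))          # stand-in for a normalized CT slice
response = enhance_canaliculi(demo)
print(response.shape, float(response.max()))
```

The line-filter response can then seed a region-growing segmentation, as the abstract describes.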
292

Kompiuterinės regos taikymų tyrimai medienos kokybės kontrolei / Research of computer vision application for the control of wood quality

Malinauskas, Mindaugas, 26 May 2004
Wood is a limited natural resource whose internal characteristics are difficult to identify even with advanced technology, and hardwood is the most expensive. There is a desire to optimize the production of wood products by minimizing waste and maximizing output, which could be achieved with the capability to evaluate common wood structures: knots, rot, voids and other defects. Many nondestructive testing methods could be applied to evaluating the inner quality of a log; the most successful is X-ray computed tomography. This thesis covers the principles of computed tomography and image reconstruction algorithms. Implementations of tomographic image reconstruction algorithms were tested using synthetic and real data. Research on the application of X-ray computed tomography to the analysis of wood structures was carried out. Inspection of glued zones in glued wooden articles was accomplished using tomographic imaging, and guidelines for the design of a wood-specialized X-ray computed tomograph are given.
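To make the reconstruction principle concrete, here is a hedged sketch of filtered back-projection on a synthetic phantom using scikit-image; the phantom, projection angles and filter choice are illustrative assumptions, not the specific algorithms tested in the thesis.

```python
# Minimal filtered back-projection (FBP) demonstration on synthetic data.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)        # synthetic test slice
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=theta)                # simulated CT scan
reconstruction = iradon(sinogram, theta=theta,
                        filter_name='ramp')         # FBP with ramp filter
error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f'FBP RMS error: {error:.4f}')
```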
293

Multiscale methods in signal processing for adaptive optics

Maji, Suman Kumar, 14 November 2013
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF builds on established concepts in statistical physics and is naturally suited to the study of the multiscale properties of complex natural signals, mainly due to the precise numerical estimation of geometrically localized critical exponents, called singularity exponents. These exponents quantify the local degree of predictability at each point of the signal domain and provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients obtained from the wavefront sensor to be propagated along the scales to a higher resolution. We compare our results with those obtained by linear approaches, and thereby offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
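As a rough illustration of the idea of singularity exponents, the sketch below estimates a local exponent at each pixel by regressing the log of a multiscale gradient measure against the log of the scale; the choice of measure, the scales and the toy field are assumptions made for illustration and greatly simplify the MMF estimators actually used.

```python
# Toy estimate of local singularity exponents h(x): measure the gradient
# mass in windows of increasing radius r and fit the log-log slope, using
# mu(B(x, r)) ~ r^(h(x) + d) with d = 2 for an image.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, uniform_filter

def singularity_exponents(phase, radii=(1, 2, 4, 8)):
    grad = gaussian_gradient_magnitude(phase, sigma=1.0) + 1e-12
    logs = []
    for r in radii:
        # local measure: total gradient mass in a (2r+1)-sized window
        mu = uniform_filter(grad, size=2 * r + 1) * (2 * r + 1) ** 2
        logs.append(np.log(mu))
    logs = np.stack(logs)                       # shape (n_scales, H, W)
    x = np.log(np.array(radii, dtype=float))
    xm, ym = x.mean(), logs.mean(axis=0)
    # per-pixel least-squares slope across scales
    slope = (((x[:, None, None] - xm) * (logs - ym)).sum(axis=0)
             / ((x - xm) ** 2).sum())
    return slope - 2.0                          # subtract dimension d = 2

rng = np.random.default_rng(1)
phase = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # toy rough field
h = singularity_exponents(phase)
print(float(h.min()), float(h.max()))
```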
294

Suivi de culture cellulaire par imagerie sans lentille / Measurement of morphological modifications of cell population using lensless imaging.

Vinjimore Kesavan, Srikanth, 15 December 2014
Biological studies have always started from curious observations, as exemplified by Robert Hooke's first description of cells in 1665, observed through his microscope. Since then the fields of microscopy and cell biology have grown hand in hand, each pushing the growth of the other. From the basic description of cells in 1665, with parallel advances in microscopy, we have travelled a long way towards understanding sub-cellular processes and molecular mechanisms. Our understanding of cells increases every day, and new questions are continually posed and answered. Several high-resolution microscopy techniques have been introduced (PALM, STED, STORM, etc.) that push the resolution limit to a few tens of nanometres, taking us to a new era where 'seeing is believing'. That said, the world of cells is vast, with information spread from nanometres to millimetres and over extended time periods, so no single microscopy technique can capture all the available information. Knowledge in cell biology comes from a combination of imaging and quantification techniques that complement one another. The majority of modern microscopy techniques focus on increasing resolution, which is achieved at the expense of cost, compactness, simplicity and field of view. The substantial decrease in the field of observation limits visibility to a few single cells at best. Therefore, despite our ability to peer through cells using increasingly powerful optical instruments, fundamental biological questions remain unanswered at mesoscopic scales. A global view of a cell population, with significant statistics in both space and time, is necessary to understand the dynamics of cell biology, taking into account the heterogeneity of the population and cell-to-cell variability. Mesoscopic information is as important as microscopic information: although the latter gives access to sub-cellular functions, it is the former that leads to high-throughput, label-free measurements. By focusing on simplicity, cost, feasibility, field of view and time-lapse in-incubator imaging, we developed a lensfree video microscope based on digital in-line holography, capable of providing a new perspective on cell culture monitoring by capturing the kinetics of thousands of cells simultaneously. In this thesis, we present our lensfree video microscope and its applications in in-vitro cell culture monitoring and quantification. We validated the system by performing more than 20,000 hours of real-time imaging under diverse conditions (e.g. 37°C, 4°C, 0% O2) and with varied cell types and culture conditions (e.g. primary cells, human stem cells, fibroblasts, endothelial cells, epithelial cells, 2D/3D cell culture). This allowed us to develop label-free cell-based assays to study the major cellular events: cell adhesion and spreading, cell division, cell division orientation, cell migration, cell differentiation, network formation and cell death. The results we obtained respect the heterogeneity of the population, cell-to-cell variability (a rising concern in the biological community) and the massiveness of the population, while adhering to standard cell culture practices, a rare combination seldom attained by existing real-time monitoring methods. We believe that our microscope and its associated metrics will complement existing techniques by bridging the gap between mesoscopic and microscopic information.
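The reconstruction step behind lensfree imaging of this kind is classically an angular-spectrum back-propagation of the recorded hologram. The sketch below shows this standard operation with assumed wavelength, pixel-size and propagation-distance values; it is illustrative and is not the thesis's actual pipeline.

```python
# Schematic angular-spectrum back-propagation of an in-line hologram.
import numpy as np

def angular_spectrum_propagate(hologram, wavelength, pixel_size, z):
    """Back-propagate an intensity hologram by distance z (metres)."""
    field = np.sqrt(hologram.astype(np.complex128))  # crude amplitude estimate
    ny, nx = field.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=pixel_size),
                         np.fft.fftfreq(ny, d=pixel_size))
    # free-space transfer function, with evanescent components cut off
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    kernel = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

holo = np.ones((256, 256))                 # placeholder for an acquired frame
obj = angular_spectrum_propagate(holo, wavelength=532e-9,
                                 pixel_size=1.67e-6, z=-1e-3)
print(float(np.abs(obj).mean()))
```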
295

Détection et caractérisation d'exoplanètes dans des images à grand contraste par la résolution de problème inverse / Detection and characterization of exoplanets in high contrast images by the inverse problem approach

Cantalloube, Faustine, 30 September 2016
Direct imaging of exoplanets provides valuable information about the light they emit, their interaction with their host star's environment, and their nature. In order to image such objects, advanced data processing tools adapted to the instrument are needed. In particular, the presence of quasi-static speckles in the images, due to optical aberrations distorting the light from the observed star, prevents planetary signals from being distinguished. In this thesis, I present two innovative image processing methods, both based on an inverse problem approach, that enable the disentanglement of quasi-static speckles from planetary signals. My work consisted of improving these two algorithms so that they can process on-sky images. The first, called ANDROMEDA, is an algorithm dedicated to point source detection and characterization via a maximum likelihood approach. ANDROMEDA exploits the temporal diversity provided by field rotation during the observation (the pupil, where the aberrations originate, being kept fixed) to recognize the deterministic signature of a rotating companion over the stellar halo. Starting from the application of the original version to real data, I proposed and qualified improvements to deal with residuals not modelled by the method, namely slowly varying low-order structures due to adaptive optics residuals and the remaining level of correlated noise in the data. Once ANDROMEDA was operational on real data, I analyzed its performance and its sensitivity to the user parameters, demonstrating the robustness of the method. A detailed comparison with the algorithms most widely used by the exoplanet imaging community showed that ANDROMEDA is competitive, with very attractive performance in the current context; in particular, it is the only method that allows fully unsupervised detection. Through numerous applications to on-sky data from different instruments, ANDROMEDA proved its reliability and its efficiency in extracting the information contained in the images rapidly and systematically, with only one user parameter to tune. These applications also opened perspectives for adapting the tool to the current challenges of exoplanet imaging. The second algorithm, called MEDUSAE, consists of jointly estimating the aberrations (responsible for the speckle field) and the circumstellar objects of scientific interest by relying on a coronagraphic image formation model. MEDUSAE exploits the spectral diversity provided by multispectral data. In order to refine the inversion strategy and identify the most critical parameters, I applied MEDUSAE to a data set generated with the model used in the inversion. To investigate further the impact of the discrepancy between the image model and real data, I then applied the method to more realistic simulated images. Finally, I applied MEDUSAE to real data; from the preliminary results obtained, I identified the important inputs the method requires and proposed several leads that could make this algorithm operational on on-sky data.
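The core intuition behind ANDROMEDA, that static speckles cancel in differences of frames taken at different field-rotation angles while a companion leaves a deterministic moving signature, can be sketched as follows; the toy data cube, angles and separation threshold are assumptions for illustration only, not the actual pipeline.

```python
# Toy sketch of angular-differential imaging as exploited by ANDROMEDA:
# difference frames with sufficient field rotation so the static speckle
# halo cancels, leaving any rotating companion signal in the residuals.
import numpy as np
from scipy.ndimage import gaussian_filter

def pairwise_differences(cube, angles, min_sep=10.0):
    """Difference frames whose rotation angles differ by >= min_sep degrees."""
    diffs = []
    for i in range(len(cube)):
        for j in range(i + 1, len(cube)):
            if abs(angles[j] - angles[i]) >= min_sep:
                diffs.append(cube[j] - cube[i])   # static speckles cancel
    return np.array(diffs)

rng = np.random.default_rng(2)
n, size = 8, 64
speckles = gaussian_filter(rng.standard_normal((size, size)), 2.0)  # static halo
angles = np.linspace(0.0, 90.0, n)
cube = np.stack([speckles + 0.01 * rng.standard_normal((size, size))
                 for _ in range(n)])
diffs = pairwise_differences(cube, angles)
print(len(diffs), 'difference frames; residual rms =', float(diffs.std()))
```

In the real algorithm, a maximum-likelihood matched filter is then applied to these residuals to estimate companion position and flux.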
296

Assessment of the benefits and drawbacks of high resolution PET for the imaging of cancer in the head

Anton-Rodriguez, Jose, January 2018
Introduction: In Positron Emission Tomography (PET), the use of resolution modelling (RM) in iterative image reconstruction enables the modelling of aspects of detection which result in mispositioning of measured data and the subsequent blurring of reconstructed images. RM reconstruction can yield significant improvements in spatial resolution, voxel variance and count-rate bias, and could be a software alternative to detection hardware designed for higher resolution. Such hardware typically consists of small scintillation crystals, small bore diameters and depth-of-interaction (DOI) discrimination, as in the High Resolution Research Tomograph (HRRT, Siemens), which uses a double-crystal-layer phoswich detector system. However, RM implementation comes with penalties such as slower convergence, potentially higher region-of-interest variance and Gibbs artefacts. Methods: The first part of this thesis assesses the benefits and drawbacks of RM, together with the measurement and modelling of spatially varying resolution kernels for different scanner configurations and PET isotopes on the HRRT. It is also unclear whether high-resolution scanning offers significant advantages over clinical PET-CT scanners for applications in the head. Through direct comparison with our HRRT, we explore whether high-resolution scanning offers significant advantages for a head application over clinical PET-CT. For this comparison our Biograph TruePoint TrueV (Siemens), optimised for whole-body imaging, was used, and a novel clinical study using both scanners was set up in which we scanned Neurofibromatosis 2 (NF2) patients with vestibular schwannomas (VS) using [18F]fluorodeoxyglucose (FDG) and [18F]fluorothymidine (FLT). The clinical objective was to assess whether uptake of FLT and FDG within VS could be measured and whether this uptake was predictive of tumour growth. Finally, the feasibility and impact of reducing the original injected activities in our clinical study was assessed using bootstrap resampling. Conclusions: RM provides greater, and additive, improvements in image resolution compared to DOI on the HRRT. Isotope-specific image-based RM could be estimated from published positron range distributions and measurements using fluorine-18. In the clinical project, uptake of FDG and FLT within the VS lesions was observed; these uptake values were correlated with each other, and high uptake was predictive of tumour growth, with little difference in predictive power between FLT and FDG. Although the HRRT offered benefits for imaging small lesions, in our clinical application there was little difference between the two scanners in discriminating lesion growth. Using the PET-CT scanner data and knowledge of lesion location, injected doses could be reduced to 5-10% without any significant loss of ability to discriminate lesion growth.
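A common way to implement RM, presumably of the kind assessed here, is to fold an image-space PSF into the forward and back projections of an MLEM loop. The sketch below shows this idea on a toy 2D problem with scikit-image projectors and an assumed Gaussian PSF; it is illustrative only, not the scanner's reconstruction software.

```python
# Illustrative MLEM with image-space resolution modelling: the system model
# is 'project(blur(x))', so the PSF is applied in both forward and back
# projection (the Gaussian blur is self-adjoint).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

truth = rescale(shepp_logan_phantom(), 0.25)
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
psf_sigma = 1.0                                   # assumed detector blur

measured = radon(gaussian_filter(truth, psf_sigma), theta=theta)

x = np.ones_like(truth)
# sensitivity image: adjoint of the blurred system model applied to ones
sens = gaussian_filter(iradon(np.ones_like(measured), theta=theta,
                              filter_name=None), psf_sigma) + 1e-9
for _ in range(20):
    proj = radon(gaussian_filter(x, psf_sigma), theta=theta) + 1e-9
    ratio = iradon(measured / proj, theta=theta, filter_name=None)
    x *= gaussian_filter(ratio, psf_sigma) / sens  # multiplicative EM update
print('MLEM-RM rms error:', float(np.sqrt(np.mean((x - truth) ** 2))))
```

The slower convergence and Gibbs-like edge overshoot mentioned in the abstract can both be observed by varying the iteration count and psf_sigma in such a sketch.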
297

Développement d'une méthode de reconstruction d'image basée sur la détection de la fluorescence X pour l'analyse d'échantillon / Development of an image reconstruction method based on the detected X-ray fluorescence for sample analysis

Hamawy, Lara, 23 October 2014
A new technique that localizes and identifies fluorescing elements in a sample was developed. This practical imaging modality employs a polychromatic X-ray source to irradiate the sample and prompt the fluorescence emission. Many factors affecting the whole system, such as attenuation, sample self-absorption, probability of fluorescence and Compton scattering, were taken into account. An effective detection system was then established to acquire the optimum fluorescence data and discriminate between elements according to their characteristic fluorescence. This set-up, coupled with an appropriate image reconstruction technique, leads to a detailed two-dimensional image. Compared with conventional image reconstruction techniques, the developed reconstruction method is a statistical technique with appropriate convergence towards an image of acceptable resolution. Moreover, it is a simplified technique that allows the imaging of many different applications.
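The self-absorption factor mentioned above follows the Beer-Lambert law, I = I0·exp(-mu·d): fluorescence emitted at depth d inside the sample is attenuated before reaching the detector. The tiny worked example below uses an assumed attenuation coefficient purely to give a feel for the magnitudes involved.

```python
# Back-of-envelope Beer-Lambert self-absorption, with an assumed mu.
import numpy as np

mu = 50.0                                  # assumed attenuation coeff. [1/cm]
depths_cm = np.array([0.001, 0.01, 0.05, 0.1])
transmitted = np.exp(-mu * depths_cm)      # fraction escaping the sample
for d, t in zip(depths_cm, transmitted):
    print(f'depth {d * 1e4:6.0f} um -> fraction detected {t:.3f}')
```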
298

Méthodes d'illumination et de détection innovantes pour l'amélioration du contraste et de la résolution en imagerie moléculaire de fluorescence en rétrodiffusion / Innovative illumination and detection schemes for the enhancement of contrast and resolution of fluorescence reflectance imaging

Fantoni, Frédéric, 5 December 2014
Intraoperative fluorescence imaging in reflectance geometry is an attractive modality for noninvasively monitoring fluorescence-targeted tumours. In some situations, however, this kind of imaging suffers from a lack of depth penetration and poor resolution due to the diffusive nature of photon propagation in tissue. The objective of this thesis was to tackle these limitations. Rather than using the wide-field illumination of usual systems, the technique developed relies on scanning the medium with a laser line illumination and acquiring an image at each excitation position. Several detection schemes are proposed to take advantage of the acquired stack of images and enhance the resolution and contrast of the final image. These detection techniques were tested both in simulation, with the NIRFAST software and a Monte-Carlo algorithm, and experimentally. The experimental validation was performed on tissue-like phantoms and through preliminary in vivo testing on small animals, with results compared to those obtained with a more classical uniform wide-field illumination. By enhancing both contrast and resolution, these methods allow us to obtain exploitable information deeper in the tissue by reducing the detrimental effects of parasitic signals and diffusion.
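One detection scheme that a line-scanned stack makes possible is quasi-confocal rejection: for each position of the illumination line, keep only the detector rows near the line and discard the diffuse remainder. The sketch below illustrates this on a toy stack; the geometry and acceptance width are assumptions for illustration, not the thesis's exact schemes.

```python
# Conceptual line-scanning detection: accept only near-line signal from
# each frame of the stack, rejecting diffuse background.
import numpy as np

def line_confocal_image(stack, line_rows, half_width=2):
    """stack: (n_positions, H, W) frames; line_rows: illuminated row per frame."""
    n, h, w = stack.shape
    out = np.zeros((h, w))
    for frame, row in zip(stack, line_rows):
        lo, hi = max(0, row - half_width), min(h, row + half_width + 1)
        out[lo:hi] += frame[lo:hi]            # accept only near-line rows
    return out

rng = np.random.default_rng(3)
h = w = 64
rows = np.arange(0, h, 4)                     # successive line positions
stack = 0.1 * rng.random((len(rows), h, w))   # diffuse background
for k, r in enumerate(rows):
    stack[k, r, :] += 1.0                     # in-focus line signal
img = line_confocal_image(stack, rows)
print(img.shape, float(img.max()))
```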
299

Sobre o cálculo de atenuação e de atividade em tomografia por emissão a partir de dados de atividade / Activity and attenuation reconstruction for emission computed tomography using emission data only

Pereira, Fabiana Crepaldi, 07 January 2004
Advisors: Alvaro Rodolfo De Pierro, Julio Cesar Hadler Neto / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin / Abstract: This thesis deals with the problem of estimating the attenuation from activity data in emission computed tomography. We present several new methods aiming at solving the most persistent drawback of the problem: the 'crosstalk' between the activity and attenuation images, an interference in the form of a shadow. The first group of methods is based on iteratively solving a regularized maximum likelihood model, and the second on using consistency conditions. Our simulations show results that indicate new directions for the solution of the crosstalk problem. / Doctorate in Sciences, General Physics
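The first family of methods can be caricatured as an alternating maximization: update the activity with an MLEM step while holding the attenuation fixed, then take a likelihood gradient step on the attenuation with the activity fixed. The toy 1-D sketch below, with an assumed random projector and step size, illustrates the structure of such an iteration and is not the thesis's algorithms.

```python
# Toy alternating activity/attenuation estimation from emission data only.
# Model: y ~ Poisson( exp(-P a) * (P x) ), P a toy linear projector.
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_rays = 16, 24
P = rng.random((n_rays, n_pix))               # assumed toy projector
x_true = 5.0 * rng.random(n_pix)              # true activity
a_true = 0.1 * rng.random(n_pix)              # true attenuation
y = rng.poisson(np.exp(-P @ a_true) * (P @ x_true)).astype(float)

x, a = np.ones(n_pix), np.zeros(n_pix)
for _ in range(200):
    att = np.exp(-P @ a)
    # MLEM step for activity, attenuation held fixed
    x *= P.T @ (att * y / (att * (P @ x) + 1e-9)) / (P.T @ att + 1e-9)
    # gradient ascent step for attenuation, activity held fixed:
    # d(loglik)/da_j = sum_i P_ij * (lambda_i - y_i)
    resid = att * (P @ x) - y
    a = np.maximum(a + 1e-3 * (P.T @ resid), 0.0)
print('relative activity error:',
      float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

Running such a toy long enough typically shows exactly the crosstalk the abstract describes: errors in the attenuation estimate cast shadow-like artefacts into the activity image.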
300

Um estudo sobre algoritmos de interpolação de sequências numéricas / A study of algorithms for interpolation of numerical sequences

Delgado, Eric Magalhães, 14 August 2018
Advisor: Max Henrique Machado Costa / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation presents a study of interpolation and decimation algorithms for numerical sequences, whose filters are derived from the ideal reconstruction filter. An adaptive cubic interpolation algorithm is proposed and its gains are analyzed by comparison with the classic algorithms. The idea is to explore the trade-off between quality and complexity of the interpolation filters. The adaptation of the filter, obtained from spatial and spectral estimates of the sequence to be interpolated, is useful because it provides an efficient use of complex filters in critical regions such as the edge regions of an image. Simulations on typical images show a significant quantitative gain for the adaptive algorithm when compared to the classical algorithms. Furthermore, an interpolation algorithm is analyzed for the case where information about the acquisition process of the sequence to be interpolated is available. / Master's in Electrical Engineering, Telecommunications and Telematics
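The classic cubic filters referred to above are typically the Keys family, local polynomial approximations of the ideal sinc reconstruction filter. The sketch below implements the standard Keys kernel (a = -0.5) for 1-D interpolation; the test sequence is an assumption for illustration.

```python
# Keys cubic interpolation kernel (a = -0.5), a classic approximation of
# the ideal sinc reconstruction filter, applied to a 1-D sequence.
import numpy as np

def keys_kernel(t, a=-0.5):
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t <= 1
    m2 = (t > 1) & (t < 2)
    out[m1] = (a + 2) * t[m1] ** 3 - (a + 3) * t[m1] ** 2 + 1
    out[m2] = a * t[m2] ** 3 - 5 * a * t[m2] ** 2 + 8 * a * t[m2] - 4 * a
    return out

def interpolate(samples, x):
    """Cubic interpolation of a 1-D sequence at fractional position x."""
    i = int(np.floor(x))
    support = np.arange(i - 1, i + 3)               # 4-sample neighbourhood
    idx = np.clip(support, 0, len(samples) - 1)     # clamp at the borders
    w = keys_kernel(x - support.astype(float))
    return float(np.dot(samples[idx], w))

seq = np.sin(np.linspace(0, np.pi, 16))             # assumed test sequence
print(interpolate(seq, 7.3), np.sin(7.3 / 15 * np.pi))  # estimate vs. truth
```

An adaptive scheme of the kind the dissertation proposes would switch between such a cubic filter and cheaper (e.g. linear) kernels depending on local edge and spectral estimates.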
