  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Quantitative accuracy of iterative reconstruction algorithms in positron emission tomography

Armstrong, Ian January 2017
Positron Emission Tomography (PET) plays an essential role in the management of patients with cancer. It is used to detect and characterise malignancy as well as to monitor response to therapy. PET is a quantitative imaging tool, producing images that quantify the uptake of a radiotracer administered to the patient. The most common measure of uptake derived from the image is the Standardised Uptake Value (SUV). Data acquired on the scanner are processed to produce the images reported by clinicians, a task known as image reconstruction, which uses computational algorithms to process the scan data. The last decade has seen substantial development of these algorithms, and two advances have become commercially available: modelling of the scanner spatial resolution (resolution modelling) and time of flight (TOF). The Biograph mCT was the first scanner from Siemens Healthcare to feature both algorithms, and the scanner at Central Manchester University Hospitals (CMUH) was the first Biograph mCT to go live in the UK. This PhD project, sponsored by Siemens Healthcare, evaluates the effect of these algorithms on SUV in routine oncology imaging through a combination of phantom and patient studies. Resolution modelling improved visualisation of small objects and produced significant increases in uptake measurements, which may pose a challenge to clinicians interpreting established uptake metrics used as indicators of disease status. Resolution modelling also reduced the variability of SUV; this improved precision is particularly beneficial when assessing SUV changes during therapy monitoring. TOF reduced image noise while conserving FDG uptake measurements, relative to non-TOF algorithms. As a result of this work, TOF has been used routinely at the CMUH department since mid-2014, facilitating a reduction in patient and staff radiation dose and an increase of 100 in the number of scans performed each year.
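The SUV discussed above is simply the imaged activity concentration normalized by the injected dose per unit body weight. A minimal sketch of the body-weight variant (function name and unit conventions are illustrative, not taken from the thesis):

```python
def suv_bw(conc_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: activity concentration divided by injected dose
    per gram of body weight. Assumes tissue density of ~1 g/mL, so the
    result is dimensionless."""
    conc_mbq_per_g = conc_kbq_per_ml * 1e-3          # kBq/mL -> MBq/g
    dose_per_g = injected_dose_mbq / (body_weight_kg * 1000.0)
    return conc_mbq_per_g / dose_per_g
```

For example, a voxel measuring 5 kBq/mL in a 70 kg patient injected with 350 MBq gives an SUV of 1.0; resolution modelling changing the recovered concentration directly changes this value, which is why the thesis's finding of increased uptake measurements matters clinically.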

Spatiotemporal image reconstruction with resolution recovery for dynamic PET/CT in oncology

Kotasidis, Fotis January 2011
Positron emission tomography (PET) is a powerful and highly specialised imaging modality with the inherent ability to detect and quantify changes in the bio-distribution of an intravenously administered radio-labelled tracer, through dynamic image acquisition of the system under study. By modelling the temporal distribution of the tracer, parameters of interest regarding specific biological processes can be derived. Traditionally, parameter estimation is performed by first reconstructing a set of dynamic images independently and then applying kinetic modelling, leading to parameters of reduced accuracy and precision. Furthermore, only simple geometrical models are used during image reconstruction to model the mapping between image space and data space, leading to images of reduced resolution. This thesis attempts to address some of the problems associated with the current methodology by implementing and evaluating new spatiotemporal image reconstruction strategies in oncology PET/CT imaging, with simulated, phantom and real data. More specifically, this thesis is concerned with iterative reconstruction techniques, the incorporation of resolution recovery and kinetic modelling strategies within the image reconstruction process, and the application of such methods in perfusion [15O]H2O imaging. This work is mainly based upon two whole-body PET/CT scanners, the Siemens Biograph 6 B-HiRez and TruePoint TrueV, but some aspects were also implemented for the High Resolution Research Tomograph (HRRT).
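The kinetic modelling step referred to above is commonly a compartment-model fit to each voxel's time-activity curve. As a hedged illustration (not the thesis's estimator, which embeds the model inside reconstruction), here is a one-tissue compartment model and a naive grid-search fit; all names and the grid are illustrative:

```python
import numpy as np

def one_tissue_tac(K1, k2, cp, t):
    """Tissue time-activity curve for a one-tissue compartment model:
    C_T(t) = K1 * exp(-k2 t) convolved with the plasma input C_p(t).
    Assumes a uniform time grid t."""
    dt = t[1] - t[0]
    kernel = K1 * np.exp(-k2 * t)
    return np.convolve(cp, kernel)[: len(t)] * dt

def fit_one_tissue(tac, cp, t, K1_grid, k2_grid):
    """Illustrative grid search over (K1, k2); real pipelines use
    weighted nonlinear least squares."""
    best, best_err = None, np.inf
    for K1 in K1_grid:
        for k2 in k2_grid:
            err = np.sum((one_tissue_tac(K1, k2, cp, t) - tac) ** 2)
            if err < best_err:
                best, best_err = (K1, k2), err
    return best
```

Fitting such a model to independently reconstructed frames is the "traditional" route criticized above; spatiotemporal reconstruction estimates these parameters directly from the raw dynamic data.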

Variational image segmentation, inpainting and denoising

Li, Zhi 27 July 2016
Variational methods have attracted much attention in the past decade. With rigorous mathematical analysis and computational methods, variational minimization models can handle many practical problems arising in image processing, such as image segmentation and image restoration. We propose a two-stage image segmentation approach for color images: in the first stage, a primal-dual algorithm is applied to efficiently solve the proposed minimization problem, yielding a smoothed image free of irrelevant and trivial detail; in the second stage, a hill-climbing procedure segments the smoothed image. For multiplicative noise removal, we employ a difference-of-convex algorithm to solve the non-convex AA model. We also improve the non-local total variation model: an extra term imposes regularity on the graph formed by the weights between pixels. Thin structures benefit from this regularization term because it adapts the weight values from a global point of view, so thin features are not overlooked as they are in conventional non-local models. Since the non-local total variation term now has two variables, the image u and the weights v, and is concave with respect to v, the proximal alternating linearized minimization algorithm with variable metrics is naturally applied to solve the non-convex model efficiently. The efficiency of the proposed approaches is demonstrated on image segmentation, image inpainting and image denoising problems.
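The first-stage smoothing relies on primal-dual optimization of a variational model. As a self-contained illustration of this family of methods (not the authors' algorithm), here is Chambolle's dual projection scheme for Rudin-Osher-Fatemi total-variation denoising in plain numpy, with the step size tau <= 1/8 required for convergence:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d

def tv_denoise(f, lam=0.2, n_iter=100, tau=0.125):
    """Chambolle's projection algorithm for
    min_u ||u - f||^2 / (2 * lam) + TV(u)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)
```

The smoothed output plays the role of the "cartoon" image that the second-stage hill-climbing procedure would then segment.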

Detection of root fractures by cone beam computed tomography in images reconstructed with voxel sizes different from those of acquisition

Marchini, Monikelly do Carmo Nascimento, 1986- 18 August 2018
Advisor: Solange Maria de Almeida / Master's dissertation (2011) - Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba / Abstract: The diagnosis of longitudinal root fractures without separation of the fragments is a great challenge in clinical practice. The aim of this study was therefore to evaluate the effectiveness of Cone Beam Computed Tomography (CBCT) in the detection of longitudinal root fractures, comparing images acquired and reconstructed with the same voxel size against images reconstructed with voxel sizes smaller than those of acquisition. Forty extracted human posterior teeth were instrumented with the K3 rotary system. The fractures obtained were incomplete and without separation of the fragments, produced in a universal testing machine (Instron 4411) by applying a conical tip with controlled force at the canal entrance. For image acquisition, the teeth were placed in a dry mandible and scanned on a CBCT device before and after the fractures were made. Images were acquired with 3 protocols using acquisition times of 10, 20 and 40 seconds and voxel sizes of 0.4, 0.3 and 0.25 mm, respectively. After reconstruction at the same voxel size (0.25 mm), the images were evaluated in three planes (axial, coronal and sagittal) by 3 observers. McNemar analysis and ROC curves showed no statistically significant difference between images acquired/reconstructed with voxel sizes of 0.4/0.25 mm (p = 0.0022), 0.3/0.25 mm (p = 0.0056) and 0.25/0.25 mm (p = 0.0005). Sensitivity (50%, 55% and 70%), specificity (90%, 90% and 100%) and accuracy (70%, 72.5% and 85%) increased as the acquired voxel size decreased. It was concluded that CBCT proved effective in the diagnosis of longitudinal root fractures and that images reconstructed with voxel sizes smaller than those of acquisition showed lower sensitivity and specificity than images acquired and reconstructed at the same voxel size, although the protocols did not differ significantly from one another in detecting these fractures. / Master's degree in Dental Radiology (Radiologia Odontológica)
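The sensitivity, specificity and accuracy figures above follow directly from a 2x2 confusion matrix. A quick sketch, with counts chosen to be consistent with the reported 0.25 mm protocol under the assumption of 20 fractured and 20 intact roots (the actual group sizes are not stated in this listing):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of fractured roots detected
    specificity = tn / (tn + fp)   # fraction of intact roots correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 14/20 fractured detected, 20/20 intact cleared
# -> sensitivity 0.70, specificity 1.00, accuracy 0.85, matching the
#    percentages reported for the 0.25 mm protocol.
metrics = diagnostic_metrics(tp=14, fn=6, tn=20, fp=0)
```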

Uniform resampling using the sinc function

Camargo, Ana Carolina 29 March 2006
Advisor: Lucio Tunes dos Santos / Master's dissertation (2006) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: It is common to need to reconstruct functions whose samples do not fall on an equally spaced grid. This is because some of the most widely used algorithms require samples on a regular (uniform) Cartesian grid. It is therefore necessary to perform uniform resampling, i.e., to interpolate the nonuniform samples onto a set of equally spaced points. In this work, it is first shown that the resampling problem can be formulated as the problem of solving a system of linear equations. A solution to this system can be found using the pseudoinverse matrix, a process that is impractical for a large number of variables. Exploiting particular characteristics of the problem, it is possible to develop a better algorithm, which uses only a limited number of samples to compute each uniform sample, transforming the original problem into a sequence of linear systems with fewer variables. The final result can be viewed as both optimal and computationally efficient. Applications are presented to demonstrate the efficiency of the method. / Master's degree in Applied Mathematics (Matemática Aplicada)
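The linear-system formulation can be sketched directly: for a signal bandlimited to the uniform grid's Nyquist rate, each nonuniform sample is a sinc-weighted combination of the unknown uniform samples, giving A x = b with A[i, j] = sinc(t_i - j). A minimal numpy illustration of the full least-squares solve (the thesis's contribution is precisely to avoid this dense global solve by working with a limited number of samples per output point; the names below are illustrative):

```python
import numpy as np

def uniform_resample(t_nonuniform, b, n_uniform):
    """Least-squares recovery of uniform samples x[j] = f(j),
    j = 0..n_uniform-1, of a bandlimited signal f from nonuniform
    samples b[i] = f(t_i), using the model A[i, j] = sinc(t_i - j)."""
    j = np.arange(n_uniform)
    A = np.sinc(t_nonuniform[:, None] - j[None, :])  # normalized sinc
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With oversampled, well-spread nonuniform nodes the system is well conditioned and the uniform samples are recovered accurately; the cost of the dense solve is what motivates the sequence of small local systems developed in the thesis.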

Limitations of Classical Tomographic Reconstructions from Restricted Measurements and Enhancing with Physically Constrained Machine Learning

January 2020
This work is concerned with how best to reconstruct images from limited-angle tomographic measurements. An introduction to tomography and to limited-angle tomography is provided, along with a brief overview of the many fields to which this work may contribute. The traditional tomographic image reconstruction approach involves Fourier-domain representations; the classic Filtered Back Projection algorithm is discussed and used for comparison throughout the work. Bayesian statistics and information-entropy considerations are described, and the Maximum Entropy reconstruction method is derived and its performance in limited-angle measurement scenarios examined. Many new approaches become available once the reconstruction problem is placed in the algebraic form Ax = b, in which the measurement geometry and instrument response define the matrix A, the measured object the column vector x, and the resulting measurements the column vector b. In principle, A can be inverted to recover x. For the limited-angle measurement scenarios of interest in this work, however, the inversion is highly underconstrained: an infinite number of solutions x in a high-dimensional space are consistent with the measurements b. The algebraic formulation therefore calls for high-performing regularization approaches, constraints beyond the measurement matrix A based on prior information about what is being measured, with the goal of selecting the best image from this vast uncertainty space. It is argued in this work that developing satisfactory generic regularization techniques is all but impossible except for the simplest cases; there is a need to capture the "character" of the objects being measured.
The novel result of this effort is a reconstruction approach that matches whatever reconstruction method has proven best for the types of objects being measured when full angular coverage is available. When confronted with limited-angle tomographic situations, or early in a series of measurements, the approach instead relies on a prior understanding of the "character" of the objects measured, learned from examples by a parallel deep neural network. / Dissertation/Thesis / Doctoral Dissertation, Electrical Engineering, 2020
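The simplest instance of the regularized algebraic approach is Tikhonov (ridge) regularization, which resolves the underconstrained inversion by picking the minimum-energy image among those consistent with the measurements. A toy sketch (illustrative only; the work above argues that such generic priors fail to capture object "character"):

```python
import numpy as np

def tikhonov_reconstruct(A, b, alpha=1e-2):
    """Solve min_x ||A x - b||^2 + alpha * ||x||^2 via the normal
    equations. For underdetermined A (limited angles), alpha selects
    one of the infinitely many measurement-consistent solutions."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

As alpha shrinks, the result approaches the minimum-norm consistent solution; richer priors (entropy, sparsity, learned models) replace the `alpha * ||x||^2` term with problem-specific structure.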

Facial image restoration

Bako, Matúš January 2020
In this thesis, I tackle the problem of facial image super-resolution using convolutional neural networks, with a focus on preserving identity. I propose a method consisting of the DPNet architecture and a training algorithm based on state-of-the-art super-resolution solutions. The DPNet model is trained on the Flickr-Faces-HQ dataset, where it achieves an SSIM value of 0.856 while upscaling images to four times their size; the residual channel attention network, one of the best and latest architectures, achieves an SSIM value of 0.858. While training models with an adversarial loss, I encountered problems with artifacts and experimented with various methods to remove them, so far without success. To compare the quality metric with human perception, I acquired image sequences sorted by perceived quality. The results show that the quality of the proposed neural network trained with an absolute loss approaches that of state-of-the-art methods.
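SSIM, the metric quoted above, combines luminance, contrast and structure comparisons. A minimal single-window variant for grayscale images scaled to [0, 1] (standard SSIM instead averages the score over a sliding Gaussian window; the constants follow the usual K1 = 0.01, K2 = 0.03 choice):

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM for two grayscale images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and distortion lowers the covariance term, pulling the score down; windowed SSIM values such as the 0.856 above aggregate this local behavior over the whole image.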

Hyperspectral analysis and imaging from interferometric observations of circumstellar environments

Dalla Vedova, Gaetan 23 September 2016
The observation of extrasolar planets and the study of the circumstellar environments of nearby stars require instruments with high performance in terms of dynamic range and angular resolution. Classical and nulling interferometry offer a solution: in nulling interferometry in particular, the flux of the star on the axis of the interferometer is strongly reduced, allowing fainter off-axis structures to emerge and be detected more easily. In this context, image reconstruction is a fundamental and powerful tool, and the advent of high-spectral-resolution interferometers such as AMBER, and soon MATISSE and GRAVITY, makes polychromatic image reconstruction a priority, so that all the available spectral information can be exploited. The goal of this thesis is to develop and improve monochromatic and hyperspectral imaging techniques. The work presented here has two main parts. First, the potential of nulling interferometry is discussed in the context of inverse problem solving; this part is based on numerical simulations and on data collected on the PERSEE nulling test bench. Second, monochromatic and hyperspectral image reconstruction methods were adapted and developed, and then applied to study the circumstellar environments of two evolved objects, Achernar and Eta Carinae, from PIONIER and AMBER observations. This work provides methodological elements in the field of image reconstruction from interferometric observations, as well as specific studies of the environments of Achernar and Eta Carinae.

A sparsity-based framework for resolution enhancement in optical fault analysis of integrated circuits

Cilingiroglu, Tenzile Berkin 12 March 2016
The increasing density and smaller length scales in integrated circuits (ICs) create resolution challenges for optical failure analysis techniques. Due to flip-chip bonding and dense metal layers on the front side, optical analysis of ICs is restricted to backside imaging through the silicon substrate, which limits the spatial resolution due to the minimum wavelength of transmission and refraction at the planar interface. The state-of-the-art backside analysis approach is to use aplanatic solid immersion lenses in order to achieve the highest possible numerical aperture of the imaging system. Signal processing algorithms are essential to complement the optical microscopy efforts to increase resolution through hardware modifications in order to meet the resolution requirements of new IC technologies. The focus of this thesis is the development of sparsity-based image reconstruction techniques to improve resolution of static IC images and dynamic optical measurements of device activity. A physics-based observation model is exploited in order to take advantage of polarization diversity in high numerical aperture systems. Multiple-polarization observation data are combined to produce a single enhanced image with higher resolution. In the static IC image case, two sparsity paradigms are considered. The first approach, referred to as analysis-based sparsity, creates enhanced resolution imagery by solving a linear inverse problem while enforcing sparsity through non-quadratic regularization functionals appropriate to IC features. The second approach, termed synthesis-based sparsity, is based on sparse representations with respect to overcomplete dictionaries. The domain of IC imaging is particularly suitable for the application of overcomplete dictionaries because the images are highly structured; they contain predictable building blocks derivable from the corresponding computer-aided design layouts. 
This structure provides a strong and natural a priori dictionary for image reconstruction. In the dynamic case, an extension of the synthesis-based sparsity paradigm is formulated. Spatial regions of active areas with the same behavior over time or over frequency are coupled by an overcomplete dictionary consisting of space-time or space-frequency blocks. This extended dictionary enables resolution improvement through sparse representation of dynamic measurements. Additionally, extensions to darkfield subsurface microscopy of ICs and focus determination based on image stacks are provided. The resolution improvement ability of the proposed methods has been validated on both simulated and experimental data.
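Synthesis-based sparsity represents the image as a sparse combination of dictionary atoms, typically found by greedy pursuit. A compact illustration using orthogonal matching pursuit over a toy overcomplete dictionary of identity plus normalized Hadamard atoms (the thesis's dictionaries are instead derived from CAD layouts; everything below is a self-contained sketch):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of D by
    residual correlation, least-squares fitting y on the chosen support.
    (For clarity, no guard against reselecting an atom.)"""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ c
    coeffs[support] = c
    return coeffs

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16
D = np.hstack([np.eye(n), hadamard(n) / 4.0])  # 32 unit-norm atoms in R^16
y = 3.0 * D[:, 2] - 2.0 * D[:, 7]              # 2-sparse synthesis signal
c = omp(D, y, k=2)                             # recovers the sparse code
```

The same mechanism underlies the dynamic extension above: atoms become space-time or space-frequency blocks, and sparsity couples pixels that share temporal or spectral behavior.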

Pattern Classification and Reconstruction for Hyperspectral Imagery

Li, Wei 12 May 2012
In this dissertation, novel techniques for hyperspectral classification and signal reconstruction from random projections are presented. A classification paradigm designed to exploit the rich statistical structure of hyperspectral data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, followed by a Gaussian-mixture-model or support-vector-machine classifier. An extension of this framework in a kernel-induced space is also studied. This classification approach employs a maximum likelihood classifier and dimensionality reduction based on a kernel local Fisher's discriminant analysis. The technique imposes an additional constraint on the kernel mapping: it ensures that neighboring points in the input space stay close by in the projected subspace. In a typical remote sensing flow, the sender needs to invoke an appropriate compression strategy for downlinking signals (e.g., imagery to a base station). Signal acquisition using random projections significantly decreases the sender-side computational cost, while preserving useful information. In this dissertation, a novel class-dependent hyperspectral image reconstruction strategy is also proposed. The proposed method employs statistics pertinent to each class, as opposed to average statistics estimated over the entire dataset, resulting in a more accurate reconstruction from random projections. An integrated spectral-spatial model for signal reconstruction from random projections is also developed. In this approach, spatially homogeneous segments are combined with spectral pixel-wise classification results in the projected subspace. An appropriate reconstruction strategy, such as compressive projection principal component analysis (CPPCA), is employed individually in each category based on this integrated map.
The proposed method provides better reconstruction performance as compared to traditional methods and the class-dependent CPPCA approach.
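Reconstruction from random projections can be sketched in the idealized case where the pixels of a class lie in a known low-dimensional principal subspace: a toy stand-in for the class-dependent CPPCA idea, under the assumption that the subspace basis U is available at the receiver (all names and dimensions below are illustrative):

```python
import numpy as np

def reconstruct_from_projections(R, y, U):
    """Reconstruct x from projections y = R x, assuming x lies in the
    column span of U: write x = U a and solve the small system
    (R U) a = y in least squares."""
    a, *_ = np.linalg.lstsq(R @ U, y, rcond=None)
    return U @ a

rng = np.random.default_rng(0)
n, r, m = 50, 4, 12                 # ambient dim, subspace rank, projections
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal class basis
x = U @ rng.standard_normal(r)                     # pixel spectrum in subspace
R = rng.standard_normal((m, n)) / np.sqrt(m)       # random projection matrix
x_hat = reconstruct_from_projections(R, R @ x, U)  # m << n yet exact here
```

Using a per-class basis U rather than one global basis is what makes the class-dependent strategy more accurate: each class's spectra are better captured by their own low-rank model.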
