211

Využití generativních modelů neuronových sítí v obrazové rekonstrukci / Generative neural networks in image reconstruction

Honzátko, David January 2018
Recent research in generative models has produced a promising approach to modelling the prior probability of natural images. The architecture of these prior models is based on deep neural networks. Although these priors were primarily designed for generating new natural-looking images, their potential use is much broader. One possible application is to use them for solving inverse problems in low-level vision (i.e., image reconstruction). This is possible mainly because the architecture of these models allows computing the derivative of the prior probability with respect to the input image. The main objective of this thesis is to evaluate the use of these prior models in image reconstruction. The thesis applies a novel model-based optimization method to two image reconstruction problems: image denoising and single-image super-resolution (SISR). The proposed method uses optimization algorithms to find the maximum-a-posteriori estimate, which is defined using the above-mentioned prior models. The experimental results demonstrate that the proposed approach achieves reconstruction performance competitive with current state-of-the-art methods, especially for SISR.
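The MAP optimization at the heart of this approach can be sketched in a few lines of gradient descent. In this hedged example a quadratic smoothness log-prior stands in for the thesis's deep generative prior, and all parameter values are illustrative:

```python
import numpy as np

def map_denoise(y, sigma2, lam, n_iter=300, step=0.02):
    """Gradient descent on the negative log-posterior
        -log p(x|y) = ||x - y||^2 / (2*sigma2) - log p(x) + const.
    A quadratic smoothness log-prior stands in for the thesis's deep
    generative prior; its gradient is lam times a discrete Laplacian."""
    x = y.copy()
    for _ in range(n_iter):
        g_data = (x - y) / sigma2                                 # data-term gradient
        g_prior = lam * (np.roll(x, 1) - 2 * x + np.roll(x, -1))  # log-prior gradient
        x = x - step * (g_data - g_prior)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = map_denoise(noisy, sigma2=0.09, lam=5.0)
```

Replacing the quadratic log-prior with a learned network's log-density gradient, while keeping the same descent loop, is the essential idea the thesis evaluates.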
212

Avaliação de diferentes métodos de reconstrução de imagens no processamento de SPECT cerebral com simulador antropomórfico estriatal / Evaluation of different methods of image reconstruction in brain SPECT processing with striatal anthropomorphic simulator

Ana Carolina Trevisan 21 September 2015
Dopamine is a neurotransmitter synthesized in the dopaminergic neurons of the substantia nigra, with important effects on the central nervous system (CNS), including the sensations of pleasure and motivation. A change in the dopamine transporters (DATs) is characterized by a progressive movement disorder caused by dysfunction of the dopamine-secreting neurons, leading to Parkinson's disease (PD). As this is one of the more common disorders in a spectrum of neurological diseases, further study is needed for better diagnosis. This dissertation investigates the performance of the low-pass Butterworth filter in analytic Filtered Backprojection (FBP) reconstruction and iterative Ordered Subsets Expectation Maximization (OSEM) reconstruction, to ensure the quality of brain SPECT images acquired with an anthropomorphic striatal phantom. In an inter-individual evaluation, four nuclear-medicine specialists graded the images visually, assessing spatial resolution, contrast, noise, and anatomical differentiation of the striatum. For each type of reconstruction there were 49 images of the striatum, varying the values of the covariates exposed by the algorithms (iterations, subsets, order, and cutoff frequency). To obtain consistent results, linear regression and the paired Student's t-test were used. The collected data showed that a reliable cutoff-frequency interval must be used for FBP (0.9 to 1.6) and for OSEM (1.2 to 1.5), while varying the order from 0 to 10 does not influence the image. For OSEM reconstruction, the iteration value (i) and the number of subsets (s) that ensured the best quality were the same as those suggested by the vendor of the algorithm (3i, 8s). OSEM also showed evidence of better image quality than FBP reconstruction. For a quality image, representing a reliable reconstruction and a safe visual analysis, the found ranges of the order and cutoff-frequency covariates of the low-pass Butterworth filter must be used in both FBP and OSEM reconstruction, together with the vendor-suggested iteration and subset values. OSEM showed superior images compared with FBP, but if a service does not yet use this type of algorithm, FBP images within the proposed range also ensure adequate quality.
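The two covariates studied, order and cutoff frequency, fully determine the Butterworth response. A short sketch of the filter in the form commonly used by nuclear-medicine SPECT software (some packages use the 1/sqrt variant; the parameter values here are illustrative, not the study's settings):

```python
import numpy as np

def butterworth_lowpass(freqs, cutoff, order):
    """Butterworth low-pass response in the form common in nuclear-medicine
    SPECT software: H(f) = 1 / (1 + (f/fc)^(2n)). Some packages use the
    1/sqrt variant instead; parameters below are illustrative."""
    return 1.0 / (1.0 + (freqs / cutoff) ** (2 * order))

freqs = np.linspace(0.0, 2.0, 201)                   # spatial-frequency axis
H = butterworth_lowpass(freqs, cutoff=1.2, order=5)  # cutoff at the low end of the OSEM interval
```

Raising the order steepens the roll-off around the cutoff without moving it, which is consistent with the finding that varying the order had little influence once the cutoff lay in a reliable interval.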
213

Detecção de descontinuidades e reconstrução de funções a partir de dados espectrais : filtros splines e metodos iterativos / Detection of discontinuities and reconstruction of functions from spectral data : splines filters and iterative methods

Martinez, Ana Gabriela 02 August 2006
Advisor: Alvaro Rodolfo De Pierro / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Detecting discontinuities from Fourier coefficients is a problem that arises in several areas of application. Important examples are Fourier methods in computed tomography, nuclear magnetic resonance inversion, and conservation-law differential equations. Knowledge of the precise location of the discontinuity points is also essential to obtain exponential convergence of the Fourier series for a piecewise continuous function, avoiding the well-known Gibbs phenomenon. In the work of Wei et al. (1999, 2004), polynomial filters were developed to reconstruct functions from their Fourier coefficients. In the work of Wei et al. (2005), these filters were used to develop fast iterative methods for discontinuity detection. In this thesis we introduce more general spline-based filters that achieve higher accuracy than those earlier filters, together with the corresponding iterative methods for locating the discontinuities. Estimates for the errors are presented, as well as numerical experiments validating the algorithms. We also show that a new and simple method, not requiring any nonlinear solver, performs better than those based on the conjugate Fourier series as in the work of Gelb and Tadmor / Doctorate / Numerical Analysis / Doctor of Applied Mathematics
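The Gibbs phenomenon that accurate edge location is meant to suppress is easy to reproduce: the partial Fourier sum of a square wave overshoots the jump by roughly 9% no matter how many modes are kept. This sketch is illustrative only and does not implement the thesis's spline filters:

```python
import numpy as np

# Partial Fourier sum of the unit square wave f(x) = sign(x) on [-pi, pi):
# only odd sine harmonics contribute, with coefficients 4 / (pi * k).
x = np.linspace(-np.pi, np.pi, 4001)
n_harmonics = 50
partial = np.zeros_like(x)
for k in range(1, n_harmonics + 1, 2):
    partial += (4.0 / (np.pi * k)) * np.sin(k * x)

overshoot = partial.max()  # peaks near the jump at x = 0; Gibbs constant ~ 1.179
```

Adding more harmonics narrows the overshoot but does not shrink it, which is why filtering combined with accurate jump detection is needed to recover fast convergence.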
214

Propagação de pontos caracteristicos e suas incertezas utilizando a transformada unscented / Propagating feature points and its uncertainty using the unscented transform

Dorini, Leyza Elmeri Baldo 20 February 2006
Advisor: Siome Klein Goldenstein / Master's thesis, Universidade Estadual de Campinas, Instituto de Computação / Abstract: Determining reliable correspondences between a pair of images is a fundamental problem in the computer vision community. It is the foundation of several high-level tasks, such as 3D reconstruction and motion analysis. Although there are many feature-tracking algorithms, most of them do not maintain information about the uncertainty of the estimated feature locations. This information is very useful, since large errors can disturb the results of correspondence-based tasks. That is the goal of this work: a new generic framework that augments feature-tracking algorithms so that they also propagate uncertainty information. We use the well-known Kanade-Lucas-Tomasi (KLT) feature tracker to demonstrate the benefits of our method, called Unscented Feature Tracking (UFT). The approach consists of introducing Gaussian Random Variables (GRVs) to represent the feature locations, and of using the Scaled Unscented Transform (SUT) to propagate and combine GRVs. We also describe an improved bundle adjustment procedure as an application, where the cost function takes the information of the GRVs into account and provides better estimates. Experiments with real and synthetic image sequences confirm that UFT improves the quality of the feature-tracking process and is a robust method for detecting and rejecting outliers, which could otherwise severely compromise the results of correspondence-based tasks / Master's / Computer Vision / Master of Computer Science
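The SUT at the core of UFT can be sketched compactly. This is a generic implementation of the transform with illustrative default parameters, not the thesis's exact code; UFT wraps this idea around the KLT tracker:

```python
import numpy as np

def scaled_unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using
    the Scaled Unscented Transform. Parameter values are illustrative
    defaults, not necessarily those used in UFT."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # sigma-point spread
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))   # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])                # transformed points
    mean_y = wm @ Y
    diff = Y - mean_y
    cov_y = (wc[:, None] * diff).T @ diff
    return mean_y, cov_y

# For an affine map the transform is exact, a useful sanity check.
mean = np.array([1.0, 2.0])
cov = np.array([[0.5, 0.1], [0.1, 0.3]])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([0.5, -1.0])
m_out, P_out = scaled_unscented_transform(mean, cov, lambda v: A @ v + b)
```

Because only 2n + 1 function evaluations are needed, the transform is cheap enough to run per tracked feature per frame.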
215

Polygonal models from range scanned trees

Qiu, Li January 2009
3D models of botanical trees are very important in video games, simulation, virtual reality, digital city modeling, and other fields of computer graphics. However, since the early days of computer graphics, modeling trees has been challenging because of the huge dynamic range between their smallest and largest structures and their geometric complexity. Trees are also ubiquitous, which makes modeling them realistically even harder. Current techniques are limited in that they model a tree either in a rule-based way or in an approximate way; these methods emphasize appearance while sacrificing the real structure. Recent developments in range scanners are making 3D acquisition feasible for large and complex objects. This report presents a semi-automatic technique developed for modeling laser-scanned trees. First, the user draws a few strokes on the depth-image plane generated from the dataset. Branches are then extracted with a 2D curve-detection algorithm developed for this work. Those short branches are next connected into the skeleton of the tree by forming a Minimum Spanning Tree (MST). Finally, the geometry of the tree skeleton is produced using allometric rules for branch thickness and branching angles.
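The MST step that links branch fragments into a single skeleton can be sketched with Kruskal's algorithm on a complete Euclidean graph. This is a generic stand-in: the report's edge cost need not be plain Euclidean distance between endpoints:

```python
import numpy as np

def mst_edges(points):
    """Kruskal's algorithm on the complete Euclidean graph of extracted
    branch endpoints -- a generic stand-in for the skeleton-linking step."""
    n = len(points)
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(u):                       # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:              # add cheapest edges that do not close a cycle
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
tree = mst_edges(pts)   # chains the collinear points together
```

An MST is a natural fit here because a botanical tree skeleton is itself acyclic, so the cheapest connected acyclic subgraph of candidate links is a reasonable global estimate of the branching structure.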
216

3D tomographic imaging using ad hoc and mobile sensors

Chin, Renee Ka Yin January 2011
The aim of this research is to explore the integration of ad hoc and mobile sensors into a conventional Electrical Resistance Tomography (ERT) system. This is motivated by the desire to improve the spatial resolution of 3D reconstructed images that are produced using ERT. The feasibility of two approaches, referred to as Extended Electrical Tomography (EET) and Augmented Electrical Tomography (AET), is considered. The approaches are characterized according to the functionality of the sensors on the ad hoc 'pills'. This thesis utilizes spectral and numerical analysis techniques, with the goal of providing a better understanding of reconstruction limitations, including quality of measurements, sensitivity levels and spatial resolution. These techniques are applied so that an objective evaluation can be made, without depending heavily on visual inspection of a selection of reconstructed images when evaluating the performance of different set-ups. In EET, the sensors on the pills are used as part of the ERT electrode system. Localized voltage differences are measured on a pair of electrodes that are located on an ad hoc pill. This extends the number of measurements per data set and provides information that was previously unobtainable using conventional electrode arrangements. A standalone voltage measurement system is used to acquire measurements that are taken using the internal electrodes. The system mimics the situation that is envisaged for a wireless pill, specifically that it has a floating ground and is battery-powered. For the present exploratory purposes, the electronic hardware is located remotely and the measured signal is transmitted to the PC through a cable. The instrumentation and data acquisition circuits are separated through opto-isolators, which essentially isolates the two systems.
Using a single pill located in the centre of a vessel furnished with 16 electrodes arranged in a single plane, spectral analysis indicates that 15 of the 16 extended measurements acquired using the adjacent current injection strategy are unique. Improvement is observed for both the sensitivity and spatial resolution for the voxels in the vicinity of the ad hoc pill when comparing the EET approach with the conventional ERT approach. This shows the benefit of the EET approach. However, visual inspection of reconstructed images reveals no apparent difference between images produced using a regular and extended dataset. Similar studies are conducted for cases considering the opposite strategy, different position and orientation of the pill, and the effect of using multiple pills. In AET, the sensors on the ad hoc pills are used as conductivity probes. Localized conductivity measurements provide conductivity values of the voxels in a discretized mesh of the vessel, which reduces the number of unknowns to be solved during reconstruction. The measurements are incorporated into the inverse solver as prior information. The Gauss-Newton algorithm is chosen for implementation of this approach because of its non-linear nature. Little improvement is seen with the inclusion of one localized conductivity measurement. The effect on the neighbouring voxels is insignificant and there is a lack of control over how the augmented measurement influences the solution of its neighbouring voxels. This is the first time that measurements using ad hoc and 'wireless' sensors within the region of interest have been incorporated into an electrical tomography system.
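The Gauss-Newton solver chosen for the AET approach has a simple generic shape. The sketch below shows it on a toy exponential-fitting problem, not the thesis's ERT forward model; prior information such as AET's localized conductivity measurements would enter as additional rows of the residual and Jacobian:

```python
import numpy as np

def gauss_newton(residual, jac, x0, n_iter=50):
    """Plain Gauss-Newton: at each step, solve the linearized
    least-squares problem J dx = -r for the update dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jac(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

# Toy problem: recover the rate a in y = exp(a * t) from noiseless samples.
t = np.linspace(0.0, 1.0, 10)
y = np.exp(0.5 * t)
a_hat = gauss_newton(
    residual=lambda x: np.exp(x[0] * t) - y,
    jac=lambda x: (t * np.exp(x[0] * t))[:, None],
    x0=[0.1],
)
```

The non-linear nature of the iteration is what makes it possible to fold in measured voxel conductivities as constraints, which a single linear back-projection step could not accommodate.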
217

Restored interlaced volumetric imaging increases image quality and scanning speed during intravital imaging in living mice / インターレース撮像データからの立体情報復元手法開発によるマウス生体イメージングの画質およびスキャンスピードの向上

Sogabe, Maina 23 March 2020
Kyoto University / New-system course doctorate, Doctor of Medical Science / Degree no. 甲第22376号 (医博第4617号; library record 新制||医||1043, Main Library) / Graduate School of Medicine, Kyoto University / Examiners: Prof. Michiyuki Matsuda (chair), Prof. Yasunori Hayashi, Prof. Koji Eto / Conferred under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
218

Řízení optického stolku interferenčního mikroskopu na základě obrazové fáze / Control of an interference-microscope optical stage based on the image phase

Kvasnica, Lukáš January 2008
Digital holographic microscopy is an interferometric imaging technique whose principle is off-axis image-plane holography. This technique makes it possible to reconstruct both the image intensity and the image phase from the output interference signal, and the reconstruction can be carried out from a single image-plane hologram, which opens the possibility of real-time image reconstruction. The speed of the reconstruction depends on the detection and computing processes. The aim of this diploma thesis is to develop user software for controlling the detection camera and for reconstructing the image-plane hologram. The goal was to achieve the highest possible number of image reconstructions per unit time, with maximum utilization of the data transfer between the camera and the computer. A further aim of the thesis is the stabilization of the optical table position. The stabilization method is based on the image-phase information, which closes the feedback loop between the reconstructed image phase and a piezoelectric actuator placed inside the optical table. Experimental results that prove the functionality of the stabilization are presented.
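The phase-based stabilization described here boils down to a feedback control loop. A minimal proportional-integral (PI) controller driving a simulated actuator illustrates the idea; the gains and the first-order actuator model are invented for the sketch, not taken from the thesis:

```python
import numpy as np

class PIController:
    """Minimal PI loop of the kind that can hold an optical table at a
    phase set-point. Gains and the actuator model are illustrative."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

setpoint = 1.0      # target reconstructed image phase (arbitrary units)
pos = 0.0           # simulated actuator position / measured phase
ctrl = PIController(kp=0.5, ki=0.2, dt=0.01)
for _ in range(3000):
    u = ctrl.update(setpoint - pos)   # error from the reconstructed phase
    pos += 0.05 * u                   # simple integrating actuator response
```

In the real system the "measured phase" comes from each reconstructed hologram, so the achievable loop rate is bounded by the reconstruction throughput the thesis works to maximize.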
219

Optimalizační úlohy s pravděpodobnostními omezeními / Optimization problems with chance constraints

Drobný, Miloslav January 2018
In this thesis the author studies optimization problems with chance (probabilistic) constraints, specifically situations in which the probability distribution of the underlying random effect is not known. Such problems can be approached via the methods of optimistic and pessimistic scenarios, in which one selects from a given family of candidate probability distributions either the most favourable variant or, conversely, the least favourable one. Under certain assumptions, chance-constrained optimization problems formulated with these approaches were transformed into simpler, solvable optimization problems. The results obtained were applied to real data from portfolio optimization and image processing.
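A standard example of such a transformation is the Gaussian case, where the chance constraint has an exact deterministic equivalent. The sketch below verifies that equivalence by Monte Carlo; the distribution parameters and the fixed decision vector are invented for illustration:

```python
import numpy as np

# For a Gaussian random vector a ~ N(mu, Sigma), the chance constraint
# P(a @ x <= b) >= 1 - eps has the exact deterministic equivalent
#     mu @ x + z_{1-eps} * sqrt(x @ Sigma @ x) <= b.
# Monte Carlo check of that equivalence (values are illustrative):
rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])
x = np.array([0.5, 0.5])
eps = 0.1
z = 1.2815515655446004            # standard-normal quantile at 1 - eps = 0.9

b = mu @ x + z * np.sqrt(x @ Sigma @ x)   # tightest feasible right-hand side
a = rng.multivariate_normal(mu, Sigma, size=200_000)
p_hat = float(np.mean(a @ x <= b))        # should be close to 0.9
```

The optimistic and pessimistic scenario approaches studied in the thesis then amount to taking, over a family of candidate (mu, Sigma) pairs, the loosest or the tightest such deterministic constraint.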
220

Data and image domain deep learning for computational imaging

Ghani, Muhammad Usman 22 January 2021
Deep learning has overwhelmingly impacted post-acquisition image-processing tasks; however, there is increasing interest in more tightly coupled computational imaging approaches, where models, computation, and physical sensing are intertwined. This dissertation focuses on how to leverage the expressive power of deep learning in image reconstruction. We use deep learning in both the sensor data domain and the image domain to develop new fast and efficient algorithms that achieve superior-quality imagery. Metal artifacts are ubiquitous in both security and medical applications. They can greatly limit subsequent object delineation and information extraction from the images, restricting their diagnostic value. This problem is particularly acute in the security domain, where there is great heterogeneity in the objects that can appear in a scene, highly accurate decisions must be made quickly, and the processing time is highly constrained. Motivated primarily by security applications, we present a new deep-learning-based metal artifact reduction (MAR) approach that tackles the problem in the sensor data domain. We treat the observed data corresponding to dense metal objects as missing data and train an adversarial deep network to complete the missing data directly in the projection domain. The completed projection data is then used with an efficient conventional image reconstruction algorithm to reconstruct an image intended to be free of artifacts. Conventional image reconstruction algorithms assume that high-quality data is present on a dense and regular grid. Using conventional methods when these requirements are not met produces images filled with artifacts that are difficult to interpret. In this context, we develop data-domain deep learning methods that attempt to enhance the observed data to better meet the assumptions underlying the fast conventional analytical reconstruction methods.
By focusing learning in the data domain in this way and coupling the result with existing conventional reconstruction methods, high-quality imaging can be achieved in a fast and efficient manner. We demonstrate results on four different problems: i) low-dose CT, ii) sparse-view CT, iii) limited-angle CT, and iv) accelerated MRI. Image-domain prior models have been shown to improve the quality of reconstructed images, especially when data are limited. A novel principled approach is presented allowing the unified integration of both data-domain and image-domain priors for improved image reconstruction. The consensus equilibrium framework is extended to integrate physical sensor models, data models, and image models. To achieve this integration, the conventional image variables used in consensus equilibrium are augmented with variables representing data-domain quantities. The overall result produces combined estimates of both the data and the reconstructed image that are consistent with the physical models and prior models being utilized. The prior models used in both the image and data domains in this work are created using deep neural networks. The superior quality allowed by incorporating both data-domain and image-domain prior models is demonstrated for two applications: limited-angle CT and accelerated MRI. A major question that arises in the use of neural networks, and in particular deep networks, is their stability: that is, whether performance remains robust when the examples seen in the application environment differ from those seen during training. We perform an empirical stability analysis of the data-domain and image-domain deep learning methods developed for limited-angle CT reconstruction. We consider three types of perturbations to test stability: adversarially optimized, random, and structural perturbations.
Our empirical analysis reveals that the data-domain learning approach proposed in this dissertation is less susceptible to perturbations as compared to the image-domain post-processing approach. This is a very encouraging result and strongly supports the main argument of this dissertation that there is value in using data-domain learning and it should be a part of our computational imaging toolkit.
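The projection-domain completion that the dissertation learns adversarially can be contrasted with the classical baseline it is meant to improve on: interpolating across the metal trace in each view. A hedged sketch of that baseline (the function name and data layout are invented for illustration):

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Classical baseline for projection-domain metal-artifact reduction:
    treat detector bins hit by metal as missing and fill them by 1-D
    linear interpolation along each view. The dissertation replaces this
    step with an adversarial network; this only illustrates the
    data-domain completion idea."""
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):           # each projection view
        missing = metal_mask[v]
        if missing.any() and (~missing).any():
            out[v, missing] = np.interp(
                cols[missing], cols[~missing], sinogram[v, ~missing])
    return out

# Toy check: rows vary linearly, so interpolation restores them exactly.
sino = np.tile(np.linspace(0.0, 1.0, 10), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 4:6] = True
corrupt = sino.copy()
corrupt[mask] = 0.0
filled = inpaint_metal_trace(corrupt, mask)
```

On real sinograms the missing trace is wide and the surrounding data are far from linear, which is exactly where a learned completion network can outperform interpolation while still feeding the same fast conventional reconstruction.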
