141

VARIATIONAL METHODS FOR IMAGE DEBLURRING AND DISCRETIZED PICARD'S METHOD

Money, James H. 01 January 2006 (has links)
In this digital age, it is more important than ever to have good methods for processing images. We focus on the removal of blur from a captured image, known as the image deblurring problem. In particular, we make no assumptions about the blur itself, which makes this a blind deconvolution problem. We approach the problem by minimizing an energy functional that combines a total variation norm with a fidelity constraint. In particular, we extend the work of Chan and Wong to use a reference image in the computation. Using the shock filter as a reference image, we produce results superior to existing methods, including good results on images with non-black backgrounds and images where the blurring function is not centro-symmetric. We also consider a general Lp norm for the fidelity term and compare different values of p. Using an analysis similar to that of Strong and Chan, we derive an adaptive scale method for the recovery of the blurring function. We consider two numerical methods in this dissertation as well. The first is an extension of Picard's method for PDEs to the discrete case. We compare the results to the analytical Picard method, showing that the only difference is the use of approximate rather than exact derivatives. We relate the method to existing finite difference schemes, including the Lax-Wendroff method, derive the stability constraints for several linear problems, and show that the stability region grows. We conclude with several examples of the method, demonstrating that the computational savings are substantial. The second method is a black-box implementation of a method for solving the generalized eigenvalue problem. Building on the work of Golub and Ye, we implement a robust routine, compare it against JDQZ and LOBPCG, and show that it performs well in numerical testing.
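To make the variational setup concrete, here is a minimal sketch of one half of such an alternating scheme: TV-regularized deblurring with a fixed kernel estimate. This is not the author's implementation; the smoothing constant eps, step size tau, and weight lam are illustrative assumptions.

    import numpy as np
    from scipy.signal import fftconvolve

    def tv_grad(u, eps=1e-6):
        # Gradient of the smoothed total variation norm: -div(grad u / |grad u|).
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

    def deblur(f, k, lam=0.05, tau=0.1, iters=200):
        # Gradient descent on E(u) = 0.5*||k * u - f||^2 + lam * TV(u),
        # with the blur kernel k held fixed.
        u = f.copy()
        k_flip = k[::-1, ::-1]
        for _ in range(iters):
            resid = fftconvolve(u, k, mode="same") - f
            u -= tau * (fftconvolve(resid, k_flip, mode="same") + lam * tv_grad(u))
        return u

In a blind scheme of the Chan-Wong type, an analogous step would update the kernel with the image held fixed, alternating until convergence.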
142

Contribution to fluorescence microscopy, 3D thick samples deconvolution and depth-variant PSF

Maalouf, Elie 20 December 2010 (has links) (PDF)
The 3-D fluorescence microscope has become the method of choice in the biological sciences for the study of living cells. However, data acquired with conventional 3-D fluorescence microscopy are not quantitatively significant because of distortions induced by the optical acquisition process. Reliable measurements require the correction of these distortions. Knowing the instrument's impulse response, also known as the PSF, one can invert the convolution induced by the microscope, a process known as "deconvolution". However, when the system response is not invariant across the observation field, the classical algorithms can introduce large errors into the results. In this thesis we propose a new approach, easily adapted to any classical deconvolution algorithm, direct or iterative, that bypasses the PSF non-invariance problem without any modification to the underlying algorithm. Based on the hypothesis that the error in an image restored under the invariance assumption is minimal near the position of the PSF used, the EMMA (Evolutive Merging Masks Algorithm) blends multiple deconvolutions, each computed under the invariance assumption, using a specific set of merging masks. In order to obtain a sufficient number of measured PSFs at various depths for a better restoration using EMMA (or any other depth-variant deconvolution algorithm), we propose a 3D PSF interpolation algorithm based on image moments theory, using Zernike polynomials as the decomposition basis. The known PSFs are decomposed into sets of Zernike moments, each moment's variation with depth is fitted by a polynomial function, and the resulting functions are then used to interpolate the Zernike moments of the needed PSF, from which the interpolated PSF is reconstructed.
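The interpolation idea can be sketched generically: project each measured PSF onto a fixed orthonormal basis (Zernike polynomials in the thesis), fit each coefficient's variation with depth by a polynomial, and evaluate at the target depth. The basis matrix and polynomial degree below are assumptions, not the thesis's actual parameters.

    import numpy as np

    def fit_depth_interpolator(psf_stack, depths, basis, deg=3):
        # psf_stack: (n_depths, h, w) measured PSFs at known depths.
        # basis: (n_coeffs, h*w) orthonormal rows, e.g. sampled Zernike polynomials.
        coeffs = psf_stack.reshape(len(depths), -1) @ basis.T  # project each PSF
        # Fit each moment's variation with depth by a polynomial.
        polys = [np.polynomial.Polynomial.fit(depths, c, deg) for c in coeffs.T]

        def interp_psf(z):
            c = np.array([p(z) for p in polys])  # interpolated moments at depth z
            return (c @ basis).reshape(psf_stack.shape[1:])

        return interp_psf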
143

High contrast limitations of slicer based integral field spectrographs

Salter, Graeme S. January 2010 (has links)
The viability of using a slicer-based integral field spectrograph (IFS) for high contrast observations has been under scrutiny due to the belief that the one-dimensional coherence that persists along the slice to the point of sampling at the detector creates secondary speckles that do not share the characteristics of normal speckles, and therefore cannot be calibrated out. It had also been assumed that a suitably low differential wavefront error when moving from slice to slice was not guaranteed by design. For these reasons, slicer-based IFSs were not selected for the current generation of planet-finding instruments. As part of the EPICS (Exo Planet Imaging Camera and Spectrograph for the E-ELT) design study, it was decided that slicers should be re-investigated, because results from on-sky observations suggested these limitations did not exist. The purpose of this thesis was to determine whether the concerns mentioned above are valid, and therefore to answer the question: would implementing a slicer-based integral field spectrograph limit the achievable contrast of an instrument designed for the direct detection of exoplanets? Chapter 1 gives a brief introduction to the field of exoplanet research. Chapter 2 describes the noise limiting direct detection of exoplanets and the ways to get around it. Chapter 3 gives an overview of the two types of IFS under investigation by the EPICS consortium. Chapter 4 looks at the details of the EPICS instrument and the resulting IFS design study. Chapter 5 presents simulations aimed at achieving better contrasts via post-processing methods and accurate data reduction, as well as simulations of slicer-based integral field spectrographs. Experimental tests using a slicer and a pre-optics setup designed to simulate the limiting noise are described in Chapter 6. Chapter 7 looks at using SINFONI for high contrast observations, and Chapter 8 details the conclusions drawn from the work presented in this thesis, as well as possible extensions to it. The work performed in this thesis dispels the concerns about continued one-dimensional coherence up to the detector and suggests that slicer-based integral field spectrographs do not inherently limit the achievable contrast; results from experiments fit well with the requirements for EPICS to achieve its goals, and simulations also support the idea that secondary speckle noise should not be an issue for a slicer-based IFS. A slicer-based IFS is therefore a viable option for the EPICS instrument.
144

Pokročilé metody zpracování signálů v zobrazování perfúze magnetickou rezonancí / Advanced signal processing methods in dynamic contrast enhanced magnetic resonance imaging

Bartoš, Michal January 2015 (has links)
This dissertation deals with dynamic contrast-enhanced magnetic resonance imaging (perfusion MRI), a powerful diagnostic tool, above all in oncology. Once a time sequence of T1-weighted images recording the distribution of a contrast agent in the body has been acquired, the data-processing phase begins, and it is this phase that is the subject of the dissertation. The theoretical foundations of the physiological models and of the magnetic resonance acquisition models are presented, together with the complete processing chain needed to produce images of estimated perfusion and microcirculation parameters in tissue. The dissertation is a collection of the author's published works contributing to the development of perfusion-imaging methodology, together with the necessary theoretical analysis.
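The abstract does not name the physiological model used. As a hedged illustration only, perfusion parameter estimation in DCE-MRI commonly fits a pharmacokinetic model such as the standard Tofts model to each voxel's contrast agent concentration curve; the function and starting values below are assumptions, not the dissertation's method.

    import numpy as np
    from scipy.optimize import curve_fit

    def tofts(t, Ktrans, kep, aif):
        # Standard Tofts model: C_t(t) = Ktrans * (AIF convolved with exp(-kep*t)).
        dt = t[1] - t[0]
        return Ktrans * np.convolve(aif, np.exp(-kep * t))[:len(t)] * dt

    # Per-voxel fit, given time samples t, a measured arterial input function aif,
    # and a tissue concentration curve c_voxel:
    # popt, _ = curve_fit(lambda t, K, kep: tofts(t, K, kep, aif),
    #                     t, c_voxel, p0=(0.1, 0.5), bounds=(0, np.inf))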
145

Déconvolution et séparation d'images hyperspectrales en microscopie / Deconvolution and separation of hyperspectral images : applications to microscopy

Henrot, Simon 27 November 2013 (has links)
Hyperspectral imaging refers to the acquisition of spatial images at many spectral bands, e.g. in microscopy. Processing such data is often challenging due to the blur caused by the observation system, mathematically expressed as a convolution; a deconvolution step is thus necessary to restore the original image. Image restoration falls into the class of inverse problems, as opposed to the direct problem of modeling the image degradation process, which is treated in Part 1 of the thesis. Another inverse problem with many applications in hyperspectral imaging consists in extracting the pure materials making up the image, called endmembers, and their fractional contributions to the data, called abundances. This problem is termed spectral unmixing, and its resolution accounts for the nonnegativity of the endmembers and abundances. Part 2 presents algorithms designed to efficiently solve the hyperspectral image restoration problem, formulated as the minimization of a composite criterion. The methods are based on a common framework that accounts for several a priori assumptions on the solution, including a nonnegativity constraint and the preservation of edges in the image. The performance of the proposed algorithms is demonstrated on fluorescence confocal images of bacterial biosensors. Part 3 deals with the spectral unmixing problem from a geometrical viewpoint. A sufficient condition on the abundance coefficients for the identifiability of the endmembers is proposed. We derive and study a joint observation and mixing model and demonstrate the interest of performing deconvolution as a preliminary step to spectral unmixing on confocal Raman microscopy data.
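The geometric identifiability analysis cannot be reproduced from the abstract, but the unmixing step itself has a standard form: with the endmember spectra in hand, the abundances solve a nonnegative least squares problem per pixel. A minimal sketch, assuming the endmembers are known:

    import numpy as np
    from scipy.optimize import nnls

    def unmix(cube, endmembers):
        # cube: (h, w, L) hyperspectral image; endmembers: (L, p) pure spectra.
        # Returns (h, w, p) nonnegative abundance maps, one NNLS fit per pixel.
        h, w, L = cube.shape
        abundances = np.array([nnls(endmembers, px)[0]
                               for px in cube.reshape(-1, L)])
        return abundances.reshape(h, w, -1)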
146

Time-Varying Modeling of Glottal Source and Vocal Tract and Sequential Bayesian Estimation of Model Parameters for Speech Synthesis

January 2018 (has links)
Speech is generated by articulators acting on a phonatory source. Identification of this phonatory source and of the articulatory geometry are individually challenging and ill-posed problems, called speech separation and articulatory inversion, respectively. There is a trade-off between the decomposition and the recovered articulatory geometry because multiple articulatory configurations can map to the same speech output. Moreover, if measurements are obtained only from a microphone sensor, they offer no invasive insight, adding a further challenge to an already difficult problem. A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation. The unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
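The state-space formulation and parameter priors are specific to the thesis; what can be sketched generically is the sequential Monte Carlo machinery it relies on, here as a bootstrap particle filter with user-supplied transition and likelihood functions (all names are placeholders):

    import numpy as np

    def bootstrap_pf(y, propagate, loglik, x0, n=500, seed=None):
        # y: sequence of observations; propagate(particles, rng) draws from the
        # state transition model; loglik(obs, particles) scores each particle.
        rng = np.random.default_rng(seed)
        particles = np.repeat(x0[None, :], n, axis=0)
        estimates = []
        for obs in y:
            particles = propagate(particles, rng)          # predict
            logw = loglik(obs, particles)                  # weight
            w = np.exp(logw - logw.max())
            w /= w.sum()
            particles = particles[rng.choice(n, n, p=w)]   # resample
            estimates.append(particles.mean(axis=0))       # filtered estimate
        return np.array(estimates)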
147

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Lwando Orbet Kondlo January 2010 (has links)
The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.
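The likelihood in such a setting involves the density of the convolved distribution, which has no closed form. A minimal sketch of the MLE under additive Gaussian measurement error, assuming the error scale sigma is known:

    import numpy as np
    from scipy import stats, integrate, optimize

    def convolved_pdf(w, b, xm, sigma):
        # Density of W = X + e, X ~ Pareto(shape b, scale xm), e ~ N(0, sigma^2),
        # via numerical evaluation of the convolution integral.
        f = lambda x: stats.pareto.pdf(x, b, scale=xm) * stats.norm.pdf(w - x, 0, sigma)
        return integrate.quad(f, xm, np.inf)[0]

    def fit_mle(sample, sigma, start=(2.0, 1.0)):
        def nll(p):
            b, xm = p
            if b <= 0 or xm <= 0:
                return np.inf
            return -sum(np.log(convolved_pdf(w, b, xm, sigma) + 1e-300)
                        for w in sample)
        return optimize.minimize(nll, start, method="Nelder-Mead")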
148

Cepstral Deconvolution Method For Measurement Of Absorption And Scattering Coefficients Of Materials

Aslan, Gokhan 01 January 2007 (has links) (PDF)
Several methods have been developed to measure the absorption and scattering coefficients of materials. In this study, a new method based on a cepstral deconvolution technique is proposed. A reverberation room method recently standardized by ISO (ISO 17497-1) is taken as the reference for the measurements. Several measurements were conducted in a physically scaled reverberation room, and the results were evaluated according to two methods: the method given in the standard and the cepstral deconvolution method. The two methods differ in how they estimate the specular part of the room impulse responses, which is essential for determining the scattering coefficients. In the standard method, the specular part is found by synchronous averaging of the impulse responses; the cepstral deconvolution method instead uses cepstral analysis. Results obtained by the two approaches are compared for five different test materials. Both methods gave almost the same absorption coefficients. On the other hand, the cepstral deconvolution method yielded lower scattering coefficient values than the ISO method.
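The abstract does not detail how the cepstral analysis isolates the specular part; as a hedged illustration, the basic building blocks are the real cepstrum and a low-quefrency lifter (the cutoff is an illustrative parameter, not the study's):

    import numpy as np

    def real_cepstrum(h):
        # Inverse FFT of the log magnitude spectrum of the impulse response.
        return np.fft.irfft(np.log(np.abs(np.fft.rfft(h)) + 1e-12), n=len(h))

    def lifter_lowpass(cep, cutoff):
        # Keep only low-quefrency components, which capture the slowly
        # varying part of the spectrum.
        out = np.zeros_like(cep)
        out[:cutoff] = cep[:cutoff]
        out[-cutoff:] = cep[-cutoff:]
        return out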
149

Correction for partial volume effects in PET imaging / Korrektion för partiella volymseffekter på PET-bilder

Wallstén, Elin January 2011 (has links)
The limited spatial resolution of positron emission tomography (PET) images makes it difficult to measure the correct uptake in tumours. This is called the partial volume effect (PVE) and can lead to serious bias, especially for small tumours. Correct uptake values are valuable for evaluating therapies and can be used as a tool for treatment planning. The purpose of this project was to evaluate two methods for compensating for PVE; a method for tumour delineation in PET images was also evaluated. The methods were applied to images reconstructed with two algorithms, VUE-point HD (VP HD) and VP SharpIR. The evaluation used a phantom with fillable spheres that simulated tumours of different sizes. The first PVE compensation method was an iterative deconvolution method that partially restores the spatial resolution of the images; tumour uptake was measured in volumes of interest (VOIs) based on a percentage of the maximum voxel value. The second method used recovery coefficients (RCs) as correction factors for the measured activity concentrations. These were calculated by convolving binary images of the tumours with the point spread function (PSF). The binary images were obtained both from computed tomography (CT) images and from PET images, using a threshold method for tumour delineation based on both tumour activity and background activity, which was also compared with a conventional threshold technique. The results showed that images reconstructed with VP SharpIR can be used for activity concentration measurements with good precision for tumours larger than 13 mm in diameter; smaller tumours benefit from RC correction. The threshold method for tumour delineation gave substantially better results than the conventional threshold method.
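The RC correction in the second method can be sketched in a few lines: convolve a binary, unit-valued tumour mask with the PSF (here assumed Gaussian with known FWHM, an illustrative simplification) and take the mean of the smeared image inside the object as the recovery coefficient:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def recovery_coefficient(mask, fwhm_voxels):
        # mask: boolean tumour image. Smear the unit-valued object with a
        # Gaussian PSF and measure what fraction of the true value survives
        # inside the object.
        sigma = fwhm_voxels / 2.355          # FWHM -> standard deviation
        smeared = gaussian_filter(mask.astype(float), sigma)
        return smeared[mask].mean()

    # corrected_uptake = measured_uptake / recovery_coefficient(mask, fwhm)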
