51

Blind image deconvolution : nonstationary Bayesian approaches to restoring blurred photos

Bishop, Tom E. January 2009 (has links)
High quality digital images have become pervasive in modern scientific and everyday life — in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem — an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then variational Bayesian (VB) distribution approximation; and finally, simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blur. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
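The LSI convolutional model that Part I builds towards, and the ill-posedness that motivates the Bayesian treatment, can be made concrete in a few lines. The following is a minimal sketch in Python/NumPy (illustrative only, not the thesis's method: the blur here is known and the restoration is a simple Tikhonov-regularised inverse, whereas the thesis treats the blur as unknown):

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalised 2-D Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur_and_restore(image, psf, noise_sigma=0.01, reg=1e-2):
    """Simulate y = h * x + n (the LSI model), then invert it.

    With reg = 0 the division amplifies noise at frequencies where H is
    small: the ill-posedness that prior constraints must counter.
    """
    H = np.fft.fft2(psf, s=image.shape)
    y = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    y += noise_sigma * np.random.randn(*image.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + reg)
    return y, np.real(np.fft.ifft2(X))

x = np.random.rand(64, 64)  # stand-in for a sharp image
blurred, restored = blur_and_restore(x, gaussian_psf(9, 2.0))
```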
52

Power Analysis in Applied Linear Regression for Cell Type-Specific Differential Expression Detection

Glass, Edmund 01 January 2016 (has links)
The goal of many human disease-oriented studies is to detect molecular mechanisms that differ between healthy controls and patients. Yet commonly used gene expression measurements from any tissue suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. However, this variability may actually be advantageous, as heterogeneous gene expression measurements coupled with cell counts may provide deeper insights into gene expression differences at the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression. Yet they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect the performance of linear regression. In this dissertation we specifically address the parameter space involved in the most rigorous use of linear regression to estimate cell type-specific differential expression and report under which conditions significant detection is probable. We define the parameters affecting the sensitivity of cell type-specific differential expression estimation as follows: sample size, cell type-specific proportion variability, mean squared error (spread of observations around the linear regression line), conditioning of the cell proportions predictor matrix, and the size of the actual cell type-specific differential expression. Each parameter, with the exception of cell type-specific differential expression (effect size), affects the variability of cell type-specific differential expression estimates. We have developed a power-analysis approach to cell-type-by-cell-type and genomic-site-by-site differential expression detection which relies upon Welch’s two-sample t-test, factors in differences in cell type-specific expression estimate variability, and reduces false discovery. To this end we have published an R package, LRCDE, available on GitHub (http://www.github.com/ERGlass/lrcde.dev), which outputs observed statistics of cell type-specific differential expression, including the two-sample t-statistic, its p-value, and power calculated from the two-sample t-statistic on a genomic site-by-site basis.
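Since each listed parameter acts on power through the spread of the per-group estimates, a Monte Carlo power calculation makes the approach concrete. The sketch below (a generic Python/SciPy illustration, not the LRCDE implementation; all parameter values are hypothetical) estimates the power of Welch's two-sample t-test:

```python
import numpy as np
from scipy import stats

def welch_power(effect_size, sd1, sd2, n1, n2, alpha=0.05, n_sim=5000, seed=0):
    """Empirical power of Welch's two-sample t-test via simulation.

    effect_size plays the role of the true cell type-specific expression
    difference; sd1/sd2 the spread of the per-group estimates.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd1, n1)          # e.g. control-group estimates
        b = rng.normal(effect_size, sd2, n2)  # e.g. case-group estimates
        _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's test
        hits += p < alpha
    return hits / n_sim

# Power grows with sample size and effect size, shrinks with spread.
print(welch_power(effect_size=1.0, sd1=1.0, sd2=1.5, n1=15, n2=15))
```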
53

Single-image full-focus reconstruction using depth-based deconvolution

Salahieh, Basel, Rodriguez, Jeffrey J., Stetson, Sean, Liang, Rongguang 30 September 2016 (has links)
In contrast with traditional extended depth-of-field approaches, we propose a depth-based deconvolution technique that realizes the depth-variant nature of the point spread function of an ordinary fixed-focus camera. The developed technique brings a single blurred image to focus at different depth planes which can be stitched together based on a depth map to output a full-focus image. Strategies to suppress the deconvolution's ringing artifacts are implemented on three levels: block tiling to eliminate boundary artifacts, reference maps to reduce ringing initiated by sharp edges, and depth-based masking to mitigate artifacts raised by neighboring depth-transition surfaces. The performance is validated numerically for planar and multidepth objects. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
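The pipeline (deconvolve the single capture once per depth plane, then stitch the planes using the depth map) can be sketched as follows. This is a minimal Python/NumPy illustration with a plain Wiener filter and Gaussian PSFs standing in for the calibrated depth-variant PSFs; the paper's ringing-suppression strategies are omitted:

```python
import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_deconv(img, psf, nsr=1e-2):
    """Frequency-domain Wiener filter with a scalar noise-to-signal ratio."""
    H = np.fft.fft2(psf, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) / (np.abs(H)**2 + nsr)))

def full_focus(blurred, depth_map, sigma_per_depth):
    """Bring the image to focus at each depth plane, then stitch."""
    out = np.zeros_like(blurred)
    for d, sigma in sigma_per_depth.items():
        plane = wiener_deconv(blurred, gaussian_psf(15, sigma))
        mask = depth_map == d          # pixels whose depth label is d
        out[mask] = plane[mask]
    return out
```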
54

Deblurring using two images with different exposure times

Sabo, Jozef January 2011 (has links)
In the presented work we study methods of image deblurring that use two images of the same scene with different exposure times, focusing on two main categories of approach, the so-called deconvolution and non-deconvolution methods. We present the theoretical background of both categories and evaluate their limitations and advantages. We dedicate one section to comparing both method categories on test data (images), using our MATLAB implementation of the methods. We also compare the effectiveness of these methods against the results of a selected single-image denoising algorithm. We do not focus on the computational efficiency of the algorithms and work with single-channel images only.
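The deconvolution branch of these methods rests on one idea: the denoised short-exposure frame approximates the sharp scene, so the blur kernel can be estimated from the pair before deblurring the long exposure. A minimal sketch of that kernel-estimation step in Python/NumPy (a generic illustration under the model B = K * S, not the particular algorithms compared in the thesis):

```python
import numpy as np

def estimate_kernel(blurred, sharp_proxy, reg=1e-2, ksize=21):
    """Estimate a blur kernel K from B = K * S by regularised spectral division.

    sharp_proxy is the denoised short-exposure frame; reg suppresses
    division blow-up where the proxy's spectrum is small.
    """
    B = np.fft.fft2(blurred)
    S = np.fft.fft2(sharp_proxy)
    K = np.real(np.fft.ifft2(np.conj(S) * B / (np.abs(S)**2 + reg)))
    K = np.fft.fftshift(K)
    cy, cx = K.shape[0] // 2, K.shape[1] // 2
    k = K[cy - ksize // 2: cy + ksize // 2 + 1,
          cx - ksize // 2: cx + ksize // 2 + 1]
    k = np.clip(k, 0.0, None)          # kernels are non-negative
    return k / k.sum()                 # ...and sum to one
```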
55

Three-dimensional image reconstruction in digital holographic microscopy

Týč, Matěj Unknown Date (has links)
This thesis deals with 3D image processing for digital holographic microscopy, namely numerical refocusing. This method makes it possible to perform mathematically accurate defocus correction on an image of a sample captured away from the sample plane; previously, it was applicable only to images made using a coherent illumination source. It has been generalized here to a form in which it is also applicable to devices that use incoherent (non-monochromatic or extended) illumination sources. Another achievement presented concerns hologram processing: the advanced hologram processing method enables more data to be obtained from a single hologram, mainly concerning the precision of the measured quantities; normally, one would have to capture multiple holograms to get those. Both methods have been verified experimentally.
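For the coherent case that the generalization starts from, numerical refocusing is the textbook angular spectrum method: multiply the spectrum of the reconstructed complex field by a distance-dependent transfer function. A minimal Python/NumPy sketch (illustrative only; the thesis's contribution is extending refocusing beyond coherent illumination):

```python
import numpy as np

def refocus(field, dz, wavelength, dx):
    """Propagate a complex wavefield by dz via the angular spectrum method.

    field: complex amplitude sampled on a grid with pixel pitch dx.
    The multiplication shifts focus mathematically, with no optics moved.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    H *= arg > 0                       # discard evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```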
56

Cost functions for the estimation of acoustic filters in reverberant mixtures

Benichoux, Alexis 14 October 2013 (has links)
This work is focused on the processing of multichannel and multisource audio signals. From a mixture of several audio sources recorded in a reverberant room, we wish to estimate the acoustic responses (a.k.a. mixing filters) between the sources and the microphones. To solve this inverse problem, one needs to take into account additional hypotheses on the nature of the acoustic responses. Our approach consists, first, in identifying mathematically the hypotheses on the acoustic responses necessary for their estimation and, second, in building cost functions and algorithms to estimate them effectively. First, we considered the case where the source signals are known. We developed a method to estimate the acoustic responses based on a convex regularisation which exploits both the temporal sparsity of the filters and their exponentially decaying envelope. Real-world recordings confirmed the effectiveness of this method. Then, we considered the case where the source signals are unknown but statistically independent. The mixing filters can then be estimated, up to a permutation and scaling ambiguity at each frequency, by independent component analysis techniques. We provide an exhaustive study of the theoretical guarantees under which the permutation ambiguity can be resolved when the filters are sparse in the temporal domain. Finally, we began to analyse the hypotheses under which the filter estimation algorithm could be extended to the joint estimation of the source signals and the filters, and showed a first unexpected negative result: in the context of blind sparse deconvolution, for a quite large family of regularised cost functions, the global minimum is trivial. Additional constraints on the source signals or the filters are therefore needed.
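The known-source method can be made concrete. Below is a minimal Python/NumPy sketch (not the authors' code; the model and parameter values are simplified assumptions) of estimating a filter under the two priors named above, sparsity and an exponentially decaying envelope, via iterative soft-thresholding with weights that grow on late taps:

```python
import numpy as np

def estimate_filter(y, s, L, lam=0.1, tau=200.0, n_iter=500):
    """Estimate an L-tap impulse response h from y ~= conv(s, h), s known.

    Minimises 0.5 * ||y - S h||^2 + lam * sum_t w_t |h_t| by ISTA, with
    w_t = exp(t / tau): late taps are penalised more, encoding both
    sparsity and the exponentially decaying reverberant envelope.
    Assumes len(y) <= len(s) + L - 1.
    """
    S = np.column_stack([np.convolve(np.eye(L)[:, t], s)[:len(y)]
                         for t in range(L)])   # convolution as a matrix
    w = np.exp(np.arange(L) / tau)
    step = 1.0 / np.linalg.norm(S, 2)**2       # inverse Lipschitz constant
    h = np.zeros(L)
    for _ in range(n_iter):
        z = h - step * (S.T @ (S @ h - y))     # gradient step
        thr = step * lam * w
        h = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # soft-threshold
    return h
```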
57

Dosimetric studies of hydroxyapatite by electron paramagnetic resonance and thermoluminescence

Oliveira, Luiz Carlos de 26 February 2010 (has links)
Dosimetric studies of hydroxyapatite (HAp) can be used to determine the absorbed dose in hard tissues in several situations, such as radiological accidents, control of sterilization processes, and archaeological dating. This thesis presents studies of the radiation dose response of HAp assessed both by electron paramagnetic resonance (EPR) and by thermoluminescence (TL). The fossil mammalian fauna of the Coastal Plain of Rio Grande do Sul State has been known since the late nineteenth century; however, its biostratigraphic and chronostratigraphic context is still poorly known. The present work reports EPR dating of eleven samples of teeth of extinct mammals, collected in the Chuí Creek and along the beach in the south of the Rio Grande do Sul coast. The ages obtained for these samples contribute to a better understanding of the origin of these fossil deposits. In a second stage of this work, a new procedure for the decomposition of complex EPR spectra, aimed at dosimetry and dating, is proposed. The method uses functions from the free EasySpin software package, combined with function-minimization methods. After validation, the method was applied to the spectrum decomposition of two Stegomastodon waringi tooth enamel samples from the northeast of Brazil. The decomposition aims to verify the effect that components superposed on the dosimetric signal have on the calculation of the accumulated dose, and it proved helpful in improving the precision of dose determination. Finally, synthetic A-type carbonated hydroxyapatite and natural hydroxyapatite extracted from fossil teeth were characterized by TL. The results obtained with this technique showed that both types of sample respond to ionizing radiation dose. However, the short lifetime of the thermoluminescent glow peak makes TL dosimetry for dating purposes unfeasible for this type of sample.
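The spectrum decomposition amounts to fitting a sum of parametric line shapes to the measured spectrum and reading the dosimetric component off the fit. The thesis does this with EasySpin's MATLAB functions combined with minimization; the sketch below is an illustrative Python/SciPy analogue using first-derivative Gaussian lines (a common CW-EPR line shape), with all names hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_deriv(B, center, width, amp):
    """First-derivative Gaussian line, as recorded in CW-EPR."""
    u = (B - center) / width
    return -amp * u * np.exp(-0.5 * u**2)

def decompose(B, spectrum, init):
    """Fit a sum of lines; init = [c1, w1, a1, c2, w2, a2, ...]."""
    def residual(p):
        model = sum(gaussian_deriv(B, *p[i:i + 3]) for i in range(0, len(p), 3))
        return model - spectrum
    fit = least_squares(residual, init)
    return fit.x.reshape(-1, 3)        # one (center, width, amp) per line
```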
58

Evaluation of the deconvolution method on shallow seismic data

Spadini, Allan Segovia 23 April 2012 (has links)
In this research work, a study of the deconvolution method was conducted, seeking a better fit to the shallow scale of investigation, for the estimation of the seismic wavelet and of the Earth's impulse response. Deterministic and statistical (blind) procedures were evaluated on synthetic data and on real data acquired with impact sources and with a pseudo-random source.
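Among the deterministic procedures, the classical choice is Wiener spiking deconvolution, which designs an inverse filter from the trace autocorrelation. A minimal Python/SciPy sketch (illustrative only; filter length and prewhitening level are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_deconv(trace, flen=50, prewhiten=0.01):
    """Wiener spiking deconvolution of a single seismic trace.

    Solves the Toeplitz normal equations R f = d, where R is built from
    the trace autocorrelation and d requests a spike at zero lag.
    Prewhitening (a boosted zero lag) stabilises the solution.
    """
    r = np.correlate(trace, trace, mode="full")[len(trace) - 1:][:flen]
    r[0] *= 1.0 + prewhiten
    d = np.zeros(flen)
    d[0] = r[0]
    f = solve_toeplitz(r, d)           # symmetric Toeplitz solve
    return np.convolve(trace, f)[:len(trace)]
```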
59

Biopharmaceutical considerations and in vitro-in vivo correlations (IVIVCs) for orally administered amorphous formulations

Long, Chiau Ming January 2014 (has links)
Dissolution testing and physiologically based pharmacokinetic (PBPK) modeling are essential methods during drug development. However, there is a lack of a sound approach to, and understanding of, the parameters that control the dissolution and absorption of amorphous formulations. Robust dissolution conditions and setups, and PBPK models with good predictability of in vivo results, will expedite and facilitate the drug development process. In this project, cefuroxime axetil, CA (Zinnat® as the amorphous formulation); itraconazole, ITR (Sporanox® as the amorphous formulation); and a compound undergoing clinical trials, Compound X, CX (CX tablet as the amorphous formulation), were chosen. The design of experiments for the in vitro dissolution studies, using different apparatus, media and setups which closely simulate the physiological conditions of humans (CA and ITR) and dogs (CX), was implemented. The dissolution of the CA, ITR and CX formulations was successfully characterised using different dissolution apparatus, settings and media (compendial, biorelevant and modified media) to simulate the changes of pH, contents and hydrodynamic conditions (flow rate and rotation speed) in the human gastrointestinal tract (fasted and fed state). The change of hydrodynamics combined with media changes corresponding to the physiological conditions, created with USP apparatus 4 and biorelevant dissolution media, was able to mimic the in vivo performance of the tested formulations. Furthermore, a surface UV dissolution imaging methodology that could be used to understand the mechanism of dissolution of CA and ITR (the active compounds and their amorphous formulations) was developed in this project. The UV images developed using the surface UV imaging apparatus provided a visual representation and a means for the qualitative as well as quantitative assessment of the differences in dissolution rates and concentration for the model compounds used. In this project, validated PBPK models for the fasted state (CA, ITR) and fed state (CA, ITR and CX) were developed. These models incorporated in vitro degradation, particle size distribution, in vitro solubility and dissolution data, as well as in vivo human/dog pharmacokinetic data. The results showed that level A IVIVCs for all three model compounds were successfully established. Dissolution profiles with USP apparatus 4 combined with biorelevant media showed close correlation with the in vivo absorption profiles. Overall, this project successfully provides a comprehensive biorelevant methodology to develop PBPK models and IVIVCs for orally administered amorphous formulations.
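At their core, PBPK models of this kind couple a dissolution rate law to absorption and elimination kinetics. The sketch below is a deliberately minimal one-compartment illustration in Python/SciPy (Noyes-Whitney dissolution with first-order absorption and elimination; all parameter values are hypothetical), not one of the validated models of the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, kd, cs, v, ka, ke):
    """y = [undissolved amount, dissolved amount in gut, amount in plasma]."""
    solid, dissolved, plasma = y
    diss = kd * solid * (cs - dissolved / v)   # Noyes-Whitney dissolution
    return [-diss,
            diss - ka * dissolved,             # absorption from solution
            ka * dissolved - ke * plasma]      # first-order elimination

# Hypothetical 100 mg dose; kd, cs, v, ka, ke are made-up parameters.
sol = solve_ivp(rates, (0.0, 24.0), [100.0, 0.0, 0.0],
                args=(0.8, 0.05, 250.0, 1.2, 0.2), dense_output=True)
t = np.linspace(0.0, 24.0, 200)
plasma_profile = sol.sol(t)[2]                 # simulated plasma curve
```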
60

Latent feature models and non-invasive clonal reconstruction

Marass, Francesco January 2017 (has links)
Intratumoural heterogeneity complicates the molecular interpretation of biopsies, as multiple distinct tumour genomes are sampled and analysed at once. Ignoring the presence of these populations can lead to erroneous conclusions, and so a correct analysis must account for the clonal structure of the sample. Several methods to reconstruct tumour clonality from sequencing data have been proposed, spanning methods that either do not consider phylogenetic constraints or posit a perfect phylogeny. Models of the first type are typically latent feature models that can describe the observed data flexibly, but whose results may not be reconcilable with a phylogeny. The second type, instead, generally comprises non-parametric mixture models, with strict assumptions on the tumour’s evolutionary process. The focus of this dissertation is on the development of a phylogenetic latent feature model that can bridge the advantages of these two approaches, allowing deviations from a perfect phylogeny. The work proceeds through three statistical models of increasing complexity. First, I present a non-parametric model based on the Indian Buffet Process prior, and highlight the need for phylogenetic constraints. Second, I develop a finite, phylogenetic extension of the previous model, and show that it can outperform competing methods. Third, I generalise the phylogenetic model to arbitrary copy-number states. Markov chain Monte Carlo algorithms are presented to perform inference. The models are tested on datasets that include synthetic data, controlled biological data, and clinical data. In particular, the copy-number generalisation is applied to longitudinal circulating tumour DNA samples. Liquid biopsies that leverage circulating tumour DNA require sensitive techniques in order to detect mutations at low allele fractions. One method that allows sensitive mutation calling is the amplicon sequencing strategy TAm-Seq. I present bioinformatic tools to improve both the development of TAm-Seq amplicon panels and the analysis of its sequencing data. Finally, an enhancement of this method is presented and shown to detect mutations de novo and in a multiplexed manner at allele fractions below 0.1%.
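The Indian Buffet Process prior underlying the first model has a simple generative description, sketched below in Python/NumPy (the standard textbook sampler, for illustration; the thesis's phylogenetic extension imposes additional structure on the resulting binary matrix):

```python
import numpy as np

def sample_ibp(n, alpha, seed=0):
    """Draw a binary matrix Z from the Indian Buffet Process prior.

    Rows are samples, columns latent features. Sample i takes feature k
    with probability m_k / i (m_k = how often k was taken before), then
    adds Poisson(alpha / i) brand-new features.
    """
    rng = np.random.default_rng(seed)
    rows, counts = [], np.zeros(0)
    for i in range(1, n + 1):
        old = (rng.random(counts.size) < counts / i).astype(int)
        new = rng.poisson(alpha / i)
        rows.append(np.concatenate([old, np.ones(new, dtype=int)]))
        counts = np.concatenate([counts + old, np.ones(new)])
    Z = np.zeros((n, counts.size), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :row.size] = row
    return Z
```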
