161 |
Využití dekonvoluce v digitální fluorescenční mikroskopii kvasinek / Deconvolution fluorescence microscopy of yeast cells. Štec, Tomáš. January 2015.
Title: Deconvolution fluorescence microscopy of yeast cells. Author: Tomáš Štec. Department: Institute of Physics of Charles University. Supervisor: prof. RNDr. Jaromír Plášek, CSc., Institute of Physics of Charles University. Abstract: Fluorescence microscopy presents a fast and cheap alternative to more advanced imaging methods such as confocal and electron microscopy, even though it is subject to heavy image distortion. Most of the original distortion-free image can be recovered using deconvolution in computer image processing, which allows the 3D structure of the studied objects to be reconstructed. This diploma thesis subjects the deconvolution procedure of the NIS Elements AR program to a thorough inspection. The procedure is then applied to the restoration of the 3D structure of the calcofluor-stained cell wall of the budding yeast Saccharomyces cerevisiae. Changes in the structure of the cell wall during cell ageing are examined: the cell wall of aged cells shows increased surface roughness and even ruptures at the end of the cell's life. Keywords: fluorescence, microscopy, deconvolution, NIS Elements AR, calcofluor, yeast, cell wall, ageing
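The abstract does not disclose which algorithm NIS Elements AR uses internally, so the sketch below is only a generic stand-in for fluorescence-microscopy deconvolution: the classical Richardson-Lucy iteration in 1D, with a synthetic Gaussian point-spread function and two point sources (all sizes and shapes are illustrative assumptions).

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy deconvolution in 1D (multiplicative update)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# two point sources blurred by a synthetic Gaussian PSF
x = np.linspace(-3.0, 3.0, 61)
psf = np.exp(-x**2 / 0.5)
psf /= psf.sum()
truth = np.zeros(61)
truth[20], truth[40] = 1.0, 0.7
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=200)
```

The multiplicative update keeps the estimate non-negative and progressively concentrates the blurred intensity back onto the point sources; in 3D microscopy the same update runs with volumetric convolutions against a measured or modelled PSF.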
162 |
Estimation of Pareto distribution functions from samples contaminated by measurement errors. Kondlo, Lwando Orbet. January 2010.
Magister Scientiae - MSc / The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available. / South Africa
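As a hedged sketch of the approach (not the author's actual code): the convolved Pareto-plus-Gaussian density can be evaluated by numerical integration, and the Pareto shape parameter recovered by maximising the likelihood. Here the scale x_m and error standard deviation sigma are assumed known, a coarse grid search stands in for a proper optimiser, and the integration grid and sample sizes are illustrative choices.

```python
import numpy as np

def pareto_pdf(x, alpha, xm):
    """Pareto(alpha, xm) density, zero below the scale xm."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= xm, alpha * xm**alpha / np.maximum(x, xm)**(alpha + 1), 0.0)

def convolved_pdf(y, alpha, xm, sigma, n_grid=4000, span=200.0):
    """Density of Y = X + eps with X ~ Pareto(alpha, xm), eps ~ N(0, sigma^2),
    by trapezoid integration over a truncated X-range."""
    grid = np.linspace(xm, xm + span, n_grid)
    dx = grid[1] - grid[0]
    w = np.ones(n_grid)
    w[0] = w[-1] = 0.5                    # trapezoid end weights
    fx = pareto_pdf(grid, alpha, xm) * w
    y = np.atleast_1d(np.asarray(y, dtype=float))
    phi = np.exp(-(y[:, None] - grid[None, :])**2 / (2.0 * sigma**2))
    phi /= sigma * np.sqrt(2.0 * np.pi)
    return (phi * fx).sum(axis=1) * dx

def neg_log_lik(alpha, y, xm, sigma):
    f = convolved_pdf(y, alpha, xm, sigma)
    return -np.sum(np.log(np.maximum(f, 1e-300)))

# simulate error-contaminated Pareto data, then fit alpha by grid search
rng = np.random.default_rng(0)
alpha_true, xm, sigma = 2.5, 1.0, 0.2
x = xm * (rng.pareto(alpha_true, size=500) + 1.0)   # NumPy's pareto is the shifted (Lomax) form
y = x + rng.normal(0.0, sigma, size=500)
alphas = np.linspace(1.5, 3.5, 41)
alpha_hat = alphas[int(np.argmin([neg_log_lik(a, y, xm, sigma) for a in alphas]))]
```

In the thesis the convolved PDF is derived in closed form and all parameters are estimated jointly; the numerical convolution above is only the transparent way to see what is being maximised.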
163 |
Modelování a analýza signálů v zobrazování perfúze magnetickou rezonancí / Modeling and Signal Processing in Dynamic Contrast Enhanced Magnetic Resonance Imaging. Kratochvíla, Jiří. January 2018.
The theoretical part of this work describes perfusion analysis of dynamic contrast-enhanced magnetic resonance imaging, from data acquisition to estimation of perfusion parameters. The main application fields are oncology, cardiology and neurology. The thesis is focused on quantitative perfusion analysis; specifically, it contributes to solving the main challenge of this method – correct estimation of the contrast-agent concentration sequence in the arterial input of the region of interest (the arterial input function). The goals of the thesis are stated based on a literature review and on the expertise of our group. Blind deconvolution is selected as the method of choice. In the practical part of this thesis, a new method for arterial input function identification based on blind deconvolution is proposed. The method is designed for both preclinical and clinical applications and was validated on synthetic, preclinical and clinical data. Furthermore, the possibility of the longer temporal sampling permitted by blind deconvolution was analyzed; this can be exploited for improved spatial resolution and possibly for higher SNR. For easier deployment of the proposed methods into clinical and preclinical use, a software tool for perfusion data processing was designed.
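A minimal alternating sketch of the blind-deconvolution idea (not the thesis's parametric method): several tissue curves share one unknown input, and the algorithm alternates between solving for the responses given the input and for the input given the responses. Blind deconvolution is ill-posed up to a filter ambiguity, so without the constraints the thesis imposes the factors are not uniquely recovered; the check below therefore only verifies that the estimated input-response pair reproduces the data. Circular convolution and the regularisation weight are simplifying assumptions.

```python
import numpy as np

def _deconv(Y, A, lam):
    """Tikhonov-regularised frequency-domain deconvolution."""
    return np.conj(A) * Y / (np.abs(A)**2 + lam)

def blind_deconvolve(traces, n_iter=50, lam=1e-6):
    """Alternate between the shared input and the per-curve responses.

    traces: (n_curves, n) array, each modelled as the input circularly
    convolved with an unknown tissue response."""
    n = traces.shape[1]
    T = np.fft.rfft(traces, axis=1)
    A = T[0].copy()                       # first trace spectrum as the initial input guess
    for _ in range(n_iter):
        R = _deconv(T, A[None, :], lam)   # responses given the input
        # input given the responses: regularised least squares over all curves
        A = (np.conj(R) * T).sum(axis=0) / ((np.abs(R)**2).sum(axis=0) + lam)
    return np.fft.irfft(A, n), np.fft.irfft(R, n, axis=1)

# synthetic test: gamma-variate-like input, exponential tissue responses
n = 64
t = np.arange(n, dtype=float)
aif_true = t * np.exp(-t / 5.0)
resp_true = np.exp(-t[None, :] / np.array([3.0, 8.0, 15.0])[:, None])
traces = np.fft.irfft(np.fft.rfft(aif_true) * np.fft.rfft(resp_true, axis=1), n, axis=1)
aif_est, resp_est = blind_deconvolve(traces)
recon = np.fft.irfft(np.fft.rfft(aif_est) * np.fft.rfft(resp_est, axis=1), n, axis=1)
```

The ambiguity is exactly why the thesis constrains the input with a parametric model: only then does the alternation converge to a physiologically meaningful arterial input function rather than to any data-consistent factorisation.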
164 |
Využití dekonvoluce v perfuzním zobrazování / Deconvolution in perfusion imaging. Líbal, Marek. January 2009.
The purpose of this study is to introduce methods of deconvolution and to implement some of them. For the simulations, the tissue homogeneity model and a model of the arterial input function were used. These models served as test procedures for verifying the functionality and utility of the Wiener filter, the Lucy-Richardson algorithm and singular value decomposition.
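Of the three methods named, the Wiener filter is the easiest to sketch. Below is a minimal frequency-domain version on a toy perfusion-style example: an exponential tissue residue function circularly convolved with a crude boxcar standing in for the arterial input function. The shapes and the SNR value are illustrative assumptions, not the thesis's test data.

```python
import numpy as np

def wiener_deconvolve(y, h, snr=1e3):
    """Frequency-domain Wiener deconvolution: X = conj(H) * Y / (|H|^2 + 1/SNR)."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    X = np.conj(H) * Y / (np.abs(H)**2 + 1.0 / snr)
    return np.fft.irfft(X, n)

# toy example: exponential residue function, 5-sample boxcar "input"
n = 128
t = np.arange(n)
residue = np.exp(-t / 10.0)
aif = np.zeros(n)
aif[:5] = 1.0
y = np.fft.irfft(np.fft.rfft(aif) * np.fft.rfft(residue), n)  # circular convolution
rec = wiener_deconvolve(y, aif, snr=1e6)
```

The 1/SNR term regularises frequencies where the input spectrum is small; with noisy data the SNR constant (or a full noise-power spectrum) must be chosen far more conservatively than in this noise-free check.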
165 |
Estimating machining forces from vibration measurements. Joddar, Manish Kumar. 11 December 2019.
The topic of force reconstruction has been studied quite extensively, but most of the existing research is in the domain of structural and civil engineering, on constructions such as bridges and beams. Considerable work on force reconstruction has also been done for machines and structures such as aircraft and gearboxes. Reconstruction of the cutting forces in a machining process such as turning or milling is a more recent line of research, driven by the need for proactive monitoring of the forces generated during machine-tool operation. If the forces causing vibrations during machining can be detected and monitored, system productivity and process efficiency can be enhanced. The objective of this study was to investigate the algorithms available in the literature for inverse force reconstruction and to apply them to the reconstruction of cutting forces while machining on a computer numerically controlled (CNC) machine. This study applied three inverse force-reconstruction algorithms to multi-degree-of-freedom systems: 1) the deconvolution method, 2) the Kalman filter with recursive least squares, and 3) the augmented Kalman filter.
Results from the experiments conducted as part of this thesis show the effectiveness of these force-reconstruction methods for monitoring the forces generated during machining in real time, without employing dynamometers, which are expensive and complex to set up. This cost-effective approach to force reconstruction will be instrumental in applications for improving machining efficiency and proactive preventive maintenance. / Graduate
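Of the three algorithms, the augmented Kalman filter lends itself to a compact sketch: the unknown force is appended to the state vector as a random walk and estimated alongside displacement and velocity. The single-degree-of-freedom model, noise levels, and step force below are illustrative assumptions, not the thesis setup.

```python
import numpy as np

# 1-DOF oscillator m*x'' + c*x' + k*x = f, with the unknown force f appended
# to the state z = [x, v, f] as a random walk (the "augmented state" trick)
m, c, k = 1.0, 0.8, 50.0
dt = 1e-3
A = np.array([[1.0,     dt,           0.0],
              [-k/m*dt, 1.0 - c/m*dt, dt/m],
              [0.0,     0.0,          1.0]])
H = np.array([[1.0, 0.0, 0.0]])        # displacement is the only measurement
Q = np.diag([0.0, 0.0, 1e-4])          # process noise drives only the force state
R = np.array([[1e-8]])                 # measurement noise variance (std 1e-4)

rng = np.random.default_rng(1)
n = 4000
f_true = np.where(np.arange(n) * dt > 1.0, 2.0, 0.0)   # step force at t = 1 s

# simulate the plant with the same discrete model
z = np.zeros(3)
meas = np.zeros(n)
for i in range(n):
    z[2] = f_true[i]
    z = A @ z
    meas[i] = z[0] + rng.normal(0.0, 1e-4)

# run the augmented Kalman filter
zh = np.zeros(3)
P = np.eye(3)
f_hat = np.zeros(n)
for i in range(n):
    zh = A @ zh
    P = A @ P @ A.T + Q
    S = (H @ P @ H.T + R)[0, 0]
    K = P @ H.T / S                    # Kalman gain, shape (3, 1)
    zh = zh + K[:, 0] * (meas[i] - (H @ zh)[0])
    P = (np.eye(3) - K @ H) @ P
    f_hat[i] = zh[2]
```

The force process-noise variance trades tracking speed against estimate smoothness; in a real machining setup the structural model would come from modal testing of the tool-holder-spindle assembly rather than from assumed m, c, k values.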
166 |
Estimation of Pareto Distribution Functions from Samples Contaminated by Measurement Errors. Kondlo, Lwando Orbet. January 2010.
Magister Scientiae - MSc / Estimation of population distributions from samples that are contaminated by measurement errors is a common problem. This study considers the problem of estimating the population distribution of independent random variables X_i from error-contaminated samples Y_i (i = 1, ..., n) such that Y_i = X_i + e_i, where e is the measurement error, which is assumed independent of X. The measurement error e is also assumed to be normally distributed. Since the observed distribution function is a convolution of the error distribution with the true underlying distribution, estimation of the latter is often referred to as a deconvolution problem. A thorough study of the relevant deconvolution literature in statistics is reported.

We also deal with the specific case when X is assumed to follow a truncated Pareto form. If observations are subject to Gaussian errors, then the observed Y is distributed as the convolution of the finite-support Pareto and Gaussian error distributions. The convolved probability density function (PDF) and cumulative distribution function (CDF) of the finite-support Pareto and Gaussian distributions are derived.

The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.

Simulated data are used to validate the methodology. A real-life application of the methodology is illustrated by fitting convolved distributions to astronomical data.
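The bootstrap step can be sketched generically. Since the convolved Pareto-Gaussian CDF is lengthy, the check below substitutes a standard normal; the same routine would accept the convolved CDF and a sampler from the fitted model (simulating from the fitted model is what makes this a parametric bootstrap). The number of replicates and the sanity-check constant 1.358/sqrt(n) are the usual textbook choices, not values from the thesis.

```python
import numpy as np
from math import erf

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic sup_x |F_n(x) - F(x)|."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    F = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    return max(d_plus, d_minus)

def bootstrap_ks_critical(sampler, cdf, n, alpha=0.05, n_boot=1000, seed=0):
    """Parametric-bootstrap critical value: simulate from the fitted model,
    compute the K-S statistic per replicate, take the (1 - alpha) quantile."""
    rng = np.random.default_rng(seed)
    stats = [ks_statistic(sampler(n, rng), cdf) for _ in range(n_boot)]
    return float(np.quantile(stats, 1.0 - alpha))

# sanity check against the standard normal, where the asymptotic 5% critical
# value is roughly 1.358 / sqrt(n)
norm_cdf = lambda x: 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))
crit = bootstrap_ks_critical(lambda n, rng: rng.normal(size=n), norm_cdf, n=100)
```

When parameters are estimated from the data, the bootstrap must refit them on each replicate, which is precisely why tabulated K-S critical values do not apply and the thesis resorts to bootstrapping.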
167 |
[pt] LAWIE: DECONVOLUÇÃO EM PICOS ESPARSOS USANDO O LASSO E FILTRO DE WIENER / [en] LAWIE: SPARSE-SPIKE DECONVOLUTION WITH LASSO AND WIENER FILTER. FELIPE JORDAO PINHEIRO DE ANDRADE. 06 November 2020.
[pt] Este trabalho propõe um algoritmo para o problema da deconvolução sísmica em picos esparsos. Intitulado LaWie, este algoritmo é baseado na combinação do Least Absolute Shrinkage and Selection Operator (LASSO)
e a modelagem de blocos usada no filtro de Wiener. A deconvolução é feita traço a traço para estimar o perfil de refletividade e a wavelet original que deu origem às amplitudes sísmicas. Este trabalho apresenta o resultado do método no dataset sintético do Marmousi2, onde existe um ground truth para comparações objetivas. Além disso, também apresenta os resultados no dataset real Netherlands Offshore F3 Block e mostra a aplicabilidade do algoritmo proposto para não apenas delinear o perfil de refletividades como
também para ressaltar características como fraturas neste dado. / [en] This work proposes an algorithm for solving the seismic sparse-spike deconvolution problem. Entitled LaWie, this algorithm is based on the combination of Least Absolute Shrinkage and Selection Operator (LASSO)
and the block modeling used in the Wiener filter. Deconvolution is done trace by trace to estimate the reflectivity profile and the convolution wavelet that originated the seismic amplitudes. This work presents the results in the synthetic dataset of Marmousi2, where there is a ground truth for objective comparisons. Also, this work presents the results in a real dataset, Netherlands Offshore F3 Block, and shows the applicability of the proposed algorithm to outline the reflectivity profile and highlight characteristics such as fractures in this data.
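LaWie itself is not reproduced here, but the LASSO half of the idea can be sketched with plain ISTA (proximal gradient with soft thresholding) on a synthetic trace. The Ricker wavelet, penalty weight, and trace length are illustrative assumptions; the thesis additionally estimates the wavelet itself via the Wiener-filter block modelling.

```python
import numpy as np

def ista_sparse_spike(trace, wavelet, lam=0.02, n_iter=2000):
    """Sparse-spike deconvolution: minimise 0.5*||W r - y||^2 + lam*||r||_1
    by ISTA, where W convolves a reflectivity series with a known wavelet."""
    n = len(trace)
    half = len(wavelet) // 2
    W = np.zeros((n, n))                 # convolution operator: column i = shifted wavelet
    for i in range(n):
        for j, w in enumerate(wavelet):
            kk = i + j - half
            if 0 <= kk < n:
                W[kk, i] = w
    L = np.linalg.norm(W, 2) ** 2        # Lipschitz constant of the smooth part
    r = np.zeros(n)
    for _ in range(n_iter):
        z = r - (W.T @ (W @ r - trace)) / L   # gradient step
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return r

# toy trace: Ricker wavelet over two well-separated reflectivity spikes
s = np.linspace(-1.0, 1.0, 21)
wavelet = (1.0 - 2.0 * (np.pi * s) ** 2) * np.exp(-(np.pi * s) ** 2)
refl = np.zeros(100)
refl[30], refl[60] = 1.0, -0.6
trace = np.convolve(refl, wavelet, mode="same")
rec = ista_sparse_spike(trace, wavelet)
```

The L1 penalty biases amplitudes toward zero but preserves the spike locations, which is the behaviour sparse-spike deconvolution trades on; in production code the dense W matrix would be replaced by FFT-based convolutions.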
168 |
[pt] MODELAGEM ESPARSA E SUPERTRAÇOS PARA DECONVOLUÇÃO E INVERSÃO SÍSMICAS / [en] SPARSE MODELING AND SUPERTRACES FOR SEISMIC DECONVOLUTION AND INVERSION. RODRIGO COSTA FERNANDES. 11 May 2020.
[pt] Dados de amplitude sísmica compõem o conjunto de insumos do trabalho de interpretação geofísica. À medida que a qualidade dos sensores sísmicos evoluem, há aumento importante tanto na resolução quanto no espaço ocupado para armazenamento. Neste contexto, as tarefas de deconvolução e inversão sísmicas se tornam mais custosas, em tempo de processamento ou em espaço ocupado, em memória principal ou secundária. Partindo do pressuposto de que é possível assumir, por aproximação, que traços de amplitudes sísmicas são o resultado da fusão entre um conteúdo oscilatório – um pulso gerado por um tipo de explosão, em caso de aquisição marítima – e a presença esparsa de contrastes de impedância e variação de densidade de rocha, pretende-se, neste trabalho, apresentar contribuições quanto à forma de realização de duas atividades em interpretação geofísica: a deconvolução e a inversão de refletividades em picos esparsos. Tomando como inspiração trabalhos em compressão volumétrica 3D e 4D, modelagem esparsa, otimização em geofísica, segmentação de imagens e visualização científica, apresenta-se, nesta tese, um conjunto de métodos que buscam estruturas fundamentais e geradoras das amplitudes: (i) uma abordagem para segmentação e seleção de traços sísmicos como representantes de todo o dado, (ii) uma abordagem para separação de amplitudes em ondaleta e picos esparsos de refletividade via deconvolução e (iii) uma outra para confecção de um operador linear – um dicionário – capaz de representar, parcial e aproximadamente, variações no conteúdo oscilatório – emulando alguns efeitos do subsolo –, com o qual é possível realizar uma inversão de refletividades. Por fim, apresentase um conjunto de resultados demonstrando a viabilidade das abordagens, o ganho eventual se aplicadas – incluindo a possibilidade de compressão – e a abertura de oportunidades de trabalhos futuros mesclando geofísica e computação. 
/ [en] Seismic amplitude data are part of the input to a geophysical interpretation pipeline. As seismic sensors evolve, both resolution and occupied storage space grow. In this context, tasks such as seismic deconvolution and inversion become more expensive, in processing time or in main or secondary memory. Assuming, approximately, that seismic amplitude traces result from a fusion between an oscillatory content – a pulse generated by a kind of explosion, in the case of marine acquisition – and the sparse presence of impedance contrasts and rock-density variation, this work presents contributions to the way two geophysical interpretation activities are carried out: deconvolution and inversion, both targeting sparse-spike reflectivity extraction.
Inspired by works in 3D and 4D volumetric compression, sparse modeling, optimization applied to geophysics, image segmentation and scientific visualization, this thesis presents a set of methods that seek the fundamental structures generating the amplitude data: (i) an approach for segmenting seismic traces and selecting representatives of the whole dataset, (ii) an enhancement of an approach for separating amplitudes into a wavelet and sparse reflectivity spikes via deconvolution, and (iii) a way to generate a linear operator – a dictionary – partially and approximately capable of representing variations in the wavelet shape, emulating some effects of the subsoil, from which a reflectivity inversion can be accomplished. Finally, a set of results is presented demonstrating the viability of these approaches, the possible gains when they are applied – including compression – and some opportunities for future work mixing geophysics and computer science.
169 |
Bayesian methods for inverse problems. Lian, Duan. January 2013.
This thesis describes two novel Bayesian methods, the Iterative Ensemble Square Root Filter (IEnSRF) and the Warp Ensemble Square Root Filter (WEnSRF), for solving the barcode detection problem, the deconvolution problem in well testing, and the history matching of facies patterns. For barcode detection, at the expense of overestimating the posterior uncertainty, the IEnSRF efficiently achieves successful detections on very challenging real barcode images on which the other considered methods and commercial software fail. It also performs reliable detection on low-resolution images under poor ambient light. For the deconvolution problem in well testing, the IEnSRF is capable of quantifying estimation uncertainty, incorporating cumulative production data and estimating the initial pressure, all of which were thought to be unachievable in the existing well-testing literature. On the real benchmark data considered, the estimation results using the IEnSRF significantly outperform the existing methods in commercial software. The WEnSRF is utilised for the history matching of facies patterns. Through the warping transformation, the WEnSRF adjusts the reservoir features directly and is thus superior in estimating large-scale complicated facies patterns. It provides accurate estimates of the reservoir properties robustly and efficiently, given reasonably reliable prior information about the reservoir structure.
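The thesis's iterative and warped variants are not given here, so the sketch below shows only the generic deterministic ensemble square root filter update they build on, in the Whitaker-Hamill serial-observation form: the ensemble mean receives the full Kalman gain while the deviations receive a reduced gain, avoiding perturbed observations. It is verified against the exact linear-Gaussian posterior.

```python
import numpy as np

def ensrf_update(ens, H, y, R):
    """Deterministic ensemble square root filter update, one scalar observation.

    Whitaker-Hamill form: full Kalman gain for the mean, reduced gain for
    the ensemble deviations so the analysis covariance is exact."""
    n_state, n_ens = ens.shape
    xbar = ens.mean(axis=1, keepdims=True)
    Xp = ens - xbar                            # deviations from the mean
    Hx = H @ ens                               # (1, n_ens) predicted observations
    hbar = Hx.mean()
    HXp = Hx - hbar
    Pxy = Xp @ HXp.T / (n_ens - 1)             # cross covariance, (n_state, 1)
    Pyy = (HXp @ HXp.T)[0, 0] / (n_ens - 1) + R
    K = Pxy / Pyy                              # Kalman gain
    beta = 1.0 / (1.0 + np.sqrt(R / Pyy))      # gain reduction for the sqrt step
    return (xbar + K * (y - hbar)) + (Xp - beta * K @ HXp)

# linear-Gaussian sanity check: prior N(0, 1), observe y = 2 with noise var 1;
# the exact posterior is N(1, 0.5)
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(1, 100000))
post = ensrf_update(ens, np.array([[1.0]]), 2.0, 1.0)
```

For the nonlinear, high-dimensional problems in the thesis this update is applied to an ensemble pushed through the forward model, with the iterative and warping machinery layered on top.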
170 |
Prédiction de la conformité des matériaux d'emballage par intégration de méthodes de déformulation et de modélisation du coefficient de partage / Prediction of the compliance of packaging materials using deformulation methods and partition-coefficient modelling. Gillet, Guillaume. 14 November 2008.
Les matériaux plastiques contiennent des additifs, qui ne sont pas fixés dans la matrice polymère et risquent migrer dans les aliments. La directive européenne 2002/72 a introduit la possibilité de démontrer l’aptitude au contact alimentaire de ces matériaux à partir d’approches prédictives, dont l’application est limitée par la disponibilité de données de formulation et de physico-chimie. Ces travaux visent à adapter et développer des approches analytiques rapides pour identifier et quantifier les substances majoritaires contenues dans les plastiques et à développer une approche générique de prédiction des coefficients de partage entre des polymères et les simulants de l’aliment. Des méthodes conventionnelles d’extraction par solvant et de quantification en CLHP-UV-DEDL et CPG-DIF ont été comparées pour quatre formulations modèles de PEHD et PS. Une méthode rapide de déconvolution de spectres infrarouge d’extraits de PEHD a été développée pour identifier et quantifier les additifs. Un modèle prédictif des coefficients d’activité dans les PE et les simulants est proposé. Les contributions enthalpique et entropique non configurationnelle sont évaluées à partir d’un échantillonnage des énergies de contact paire à paire. Il est démontré que la contribution entropique configurationnelle est indispensable à la description de l’affinité de molécules de grande taille dans les simulants polaires ou non constitués de petites molécules. Des arbres de décision combinant approche expérimentale et modèle sont finalement discutés dans la logique de démonstration de la conformité et de veille sanitaire / Plastic packagings are formulated with additives, which can migrate from materials into foodstuffs. According to European directive 2002/72/EC, the ability of plastic materials to be used in contact with food can be demonstrated using modelling tools. 
Their use is however limited by the availability of some data, such as the formulation of the materials and the partition coefficients of substances between plastics and food. This work aims, on the one hand, to develop the ability of laboratories to identify and quantify the main substances in plastic materials and, on the other hand, to develop a new method for predicting partition coefficients between polymers and food simulants. Four formulations each of HDPE and PS were chosen and used during the work. Standard extraction methods and quantification methods using HPLC-UV-ELSD and GC-FID were compared. A new deconvolution process applied to infrared spectra of extracts was developed to identify and quantify the additives contained in HDPE. Activity coefficients in both phases were approximated through a generalized off-lattice Flory-Huggins formulation applied to the plastic materials and to the liquids simulating food products. Potential contact energies were calculated with an atomistic semi-empirical forcefield. The simulations demonstrated that plastic additives have a significant chemical affinity, related to the significant contribution of the positional entropy, for liquids consisting of small molecules. Finally, decision trees that combine experimental and modelling approaches to demonstrate the compliance of plastic materials are discussed.
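The partition-coefficient idea can be illustrated numerically. The sketch below is a strongly simplified, volume-fraction-based Flory-Huggins estimate at infinite dilution, not the atomistic contact-energy sampling used in the thesis; the chi interaction parameters, the size ratios r, and the concentration-scale identification K = gamma_F / gamma_P are all illustrative assumptions.

```python
import numpy as np

def ln_activity_coeff(chi, r):
    """Flory-Huggins activity coefficient of a dilute solute (volume-fraction
    scale): a size-ratio entropy term (1 - 1/r) plus the chi enthalpy term."""
    return (1.0 - 1.0 / r) + chi

def partition_coeff(chi_P, r_P, chi_F, r_F):
    """K = gamma_F / gamma_P between polymer P and food simulant F:
    higher activity on the simulant side pushes the solute into the polymer."""
    return np.exp(ln_activity_coeff(chi_F, r_F) - ln_activity_coeff(chi_P, r_P))

# illustrative numbers (assumptions, not measured values): a bulky additive,
# poor affinity for a small-molecule aqueous simulant (large chi_F)
K = partition_coeff(chi_P=0.5, r_P=1000.0, chi_F=2.0, r_F=1.0)
```

The size-ratio term is the configurational-entropy contribution the abstract highlights: for a large additive in a small-molecule simulant it cannot be neglected, which is why purely enthalpic estimates of the partition coefficient fail there.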