161

Modeling and Signal Processing in Dynamic Contrast Enhanced Magnetic Resonance Imaging

Kratochvíla, Jiří, January 2018
The theoretical part of this work describes perfusion analysis in dynamic contrast-enhanced magnetic resonance imaging, from data acquisition to estimation of perfusion parameters. The main application fields are oncology, cardiology and neurology. The thesis focuses on quantitative perfusion analysis; specifically, it contributes to solving the main challenge of this method: correct estimation of the contrast-agent concentration sequence in the arterial input of the region of interest (the arterial input function). The goals of the thesis are stated based on a literature review and on the expertise of our group. Blind deconvolution is selected as the method of choice. In the practical part of the thesis, a new method for arterial input function identification based on blind deconvolution is proposed. The method is designed for both preclinical and clinical applications and was validated on synthetic, preclinical and clinical data. Furthermore, the longer temporal sampling interval permitted by blind deconvolution was analyzed; it can be traded for improved spatial resolution and possibly for higher SNR. For easier deployment of the proposed methods into clinical and preclinical use, a software tool for perfusion data processing was designed.
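The core model behind this kind of quantitative perfusion analysis is that the tissue concentration curve is the convolution of the arterial input function with an impulse residue function. The sketch below illustrates that forward model and a naive non-blind Fourier-domain deconvolution; it is only a hedged illustration (the gamma-variate AIF shape, the Tofts-like residue function and all parameter values are assumptions, not the blind method proposed in the thesis).

```python
import numpy as np

# Minimal sketch of the convolution model behind DCE-MRI perfusion analysis
# (illustrative only; shapes and parameter values are assumptions).

dt = 1.0                      # sampling interval [s]
t = np.arange(0, 300, dt)     # time axis [s]

# Hypothetical parametric arterial input function (gamma-variate shape)
aif = (t / 30.0) ** 2 * np.exp(-t / 30.0)

# Tofts-like impulse residue function: Ktrans * exp(-kep * t)
ktrans, kep = 0.1, 1.0 / 60.0
residue = ktrans * np.exp(-kep * t)

# Tissue concentration curve = convolution of AIF with residue function
ct = np.convolve(aif, residue)[: len(t)] * dt

# Naive regularized deconvolution in the Fourier domain (Tikhonov-style);
# blind deconvolution would additionally re-estimate the AIF itself.
AIF, CT = np.fft.rfft(aif), np.fft.rfft(ct)
lam = 1e-3 * np.max(np.abs(AIF))
residue_est = np.fft.irfft(CT * np.conj(AIF) / (np.abs(AIF) ** 2 + lam ** 2),
                           n=len(t)) / dt

print("Ktrans (true vs. roughly recovered):", ktrans, residue_est.max())
```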
162

Deconvolution in perfusion imaging

Líbal, Marek, January 2009
The purpose of this study is to introduce methods of deconvolution and to implement some of them. For the simulations, the tissue homogeneity model and a model of the arterial input function were used. These models served as test procedures to verify the functionality and utility of the Wiener filter, the Lucy-Richardson algorithm and singular value decomposition.
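Of the three techniques named above, the Wiener filter is the simplest to sketch. The following minimal example, with an assumed Gaussian kernel and noise-to-signal ratio, shows the frequency-domain form H*/(|H|² + NSR) on a synthetic sparse sequence; it illustrates the general idea, not the code developed in the thesis.

```python
import numpy as np

# Minimal Wiener-deconvolution sketch: recover a sparse input from its
# convolution with a known kernel under additive noise (all values assumed).

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n); x[[40, 90, 170]] = [1.0, -0.6, 0.8]   # sparse "reflectivity"
h = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)    # smoothing kernel (PSF)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y += 0.01 * rng.standard_normal(n)                     # measurement noise

# Wiener filter in the frequency domain: H* / (|H|^2 + NSR)
H, Y = np.fft.fft(h), np.fft.fft(y)
nsr = 1e-2                                             # assumed noise-to-signal ratio
x_hat = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + nsr)))

print("strongest recovered peaks near:", np.argsort(-np.abs(x_hat))[:3])
```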
163

Estimating machining forces from vibration measurements

Joddar, Manish Kumar, 11 December 2019
The topic of force reconstruction has been studied extensively, but most existing work is in the domain of structural and civil engineering, for structures such as bridges and beams. Considerable work on force reconstruction has also been done for machines and structures such as aircraft and gearboxes. Reconstruction of the cutting forces arising in machining processes such as turning or milling is a more recent line of research, motivated by the need for proactive monitoring of the forces generated during machine-tool operation. If the forces causing vibration during machining can be detected and monitored, system productivity and process efficiency can be enhanced. The objective of this study was to investigate the inverse force-reconstruction algorithms available in the literature and to apply them to the reconstruction of cutting forces on a computer numerically controlled (CNC) machine. Three techniques were applied to multi-degree-of-freedom systems: 1) the deconvolution method, 2) the Kalman filter with recursive least squares, and 3) the augmented Kalman filter. Results from the experiments conducted as part of this thesis show that these methods can monitor the forces generated during machining in real time without dynamometers, which are expensive and complex to set up. This cost-effective approach to force reconstruction should prove useful for improving machining efficiency and for proactive preventive maintenance. / Graduate
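The augmented Kalman filter mentioned above treats the unknown force as an extra state with a random-walk model, so the standard predict/update recursion estimates it alongside displacement and velocity. A hedged single-degree-of-freedom sketch (all system matrices and noise levels are assumptions, not the thesis's machining setup):

```python
import numpy as np

# Hedged sketch of an augmented Kalman filter for input-force estimation on a
# single-DOF oscillator; the unknown force is appended to the state with a
# random-walk model. All matrices and noise levels are illustrative assumptions.

m, c, k = 1.0, 0.4, 200.0                 # mass, damping, stiffness
dt = 1e-3
# Discretized state [x, v, f]: x' = v, v' = (f - c v - k x)/m, f' ~ noise
A = np.eye(3) + dt * np.array([[0.0, 1.0, 0.0],
                               [-k / m, -c / m, 1.0 / m],
                               [0.0, 0.0, 0.0]])
H = np.array([[-k / m, -c / m, 1.0 / m]]) # accelerometer measurement model
Q = np.diag([1e-12, 1e-12, 1e-2])         # large process noise drives the force
R = np.array([[1e-4]])

rng = np.random.default_rng(1)
x_true = np.zeros(3)
x_est, P = np.zeros(3), np.eye(3)
f_hist = []
for i in range(5000):
    x_true[2] = 5.0 if 1000 <= i < 3000 else 0.0    # true step force to recover
    x_true[:2] = (A @ x_true)[:2]                   # propagate the true plant
    z = H @ x_true + 1e-2 * rng.standard_normal(1)  # noisy acceleration
    # Standard Kalman predict/update on the augmented state
    x_est, P = A @ x_est, A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(3) - K @ H) @ P
    f_hist.append(x_est[2])
print("force estimate mid-step (true value 5.0):", f_hist[2500])
```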
164

Estimation of Pareto Distribution Functions from Samples Contaminated by Measurement Errors

Kondlo, Lwando Orbet, January 2010
Magister Scientiae - MSc / Estimation of population distributions, from samples that are contaminated by measurement errors, is a common problem. This study considers the problem of estimating the population distribution of independent random variables X_j from error-contaminated samples Y_j (j = 1, ..., n) such that Y_j = X_j + ε_j, where ε is the measurement error, which is assumed independent of X. The measurement error ε is also assumed to be normally distributed. Since the observed distribution function is a convolution of the error distribution with the true underlying distribution, estimation of the latter is often referred to as a deconvolution problem. A thorough study of the relevant deconvolution literature in statistics is reported. We also deal with the specific case when X is assumed to follow a truncated Pareto form. If observations are subject to Gaussian errors, then the observed Y is distributed as the convolution of the finite-support Pareto and Gaussian error distributions. The convolved probability density function (PDF) and cumulative distribution function (CDF) of the finite-support Pareto and Gaussian distributions are derived. The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available. Simulated data are used to validate the methodology. A real-life application of the methodology is illustrated by fitting convolved distributions to astronomical data.
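The convolved density at the heart of this methodology is f_Y(y) = ∫ f_X(x) φ((y−x)/σ)/σ dx taken over the Pareto support. A hedged numerical sketch of that convolution and a one-parameter MLE (illustrative only; the thesis derives the convolved PDF/CDF analytically and estimates all parameters):

```python
import numpy as np
from scipy import integrate, optimize, stats

# Hedged sketch of MLE for a truncated-Pareto-plus-Gaussian-error model
# (an illustration of the convolution idea, not the thesis's actual code).

def pareto_pdf(x, alpha, L, U):
    """Finite-support (truncated) Pareto density on [L, U]."""
    c = alpha * L**alpha / (1.0 - (L / U) ** alpha)
    return np.where((x >= L) & (x <= U), c * x ** (-alpha - 1.0), 0.0)

def convolved_pdf(y, alpha, L, U, sigma):
    """Density of Y = X + eps, X ~ truncated Pareto, eps ~ N(0, sigma^2)."""
    f = lambda x: pareto_pdf(x, alpha, L, U) * stats.norm.pdf(y - x, 0, sigma)
    val, _ = integrate.quad(f, L, U)
    return val

def neg_loglik(alpha, y, L, U, sigma):
    pdf_vals = [convolved_pdf(yi, alpha, L, U, sigma) for yi in y]
    return -np.sum(np.log(np.maximum(pdf_vals, 1e-300)))

# Simulated data: truncated Pareto samples plus Gaussian noise
rng = np.random.default_rng(2)
alpha_true, L, U, sigma = 2.5, 1.0, 10.0, 0.3
u = rng.uniform(size=500)
x = (L**-alpha_true - u * (L**-alpha_true - U**-alpha_true)) ** (-1 / alpha_true)
y = x + rng.normal(0, sigma, size=x.size)

res = optimize.minimize_scalar(lambda a: neg_loglik(a, y, L, U, sigma),
                               bounds=(0.5, 6.0), method="bounded")
print("alpha (true vs. MLE):", alpha_true, res.x)
```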
165

LAWIE: SPARSE-SPIKE DECONVOLUTION WITH LASSO AND WIENER FILTER

FELIPE JORDAO PINHEIRO DE ANDRADE, 06 November 2020
This work proposes an algorithm for solving the seismic sparse-spike deconvolution problem. Entitled LaWie, the algorithm combines the Least Absolute Shrinkage and Selection Operator (LASSO) with the block modeling used in the Wiener filter. Deconvolution is done trace by trace to estimate the reflectivity profile and the wavelet that originated the seismic amplitudes. This work presents results on the synthetic Marmousi2 dataset, where a ground truth is available for objective comparisons. It also presents results on a real dataset, the Netherlands Offshore F3 Block, and shows that the proposed algorithm can not only delineate the reflectivity profile but also highlight features such as fractures in the data.
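The LASSO half of such a scheme can be sketched by writing trace = G r, with G a Toeplitz convolution operator built from the wavelet, and solving an L1-regularized least-squares problem for the reflectivity r. A minimal illustration with an assumed Ricker wavelet (not the LaWie implementation, which also estimates the wavelet via Wiener-style block modeling):

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

# Hedged sketch of LASSO-based sparse-spike deconvolution for one trace.

rng = np.random.default_rng(3)
n = 200
r = np.zeros(n); r[[30, 80, 81, 140]] = [0.8, -0.5, 0.4, 0.6]  # reflectivity

# Ricker wavelet (a standard choice in seismic modeling), centered at index 25
tw = np.arange(-25, 26) * 0.004
f0 = 25.0
w = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

# Convolution as a matrix: trace = G @ r, with G a Toeplitz operator
col = np.zeros(n); col[: w.size - 25] = w[25:]
row = np.zeros(n); row[:26] = w[25::-1]
G = toeplitz(col, row)
trace = G @ r + 0.02 * rng.standard_normal(n)

# L1-regularized inversion of the reflectivity
model = Lasso(alpha=0.005, max_iter=50000, fit_intercept=False)
model.fit(G, trace)
print("spikes recovered near:", np.nonzero(np.abs(model.coef_) > 0.1)[0])
```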
166

SPARSE MODELING AND SUPERTRACES FOR SEISMIC DECONVOLUTION AND INVERSION

RODRIGO COSTA FERNANDES, 11 May 2020
Seismic amplitude data are part of the input in a geophysical interpretation pipeline. As seismic sensors evolve, both resolution and occupied storage space grow. In this context, tasks such as seismic deconvolution and inversion become more expensive, in processing time or in main or secondary memory. Assuming that, approximately, seismic amplitude traces result from the fusion of an oscillatory content – a pulse generated by a kind of explosion, in the case of marine acquisition – with the sparse presence of impedance contrasts and rock-density variation, this work presents contributions to two geophysical interpretation activities: deconvolution and inversion, both targeting sparse-spike reflectivity extraction. Inspired by work in 3D and 4D volumetric compression, sparse modeling, optimization applied to geophysics, image segmentation and scientific visualization, this thesis presents a set of methods that seek the fundamental features that generate amplitude data: (i) an approach for segmenting and selecting seismic traces as representatives of the whole dataset, (ii) an enhancement of an approach for separating amplitudes into a wavelet and sparse reflectivity spikes via deconvolution, and (iii) a way to build a linear operator – a dictionary – partially and approximately capable of representing variations in the wavelet shape, emulating some subsurface effects, from which a reflectivity inversion can be carried out (see the sketch after this abstract). Finally, a set of results is presented demonstrating the viability of these approaches, the possible gains when they are applied – including compression – and opportunities for future work combining geophysics and computer science.
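Item (iii) can be pictured as sparse coding over a dictionary of wavelet variants. The hedged sketch below stacks time-shifted Ricker wavelets at a few peak frequencies and inverts one synthetic trace with orthogonal matching pursuit; the dictionary design and solver are assumptions for illustration, not the thesis's construction:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Hedged sketch of dictionary-based reflectivity inversion: atoms are shifted
# Ricker wavelets at several peak frequencies, loosely emulating wavelet-shape
# variation with depth. Purely illustrative.

def ricker(n, f0, shift, dt=0.004):
    t = (np.arange(n) - shift) * dt
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

n = 160
freqs = [20.0, 30.0, 40.0]                 # assumed family of wavelet shapes
D = np.column_stack([ricker(n, f, s)       # one atom per (frequency, shift)
                     for f in freqs for s in range(n)])

# Synthetic trace: two events with different local wavelets plus noise
rng = np.random.default_rng(4)
trace = 0.9 * ricker(n, 20.0, 40) - 0.6 * ricker(n, 40.0, 110)
trace += 0.02 * rng.standard_normal(n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False)
omp.fit(D, trace)
hits = np.nonzero(omp.coef_)[0]
print("selected atoms (freq index, shift):", [(h // n, h % n) for h in hits])
```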
167

Bayesian methods for inverse problems

Lian, Duan, January 2013
This thesis describes two novel Bayesian methods, the Iterative Ensemble Square Root Filter (IEnSRF) and the Warp Ensemble Square Root Filter (WEnSRF), for solving the barcode detection problem, the deconvolution problem in well testing, and the history-matching problem of facies patterns. For barcode detection, at the expense of overestimating the posterior uncertainty, the IEnSRF efficiently achieves successful detections on very challenging real barcode images that the other methods considered, including commercial software, fail to detect. It also performs reliably on low-resolution images under poor ambient-light conditions. For the deconvolution problem in well testing, the IEnSRF can quantify estimation uncertainty, incorporate cumulative production data and estimate the initial pressure, all of which were thought to be unachievable in the existing well-testing literature. On the real benchmark data considered, estimation with the IEnSRF significantly outperforms the existing methods in commercial software. The WEnSRF is used for history matching of facies patterns. Through a warping transformation, the WEnSRF adjusts the reservoir features directly and is thus superior in estimating large-scale, complicated facies patterns. It provides accurate estimates of the reservoir properties robustly and efficiently, given reasonably reliable prior information on the reservoir structure.
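The common building block of both filters is the ensemble Kalman analysis step, in which the state covariance is estimated from an ensemble of samples. The sketch below shows the stochastic (perturbed-observations) variant on a toy two-dimensional state; square-root filters such as the IEnSRF and WEnSRF instead update the ensemble deterministically, and everything here is an assumed toy setup:

```python
import numpy as np

# Hedged sketch of a single ensemble Kalman analysis step (stochastic variant).
# Model: estimate a 2-vector state from one noisy scalar observation.

rng = np.random.default_rng(5)
N = 200                                    # ensemble size
X = rng.normal([1.0, -1.0], [0.5, 0.5], size=(N, 2)).T   # prior ensemble (2 x N)
H = np.array([[1.0, 0.0]])                 # we observe only the first component
R = np.array([[0.05]])
y = np.array([1.4])                        # the observation

# Ensemble covariance and Kalman gain
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
P = A @ A.T / (N - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Stochastic update: each member assimilates a perturbed observation
Y = y[:, None] + rng.normal(0, np.sqrt(R[0, 0]), size=(1, N))
Xa = X + K @ (Y - H @ X)
print("posterior mean:", Xa.mean(axis=1))
```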
168

Prediction of the compliance of packaging materials using deformulation methods and partition-coefficient modelling

Gillet, Guillaume, 14 November 2008
Plastic packaging is formulated with additives, which are not bound in the polymer matrix and can migrate from the material into foodstuffs. According to European directive 2002/72/EC, the suitability of plastic materials for food contact can be demonstrated using modelling tools. Their use is limited, however, by the availability of data such as the formulation of the materials and the partition coefficients of substances between plastics and food. This work aims, on the one hand, to develop the ability of laboratories to identify and quantify the main substances in plastic materials and, on the other hand, to develop a new method for predicting partition coefficients between polymers and food simulants. Four model formulations each of HDPE and PS were chosen and used throughout the work. Standard solvent-extraction methods and quantification methods using HPLC-UV-ELSD and GC-FID were compared. A new deconvolution process applied to infrared spectra of extracts was developed to identify and quantify the additives contained in HDPE. Activity coefficients in both phases were approximated through a generalized off-lattice Flory-Huggins formulation applied to the plastic materials and to liquids simulating food products. Potential contact energies were calculated with an atomistic semi-empirical forcefield. The simulations demonstrated that the chemical affinity of plastic additives for liquids consisting of small molecules is significant and related to the large contribution of the configurational entropy. Finally, decision trees combining experimental and modelling approaches to demonstrate compliance and to support health surveillance are discussed.
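The link between the computed activity coefficients and the predicted partition coefficient is the standard equilibrium argument, sketched below (a generic thermodynamic relation, not necessarily the exact formulation used in the thesis):

```latex
% Equality of the solute's chemical potential in polymer (P) and food
% simulant (F) at equilibrium, in the dilute limit and with concentrations
% expressed on the same basis:
\[
  \mu_i^{P} = \mu_i^{F}
  \quad\Longrightarrow\quad
  K_{i}^{P/F} \;=\; \frac{C_i^{P}}{C_i^{F}}
             \;=\; \frac{\gamma_i^{F,\infty}}{\gamma_i^{P,\infty}},
\]
% where the infinite-dilution activity coefficients are obtained here from a
% Flory-Huggins-type model combining an enthalpic term (sampled pairwise
% contact energies) and entropic terms, including the configurational
% contribution highlighted in the abstract.
```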
169

Sparse blind deconvolution in ultrasound imaging using an adaptive CLEAN algorithm

Chira, Liviu-Teodor, 17 October 2013
Ultrasound medical imaging is continuously evolving, notably in post-processing, where the goal is to improve image resolution and contrast. Such improvements should help physicians better distinguish the examined tissues, thus improving diagnosis. A large range of hardware and software techniques already exists. This work focuses on post-processing techniques for blind deconvolution in ultrasound imaging; the implemented algorithm works in the time domain and uses the signal envelope as its input. It is able to reconstruct sparse images, i.e. maps of scatterers free of speckle noise. The main steps of this class of methods are: i) blind estimation of the point spread function (PSF), ii) estimation of the scatterers under a sparsity assumption on the examined medium, and iii) reconstruction of the image by reconvolving the sparse scatterer map with an "ideal" PSF. The proposed method was compared with reference techniques in medical imaging using synthetic signals, real ultrasound sequences (1D) and ultrasound images (2D) with different statistics. The method, which has a much lower execution time than competing techniques, is suited to images containing a low to moderate density of scatterers.
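The three steps listed above follow the CLEAN pattern. A hedged 1-D sketch of steps ii) and iii), with the PSF assumed known for simplicity (the thesis estimates it blindly): peaks are picked and subtracted with a loop gain, then the detected scatterers are reconvolved with a narrower "ideal" PSF.

```python
import numpy as np

# Hedged CLEAN-style sparse deconvolution on a 1-D envelope (the generic
# scheme the abstract outlines, not the thesis's algorithm).

def gaussian_psf(n, width):
    x = np.arange(n) - n // 2
    return np.exp(-0.5 * (x / width) ** 2)

rng = np.random.default_rng(6)
n = 300
scatterers = np.zeros(n); scatterers[[60, 150, 210]] = [1.0, 0.7, 0.9]
psf = gaussian_psf(41, 6.0)                 # assumed (here: known) system PSF
env = np.convolve(scatterers, psf, mode="same") + 0.02 * rng.standard_normal(n)

residual = env.copy()
clean_map = np.zeros(n)
gain = 0.5                                  # CLEAN loop gain
for _ in range(200):
    k = np.argmax(np.abs(residual))
    if np.abs(residual[k]) < 0.1:           # stopping criterion on residual
        break
    amp = gain * residual[k]
    clean_map[k] += amp
    lo, hi = max(0, k - 20), min(n, k + 21)
    residual[lo:hi] -= amp * psf[20 - (k - lo): 20 + (hi - k)]

ideal_psf = gaussian_psf(21, 1.5)           # narrower PSF for reconvolution
image = np.convolve(clean_map, ideal_psf, mode="same")
print("scatterers found near:", np.nonzero(clean_map > 0.2)[0])
```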
170

Identification of acoustic sources in a road-vehicle pass-by situation with a sparse time-domain acoustic imaging method

Cousson, Rémi, 14 December 2018
This study is part of the effort to characterize noise emission from road vehicles. The aim is to identify the noise sources of a moving vehicle, driven on a roadway in real-world conditions, from roadside acoustic measurements. Current acoustic imaging methods do not perform well enough on road vehicles. A state-of-the-art review led to the selection of an existing method, MSA-PSF, which deconvolves signals from moving sources in the frequency domain under certain assumptions and was originally developed for aeroacoustics. This method is adapted here to the context of road vehicles. An original approach is then proposed to tackle the specific constraints of this context: CLEANT. This is an iterative method, operating in the time domain with a wideband approach, which takes into account the effect of source motion and includes two parameters designed to refine the result: the loop factor and the stopping criterion. A frequency-filtered version of the algorithm is also proposed and shows significant improvement in identifying secondary sources in some cases. An interesting feature of CLEANT is the availability of the reconstructed source time signals, which enables other types of analysis, in particular using coherence with signals from on-board measurements to separate the contributions of uncorrelated sources. MSA-PSF and CLEANT are evaluated on numerical simulations with a set of indicators measuring their source localization and quantification performance. They are then tested in a controlled laboratory experiment using a moving source: a first application to a practical case involving linear motion, two simultaneous sources and signals of different kinds (tonal and wideband). Finally, they are compared with the classical moving-source beamforming approach in an experiment on a road vehicle in real-world conditions. The original CLEANT approach yields very encouraging results and clearly improves on conventional beamforming, especially at low frequency in the tested cases. Application to a road vehicle in real-world conditions highlights some potentially problematic behaviors of CLEANT and the solutions offered by its frequency-filtered version or by adjusting its various parameters. A first test of referencing against signals from on-board measurements to discriminate the physical origin of sources is also presented, and underlines the impact of the short signal durations inherent to the pass-by context.
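The time-domain ingredient that lets an approach like CLEANT handle motion is focusing each microphone signal on the (hypothesized) source trajectory before summation. A hedged sketch of that moving-source delay-and-sum step on simulated data (the geometry, speeds and sampling are assumptions; CLEANT iterates such focusing with a loop factor and a stopping criterion):

```python
import numpy as np

# Hedged sketch of time-domain delay-and-sum focusing on a moving source;
# all geometry and parameter values are illustrative assumptions.

c = 343.0                                   # speed of sound [m/s]
fs = 25600.0
t = np.arange(0, 1.0, 1 / fs)

# Microphone line array at the roadside (x along the road, y across)
mics = np.stack([np.linspace(-1.5, 1.5, 16), np.full(16, 5.0)], axis=1)

# Source moving along the road (y = 0) at constant speed
v = 15.0                                    # m/s
src_pos = np.stack([v * (t - 0.5), np.zeros_like(t)], axis=1)
s = np.sin(2 * np.pi * 800 * t)             # tonal source signal

# Synthesize microphone signals with a time-varying propagation delay
p = np.zeros((16, t.size))
for m in range(16):
    r = np.linalg.norm(src_pos - mics[m], axis=1)
    p[m] = np.interp(t - r / c, t, s, left=0.0, right=0.0) / r

# Delay-and-sum focused on the hypothesized trajectory: advance each channel
# by its delay, undo spreading, and average; coherent summation recovers s.
focused = np.zeros(t.size)
for m in range(16):
    r = np.linalg.norm(src_pos - mics[m], axis=1)
    focused += np.interp(t + r / c, t, p[m], left=0.0, right=0.0) * r
focused /= 16
print("correlation with emitted signal:", np.corrcoef(focused, s)[0, 1])
```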
