11 |
Explicit deconvolution of wellbore storage distorted well test data
Bahabanian, Olivier, 25 April 2007
The analysis/interpretation of wellbore storage distorted pressure transient test data remains one of the
most significant challenges in well test analysis. Deconvolution (i.e., the "conversion" of a variable-rate
distorted pressure profile into the pressure profile for an equivalent constant rate production sequence) has
been in limited use as a "conversion" mechanism for the last 25 years. Unfortunately, standard deconvolution
techniques require accurate measurements of flow-rate and pressure at downhole (or sandface)
conditions. While accurate pressure measurements are commonplace, the measurement of sandface flowrates
is rare, essentially non-existent in practice.
As such, the "deconvolution" of wellbore storage distorted pressure test data is problematic.
In theory, this process is possible, but in practice, without accurate measurements of flowrates, this
process cannot be employed. In this work we provide explicit (direct) deconvolution of wellbore storage
distorted pressure test data using only those pressure data. The underlying equations associated with each
deconvolution scheme are derived in the Appendices and implemented via a computational module.
The value of this work is that we provide explicit tools for the analysis of wellbore storage distorted
pressure data; specifically, we utilize the following techniques:
* Russell method (1966) (very approximate approach),
* "Beta" deconvolution (1950s and 1980s),
* "Material Balance" deconvolution (1990s).
Each method has been validated using both synthetic data and literature field cases and each method
should be considered valid for practical applications.
Our primary technical contribution in this work is the adaptation of various deconvolution methods for the
explicit analysis of an arbitrary set of pressure transient test data which are distorted by wellbore storage
without the requirement of having measured sandface flowrates.
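For readers unfamiliar with the underlying relation, the constant-rate conversion that these schemes target can be written as a Duhamel (superposition) convolution, and β-deconvolution is usually built on an exponential sandface-rate model during wellbore storage. The sketch below states both in generic notation; the symbols and the exact form are illustrative assumptions, not equations reproduced from the thesis Appendices.

```latex
% Variable-rate bottomhole pressure drop as a Duhamel convolution of the
% normalized sandface rate q_D with the constant-rate response \Delta p_u:
\[
  \Delta p_{wf}(t) \;=\; \int_0^{t} q_D(\tau)\, \Delta p_u'(t-\tau)\, d\tau .
\]
% Beta deconvolution replaces the unmeasured sandface rate with an assumed
% exponential buildup during the wellbore-storage period:
\[
  q_D(t) \;\approx\; 1 - e^{-\beta t},
\]
% which allows \Delta p_u(t) to be extracted explicitly from the measured
% pressures alone, once \beta is estimated from the storage-dominated data.
```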
|
12 |
Vibration Signal-Based Fault Detection for Rotating Machines
McDonald, Geoffrey Lyall, Unknown Date
No description available.
|
13 |
Deconvolving Maps of Intra-Cardiac Electrical Potential
Palmer, Keryn, 26 July 2012
Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical practice, occurring in 1% of the adult population of North America. Although AF does not typically lead to risk of immediate mortality, it is a potent risk factor for ischemic stroke. When left untreated, AF reduces quality of life, functional status and cardiac performance, and is associated with higher medical costs and an increased risk of death. Catheter ablation is a commonly used treatment method for those who suffer from drug-refractory AF. Prior to ablation, intra-cardiac mapping can be used to determine the activation sequence of cardiac tissue, which may be useful in deciding where to place ablation lesions. However, the electrical potential that is recorded during mapping is not a direct reflection of the current density across the tissue, because the potential recorded at each point above the heart tissue is influenced by every cell in the tissue. This causes the recorded potential to be a blurred version of the true tissue current density. The potential that is observed can be described as the convolution of the true current density with a point spread function. Accordingly, deconvolution can, in principle, be used to improve the resolution of potential maps. However, because the number of electrodes which can be deployed transvenously is limited by practical restrictions, the recorded potential field is a sparsely sampled version of the actual potential field. Further, an electrode array cannot sample over the entire atrial surface, so the potential map that is observed is a truncated version of the global electrical activity. Here, we investigate the effects of electrode sampling density and edge extension on the ability of deconvolution to improve the resolution of measured electrical potentials within the atria of the heart. In particular, we identify the density of sensing electrodes that is required to allow deconvolution to provide an improved estimate of the true current density when compared to the observed potential field.
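To make the deconvolution step above concrete, here is a minimal sketch of Wiener-regularized 2-D deconvolution of a sampled potential map with an assumed point spread function. The Gaussian PSF, grid size and noise-to-signal constant are illustrative assumptions and stand in for the thesis's actual forward model and electrode geometry.

```python
import numpy as np

def wiener_deconvolve(potential_map, psf, noise_to_signal=1e-2):
    """Wiener-regularized deconvolution of a 2-D potential map.

    potential_map   : sampled (blurred) electrode potentials, 2-D array
    psf             : point spread function, same shape, peak at the centre
    noise_to_signal : regularization constant (assumed; tune per dataset)
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # blur transfer function
    Y = np.fft.fft2(potential_map)           # observed field
    # Damped inverse of H: behaves like 1/H where |H| is large, rolls off
    # where |H| is small relative to the noise level.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * Y))

# Illustrative forward model: two current sources blurred by a Gaussian PSF
# standing in for the true tissue-to-electrode transfer function.
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
true_density = np.zeros((n, n))
true_density[20, 20], true_density[40, 30] = 1.0, -1.0
observed = np.real(np.fft.ifft2(np.fft.fft2(true_density)
                                * np.fft.fft2(np.fft.ifftshift(psf))))
estimate = wiener_deconvolve(observed, psf)
```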
|
14 |
Non-stationary Iterative Time-Domain Deconvolution for Enhancing the Resolution of Shallow Seismic Data
Erhan Ergun, 13 August 2019
The resolution of near-surface seismic reflection data is often limited by attenuation and scattering in the shallow subsurface, which reduce the high frequencies in the data. Compensating for attenuation and scattering, as well as removing the propagating source wavelet in a time-variant manner, can be used to improve the resolution. Here we investigate continuous non-stationary iterative time-domain deconvolution (CNS-ITD), where the seismic wavelet is allowed to vary along the seismic trace. The propagating seismic wavelet is then a combination of the source wavelet and the effects of attenuation and scattering, and can be estimated in a data-driven manner by performing a Gabor decomposition of the data. For each Gabor window, the autocorrelation is estimated and windowed about zero lag to estimate the propagating wavelet. Using the matrix-vector equations, the estimated propagating wavelets are assigned to the related columns of a seismic wavelet matrix, and these are then interpolated to the time location where the maximum of the envelope of the trace occurs within the iterative time-domain deconvolution. Advantages of using this data-driven, time-varying approach include not requiring prior knowledge of the attenuation and scattering structure and allowing for the sparse estimation of the reflectivity within the iterative deconvolution. We first apply CNS-ITD to synthetic data with a time-varying attenuation, where the method successfully identified the reflectors and increased the resolution of the data. We then applied CNS-ITD to two observed shallow seismic reflection datasets, where improved resolution was obtained.
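As a rough illustration of the wavelet-estimation and spike-subtraction steps described above, the sketch below estimates a zero-phase wavelet from the windowed autocorrelation of a Gabor-windowed segment and performs one iteration of time-domain deconvolution at the envelope maximum. The window widths, taper and normalization are illustrative assumptions; this is a sketch, not the thesis's CNS-ITD implementation, and the wavelet-matrix interpolation is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def local_wavelet(trace, center, gabor_width=25, half_len=20):
    """Zero-phase wavelet estimate from one Gabor window of the trace:
    apply a Gaussian (Gabor) window around `center`, take the autocorrelation
    of the windowed data, window it about zero lag and normalize. Window
    widths and taper are illustrative assumptions."""
    t = np.arange(len(trace))
    seg = trace * np.exp(-0.5 * ((t - center) / gabor_width) ** 2)
    ac = np.correlate(seg, seg, mode="full")
    mid = len(ac) // 2                                  # zero-lag index
    w = ac[mid - half_len: mid + half_len + 1] * np.hanning(2 * half_len + 1)
    return w / np.max(np.abs(w))

def itd_iteration(residual):
    """One iteration of iterative time-domain deconvolution: estimate the
    propagating wavelet near the envelope maximum, place a spike there and
    subtract the scaled wavelet from the residual."""
    env = np.abs(hilbert(residual))                     # trace envelope
    k = int(np.argmax(env))
    w = local_wavelet(residual, k)
    amp = residual[k]                                   # w has unit peak at zero lag
    half = len(w) // 2
    lo, hi = max(0, k - half), min(len(residual), k + half + 1)
    out = residual.copy()
    out[lo:hi] -= amp * w[half - (k - lo): half + (hi - k)]
    return k, amp, out
```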
|
15 |
Some mathematical studies in least square deconvolution of positron Doppler broadening spectra using Huber regularization
Woo, Kee-tsz (胡紀慈), January 2003
Master of Philosophy, Physics.
|
16 |
Sparseness-constrained seismic deconvolution with curvelets
Hennenfent, Gilles; Herrmann, Felix J.; Neelamani, Ramesh; January 2005
Continuity along reflectors in seismic images is exploited via the Curvelet representation to stabilize the inversion of the convolution operator. The Curvelet transform is a new multiscale transform that provides sparse representations for images comprising smooth objects separated by piece-wise smooth discontinuities (e.g., seismic images). Our iterative Curvelet-regularized deconvolution algorithm combines conjugate gradient-based inversion with noise regularization performed using non-linear Curvelet coefficient thresholding. The thresholding operation enhances the sparsity of the Curvelet representation. We show on a synthetic example that our algorithm provides improved resolution and continuity along reflectors, as well as reduced ringing, compared to the iterative Wiener-based deconvolution approach.
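The sketch below shows the general shape of such a thresholded iterative deconvolution. It uses an identity transform as a stand-in for the Curvelet analysis/synthesis pair (so sparsity is imposed directly on the reflectivity) and a plain gradient step in place of the conjugate-gradient inversion; the step size, threshold and iteration count are assumptions, not the authors' settings.

```python
import numpy as np

def thresholded_deconvolution(data, wavelet, n_iter=50, lam=0.05,
                              analysis=None, synthesis=None):
    """Iterative deconvolution with sparsity-promoting thresholding.

    `analysis`/`synthesis` stand in for the Curvelet transform pair; identity
    transforms keep the sketch self-contained. An odd-length wavelet is
    assumed so that 'same'-mode convolution and correlation are adjoint.
    """
    if analysis is None:
        analysis, synthesis = (lambda x: x), (lambda c: c)
    A = lambda m: np.convolve(m, wavelet, mode="same")         # forward (blur)
    At = lambda r: np.convolve(r, wavelet[::-1], mode="same")  # adjoint
    step = 1.0 / (np.sum(np.abs(wavelet)) ** 2)                # safe step: ||A|| <= ||w||_1
    model = np.zeros_like(data, dtype=float)
    for _ in range(n_iter):
        grad = At(A(model) - data)
        coeffs = analysis(model - step * grad)
        # Soft thresholding enhances sparsity of the transform coefficients.
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - step * lam, 0.0)
        model = synthesis(coeffs)
    return model
```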
|
17 |
Rolling element bearing fault diagnostics using the blind deconvolution technique
Karimi, Mahdi, January 2006
Bearing failure is one of the foremost causes of breakdown in rotating machinery. Such failure can be catastrophic and can result in costly downtime. Bearing condition monitoring has thus played an important role in machine maintenance. In condition monitoring, the observed signal at a measurement point is often corrupted by extraneous noise during the transmission process. It is important to detect incipient faults in advance before catastrophic failure occurs. In condition monitoring, the early detection of an incipient bearing fault signal is often made difficult due to its corruption by background vibration (noise). Numerous advanced signal processing techniques have been developed to detect defective bearing signals, but with varying degrees of success, because they require a high Signal to Noise Ratio (SNR) and the fault components need to be larger than the background noise. Vibration analyses in the time and frequency domains are commonly used to detect machinery failure, but these methods require a relatively high SNR. Hence, it is essential to minimize the noise component in the observed signal before post processing is conducted. In this research, the detection of rolling element bearing faults by vibration analysis is investigated. The expected time intervals between the impacts of faulty bearing component signals are analysed using the blind deconvolution technique, employed as a feature extraction technique to recover the source signal. Blind deconvolution refers to the process of learning the inverse of an unknown channel and applying it to the observed signal to recover the source signal of a damaged bearing. The estimation of the time period between impacts is improved by using the technique, which consequently provides a better approach to identifying a damaged bearing. The procedure to obtain the optimum inverse equalizer filter is addressed to provide the filter parameters for the blind deconvolution process. The efficiency and robustness of the proposed algorithm is assessed initially using different kinds of corrupting noises. The results show that the proposed algorithm works well with simulated corrupting periodic noises. This research also shows that blind deconvolution behaves as a notch filter to remove the noise components. This research involves the application of the blind deconvolution technique with optimum equalizer design for improving the SNR for the detection of damaged rolling element bearings. The filter length of the blind equalizer needs to be adjusted continuously due to different operating conditions, sizes and structures of the machines. To determine the optimum filter length, a simulation test was conducted with a pre-recorded bearing signal (source) corrupted with noise of varying magnitude. From the output, the modified Crest Factor (CF) and Arithmetic Mean (AM) of the recovered signal can be plotted versus the filter length. The optimum filter length can be selected by observation when the plot converges close to the pre-determined source feature value. The filter length is selected based on the CF and AM plots, and these values are stored in a training data set for determination of the optimum filter length using a neural network. A pre-trained neural network is designed to learn the behaviour of the system and target the optimum filter length. The performance of the blind deconvolution technique was assessed based on kurtosis values.
The capability of blind deconvolution with optimum filter length developed from the simulation studies was further applied on a bearing life test rig. In this research, lifetime testing is also conducted to gauge the performance of the blind deconvolution technique in detecting a growing potential failure of a new bearing which is eventually run to failure. Results from unseeded new bearing tests are different, because seeded defects have certain defect characteristic frequencies which can be used to track a specific damaged frequency component. In this test, the test bearing was set to operate continuously until failures occurred. The proposed technique was then applied to monitor the condition of the test bearing and a trend of the bearing life was established. The results revealed the superiority of the technique in identifying the periodic components of the bearing before the final breakdown of the test bearing. The results show that the proposed technique with optimum filter length does improve the SNR of the deconvolved signal and can be used for automatic feature extraction and fault classification. This technique has potential for use in machine diagnostics.
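As an illustration of the feature tracking described above, the sketch below computes a plain crest factor and kurtosis for a recovered signal and sweeps candidate filter lengths. The "modified" CF and AM definitions used in the thesis may differ from the plain versions shown here, and `deconvolve` is a placeholder for the blind deconvolution routine.

```python
import numpy as np

def crest_factor(x):
    """Peak amplitude over RMS; repetitive impacts from a damaged bearing
    raise this value relative to smooth-running background noise."""
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

def kurtosis(x):
    """Normalized fourth moment; about 3 for Gaussian noise and larger when
    impulsive fault components are present in the deconvolved signal."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def features_vs_filter_length(observed, deconvolve, lengths):
    """Sweep candidate equalizer lengths and record (length, CF, AM, kurtosis)
    of each recovered signal. `deconvolve(signal, L)` is a placeholder for the
    blind deconvolution routine; the AM here is the mean of the rectified
    signal, which may differ from the thesis's modified definition."""
    rows = []
    for L in lengths:
        rec = deconvolve(observed, L)
        rows.append((L, crest_factor(rec), np.mean(np.abs(rec)), kurtosis(rec)))
    return rows
```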
|
18 |
Blind Deconvolution Based on Constrained Marginalized Particle Filters
Maryan, Krzysztof S., 09 1900
This thesis presents a new approach to blind deconvolution algorithms. The proposed method is a combination of a classical blind deconvolution subspace method and a marginalized particle filter. It is shown that the new method provides better performance than just a marginalized particle filter, and better robustness than the classical subspace method. The properties of the new method make it a candidate for further exploration of its potential application in acoustic blind dereverberation. Master of Applied Science (MASc) thesis.
|
19 |
Optimal filters for deconvolution of transient signals in the presence of noise
Bennia, Abdelhak, 16 September 2005
This dissertation presents different methods for the deconvolution of time domain signals. The techniques developed in this work are frequency domain filtering techniques, and are suitable for the type of deconvolution problems encountered in time domain reflectometry (TDR). They include a smoothing technique that is a variant of the well known lowpass filter. This technique is parameter dependent in order to allow for an adequate choice of cutoff frequency. Another, more powerful, method developed is an adaptive smoothing (regularization) technique, which is both frequency dependent and input-signal dependent. Thus, it is an adaptive technique whose performance depends on a parameter associated with its smoothing constraint.
These frequency domain techniques and their variants are parameter dependent; hence a parameter optimization criterion must be included. However, in deriving an optimization criterion, great importance must be given to its adequacy in determining the appropriate parameter value as well as to its time efficiency. A parameter optimization method that fulfills these two requirements is also developed. The method is fully implemented in the frequency domain in which the filtering techniques are used.
The techniques developed are derived with a magnitude component only, i.e., non-causal. The limited derivation is due to the fact that we are usually interested in reducing only the noise level from the magnitude point of view. However, if we consider time domain measurements as an example, physical pulses and transients are causal functions of time, i.e., their values are zero before t = 0, the time at which they begin. Their measured waveform data are also causal. When deconvolution processing is applied to remove instrumentation errors and/or suppress the effects of noise, the non-causal deconvolution methods mentioned previously may introduce unacceptable errors. The conventional deconvolution is modified to ensure that causality is maintained in the deconvolution result.
The impulse response of an unknown system is recovered from time domain reflectometry data by implementing a method based on the homomorphic deconvolution technique. In time domain reflectometry, the waveform reflected by a line with several discontinuities is represented as the convolution of the reflection coefficient of the line and the input excitation of the line source. The reflection coefficient is generally a train of spikes (delta functions) when the discontinuities are resistive. However, this is not the case when the discontinuities are capacitive in nature. In this work, we attempt to show that the conventional frequency domain deconvolution techniques fail to provide good estimates when the waveform contains certain amounts of noise. Since it has been shown that homomorphic systems are useful in separating signals which have been combined through convolution, homomorphic filtering can then be applied to recover either the input excitation or the impulse response (reflection coefficient) of the network. Ph.D. dissertation.
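A generic frequency-domain regularized deconvolution of the kind discussed in the first two paragraphs can be sketched as below. The quadratic frequency weight and the value of gamma are illustrative assumptions, not the dissertation's optimized smoothing constraint or its causality-preserving modification, and the variable names in the commented call are hypothetical.

```python
import numpy as np

def regularized_deconvolution(y, x, gamma=1e-3, weight=None):
    """Frequency-domain deconvolution of a measured response y by a reference
    (input) waveform x with a smoothing/regularization term.

    Computes H(f) = Y(f) X*(f) / (|X(f)|^2 + gamma * |X|max^2 * w(f)). The
    default weight w(f) = f^2 penalizes high frequencies, giving the
    lowpass-like smoothing behaviour described above; gamma and w(f) are
    illustrative assumptions.
    """
    n = len(y)
    Y, X = np.fft.rfft(y, n), np.fft.rfft(x, n)
    f = np.fft.rfftfreq(n)                       # normalized frequency, 0..0.5
    w = weight(f) if weight is not None else f ** 2
    H = Y * np.conj(X) / (np.abs(X) ** 2 + gamma * np.max(np.abs(X)) ** 2 * w)
    return np.fft.irfft(H, n)

# Illustrative use with hypothetical TDR records (names are placeholders):
# h_est = regularized_deconvolution(reflected_waveform, incident_waveform)
```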
|
20 |
Self-correcting multi-channel Bussgang blind deconvolution using expectation maximization (EM) algorithm and feedback
Tang, Sze Ho, 15 January 2009
A Bussgang-based blind deconvolution algorithm called the self-correcting multi-channel Bussgang (SCMB) blind deconvolution algorithm was proposed. Unlike the original Bussgang blind deconvolution algorithm, where the probability density function (pdf) of the signal being recovered is assumed to be completely known, the proposed SCMB blind deconvolution algorithm relaxes this restriction by parameterizing the pdf with a Gaussian mixture model; the expectation maximization (EM) algorithm, an iterative maximum likelihood approach, is employed to estimate these parameters side by side with the estimation of the equalization filters of the original Bussgang blind deconvolution algorithm. A feedback loop is also designed to compensate for the effect of parameter estimation error on the estimation of the equalization filters. Applications of the SCMB blind deconvolution framework to binary image restoration, multi-pass synthetic aperture radar (SAR) autofocus and inverse synthetic aperture radar (ISAR) autofocus are explored, with good results.
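To show just the pdf-parameterization component described above, here is a minimal single EM iteration for a one-dimensional Gaussian mixture fitted to equalizer-output samples. The Bussgang equalizer update and the feedback loop are not shown, and the interface is an assumption rather than the thesis's implementation.

```python
import numpy as np

def gmm_em_step(samples, weights, means, variances):
    """One EM iteration for a one-dimensional Gaussian mixture model."""
    s = np.asarray(samples, dtype=float)[:, None]          # shape (N, 1)
    w, mu, var = (np.asarray(a, dtype=float) for a in (weights, means, variances))
    # E-step: responsibilities of each component for each sample, computed in
    # the log domain for numerical stability.
    log_p = np.log(w) - 0.5 * ((s - mu) ** 2 / var + np.log(2 * np.pi * var))
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means and variances.
    nk = resp.sum(axis=0)
    new_w = nk / len(s)
    new_mu = (resp * s).sum(axis=0) / nk
    new_var = (resp * (s - new_mu) ** 2).sum(axis=0) / nk
    return new_w, new_mu, new_var
```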
|