1 |
NOISE AWARE BAYESIAN PARAMETER ESTIMATION IN BIOPROCESSES: USING NEURAL NETWORK SURROGATE MODELS WITH NON-UNIFORM DATA SAMPLING. Weir, Lauren. January 2024.
This thesis demonstrates a parameter estimation technique for bioprocesses that utilizes
measurement noise in experimental data to determine credible intervals on parameter
estimates, information that is of potential use in prediction, robust control,
and optimization. To determine these estimates, the work implements Bayesian inference
using nested sampling, presenting an approach to develop neural network (NN)
based surrogate models. To address challenges associated with non-uniform sampling
of experimental measurements, an NN structure is proposed. The resultant surrogate
model is utilized within a nested sampling algorithm that samples possible parameter
values from the parameter space and uses the NN to calculate model output
for use in the likelihood function based on the joint probability distribution of the
noise of output variables. This method is illustrated against simulated data, then
with experimental data from a Sartorius fed-batch bioprocess. Results demonstrate
the feasibility of the proposed technique to enable rapid parameter estimation for
bioprocesses.

Thesis / Master of Applied Science (MASc)

Bioprocesses require models that can be developed quickly for rapid production of desired
pharmaceuticals. Parameter estimation is necessary for these models, especially
first principles models. Generating parameter estimates with confidence intervals is
important for model-based control. Challenges with parameter estimation that must
be addressed are the presence of non-uniform sampling and measurement noise in
experimental data. This thesis demonstrates a method of parameter estimation that
generates parameter estimates with credible intervals by incorporating measurement
noise in experimental data, while also employing a dynamic neural network surrogate
model that can process non-uniformly sampled data. The proposed technique
implements Bayesian inference using nested sampling and was tested against both
simulated and real experimental fed-batch data.
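
To make the likelihood construction concrete, the following is a minimal sketch (not code from the thesis) of the kind of noise-aware Gaussian log-likelihood described above. It assumes a diagonal (independent-output) measurement-noise covariance, and the surrogate callable `nn_surrogate` and all argument names are hypothetical placeholders:

```python
import numpy as np

def log_likelihood(params, t_obs, y_obs, noise_std, nn_surrogate):
    """Gaussian log-likelihood built from the joint distribution of the
    measurement noise, with model outputs supplied by an NN surrogate.

    params       : candidate parameter vector proposed by the nested sampler
    t_obs        : measurement times (may be non-uniformly spaced)
    y_obs        : observed outputs, shape (n_times, n_outputs)
    noise_std    : per-output measurement-noise std devs, shape (n_outputs,)
    nn_surrogate : callable(params, t_obs) -> (n_times, n_outputs); hypothetical
    """
    y_pred = nn_surrogate(params, t_obs)    # surrogate replaces the ODE solve
    resid = (y_obs - y_pred) / noise_std    # whiten residuals per output
    n = resid.size
    return (-0.5 * (np.sum(resid ** 2) + n * np.log(2.0 * np.pi))
            - y_obs.shape[0] * np.sum(np.log(noise_std)))
```

A nested sampler (for instance, a library such as dynesty) would evaluate this function at each live point drawn from the prior-constrained parameter space; the credible intervals on the parameter estimates then come from the resulting weighted posterior samples.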
|
2 |
Strategies for non-uniform rate sampling in digital control theory. Khan, Mohammad Samir. January 2010.
This thesis is about digital control theory and presents an account of methods for enabling and analysing intentional non-uniform sampling in discrete compensators. Most conventional control algorithms cause numerical problems when data is collected at sampling rates substantially higher than the dynamics of the equivalent continuous-time operation being implemented. This is of particular interest in applications of digital control, in which high sample rates are routinely dictated by system stability requirements rather than signal processing needs. Considerable recent progress in reducing sample-frequency requirements has been made through the use of non-uniform sampling schemes, so-called alias-free signal processing. The approach prompts the simplification of complex systems and consequently enhances the numerical conditioning of the implementation algorithms that would otherwise require very high uniform sample rates. Such means of signal representation and analysis presents a variety of options and is therefore being researched and practised in a number of areas in communications. However, the control communities have not yet investigated the use of intentional non-uniform sampling, and hence the ethos of this research project is to investigate the effectiveness of such sampling regimes, in the context of exploiting their benefits.

Digital control systems exhibit bandwidth limitations enforced by their closed-loop frequency requirements, the calculation delays in the control algorithm, and the interfacing conversion times. These limitations introduce additional phase lags within the control loop that demand very high sample rates. Since non-uniform sampling is propitious in reducing the sample-frequency requirements of digital processing, it offers the prospect of achieving a higher control bandwidth without opting for very high uniform sample rates. The concept, to the author's knowledge, has not formally been studied, and very few definite answers exist in the control literature regarding the associated analysis techniques.

The key contributions of this thesis include the development and analysis of a control algorithm designed to accommodate intentional non-uniform sample frequencies. In addition, implementation aspects are presented on an 8-bit microcontroller and an FPGA board. The work begins by establishing a brief historical perspective on the use of non-uniform sampling and its role in digital processing. The study is then applied to the problem of digital control design, and applications are further discussed. This is followed by consideration of implementation aspects on standard hardware.
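
The numerical conditioning problem described above is easy to reproduce. The short sketch below (illustrative, not from the thesis) shows how the poles of a discretized compensator crowd toward z = 1 as the uniform sample rate grows far beyond the plant dynamics, which is precisely the regime that motivates intentional non-uniform sampling:

```python
import numpy as np

# A continuous-time pole at s = -a maps to a discrete pole z = exp(-a*T).
# As the sample rate rises (T -> 0), all such poles cluster near z = 1,
# so the discrete characteristic polynomial's coefficients become extremely
# sensitive and the dynamics grow numerically indistinguishable.
a = np.array([1.0, 2.0, 5.0])          # rad/s: modest plant dynamics
for T in (1e-1, 1e-3, 1e-5):           # increasingly fast uniform sampling
    z = np.exp(-a * T)                 # discrete-time pole locations
    spread = z.max() - z.min()         # how distinguishable the poles are
    print(f"T={T:g}:  poles={z}  spread={spread:.2e}")
```

Running this shows the pole spread collapsing by roughly the same factor as T, so in finite-precision arithmetic the three distinct continuous-time modes eventually look like one repeated pole at z = 1.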
|
3 |
Non-uniform sampling contributions in the context of cognitive radio (Contribution à l'étude de l'échantillonnage non uniforme dans le domaine de la radio intelligente). Traore, Samba. 9 December 2015.
We propose a new periodic non-uniform sampling scheme called the Système d'Échantillonnage Non Uniforme en Radio Intelligente (SENURI). The scheme detects the spectral location of the active bands within the total sampled band in order to reduce the average sampling frequency, the number of samples collected, and consequently the power consumption of the digital processing stage. The average sampling frequency of the SENURI depends only on the number of bands contained in the input signal x(t). It clearly outperforms, in terms of squared error, a classical periodic non-uniform sampling architecture with p branches when the spectrum of x(t) changes dynamically. / In this work we consider the problem of designing an effective sampling scheme for sparse multi-band signals. Based on previous results on periodic non-uniform sampling (multi-coset) and using the well-known non-uniform Fourier transform through Bartlett's method for power spectral density estimation, we propose a new sampling scheme named the Dynamic Single Branch Non-uniform Sampler (DSB-NUS). The idea of the proposed scheme is to reduce the average sampling frequency, the number of samples collected, and consequently the power consumption of the analog-to-digital converter (ADC). In addition, the proposed method detects the location of the bands in order to adapt the sampling rate. In this thesis, we show through simulation results that, compared to existing multi-coset-based samplers, our proposed sampler provides superior performance, both in terms of sampling rate and energy consumption. It is not constrained by the inflexibility of hardware circuitry and is easily reconfigurable. We also show the effect of false detection of active bands on the average sampling rate of our new adaptive non-uniform sub-Nyquist sampling scheme.
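
For readers unfamiliar with the starting point of this work, the sketch below illustrates generic periodic non-uniform (multi-coset) sampling: of every L slots on the underlying Nyquist grid, only a fixed subset of p offsets is kept, so the average rate is (p/L) of Nyquist. This is background only, not the SENURI/DSB-NUS algorithm itself, whose band detection and rate adaptation are the thesis's contribution:

```python
import numpy as np

def multicoset_times(t_nyq, L, cosets, n_blocks):
    """Sample times for a multi-coset scheme: from every block of L
    Nyquist-grid slots, keep only the offsets listed in `cosets`."""
    k = np.arange(n_blocks)[:, None] * L + np.asarray(cosets)[None, :]
    return np.sort(k.ravel()) * t_nyq    # periodic, non-uniform time pattern

# p = 3 samples kept out of every L = 8 slots: average rate is 3/8 of Nyquist.
times = multicoset_times(t_nyq=1e-6, L=8, cosets=(0, 2, 5), n_blocks=4)
print(times)
```

An adaptive sampler in the spirit of the thesis would change `cosets` (and hence p) at run time as active bands appear and disappear in the monitored spectrum.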
|
4 |
Mesh models of images, their generation, and their application in image scaling. Mostafavian, Ali. 22 January 2019.
Triangle-mesh modeling, one of the approaches for representing images based on nonuniform sampling, has become quite popular and beneficial in many applications. In this thesis, image representation using triangle-mesh models and its application in image scaling are studied, and two new methods, the SEMMG and MIS methods, are proposed, each solving a different problem. The SEMMG method addresses the problem of image representation by producing effective mesh models for representing grayscale images while minimizing squared error. The MIS method addresses the image-scaling problem for grayscale images that are approximately piecewise-smooth, using triangle-mesh models.
The SEMMG method, which is proposed for addressing the mesh-generation problem, is developed based on an earlier work, which uses a greedy-point-insertion (GPI) approach to generate a mesh model with explicit representation of discontinuities (ERD). After in-depth analyses of two existing methods for generating the ERD models, several weaknesses are identified and specifically addressed to improve the quality of the generated models, leading to the proposal of the SEMMG method. The performance of the SEMMG method is then evaluated by comparing the quality of the meshes it produces with those obtained by eight other competing methods, namely, the error-diffusion (ED) method of Yang, the modified Garland-Heckbert (MGH) method, the ERDED and ERDGPI methods of Tu and Adams, the Garcia-Vintimilla-Sappa (GVS) method, the hybrid wavelet triangulation (HWT) method of Phichet, the binary space partition (BSP) method of Sarkis, and the adaptive triangular meshes (ATM) method of Liu. For this evaluation, the error between the original and reconstructed images, obtained from each method under comparison, is measured in terms of the PSNR. Moreover, in the case of the competing methods whose implementations are available, the subjective quality is compared in addition to the PSNR. Evaluation results show that the reconstructed images obtained from the SEMMG method are better than those obtained by the competing methods in terms of both PSNR and subjective quality. More specifically, in the case of the methods with implementations, the results collected from 350 test cases show that the SEMMG method outperforms the ED, MGH, ERDED, and ERDGPI schemes in approximately 100%, 89%, 99%, and 85% of cases, respectively. Moreover, in the case of the methods without implementations, we show that the PSNR of the reconstructed images produced by the SEMMG method are on average 3.85, 0.75, 2, and 1.10 dB higher than those obtained by the GVS, HWT, BSP, and ATM methods, respectively. Furthermore, for a given PSNR, the SEMMG method is shown to produce much smaller meshes compared to those obtained by the GVS and BSP methods, with approximately 65% to 80% fewer vertices and 10% to 60% fewer triangles, respectively. Therefore, the SEMMG method is shown to be capable of producing triangular meshes of higher quality and smaller sizes (i.e., number of vertices or triangles) which can be effectively used for image representation.
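
For orientation, the greedy-point-insertion (GPI) loop that the SEMMG method builds on can be sketched as follows. This is a generic GPI mesh generator under simplifying assumptions (absolute-error point selection, no explicit representation of discontinuities); it omits all of SEMMG's actual improvements:

```python
import numpy as np
from matplotlib.tri import Triangulation, LinearTriInterpolator

def gpi_mesh(img, n_points):
    """Greedy point insertion: grow a triangle mesh approximating img.

    The reconstruction is the piecewise-linear interpolant over the
    Delaunay triangulation of the selected points.  Generic GPI only;
    none of SEMMG's error-diffusion or edge handling is included.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pts = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]  # corner seed
    for _ in range(n_points - 4):
        px = np.array(pts, dtype=float)
        tri = Triangulation(px[:, 0], px[:, 1])        # Delaunay by default
        z = img[px[:, 1].astype(int), px[:, 0].astype(int)]
        recon = LinearTriInterpolator(tri, z)(xs, ys)  # masked outside hull
        err = np.abs(np.ma.filled(recon, 0.0) - img)   # per-pixel error map
        err[px[:, 1].astype(int), px[:, 0].astype(int)] = 0  # skip used pixels
        y, x = np.unravel_index(np.argmax(err), err.shape)
        pts.append((int(x), int(y)))                   # insert the worst pixel
    return np.array(pts)
```

Each iteration re-triangulates the selected points, rasterizes the piecewise-linear interpolant, and inserts the pixel with the largest reconstruction error; this is the general scheme that the ERD/GPI methods discussed above refine.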
Besides the superior image approximations achieved with the SEMMG method, this work also makes contributions by addressing the problem of image scaling. For this purpose, the application of triangle-mesh models in image scaling is studied. Some of the mesh-based image-scaling approaches proposed to date employ mesh models associated with an approximating function that is continuous everywhere, which inevitably yields edge blurring in the process of image scaling. Other mesh-based approaches that employ approximating functions with discontinuities are often based on mesh simplification, where the method starts with an extremely large initial mesh, leading to very slow mesh generation with high memory cost. In this thesis, however, we propose a new mesh-based image-scaling (MIS) method which, firstly, employs an approximating function with selected discontinuities to better maintain sharpness at edges. Secondly, unlike most other discontinuity-preserving mesh-based methods, the proposed MIS method is not based on mesh simplification. Instead, it employs a mesh-refinement scheme, starting from a very simple mesh and iteratively refining it until it reaches a desirable size. In developing the MIS method, the performance of the SEMMG method, which was proposed for image representation, is examined in the application of image scaling. Although the SEMMG method is not designed for solving the image-scaling problem, examining its performance in this application helps to better understand the potential shortcomings of using a mesh generator in image scaling. Through this examination, several shortcomings are found, and different techniques are devised to address them. By applying these techniques, a new effective mesh-generation method called MISMG is developed that can be used for image scaling. The MISMG method is then combined with a scaling transformation and a subdivision-based model-rasterization algorithm, yielding the proposed MIS method for scaling grayscale images that are approximately piecewise-smooth. The performance of the MIS method is then evaluated by comparing the quality of the scaled images it produces with those obtained from five well-known raster-based methods, namely, bilinear interpolation, the bicubic interpolation of Keys, the directional cubic convolution interpolation (DCCI) method of Zhou et al., the new edge-directed image interpolation (NEDI) method of Li and Orchard, and the recent method of super-resolution using convolutional neural networks (SRCNN) by Dong et al. Since the main goal is to produce scaled images of higher subjective quality with the least amount of edge blurring, the quality of the scaled images is first compared through a subjective evaluation, followed by objective evaluations. The results of the subjective evaluation show that the proposed MIS method was ranked best overall in almost 67% of the cases, with the best average rank of 2 out of 6, among 380 collected rankings with 20 images and 19 participants. Moreover, visual inspection of the scaled images obtained with different methods shows that the proposed MIS method produces scaled images of better quality, with more accurate and sharper edges. Furthermore, in the case of the mesh-based image-scaling methods for which no implementation is available, the MIS method is conceptually compared, using theoretical analysis, to two mesh-based methods, namely, the subdivision-based image-representation (SBIR) method of Liao et al.
and the curvilinear feature driven image-representation (CFDIR) method of Zhou et al.
|
6 |
New Theoretical and Computational Methods for the Collection and Interpretation of Biomolecular Nuclear Magnetic Resonance Data. Jameson, Gregory Thomas. 23 September 2022.
No description available.
|
7 |
A New Beamforming Approach Using 60 GHz Antenna Arrays for Multi-Beams 5G Applications. Al-Sadoon, M.A.G., Patwary, M.N., Zahedi, Y., Ojaroudi Parchin, Naser, Aldelemy, Ahmad, Abd-Alhameed, Raed. 26 May 2022.
Recent studies have centred on new solutions, at different elements and stages of the network, to the increasing energy and data-rate demands of the fifth generation and beyond (B5G). Based on a new, efficient digital beamforming approach for 5G wireless communication networks, this work offers a compact-size circular patch antenna operating at 60 GHz and covering a 4 GHz spectrum bandwidth. Massive Multiple Input Multiple Output (M-MIMO) and beamforming technology are used to build and simulate an active multiple-beam antenna system. Thirty-two-element linear and sixty-four-element planar antenna-array configurations are modelled and constructed to work as base stations for 5G mobile communication networks.

Furthermore, a new beamforming approach called the Projection Noise Correlation Matrix (PNCM) is presented to compute and optimise the feeding weights of the array elements. The key idea of the PNCM method is to sample a portion of the measured noise correlation matrix uniformly in order to provide the best representation of the entire measured matrix. The sampled data are then used to build a projected matrix via the pseudoinverse approach, in order to determine a best-fit solution for the system and prevent potential singularities caused by matrix inversion. PNCM is a low-complexity method: it avoids eigenvalue decomposition and computation of the entire matrix inverse, and it does not require including signal and interference correlation matrices in the weight-optimisation process.

The suggested approach is compared to three standard beamforming methods in an intensive Monte Carlo simulation to demonstrate its advantage. The experimental results reveal that the proposed method delivers the best Signal to Interference Ratio (SIR) augmentation among the compared beamformers.
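
The abstract gives only the outline of PNCM, so the following is a loose sketch of the three stated ingredients: uniform sampling of the noise correlation matrix, a pseudoinverse-based projection, and weights computed without full inversion or eigendecomposition. Every name here, and the exact weight formula, is a guess for illustration, not the authors' algorithm:

```python
import numpy as np

def pncm_weights(noise_snapshots, steering, step=4):
    """Loose sketch of the stated PNCM ingredients (hypothetical names).

    noise_snapshots : (n_snapshots, n_elements) noise-only array samples
    steering        : (n_elements,) steering vector of the desired beam
    step            : keep every `step`-th row of the correlation matrix
    """
    n = noise_snapshots.shape[1]
    R = noise_snapshots.conj().T @ noise_snapshots / len(noise_snapshots)
    rows = np.arange(0, n, step)        # uniform sampling of the matrix
    R_sub = R[rows, :]                  # a portion representing all of R
    # Pseudoinverse of the sub-matrix: a least-squares projection that
    # sidesteps eigendecomposition and full matrix inversion.
    w = np.linalg.pinv(R_sub) @ steering[rows]
    return w / (w.conj() @ steering)    # normalisation choice is a guess
```

The row-subsampling factor `step` trades complexity against how faithfully the sub-matrix represents the full measured correlation matrix, which appears to be the core of the method's low-complexity claim.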
|
8 |
Covering Problems via Structural ApproachesGrant, Elyot January 2011 (has links)
The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized--logarithmic approximability has been established, and no sublogarithmic approximation exists unless P=NP. However, the gap between real-world instances and the theoretical worst case is often immense--many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible.
In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide:
- An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC.
- A class of polynomial dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure.
- A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity.
- Applications of the above to various capacitated covering problems via linear programming strengthening and rounding.
In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
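
As a concrete baseline for the logarithmic approximability mentioned above, the classic greedy algorithm achieves an H(n) ≈ ln n approximation by always picking the set that covers the most uncovered elements; a standard implementation (illustrative, not from the thesis) follows:

```python
def greedy_set_cover(universe, sets):
    """Classic ln(n)-approximate set cover: repeatedly pick the set that
    covers the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:        # remaining elements are uncoverable
            raise ValueError("input sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# A cover of size 2 exists here; greedy finds it in this instance.
print(greedy_set_cover(range(6), [{0, 1, 2, 3}, {3, 4, 5}, {0, 2, 4}, {1, 3, 5}]))
```

The structural results in the thesis can be read as identifying when instances admit algorithms that beat this generic logarithmic guarantee, down to constant factors or exact polynomial-time solvability.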
|
10 |
Sparse spectral analysis of irregularly sampled signals: application to the vibration analysis of turbomachine blades from tip-timing signals (Estimation spectrale parcimonieuse de signaux à échantillonnage irrégulier : application à l'analyse vibratoire d'aubes de turbomachines à partir de signaux tip-timing). Bouchain, Antoine. 25 April 2019.
As part of the certification of its helicopter engines, Safran Helicopter Engines performs operational tests in which the vibration responses of turbomachines (compressors and turbines) are measured. The vibration responses contain modes (spectral lines) whose frequencies and amplitudes must be characterized. The measurements are provided by the tip-timing technology, which observes the vibrations of all the blades of a rotating bladed assembly. However, tip-timing technology has two important features. Firstly, the sampling of the vibration signals is irregular and quasi-periodic. Secondly, the vibration frequencies are generally higher than the equivalent sampling frequency.
These two characteristics generate frequency-component artefacts in the spectra of the vibration signals, which strongly hinder the identification of the spectral content and thus disturb the interpretation of the blades' vibratory behaviour. The proposed new spectral-analysis method relies on sparse modelling of the tip-timing signals and accounts for variations of the rotational frequency. The spectral analysis of the signals is then performed by minimizing a linear least-squares criterion regularized by an l0-norm penalty, using the Block-OMP algorithm.

Using numerical results on synthetic signals, it is shown that this method provides good spectral-component estimation performance and achieves a significant reduction of artefacts. Accounting for the variations of the rotational frequency makes it possible to exploit long observation windows and thereby significantly reduce the frequency-component artefacts contained in the spectra. In addition, with slightly better performance than ESMV (an acknowledged method for tip-timing spectral analysis), the proposed method is about a hundred times faster.

Two cases of real data are studied. Through the detection of a blade crack, the first case shows that the proposed method is relevant and produces estimates comparable to those of industrial methods. The second case presents several simultaneous synchronous and asynchronous vibrations, highlighting the developed method's ability to reduce frequency-component artefacts and so simplify the interpretation of this signal's complex vibratory content.

The optimization of the positioning of the tip-timing probes is also studied, in order to simplify the identification of synchronous components. Numerical results demonstrate that moving the probes apart improves the amplitude estimation of this type of component.
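
To give the flavour of the l0-regularized least-squares fit described above, here is a minimal greedy sketch: orthogonal matching pursuit over a Fourier dictionary evaluated at the irregular sample times. It is plain OMP, not the thesis's Block-OMP, and it ignores the rotation-frequency variations that the actual method tracks:

```python
import numpy as np

def omp_spectrum(t, y, freqs, k):
    """Greedy sparse spectral fit on irregularly sampled data.

    t, y  : irregular sample times and (complex) sample values
    freqs : candidate frequency grid (Hz)
    k     : number of spectral components to keep (the l0 budget, k >= 1)
    """
    A = np.exp(2j * np.pi * t[:, None] * freqs[None, :])  # NUFT-style dictionary
    A /= np.linalg.norm(A, axis=0)                        # unit-norm atoms
    resid, support = y.astype(complex), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ resid)))    # best-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef                  # re-fit, then deflate
    return np.array(support), coef    # coefs are w.r.t. unit-norm atoms

# Usage: two tones sampled at 200 irregular instants.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 200))
y = np.exp(2j * np.pi * 37 * t) + 0.5 * np.exp(2j * np.pi * 91 * t)
sup, c = omp_spectrum(t, y, freqs=np.arange(0.0, 128.0), k=2)
print(sorted(sup))    # expect indices 37 and 91
```

Because the dictionary is evaluated at the actual (non-uniform) sample times, the fit is immune to the uniform-grid aliasing that ordinary FFT-based analysis would suffer at these sub-Nyquist rates, which is the property the thesis exploits.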
|