1

Uniform Sampling Methods for various Compact Spaces

O'Hagan, Sean 04 1900 (has links)
We look at methods to generate uniformly distributed points from the classical matrix groups, spheres, projective spaces, and Grassmannians. We motivate the discussion with a number of applications ranging from number theory to wireless communications. The uniformity of the samples and the efficiency of the algorithms are compared. / Thesis / Master of Science (MSc)
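
As a quick illustration of the classical recipes this thesis surveys, the sketch below (our own illustrative Python, not the thesis's code) draws a uniform point on the sphere S^{n-1} by normalizing a Gaussian vector, and a Haar-distributed orthogonal matrix by QR-factoring a Gaussian matrix with the standard sign correction on the diagonal of R.

```python
import numpy as np

def uniform_sphere(n, rng):
    """Uniform point on S^{n-1}: normalize a standard Gaussian vector."""
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

def haar_orthogonal(n, rng):
    """Haar-uniform matrix from O(n): QR of a Gaussian matrix, with each
    column of Q rescaled by the sign of the corresponding diagonal entry
    of R (Mezzadri's correction, which removes the bias of plain QR)."""
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
print(uniform_sphere(3, rng))
Q = haar_orthogonal(4, rng)
print(np.allclose(Q @ Q.T, np.eye(4)))  # True: Q is orthogonal
```
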
2

NOISE AWARE BAYESIAN PARAMETER ESTIMATION IN BIOPROCESSES: USING NEURAL NETWORK SURROGATE MODELS WITH NON-UNIFORM DATA SAMPLING

Weir, Lauren January 2024 (has links)
This thesis demonstrates a parameter estimation technique for bioprocesses that uses the measurement noise in experimental data to determine credible intervals on parameter estimates, information that is of potential use in prediction, robust control, and optimization. To determine these estimates, the work implements Bayesian inference using nested sampling and presents an approach for developing neural network (NN) based surrogate models. To address the challenges associated with non-uniform sampling of experimental measurements, an NN structure is proposed. The resultant surrogate model is used within a nested sampling algorithm that draws candidate parameter values from the parameter space and uses the NN to calculate the model output for the likelihood function, which is based on the joint probability distribution of the noise of the output variables. The method is illustrated against simulated data, then with experimental data from a Sartorius fed-batch bioprocess. Results demonstrate the feasibility of the proposed technique to enable rapid parameter estimation for bioprocesses. / Thesis / Master of Applied Science (MASc) / Bioprocesses require models that can be developed quickly for rapid production of desired pharmaceuticals. Parameter estimation is necessary for these models, especially first-principles models. Generating parameter estimates with confidence intervals is important for model-based control. Challenges with parameter estimation that must be addressed are the presence of non-uniform sampling and measurement noise in experimental data. This thesis demonstrates a method of parameter estimation that generates parameter estimates with credible intervals by incorporating the measurement noise in experimental data, while also employing a dynamic neural network surrogate model that can process non-uniformly sampled data. The proposed technique implements Bayesian inference using nested sampling and was tested against both simulated and real experimental fed-batch data.
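
The likelihood construction described in the abstract can be sketched as follows. This is a hedged illustration: the surrogate here is a hypothetical stand-in function (a logistic growth curve), not the thesis's trained network, and the noise variance is assumed known.

```python
import numpy as np

def surrogate(theta, t):
    """Hypothetical stand-in for the NN surrogate: logistic growth curve."""
    mu_max, K = theta
    return K / (1.0 + (K - 1.0) * np.exp(-mu_max * t))

def log_likelihood(theta, t_obs, y_obs, sigma2):
    """Gaussian log-likelihood with known noise variance sigma2; the
    observation times t_obs may be non-uniformly spaced, since the model
    is simply evaluated wherever the measurements were taken."""
    resid = y_obs - surrogate(theta, t_obs)
    return -0.5 * np.sum(resid**2 / sigma2 + np.log(2.0 * np.pi * sigma2))

# Non-uniformly spaced sampling times, as in fed-batch experiments.
t = np.array([0.0, 0.5, 1.5, 4.0, 9.0])
rng = np.random.default_rng(1)
y = surrogate((0.8, 10.0), t) + rng.normal(0.0, 0.1, t.size)

# A nested sampler (e.g. the `dynesty` package) would wrap this
# log-likelihood together with a prior transform over theta.
print(log_likelihood((0.8, 10.0), t, y, 0.1**2))
```
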
3

Strategies for non-uniform rate sampling in digital control theory

Khan, Mohammad Samir January 2010 (has links)
This thesis is about digital control theory and presents an account of methods for enabling and analysing intentional non-uniform sampling in discrete compensators. Most conventional control algorithms cause numerical problems when data is collected at sampling rates substantially higher than the dynamics of the equivalent continuous-time operation being implemented. This is of particular interest in applications of digital control, in which high sample rates are routinely dictated by system stability requirements rather than by signal processing needs. Considerable recent progress in reducing sample frequency requirements has been made through the use of non-uniform sampling schemes, so-called alias-free signal processing. The approach permits the simplification of complex systems and consequently enhances the numerical conditioning of implementation algorithms that would otherwise require very high uniform sample rates. Such means of signal representation and analysis offers a variety of options and is therefore being researched and practised in a number of areas in communications. The control communities, however, have not yet investigated the use of intentional non-uniform sampling, and hence the ethos of this research project is to investigate the effectiveness of such sampling regimes with a view to exploiting their benefits. Digital control systems exhibit bandwidth limitations enforced by their closed-loop frequency requirements, the calculation delays in the control algorithm, and the interfacing conversion times. These limitations introduce additional phase lags within the control loop that demand very high sample rates. Since non-uniform sampling can reduce the sample frequency requirements of digital processing, it offers the prospect of achieving a higher control bandwidth without resorting to very high uniform sample rates. The concept, to the author's knowledge, has not formally been studied, and very few definite answers exist in the control literature regarding the associated analysis techniques. The key contributions of this thesis include the development and analysis of a control algorithm designed to accommodate intentional non-uniform sample frequencies. In addition, implementation aspects are presented on an 8-bit microcontroller and an FPGA board. The work begins by establishing a brief historical perspective on the use of non-uniform sampling and its role in digital processing. The study is then applied to the problem of digital control design, and applications are further discussed. This is followed by consideration of implementation aspects on standard hardware.
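
To make the "alias-free" motivation concrete, the following sketch (our own assumptions and parameter values, not the thesis's algorithm) recovers a tone lying above half the average sampling rate from randomized (Poisson) sample times using the Lomb-Scargle periodogram; with uniform sampling at the same average rate, the tone would alias.

```python
import numpy as np
from scipy.signal import lombscargle

f_avg = 25.0     # average sampling rate in Hz (assumed for illustration)
f_tone = 40.0    # tone frequency, deliberately above f_avg / 2
rng = np.random.default_rng(2)

# Poisson (additive-random) sampling: exponential inter-sample intervals.
t = np.cumsum(rng.exponential(1.0 / f_avg, size=400))
x = np.sin(2 * np.pi * f_tone * t)

freqs_hz = np.linspace(1.0, 60.0, 600)
pgram = lombscargle(t, x, 2 * np.pi * freqs_hz)  # takes angular frequencies
print("peak near %.1f Hz" % freqs_hz[np.argmax(pgram)])  # ~40 Hz, unaliased
```
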
4

Non Uniform sampling contributions in the context of Cognitive Radio

Traore, Samba 09 December 2015 (has links)
We propose a new periodic non-uniform sampling scheme called the Non-Uniform Sampling System for Cognitive Radio (SENURI, from the French "Système d'Échantillonnage Non Uniforme en Radio Intelligente"). Our scheme detects the spectral location of the active bands within the total sampled band in order to reduce the average sampling frequency, the number of samples collected, and consequently the energy consumed by the digital processing stage. The average sampling frequency of the SENURI depends only on the number of bands contained in the input signal x(t). It clearly outperforms, in terms of squared error, a classical periodic non-uniform sampling architecture comprising p branches when the spectrum of x(t) changes dynamically. / In this work we consider the problem of designing an effective sampling scheme for sparse multi-band signals. Based on previous results on periodic non-uniform sampling (Multi-Coset) and using the well-known Non-Uniform Fourier Transform through Bartlett's method for Power Spectral Density estimation, we propose a new sampling scheme named the Dynamic Single Branch Non-uniform Sampler (DSB-NUS). The idea of the proposed scheme is to reduce the average sampling frequency, the number of samples collected, and consequently the power consumption of the Analog to Digital Converter (ADC). In addition, our proposed method detects the location of the bands in order to adapt the sampling rate. In this thesis, we show through simulation results that, compared to existing multi-coset based samplers, our proposed sampler provides superior performance, both in terms of sampling rate and energy consumption. It is not constrained by the inflexibility of hardware circuitry and is easily reconfigurable. We also show the effect of the false detection of active bands on the average sampling rate of our new adaptive non-uniform sub-Nyquist sampler scheme.
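
The periodic non-uniform (multi-coset) pattern that this work builds on is easy to state. The sketch below, with assumed example values for the Nyquist rate, block length L, and coset set C, simply enumerates the sampling instants t = (mL + c)T and reports the resulting average rate.

```python
import numpy as np

f_nyq = 1000.0          # Hz, Nyquist rate of the monitored band (assumed)
T = 1.0 / f_nyq         # Nyquist-grid spacing
L, C = 8, [0, 3, 5]     # block length and active cosets (p = 3 branches)

n_blocks = 4
# Sampling instants t = (m*L + c) * T for each block m and coset c.
t = np.sort(np.array([(m * L + c) * T for m in range(n_blocks) for c in C]))
print(t)
print("average rate: %.1f Hz (vs. %.1f Hz Nyquist)"
      % (len(C) / L * f_nyq, f_nyq))   # 375 Hz vs. 1000 Hz
```
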
5

Mesh models of images, their generation, and their application in image scaling

Mostafavian, Ali 22 January 2019 (has links)
Triangle-mesh modeling, as one of the approaches for representing images based on nonuniform sampling, has become quite popular and beneficial in many applications. In this thesis, image representation using triangle-mesh models and its application in image scaling are studied. Consequently, two new methods, namely, the SEMMG and MIS methods are proposed, where each solves a different problem. In particular, the SEMMG method is proposed to address the problem of image representation by producing effective mesh models that are used for representing grayscale images, by minimizing squared error. The MIS method is proposed to address the image-scaling problem for grayscale images that are approximately piecewise-smooth, using triangle-mesh models. The SEMMG method, which is proposed for addressing the mesh-generation problem, is developed based on an earlier work, which uses a greedy-point-insertion (GPI) approach to generate a mesh model with explicit representation of discontinuities (ERD). After in-depth analyses of two existing methods for generating the ERD models, several weaknesses are identified and specifically addressed to improve the quality of the generated models, leading to the proposal of the SEMMG method. The performance of the SEMMG method is then evaluated by comparing the quality of the meshes it produces with those obtained by eight other competing methods, namely, the error-diffusion (ED) method of Yang, the modified Garland-Heckbert (MGH) method, the ERDED and ERDGPI methods of Tu and Adams, the Garcia-Vintimilla-Sappa (GVS) method, the hybrid wavelet triangulation (HWT) method of Phichet, the binary space partition (BSP) method of Sarkis, and the adaptive triangular meshes (ATM) method of Liu. For this evaluation, the error between the original and reconstructed images, obtained from each method under comparison, is measured in terms of the PSNR. Moreover, in the case of the competing methods whose implementations are available, the subjective quality is compared in addition to the PSNR. Evaluation results show that the reconstructed images obtained from the SEMMG method are better than those obtained by the competing methods in terms of both PSNR and subjective quality. More specifically, in the case of the methods with implementations, the results collected from 350 test cases show that the SEMMG method outperforms the ED, MGH, ERDED, and ERDGPI schemes in approximately 100%, 89%, 99%, and 85% of cases, respectively. Moreover, in the case of the methods without implementations, we show that the PSNR of the reconstructed images produced by the SEMMG method are on average 3.85, 0.75, 2, and 1.10 dB higher than those obtained by the GVS, HWT, BSP, and ATM methods, respectively. Furthermore, for a given PSNR, the SEMMG method is shown to produce much smaller meshes compared to those obtained by the GVS and BSP methods, with approximately 65% to 80% fewer vertices and 10% to 60% fewer triangles, respectively. Therefore, the SEMMG method is shown to be capable of producing triangular meshes of higher quality and smaller sizes (i.e., number of vertices or triangles) which can be effectively used for image representation. Besides the superior image approximations achieved with the SEMMG method, this work also makes contributions by addressing the problem of image scaling. For this purpose, the application of triangle-mesh models in image scaling is studied.
Some of the mesh-based image-scaling approaches proposed to date employ mesh models that are associated with an approximating function that is continuous everywhere, which inevitably yields edge blurring in the process of image scaling. Moreover, other mesh-based image-scaling approaches that employ approximating functions with discontinuities are often based on mesh simplification, where the method starts with an extremely large initial mesh, leading to very slow mesh generation with high memory cost. In this thesis, however, we propose a new mesh-based image-scaling (MIS) method which firstly employs an approximating function with selected discontinuities to better maintain the sharpness at the edges. Secondly, unlike most of the other discontinuity-preserving mesh-based methods, the proposed MIS method is not based on mesh simplification. Instead, our MIS method employs a mesh-refinement scheme, where it starts from a very simple mesh and iteratively refines the mesh to reach a desirable size. For developing the MIS method, the performance of our SEMMG method, which is proposed for image representation, is examined in the application of image scaling. Although the SEMMG method is not designed for solving the problem of image scaling, examining its performance in this application helps to better understand potential shortcomings of using a mesh generator in image scaling. Through this examination, several shortcomings are found and different techniques are devised to address them. By applying these techniques, a new effective mesh-generation method called MISMG is developed that can be used for image scaling. The MISMG method is then combined with a scaling transformation and a subdivision-based model-rasterization algorithm, yielding the proposed MIS method for scaling grayscale images that are approximately piecewise-smooth. The performance of our MIS method is then evaluated by comparing the quality of the scaled images it produces with those obtained from five well-known raster-based methods, namely, bilinear interpolation, bicubic interpolation of Keys, the directional cubic convolution interpolation (DCCI) method of Zhou et al., the new edge-directed image interpolation (NEDI) method of Li and Orchard, and the recent method of super-resolution using convolutional neural networks (SRCNN) by Dong et al. Since our main goal is to produce scaled images of higher subjective quality with the least amount of edge blurring, the quality of the scaled images is first compared through a subjective evaluation followed by some objective evaluations. The results of the subjective evaluation show that the proposed MIS method was ranked best overall in almost 67% of the cases, with the best average rank of 2 out of 6, among 380 collected rankings with 20 images and 19 participants. Moreover, visual inspections on the scaled images obtained with different methods show that the proposed MIS method produces scaled images of better quality with more accurate and sharper edges. Furthermore, in the case of the mesh-based image-scaling methods, where no implementation is available, the MIS method is conceptually compared, using theoretical analysis, to two mesh-based methods, namely, the subdivision-based image-representation (SBIR) method of Liao et al. and the curvilinear feature driven image-representation (CFDIR) method of Zhou et al. / Graduate
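
The basic mesh-modelling idea behind this line of work can be illustrated with a toy sketch: approximate a grayscale image by linear interpolation over a Delaunay triangulation of a small set of sample points, and score the reconstruction with PSNR. This is our own simplified illustration, not the SEMMG/MIS algorithms themselves (no discontinuity handling, no greedy point insertion).

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(3)
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.sin(xx / 10.0) * np.cos(yy / 13.0)          # synthetic "image"

# Mesh vertices: the four corners plus random interior sample points.
pts = np.vstack([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1],
                 rng.integers(0, [h, w], size=(200, 2))])
vals = img[pts[:, 0], pts[:, 1]]

# Piecewise-linear reconstruction over the implicit Delaunay triangulation.
interp = LinearNDInterpolator(pts, vals)
recon = interp(np.column_stack([yy.ravel(), xx.ravel()])).reshape(h, w)

mse = np.nanmean((img - recon) ** 2)
peak = img.max() - img.min()
print("PSNR: %.1f dB" % (10 * np.log10(peak ** 2 / mse)))
```
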
6

New Theoretical and Computational Methods for the Collection and Interpretation of Biomolecular Nuclear Magnetic Resonance Data

Jameson, Gregory Thomas 23 September 2022 (has links)
No description available.
7

A New Beamforming Approach Using 60 GHz Antenna Arrays for Multi-Beams 5G Applications

Al-Sadoon, M.A.G., Patwary, M.N., Zahedi, Y., Ojaroudi Parchin, Naser, Aldelemy, Ahmad, Abd-Alhameed, Raed 26 May 2022 (has links)
Recent studies have centred on new solutions, across different elements and stages, to meet the increasing energy and data-rate demands of the fifth generation and beyond (B5G). Based on a new, efficient digital beamforming approach for 5G wireless communication networks, this work offers a compact-size circular patch antenna operating at 60 GHz and covering a 4 GHz spectrum bandwidth. Massive Multiple-Input Multiple-Output (M-MIMO) and beamforming technology are used to build and simulate an active multiple-beam antenna system. Thirty-two-element linear and sixty-four-element planar antenna array configurations are modelled and constructed to work as base stations for 5G mobile communication networks. Furthermore, a new beamforming approach called the Projection Noise Correlation Matrix (PNCM) is presented to compute and optimise the feed weights of the array elements. The key idea of the PNCM method is to uniformly sample a portion of the measured noise correlation matrix in order to provide the best representation of the entire measured matrix. The sampled data are then used to build a projected matrix via the pseudoinverse approach, in order to determine the best-fit solution for the system and to prevent potential singularities caused by the matrix inversion process. The PNCM is a low-complexity method, since it avoids eigenvalue decomposition and full matrix inversion, and does not require including signal and interference correlation matrices in the weight optimisation process. The suggested approach is compared with three standard beamforming methods in an intensive Monte Carlo simulation to demonstrate its advantage. The experimental results reveal that the proposed method delivers the best Signal to Interference Ratio (SIR) augmentation among the compared beamformers.
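
The PNCM details are the authors' own, but the abstract's core ingredients -- a linear-array steering vector, a sampled noise correlation matrix, and pseudoinverse-based weights -- can be sketched with a generic MVDR-style beamformer, w = R^+ a(theta), normalized to unit gain toward the desired beam. All parameter values below are illustrative assumptions.

```python
import numpy as np

def steering(theta_deg, n_elem, d=0.5):
    """ULA steering vector, element spacing d in wavelengths."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elem))

n = 32                                   # 32-element linear array (assumed)
rng = np.random.default_rng(4)

# Noise-plus-interference snapshots: one interferer at -20 deg plus noise.
sig = rng.standard_normal(500) + 1j * rng.standard_normal(500)
noise = 0.1 * (rng.standard_normal((n, 500)) + 1j * rng.standard_normal((n, 500)))
snaps = steering(-20.0, n)[:, None] * sig[None, :] + noise
R = snaps @ snaps.conj().T / 500         # sample noise correlation matrix

a = steering(10.0, n)                    # desired beam toward +10 deg
w = np.linalg.pinv(R) @ a                # pseudoinverse avoids singular R
w /= w.conj() @ a                        # unit gain toward the desired beam

print("gain at +10 deg:", abs(w.conj() @ steering(10.0, n)))   # ~1
print("gain at -20 deg:", abs(w.conj() @ steering(-20.0, n)))  # deeply nulled
```
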
8

Covering Problems via Structural Approaches

Grant, Elyot January 2011 (has links)
The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized: logarithmic approximability has been established, and no sublogarithmic approximation exists unless P=NP. However, the gap between real-world instances and the theoretical worst case is often immense; many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible. In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide:
- An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC.
- A class of polynomial-time dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure.
- A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity.
- Applications of the above to various capacitated covering problems via linear programming strengthening and rounding.
In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
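
For reference, the logarithmic approximability mentioned above comes from the classical greedy algorithm: repeatedly pick the set covering the most uncovered elements, which guarantees an H_n ≈ ln(n) approximation factor. A short sketch:

```python
def greedy_set_cover(universe, sets):
    """Return indices of chosen sets; assumes the sets jointly cover universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the most still-uncovered elements.
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, sets))   # e.g. [0, 2]
```
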