41

The Effect of Receiver Nonlinearity and Nonlinearity Induced Interference on the Performance of Amplitude Modulated Signals

Moore, Natalie 22 August 2018 (has links)
All wireless receivers have some degree of nonlinearity that can negatively impact performance. Two major effects of this nonlinearity are power compression, which leads to amplitude and phase distortions in the received signal, and desensitization caused by a high-powered interfering signal at an adjacent channel. As the RF spectrum becomes more crowded, the interference caused by these adjacent signals will become a more significant problem for receiver design. Therefore, having bit and symbol error rate expressions that take the receiver nonlinearity into account will allow the linearity requirements of a receiver to be determined. This thesis examines the modeling of the probability density functions of M-PAM and M-QAM signals through an AWGN channel, taking into account the impact of receiver nonlinearity. A change-of-variables technique is used to relate the pdf of these signals with a linear receiver to the pdf with a nonlinear receiver. Additionally, theoretical bit and symbol error rates are derived from the pdf expressions. Finally, this approach is extended by deriving pdf and error rate expressions for these signals when nearby blocking signals cause desensitization of the signal of interest. Matlab simulation shows that the derived expressions for a nonlinear receiver have the same accuracy as the accepted expressions for linear receivers. / Master of Science / All wireless receivers have some amount of nonlinearity that can distort a received signal and impact performance. For amplitude modulated signals, the power compression caused by the nonlinear receiver will cause distortions in the amplitude and phase of the received signal. Additionally, a high-powered interfering signal at a nearby frequency can decrease the gain and distort the received signal. This thesis examines how the probability density of an amplitude modulated signal with a nonlinear receiver can be modeled for both of these situations. These theoretical probability density functions are used to derive theoretical error rate expressions for the signals both with and without the adjacent channel interference. Simulations in Matlab show that the accuracy of these derived expressions is similar to that of the linear receiver expressions. These derived expressions can remove the need for time-consuming simulation when designing receivers for wireless systems.
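For reference, the change-of-variables relationship underlying the approach above, stated for a generic monotonic memoryless nonlinearity y = g(x) (a general identity; the thesis's specific receiver model is not reproduced here):

```latex
% pdf of the output of a monotonic memoryless nonlinearity y = g(x),
% given the pdf f_X seen by an ideal linear receiver:
f_Y(y) = f_X\!\left(g^{-1}(y)\right)\,\left|\frac{\mathrm{d}\,g^{-1}(y)}{\mathrm{d}y}\right|
```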
42

Analysis of diagnostic climate model cloud parameterisations using large-eddy simulations

Rosch, Jan, Heus, Thijs, Salzmann, Marc, Mülmenstädt, Johannes, Schlemmer, Linda, Quaas, Johannes 28 April 2016 (has links) (PDF)
Current climate models often predict fractional cloud cover on the basis of a diagnostic probability density function (PDF) describing the subgrid-scale variability of the total water specific humidity, qt, favouring schemes with limited complexity. Standard shapes are uniform or triangular PDFs, the width of which is assumed to scale with the grid-box mean qt or the grid-box mean saturation specific humidity, qs. In this study, the qt variability is analysed from large-eddy simulations for two stratocumulus, two shallow cumulus, and one deep convective case. We find that in most cases, triangles are a better approximation to the simulated PDFs than uniform distributions. In two of the 24 slices examined, the actual distributions were so strongly skewed that the simple symmetric shapes could not capture the PDF at all. The distribution width for either shape scales acceptably well with both the mean value of qt and qs, the former being a slightly better choice. The qt variance is underestimated by the fitted PDFs, but overestimated by the existing parameterisations. While the cloud fraction is in general relatively well diagnosed from fitted or parameterised uniform or triangular PDFs, the approach fails to capture cases with small partial cloudiness, and in 10–30% of the cases misdiagnoses clouds in clear skies or vice versa. The results suggest choosing a parameterisation with a triangular shape, where the distribution width scales with the grid-box mean qt using a scaling factor of 0.076. This, however, is subject to the caveat that the reference simulations examined here were partly for rather small domains and driven by idealised boundary conditions.
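As an illustration of the diagnostic step described above, the sketch below shows how cloud fraction can be read off a symmetric triangular qt-PDF whose half-width is set by the 0.076 scaling with the grid-box mean qt. The half-width interpretation of the scaling factor and the example humidity values are assumptions made for illustration, not values taken from the study.

```python
import numpy as np

def cloud_fraction_triangular(qt_mean, qs, scale=0.076):
    """Diagnose cloud fraction as the mass of a symmetric triangular
    qt-PDF lying above the saturation specific humidity qs.

    qt_mean : grid-box mean total-water specific humidity [kg/kg]
    qs      : grid-box mean saturation specific humidity [kg/kg]
    scale   : half-width of the PDF as a fraction of qt_mean (assumed)
    """
    a = qt_mean * (1.0 - scale)     # lower bound of the triangle
    b = qt_mean * (1.0 + scale)     # upper bound of the triangle
    c = qt_mean                     # mode (symmetric triangle)
    if qs <= a:
        return 1.0                  # entire grid box saturated
    if qs >= b:
        return 0.0                  # no part of the PDF exceeds saturation
    if qs <= c:                     # triangular CDF below the mode
        cdf = (qs - a) ** 2 / ((b - a) * (c - a))
    else:                           # triangular CDF above the mode
        cdf = 1.0 - (b - qs) ** 2 / ((b - a) * (b - c))
    return 1.0 - cdf

# Example: slightly sub-saturated grid box with partial cloudiness
print(cloud_fraction_triangular(qt_mean=8.0e-3, qs=8.3e-3))
```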
43

Uncertainty and sensitivity analysis of a materials test reactor / Mogomotsi Ignatius Modukanele

Modukanele, Mogomotsi Ignatius January 2013 (has links)
This study was based on the uncertainty and sensitivity analysis of a generic 10 MW Materials Test Reactor (MTR). In this study an uncertainty and sensitivity analysis methodology called code scaling, applicability and uncertainty (CSAU) was implemented. Although this methodology comprises 14 steps, only the following were carried out: scenario specification, nuclear power plant (NPP) selection, phenomena identification and ranking table (PIRT), selection of a frozen code, provision of code documentation, determination of code applicability, determination of code and experiment accuracy, NPP sensitivity analysis calculations, combination of biases and uncertainties, and determination of the total uncertainty for a specific scenario in a specific NPP. The thermal-hydraulic code Flownex® was used to model only the reactor core, in order to investigate the effects of the input parameters on the selected output parameters of the hot channel in the core. These output parameters were the mass flow rate, the temperature of the coolant, the outlet pressure, the centreline temperature of the fuel and the surface temperature of the cladding. The PIRT process was used in conjunction with the sensitivity analysis results to select the input parameters that significantly influenced the selected output parameters. The input parameters found to have the largest effect on the selected output parameters were the coolant flow channel width between the plates in the hot channel, the width of the fuel plates themselves in the hot channel, the heat generation in the fuel plate of the hot channel, the global mass flow rate, the global coolant inlet temperature, the coolant flow channel width between the plates in the cold channel, and the width of the fuel plates in the cold channel. The uncertainties of the input parameters were then propagated in Flownex using the Monte Carlo based uncertainty analysis function. From these results, the corresponding probability density function (PDF) of each selected output parameter was constructed. These functions were found to follow a normal distribution. / MIng (Nuclear Engineering), North-West University, Potchefstroom Campus, 2014
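A minimal sketch of the Monte Carlo propagation step described above, with a toy surrogate standing in for the Flownex hot-channel model; the input distributions, the surrogate correlations, and all numerical values are placeholders rather than data from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_samples = 5000

# Hypothetical input uncertainties (placeholder means and standard deviations)
channel_width = rng.normal(2.2e-3, 0.05e-3, n_samples)    # m
heat_generation = rng.normal(3.0e5, 1.5e4, n_samples)     # W per plate
inlet_temp = rng.normal(38.0, 1.0, n_samples)             # degC
mass_flow = rng.normal(1.0, 0.03, n_samples)              # kg/s per channel

def surrogate_clad_temp(width, q, t_in, mdot):
    """Toy surrogate for the hot-channel cladding surface temperature;
    it stands in for a single Flownex core-channel run."""
    cp = 4180.0                        # J/(kg K), water
    dT_coolant = q / (mdot * cp)       # coolant heat-up along the channel
    h = 2.0e4 * (mdot / width) ** 0.8  # crude convective-coefficient scaling
    flux = q / 0.05                    # W/m^2 over an assumed plate area
    return t_in + dT_coolant + flux / h

t_clad = surrogate_clad_temp(channel_width, heat_generation, inlet_temp, mass_flow)

# Construct the output PDF and check how close it is to a normal distribution
mu, sigma = t_clad.mean(), t_clad.std(ddof=1)
stat, p_value = stats.normaltest(t_clad)
print(f"cladding temperature: {mu:.1f} +/- {sigma:.1f} degC, normality p = {p_value:.3f}")
```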
44

Maximum-likelihood kernel density estimation in high-dimensional feature spaces / C.M. van der Walt

Van der Walt, Christiaan Maarten January 2014 (has links)
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. Therefore we address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating non-parametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the maximum-likelihood (ML) leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation. / PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
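For illustration, the sketch below combines the two ingredients discussed above: Silverman's rule-of-thumb for bandwidth initialisation and a leave-one-out log-likelihood objective for a single isotropic Gaussian kernel bandwidth. It is a generic illustration of the idea, not the MLE estimators derived in the thesis, and the toy dataset is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def silverman_bandwidth(X):
    """Silverman rule-of-thumb bandwidth for a d-dimensional isotropic
    Gaussian kernel (Gaussian reference distribution assumed)."""
    n, d = X.shape
    sigma = X.std(axis=0, ddof=1).mean()
    return sigma * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

def loo_log_likelihood(h, X):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n, d = X.shape
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    log_k = -sq_dists / (2 * h ** 2) - d * np.log(h) - 0.5 * d * np.log(2 * np.pi)
    np.fill_diagonal(log_k, -np.inf)            # exclude each point from its own estimate
    log_f = np.logaddexp.reduce(log_k, axis=1) - np.log(n - 1)
    return log_f.sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                  # toy 10-dimensional dataset

h0 = silverman_bandwidth(X)                     # rule-of-thumb initialisation
res = minimize_scalar(lambda h: -loo_log_likelihood(h, X),
                      bounds=(0.25 * h0, 4.0 * h0), method="bounded")
print(f"Silverman h = {h0:.3f}, leave-one-out ML h = {res.x:.3f}")
```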
45

Multi-regime Turbulent Combustion Modeling using Large Eddy Simulation/ Probability Density Function

Shashank Satyanarayana Kashyap (6945575) 14 August 2019 (has links)
Combustion research is at the forefront of the development of clean and efficient IC engines, gas turbines, rocket propulsion systems, etc. With the advent of faster computers and parallel programming, computational studies of turbulent combustion are increasing rapidly. Many turbulent combustion models have previously been developed based on certain underlying assumptions. One of the major assumptions of these models is the regime they can be used for: either premixed or non-premixed combustion. In reality, however, combustion systems are multi-regime in nature, i.e., premixed and non-premixed modes co-exist. Thus, there is a need to develop multi-regime combustion models that closely follow the physics of combustion phenomena. Much of the previous modeling effort for multi-regime combustion has used flamelet-type models. The current study is the first to use the highly robust transported Probability Density Function (PDF) method coupled with Large Eddy Simulation (LES) to develop a multi-regime model. The model performance is tested for Sydney Flame L, a piloted methane-air turbulent flame. The concept of a flame index is used to detect the extent of the premixed and non-premixed combustion modes. The drawbacks of using the traditional flame index definition in the context of the PDF method are identified. Necessary refinements to this definition, based on the species gradient magnitudes, are proposed for the multi-regime model development. This results in a new model parameter, beta, which defines a gradient threshold for the calculation of the flame index. A parametric study is done to determine a suitable value for beta, and the multi-regime model performance is then assessed for Flame L by comparing it against the widely used non-premixed PDF model for three mixing models: Modified Curl (MCurl), Interaction by Exchange with the Mean (IEM) and Euclidean Minimum Spanning Trees (EMST). The multi-regime model shows a significant improvement in the prediction of mean scalar quantities compared to the non-premixed PDF model when the MCurl mixing model is used. Similar improvements are observed in the multi-regime model when the IEM and EMST mixing models are used. The results provide a foundation for further multi-regime model development using the PDF method.
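For reference, the sketch below computes a Takeno-style flame index from fuel and oxidiser mass-fraction gradients with a simple gradient-magnitude cut-off; the threshold handling and the beta value shown are illustrative assumptions, not the refined definition or the parameter value developed in the thesis.

```python
import numpy as np

def flame_index(grad_fuel, grad_oxid, beta=1e-3):
    """Takeno-style flame index from fuel and oxidiser mass-fraction gradients:
    positive values indicate premixed-like burning, negative values
    non-premixed-like burning.

    grad_fuel, grad_oxid : (..., 3) arrays of spatial gradients
    beta : illustrative gradient-magnitude threshold below which the local
           mode is left undetermined (an assumption, not the thesis value)
    """
    dot = np.sum(grad_fuel * grad_oxid, axis=-1)
    mag = np.linalg.norm(grad_fuel, axis=-1) * np.linalg.norm(grad_oxid, axis=-1)
    fi = np.where(mag > beta ** 2, dot / np.maximum(mag, 1e-30), np.nan)
    return fi   # NaN where the gradients are too weak to classify

# Toy example: aligned gradients (premixed-like) vs. opposed (non-premixed-like)
gf = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
go = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
print(flame_index(gf, go))   # -> [ 1. -1.]
```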
46

Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling

Manomaiphiboon, Kasemsan 30 March 2004 (has links)
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five model parameters (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling were included: an alternative technique for formulating joint probability density functions of velocity for atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Ito integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
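As an illustration of the class of model used above, the sketch below advances a one-dimensional Langevin-type Lagrangian particle model for homogeneous turbulence; the velocity scale, time scale, and reflection treatment are assumptions and do not represent the boundary-layer physics or parameters of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# One-dimensional Langevin model for particle vertical velocity:
#   dw = -(w / T_L) dt + sqrt(2 sigma_w^2 / T_L) dW
sigma_w = 0.5        # vertical velocity standard deviation [m/s] (assumed)
T_L = 100.0          # Lagrangian time scale [s] (assumed)
dt = 1.0             # time step [s]
n_particles = 10_000
n_steps = 600

w = rng.normal(0.0, sigma_w, n_particles)     # initial velocities
z = np.full(n_particles, 50.0)                # release height [m]

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w ** 2 / T_L) * dW
    z += w * dt
    below = z < 0.0                           # perfect reflection at the ground
    z[below] = -z[below]
    w[below] = -w[below]

# Crude ground-level concentration proxy: fraction of particles below 2 m
print("near-ground fraction:", np.mean(z < 2.0))
```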
47

Estimation of the probability density function of parameters derived from acoustic emission source signals

Γρενζελιάς, Αναστάσιος 25 June 2009 (has links)
In this diploma thesis, the subject was the estimation of the probability density function of parameters derived from acoustic emission source signals. In the theoretical part, the topics of greatest interest were Non-Destructive Testing and Acoustic Emission, together with their applications. The data that were processed fall into two categories: data that were supplied ready-made and data that were obtained through laboratory measurements. The expectation-maximization algorithm, which was studied theoretically, was used to process the experimental data and to extract the parameters of each signal. Having obtained the parameters, the signals were classified into categories according to the theory of pattern recognition. An appendix with the detailed results, together with the bibliography used, is provided at the end of the thesis.
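A minimal sketch of the expectation-maximization step described above, using scikit-learn's Gaussian mixture implementation of EM on placeholder acoustic-emission features; the feature set and the synthetic data are illustrative assumptions, not the signals processed in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Placeholder acoustic-emission features (e.g. amplitude, duration, counts);
# two synthetic populations stand in for the measured AE parameters.
burst_a = rng.normal([60.0, 0.5, 20.0], [5.0, 0.1, 4.0], size=(300, 3))
burst_b = rng.normal([80.0, 1.5, 60.0], [6.0, 0.3, 8.0], size=(200, 3))
features = np.vstack([burst_a, burst_b])

# Fit a Gaussian mixture by expectation-maximization, then read off the
# estimated probability density and the class memberships.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(features)

log_density = gmm.score_samples(features)     # log pdf of each signal's parameters
labels = gmm.predict(features)                # EM-based classification into categories
print("mixture weights:", np.round(gmm.weights_, 3))
print("first five labels:", labels[:5])
```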
48

Synthetic Aperture Sonar Micronavigation Using An Active Acoustic Beacon.

Pilbrow, Edward Neil January 2007 (has links)
Synthetic aperture sonar (SAS) technology has progressed rapidly over the past few years, with a number of commercial systems emerging. Such systems are typically based on an autonomous underwater vehicle platform containing multiple along-track receivers and an integrated inertial navigation system (INS) with Doppler velocity log aiding. While these systems produce excellent images, blurring due to INS integration errors and medium fluctuations continues to limit long-range, long-run image quality. This is particularly relevant in mine hunting, the main application for SAS, where it is critical to survey the greatest possible area in the shortest possible time, regardless of sea conditions. This thesis presents the simulation, design, construction, and sea trial results for a prototype "active beacon" and remote controller unit, to investigate the potential of such a device for estimating SAS platform motion and medium fluctuations. The beacon is deployed by hand in the area of interest and acts as an active point source, with real-time data uploading and control performed by radio link. Operation is tightly integrated with that of the Acoustics Research Group KiwiSAS towed SAS, producing one-way and two-way time-of-flight (TOF) data for every ping by detecting the sonar chirps, time-stamping their arrival using a GPS receiver, and replying at a different acoustic frequency after a fixed time delay. The high SNR of this reply signal, combined with the knowledge that it is produced by a single point source, provides advantages over passive point-like targets for SAS image processing. Stationary accuracies of < 2 mm RMS have been measured at ranges of up to 36 m. This high accuracy allowed the beacon to be used in a separate study to characterise the medium fluctuation statistics in Lyttelton Harbour, New Zealand, using an indoor dive pool as a control. Probability density functions were fitted to the data and then incorporated into SAS simulations to observe their effect on image quality. Results from recent sea trials in Lyttelton Harbour show that the beacon TOF data, when used in a narrowband motion compensation (MOCOMP) process, provided improvements to the quality of SAS images centred on frequencies of 30 kHz and 100 kHz. This prototype uses simple matched-filtering algorithms for detection; while these perform well under stationary conditions, fluctuations caused by the narrow sonar transmit beam pattern (BP) and the changing superposition of seabed multipath often cause dropouts and inaccurate detections during sea trials. An analysis is presented of the BP effects and of how the accuracy and robustness of the detection algorithms can be improved. Overcoming these problems reliably is difficult without dedicated large-scale testing facilities that allow conditions to be reproduced consistently.
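A minimal sketch of the matched-filter time-of-flight estimation described above, with an assumed chirp band, sample rate, delay, and noise level rather than the actual KiwiSAS or beacon parameters.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 192_000                      # sample rate [Hz] (assumed)
T = 0.01                          # chirp duration [s]
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=25_000, f1=35_000, t1=T, method="linear")   # ping near 30 kHz

# Simulate a received ping: a propagation delay plus additive noise
true_delay = 0.0240               # s (~36 m one way at ~1500 m/s sound speed)
rx = np.zeros(int(0.05 * fs))
start = int(true_delay * fs)
rx[start:start + tx.size] += 0.2 * tx
rx += 0.05 * np.random.default_rng(3).normal(size=rx.size)

# Matched filter: correlate the received signal with the transmitted replica
mf = correlate(rx, tx, mode="valid")
est_delay = np.argmax(np.abs(mf)) / fs
print(f"estimated one-way TOF: {est_delay * 1e3:.3f} ms "
      f"(range ~{est_delay * 1500:.1f} m)")
```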
