371 | Blind Received Signal Strength Difference Based Source Localization with System Parameter Error and Sensor Position Uncertainty
Lohrasbipeydeh, Hannan, 27 August 2014
Passive source localization in wireless sensor networks (WSNs) is an important field of research with numerous applications in signal processing and wireless communications.
One purpose of a WSN is to determine the position of a signal emitted
from a source. This position is estimated based on received noisy measurements from
sensors (anchor nodes) that are distributed over a geographical area. In most cases,
the sensor positions are assumed to be known exactly, which is not always reasonable.
Even if the sensor positions are measured initially, they can change over time.
Due to the sensitivity of source location estimation accuracy with respect to the
a priori sensor position information, the source location estimates obtained can vary
significantly regardless of the localization method used. Therefore, the sensor position
uncertainty should be considered to obtain accurate estimates. Among the many
localization approaches, signal strength based methods have the advantages of low
cost and simple implementation. The received signal energy depends mainly on the transmitted power and the path loss exponent, which are often unknown in practical scenarios.
In this dissertation, three received signal strength difference (RSSD) based methods
are presented to localize a source with unknown transmit power. A nonlinear RSSD-based model is formulated for systems perturbed by noise. First, an effective low-complexity constrained weighted least squares (CWLS) technique is derived to obtain a least squares initial estimate (LSIE) of the source location in the presence of sensor position uncertainty. This estimate is then improved using a computationally efficient Newton method. The Cramér-Rao lower bound (CRLB) is derived to determine the
effect of sensor location uncertainties on the source location estimate. Results are
presented which show that the proposed method achieves the CRLB when the signal
to noise ratio (SNR) is sufficiently high.
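The abstract does not give the CWLS equations, but the overall pipeline, RSS differences cancelling the unknown transmit power followed by a Newton-type refinement of an initial estimate, can be sketched generically. In the sketch below, scipy's nonlinear least-squares routine stands in for the CWLS-plus-Newton steps; the geometry, path-loss exponent, and noise level are invented toy values, not the thesis' setup.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
gamma = 3.0                                   # path-loss exponent (assumed known here)
sensors = rng.uniform(0, 100, size=(8, 2))    # nominal sensor positions (toy values)
source = np.array([40.0, 60.0])

# Noisy RSS in dB; the unknown transmit power is dropped because differencing cancels it.
d = np.linalg.norm(sensors - source, axis=1)
rss = -10 * gamma * np.log10(d) + 0.5 * rng.standard_normal(8)
rssd = rss[1:] - rss[0]                        # differences w.r.t. reference sensor 0

def residuals(x):
    dist = np.linalg.norm(sensors - x, axis=1)
    model = -10 * gamma * (np.log10(dist[1:]) - np.log10(dist[0]))
    return rssd - model

x0 = sensors.mean(axis=0)                      # crude start, standing in for the LSIE
est = least_squares(residuals, x0).x           # Newton-type iterative refinement
print(est)                                     # lands near (40, 60) at this noise level
```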
Least squares (LS) based methods are typically used to obtain the location estimate
that minimizes the data vector error instead of directly minimizing the unknown
parameter estimation error. This can result in poor performance, particularly in noisy
environments, due to bias and variance in the location estimate. Thus, an efficient two-stage estimator is proposed here. First, a minimax optimization problem is developed
to minimize the mean square error (MSE) of the proposed RSSD-based model.
Then semidefinite relaxation is employed to transform this nonconvex and nonlinear
problem into a convex optimization problem. This can be solved efficiently to obtain the optimal solution of the corresponding semidefinite programming (SDP) problem. Performance results are presented which confirm the efficiency of the proposed method
which achieves the CRLB.
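The thesis' exact minimax formulation is not reproduced in the abstract; the sketch below only illustrates the general lifting trick such relaxations rely on: the nonconvex constraint t = ||x||² is relaxed to the convex constraint t ≥ ||x||², encoded as a positive-semidefinite block matrix in cvxpy. The squared-range data standing in for the RSSD model, and all numbers, are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 100, size=(8, 2))
source = np.array([40.0, 60.0])
d2 = np.linalg.norm(sensors - source, axis=1) ** 2 * (1 + 0.05 * rng.standard_normal(8))

# Lift: Z = [[I, x], [x^T, t]] >= 0 relaxes the nonconvex t = ||x||^2 to t >= ||x||^2.
Z = cp.Variable((3, 3), symmetric=True)
x, t = Z[:2, 2], Z[2, 2]
constraints = [Z >> 0, Z[:2, :2] == np.eye(2)]

# Squared-range residuals: ||x - s_i||^2 = t - 2 s_i^T x + ||s_i||^2.
resid = t - 2 * (sensors @ x) + np.sum(sensors**2, axis=1) - d2
prob = cp.Problem(cp.Minimize(cp.sum_squares(resid)), constraints)
prob.solve()
print(x.value)  # relaxed SDP estimate, close to (40, 60) at this noise level
```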
Finally, an extended total least squares (ETLS) method is developed for blind
localization which considers perturbations in the system parameters as well as the
constraints imposed by the relation between the observation matrix and data vector.
The corresponding nonlinear and nonconvex RSSD-based localization problem is then
transformed to an ETLS problem with fewer constraints. This is transformed to a
convex semidefinite programming (SDP) problem using relaxation. The proposed
ETLS-SDP method is extended to the case with an unknown path loss exponent.
The mean squared error (MSE) and corresponding CRLB are derived as performance
benchmarks. Performance results are presented which show that the RSSD-based
ETLS-SDP method attains the CRLB for a sufficiently large SNR.
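As background to the ETLS development, here is a minimal plain total least squares sketch: unlike ordinary LS, TLS accounts for perturbations in the observation matrix as well as in the data vector, which is the starting point ETLS extends with the structural constraints mentioned above. The toy data are invented.

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the augmented matrix [A b]."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    return -V[:n, n] / V[n, n]   # right singular vector of the smallest singular value

# Toy comparison: both A and b are noisy, so TLS should beat ordinary LS on average.
rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0])
A = rng.standard_normal((50, 2))
b = (A + 0.1 * rng.standard_normal(A.shape)) @ x_true + 0.1 * rng.standard_normal(50)
print(tls(A, b), np.linalg.lstsq(A, b, rcond=None)[0])
```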
372 | Estimation and testing in location-scale families of distributions
Potgieter, Cornelis Jacobus, 11 October 2011
We consider two problems relating to location-scale families of distributions. Firstly, we consider methods of parameter estimation when two samples come from the same type of distribution but possibly differ in location and spread. Although there are estimation methods that are asymptotically efficient, our interest is in finding methods that also have good small-sample properties. Secondly, we consider tests for the hypothesis that two samples come from the same location-scale family. Both problems are addressed using methods based on empirical distribution functions and empirical characteristic functions.
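As a rough illustration of the empirical characteristic function (ECF) idea, not the thesis' estimator: if Y has the same distribution as mu + sigma * X, then (Y - mu)/sigma and X share a characteristic function, so (mu, sigma) can be estimated by matching ECFs over a grid of points. The grid, weights, and optimizer below are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def ecf(sample, t):
    # empirical characteristic function evaluated at each t
    return np.exp(1j * np.outer(t, sample)).mean(axis=1)

def estimate(x, y, t=np.linspace(0.1, 2.0, 20)):
    phi_x = ecf(x, t)
    def loss(theta):
        mu, log_sigma = theta
        z = (y - mu) / np.exp(log_sigma)       # rescale second sample
        return np.sum(np.abs(ecf(z, t) - phi_x) ** 2)
    res = minimize(loss, x0=[np.median(y) - np.median(x), 0.0], method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)

# Toy check: same t(5) family, shifted and scaled.
rng = np.random.default_rng(4)
x = rng.standard_t(df=5, size=300)
y = 2.0 + 1.5 * rng.standard_t(df=5, size=300)
print(estimate(x, y))  # roughly (2.0, 1.5)
```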
373 | Model Fitting for Electric Arc Furnace Refining
Rathaba, Letsane Paul, 10 June 2005
The dissertation forms part of an ongoing project for the modelling and eventual control of an electric arc furnace (EAF) process. The main motivation behind such a project is the potential benefit of automating a process that has largely been operator controlled, often with results that leave considerable room for improvement. Previous work in the project produced a generic model of the process; a later study concentrated on control of the EAF with economic factors taken into account. Simulation results from both studies clearly demonstrate the benefits that can accrue from successful implementation of process control. A major obstacle to practical implementation is the lack of a model proven to be an accurate depiction of the specific plant where control is to be applied, and the accuracy of any process model can only be verified against actual process data. Therein lies the raison d'être for this dissertation: to take the existing model from the simulation environment to the real process.

The main objective is to obtain a model that can mimic a selected set of process outputs. This is a classical problem of system identification (SID): select an appropriate model, then fit it to plant input/output data until the model response matches that of the plant under the same inputs (and initial conditions). The model fitting is carried out on an existing EAF model, primarily by estimating the model parameters for the EAF refining stage. The contribution of this dissertation is therefore a model that depicts the EAF refining stage with reasonable accuracy.

An important aspect of model fitting is experiment design: selecting the inputs and outputs that must be measured in order to estimate the desired parameters. This raises the question of identifiability: which parameters can be estimated from the available I/O data, and what additional data would be needed to estimate the rest. An analysis is carried out to determine which parameters are estimable from available data; for parameters that are not, recommendations are made about the additional measurements required. Additional modelling is carried out to adapt the model to the particular process, including the oxyfuel subsystem, the bath oxygen content, water cooling, and the effect of foaming on arc efficiency.
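The EAF model itself is not given in the abstract, so the sketch below shows only the generic fitting loop such work relies on: simulate a candidate ODE model, compare with measured outputs, adjust parameters. The one-parameter decarburisation law, oxygen profile, and all numbers are invented stand-ins, not the thesis' refining model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(k, t, u, c0):
    # assumed toy law: carbon removal rate proportional to oxygen injection u(t)
    rhs = lambda tt, c: -k * np.interp(tt, t, u) * c
    return solve_ivp(rhs, (t[0], t[-1]), [c0], t_eval=t).y[0]

t = np.linspace(0, 30, 61)                    # minutes
u = 0.5 + 0.3 * np.sin(0.2 * t)               # assumed oxygen-injection profile
rng = np.random.default_rng(5)
c_meas = simulate(0.08, t, u, c0=0.6) + 0.005 * rng.standard_normal(t.size)  # "plant" data

# Fit the rate constant k by matching simulated output to measured output.
fit = least_squares(lambda k: simulate(k[0], t, u, 0.6) - c_meas,
                    x0=[0.02], bounds=(0.0, 1.0))
print(fit.x)  # recovers k close to 0.08
```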
374 | Maximum likelihood parameter estimation in time series models using sequential Monte Carlo
Yildirim, Sinan, January 2013
Time series models are used to characterise uncertainty in many real-world dynamical phenomena. A time series model typically contains a static variable, called the parameter, which parametrises the joint law of the random variables involved in the definition of the model. When a time series model is to be fitted to sequentially observed data, it is essential to decide on the value of the parameter that best describes the data, a procedure generally called parameter estimation. This thesis comprises novel contributions to the methodology of parameter estimation in time series models. Our primary interest is online estimation, although batch estimation is also considered. The developed methods are based on batch and online versions of expectation-maximisation (EM) and gradient ascent, two widely used algorithms for maximum likelihood estimation (MLE). In the last two decades, the range of statistical models in which parameter estimation can be performed has been significantly extended by the development of Monte Carlo methods. We contribute to the field in a similar manner, namely by combining EM and gradient ascent algorithms with sequential Monte Carlo (SMC) techniques. The time series models we investigate are widely used in statistical and engineering applications.

The original work of this thesis is organised in Chapters 4 to 7. Chapter 4 contains an online EM algorithm using SMC for MLE in changepoint models, which are widely used to model heterogeneity in sequential data. In Chapter 5, we present batch and online EM algorithms using SMC for MLE in linear Gaussian multiple target tracking models. Chapter 6 contains a novel methodology for implementing MLE in a hidden Markov model whose observation densities are intractable. Finally, in Chapter 7 we formulate the nonnegative matrix factorisation problem as MLE in a specific hidden Markov model and propose online EM algorithms using SMC to perform the estimation.
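As a sketch of the SMC machinery underlying these algorithms, the following bootstrap particle filter returns an estimate of the log-likelihood for a toy scalar linear-Gaussian state-space model; maximising such an estimate over the parameters, by gradient ascent or within EM as described above, is the MLE strategy. The model, prior, and all numbers are illustrative assumptions.

```python
import numpy as np

# Toy model:  x_t = a x_{t-1} + sigma_v v_t,   y_t = x_t + sigma_e e_t.
def pf_loglik(y, a, sigma_v, sigma_e, n_particles=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)          # particles from an assumed prior
    ll = 0.0
    for yt in y:
        x = a * x + sigma_v * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((yt - x) / sigma_e) ** 2                  # Gaussian obs. density
        m = logw.max()
        w = np.exp(logw - m)
        # log p(y_t | y_{1:t-1}) estimated by the normalised mean weight
        ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_e**2)
        x = rng.choice(x, size=n_particles, p=w / w.sum())       # multinomial resampling
    return ll

# Simulate data and evaluate the likelihood at the true parameters.
rng = np.random.default_rng(6)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + 0.5 * rng.standard_normal()
    ys.append(x + 0.3 * rng.standard_normal())
print(pf_loglik(np.array(ys), a=0.9, sigma_v=0.5, sigma_e=0.3))
```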
375 | Methods for Non-invasive Trustworthy Estimation of Arterial Blood Pressure
Koohi, Iraj, January 2017
The trustworthiness of blood pressure (BP) readings acquired by oscillometric home-based monitoring systems is a challenging issue: patients, especially those who are obese or have cardiovascular diseases such as hypertension or atrial fibrillation, must still see a doctor for trusted measurements. Even with the most accurate monitors, one may get different readings if BP is repeatedly measured. Trusted BP readings are those measured with accurate devices under proper measurement conditions, and accurate monitors need an indicator to assure the trustworthiness of the measured BP. In this work, a novel algorithm called the Dynamic Threshold Algorithm (DTA) is proposed that calculates trusted boundaries for the measured systolic and diastolic pressures from the recorded oscillometric waveforms. The DTA determines a threshold from the subject's heart rate to locate the oscillometric pulse at the mean arterial pressure (PULSE_MAP) and uses the peak, trough, and pressure of the located pulse to calculate the trusted boundaries.

In terms of accuracy, a modeling approach is employed to estimate BP from a model of the arterial lumen area (ALA) oscillations in the diastolic region. The model requires a compliance parameter 'c' to estimate BP; to this end, a pre-developed linear regression model between 'c' and the corresponding amplitude ratio of PULSE_MAP is employed to evaluate 'c'. The proposed method then estimates BP by minimizing the differences between the peak and trough amplitudes of the actual and corresponding simulated waveforms.

The proposed DTA and ALA-based methods were tested on two datasets of healthy subjects and one dataset of sick subjects with cardiovascular diseases; results were validated against corresponding references and compared with two popular algorithms, maximum amplitude and maximum/minimum slope. Mean absolute error (MAE) and standard deviation of errors (STDE) are used to evaluate and compare the results. For healthy subjects, the MAE of the estimated systolic (SBP) and diastolic (DBP) blood pressures was improved by up to 57% and 57%, with STDE improvements of 55% and 62%, respectively. For sick subjects, the MAE was improved by up to 40% and 29%, with STDE improvements of 36% and 20% for SBP and DBP, respectively.
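For context, here is a sketch of the classical maximum amplitude algorithm used as a comparison baseline above: the cuff pressure at the peak of the oscillation envelope is taken as MAP, and SBP/DBP are read off where the envelope falls to fixed fractions of its maximum. The characteristic ratios (0.55 and 0.85 here), the deflation profile, and the envelope shape are typical illustrative assumptions, not values from the thesis.

```python
import numpy as np

def maa(cuff_pressure, envelope, sys_ratio=0.55, dia_ratio=0.85):
    """Maximum amplitude algorithm; assumes cuff_pressure decreases with index."""
    k = int(np.argmax(envelope))
    map_ = cuff_pressure[k]                 # MAP at the envelope peak
    peak = envelope[k]
    # SBP lies on the high-pressure side of the peak, DBP on the low-pressure side.
    above = np.where(envelope[:k] <= sys_ratio * peak)[0]
    below = np.where(envelope[k:] <= dia_ratio * peak)[0]
    sbp = cuff_pressure[above[-1]] if above.size else cuff_pressure[0]
    dbp = cuff_pressure[k + below[0]] if below.size else cuff_pressure[-1]
    return sbp, map_, dbp

# Toy deflation from 180 to 40 mmHg with a Gaussian-like envelope peaked near 95 mmHg.
p = np.linspace(180, 40, 300)
env = np.exp(-0.5 * ((p - 95) / 20) ** 2)
print(maa(p, env))  # roughly (117, 95, 84)
```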
376 | Local fault characterization in large scale systems
Ludinard, Romaric, 2 October 2014
The Internet is a global system of interconnected computer networks that carries many services consumed by users. Unfortunately, each element of this system may exhibit failures. A failure can be perceived by a variable number of users, depending on the location of the failure source. This thesis proposes a set of contributions aimed at determining, from the perspective of a user perceiving a failure, whether it is perceived by a small number of users (an isolated failure) or, in contrast, by a very large number of them (a massive failure). We first formalize failures in terms of their impact on the services consumed by users, and show that it is impossible for a user to determine with certainty whether a perceived failure is isolated or massive. Nevertheless, it is possible to determine with certainty, for each user, whether it perceives an isolated failure, a massive one, or whether this cannot be determined. This characterization is optimal and fully parallelizable. We then propose a self-organizing architecture for fault characterization: entities of the system organize themselves in a two-layered overlay that gathers together entities with similar perceptions, allowing the characterization to be carried out. Finally, a probabilistic analysis of the resilience of the second layer of this architecture to churn and malicious behavior completes the work.
377 | Research into order parameters and graphene dispersions in liquid crystal systems using Raman spectroscopy
Zhang, Zhaopeng, January 2015
Polarized Raman Spectroscopy (PRS) is one of the experimental methods that can be employed to deduce orientational order parameters, e.g. 〈P_200〉 and 〈P_400〉, in liquid crystals by fitting the experimental depolarization ratio curve. However, it has long been known that the order parameters deduced from different vibrational modes of the same sample differ. As a result, only certain vibrational modes can be reliably selected for analysis, limiting the application of PRS. Possible explanations are discussed in this thesis. The first considers a dipole tilt β_0, defined as the angle between the dipole vibrational direction and the molecular long axis. The second assumes different vibrational symmetries, i.e. cylindrical or elliptic cylindrical. Molecular biaxial order parameters are introduced in both explanations. A systematic check via calculation shows that a common set of order parameters (including molecular biaxial order parameters) can be obtained from different depolarization ratio curves when these explanations are taken into account. Both depolarization ratio curves also agree well with those obtained experimentally from the phenyl and cyano stretching modes. A supplementary discussion shows that, using the first explanation, the 〈P_400〉 value that appeared excessive in previous fits based on the cyano stretching mode is reduced (by 15% at β_0 = 15°, 〈P_402〉 = 0.0536).

PRS is also employed to analyse the order parameters in a bent-core system, using a molecular model with bend angle Ω and tilt angle β_0. The effects of each of the uniaxial and phase biaxial order parameters are considered. With the total Raman tensor generated as the sum of the Raman tensors from each arm, reasonable uniaxial order parameter values can be obtained from PRS without considering biaxial order parameters. These results agree well with those deduced from refractive index measurements, offering a new approach to the investigation of bent-core systems. However, it is also shown that introducing phase biaxial order parameters does not yield robust fits, ultimately leading to inaccurate fitted values.

Several liquid crystals (5CB, E7, HAT-6 and SSY) were examined in the search for graphene/graphene oxide dispersions in liquid crystal systems. Unfortunately, no stable dispersion was obtained with simple experimental techniques. A highlight, however, is the test of a lyotropic liquid crystal formed by a discotic molecule in NMP, suggesting a possible dispersion medium for graphene. Meanwhile, Raman spectroscopy reveals the interaction between liquid crystal molecules and graphene through the peak shifts of vibrational modes. The experimental results suggest a stronger interaction in E7 than in 5CB, while the absence of a shift in ZLI-1695 indicates the different effect of the rigid core. Further, the discotic liquid crystal HAT-6 shows a strong interaction with graphene. These facts lead to the conclusion that the interaction persists in graphene/liquid crystal dispersions, providing a guide for controlling and optimizing dispersion quality in future research.
378 | The embedding of complete bipartite graphs onto grids with a minimum grid cutwidth
Rocha, Mário, 1 January 2003
Algorithms are demonstrated for embedding complete bipartite graphs onto 2×n grids so that the minimum grid cutwidth is attained.
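A brute-force sketch of the quantity being minimised, under an assumed formalisation that counts vertical cuts only (the thesis' definition may differ in detail): place the vertices of K_{m,n} injectively onto the cells of a 2×cols grid, take the maximum number of edges crossing the gap between adjacent columns, and minimise over all placements. The search is exponential, so this is for tiny instances only.

```python
from itertools import permutations

def grid_cutwidth_k_bipartite(m, n):
    verts = [("a", i) for i in range(m)] + [("b", j) for j in range(n)]
    edges = [(("a", i), ("b", j)) for i in range(m) for j in range(n)]
    cols = (len(verts) + 1) // 2              # enough columns to hold all vertices
    if cols == 1:
        return 0
    cells = [(r, c) for r in range(2) for c in range(cols)]
    best = len(edges)
    for placing in permutations(cells, len(verts)):
        col = {v: placing[k][1] for k, v in enumerate(verts)}
        # cutwidth of this placement: worst vertical cut between columns c and c+1
        width = max(sum(1 for u, w in edges
                        if min(col[u], col[w]) <= c < max(col[u], col[w]))
                    for c in range(cols - 1))
        best = min(best, width)
    return best

print(grid_cutwidth_k_bipartite(2, 3))  # K_{2,3} on a 2x3 grid
```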
379 | The Hurst parameter and option pricing with fractional Brownian motion
Ostaszewicz, Anna Julia, 1 February 2013
In classical option pricing models it is assumed that the underlying stock price process follows a geometric Brownian motion. However, statistical analysis reveals persistence in the log-returns of some South African stocks, a property Brownian motion lacks. We suggest replacing Brownian motion with fractional Brownian motion, a Gaussian process governed by the Hurst parameter that allows autocorrelation in price returns to be modelled. Three fractional Black-Scholes (Black) models are investigated in which the underlying is assumed to follow a fractional Brownian motion. Using South African options on futures and warrant prices, these models are compared to the classical models.
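As a small illustration (path length, horizon, and seed are arbitrary): fractional Brownian motion with Hurst parameter H can be simulated exactly by Cholesky factorisation of its covariance, E[B_H(s)B_H(t)] = ½(s^{2H} + t^{2H} − |t − s|^{2H}); H > 0.5 gives the persistent, positively autocorrelated increments referred to above.

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Simulate one exact fBm path on (0, T] via Cholesky of its covariance matrix."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)              # omit t = 0, where B_H(0) = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)               # exact but O(n^3): fine for modest n
    return t, L @ rng.standard_normal(n)

t, path = fbm_cholesky(500, H=0.7)            # H > 0.5: persistent increments
```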
380 | Inverse problems and data assimilation methods applied on protein polymerisation
Armiento, Aurora, 13 January 2017
The aim of this PhD thesis is to set up a mathematical strategy to investigate the physical process of protein aggregation. The study of this largely unknown process is particularly important since it has been identified as a key feature of a wide class of incurable diseases, called amyloid diseases. Prion diseases belong to this class and are caused by the aggregation of a misfolded configuration of the prion protein. Our work contributes to the research on prion diseases by focusing on two kinds of aggregates: oligomers and fibrils.

Oligomers, which are suspected of being the most toxic aggregates, are studied in the first part of this thesis. We base our work on the analysis of two types of experimental data. On the one hand, we consider Static Light Scattering (SLS) data, which can be interpreted biologically as a measurement of the average oligomer size and mathematically as the second moment of the aggregate concentration. On the other hand, we consider oligomer size distribution data collected at several instants using Size Exclusion Chromatography (SEC). Our study leads to the important conclusion that at least two different types of oligomers are present. Moreover, we describe the interaction between these oligomers by proposing, for the first time, a two-species model, composed of a set of ODEs with the kinetic rates as parameters. The qualitative description provided by this model is coupled to the information contained in the noisy experimental SLS data in a data assimilation framework. By means of the extended Kalman filter method, we solve a nonlinear inverse problem, thereby estimating the kinetic coefficients associated with the experimental data. To validate the model we compared our estimates to the experimental SEC data, observing very good agreement between the two. Our characterisation of the oligomer species may lead to new strategies for designing a first targeted treatment for prion diseases.

The methodology applied to the study of oligomers can be seen as a first step in the analysis of fibrils. Due to the physical properties of these aggregates, fewer and less precise experiments can be performed, so a mathematical approach can provide a valuable contribution to their study. Our contribution is a general strategy to estimate the initial condition of a fibril system. Inspired by the Lifshitz-Slyozov theory, we describe this system by a transport equation coupled with an integral equation. The estimation makes use of empirical observations of the system; we consider the general case of observing a moment of order n. It is indeed possible to measure the first moment by Thioflavin T fluorescence or the second moment by SLS. We provide a theoretical and numerical solution of the initial-condition estimation problem in the linear case of a depolymerising system. In particular, for constant depolymerisation rates, we propose a kernel regularisation strategy, which provides a first characterisation of the estimate. For variable depolymerisation rates, we outline the variational data assimilation method 4D-Var. This method is more general and can easily be adapted to treat different problems. This inverse problem is particularly interesting since it can also be applied in other fields such as the cell cycle or dust formation.