81 |
Regularized Numerical Algorithms For Stable Parameter Estimation In Epidemiology And Implications For Forecasting. DeCamp, Linda. 08 August 2017.
When an emerging outbreak occurs, stable parameter estimation and reliable projections of future incidence using limited (early) data can play an important role in the optimal allocation of resources and in the development of effective public health intervention programs. However, the inverse parameter identification problem is ill-posed and cannot be solved with classical tools of computational mathematics. In this dissertation, various regularization methods are employed to incorporate stability in parameter estimation algorithms. The recovered parameters are then used to generate future incidence curves and to estimate the carrying capacity of the epidemic and the turning point of the outbreak.
For the nonlinear generalized Richards model of disease progression, we develop a novel iteratively regularized Gauss-Newton-type algorithm to reconstruct major characteristics of an emerging infection. This problem-oriented numerical scheme takes full advantage of a priori information available for our specific application in order to stabilize the iterative process. Another important aspect of our research is reliable estimation of the time-dependent transmission rate in a compartmental SEIR disease model. To that end, the ODE-constrained minimization problem is reduced to a linear Volterra integral equation of the first kind, and a combination of regularizing filters is employed to approximate the unknown transmission parameter in a stable manner. To justify our theoretical findings, extensive numerical experiments have been conducted with both synthetic and real data for various infectious diseases.
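As a rough illustration of the iteratively regularized Gauss-Newton idea, the sketch below fits the generalized Richards growth law dC/dt = r·C^p·(1 − (C/K)^a) to synthetic data. The forward-Euler integrator, the regularization schedule α_k = α₀q^k, the a priori parameter bounds, and every numeric value here are illustrative assumptions, not the dissertation's actual scheme.

```python
import numpy as np

def simulate(theta, t, c0=1.0, nsub=10):
    """Forward-Euler integration of the generalized Richards model
    dC/dt = r * C^p * (1 - (C/K)^a)."""
    r, p, K, a = theta
    h = (t[1] - t[0]) / nsub
    c = c0
    out = [c]
    for _ in range(len(t) - 1):
        for _ in range(nsub):
            c = max(c + h * r * c**p * (1.0 - (c / K)**a), 1e-9)
        out.append(c)
    return np.array(out)

def irgn(y, t, theta0, n_iter=15, alpha0=1.0, q=0.5):
    """Iteratively regularized Gauss-Newton: at step k, solve
    (J'J + a_k I) d = J'(y - F) - a_k (theta - theta0), a_k = alpha0*q^k,
    then clip to assumed a priori bounds to stabilize the iteration."""
    lo = np.array([1e-3, 0.5, 1.0, 0.5])    # assumed bounds on (r, p, K, a)
    hi = np.array([1.0, 1.5, 500.0, 2.0])
    theta = np.asarray(theta0, dtype=float).copy()
    prior = theta.copy()
    for k in range(n_iter):
        F = simulate(theta, t)
        J = np.empty((len(t), theta.size))
        for j in range(theta.size):         # finite-difference Jacobian
            step = 1e-6 * max(1.0, abs(theta[j]))
            tp = theta.copy()
            tp[j] += step
            J[:, j] = (simulate(tp, t) - F) / step
        alpha = alpha0 * q ** k
        A = J.T @ J + alpha * np.eye(theta.size)
        b = J.T @ (y - F) - alpha * (theta - prior)
        theta = np.clip(theta + np.linalg.solve(A, b), lo, hi)
    return theta

t = np.linspace(0.0, 30.0, 61)
theta_true = np.array([0.3, 1.0, 100.0, 1.0])   # r, p, K, a
y = simulate(theta_true, t)                      # synthetic cumulative data
theta0 = np.array([0.25, 1.0, 90.0, 1.0])        # first guess
theta_hat = irgn(y, t, theta0)
```

Note the two stabilizers: the decaying penalty pulls early iterates toward the prior guess, and the box constraints encode the kind of a priori information the abstract refers to.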
|
82 |
Markov processes in disease modelling: estimation and implementation. Marais, Christiaan Antonie. 15 September 2010.
There exists a need to estimate the potential financial, epidemiological and societal impact that diseases, and the treatment thereof, can have on society. Markov processes are often used to model diseases to estimate these quantities of interest, and they have an advantage over standard survival analysis techniques in that multiple events can be studied simultaneously. The theory of Markov processes is well established when the process parameters are known, but less of the literature has focused on the estimation of these transition parameters. This dissertation investigates and implements maximum likelihood estimators for Markov processes based on longitudinal data. The methods are described for processes observed such that all transitions are recorded exactly, processes whose state is recorded at equidistant or irregular time points, and processes for which each subject is observed at possibly different irregular time points. Methods for handling right censoring and estimating the effect of covariates on parameters are described. The estimation methods are implemented by simulating Markov processes and estimating the parameters from the simulated data so that the accuracy of the estimators can be investigated. We show that the estimators can provide accurate estimates of state prevalence if the process is stationary, even with relatively small sample sizes. Furthermore, we indicate that the estimators lack good accuracy in estimating the effect of covariates on parameters unless state transitions are recorded exactly. The methods are discussed with reference to the msm package for R, which is freely available and a popular tool for estimating and implementing Markov processes in disease modelling.
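The exactly observed case admits a particularly simple maximum likelihood estimator: each transition probability is the corresponding transition count divided by the number of visits to the source state. A minimal numpy sketch (the three-state model and all probabilities are invented for illustration, not taken from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state disease model (e.g. healthy / ill / recovered).
P_true = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.15, 0.80]])

# Simulate one long, exactly observed chain.
n_steps = 20000
states = np.empty(n_steps, dtype=int)
states[0] = 0
for k in range(1, n_steps):
    states[k] = rng.choice(3, p=P_true[states[k - 1]])

# MLE when every transition is recorded exactly:
# P_hat[i, j] = n_ij / n_i (transition counts over row totals).
counts = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

The panel-data settings the abstract mentions (equidistant or irregular observation times) require likelihoods built from matrix exponentials of a generator instead, which is what packages such as msm implement.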
Methods are mentioned for the treatment of aggregate data, diseases where the state of a patient is not known with complete certainty at every observation, and diseases where patient interaction plays a role. / Dissertation (MSc)--University of Pretoria, 2010. / Statistics
|
83 |
ON-LINE PARAMETER ESTIMATION AND ADAPTIVE CONTROL OF PERMANENT MAGNET SYNCHRONOUS MACHINES. Underwood, Samuel J. 17 May 2006.
No description available.
|
84 |
Parameter Estimation: Towards Data-Driven and Privacy Preserving Approaches. Lakshminarayanan, Braghadeesh. January 2024.
Parameter estimation is a pivotal task across various domains such as system identification, statistics, and machine learning. The literature presents numerous estimation procedures, many of which are backed by well-studied asymptotic properties. In the contemporary landscape, highly advanced digital twins (DTs) offer the capability to faithfully replicate real systems through proper tuning. Leveraging these DTs, data-driven estimators can alleviate challenges inherent in traditional methods, notably their computational cost and sensitivity to initializations. Furthermore, traditional estimators often rely on sensitive data, necessitating protective measures. In this thesis, we consider data-driven and privacy-preserving approaches to parameter estimation that overcome many of these challenges. The first part of the thesis delves into an exploration of modern data-driven estimation techniques, focusing on the two-stage (TS) approach. Operating under the paradigm of inverse supervised learning, the TS approach simulates numerous samples across parameter variations and employs supervised learning methods to predict parameter values. The first stage compresses the data into a smaller set of samples, and the second stage uses these samples to predict parameter values. The simplicity of the TS estimator underscores its interpretability, necessitating theoretical justification, which forms the core motivation for this thesis. We establish statistical frameworks for the TS estimator, yielding its Bayes and minimax versions, alongside developing an improved minimax TS variant that excels in computational efficiency and robustness to distributional shifts. Finally, we conduct an asymptotic analysis of the TS estimator. The second part of the thesis introduces an application of data-driven estimation methods, including the TS and neural-network-based approaches, to the design of tuning rules for PI controllers.
Leveraging synthetic datasets generated from DTs, we train machine learning algorithms to meta-learn tuning rules, streamlining the calibration process without manual intervention. In the final part of the thesis, we tackle scenarios where estimation procedures must handle sensitive data. Here, we introduce differential privacy constraints into the Bayes point estimation problem to protect sensitive information. Proposing a unified approach, we integrate the estimation problem and the differential privacy constraints into a single convex optimization objective, thereby optimizing the accuracy-privacy trade-off. In cases where both the observation and parameter spaces are finite, this approach reduces to a tractable linear program solvable with off-the-shelf solvers. In essence, this thesis endeavors to address computational and privacy concerns within the realm of parameter estimation.
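A minimal sketch of the two-stage idea, using an AR(1) process as a stand-in "digital twin": many (θ, data) pairs are simulated, each record is compressed to a few statistics (stage one), and a supervised map from statistics to θ is fitted (stage two). The model, the chosen features, and the linear second stage are all illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(theta, n=200):
    """Toy 'digital twin': AR(1) data y_t = theta*y_{t-1} + e_t."""
    y = np.zeros(n)
    e = rng.standard_normal(n)
    for k in range(1, n):
        y[k] = theta * y[k - 1] + e[k]
    return y

def features(y):
    """Stage 1: compress a record into a few summary statistics."""
    return np.array([1.0,
                     np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]),
                     np.var(y)])

# Stage 2: simulate across parameter variations and fit a supervised
# (here: linear least-squares) map from the statistics to theta.
thetas = rng.uniform(-0.8, 0.8, size=2000)
X = np.stack([features(simulate_ar1(th)) for th in thetas])
w = np.linalg.lstsq(X, thetas, rcond=None)[0]

# Apply the learned estimator to a fresh record.
theta_star = 0.5
theta_hat = features(simulate_ar1(theta_star)) @ w
```

Once trained, the estimator needs no iterative optimization or initialization at deployment time, which is the computational appeal the abstract describes.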
|
85 |
Method for Evaluating Changing Blood Perfusion. Sheng, Baoyi. 21 December 2023.
This thesis provides insight into methods for estimating blood perfusion, emphasizing the need for accurate modeling in dynamic physiological environments. The thesis critically examines conventional error function solutions used in steady-state or gradually changing blood flow scenarios, revealing their shortcomings in accurately reflecting more rapid changes in blood perfusion. To address this limitation, the study introduces a novel prediction model based on the finite-difference method (FDM), specifically designed to produce accurate results under varying blood perfusion conditions. A comparative analysis shows that the FDM-based model agrees with traditional error function methods under constant blood perfusion, establishing its validity under both steady and dynamic blood flow conditions. In addition, the study examines whether analytical solutions exist that are suitable for changing perfusion conditions; three alternative analytical estimation methods were explored, and each proved inadequately responsive to sudden changes in blood perfusion. Based on the respective advantages of the error function and FDM estimates, a combination of the two methods was developed. Utilizing the simplicity and efficiency of the error function, contact resistance, core temperature, and the initial blood perfusion are first estimated from the beginning of the data; the subsequent blood perfusion values are then predicted with the FDM, which responds effectively to changing perfusion. / Master of Science / Blood perfusion, the process of blood flowing through our body's tissues, is crucial for our health. It's like monitoring traffic flow on roads, which is especially important during rapid changes, such as during exercise or medical treatments.
Traditional methods for estimating blood perfusion, akin to older traffic monitoring techniques, struggle to keep up with these rapid changes. This research introduces a new approach, using a method often found in engineering and physics, called the finite-difference method (FDM), to create more accurate models of blood flow in various conditions. This study puts this new method to the test against the old standards. We discover that while both are effective under steady conditions, the FDM shines when blood flow changes quickly. We also examined three other methods, but they, too, fell short in these fast-changing scenarios. This work is more than just numbers and models; it's about potentially transforming how we understand and manage health. By combining the simplicity of traditional methods for initial blood flow estimates with the dynamic capabilities of the FDM, we're paving the way for more precise medical diagnostics and treatments.
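A hedged sketch of the kind of explicit finite-difference model involved, here a 1-D Pennes-type bioheat equation with a step change in perfusion; the geometry, material constants, boundary conditions, and perfusion schedule are all illustrative assumptions, not the thesis's model.

```python
import numpy as np

# Tissue and blood properties (illustrative values)
k = 0.5          # thermal conductivity, W/(m*K)
rho_c = 3.6e6    # tissue volumetric heat capacity, J/(m^3*K)
rho_c_b = 3.8e6  # blood volumetric heat capacity, J/(m^3*K)
T_a = 37.0       # arterial temperature, C

nx, L = 51, 0.02                 # 2 cm of tissue, 51 nodes
dx = L / (nx - 1)
dt = 0.05                        # time step, s
assert k * dt / (rho_c * dx * dx) < 0.5   # explicit-scheme stability limit

T = np.full(nx, 37.0)
T[0] = 25.0                      # cooled surface (fixed temperature)
for step in range(4000):
    # Step change in blood perfusion halfway through the run, 1/s
    w_b = 0.5e-3 if step < 2000 else 2.0e-3
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / (dx * dx)
    T[1:-1] += dt * (k * lap + w_b * rho_c_b * (T_a - T[1:-1])) / rho_c
    T[-1] = T[-2]                # insulated deep boundary
mid_temp = T[nx // 2]
```

Because the perfusion value enters the update at every time step, the FDM can represent an abrupt change in w_b directly, which is exactly what a steady-state error function solution cannot do.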
|
86 |
Parameter estimation and auto-calibration of the STREAM-C model. Sinha, Sumit. 07 May 2005.
The STREAM-C model is based on the same algorithm as the Steady Riverine Environmental Assessment Model (STREAM), a mathematical model for the dissolved oxygen (DO) distribution in freshwater streams used by the Mississippi Department of Environmental Quality (MDEQ). Typically, water quality models are calibrated manually. In cases where an objective criterion can be identified to quantify a successful calibration, auto-calibration may be preferable to the manual approach, and it is particularly applicable to relatively simple analytical models such as the steady-state STREAM-C model. Various techniques of parameter estimation were identified and/or developed, and the model was subjected to each of them. The parameter estimates obtained by the different techniques were tabulated and compared, and a final recommendation was made regarding a preferred parameter estimation technique leading to auto-calibration of the STREAM-C model.
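To illustrate what an objective calibration criterion can look like, the sketch below fits the two rate constants of a classical Streeter-Phelps DO sag curve, used here only as a generic stand-in for a steady-state DO model, by minimizing the sum of squared residuals over a parameter grid. The equation choice and all numbers are illustrative assumptions, not the STREAM-C algorithm.

```python
import numpy as np

def do_deficit(t, kd, kr, L0=10.0, D0=2.0):
    """Streeter-Phelps dissolved-oxygen sag: deficit D(t) for
    deoxygenation rate kd and reaeration rate kr (both 1/day)."""
    return (kd * L0 / (kr - kd)) * (np.exp(-kd * t) - np.exp(-kr * t)) \
        + D0 * np.exp(-kr * t)

rng = np.random.default_rng(2)
t = np.linspace(0.1, 8.0, 40)          # travel times, days
kd_true, kr_true = 0.5, 1.2
y = do_deficit(t, kd_true, kr_true) + 0.02 * rng.standard_normal(t.size)

# Objective criterion for a "successful calibration": the sum of
# squared residuals, minimized here by a brute-force parameter grid.
kd_grid = np.linspace(0.2, 0.9, 71)
kr_grid = np.linspace(0.95, 1.8, 86)   # kept above kd_grid to avoid kr == kd
sse = np.array([[np.sum((do_deficit(t, kd, kr) - y) ** 2)
                 for kr in kr_grid] for kd in kd_grid])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
kd_hat, kr_hat = kd_grid[i], kr_grid[j]
```

Any optimizer could replace the grid; the point is that once the objective is quantified, calibration no longer needs manual judgment.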
|
87 |
Poisson Approximation to Image Sensor Noise. Jin, Xiaodan. January 2010.
No description available.
|
88 |
A summary of confidence interval estimation of standard and certain non-centrality parameters. Hayslett, Homer T. (Homer Thornton). 10 June 2012.
In this thesis, confidence bounds on simple and more complex parameters are stated along with detailed computational procedures for finding these confidence bounds from the given data.
Confidence bounds on the more familiar parameters, i.e., μ, σ², μ₁ - μ₂, and σ₁²/σ₂², are briefly presented for the sake of completeness. The confidence statements for the less familiar parameters and combinations of parameters are treated in more detail.
In the cases of the non-centrality parameters of the non-central t², F, and χ² distributions, a variance-stabilizing transformation is used, a normal approximation is utilized, and confidence bounds are put on the parameter. In the non-central t² and non-central F distributions, iterative procedures are used to obtain confidence bounds on the non-centrality parameter, i.e., a first guess is made and then improved until the desired accuracy is obtained.
This procedure is unnecessary in the non-central χ² distribution, since the expressions for the upper and lower limits can be reduced to closed form.
Computational procedures and completely worked examples are included. / Master of Science
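The "first guess ... improved" idea can be illustrated with a modern stand-in: bisection on the non-centrality parameter of the F distribution, inverting the CDF to obtain a one-sided confidence bound. scipy's `ncf` is used in place of the thesis's normal-approximation computations, and the observed value and degrees of freedom are invented for the example.

```python
import numpy as np
from scipy.stats import ncf

def nc_upper_bound(f_obs, dfn, dfd, alpha=0.05, nc_max=200.0, tol=1e-8):
    """Upper confidence bound for the non-centrality parameter of F:
    the value of nc at which the observed statistic sits at the
    alpha-quantile, located by bisection (a first guess repeatedly
    improved until the desired accuracy is reached)."""
    lo, hi = 0.0, nc_max
    # The cdf is decreasing in nc, so the root is bracketed if:
    assert ncf.cdf(f_obs, dfn, dfd, hi) < alpha < ncf.cdf(f_obs, dfn, dfd, lo + 1e-12)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ncf.cdf(f_obs, dfn, dfd, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nc_up = nc_upper_bound(f_obs=4.0, dfn=3, dfd=20)
```

Monotonicity of the CDF in the non-centrality parameter is what guarantees the iteration converges to a unique bound.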
|
89 |
An improved algorithm for identification of time varying parameters using recursive digital techniques. Maloney, Bernard Christopher Patrick. January 1986.
Identification is the process of determining values for the characteristic quantities, called parameters, of a system. Examples of such quantities are mass, inductance, resistance, spring coefficient, gain, et cetera. The decreasing cost of digital processors and the versatility of digital programming make digital methods an attractive means of accomplishing identification. It is important, however, that an identifier be able to track any change in a parameter if its output is to be used in any predictive capacity, such as in an adaptive controller. Most studies of digital identification have avoided the topic of time variations by using batch processing methods that implicitly assume constant parameters; this thesis does not.
This thesis first investigates the parameter-tracking capabilities of a popular, real-time digital identification algorithm, the recursive weighted least squares method. This method is claimed to be able to track only slowly time-varying parameters. Based on the results of this study, a method of improving the accuracy of estimates of time-varying parameters is developed. This method, called conditioning, is a post-processor to the recursive weighted least squares algorithm. The results of tests of this method using three different plant simulations are presented, demonstrating the improved accuracy achieved by conditioning estimates of time-varying parameters. / M.S.
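A minimal sketch of recursive weighted least squares with an exponential forgetting factor, tracking a single gain that jumps mid-run; the plant, noise level, and forgetting factor are illustrative assumptions, not the thesis's conditioned estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

lam = 0.95          # forgetting factor: < 1 discounts old data
theta_hat = 0.0     # parameter estimate
P = 1000.0          # (scalar) covariance; large => little initial confidence
history = []
for k in range(400):
    theta_true = 2.0 if k < 200 else -1.0     # parameter jumps mid-run
    u = rng.standard_normal()                 # regressor (input sample)
    y = theta_true * u + 0.1 * rng.standard_normal()
    gain = P * u / (lam + u * P * u)
    theta_hat += gain * (y - u * theta_hat)   # correct by prediction error
    P = (P - gain * u * P) / lam              # covariance update
    history.append(theta_hat)
```

With lam = 1 this reduces to ordinary recursive least squares, which averages over all data and therefore responds sluggishly to the jump; forgetting shortens the effective memory, at the cost of noisier estimates, which is the trade-off post-processing schemes such as conditioning aim to improve.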
|
90 |
Intersection of B-spline surfaces by elimination method. Wong, Chee Kiang. 03 March 2009.
Parametric surface representations such as the B-spline and Bezier geometries are widely used among the aerospace, automobile, and shipbuilding industries. These surfaces have proven to be very advantageous for defining and combining primitive geometries to form complex models. However, the task of finding the intersection curve between two surfaces has remained a difficult one. Presently, most of the research done in this area has resulted in various subdivision techniques. These subdivision techniques are based on approximations of the surface using planar polygons. This thesis presents an analytical approach to the intersection problem. The approach taken is to approximate the B-spline surface using subsets such as the ruled surface. Once the B-spline surface has been simplified, elimination techniques which solve for the surface variables can be used to analytically determine the intersection curve between two B-spline surfaces. / Master of Science
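The fix-one-parameter-and-solve idea can be sketched numerically: fixing u on the first surface leaves the three equations S1(u, v) = S2(s, t) in three unknowns, solvable point-by-point along the intersection curve. The two patches below are trivial ruled surfaces chosen for illustration, and Newton iteration stands in for the thesis's analytical elimination.

```python
import numpy as np

def s1(u, v):
    """First ruled (here: planar) patch S1(u, v)."""
    return np.array([u, v, 0.0])

def s2(s, t):
    """Second ruled patch, a bilinear sheet z = s + t - 1."""
    return np.array([s, t, s + t - 1.0])

def curve_point(u, x0=(0.5, 0.5, 0.5), n_iter=20):
    """Fix u on S1 and solve S1(u, v) = S2(s, t) for (v, s, t) by
    Newton iteration with a finite-difference Jacobian, tracing one
    point of the intersection curve."""
    x = np.array(x0, dtype=float)
    f = lambda xx: s1(u, xx[0]) - s2(xx[1], xx[2])
    for _ in range(n_iter):
        r = f(x)
        J = np.empty((3, 3))
        for j in range(3):
            xp = x.copy()
            xp[j] += 1e-7
            J[:, j] = (f(xp) - r) / 1e-7
        x = x - np.linalg.solve(J, r)
    return x

v, s, t = curve_point(0.3)
```

Sweeping u and solving at each value traces the full intersection curve; for genuinely bilinear patches the same system can also be reduced by algebraic elimination to a polynomial in one variable, which is the analytical route the thesis pursues.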
|