231.
An evaluation of the moving horizon estimation algorithm for online estimation of battery state of charge and state of health. Bibin Nataraja, Pattel. January 2014.
Indiana University-Purdue University Indianapolis (IUPUI) / Moving Horizon Estimation (MHE) is a powerful technique for tackling state estimation problems in dynamic systems subject to constraints, nonlinearities, disturbances, and measurement noise. In this work, the Moving Horizon Estimation approach is applied to estimating the State of Charge (SOC) and State of Health (SOH) of a battery, and the results are compared against those of the traditional Extended Kalman Filter (EKF). The comparison shows that MHE improves on the EKF with respect to differing initial state conditions, convergence time, and process and sensor noise variations. An equivalent circuit battery model is used to capture the dynamics of the battery states, and experimental data are used to identify the parameters of the battery model. The MHE-based state estimation technique is applied to estimate the states of the battery model under various estimator initial conditions and process and measurement noise levels, and the results are compared against the traditional EKF-based estimation method. Both experimental data and simulations are used to evaluate the performance of the MHE. The results show that MHE performs better than EKF estimation even with an unknown initial estimator state, converges faster to the actual states, and is robust to measurement and process noise.
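The sketch below illustrates the horizon-limited least-squares idea behind MHE for SOC tracking. It assumes a simple open-circuit-voltage-plus-ohmic-resistance cell with made-up capacity, resistance, noise levels, weights, and current profile; the thesis's identified equivalent circuit model and SOH dynamics are not reproduced here.

```python
# Minimal moving-horizon SOC estimator sketch. The cell capacity, ohmic
# resistance, OCV curve, noise levels, weights, and current profile are all
# hypothetical placeholders, not the thesis's identified battery model.
import numpy as np
from scipy.optimize import minimize

Q_AS = 2.3 * 3600.0        # assumed cell capacity [A*s]
R0 = 0.01                  # assumed ohmic resistance [ohm]
DT = 1.0                   # sample time [s]

def ocv(soc):
    # Assumed open-circuit-voltage curve (placeholder polynomial).
    return 3.0 + 1.2 * soc - 0.5 * soc**2 + 0.4 * soc**3

def simulate(soc0, current, q_std=1e-4, r_std=5e-3, seed=0):
    rng = np.random.default_rng(seed)
    soc, socs, volts = soc0, [], []
    for i_k in current:
        socs.append(soc)
        volts.append(ocv(soc) - R0 * i_k + rng.normal(0, r_std))
        soc = float(np.clip(soc - i_k * DT / Q_AS + rng.normal(0, q_std), 0, 1))
    return np.array(socs), np.array(volts)

def mhe_estimate(current, volts, horizon=10, soc_guess=0.5, w_q=1e4, w_r=1.0):
    """Re-solve a horizon-limited least-squares problem at every time step."""
    est = []
    for k in range(len(volts)):
        lo = max(0, k - horizon + 1)
        i_win, v_win = current[lo:k + 1], volts[lo:k + 1]

        def cost(x):           # x = [SOC at window start, process noise terms]
            soc, c = x[0], 0.0
            for j in range(len(v_win)):
                c += w_r * (v_win[j] - (ocv(soc) - R0 * i_win[j])) ** 2
                if j < len(v_win) - 1:
                    c += w_q * x[j + 1] ** 2
                    soc = soc - i_win[j] * DT / Q_AS + x[j + 1]
            return c

        x0 = np.zeros(len(v_win))
        x0[0] = est[-1] if est else soc_guess          # warm start
        bounds = [(0.0, 1.0)] + [(-0.01, 0.01)] * (len(v_win) - 1)
        sol = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)

        soc = sol.x[0]                                 # roll window forward
        for j in range(len(v_win) - 1):
            soc = soc - i_win[j] * DT / Q_AS + sol.x[j + 1]
        est.append(float(np.clip(soc, 0.0, 1.0)))
    return np.array(est)

current = np.full(200, 2.3)                            # constant 1C discharge
true_soc, volts = simulate(0.9, current)
soc_hat = mhe_estimate(current, volts, soc_guess=0.5)  # deliberately wrong prior
print("final SOC error:", abs(soc_hat[-1] - true_soc[-1]))
```

Re-solving the windowed problem as new measurements arrive is what lets MHE recover from a deliberately wrong initial SOC guess, which is the behaviour the thesis compares against the EKF.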
232.
Multivariate control charts for the mean vector and variance-covariance matrix with variable sampling intervals. Cho, Gyo-Young. 01 February 2006.
When using control charts to monitor a process, it is frequently necessary to monitor more than one parameter of the process simultaneously. Multivariate control charts for monitoring the mean vector, for monitoring the variance-covariance matrix, and for simultaneously monitoring the mean vector and the variance-covariance matrix of a process with a multivariate normal distribution are investigated. A variable sampling interval (VSI) feature is considered in these charts.
Two basic approaches for using past sample information in the development of multivariate control charts are considered. The first approach, which is called the combine-accumulate approach, reduces each multivariate observation to a univariate statistic and then accumulates over past samples. The second approach, which is called the accumulate-combine approach, accumulates past sample information for each parameter and then forms a univariate statistic from the multivariate accumulations.
Multivariate control charts are compared on the basis of their average time to signal (ATS) performance. The numerical results show that the multivariate control charts based on the accumulate-combine approach are more efficient, in terms of ATS, than the corresponding charts based on the combine-accumulate approach. VSI charts are also more efficient than the corresponding fixed sampling interval (FSI) charts. / Ph. D.
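The difference between the two accumulation orders can be sketched with a pair of mean-monitoring statistics: a CUSUM of per-sample Hotelling T² values (combine-accumulate) versus a Crosier-style multivariate CUSUM that accumulates the deviation vector before reducing it to a scalar (accumulate-combine). The reference values, in-control parameters, subgroup size, and shift below are illustrative assumptions; the dissertation's specific chart designs and ATS comparisons are not reproduced.

```python
# Sketch of the two accumulation orders for monitoring a mean vector. The
# reference values, in-control parameters, subgroup size, and shift are
# illustrative only, not the dissertation's chart designs or ATS results.
import numpy as np

def t2(xbar, mu0, sigma_inv, n):
    d = xbar - mu0
    return n * d @ sigma_inv @ d

def combine_accumulate(xbars, mu0, sigma_inv, n, k=3.0):
    """Reduce each sample mean to a T2 value first, then CUSUM the scalars."""
    s, path = 0.0, []
    for xbar in xbars:
        s = max(0.0, s + t2(xbar, mu0, sigma_inv, n) - k)
        path.append(s)
    return np.array(path)

def accumulate_combine(xbars, mu0, sigma_inv, n, k=0.5):
    """Accumulate the deviation vector first (Crosier-style MCUSUM), then
    reduce the accumulation to a scalar charting statistic."""
    p = len(mu0)
    s, path = np.zeros(p), []
    for xbar in xbars:
        v = s + xbar - mu0
        c = np.sqrt(n * v @ sigma_inv @ v)
        s = np.zeros(p) if c <= k else v * (1.0 - k / c)
        path.append(np.sqrt(n * s @ sigma_inv @ s))
    return np.array(path)

rng = np.random.default_rng(1)
mu0, sigma, n = np.zeros(2), np.eye(2), 5
# Small sustained shift in the first component after sample 20.
xbars = np.vstack([rng.multivariate_normal(mu0, sigma / n, 20),
                   rng.multivariate_normal(mu0 + [0.5, 0.0], sigma / n, 20)])
sigma_inv = np.linalg.inv(sigma)
print(combine_accumulate(xbars, mu0, sigma_inv, n)[-5:])
print(accumulate_combine(xbars, mu0, sigma_inv, n)[-5:])
```

On the same shifted stream, the accumulate-combine path typically reacts more sharply to the small sustained shift, consistent with the ATS comparison summarized above.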
233.
Uncertainty quantification techniques with diverse applications to stochastic dynamics of structural and nanomechanical systems and to modeling of cerebral autoregulation. Katsidoniotaki, Maria. January 2022.
This dissertation develops uncertainty quantification methodologies for modeling, response analysis and optimization of diverse dynamical systems. Two distinct application platforms are considered pertaining to engineering dynamics and precision medicine.
First, the recently developed Wiener path integral (WPI) technique for determining, accurately and in a computationally efficient manner, the stochastic response of diverse dynamical systems is employed for solving a high-dimensional, nonlinear system of stochastic differential equations governing the dynamics of a representative model of electrostatically coupled micromechanical oscillators. Compared to alternative modeling and solution treatments in the literature, the current development exhibits the following novelties: a) typically adopted linear, or higher-order polynomial, approximations of the nonlinear electrostatic forces are circumvented; and b) stochastic modeling is employed, for the first time, by considering a random excitation component representing the effect of diverse noise sources on the system dynamics.
Further, the WPI technique is enhanced and extended based on a Bayesian compressive sampling (CS) treatment. Specifically, sparse expansions for the system response joint PDF are utilized. Next, exploiting the localization capabilities of the WPI technique for direct evaluation of specific PDF points leads to an underdetermined linear system of equations for the expansion coefficients. Furthermore, relying on a Bayesian CS solution formulation yields a posterior distribution for the expansion coefficient vector. In this regard, a significant advantage of the herein-developed methodology relates to the fact that the uncertainty of the response PDF estimates obtained by the WPI technique is quantified. Also, an adaptive scheme is proposed based on the quantified uncertainty of the estimates for the optimal selection of PDF sample points. This yields considerably fewer boundary value problems to be solved as part of the WPI technique, and thus, the associated computational cost is significantly reduced.
Second, modeling and analysis of the physiological mechanism of dynamic cerebral autoregulation (DCA) is pursued based on the concept of diffusion maps. Specifically, a state-space description of DCA dynamics is considered based on arterial blood pressure (ABP), cerebral blood flow velocity (CBFV), and their time derivatives. Next, an eigenvalue analysis of the Markov matrix of a random walk on a graph over the dataset domain yields a low-dimensional representation of the intrinsic dynamics. Further dimension reduction is made possible by accounting only for the two most significant eigenvalues. The value of their ratio indicates whether the underlying system is governed by active or hypoactive dynamics, corresponding to healthy or impaired DCA function, respectively. The reliability of the technique is assessed by considering healthy individuals and patients with unilateral carotid artery stenosis or occlusion.
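A minimal sketch of the diffusion-map construction on such a four-dimensional state space is given below, with toy surrogate signals standing in for ABP and CBFV; the kernel bandwidth heuristic, subsampling, and synthetic signals are assumptions for illustration and carry no clinical meaning.

```python
# Diffusion-map sketch for a generic 4-D state space (ABP, CBFV and their time
# derivatives). The signals and kernel bandwidth are placeholders, not the
# study's patient data or clinical thresholds.
import numpy as np

def diffusion_eigenvalues(X, eps=None, n_eig=3):
    """Eigenvalues of the Markov matrix of a random walk on a Gaussian-kernel graph."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise sq. distances
    eps = eps or np.median(d2)                                   # heuristic bandwidth
    W = np.exp(-d2 / eps)
    P = W / W.sum(axis=1, keepdims=True)                         # row-stochastic
    lam = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
    return lam[:n_eig]                                           # 1.0, lam_1, lam_2, ...

# Toy surrogate signals standing in for ABP and CBFV.
rng = np.random.default_rng(2)
t = np.linspace(0, 60, 600)
abp = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
cbfv = 0.8 * np.sin(2 * np.pi * 0.1 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
X = np.column_stack([abp, cbfv, np.gradient(abp, t), np.gradient(cbfv, t)])
X = (X - X.mean(0)) / X.std(0)

lam = diffusion_eigenvalues(X[::3])        # subsample to keep the graph small
print("ratio of the two most significant eigenvalues:", lam[1] / lam[2])
```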
It is shown that the proposed ratio of eigenvalues can be used as a reliable and robust biomarker for assessing how active the intrinsic dynamics of the autoregulation is and for indicating healthy versus impaired DCA function. Further, an alternative joint time-frequency analysis methodology based on generalized harmonic wavelets is utilized for assessing DCA performance in patients with preeclampsia within one week postpartum, which is associated with an increased risk for postpartum maternal cerebrovascular complications. The results are compared with normotensive postpartum individuals and healthy non-pregnant female volunteers and suggest a faster, but less effective response of the cerebral autoregulatory mechanism in the first week postpartum, regardless of preeclampsia diagnosis.
234.
The effect of sampling error on the interpretation of a least squares regression relating phosphorus and chlorophyll. Beedell, David C. (David Charles). January 1995.
No description available.
235.
Statistically Efficient Methods for Computation-Aware Uncertainty Quantification and Rare-Event Optimization. He, Shengyi. January 2024.
The thesis covers two fundamental topics that are important across the disciplines of operations research, statistics, and beyond, namely stochastic optimization and uncertainty quantification, with the common theme of addressing both statistical accuracy and computational constraints. Here, statistical accuracy encompasses the precision of estimated solutions in stochastic optimization, as well as the tightness or reliability of confidence intervals. Computational concerns arise from rare events or expensive models, necessitating efficient sampling methods or computation procedures.
In the first half of this thesis, we study stochastic optimization that involves rare events, which arises in various contexts including risk-averse decision-making and the training of machine learning models. Because of the presence of rare events, crude Monte Carlo methods can be prohibitively inefficient, as a sample size on the order of the reciprocal of the rare-event probability is needed to obtain valid statistical information about the rare event. To address this issue, we investigate the use of importance sampling (IS) to reduce the required sample size. IS is commonly used to handle rare events; the idea is to sample from an alternative distribution that hits the rare event more frequently and to adjust the estimator with a likelihood ratio to retain unbiasedness. While IS has long been studied, most of its literature focuses on estimation problems and on methodologies for obtaining a good IS in those contexts. In contrast, the first half of this thesis provides a systematic study of the efficient use of IS in stochastic optimization. In Chapter 2, we propose an adaptive procedure that converts an efficient IS for gradient estimation into an efficient IS procedure for stochastic optimization. Then, in Chapter 3, we provide an efficient IS for gradient estimation, which serves as the input for the procedure in Chapter 2.
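As a minimal illustration of why IS matters for rare events (not of the adaptive optimization procedures developed in the thesis), the sketch below estimates a Gaussian tail probability by exponential tilting and reweighting with the likelihood ratio, exactly the sample-from-an-alternative-distribution idea described above.

```python
# Importance-sampling sketch for a rare-event probability via exponential
# tilting of a Gaussian; illustrative of the variance-reduction idea only.
import numpy as np
from scipy.stats import norm

def crude_mc(c, n, rng):
    x = rng.standard_normal(n)
    return np.mean(x > c)

def tilted_is(c, n, rng):
    # Sample from the tilted density N(c, 1), which hits {x > c} about half
    # the time, and reweight by the likelihood ratio phi(x) / phi(x - c).
    x = rng.normal(loc=c, size=n)
    lr = norm.pdf(x) / norm.pdf(x, loc=c)
    return np.mean((x > c) * lr)

rng = np.random.default_rng(3)
c, n = 4.0, 10_000
print("true probability:", norm.sf(c))
print("crude MC        :", crude_mc(c, n, rng))   # almost surely 0 at this n
print("tilted IS       :", tilted_is(c, n, rng))  # close to the true value
```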
In the second half of this thesis, we study uncertainty quantification in the sense of constructing a confidence interval (CI) for target model quantities or predictions. We are interested in the setting of expensive black-box models, which means that we are confined to a small number of model runs and lack access to auxiliary model information such as gradients. In this case, a classical method is batching, which divides the data into a few batches and constructs a CI based on the batched estimates. Another method is the recently proposed cheap bootstrap, which is constructed from a few resamples in a similar manner to batching.
These methods save computation because they do not need an accurate variability estimator, which would require many model evaluations to obtain. Instead, they cancel out the variability when constructing pivotal statistics, and thus obtain asymptotically valid t-distribution-based CIs with only a few batches or resamples. The second half of this thesis studies several theoretical aspects of these computation-aware CI construction methods. In Chapter 4, we study statistical optimality, in terms of CI tightness, among various computation-aware CIs. Then, in Chapter 5, we study the higher-order coverage errors of batching methods. Finally, Chapter 6 is a related investigation of the higher-order coverage and correction of distributionally robust optimization (DRO) as another CI construction tool, which assumes some analytical information on the model but bears similarity to Chapter 5 in terms of analysis techniques.
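A sketch of the batching construction described above follows: the CI is a t-interval formed from a handful of batch means, so no separate variance estimator (and no large number of extra model runs) is needed. The number of batches, the toy model output, and the nominal level are placeholders; the cheap bootstrap and the optimality and higher-order results studied in the thesis are not reproduced.

```python
# Batching confidence interval sketch: a t-interval built from a few batch
# means, cancelling the unknown variability in the pivotal statistic.
import numpy as np
from scipy import stats

def batching_ci(samples, n_batches=5, alpha=0.05):
    batches = np.array_split(np.asarray(samples), n_batches)
    means = np.array([b.mean() for b in batches])
    center = means.mean()
    half = stats.t.ppf(1 - alpha / 2, df=n_batches - 1) \
           * means.std(ddof=1) / np.sqrt(n_batches)
    return center - half, center + half

rng = np.random.default_rng(4)
outputs = rng.exponential(scale=2.0, size=1000)  # stand-in for expensive model runs
print(batching_ci(outputs))                      # should cover the true mean 2.0
```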
236.
Variable sampling intervals for control charts using count data. Shobe, Kristin N. January 1988.
This thesis examines the use of variable sampling intervals as they apply to control charts that use count data. Papers by Reynolds, Arnold, and R. Amin developed properties for charts with an underlying normal distribution. These properties are extended in this thesis to accommodate an underlying Poisson distribution. / Master of Science
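A sketch of the variable-sampling-interval idea for count data follows: a c-chart whose next sampling interval is short when the Poisson count falls in a warning region near the limits and long otherwise. The limits, warning bands, and interval lengths are assumed values for illustration, not the designs or properties derived in the thesis.

```python
# Variable-sampling-interval c-chart sketch for Poisson counts (illustrative
# limits, warning bands, and interval lengths only).
import numpy as np

def vsi_c_chart(counts, c0, short=0.25, long=2.0):
    """Return (signal_index, elapsed_time) for a stream of Poisson counts."""
    ucl = c0 + 3.0 * np.sqrt(c0)
    lcl = max(0.0, c0 - 3.0 * np.sqrt(c0))
    warn_hi = c0 + 1.5 * np.sqrt(c0)
    warn_lo = max(0.0, c0 - 1.5 * np.sqrt(c0))
    elapsed = 0.0
    for i, c in enumerate(counts):
        if c > ucl or c < lcl:
            return i, elapsed                     # out-of-control signal
        # Sample again sooner when the count is in the warning region.
        elapsed += short if (c > warn_hi or c < warn_lo) else long
    return None, elapsed

rng = np.random.default_rng(5)
c0 = 9.0
counts = np.concatenate([rng.poisson(c0, 50), rng.poisson(1.6 * c0, 50)])
print(vsi_c_chart(counts, c0))
```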
237.
Nonparametric procedures for process control when the control value is not specified. Park, Changsoon. January 1984.
In industrial production processes, control charts have been developed to detect changes in the parameters specifying the quality of the production so that some rectifying action can be taken to restore the parameters to satisfactory values. Examples of such control charts are the Shewhart chart and the cumulative sum control chart (CUSUM chart). In designing a control chart, the exact distribution of the observations, e.g. the normal distribution, is usually assumed to be known. But when there is not sufficient information to determine the distribution, nonparametric procedures are appropriate. In such cases, the control value for the parameter may not be given because of insufficient information.
To construct a control chart when the control value is not given, a standard sample must be obtained while the process is known to be under control, so that the quality of the product can be maintained at the same level as that of the standard sample. For this purpose, samples of fixed size are observed sequentially, and each time a sample is observed, a two-sample nonparametric statistic is computed from the standard sample and the newly observed sample. With these sequentially obtained statistics, the usual process control procedure can be carried out. A truncation point denotes the finite run length, or the time at which sufficient information about the distribution of the observations and/or the control value has been obtained, so that the procedure may be switched to a parametric procedure or to a nonparametric procedure with a control value.
To lessen the difficulties caused by the dependence structure of the statistics, we use the fact that, conditioned on the standard sample, the statistics are i.i.d. random variables. Upper and lower bounds on the run length distribution are obtained for the Shewhart chart. A Brownian motion process is used to approximate the discrete-time process of the CUSUM chart. The exact run length distribution of the approximated CUSUM chart is derived using the inverse Laplace transform. Applying an appropriate correction to the boundary improves the approximation. / Ph. D.
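The sketch below illustrates the Shewhart-type version of this scheme: each incoming fixed-size sample is compared with the fixed standard sample through a two-sample Mann-Whitney statistic, standardized and plotted against conventional three-sigma limits. The choice of statistic, limits, and data are illustrative assumptions; the thesis's run-length bounds and the Brownian-motion approximation of the CUSUM version are not reproduced.

```python
# Shewhart-type chart built from a two-sample nonparametric statistic: each
# incoming sample is compared with the fixed standard sample via the
# Mann-Whitney statistic (illustrative limits and data only).
import numpy as np
from scipy.stats import mannwhitneyu

def standardized_u(standard, sample):
    m, n = len(standard), len(sample)
    u = mannwhitneyu(sample, standard, alternative="two-sided").statistic
    mean_u = m * n / 2.0
    var_u = m * n * (m + n + 1) / 12.0          # no-ties variance
    return (u - mean_u) / np.sqrt(var_u)

def shewhart_np(standard, sample_stream, limit=3.0):
    for t, sample in enumerate(sample_stream):
        if abs(standardized_u(standard, sample)) > limit:
            return t        # signal: quality differs from the standard sample
    return None

rng = np.random.default_rng(6)
standard = rng.normal(0.0, 1.0, 100)            # obtained while in control
in_control = [rng.normal(0.0, 1.0, 10) for _ in range(20)]
shifted = [rng.normal(1.5, 1.0, 10) for _ in range(20)]
print(shewhart_np(standard, in_control + shifted))
```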
238.
An investigation into the finite sample behaviour of inference methods in extreme value theory ['n Ondersoek na die eindige steekproefgedrag van inferensiemetodes in ekstreemwaarde-teorie]. Van Deventer, Dewald. 03 1900.
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2005. / Extremes are unusual or rare events. However, when such events (for example earthquakes, tidal waves and market crashes) do take place, they typically cause enormous losses, both in terms of human lives and monetary value. For this reason, it is of critical importance to accurately model extremal events. Extreme value theory entails the development of statistical models and techniques in order to describe and model such rare observations.

In this document we discuss aspects of extreme value theory. This theory consists of two approaches: the classical maxima method, based on the properties of the maximum of a sample, and the more popular threshold theory, based upon the properties of exceedances of a specified threshold value. This document provides the practitioner with the theoretical and practical tools for both these approaches, enabling him/her to perform extreme value analyses with confidence.

Extreme value theory, for both approaches, is based upon asymptotic arguments. For finite samples, the limiting result for the sample maximum holds only approximately. Similarly, for finite choices of the threshold, the limiting distribution for exceedances of that threshold holds only approximately. In this document we investigate the quality of extreme value based inferences with regard to the unknown underlying distribution when the sample size or threshold is finite. Estimation of extreme tail quantiles of the underlying distribution, as well as the calculation of confidence intervals, are typically the most important objectives of an extreme value analysis. For that reason, we evaluate the accuracy of extreme value based inferences in terms of these estimates. This investigation was carried out using a simulation study, performed with the software package S-Plus.
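As a small illustration of the threshold approach whose finite-sample behaviour is studied here, the sketch below fits a generalized Pareto distribution to exceedances of an empirical threshold and plugs the estimates into the standard peaks-over-threshold tail quantile formula. The data, threshold choice, and use of numpy/scipy (the study itself used S-Plus) are assumptions for illustration.

```python
# Peaks-over-threshold sketch: fit a generalized Pareto distribution to the
# exceedances of an empirical threshold and estimate an extreme tail quantile.
# Toy data and threshold choice only; not the simulation study of the thesis.
import numpy as np
from scipy.stats import genpareto, t as student_t

x = student_t.rvs(df=3, size=5000, random_state=7)   # heavy-tailed toy data

u = np.quantile(x, 0.95)                  # threshold = empirical 95th percentile
exc = x[x > u] - u
xi, _, beta = genpareto.fit(exc, floc=0)  # shape and scale of the exceedances

# Tail quantile x_p with P(X > x_p) = p, via the standard POT formula
# x_p = u + (beta / xi) * ((p / zeta_u)^(-xi) - 1), valid for xi != 0.
p = 0.001
zeta_u = np.mean(x > u)                   # observed proportion of exceedances
x_p = u + (beta / xi) * ((p / zeta_u) ** (-xi) - 1.0)
print("estimated 99.9% quantile:", x_p)
print("true t(3) 99.9% quantile:", student_t.ppf(1 - p, df=3))
```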
239.
The Applications of Regression Analysis in Auditing and Computer Systems. Hubbard, Larry D. 05 1900.
This thesis describes regression analysis and shows how it can be used in account auditing and in computer system performance analysis. The study first introduces regression analysis techniques and statistics. Then, the use of regression analysis in auditing to detect "out of line" accounts and to determine audit sample size is discussed. These applications led to the concept of using regression analysis to predict job completion times in a computer system. The feasibility of this application of regression analysis was tested by constructing a predictive model to estimate job completion times using a computer system simulator. The predictive model's performance for the various job streams simulated shows that job completion time prediction is a feasible application for regression analysis.
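A minimal sketch of the job-completion-time application follows: an ordinary least-squares model fitted to simulated job attributes and used to predict a new job's completion time. The attributes, coefficients, and noise level are hypothetical and are not taken from the thesis's simulator.

```python
# Ordinary least-squares sketch for predicting job completion times from
# hypothetical job attributes (not the thesis's simulator or data).
import numpy as np

rng = np.random.default_rng(8)
n = 200
cpu_s = rng.uniform(1, 60, n)             # CPU seconds requested
io_ops = rng.uniform(10, 500, n)          # I/O operations
queue_len = rng.integers(0, 20, n)        # jobs ahead in the queue
completion = (5 + 1.2 * cpu_s + 0.05 * io_ops + 2.0 * queue_len
              + rng.normal(0, 3, n))      # simulated completion time

X = np.column_stack([np.ones(n), cpu_s, io_ops, queue_len])
beta, *_ = np.linalg.lstsq(X, completion, rcond=None)

new_job = np.array([1.0, 30.0, 250.0, 5.0])
print("predicted completion time:", new_job @ beta)
```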
240.
Modelling catch sampling uncertainty in fisheries stock assessment: the Atlantic-Iberian sardine case. Caneco, Bruno. January 2013.
The statistical assessment of harvested fish populations, such as the Atlantic-Iberian sardine (AIS) stock, needs to deal with uncertainties inherent in fisheries systems. Uncertainties arising from sampling errors and stochasticity in stock dynamics must be incorporated in stock assessment models so that management decisions are based on a realistic evaluation of the uncertainty about the status of the stock. The main goal of this study is to develop a stock assessment framework that accounts for some of the uncertainties associated with the AIS stock that are currently not integrated into stock assessment models. In particular, it focuses on accounting for the uncertainty arising from the catch data sampling process. The central innovation of the thesis is the development of a Bayesian integrated stock assessment (ISA) model, in which an observation model explicitly links stock dynamics parameters with statistical models for the various types of data observed from catches of the AIS stock. This allows for systematic and statistically consistent propagation of the uncertainty inherent in the catch sampling process across the whole stock assessment model, through to estimates of biomass and stock parameters. The method is tested by simulations and found to provide reliable and accurate estimates of stock parameters and associated uncertainty, while also outperforming existing design-based and model-based estimation approaches. The method is computationally very demanding, and this is an obstacle to its adoption by fisheries bodies. Once this obstacle is overcome, the ISA modelling framework developed and presented in this thesis could make an important contribution to improving the evaluation of uncertainty in fisheries stock assessments, not only of the AIS stock, but of any other fish stock with similar data and dynamics structure. Furthermore, the models developed in this study establish a solid conceptual platform for the future development of more complex models of fish population dynamics.