1

Uncertainty modelling in power system state estimation

Al-Othman, Abdul Rahman K. January 2004
As a special case of the static state estimation problem, the load-flow problem is studied in this thesis. It is demonstrated that the non-linear load-flow formulation may be solved by real-coded genetic algorithms. Due to its global optimisation ability, the proposed method can be useful for off-line studies where multiple solutions are suspected. The thesis then presents two methods for estimating the uncertainty interval in power system state estimation due to uncertainty in the measurements. The proposed formulations are based on a parametric approach which takes into account the meter inaccuracies. A nonlinear and a linear formulation are proposed to estimate the tightest possible upper and lower bounds on the states. The uncertainty analysis in power system state estimation is also extended to other physical quantities, such as the network parameters; the uncertainty is then assumed to be present in both the measurements and the network parameters. To find the tightest possible upper and lower bounds on any state variable, the problem is solved by a Sequential Quadratic Programming (SQP) technique. Finally, a new robust estimator based on the concept of uncertainty in the measurements, the Maximum Constraints Satisfaction (MCS) estimator, is developed. The robustness and performance of the proposed estimator are analysed via simulation of simple regression examples and of D.C. and A.C. power system models.
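As a toy illustration of the interval idea in this abstract, the sketch below bounds the state variables of a small, hypothetical DC (linear) network under bounded meter uncertainty, using SciPy's SLSQP routine as a stand-in for the SQP solver mentioned; the network, measurement values, and uncertainty levels are invented and not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-bus DC model (bus 3 is the slack): states x = [theta1, theta2].
# Rows of H are the sensitivities of three line-flow and one injection measurement.
H = np.array([[10.0, -10.0],   # P12 = 10*(theta1 - theta2)
              [10.0,   0.0],   # P13 = 10*theta1
              [ 0.0,  10.0],   # P23 = 10*theta2
              [20.0, -10.0]])  # injection at bus 1 = P12 + P13
z = np.array([0.51, 0.99, 0.52, 1.48])  # measured values (illustrative)
u = np.full(4, 0.03)                    # assumed +/- tolerance of each meter

# Feasible set: -u <= z - H x <= u, written as two smooth inequality constraints.
cons = [{"type": "ineq", "fun": lambda x: u - (z - H @ x)},
        {"type": "ineq", "fun": lambda x: u + (z - H @ x)}]

def bound(k, sense):
    """Tightest bound on state k: sense=+1 gives the lower bound, sense=-1 the upper."""
    res = minimize(lambda x: sense * x[k], x0=np.zeros(2),
                   method="SLSQP", constraints=cons)
    return sense * res.fun

for k in range(2):
    print(f"theta{k + 1} lies in [{bound(k, +1.0):.4f}, {bound(k, -1.0):.4f}]")
```

Repeating the minimisation and maximisation for every state yields the tightest interval consistent with all meter tolerances, which is the quantity the parametric formulations above aim to compute.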
2

Efficient Calibration and Predictive Error Analysis for Highly-Parameterized Models Combining Tikhonov and Subspace Regularization Techniques

Matthew James Tonkin Unknown Date
The development and application of environmental models to help understand natural systems, and support decision making, is commonplace. A difficulty encountered in the development of such models is determining which physical and chemical processes to simulate, and on what temporal and spatial scale(s). Modern computing capabilities enable the incorporation of more processes, at increasingly refined scales, than at any time previously. However, the simulation of a large number of fine-scale processes has undesirable consequences: first, the execution time of many environmental models has not declined despite advances in processor speed and solution techniques; and second, such complex models incorporate a large number of parameters, for which values must be assigned. Compounding these problems is the recognition that, since the inverse problem in groundwater modeling is non-unique, the calibration of a single parameter set does not assure the reliability of model predictions. Practicing modelers are, then, faced with complex models that incorporate a large number of parameters whose values are uncertain, and that make predictions that are prone to an unspecified amount of error. In recognition of this, there has been considerable research into methods for evaluating the potential for error in model predictions arising from errors in the values assigned to model parameters. Unfortunately, some common methods employed in the estimation of model parameters, and the evaluation of the potential error associated with model parameters and predictions, suffer from limitations in their application that stem from an emphasis on obtaining an over-determined, parsimonious inverse problem. That is, common methods of model analysis exhibit artifacts from the propagation of subjective a priori parameter parsimony throughout the calibration and predictive error analyses. This thesis describes theoretical and practical developments that enable the estimation of a large number of parameters, and the evaluation of the potential for error in predictions made by highly parameterized models. Since the focus of this research is on the use of models in support of decision making, the new methods are demonstrated by application to synthetic cases, where the performance of the method can be evaluated under controlled conditions, and to real-world applications, where the performance of the method can be evaluated in terms of the trade-off between computational effort and calibration results, and the ability to rigorously yet expediently investigate predictive error. The applications suggest that the new techniques are applicable to a range of environmental modeling disciplines. Mathematical innovations described in this thesis focus on combining complementary regularized inversion (calibration) techniques with novel methods for analyzing model predictive error. Several of the innovations are founded on explicit recognition of the existence of the calibration solution and null spaces: that is, that with the available observations there are some (combinations of) parameters that can be estimated, and there are some (combinations of) parameters that cannot.
The existence of a non-trivial calibration null space is at the heart of the non-uniqueness problem in model calibration: this research expands upon this concept by recognizing that there are combinations of parameters that lie within the calibration null space yet possess non-trivial projections onto the predictive solution space, and that these combinations of parameters are at the heart of predictive error analysis. The most significant contribution of this research is the attempt to develop a framework for model analysis that promotes computational efficiency in both the calibration and the subsequent analysis of the potential for error in model predictions. Fundamental to this framework is the use of a large number of parameters, the use of Tikhonov regularization, and the use of subspace techniques. Use of a large number of parameters enables parameter detail to be represented in the model at a scale approaching true variability; the use of Tikhonov constraints enables the modeler to incorporate preferred conditions on parameter values and/or their variation throughout the calibration and the predictive analysis; and the use of subspace techniques enables model calibration and predictive analysis to be undertaken expediently, even when a large number of parameters is used. This research focuses on the inability of the calibration process to accurately identify parameter values: it is assumed that the models in question accurately represent the relevant processes at the relevant scales, so that parameter and predictive error depend only on parameter detail that is not represented in the model and/or not accurately inferred through the calibration process. Contributions to parameter and predictive error arising from incorrect model identification are outside the scope of this research.
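The solution-space/null-space split described above can be made concrete with a small linear example. The sketch below is illustrative only and uses an invented Jacobian rather than anything from the thesis: it splits the parameter space of an under-determined problem with an SVD, forms a truncated-SVD (subspace) estimate and a Tikhonov-regularized estimate, and shows that adding any null-space component leaves the fit to the observations unchanged, which is the source of the non-uniqueness discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 5, 12                      # under-determined: 12 parameters, 5 observations
J = rng.normal(size=(n_obs, n_par))       # Jacobian (sensitivity) matrix
p_true = rng.normal(size=n_par)
d = J @ p_true + 0.01 * rng.normal(size=n_obs)

# SVD: right singular vectors with singular values above tol span the calibration
# solution space; the remaining vectors span the calibration null space.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
tol = 1e-8
k = int(np.sum(s > tol))
V_sol, V_null = Vt[:k].T, Vt[k:].T

# Truncated-SVD (subspace) estimate: only solution-space components are resolved.
p_tsvd = V_sol @ np.diag(1.0 / s[:k]) @ U[:, :k].T @ d

# Tikhonov-regularized estimate: preferred condition p ~ 0 with weight lam.
lam = 0.1
p_tik = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_par), J.T @ d)

# Any null-space perturbation leaves the fit unchanged -- the source of
# calibration non-uniqueness.
delta = V_null @ rng.normal(size=n_par - k)
print(np.allclose(J @ p_tsvd, J @ (p_tsvd + delta)))   # True
print(np.linalg.norm(d - J @ p_tik))                   # misfit of the Tikhonov solution
```

Predictive error analysis then asks how large the prediction can become as such null-space components are varied, which is why combinations of parameters that are invisible to calibration but visible to the prediction matter.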
3

A forecasting approach to estimating cartel damages : The importance of considering estimation uncertainty

Prohorenko, Didrik January 2020
In this study, I consider the performance of simple forecast models frequently applied in counterfactual analysis when the information at hand is limited. Furthermore, I discuss the robustness of the standard t-test commonly used to statistically detect cartels. I empirically verify that the standard t-statistic is affected by parameter estimation uncertainty when one of the time series in a two-sided t-test has itself been estimated, and I compare the results with those from a recently proposed corrected t-test in which this uncertainty is accounted for. The results show that a simple OLS model can be used to detect a cartel and to compute a counterfactual price when data are limited, at least as long as the price overcharge inflicted by the cartel members is relatively large. Yet the level of accuracy may vary, and once the data used for estimating the model become too limited, the model predictions tend to be inaccurate.
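A minimal sketch of the kind of counterfactual exercise described above is given below: an OLS model is fitted on a hypothetical pre-cartel period, used to forecast "but-for" prices during the cartel period, and the observed and predicted prices are compared with the standard two-sample t-test. All data and the size of the overcharge are invented, and the standard test shown here is exactly the one whose neglect of estimation uncertainty the abstract questions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_clean, n_cartel = 60, 24
cost = rng.normal(10, 1, n_clean + n_cartel)           # hypothetical cost driver
price = 5 + 1.2 * cost + rng.normal(0, 0.5, n_clean + n_cartel)
price[n_clean:] += 2.0                                  # assumed cartel overcharge

# Fit the forecast model on the clean (pre-cartel) period only.
X_clean = np.column_stack([np.ones(n_clean), cost[:n_clean]])
beta, *_ = np.linalg.lstsq(X_clean, price[:n_clean], rcond=None)

# Counterfactual ("but-for") prices over the cartel period.
X_cartel = np.column_stack([np.ones(n_cartel), cost[n_clean:]])
butfor = X_cartel @ beta
overcharge = price[n_clean:] - butfor
print(f"mean estimated overcharge: {overcharge.mean():.2f}")

# Standard two-sample t-test of observed vs. predicted prices. Note that it
# treats the predicted series as known, ignoring the estimation uncertainty in
# beta -- the issue the corrected test referred to in the abstract addresses.
t, p = stats.ttest_ind(price[n_clean:], butfor, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```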
4

Exploring Software Project Planning through Effort Uncertainty in Large Software Projects : An Industrial Case Study

Ellis, Jesper, Eriksson, Elion January 2023
Background. Effort estimation is today a crucial part of software development planning. However, much of the earlier research has focused on the general conditions of effort estimation, and little to no attention has been paid to the solution verification (SV) phase of projects, even though SV becomes more relevant the larger the project. To improve effort estimation, it is key to consider the uncertainties arising from the assumptions and conditions it relies on. Objectives. The main objective of this study is to identify differences and similarities between general effort estimation and effort estimation in SV in order to find potential improvements to the planning of large software projects. More specifically, this thesis aims to identify which activities and factors affect effort uncertainty, how they do so, and what existing theory and methods can be applied to increase the accuracy of effort estimation in SV. Methods. An exploratory case study was conducted to reach these objectives. It was designed according to the triangulation method and consisted of unstructured interviews, a questionnaire, and archival research. The analysis followed a four-step procedure: first, identify each SV activity's contribution to effort and effort uncertainty; second, identify and analyze which factors impact the identified activities and how; third, investigate the factors that impact effort uncertainty; and fourth, analyze how the factors and sources of uncertainty could be used to improve software project planning. Results. The results show that the activities could be divided into two groups based on the difference between their contribution to effort and to effort uncertainty. The two activities showing higher uncertainty than effort were trouble report handling & troubleshooting, which is by far the largest source of uncertainty, and fault correction lead time. The fault-related factors were, both individually and collectively, found to cause the most uncertainty. Furthermore, the type of product and the type of objective an employee has were found to influence the causes of uncertainty. Conclusions. Over the project life cycle, the SV process shifts from a proactive and structured way of working to a more reactive and unstructured one. Moreover, project size is not in itself a cause of effort uncertainty, but differences between products create different causes. To address inaccuracy in effort estimation most effectively, one should address the activities that constitute a minority of the effort but a majority of the uncertainty. The most straightforward way to increase the performance of effort estimation in SV would be to evaluate the inclusion of fault prediction and fault correction in the estimation model, together with uncertainty identification and prevention methods such as the six Ws framework and bottom-up/top-down effort estimation practices.
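As one hedged illustration of the bottom-up estimation practice mentioned in the conclusions, the sketch below aggregates three-point (PERT-style) estimates for a few SV activities so that each activity's uncertainty is carried into the total. The activity names and figures are invented, and the PERT formulae are a generic choice for illustration, not the thesis's method.

```python
import math

# (optimistic, most likely, pessimistic) effort in person-days, per SV activity
activities = {
    "test analysis":                               (10, 15, 25),
    "test execution":                              (20, 30, 50),
    "trouble report handling & troubleshooting":   (15, 40, 120),  # dominant uncertainty
    "fault correction lead-time":                  (5, 15, 45),
}

total_mean, total_var = 0.0, 0.0
for name, (a, m, b) in activities.items():
    mean = (a + 4 * m + b) / 6          # PERT expected effort
    var = ((b - a) / 6) ** 2            # PERT variance
    total_mean += mean
    total_var += var                    # assumes activities vary independently
    print(f"{name:45s} mean={mean:6.1f}  sd={math.sqrt(var):5.1f}")

sd = math.sqrt(total_var)
print(f"\nbottom-up total: {total_mean:.1f} person-days "
      f"(~90% interval {total_mean - 1.645 * sd:.0f} to {total_mean + 1.645 * sd:.0f})")
```

Making the per-activity spreads explicit is what allows the activities with small effort but large uncertainty, highlighted in the conclusions above, to be identified and addressed.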
5

Development of statistical methods for the surveillance and monitoring of adverse events which adjust for differing patient and surgical risks

Webster, Ronald A. January 2008
The research in this thesis has been undertaken to develop statistical tools for monitoring adverse events in hospitals that adjust for varying patient risk. The studies involved a detailed literature review of risk adjustment scores for patient mortality following cardiac surgery, a comparison of institutional performance, the performance of risk-adjusted CUSUM schemes for varying risk profiles of the populations being monitored, the effects of uncertainty in the estimates of expected probabilities of mortality on the performance of risk-adjusted CUSUM schemes, and the instability of the estimated average run lengths of risk-adjusted CUSUM schemes found using the Markov chain approach. The literature review of cardiac surgical risk found that the number of risk factors in a risk model and its discriminating ability were independent, that the risk factors could be classified into their "dimensions of risk", and that a risk score could not be generalized to populations remote from its developmental database if accurate predictions of patients' probabilities of mortality were required. The conclusions were that an institution could use an "off the shelf" risk score, provided it was recalibrated, or it could construct a customized risk score with risk factors that provide at least one measure for each dimension of risk. The use of report cards to publish adverse outcomes as a tool for quality improvement has been criticized in the medical literature. An analysis of the report cards for cardiac surgery in New York State showed that the institutions' outcome rates appeared overdispersed compared to the model used to construct confidence intervals, and that the uncertainty associated with the estimation of institutions' outcome rates could be mitigated with trend analysis. A second analysis, of the mortality of patients admitted to coronary care units, demonstrated the use of notched box plots, fixed- and random-effects models, and risk-adjusted CUSUM schemes as tools to identify outlying hospitals. An important finding from the literature review was that the primary reason for publication of outcomes is to ensure that health care institutions are accountable for the services they provide. A detailed review of the risk-adjusted CUSUM scheme was undertaken, and the use of average run lengths (ARLs) to assess the scheme as the risk profile of the monitored population changes was justified. The ARLs for in-control and out-of-control processes were found to increase markedly as the average outcome rate of the patient population decreased towards zero. A modification of the risk-adjusted CUSUM scheme, in which the step size from in-control to out-of-control outcome probabilities was constrained to be no less than 0.05, was proposed. The ARLs of this "minimum effect" CUSUM scheme were found to be stable. The previous assessment of the risk-adjusted CUSUM scheme assumed that the predicted probability of a patient's mortality is known. A study of its performance when the estimates of the expected probability of patient mortality were uncertain showed that uncertainty at the patient level did not affect the performance of the CUSUM schemes, provided that the risk score was well calibrated. Uncertainty in the calibration of the risk model, however, appeared to cause considerable variation in the ARL performance measures. The ARLs of the risk-adjusted CUSUM schemes were approximated using simulation because the approximation method using the Markov chain property of CUSUMs, as proposed by Steiner et al. (2000), gave unstable results.
The cause of the instability was the method of computing the Markov chain transition probabilities, in which probability is concentrated at the midpoint of each Markov state. If probability was instead assumed to be uniformly distributed over each Markov state, the ARLs were stabilized, provided that the scores for the patients' risk of adverse outcomes were discrete and finite.
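For readers unfamiliar with the scheme, the sketch below implements the risk-adjusted CUSUM of Steiner et al. (2000) referred to above, scoring each patient by the log-likelihood ratio of an elevated odds ratio against the in-control risk model. The simulated risks, the doubled-odds shift after patient 300, the alternative odds ratio of 2, and the signalling threshold are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def ra_cusum(p, y, odds_ratio=2.0, h=4.5):
    """Risk-adjusted CUSUM testing odds ratio 1 (in control) against `odds_ratio`.

    p : predicted probability of death for each patient (risk model output)
    y : observed outcome (1 = death, 0 = survival)
    Returns the CUSUM path and the index of the first signal (or None).
    """
    x, path, signal = 0.0, [], None
    for pt, yt in zip(p, y):
        # log-likelihood-ratio weight for this patient
        if yt == 1:
            w = np.log(odds_ratio / (1 - pt + odds_ratio * pt))
        else:
            w = np.log(1.0 / (1 - pt + odds_ratio * pt))
        x = max(0.0, x + w)
        path.append(x)
        if signal is None and x > h:
            signal = len(path) - 1
    return np.array(path), signal

# Simulated patient stream: risk scores from a (hypothetical) calibrated model,
# with the true odds of death doubled after patient 300 (an out-of-control shift).
n = 600
p = np.clip(rng.beta(2, 20, size=n), 0.01, 0.5)
odds = p / (1 - p) * np.where(np.arange(n) < 300, 1.0, 2.0)
y = rng.random(n) < odds / (1 + odds)

path, signal = ra_cusum(p, y)
print("first signal at patient:", signal)
```

The ARL discussed in the abstract is the average number of patients observed before such a signal, estimated here by repeating the simulation many times rather than via the Markov chain approximation whose instability the thesis examines.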
