1. Properties of classes of life distributions with applications in reliability theory

Al-Sadi, Mohammed Hmood January 2004 (has links)
No description available.
2. New statistical methods in risk assessment by probability bounds

Montgomery, Victoria January 2009 (has links)
In recent years, we have seen a diverse range of crises and controversies concerning food safety, animal health and environmental risks, including foot and mouth disease, dioxins in seafood, GM crops and, more recently, the safety of Irish pork. This has led to the recognition that the handling of uncertainty in risk assessments needs to be more rigorous and transparent, so that decision makers and the public can be better informed about the limitations of scientific advice. The expression of the uncertainty may be qualitative or quantitative, but it must be well documented. Various approaches to quantifying uncertainty exist, but none is yet generally accepted amongst mathematicians, statisticians, natural scientists and regulatory authorities. In this thesis we discuss the current risk assessment guidelines, which describe the deterministic methods that are mainly used in practice. Probabilistic methods have many advantages, however, and we review some probabilistic methods that have been proposed for risk assessment. We then develop our own methods to overcome some of the problems with the current ones. We consider how to include various sources of uncertainty, and we examine robustness to the prior distribution in Bayesian methods. We compare nonparametric methods with parametric methods, and we combine a nonparametric method with a Bayesian method to investigate the effect of using different assumptions for different random quantities in a model. These new methods provide alternatives for risk analysts to use in the future.
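The probability-bounds idea referred to in the title can be pictured with a p-box: a pair of lower and upper bounds on a cumulative distribution function. The sketch below is a minimal illustration only, not the methods developed in the thesis; the candidate distributions and the threshold are assumptions chosen purely for demonstration.

```python
# Illustration only: a minimal probability-bounds ("p-box") sketch.
# The candidate distributions and threshold are hypothetical.
import numpy as np
from scipy import stats

# Candidate models for an uncertain quantity (assumed parameters).
candidates = [
    stats.lognorm(s=0.5, scale=np.exp(1.0)),
    stats.lognorm(s=0.8, scale=np.exp(0.9)),
    stats.gamma(a=3.0, scale=1.0),
]

x = np.linspace(0.01, 15, 500)
cdfs = np.array([d.cdf(x) for d in candidates])

# The p-box is the pointwise envelope of the candidate CDFs: any distribution
# consistent with the stated uncertainty lies between these two bounds.
lower_cdf = cdfs.min(axis=0)
upper_cdf = cdfs.max(axis=0)

# Bounds on the probability that the quantity exceeds a threshold of interest.
threshold = 5.0
i = np.searchsorted(x, threshold)
print(f"P(X > {threshold}) lies in [{1 - upper_cdf[i]:.3f}, {1 - lower_cdf[i]:.3f}]")
```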
3. Hilbert-Valued Semimartingales and Lévy Processes

Liu, Min January 2009 (has links)
No description available.
4. Meta-data to enhance case-based prediction

Premraj, Rahul January 2006 (has links)
The focus of this thesis is to measure the regularity of case bases used in Case-Based Prediction (CBP) systems and the reliability of their constituent cases prior to a system's deployment, in order to inform user confidence in the delivered solutions. The reliability information, referred to as meta-data, is then used to enhance prediction accuracy. CBP is a strain of Case-Based Reasoning (CBR) that differs from the latter only in that the solution feature is a continuous value. Several factors make implementing such systems for prediction domains a challenge. Typically, the problem and solution spaces in prediction problems are unbounded, which makes it difficult to determine the portions of the domain represented by the case base. In addition, such problem domains often exhibit complex and poorly understood interactions between features, and contain noise. As a result, the overall regularity in the case base is distorted, which hinders the delivery of good quality solutions. Hence, this research presents techniques that address the issue of irregularity in case bases with the objective of increasing the prediction accuracy of solutions. Although several techniques have been proposed in the CBR literature to deal with irregular case bases, they are inapplicable to CBP problems. As an alternative, this research proposes the generation of relevant case-specific meta-data. The meta-data is used in Mantel's randomisation test to objectively measure regularity in the case base. Several novel visualisations using the meta-data are presented to observe the degree of regularity and to help identify suspect unreliable cases whose reuse is likely to yield poor solutions. Further, the performance of individual cases is recorded to judge their reliability, which is considered, along with their distance from the problem case, before selecting them for reuse. The intention is to overlook unreliable cases in favour of relatively distant yet more reliable ones, so as to enhance prediction accuracy. The proposed techniques are demonstrated on software engineering data sets, where the aim is to predict the duration of a software project on the basis of past completed projects recorded in the case base. Software engineering is a human-centric, volatile and dynamic discipline in which many unrecorded factors influence productivity. This degrades the regularity of case bases, where cases are disproportionately spread out in the problem and solution spaces, resulting in erratic prediction quality. Results from applying the proposed techniques gave insight into the three software engineering data sets used in this analysis. Mantel's test was very effective at measuring overall regularity within a case base, while the visualisations proved to be of variable value depending upon the size of the data set. Most importantly, the proposed case discrimination system, which reuses only reliable similar cases, succeeded in increasing prediction accuracy for all three data sets. Thus, the contributions of this research are novel approaches that use meta-data, firstly, to assess and visualise irregularities in case bases and cases from prediction domains and, secondly, to identify unreliable cases so that their reuse can be avoided in favour of more reliable cases, enhancing overall prediction accuracy.
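Mantel's randomisation test, which the abstract uses to measure case-base regularity, correlates two distance matrices: here, distances between problem descriptions and distances between solutions. The sketch below is a generic illustration under assumed toy data, not the thesis's implementation or its data sets.

```python
# Illustration only: a generic Mantel randomisation test relating distances in
# the problem space to distances in the solution space of a case base.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                    # problem features of 30 cases (toy data)
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + rng.normal(scale=0.3, size=30)  # solutions

D_problem = squareform(pdist(X))                # pairwise distances between problems
D_solution = squareform(pdist(y[:, None]))      # pairwise distances between solutions

def mantel(d1, d2, permutations=999, rng=rng):
    """Correlation between two distance matrices with a permutation p-value."""
    iu = np.triu_indices_from(d1, k=1)
    observed = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(permutations):
        p = rng.permutation(n)
        permuted = d2[np.ix_(p, p)]             # permute rows and columns together
        if np.corrcoef(d1[iu], permuted[iu])[0, 1] >= observed:
            count += 1
    return observed, (count + 1) / (permutations + 1)

r, p_value = mantel(D_problem, D_solution)
print(f"Mantel r = {r:.3f}, p = {p_value:.3f}")  # high r suggests a regular case base
```

In this reading, a high, significant correlation means that nearby problems tend to have nearby solutions, which is the regularity a CBP system relies on.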
5. An intelligent monitoring system to predict potential catastrophic incidents

Painting, Andrew David January 2014 (has links)
This dissertation identified a gap in research for an intelligent monitoring system to monitor various indicators within complex engineering industries, in order to predict situations that may lead to catastrophic failures. The accuracy of prediction was based upon lessons learnt from historic catastrophic incidents. Such incidents are normally attributed to combinations of several minor errors or failures, and seldom occur through single-point failures. The new system to monitor, identify and predict the conditions likely to cause a catastrophic failure could improve safety, reduce downtime and help prioritise funding. This novel approach involved the classification of ten common traits that are known to lead to catastrophe, based on six headings used by the Health and Safety Executive and four headings used in Engineering Governance. These were combined in a weighted average to provide a 'state' condition for each asset, and amalgamated with a qualitative fault tree representation of a known catastrophic failure type. The information on the current 'state' was plotted onto a coloured 2D surface graph over a period of time to demonstrate one particular visual tool. The research demonstrated that it was possible to create the monitoring system within Microsoft Excel and to run Visual Basic programs alongside Boolean logic calculations for the fault tree and the predictive tools, based upon trend analysis of historic data. Another significant research success was the development of a standardised approach to the investigation of incidents and the dissemination of information.
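The dissertation implemented its system in Microsoft Excel with Visual Basic; the sketch below is only a minimal Python illustration of the two core ideas described in the abstract, a weighted-average 'state' condition per asset and Boolean evaluation of a simple fault tree. The trait names, weights and fault-tree layout are hypothetical, not those used in the dissertation.

```python
# Illustration only: the original system used Excel and Visual Basic; this
# Python sketch mirrors the two core ideas. Traits, weights and the fault-tree
# structure are hypothetical.
TRAIT_WEIGHTS = {                       # ten common traits, weighted (assumed values)
    "design_shortfall": 0.15, "maintenance_backlog": 0.15, "competence": 0.10,
    "procedural_drift": 0.10, "communication": 0.10, "supervision": 0.10,
    "equipment_condition": 0.10, "change_control": 0.08, "audit_findings": 0.07,
    "near_miss_rate": 0.05,
}

def asset_state(scores: dict[str, float]) -> float:
    """Weighted-average 'state' condition for an asset, trait scores in [0, 1]."""
    return sum(TRAIT_WEIGHTS[t] * scores.get(t, 0.0) for t in TRAIT_WEIGHTS)

def fault_tree_top_event(barrier_failed: dict[str, bool]) -> bool:
    """Boolean logic for a simple illustrative fault tree: the top event occurs
    if (an initiating fault occurs AND protection fails) OR detection fails."""
    return (barrier_failed["initiating_fault"] and barrier_failed["protection"]) \
        or barrier_failed["detection"]

scores = {t: 0.6 for t in TRAIT_WEIGHTS}            # toy monitoring inputs
print("Asset state:", round(asset_state(scores), 2))
print("Top event predicted:", fault_tree_top_event(
    {"initiating_fault": True, "protection": True, "detection": False}))
```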
6. On the consistency of some constrained maximum likelihood estimator used in crash data modelling

Geraldo, Issa Cherif 15 December 2015 (has links)
Most of the statistical methods used in data modelling require the search for locally optimal solutions, together with the estimation of the standard errors attached to these solutions. These methods consist in maximising, by successive approximations, the likelihood function or an approximation of it. Classically, one uses numerical methods adapted from the Newton-Raphson method or Fisher's scoring. Because they require matrix inversions, these methods can be complex to implement numerically in high dimensions or when the matrices involved are not invertible. To overcome these difficulties, iterative procedures requiring no matrix inversion, such as MM (Minorization-Maximization) algorithms, have been proposed and are considered efficient for high-dimensional problems and for some multivariate discrete distributions. Among the new approaches proposed for data modelling in road safety is an algorithm called the iterative cyclic algorithm (CA). This thesis has two main objectives: (a) to study the CA from both algorithmic and stochastic viewpoints, in particular its convergence properties, and (b) to generalise the CA to more complex models integrating multivariate discrete distributions and to compare the performance of the generalised CA with that of its competitors.
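The thesis's cyclic algorithm is not reproduced here, but the MM idea it relates to (maximising a likelihood through a sequence of simpler surrogate maximisations, with no matrix inversion) can be illustrated with a standard textbook example: Hunter's (2004) MM algorithm for the Bradley-Terry model. The data below are toy values chosen only for demonstration.

```python
# Illustration only: a standard MM (Minorization-Maximization) example, namely
# Hunter's (2004) update for Bradley-Terry abilities. It shows how a likelihood
# can be maximised iteratively without any matrix inversion. Toy data.
import numpy as np

# wins[i, j] = number of times item i beat item j (assumed values).
wins = np.array([[0, 3, 2],
                 [1, 0, 4],
                 [2, 1, 0]], dtype=float)
n_games = wins + wins.T                 # games played between each pair
total_wins = wins.sum(axis=1)

gamma = np.ones(wins.shape[0])          # initial ability parameters
for _ in range(200):
    denom = np.array([
        sum(n_games[i, j] / (gamma[i] + gamma[j])
            for j in range(len(gamma)) if j != i)
        for i in range(len(gamma))
    ])
    new_gamma = total_wins / denom      # MM update: maximise the minorizing surrogate
    new_gamma /= new_gamma.sum()        # fix the scale (abilities identifiable up to a constant)
    if np.max(np.abs(new_gamma - gamma)) < 1e-10:
        break
    gamma = new_gamma

print("Estimated abilities:", np.round(gamma, 3))
```

Each MM update is guaranteed not to decrease the likelihood, which is why such schemes stay numerically stable in settings where a Newton step would require inverting a large or ill-conditioned matrix.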
