  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Modeling qualitative judgements in Bayesian networks

Caballero, Jose Louis Galan January 2008 (has links)
Although Bayesian Networks (BNs) are increasingly being used to solve real-world problems [47], their use is still constrained by the difficulty of constructing the node probability tables (NPTs). A key challenge is to construct relevant NPTs using the minimal amount of expert elicitation, recognising that it is rarely cost-effective to elicit complete sets of probability values. This thesis describes an approach to defining NPTs for a large class of commonly occurring nodes called ranked nodes. The approach is based on the doubly truncated Normal distribution with a central tendency that is a weighted function of the parent nodes. We demonstrate through two examples how to build large probability tables using the ranked-nodes approach. Using this approach we are able to build the large probability tables needed to capture the complex models arising from assessing firms' risks in the safety and finance sectors. The aim of the first example, with the National Air-Traffic Services (NATS), is to show that this approach can model the impact of organisational factors on avoiding mid-air aircraft collisions. The resulting model was validated by NATS and helped managers to assess how efficiently the company handles risks and thus control the likelihood of air-traffic incidents. In the second example, we use BN models to capture operational risk (OpRisk) in financial institutions. The novelty of this approach is the use of causal reasoning as a means to reduce the uncertainty surrounding this type of risk. This model was validated against the Basel framework [160], the emerging international standard regulation governing how financial institutions assess OpRisk.
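The ranked-nodes construction described above can be sketched in a few lines. The sketch below is illustrative only (the function names, the 5-state discretisation and sigma = 0.1 are assumptions, not taken from the thesis): each NPT column is a doubly truncated Normal on [0, 1] whose mean is a weighted average of the parent states.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def tnorm_bin_probs(mu, sigma, n_bins=5, lo=0.0, hi=1.0):
    """Probabilities of equal-width bins under Normal(mu, sigma) truncated to [lo, hi]."""
    mass = norm_cdf(hi, mu, sigma) - norm_cdf(lo, mu, sigma)  # truncation mass
    edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
    return [(norm_cdf(edges[i + 1], mu, sigma) - norm_cdf(edges[i], mu, sigma)) / mass
            for i in range(n_bins)]

def ranked_npt_row(parent_values, weights, sigma=0.1):
    """One NPT row: the mean is a weighted average of the parents' ranked values."""
    mu = sum(w * v for w, v in zip(weights, parent_values)) / sum(weights)
    return tnorm_bin_probs(mu, sigma)

# Two parents both in a high state (0.9 on a [0, 1] scale), with weights 1 and 2:
row = ranked_npt_row([0.9, 0.9], [1.0, 2.0])
```

Because the distribution is renormalised over the truncated support, each row sums to one regardless of where the weighted mean falls, which is what makes elicitation cheap: the expert supplies only the weights and a variance.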
2

Modelling loss given default of corporate bonds and bank loans

Yao, Xiao January 2015 (has links)
Loss given default (LGD) modelling has become increasingly important for banks as they are required to comply with the Basel Accords in their internal computations of economic capital. Banks and financial institutions are encouraged to develop separate models for different types of products. In this thesis we apply and improve several new algorithms, including support vector machine (SVM) techniques and mixed effects models, to predict LGD for both corporate bonds and retail loans. SVM techniques are known to be powerful for classification problems and have been successfully applied to credit scoring and rating. We improve support vector regression (SVR) models by modifying them to account for the heterogeneity of bond seniorities, increasing the predictive accuracy of LGD. We find that the proposed improved versions of SVR techniques outperform other methods significantly at the aggregated level, and that the SVR methods demonstrate significantly better predictive ability than the other statistical models at the segmented level. To further investigate the impact of unobservable firm heterogeneity on modelling recovery rates of corporate bonds, a mixed effects model is considered, and we find that an obligor-varying linear factor model presents significant improvements in explaining the variation in recovery rates, with a remarkably high intra-class correlation being observed. Our study emphasises that the inclusion of an obligor-varying random effect term effectively explains the unobservable firm-level information shared by instruments of the same issuer. Finally, we incorporate the SVM techniques into a two-stage modelling framework to predict recovery rates of credit cards.
The two-stage model with a support vector machine classifier is found to be advantageous on an out-of-time sample compared with other methods, suggesting that an SVM model is preferred to a logistic regression at the classification stage. Based on the empirical evidence, we suggest that the choice of regression model is less influential in predicting recovery rates than the choice of classification method in the first stage of two-stage models. The risk-weighted assets of financial institutions are determined by the estimates of LGD together with PD and EAD. A robust and accurate LGD model helps banks in making business decisions, including setting credit risk strategies and pricing credit products. The regulatory capital determined by the expected and unexpected losses is also important to financial market stability and should be carefully examined by regulators. In summary, this research highlights the importance of LGD models and provides a new perspective for practitioners and regulators to manage credit risk quantitatively.
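The two-stage structure described above can be sketched as follows. This is a minimal stdlib illustration of the framework only: a midpoint-threshold rule and a one-feature OLS fit stand in for the SVM classifier and regression models of the thesis, and all data and names are hypothetical.

```python
def fit_two_stage(x, y):
    """Two-stage recovery-rate skeleton: stage 1 decides zero vs positive recovery,
    stage 2 models the recovery level on the positive subsample only.  The stand-in
    learners (midpoint threshold, one-feature OLS) replace the thesis's SVM/SVR."""
    zeros = [xi for xi, yi in zip(x, y) if yi == 0.0]
    pos = [(xi, yi) for xi, yi in zip(x, y) if yi > 0.0]
    m_zero = sum(zeros) / len(zeros)
    m_pos = sum(xi for xi, _ in pos) / len(pos)
    threshold = (m_zero + m_pos) / 2.0            # stage-1 decision boundary

    xs = [xi for xi, _ in pos]
    ys = [yi for _, yi in pos]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))    # stage-2 OLS on positives
    intercept = my - slope * mx

    def predict(xi):
        if (xi > threshold) == (m_pos > m_zero):  # classified as positive recovery
            return min(1.0, max(0.0, intercept + slope * xi))
        return 0.0
    return predict

# Hypothetical single-feature account data: zero recoveries cluster at low values.
feature = [0.10, 0.20, 0.15, 0.70, 0.80, 0.90]
recovery = [0.0, 0.0, 0.0, 0.50, 0.60, 0.70]
predict = fit_two_stage(feature, recovery)
```

The design point the abstract makes survives even in this toy version: the classification step determines whether a prediction is zero at all, so errors there dominate errors in the level model.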
3

Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

Geroukis, Asterios, Brorson, Erik January 2014 (has links)
In this study, we compare two statistical techniques, logistic regression and discriminant analysis, to see how well they classify companies into clusters formed from the solvency ratio, using principal components as independent variables. The principal components are constructed from different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 companies listed on the NASDAQ stock market, and use this as an a priori definition of risk. The results show that logistic regression outperforms discriminant analysis in classifying all of the groups except the middle one. We conclude that this is in line with previous studies.
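The general pipeline can be sketched with the standard library alone. This is an illustrative stand-in, not the study's implementation: two financial ratios are projected onto the first principal component (found by power iteration on the covariance matrix), and classification is by nearest class centroid on that component, a simple discriminant-style rule; all data are invented.

```python
def first_pc(X, iters=200):
    """Leading principal component of 2-D data via power iteration on the covariance."""
    n = len(X)
    mx = [sum(r[j] for r in X) / n for j in (0, 1)]
    C = [[sum((r[i] - mx[i]) * (r[j] - mx[j]) for r in X) / n for j in (0, 1)]
         for i in (0, 1)]
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [C[0][0] * v[0] + C[0][1] * v[1], C[1][0] * v[0] + C[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return mx, v

def centroid_classifier(X, labels):
    """Project onto the first PC and classify by nearest class centroid."""
    mx, v = first_pc(X)
    project = lambda r: (r[0] - mx[0]) * v[0] + (r[1] - mx[1]) * v[1]
    by_label = {}
    for r, l in zip(X, labels):
        by_label.setdefault(l, []).append(project(r))
    cents = {l: sum(ps) / len(ps) for l, ps in by_label.items()}
    return lambda r: min(cents, key=lambda l: abs(project(r) - cents[l]))

# Hypothetical (ratio_1, ratio_2) pairs for low- and high-solvency firms:
ratios = [(1.0, 1.2), (1.1, 0.9), (0.9, 1.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
groups = ["low", "low", "low", "high", "high", "high"]
predict = centroid_classifier(ratios, groups)
```

In practice one would use several components and a proper logistic or discriminant fit; the point of the sketch is only the order of operations: ratios → components → classifier.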
4

Estimation of a risk profile to operatives and the public from motorway hard-shoulder incursions

Michalaki, Paraskevi January 2017 (has links)
This project focuses on the risk to operatives and the public arising from hard-shoulder incursions on motorways, defined as the temporary violation of this lane by a vehicle travelling in the nearside lane. Even though interest has been raised around safety when stopping on the hard shoulder, no significant research has been conducted to investigate and quantify this risk. In this EngD project, motorway hard-shoulder accidents were investigated separately from those in the main traffic lanes to explore the factors affecting their severity and likelihood, and to identify potential differences, using discrete choice and time-series modelling techniques. Based on the safety-triangle theory, it was assumed that eliminating the contributory factors for injury accidents would also minimise the risk of hard-shoulder incursions, which were used as a risk indicator. An observation-based survey was conducted to gain initial knowledge of the frequency of incursions within a motorway stretch and of the basic conditions that may affect severity. Further to the survey, potential vehicle-detection solutions were investigated in order to collect hard-shoulder incursion data automatically. A radar sensor-based system was identified as the most suitable for this purpose and was adapted to suit the project's requirements. The sensor was installed on a motorway site, following a series of requirements to ensure safe and effective deployment. The data collected from the radar sensor were processed to minimise errors and then matched with the traffic-related and environmental data available for the same period. Using the Generalised Linear Autoregressive Moving Average (GLARMA) model, the final models identified the factors that most affect the occurrence of hard-shoulder incursions: temperature, humidity, traffic composition and average speed on the main carriageway.
Using these models, it is possible to quantify the risk and forecast when it will be minimised at a particular motorway section at any time. The risk is estimated according to the proposed explanatory variables, by inputting predictions of these conditions into the model. The model is a tool that may then allow operatives to be deployed on the network in the safest manner, according to the levels of tolerable risk.
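A GLARMA fit is normally done with specialised statistical packages; purely as a rough illustration of the model family (not the thesis's model), the sketch below fits a Poisson log-linear count model with one covariate and a lagged-count feedback term by gradient ascent on the log-likelihood. All variable names and data are invented.

```python
import math

def fit_poisson_ar(y, x, steps=2000, lr=0.01):
    """log mu_t = b0 + b1*x_t + b2*log(1 + y_{t-1}); gradient ascent on the
    Poisson log-likelihood.  A crude stand-in for a GLARMA-type count model."""
    feats = [(1.0, x[t], math.log(1.0 + y[t - 1])) for t in range(1, len(y))]
    obs = y[1:]
    b = [0.0, 0.0, 0.0]
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for f, yt in zip(feats, obs):
            mu = math.exp(sum(bj * fj for bj, fj in zip(b, f)))
            for j in range(3):
                grad[j] += (yt - mu) * f[j]      # d loglik / d b_j for a log link
        b = [bj + lr * gj / len(obs) for bj, gj in zip(b, grad)]
    return b

def loglik(b, y, x):
    """Poisson log-likelihood up to the constant sum of log(y_t!)."""
    total = 0.0
    for t in range(1, len(y)):
        f = (1.0, x[t], math.log(1.0 + y[t - 1]))
        eta = sum(bj * fj for bj, fj in zip(b, f))
        total += y[t] * eta - math.exp(eta)
    return total

# Invented hourly incursion counts and a standardised weather covariate:
y = [2, 3, 1, 4, 2, 3, 5, 2, 1, 3, 4, 2]
x = [0.5, 0.1, 0.8, 0.2, 0.6, 0.4, 0.9, 0.3, 0.7, 0.2, 0.5, 0.4]
b_hat = fit_poisson_ar(y, x)
```

The forecasting use described in the abstract then amounts to plugging predicted covariate values into the fitted linear predictor and reading off the expected incursion rate.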
5

A systems thinking approach for modelling supply chain risk propagation

Ghadge, Abhijeet January 2013 (has links)
Supply Chain Risk Management (SCRM) is rapidly becoming one of the most sought-after research areas, owing to the impact of recent supply chain disruptions on the global economy. The thesis begins with a systematic literature review of developments within the broad domain of SCRM over the past decade. Thematic and descriptive analysis, supported by modern knowledge management techniques, brings forward seven distinctive research gaps for future research in SCRM. Overlapping research findings from an industry perspective, coupled with the SCRM research gaps from the systematic literature review, helped to define the research problem for this study. The thesis focuses on a holistic and systematic approach to modelling risks within supply chain and logistics networks. The systems thinking approach followed conceptualises the phenomenon of risk propagation using several recent case studies, workshop findings and focus studies. Risk propagation is multidimensional and propagates beyond goods, finance and information resources; it cascades into technology, human resource and socio-ecological dimensions. Three risk propagation zones are identified that build the fundamentals for modelling risk behaviour in terms of cost and delay. The development of a structured framework for SCRM, a holistic supply chain risk model and a quantitative research design for risk assessment are the major contributions of this research. The developed risk assessment platform is able to capture the fracture points and cascading impact within a supply chain and logistics network. A reputed aerospace and defence organisation in the UK was used to test the experimental modelling set-up for its viability and for bridging the gap between theory and practice. The combined statistical and simulation modelling approach provides a new perspective on assessing the complex behavioural performance of risks during multiple interactions within a network.
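Cascading risk propagation through a network can be illustrated with a small Monte Carlo sketch. Everything below (the three-tier network, the trigger and pass-through probabilities, and the delay values) is an invented example of the simulation idea, not the thesis's model.

```python
import random

def simulate_cascade(graph, trigger_prob, pass_prob, node_delay, runs=10000, seed=7):
    """Monte Carlo estimate of the expected total delay when disruptions propagate
    downstream through a directed supply network."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        # Independent trigger events at each node.
        failed = {n for n, p in trigger_prob.items() if rng.random() < p}
        frontier = list(failed)
        while frontier:                   # a failed node may disrupt its customers
            node = frontier.pop()
            for nxt in graph.get(node, []):
                if nxt not in failed and rng.random() < pass_prob:
                    failed.add(nxt)
                    frontier.append(nxt)
        total += sum(node_delay[n] for n in failed)
    return total / runs

# Hypothetical chain: tier-2 supplier -> tier-1 supplier -> OEM.
graph = {"tier2": ["tier1"], "tier1": ["oem"], "oem": []}
trigger = {"tier2": 0.10, "tier1": 0.02, "oem": 0.01}
delay = {"tier2": 2.0, "tier1": 5.0, "oem": 10.0}   # days of delay if disrupted
expected = simulate_cascade(graph, trigger, 0.5, delay)
```

Even this toy version exhibits the "fracture point" effect the abstract describes: a cheap-to-disrupt upstream node contributes mostly through the expensive downstream failures it triggers.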
6

La perception de risque d'investissement / Investment risk perception

De jong, Marielle 04 June 2010 (has links)
In this thesis, three practical cases are studied in the field of fund management where investment risks appear to be misperceived because of ambiguities in the way risk is measured. These ambiguities are analysed as an elementary error, or an oversimplification of reality, that can lead to a confused approach. The three studies are set in a usual investment context and concern equities, bonds and currencies respectively; they fall within the traditional conventions of finance theory. The three studies, each the subject of a separate chapter, show how the perception of risk can be distorted from the very first processing of financial data, even before any risk evaluation takes place. Several risk measures, although common in finance, prove reductive or ill-suited to the circumstances in which they are used. We describe how, in some cases, poor investment decisions can be taken because of measurement error, or how, in other cases, the debate in the economic literature has been steered in the wrong direction. The studies underline that the appreciation of financial risk is far from trivial, even in areas generally considered well understood. A systematic approach was adopted to establish at which precise point the analyses incorporate a misperception of risk.
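One concrete, generic example of such measurement ambiguity (not taken from the thesis): square-root-of-time scaling of volatility assumes independent returns, and understates multi-period risk when returns are positively autocorrelated. The exact AR(1) horizon variance makes this visible.

```python
def horizon_variance_ar1(sigma2, phi, n):
    """Exact variance of the sum of n AR(1) returns with marginal variance sigma2
    and first-order autocorrelation phi:
    Var = sigma2 * (n + 2 * sum_{k=1}^{n-1} (n - k) * phi**k)."""
    return sigma2 * (n + 2 * sum((n - k) * phi ** k for k in range(1, n)))

daily_var = 0.0001            # a 1% daily volatility, squared (illustrative number)
n = 5                         # one trading week
iid_var = n * daily_var       # what sqrt-of-time scaling silently assumes
true_var = horizon_variance_ar1(daily_var, 0.3, n)
understatement = true_var / iid_var    # exceeds 1 whenever phi > 0
```

With phi = 0.3 the weekly variance is about 61% larger than the i.i.d. rule reports, so the ambiguity lies not in the data but in the convention applied to it.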
7

Secure communications for critical infrastructure control systems

Dawson, Robert Edward January 2008 (has links)
In March 2000, 1 million litres of raw sewage were released into the water system of Maroochy Shire on Queensland's Sunshine Coast. This environmental disaster was caused by a disgruntled ex-contractor using a radio transmitter to illicitly access the electronically controlled pumps in the control system. In 2007 CNN screened video footage of an experimental attack against an electrical generator. The attack caused the generator to shake and smoke, visually showing the damage a cyber attack can cause. These attacks highlight the importance of securing the control systems on which our critical infrastructures depend. This thesis addresses securing control systems, focusing on securing the communications of supervisory control and data acquisition (SCADA) systems. We review the architectures of SCADA systems and produce a list of the system constraints that relate to securing them. With these constraints in mind, we survey the existing work in both information and SCADA security, observing the need to investigate further the problem of secure communications for SCADA systems. We then present risk modelling techniques, and model the risk in a simple SCADA system using the ISM, a software tool for modelling information security risk. In modelling the risk, we verify the hypothesis that securing the communications channel is an essential part of an effective security strategy for SCADA systems. Having examined risk modelling and established the value of securing communications, we move on to key management for SCADA systems. Appropriate key management techniques are a crucial part of secure communications, and form an important part of the contributions made in this work. We present a key management protocol designed to run under the constraints specific to SCADA systems. A reductionist security proof is developed for a simplified version of the protocol, showing it is secure in the Bellare-Rogaway model.
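As a generic illustration of one common building block of such protocols (this is not the protocol proposed in the thesis): in a nonce-based exchange, both ends of a SCADA link can derive the same session key from a pre-shared master key using HMAC-SHA256 from Python's standard library. All names here are hypothetical.

```python
import hmac
import hashlib
import secrets

def derive_session_key(master_key: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Both parties derive an identical session key from the pre-shared master key
    and the two exchanged nonces; HMAC-SHA256 serves as the derivation function."""
    return hmac.new(master_key, b"session" + nonce_a + nonce_b, hashlib.sha256).digest()

# One illustrative run: the nonces travel in the clear, the master key never does.
master = secrets.token_bytes(32)
na = secrets.token_bytes(16)    # nonce chosen by the remote terminal unit (RTU)
nb = secrets.token_bytes(16)    # nonce chosen by the master station
key_rtu = derive_session_key(master, na, nb)
key_mtu = derive_session_key(master, na, nb)
```

Fresh nonces give a fresh key each run, which limits the value of a captured session key; constraints such as low bandwidth and long-lived field devices are precisely why SCADA key management needs lightweight constructions like this rather than heavyweight PKI exchanges.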
8

Modelování rizik v dopravě / Risk modelling in transportation

Lipovský, Tomáš January 2016 (has links)
This thesis deals with the theoretical basics of risk modelling in transportation, and with optimisation using aggregated traffic data. It proposes a procedure, and implements an application, for solving the network problem of finding the shortest path between geographical points. The thesis includes a method for evaluating paths according to the frequency of traffic incidents, based on real historical data, and provides a graphical interface for presenting the achieved results.
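One way such a risk-aware path evaluation could look (an illustrative sketch, not the thesis's implementation): run Dijkstra's algorithm with edge costs that inflate each road segment's length by its historical incident rate, so a slightly longer but safer route can win.

```python
import heapq

def safest_shortest_path(edges, start, goal, risk_weight=1.0):
    """Dijkstra over undirected edges (u, v, length_km, incidents_per_year); the
    cost of an edge is its length inflated by its historical incident rate."""
    graph = {}
    for u, v, length, incidents in edges:
        cost = length * (1.0 + risk_weight * incidents)
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, node = [goal], goal              # walk predecessors back to the start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical road segments: (from, to, km, incidents/year).  The direct A-C road
# is shorter but has a much worse incident history.
edges = [("A", "B", 10, 0.1), ("B", "C", 10, 0.1), ("A", "C", 18, 2.0)]
path, cost = safest_shortest_path(edges, "A", "C")
```

With `risk_weight` exposed as a parameter, the same routine answers both the plain shortest-path question (weight 0) and the incident-penalised one.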
9

Radiomics risk modelling using machine learning algorithms for personalised radiation oncology

Leger, Stefan 18 June 2019 (has links)
One major objective in radiation oncology is the personalisation of cancer treatment. The implementation of this concept requires the identification of biomarkers which precisely predict therapy outcome. Besides molecular characterisation of tumours, a new approach known as radiomics aims to characterise tumours using imaging data. In the context of the presented thesis, radiomics was established at OncoRay to improve the performance of imaging-based risk models. Two software frameworks were developed, for image feature computation and for risk model construction. A novel data-driven approach for the correction of intensity non-uniformity in magnetic resonance imaging data was developed to improve image quality prior to feature computation. Further, different feature selection methods and machine learning algorithms for time-to-event survival data were evaluated to identify suitable algorithms for radiomics risk modelling. An improved model performance could be demonstrated using computed tomography data acquired during the course of treatment. Subsequently, tumour sub-volumes were analysed, and it was shown that the tumour rim contains more relevant prognostic information than the corresponding core. The incorporation of such spatial diversity information is a promising way to improve the performance of risk models.
Table of contents: 1. Introduction; 2. Theoretical background; 3. A physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging; 4. Comparison of feature selection methods and machine learning algorithms for radiomics time-to-event survival models; 5. Characterisation of tumour phenotype using computed tomography imaging during treatment; 6. Tumour phenotype characterisation using tumour sub-volumes; 7. Summary and further perspectives; 8. Zusammenfassung (German summary).
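Time-to-event risk models like those evaluated here are commonly assessed with the concordance index. Below is a minimal stdlib sketch of Harrell's C-index for right-censored data; the toy cohort is invented.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (the earlier time is an observed
    event, event flag 1), count pairs where the higher predicted risk belongs to
    the earlier failure; ties in score count one half."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:   # pair (i, j) is comparable
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Invented cohort, perfectly ranked: shorter survival <-> higher predicted risk
# (the third patient is censored, so pairs starting at that time are skipped).
c = concordance_index([2, 5, 8, 11], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random risk ranking and 1.0 to a perfect one, which is why it is a natural yardstick when comparing feature selection methods and learning algorithms for survival data.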
10

Modelling Risk in Real-Life Multi-Asset Portfolios / Riskmodellering av verkliga portföljer med varierande tillgångsklasser

Hahn, Karin, Backlund, Axel January 2023 (has links)
We develop a risk factor model based on data from a large number of portfolios spanning multiple asset classes. The risk factors are selected based on economic theory, through an analysis of the asset holdings, as well as statistical tests. As many assets have limited historical data available, we implement and analyse the impact of regularisation to handle sparsity. Based on the factor model, two parametric methods for calculating Value-at-Risk (VaR) for a portfolio are developed: one with constant volatility and one with a CCC-GARCH volatility-updating scheme. These methods are evaluated through backtesting on daily and weekly returns of a selected set of portfolios whose contents reflect the larger majority well. A historical-data approach for calculating VaR serves as a benchmark model. We find that under daily returns, the historical-data method outperforms the factor models in terms of VaR violation rates; none of the models, however, yields independent violations. Under weekly returns, both factor models produce more accurate violation rates than the historical-data model, with the CCC-GARCH model also yielding independent VaR violations for almost all portfolios, due to its ability to adjust VaR estimates upwards in periods of increased market volatility. We conclude that if weekly VaR estimates are acceptable, tailored risk factor models provide accurate measures of portfolio risk.
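The constant-volatility variant can be illustrated with a one-factor sketch (the single-factor simplification and all numbers are assumptions, not the thesis's multi-factor model): portfolio volatility combines the systematic factor exposure with an idiosyncratic term, and the backtest simply counts VaR violations.

```python
def parametric_var(beta, factor_vol, idio_vol, z=1.645):
    """One-factor parametric VaR (95% by default, zero-mean returns): portfolio
    volatility combines exposure beta to a factor with volatility factor_vol and
    an idiosyncratic component; VaR is the z-quantile of the loss distribution."""
    sigma = (beta ** 2 * factor_vol ** 2 + idio_vol ** 2) ** 0.5
    return z * sigma

def violation_rate(returns, var):
    """Backtest: fraction of periods in which the realised loss exceeded VaR."""
    breaches = sum(1 for r in returns if -r > var)
    return breaches / len(returns)

# Illustrative inputs: beta 1.2 to a factor with 2% vol, 1% idiosyncratic vol.
var95 = parametric_var(beta=1.2, factor_vol=0.02, idio_vol=0.01)
rate = violation_rate(
    [-0.05, 0.01, -0.01, 0.02, -0.06, 0.0, 0.01, 0.03, -0.02, 0.01], var95)
```

At the 95% level a well-calibrated model should breach about 5% of the time; checking that the breaches are also independent (not clustered in volatile periods) is exactly the property the CCC-GARCH variant above is designed to restore.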
