  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models

Ye, Fan (May 2011)
Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. Although a large amount of work has been conducted on this topic, it has usually focused on individual model types, and only a limited amount of research has compared the performance of different crash severity models. In addition, three major issues related to the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification, and underreporting in crash data. In this research, therefore, three commonly used traffic crash severity models, the multinomial logit model (MNL), the ordered probit model (OP), and the mixed logit model (ML), were studied in terms of the effects of sample size, model misspecification, and underreporting in crash data, via a Monte Carlo approach using simulated and observed crash data.

The results on sample size are consistent with prior expectations: small sample sizes significantly affect the development of crash severity models, no matter which model type is used. Among the three models, the ML model required the largest sample size and the OP model the smallest, with the requirement for the MNL model falling between the two.

When the sample size is sufficient, the model misspecification analysis leads to the following suggestions: to decrease the bias and variability of the estimated parameters, logit models should be selected over probit models, and a more general and flexible model, such as one allowing randomness in the parameters (i.e., the ML model), should be preferred.

Finally, the analysis of underreported data showed that none of the three models is immune to the underreporting issue. To minimize the bias and reduce the variability of the model, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severities should be ranked from fatal to property-damage-only (PDO) in descending order. Furthermore, when full or partial information about the unreported rates for each severity level is known, treating the crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves the estimation for all three models compared with the results produced by the maximum likelihood estimator (MLE).
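The reweighting idea behind WESMLE can be sketched in a few lines. The following Python snippet is an illustration on simulated data, not the dissertation's code: the two-covariate design, the 50% reporting rate for PDO crashes, and all variable names are assumptions. It fits a three-level MNL model by plain MLE and by WESMLE, where each reported crash is weighted by the ratio of its severity level's population share to its share in the underreported sample.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k, J = 5000, 3, 3                      # crashes, covariates, severity levels (0 = PDO baseline)
X = rng.normal(size=(n, k))
beta_true = np.array([[0.8, -0.5, 0.3],   # injury vs PDO
                      [1.2, 0.4, -0.6]])  # fatal vs PDO
util = np.column_stack([np.zeros(n), X @ beta_true.T])
prob = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=p) for p in prob])

# Simulated underreporting: PDO crashes are reported with probability 0.5 only.
report_prob = np.array([0.5, 1.0, 1.0])
kept = rng.random(n) < report_prob[y]
Xo, yo = X[kept], y[kept]

def neg_loglik(theta, X, y, w):
    """Weighted negative log-likelihood of the MNL model."""
    b = theta.reshape(J - 1, k)
    u = np.column_stack([np.zeros(len(y)), X @ b.T])
    ll = u[np.arange(len(y)), y] - np.log(np.exp(u).sum(axis=1))
    return -(w * ll).sum()

theta0 = np.zeros((J - 1) * k)
# Plain MLE ignores the underreporting.
mle = minimize(neg_loglik, theta0, args=(Xo, yo, np.ones(len(yo))), method="BFGS")
# WESMLE: weight each case by (population share) / (sample share) of its outcome,
# computable when the reporting rates per severity level are known.
pop_share = np.bincount(y, minlength=J) / n
obs_share = np.bincount(yo, minlength=J) / len(yo)
w = (pop_share / obs_share)[yo]
wesmle = minimize(neg_loglik, theta0, args=(Xo, yo, w), method="BFGS")
print("mean abs. bias, MLE:   ", np.abs(mle.x.reshape(J - 1, k) - beta_true).mean())
print("mean abs. bias, WESMLE:", np.abs(wesmle.x.reshape(J - 1, k) - beta_true).mean())
```

Because the simulated PDO crashes are reported only half the time, the unweighted MLE is biased; the Manski-Lerman-style weights recover roughly unbiased estimates when the true reporting rates are known, mirroring the abstract's finding.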
2

Improving safety of teenage and young adult drivers in Kansas

Amarasingha, Niranga
Doctor of Philosophy / Department of Civil Engineering / Sunanda Dissanayake

Young drivers have elevated motor vehicle crash rates compared to other drivers. This dissertation investigated the characteristics, contributory causes, and factors that increase the injury severity of young-driver crashes in Kansas by comparing them with those of more experienced drivers. Crash data were obtained from the Kansas Department of Transportation, and young drivers were divided into two groups, 15-19 years (teen) and 20-24 years (young adult), for detailed investigation.

Using data from 2006 to 2009, frequencies, percentages, and crash rates were calculated for each characteristic and contributory cause. Contingency table analysis and odds ratio (OR) analysis were carried out to identify overrepresented factors in young-driver crashes compared to experienced drivers. Young drivers were more likely than experienced drivers to be involved in crashes due to failure to yield right-of-way, disregarding traffic signs or signals, turning, or lane changing. Ordered logistic regression models were developed to identify factors affecting severity in young-driver crashes. According to the model results, seat belt use, driving at low speeds, driving newer vehicles, and driving with an adult passenger decreased the injury severity of the driver, while alcohol involvement, driving on roadways with high posted speed limits, and ejection or entrapment at the time of the crash increased it.

Based on the identified critical factors, countermeasure ideas were suggested to improve the safety of young drivers. It is important for teen drivers and parents/guardians to gain a better understanding of these critical factors, which helps in preventing crashes and minimizing driving risk. When planning teen driving, parents/guardians can consider high-risk conditions such as driving in the dark, on weekends, on rural roads, on wet road surfaces, and on roadways with high speed limits. Protective devices, crash-worthy cars, and safer road infrastructure, such as rumble strips and forgiving roadsides, can particularly reduce young drivers' risk, and the predictable traffic situations and low complexity resulting from improved road infrastructure are beneficial for young drivers. The effectiveness of the Kansas Graduated Driver Licensing (GDL) system needs to be investigated in the future.
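Both analysis steps mentioned above, odds ratios from contingency tables and an ordered logistic regression of injury severity, are standard techniques. A minimal sketch on synthetic data (the column names, effect sizes, and the teen/alcohol association are invented for illustration, not taken from the Kansas crash file) might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 4000
teen = rng.integers(0, 2, n)                              # hypothetical indicators
alcohol = rng.binomial(1, np.where(teen == 1, 0.15, 0.08))
seat_belt = rng.integers(0, 2, n)
high_speed_road = rng.integers(0, 2, n)

# Latent severity rises with alcohol and high-speed roads, falls with belt use.
latent = 0.9 * alcohol + 0.6 * high_speed_road - 0.8 * seat_belt + rng.logistic(size=n)
severity = pd.cut(latent, [-np.inf, 0.0, 1.5, np.inf],
                  labels=["PDO", "injury", "fatal"])      # ordered categorical
df = pd.DataFrame({"teen": teen, "alcohol": alcohol, "seat_belt": seat_belt,
                   "high_speed_road": high_speed_road, "severity": severity})

# Odds ratio from a 2x2 contingency table: are teen drivers over-represented
# in alcohol-related crashes?
tab = pd.crosstab(df["teen"], df["alcohol"])
odds_ratio = (tab.loc[1, 1] * tab.loc[0, 0]) / (tab.loc[1, 0] * tab.loc[0, 1])
print(f"teen x alcohol odds ratio: {odds_ratio:.2f}")     # ~2 by construction

# Ordered logit of injury severity on the contributing factors
# (OrderedModel estimates the thresholds itself, so no constant is added).
res = OrderedModel(df["severity"],
                   df[["seat_belt", "alcohol", "high_speed_road"]],
                   distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```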
3

Etude des marchés d'assurance non-vie à l'aide d'équilibre de Nash et de modèle de risques avec dépendance / A study of non-life insurance markets using Nash equilibria and risk models with dependence

Dutang, Christophe (31 May 2012)
In non-life actuarial mathematics, different quantitative aspects of insurance activity are studied. This thesis aims at explaining the interactions among economic agents, namely the insured, the insurer, and the market, under different perspectives.

Chapter 1 emphasizes how essential the market premium is in the customer's decision to lapse or to renew with the same insurer. The relevance of a market model is established.

In Chapter 2, we address this issue by using noncooperative game theory to model competition. In the current literature, most competition models are reduced to an optimisation of premium volume based on the simplistic picture of an insurer against the market. Starting with a one-period model, a game of insurers is formulated, where the existence and uniqueness of a Nash equilibrium are verified. The properties of premium equilibria are examined to better understand the key factors of leadership positions over other insurers. Then, a dynamic framework is derived from the one-period game by repeating the one-shot game over several periods. A Monte Carlo approach is used to assess the probability of being insolvent, staying a leader, or disappearing from the insurance game. This gives further insights into the presence of non-life insurance market cycles.

A survey of computational methods for a Nash equilibrium under constraints is conducted in Chapter 3. Such a generalized Nash equilibrium of n players is computed by solving a semismooth equation based on a Karush-Kuhn-Tucker reformulation of the generalized Nash equilibrium problem. Solving semismooth equations requires the generalized Jacobian for locally Lipschitzian functions. A convergence study and a comparison of the optimisation methods are carried out.

Finally, in Chapter 4, we focus on ruin probability computation, another fundamental topic of non-life insurance. In this chapter, a risk model with dependence among claim severities or claim waiting times is studied. Asymptotics of infinite-time ruin probabilities are obtained for a wide class of risk models with dependence among claims. Furthermore, we obtain new explicit formulas for the ruin probability in discrete time. In this discrete-time framework, the dependence structure analysis allows us to quantify the maximal distance between the joint distribution functions of claim severities in the continuous-time and discrete-time versions.
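As a rough illustration of the one-period premium game of Chapter 2, the toy sketch below sets up n insurers with a logit-style demand function and searches for a Nash equilibrium by iterated best responses. This simple fixed-point scheme is a stand-in for, not a reproduction of, the semismooth Karush-Kuhn-Tucker solver developed in Chapter 3, and the demand and cost specification is an assumption, not the thesis's market model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n_insurers, claim_cost, sensitivity = 3, 1.0, 5.0
market_size = 1.0

def demand(premiums, j):
    # Logit market shares: lower premiums attract more policyholders.
    attraction = np.exp(-sensitivity * np.asarray(premiums))
    return market_size * attraction[j] / attraction.sum()

def profit(p_j, premiums, j):
    # Insurer j's expected profit when it charges p_j and rivals are fixed.
    p = np.array(premiums, dtype=float)
    p[j] = p_j
    return (p_j - claim_cost) * demand(p, j)

def best_response_iteration(p0, tol=1e-8, max_iter=500):
    # Cycle through insurers, letting each best-respond to the others,
    # until premiums stop moving (a Nash equilibrium of the one-shot game).
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        p_old = p.copy()
        for j in range(n_insurers):
            res = minimize_scalar(lambda x: -profit(x, p, j),
                                  bounds=(claim_cost, 5 * claim_cost),
                                  method="bounded")
            p[j] = res.x
        if np.max(np.abs(p - p_old)) < tol:
            break
    return p

eq = best_response_iteration([1.5, 2.0, 2.5])
print("equilibrium premiums:", np.round(eq, 4))
```

With symmetric claim costs the iteration settles on a symmetric premium above the expected claim cost; the thesis's generalized Nash setting adds constraints such as solvency requirements, which is what motivates the semismooth KKT machinery of Chapter 3.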
4

Matematické modelování v neživotním pojištění / Mathematical modelling in general insurance

Zajíček, Jakub (January 2015)
This diploma thesis deals with mathematical models in general insurance. Its aim is to analyse selected mathematical models that are widely used in general insurance for estimating insurance portfolio statistics, for pricing, and for calculating the regulatory capital requirement. Claim frequency models, claim severity models, aggregate loss models, and generalized linear models are analysed. The thesis consists of a theoretical part, which describes the selected models, and a practical part, in which the described models are applied to a real dataset using the statistical software R. It was shown that maximum likelihood parameter estimates are of better quality than those obtained by the method of moments or the quantile method. The different computational methods for the aggregate loss distribution give comparable results, mostly owing to the large number of observations. In the tariff analysis, the most significant factors were found to be the driver's age and the driver's area of residence.
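The thesis works in R; as an illustrative stand-in, the following Python sketch reproduces the workflow the abstract describes: simulate claim severities from a gamma distribution, fit the severity both by maximum likelihood and by the method of moments, and estimate the aggregate loss distribution of a compound Poisson-gamma portfolio by Monte Carlo, including a high quantile of the kind used in capital requirement calculations. All parameter values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, shape, scale = 50.0, 2.0, 500.0      # invented frequency and severity parameters

# Observed claim severity history for the portfolio.
severities = rng.gamma(shape, scale, size=1000)

# Maximum likelihood fit of the gamma severity (location pinned at 0).
shape_mle, _, scale_mle = stats.gamma.fit(severities, floc=0)

# Method-of-moments fit: match the sample mean and variance.
m, v = severities.mean(), severities.var()
shape_mm, scale_mm = m ** 2 / v, v / m
print(f"MLE:     shape={shape_mle:.2f}, scale={scale_mle:.2f}")
print(f"Moments: shape={shape_mm:.2f}, scale={scale_mm:.2f}")

# Monte Carlo estimate of the aggregate annual loss S = X_1 + ... + X_N
# under the fitted model, N ~ Poisson(lam).
counts = rng.poisson(lam, size=20_000)
S = np.array([rng.gamma(shape_mle, scale_mle, size=c).sum() for c in counts])
print(f"E[S] ~ {S.mean():.0f}, 99.5% quantile ~ {np.quantile(S, 0.995):.0f}")
```

With a reasonably large claim history, the MLE and method-of-moments fits land close together, which is consistent with the abstract's observation that the computational methods give comparable results when many observations are available.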
