71

Node Classification on Relational Graphs Using Deep-RGCNs

Chandra, Nagasai 01 March 2021 (has links) (PDF)
Knowledge Graphs are fascinating concepts in machine learning as they can hold usefully structured information in the form of entities and their relations. Despite the valuable applications of such graphs, most knowledge bases remain incomplete. This missing information harms downstream applications such as information retrieval and opens a window for research in statistical relational learning tasks such as node classification and link prediction. This work proposes a deep learning framework based on existing relational graph convolutional (R-GCN) layers to learn on the highly multi-relational data characteristic of realistic knowledge graphs for node property classification tasks. We propose a deep and improved variant, Deep-RGCNs, with dense and residual skip connections between layers. These skip connections are known to be very successful in popular deep CNN architectures such as ResNet and DenseNet. In our experiments, we investigate and compare the performance of Deep-RGCNs against different baselines on the multi-relational graph benchmark datasets AIFB and MUTAG, and show how the deep architecture boosts performance in the task of node property classification. We also study the training performance of Deep-RGCNs (with N layers) and discuss the vanishing-gradient and over-smoothing problems common to deeper GCN architectures.
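As a rough illustration of the architecture this abstract describes (not the author's code), here is a minimal R-GCN layer with a residual skip connection in plain PyTorch; the dense per-relation adjacency matrices, dimensions, and names are all hypothetical.

```python
import torch
import torch.nn as nn

class ResidualRGCNLayer(nn.Module):
    """One R-GCN layer: per-relation message passing plus a self-loop,
    followed by a ReLU and an identity (residual) skip connection."""

    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_weights = nn.Parameter(0.01 * torch.randn(num_relations, dim, dim))
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h:   (N, dim) node features
        # adj: (num_relations, N, N), each slice row-normalized so that
        #      messages are averaged over a node's neighbors for that relation
        out = self.self_loop(h)
        for r in range(adj.shape[0]):
            out = out + adj[r] @ h @ self.rel_weights[r]
        # Residual connection: eases optimization of deep stacks and
        # counteracts over-smoothing.
        return torch.relu(out) + h

# Hypothetical usage on a tiny random graph.
N, dim, R = 8, 16, 3
h = torch.randn(N, dim)
adj = torch.rand(R, N, N)
adj = adj / adj.sum(dim=-1, keepdim=True)   # crude row normalization
deep_rgcn = [ResidualRGCNLayer(dim, R) for _ in range(4)]
for layer in deep_rgcn:
    h = layer(h, adj)
print(h.shape)   # torch.Size([8, 16])
```

Stacking several such layers gives the kind of deep R-GCN the abstract describes; the identity shortcut is what lets gradients reach the lower layers.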
72

Regression Analysis for Ordinal Outcomes in Matched Study Design: Applications to Alzheimer's Disease Studies

Austin, Elizabeth 09 July 2018 (has links) (PDF)
Alzheimer's Disease (AD) affects nearly 5.4 million Americans as of 2016 and is the most common form of dementia. The disease is characterized by the presence of neurofibrillary tangles and amyloid plaques [1]. The amount of plaques is measured by Braak stage, post-mortem. It is known that AD is positively associated with hypercholesterolemia [16]. As statins are the most widely used cholesterol-lowering drugs, there may be associations between statin use and AD. We hypothesize that those who use statins, specifically lipophilic statins, are more likely to have a low Braak stage in post-mortem analysis. In order to address this hypothesis, we wished to fit a regression model for ordinal outcomes (e.g., high, moderate, or low Braak stage) using data collected from the National Alzheimer's Coordinating Center (NACC) autopsy cohort. As the outcomes were matched on the length of follow-up, a conditional likelihood-based method is often used to estimate the regression coefficients. However, it can be challenging to solve the conditional likelihood-based estimating equation numerically, especially when there are many matching strata. Given that the likelihood of a conditional logistic regression model is equivalent to the partial likelihood from a stratified Cox proportional hazards model, the existing R function for a Cox model, coxph(), can be used to estimate a conditional logistic regression model. We would like to investigate whether this strategy can be extended to a regression model for ordinal outcomes. More specifically, our aims are to (1) demonstrate the equivalence between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the likelihood of a conditional logistic regression model, (2) prove equivalence, or lack thereof, between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the conditional likelihood of models appropriate for multiple ordinal outcomes: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model, and (3) clarify how to set up a stratified discrete-time Cox proportional hazards model for multiple ordinal outcomes with matching using the existing coxph() R function and how to interpret the resulting regression coefficient estimates. We verified these theoretical results through simulation studies. We simulated data from the three models of interest: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model. We fit a Cox model using the existing coxph() R function to the simulated data produced by each model and compared the coefficient estimates obtained. Lastly, we fit a Cox model to the NACC dataset. We used Braak stage, with three ordinal categories, as the outcome variable. We included predictors for age at death, sex, genotype, education, comorbidities, number of days having taken lipophilic statins, number of days having taken hydrophilic statins, and time to death. We matched cases to controls on the length of follow-up. All findings and their implications are discussed in detail.
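As a toy illustration of the conditional-likelihood idea discussed above (not the thesis's ordinal extension or its coxph() workflow), the sketch below maximizes the conditional logistic likelihood for 1:1 matched binary pairs with numpy/scipy; each pair's contribution equals the exact partial-likelihood term of a stratified Cox model with one event per stratum. The simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pairs, p = 200, 2
beta_true = np.array([0.8, -0.5])

# Hypothetical covariates: the case in each pair is shifted along beta_true
# so that the covariates are associated with case status.
x_case = rng.normal(size=(n_pairs, p)) + 0.3 * beta_true
x_ctrl = rng.normal(size=(n_pairs, p))

def neg_cond_loglik(beta):
    # Per-pair contribution: exp(x_case.b) / (exp(x_case.b) + exp(x_ctrl.b)),
    # the same term as the exact stratified-Cox partial likelihood with one
    # event per stratum. Computed on the log scale for numerical stability.
    eta_case = x_case @ beta
    eta_ctrl = x_ctrl @ beta
    return -np.sum(eta_case - np.logaddexp(eta_case, eta_ctrl))

fit = minimize(neg_cond_loglik, x0=np.zeros(p), method="BFGS")
print("conditional logistic estimate:", np.round(fit.x, 3))
```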
73

An Application of an In-Depth Advanced Statistical Analysis in Exploring the Dynamics of Depression, Sleep Deprivation, and Self-Esteem

Gaffari, Muslihat 01 August 2024 (has links) (PDF)
Depression, intertwined with sleep deprivation and self-esteem, presents a significant challenge to mental health worldwide. The research presented here employs advanced statistical methodologies to unravel the complex interactions among these factors. Through log-linear homogeneous-association models, multinomial logistic regression, and generalized linear models, the study scrutinizes large datasets to uncover nuanced patterns and relationships. By elucidating how depression, sleep disturbances, and self-esteem intersect, the research aims to deepen understanding of mental health phenomena. The study clarifies the relationships between these variables and explores reasons for prioritizing depression research. It evaluates how statistical models, such as log-linear models, multinomial logistic regression, and generalized linear models, shed light on their intricate dynamics. The findings offer insights into risk and protective factors associated with these variables, guiding tailored interventions for individuals in psychological distress. Additionally, policymakers can use these insights to develop comprehensive strategies promoting mental health and well-being at a societal level.
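As a hedged illustration of one of the named models (not the study's data or code), the sketch below fits a homogeneous-association log-linear model, i.e. all two-way interactions and no three-way term, to a made-up 2x2x2 table of counts using statsmodels.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical cell counts for three binary indicators:
# depression, sleep deprivation, low self-esteem.
levels = [(d, s, e) for d in (0, 1) for s in (0, 1) for e in (0, 1)]
counts = [120, 60, 45, 40, 35, 30, 25, 55]            # made-up numbers
df = pd.DataFrame(levels, columns=["depr", "sleep", "esteem"])
df["count"] = counts

model = smf.glm(
    "count ~ depr*sleep + depr*esteem + sleep*esteem",  # homogeneous association
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
# Each fitted two-way interaction is a log conditional odds ratio assumed
# constant across the levels of the third variable.
```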
74

Deep Learning One-Class Classification With Support Vector Methods

Hampton, Hayden D 01 January 2024 (has links) (PDF)
Through the specialized lens of one-class classification, anomalies (irregular observations that uncharacteristically diverge from normative data patterns) are comprehensively studied. This dissertation focuses on advancing boundary-based methods in one-class classification, a critical approach to anomaly detection. These methodologies delineate optimal decision boundaries, thereby facilitating a distinct separation between normal and anomalous observations. Encompassing traditional approaches such as the One-Class Support Vector Machine and Support Vector Data Description, recent adaptations in deep learning offer rich ground for innovation in anomaly detection. This dissertation proposes three novel deep learning methods for one-class classification, aiming to enhance the efficacy and accuracy of anomaly detection in an era where data volume and complexity present unprecedented challenges. The first two methods are designed for tabular data and take a least squares perspective. Formulating these optimization problems within a least squares framework offers notable advantages: it facilitates the derivation of closed-form solutions for critical gradients that largely influence the optimization procedure, and it circumvents the prevalent issue of degenerate or uninformative solutions, a challenge often associated with these types of deep learning algorithms. The third method is designed for second-order tensors. This proposed method has certain computational advantages and alleviates the need for vectorization, which can lead to loss of structural information when spatial or contextual relationships exist in the data. The performance of the three proposed methods is demonstrated with simulation studies and real-world datasets. Compared to kernel-based one-class classification methods, the proposed deep learning methods achieve significantly better performance under the settings considered.
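For context, here is a minimal sketch of the classical kernel baseline the dissertation starts from, not one of its proposed deep methods: a One-Class SVM in scikit-learn fitted on synthetic "normal" data and used to flag anomalies.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 5))       # normal data only
X_test = np.vstack([rng.normal(size=(20, 5)),                  # more normal points
                    rng.normal(loc=6.0, size=(5, 5))])         # obvious anomalies

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)
pred = clf.predict(X_test)        # +1 = inside the learned boundary, -1 = anomaly
print(pred)
```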
75

Análise de dados categorizados com omissão / Analysis of categorical data with missingness

Poleto, Frederico Zanqueta 30 August 2006 (has links)
We consider theoretical, computational and applied aspects of classical categorical data analyses with missingness. We present a literature review while introducing the missingness mechanisms, highlighting their characteristics and implications for the inferences of interest by means of an example involving two binary responses and simulation studies. We extend the multinomial modeling scenario described in Paulino (1991, Brazilian Journal of Probability and Statistics 5, 1-42) to the product-multinomial setup to allow for the inclusion of explanatory variables. We develop the results in a matrix formulation suited to computational implementation and implement the procedures as a library for the R statistical environment, which is made available to facilitate the inferences described in this dissertation. We illustrate the application of the theory by means of five examples with different characteristics, fitting structural linear (marginal homogeneity), log-linear (independence, constant adjacent odds ratio) and functional linear models (kappa, weighted kappa, sensitivity/specificity, positive/negative predictive value) for the categorization probabilities. The missingness patterns also vary, including missingness in one or two variables and confounding of neighboring cells, with or without subpopulations.
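As a toy illustration of likelihood-based inference for categorical data with missingness (not the thesis's product-multinomial matrix formulation or its R library), the sketch below runs EM for a 2x2 table in which some units are classified on only one of the two variables, assuming the missingness is ignorable. All counts are made up.

```python
import numpy as np

complete = np.array([[40., 10.],     # fully classified counts, rows = Y1, cols = Y2
                     [20., 30.]])
margin_y1 = np.array([25., 15.])     # units with Y2 missing, classified by Y1 only

p = np.full((2, 2), 0.25)            # start from the uniform table
for _ in range(200):
    # E-step: split each partially classified count across the Y2 cells,
    # proportionally to the current conditional probabilities P(Y2 | Y1).
    expected = complete + margin_y1[:, None] * (p / p.sum(axis=1, keepdims=True))
    # M-step: re-estimate the cell probabilities from the completed table.
    p_new = expected / expected.sum()
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

print(np.round(p, 4))                # ML estimate of the joint cell probabilities
```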
76

Contributions à la réduction de dimension

Kuentz, Vanessa 20 November 2009 (has links)
This thesis concentrates on dimension reduction approaches, which seek lower-dimensional subspaces that minimize the loss of statistical information. First, we focus on multivariate analysis for categorical data. The rotation problem in Multiple Correspondence Analysis (MCA) is treated: we give the analytic expression of the optimal angle of planar rotation for the chosen criterion, and when more than two principal components are retained, this planar solution is used in a practical algorithm applying successive pairwise planar rotations. Different algorithms for the clustering of categorical variables are also proposed to maximize a partitioning criterion based on correlation ratios. A real data application highlights the benefits of using rotation in MCA and provides an empirical comparison of the proposed algorithms for categorical variable clustering. Then we study the semiparametric regression method SIR (Sliced Inverse Regression). We propose an extension based on a partitioning of the predictor space that can be used when the crucial linearity condition on the predictor does not hold. We also introduce bagging versions of SIR to improve the estimation of the basis of the dimension-reduction subspace. Asymptotic properties of the estimators are obtained and a simulation study shows the good numerical behaviour of the proposed methods. Finally, the applied multivariate data analyses and interdisciplinary collaborations carried out during the thesis are described.
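As a rough illustration of the SIR method discussed above (not the author's partition-based or bagging extensions), the following numpy sketch estimates a dimension-reduction direction from simulated single-index data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_slices = 2000, 6, 10
X = rng.normal(size=(n, p))
beta = np.array([1., -1., 0.5, 0., 0., 0.])
y = np.sin(X @ beta) + 0.1 * rng.normal(size=n)

# Standardize the predictors (whiten with a Cholesky factor of the covariance).
L = np.linalg.cholesky(np.cov(X.T))
Z = (X - X.mean(axis=0)) @ np.linalg.inv(L.T)

# Slice y, compute within-slice means of Z, then the covariance of slice means.
order = np.argsort(y)
slices = np.array_split(order, n_slices)
M = np.zeros((p, p))
for idx in slices:
    m = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)

# The leading eigenvectors of M span the estimated dimension-reduction
# subspace in the standardized scale (map back through L for the original scale).
eigvals, eigvecs = np.linalg.eigh(M)
direction = eigvecs[:, -1]
print(np.round(direction / np.linalg.norm(direction), 3))
```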
77

Snap Scholar: The User Experience of Engaging with Academic Research Through a Tappable Stories Medium

Burk, Ieva 01 January 2019 (has links)
Despite the shift toward learning and consuming information on our mobile devices, most academic research is still presented only as long-form text. The Stanford Scholar Initiative has explored the segment of content creation and consumption of academic research through video. However, another popular shift in presenting information has come from social media platforms and media outlets in the past few years: Snapchat and Instagram have introduced the concept of tappable “Stories” that have gained popularity in the realm of content consumption. To accelerate the growth of the creation of these research talks, I propose an alternative to video: a tappable, Snapchat-like interface. This style is achieved using AMP, Google's open source project to optimize web experiences on mobile, and particularly the AMP Stories visual medium. My research explores how the process and quality of consuming the content of academic papers change if, instead of watching videos, users consume the content through Stories on mobile. Since this form of content consumption is still largely unresearched in the academic context, I approached this research with a human-centered design process, going through a few iterations to test various prototypes before formulating research questions and designing an experiment. I tested various formats of research consumption through Stories with pilot users and learned many lessons to iterate on along the way. I created a way to consume research papers in a Stories format and designed a comparative study to measure the effectiveness of consuming research papers through the Stories medium and the video medium. The results indicate that Stories are a quicker way to consume the same content and improve the user's pace of comprehension. Further, the Stories medium provides the user a self-paced method, both temporally and content-wise, to consume technical research topics, and is deemed a less boring method to do so in comparison to video. While Stories gave the learner a chance to actively participate in consumption by tapping, the video experience is enjoyed because of its reduced effort and the addition of an audio component. These findings suggest that the Stories medium may be a promising interface in educational contexts for distributing scientific content and assisting with active learning.
78

Penalized mixed-effects ordinal response models for high-dimensional genomic data in twins and families

Gentry, Amanda E. 01 January 2018 (has links)
The Brisbane Longitudinal Twin Study (BLTS) was conducted in Australia and funded by the US National Institute on Drug Abuse (NIDA). Adolescent twins were sampled as a part of this study and surveyed about their substance use as part of the Pathways to Cannabis Use, Abuse and Dependence project. The methods developed in this dissertation were designed for the purpose of analyzing a subset of the Pathways data that includes demographics, cannabis use metrics, personality measures, and imputed genotypes (SNPs) for 493 complete twin pairs (986 subjects). The primary goal was to determine what combination of SNPs and additional covariates may predict cannabis use, measured on an ordinal scale as: “never tried,” “used moderately,” or “used frequently”. To conduct this analysis, we extended the ordinal Generalized Monotone Incremental Forward Stagewise (GMIFS) method to mixed models. This extension allows an unpenalized set of covariates to be coerced into the model as well as flexibility for user-specified correlation patterns between twins in a family. The proposed methods are applicable to high-dimensional (genomic or otherwise) data with an ordinal response and a specific, known covariance structure within clusters.
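To illustrate the incremental forward stagewise idea behind GMIFS (not the authors' penalized mixed-effects ordinal extension), the toy sketch below traces a coefficient path for an ordinary logistic regression, nudging only the coefficient with the steepest gradient by a small epsilon at each step; the number of steps acts as the tuning parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.75]                      # sparse signal
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta_true))))

beta = np.zeros(p)
eps, n_steps = 0.01, 2000
for _ in range(n_steps):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (mu - y)            # gradient of the negative log-likelihood
    j = np.argmax(np.abs(grad))      # coefficient with the steepest descent
    beta[j] -= eps * np.sign(grad[j])

print(np.round(beta[:5], 2))         # mostly the true signal coefficients have moved
```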
79

Agreement between raters and groups of raters / Accord entre observateurs et groupes d'observateurs

Vanbelle, Sophie 11 June 2009 (has links)
Agreement between raters on a categorical scale is not only a subject of scientific research but also a problem frequently encountered in practice. Whenever a new scale is developed to assess individuals or items in a certain context, inter-rater agreement is a prerequisite for the scale to be actually implemented in routine use. Cohen's kappa coefficient is a landmark in the development of rater agreement theory. This coefficient, which marked a radical change from previously proposed indexes, opened a new field of research in the domain. In the first part of this work, after a brief review of agreement on a quantitative scale, the kappa-like family of agreement indexes is described in various instances: two raters, several raters, an isolated rater and a group of raters, and two groups of raters. To quantify the agreement between two individual raters, Cohen's kappa coefficient (Cohen, 1960) and the intraclass kappa coefficient (Kraemer, 1979) are widely used for binary and nominal scales, while the weighted kappa coefficient (Cohen, 1968) is recommended for ordinal scales. An interpretation of the quadratic (Schuster, 2004) and the linear (Vanbelle and Albert, 2009c) weighting schemes is given. Cohen's kappa (Fleiss, 1971) and intraclass kappa (Landis and Koch, 1977c) coefficients were extended to the case where agreement is sought between several raters. Next, the kappa-like family of agreement coefficients is extended to the case of an isolated rater and a group of raters (Vanbelle and Albert, 2009a) and to the case of two groups of raters (Vanbelle and Albert, 2009b). These agreement coefficients are derived from a population-based model and reduce to the well-known Cohen's kappa coefficient in the case of two single raters. The proposed agreement indexes are also compared to existing methods, namely the consensus method and Schouten's agreement index (Schouten, 1982). The superiority of the new approach over the latter is shown. In the second part of the work, methods for hypothesis testing and data modeling are discussed. Firstly, the method proposed by Fleiss (1981) for comparing several independent agreement indexes is presented. Then, a bootstrap method initially developed by McKenzie et al. (1996) to compare two dependent agreement indexes is extended to several dependent agreement indexes (Vanbelle and Albert, 2008). All these methods apply equally to the kappa coefficients introduced in the first part of the work. Next, regression methods for testing the effect of continuous and categorical covariates on the agreement between two or several raters are reviewed. These include the weighted least-squares method, which allows only categorical covariates (Barnhart and Williamson, 2002), and a regression method based on two sets of generalized estimating equations; the latter was developed for the intraclass kappa coefficient (Klar et al., 2000), Cohen's kappa coefficient (Williamson et al., 2000) and the weighted kappa coefficient (Gonin et al., 2000). Finally, a heuristic method, restricted to the case of independent observations, is presented (Lipsitz et al., 2001, 2003), which turns out to be equivalent to the generalized estimating equations approach. These regression methods are compared to the bootstrap method extended by Vanbelle and Albert (2008), but they have not been generalized to agreement between a single rater and a group of raters nor between two groups of raters.
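For reference (not the population-based coefficients proposed in the thesis), the sketch below computes Cohen's kappa together with linearly and quadratically weighted kappa from a made-up confusion matrix for two raters scoring the same items on a three-category ordinal scale.

```python
import numpy as np

counts = np.array([[30.,  5.,  1.],     # rows: rater A, columns: rater B
                   [ 4., 25.,  6.],
                   [ 2.,  3., 24.]])
P = counts / counts.sum()
po = np.trace(P)                                 # observed agreement
pe = P.sum(axis=1) @ P.sum(axis=0)               # chance-expected agreement
kappa = (po - pe) / (1 - pe)

# Weighted kappa: linear or quadratic agreement weights for ordinal scales.
k = counts.shape[0]
i, j = np.indices((k, k))
w_lin = 1 - np.abs(i - j) / (k - 1)
w_quad = 1 - ((i - j) / (k - 1)) ** 2

def weighted_kappa(P, w):
    expected = np.outer(P.sum(axis=1), P.sum(axis=0))
    return ((w * P).sum() - (w * expected).sum()) / (1 - (w * expected).sum())

print(round(kappa, 3),
      round(weighted_kappa(P, w_lin), 3),
      round(weighted_kappa(P, w_quad), 3))
```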
80

Spatial Analysis of Fatal Automobile Crashes in Kentucky

Oris, William Nathan 01 December 2011 (has links)
Fatal automobile crashes have claimed the lives of over 33,000 people each year in the United States since 1995. As with any point event, fatal crash events do not occur randomly in time or space. The objectives of this study were to identify spatial patterns and hot spots in FARS (Fatality Analysis Reporting System) fatal crash events based on temporal and demographic characteristics. The methods employed included 1) rate calculation using FARS points and average daily traffic flow; 2) planar kernel density estimation of FARS crash events based on temporal and demographic attributes within the data; and 3) two case studies using network kernel density estimation along roadways to determine hot spots of fatal crashes in Jefferson County and Warren County. The rate calculation analyses revealed that travel on roads with high speed limits and winding topography led to the highest number of crashes and the highest rate of fatal crashes per 1,000 daily vehicles. Planar kernel density estimation results showed temporal patterns, revealing that hot spots and fatalities were highest in the summer and typically occurred from 2pm-6pm on weekends. Further, the 16 to 25 year age group was responsible for the most significant hot spots and the most fatal accidents. The most significant hot spots involving alcohol occurred in close proximity to meeting places such as bars and restaurants. Finally, results from the network kernel density estimation revealed that most hot spots were in high-traffic areas where major roads converged with secondary roads.
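As a hedged illustration of the planar kernel density estimation step (not the study's FARS analysis or its network extension), the sketch below estimates a density surface over synthetic crash coordinates with scipy and reports the densest grid cell. A real analysis would use projected coordinates and a deliberately chosen bandwidth.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic crash locations: a diffuse background plus one concentrated cluster.
background = rng.uniform(0, 10, size=(300, 2))
cluster = rng.normal(loc=[7.0, 3.0], scale=0.3, size=(100, 2))
points = np.vstack([background, cluster]).T          # shape (2, n) for scipy

kde = gaussian_kde(points)                           # Gaussian kernel, Scott's rule
xx, yy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

peak = np.unravel_index(density.argmax(), density.shape)
print("densest grid cell near:", xx[peak], yy[peak])  # should sit near (7, 3)
```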
