1

On the Asymptotic Theory of Permutation Statistics

Strasser, Helmut, Weber, Christian January 1999 (has links) (PDF)
In this paper limit theorems for the conditional distributions of linear test statistics are proved. The assertions are conditioned by the sigma-field of permutation symmetric sets. Limit theorems are proved both for the conditional distributions under the hypothesis of randomness and under general contiguous alternatives with independent but not identically distributed observations. The proofs are based on results on limit theorems for exchangeable random variables by Strasser and Weber. The limit theorems under contiguous alternatives are consequences of an LAN-result for likelihood ratios of symmetrized product measures. The results of the paper have implications for statistical applications. By example it is shown that minimum variance partitions which are defined by observed data (e.g. by LVQ) lead to asymptotically optimal adaptive tests for the k-sample problem. As another application it is shown that conditional k-sample tests which are based on data-driven partitions lead to simple confidence sets which can be used for the simultaneous analysis of linear contrasts. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
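A minimal sketch, not taken from the paper: in practice, conditioning on the sigma-field of permutation symmetric sets amounts to permuting group labels while holding the observed responses fixed. The base-R example below (simulated data, illustrative statistic only) computes the conditional null distribution of a simple k-sample statistic in this way.

```r
## Conditional (permutation) null distribution of a simple k-sample statistic.
set.seed(1)
g <- factor(rep(1:3, each = 20))                    # three groups of size 20
y <- rnorm(60) + 0.5 * (g == "3")                   # shifted third group
stat <- function(y, g) sum(tapply(y, g, mean)^2)    # illustrative linear-type statistic
obs  <- stat(y, g)
perm <- replicate(9999, stat(y, sample(g)))         # condition on the data, permute labels
mean(c(obs, perm) >= obs)                           # permutation p-value
```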
2

Generalized Maximally Selected Statistics

Hothorn, Torsten, Zeileis, Achim January 2007 (has links) (PDF)
Maximally selected statistics for the estimation of simple cutpoint models are embedded into a generalized conceptual framework based on conditional inference procedures. This powerful framework contains most of the published procedures in this area as special cases, such as maximally selected chi-squared and rank statistics, but also allows for direct construction of new test procedures for less standard test problems. As an application, a novel maximally selected rank statistic is derived from this framework for a censored response partitioned with respect to two ordered categorical covariates and potential interactions. This new test is employed to search for a high-risk group of rectal cancer patients treated with a neo-adjuvant chemoradiotherapy. Moreover, a new efficient algorithm for the evaluation of the asymptotic distribution for a large class of maximally selected statistics is given, enabling the fast evaluation of a large number of cutpoints. / Series: Research Report Series / Department of Statistics and Mathematics
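A hypothetical illustration of this class of procedures: the R package coin (not named in this abstract) provides maxstat_test(), which covers classical maximally selected (rank) statistics as special cases. The data below are simulated, not from the rectal cancer study.

```r
## Maximally selected statistic for a numeric response cut along a covariate.
library("coin")
set.seed(2)
x <- sort(runif(150))
y <- rnorm(150, mean = ifelse(x > 0.6, 1, 0))   # true cutpoint effect at x = 0.6
mt <- maxstat_test(y ~ x)
mt                                              # selected cutpoint and statistic
pvalue(mt)                                      # p-value adjusted for cutpoint selection
```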
3

Permutation Tests for Structural Change

Zeileis, Achim, Hothorn, Torsten January 2006 (has links) (PDF)
The supLM test for structural change is embedded into a permutation test framework for a simple location model. The resulting conditional permutation distribution is compared to the usual (unconditional) asymptotic distribution, showing that the power of the test can be clearly improved in small samples. Furthermore, generalizations are discussed for binary and multivariate dependent variables as well as model-based permutation testing for structural change. The procedures suggested are illustrated using both artificial and real-world data (number of youth homicides, employment discrimination data, structural-change publications, and stock returns). / Series: Research Report Series / Department of Statistics and Mathematics
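A rough sketch of the idea, not the authors' code: the asymptotic supLM test is compared with its conditional permutation counterpart in a simple location model by permuting the observation order under the hypothesis of randomness. The functions gefp(), supLM() and sctest() are assumed to come from the strucchange package, which is not cited explicitly in this abstract.

```r
## Asymptotic vs. permutation p-value for the supLM structural change test.
library("strucchange")
set.seed(3)
y <- c(rnorm(25, 0), rnorm(15, 1))                      # small sample with a mean shift
suplm <- function(y)
  sctest(gefp(y ~ 1, fit = lm), functional = supLM(0.15))$statistic
obs  <- suplm(y)
perm <- replicate(999, suplm(sample(y)))                # permute the time order
c(asymptotic  = sctest(gefp(y ~ 1, fit = lm), functional = supLM(0.15))$p.value,
  permutation = mean(c(obs, perm) >= obs))
```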
4

Advanced mathematics and deductive reasoning skills: testing the Theory of Formal Discipline

Attridge, Nina January 2013 (has links)
This thesis investigates the Theory of Formal Discipline (TFD): the idea that studying mathematics develops general reasoning skills. This belief has been held since the time of Plato (2003/375 B.C.), and has been cited in recent policy reports (Smith, 2004; Walport, 2010) as an argument for why mathematics should hold a privileged place in the UK's National Curriculum. However, there is no rigorous research evidence that justifies the claim. The research presented in this thesis aims to address this shortcoming. Two questions are addressed in the investigation of the TFD: is studying advanced mathematics associated with development in reasoning skills, and if so, what might be the mechanism of this development? The primary type of reasoning measured is conditional inference validation (i.e. 'if p then q; not p; therefore not q'). In two longitudinal studies it is shown that the conditional reasoning behaviour of mathematics students at AS level and undergraduate level does change over time, but that it does not become straightforwardly more normative. Instead, mathematics students reason more in line with the 'defective' interpretation of the conditional, under which they assume p and reason about q. This leads to the assumption that not-p cases are irrelevant, which results in the rejection of two commonly-endorsed invalid inferences, but also in the rejection of the valid modus tollens inference. Mathematics students did not change in their reasoning behaviour on a thematic syllogisms task or a thematic version of the conditional inference task. Next, it is shown that mathematics students reason significantly less in line with a defective interpretation of the conditional when it is phrased 'p only if q' compared to when it is phrased 'if p then q', despite the two forms being logically equivalent. This suggests that their performance is determined by linguistic features rather than the underlying logic. The final two studies investigated the heuristic and algorithmic levels of Stanovich's (2009a) tri-process model of cognition as potential mechanisms of the change in conditional reasoning skills. It is shown that mathematicians' defective interpretation of the conditional stems in part from heuristic level processing and in part from effortful processing, and that the executive function skills of inhibition and shifting at the algorithmic level are correlated with its adoption. It is suggested that studying mathematics regularly exposes students to implicit 'if then' statements where they are expected to assume p and reason about q, and that this encourages them to adopt a defective interpretation of conditionals. It is concluded that the TFD is not supported by the evidence; while mathematics does seem to develop abstract conditional reasoning skills, the result is not more normative reasoning.
5

Implementing a Class of Permutation Tests: The coin Package

Zeileis, Achim, Wiel, Mark A. van de, Hornik, Kurt, Hothorn, Torsten 11 1900 (has links) (PDF)
The R package coin implements a unified approach to permutation tests providing a huge class of independence tests for nominal, ordered, numeric, and censored data as well as multivariate data at mixed scales. Based on a rich and flexible conceptual framework that embeds different permutation test procedures into a common theory, a computational framework is established in coin that likewise embeds the corresponding R functionality in a common S4 class structure with associated generic functions. As a consequence, the computational tools in coin inherit the flexibility of the underlying theory and conditional inference functions for important special cases can be set up easily. Conditional versions of classical tests - such as tests for location and scale problems in two or more samples, independence in two- or three-way contingency tables, or association problems for censored, ordered categorical or multivariate data - can easily be implemented as special cases using this computational toolbox by choosing appropriate transformations of the observations. The paper gives a detailed exposition of both the internal structure of the package and the provided user interfaces along with examples on how to extend the implemented functionality. (authors' abstract)
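A hedged example of the user interface sketched above, on made-up data: a conditional independence test between a numeric response and a two-level factor, using a Monte Carlo approximation of the conditional null distribution.

```r
## General independence test from the coin package.
library("coin")
set.seed(4)
dat <- data.frame(y = c(rnorm(30), rnorm(30, mean = 0.8)),
                  g = gl(2, 30, labels = c("control", "treatment")))
it <- independence_test(y ~ g, data = dat, distribution = "approximate")
pvalue(it)        # Monte Carlo p-value from the conditional null distribution
statistic(it)     # standardized linear test statistic
```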
6

Residual-based shadings for visualizing (conditional) independence

Zeileis, Achim, Meyer, David, Hornik, Kurt January 2005 (has links) (PDF)
Residual-based shadings for enhancing mosaic and association plots to visualize independence models for contingency tables are extended in two directions: (a) perceptually uniform HCL colors are used and (b) the result of an associated significance test is coded by the appearance of color in the visualization. For obtaining (a), a general strategy for deriving diverging palettes in the perceptually-based HCL space is suggested. As for (b), cut-offs that control the appearance of color are computed in a data-driven way based on the conditional permutation distribution of maximum-type test statistics. The shadings are first established for the case of independence in 2-way tables and then extended to more general independence models for multi-way tables, including in particular conditional independence problems. / Series: Research Report Series / Department of Statistics and Mathematics
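One way to see such shadings in practice: the R package vcd (not named in this abstract) provides shading_hcl, an HCL-based residual shading, and shading_max, whose cut-offs are derived from the permutation distribution of the maximum statistic. The table below is a stand-in example, not data from the paper.

```r
## Mosaic displays of a 2-way table with residual-based shadings.
library("vcd")
tab <- xtabs(Freq ~ Hair + Eye, data = as.data.frame(HairEyeColor))
mosaic(tab, gp = shading_max)   # data-driven cut-offs; color appears only if the max-test is significant
mosaic(tab, gp = shading_hcl)   # default HCL-based shading of Pearson residuals
```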
7

Implementing a Class of Permutation Tests: The coin Package

Hothorn, Torsten, Hornik, Kurt, van de Wiel, Mark A., Zeileis, Achim January 2007 (has links) (PDF)
The R package coin implements a unified approach to permutation tests providing a huge class of independence tests for nominal, ordered, numeric, and censored data as well as multivariate data at mixed scales. Based on a rich and flexible conceptual framework that embeds different permutation test procedures into a common theory, a computational framework is established in coin that likewise embeds the corresponding R functionality in a common S4 class structure with associated generic functions. As a consequence, the computational tools in coin inherit the flexibility of the underlying theory and conditional inference functions for important special cases can be set up easily. Conditional versions of classical tests - such as tests for location and scale problems in two or more samples, independence in two- or three-way contingency tables, or association problems for censored, ordered categorical or multivariate data - can easily be implemented as special cases using this computational toolbox by choosing appropriate transformations of the observations. The paper gives a detailed exposition of both the internal structure of the package and the provided user interfaces. / Series: Research Report Series / Department of Statistics and Mathematics
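As a complement to the example given for the earlier listing of this paper, a sketch of one classical special case on made-up data: the exact conditional Wilcoxon-Mann-Whitney test, obtained via a rank transformation of the observations.

```r
## Exact conditional two-sample Wilcoxon test via coin.
library("coin")
ex <- data.frame(score = c(1.2, 2.3, 1.9, 3.1, 2.2, 2.8, 3.5, 3.0),   # hypothetical data
                 group = gl(2, 4, labels = c("A", "B")))
wilcox_test(score ~ group, data = ex, distribution = "exact")
```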
8

Methods for handling missing data in cohort studies where outcomes are truncated by death

Wen, Lan January 2018 (has links)
This dissertation addresses problems found in observational cohort studies where the repeated outcomes of interest are truncated by both death and by dropout. In particular, we consider methods that make inference for the population of survivors at each time point, otherwise known as 'partly conditional inference'. Partly conditional inference distinguishes between the reasons for missingness; failure to make this distinction will cause inference to be based not only on pre-death outcomes which exist but also on post-death outcomes which fundamentally do not exist. Such inference is called 'immortal cohort inference'. Investigations of health and cognitive outcomes in two studies - the 'Origins of Variance in the Old Old' and the 'Health and Retirement Study' - are conducted. Analysis of these studies is complicated by outcomes of interest being missing because of death and dropout. We show, first, that linear mixed models and joint models (that model both the outcome and survival processes) produce immortal cohort inference. This makes the parameters in the longitudinal (sub-)model difficult to interpret. Second, a thorough comparison of well-known methods used to handle missing outcomes - inverse probability weighting, multiple imputation and linear increments - is made, focusing particularly on the setting where outcomes are missing due to both dropout and death. We show that when the dropout models are correctly specified for inverse probability weighting, and the imputation models are correctly specified for multiple imputation or linear increments, then the assumptions of multiple imputation and linear increments are the same as those of inverse probability weighting only if the time of death is included in the dropout and imputation models. Otherwise they may not be. Simulation studies show that each of these methods gives negligibly biased estimates of the partly conditional mean when its assumptions are met, but potentially biased estimates if its assumptions are not met. In addition, we develop new augmented inverse probability weighted estimating equations for making partly conditional inference, which offer double protection against model misspecification. That is, as long as one of the dropout and imputation models is correctly specified, the partly conditional inference is valid. Third, we describe methods that can be used to make partly conditional inference for non-ignorable missing data. Both monotone and non-monotone missing data are considered. We propose three methods that use a tilt function to relate the distribution of an outcome at visit j among those who were last observed at some time before j to those who were observed at visit j. Sensitivity analyses to departures from ignorable missingness assumptions are conducted on simulations and on real datasets. The three methods are: i) an inverse probability weighted method that up-weights observed subjects to represent subjects who are still alive but are not observed; ii) an imputation method that replaces missing outcomes of subjects who are alive with their conditional mean outcomes given past observed data; and iii) a new augmented inverse probability method that combines the previous two methods and is doubly-robust against model misspecification.
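A toy sketch, not from the thesis, of the inverse probability weighting idea for partly conditional inference: among subjects alive at visit 2, observed outcomes are up-weighted by the inverse estimated probability of remaining under observation, from a dropout model fitted to past data. All variable names and the data-generating model below are invented for illustration.

```r
## IPW estimate of the partly conditional mean E(y2 | alive at visit 2).
set.seed(5)
n  <- 500
y1 <- rnorm(n)                                          # visit-1 outcome, fully observed
alive <- rbinom(n, 1, plogis(1.5 - 0.3 * y1)) == 1      # survival to visit 2
y2 <- ifelse(alive, 0.7 * y1 + rnorm(n), NA)            # exists only for survivors
seen <- alive & rbinom(n, 1, plogis(0.2 + 0.8 * y1)) == 1   # still under observation at visit 2
dat <- data.frame(y1, y2, alive, seen)
fit <- glm(seen ~ y1, family = binomial, data = dat, subset = alive)  # dropout model among survivors
dat$w <- NA
dat$w[dat$alive] <- 1 / predict(fit, type = "response")               # inverse probability weights
c(ipw   = with(subset(dat, seen), weighted.mean(y2, w)),  # partly conditional estimate
  naive = mean(dat$y2[dat$seen]))                         # complete-case mean, for comparison
```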
9

Essays on Inference in Linear Mixed Models

Kramlinger, Peter 28 April 2020 (has links)
No description available.
10

Analyses Of Crash Occurence And Injury Severities On Multi Lane Highways Using Machine Learning Algorithms

Das, Abhishek 01 January 2009 (has links)
Reduction of crash occurrence on the various roadway locations (mid-block segments; signalized intersections; un-signalized intersections) and the mitigation of injury severity in the event of a crash are the major concerns of transportation safety engineers. Multi lane arterial roadways (excluding freeways and expressways) account for forty-three percent of fatal crashes in the state of Florida. Significant contributing causes fall under the broad categories of aggressive driver behavior; adverse weather and environmental conditions; and roadway geometric and traffic factors. The objective of this research was the implementation of innovative, state-of-the-art analytical methods to identify the contributing factors for crashes and injury severity. Advances in computational methods make the use of modern statistical and machine learning algorithms feasible. Even though most of the contributing factors are known a priori, advanced methods unearth changing trends. Heuristic evolutionary processes such as genetic programming; sophisticated data mining methods like the conditional inference tree; and mathematical treatments in the form of sensitivity analyses outline the major contributions in this research. Application of traditional statistical methods like simultaneous ordered probit models, and the identification and resolution of crash data problems, are also key aspects of this study. In order to eliminate the use of an unrealistic uniform intersection influence radius of 250 ft, heuristic rules were developed for assigning crashes to roadway segments, signalized intersections and access points using parameters such as 'site location', 'traffic control' and node information. Use of the Conditional Inference Forest instead of the Classification and Regression Tree to identify variables of significance for the injury severity analysis removed the bias towards the selection of continuous variables or variables with a large number of categories. For the injury severity analysis of crashes on highways, the corridors were clustered into four optimum groups. The optimum number of clusters was found using the Partitioning Around Medoids algorithm. Concepts of evolutionary biology like crossover and mutation were implemented to develop models for classification and regression analyses based on the highest hit rate and minimum error rate, respectively. A low crossover rate and a higher mutation rate reduce the chances of genetic drift and bring novelty to the model development process. Annual daily traffic; friction coefficient of pavements; on-street parking; curbed medians; surface and shoulder widths; and alcohol/drug usage are some of the significant factors that played a role in both crash occurrence and injury severities. Relative sensitivity analyses were used to identify the effect of continuous variables on the variation of crash counts. This study improved the understanding of the significant factors that could play an important role in designing better safety countermeasures on multi lane highways, and hence enhance their safety by reducing the frequency of crashes and the severity of injuries. Educating young people about the abuse of alcohol and drugs, specifically at high schools and colleges, could potentially lead to lower driver aggression. Unilateral removal of on-street parking from high-speed arterials could result in a likely drop in the number of crashes. Widening of shoulders could give greater maneuvering space to drivers. Improving pavement conditions for a better friction coefficient will lead to improved crash recovery. Addition of lanes to alleviate problems arising out of increased ADT, and restriction of trucks to the slower right lanes on highways, would not only reduce crash occurrences but also result in lower injury severity levels.
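A hedged sketch of two of the tools named above, run on stand-in built-in data since the crash records themselves are not available here: a conditional inference tree/forest via the partykit package and Partitioning Around Medoids via the cluster package (the abstract names the methods, not these particular packages).

```r
## Conditional inference tree/forest and PAM clustering on example data.
library("partykit")
library("cluster")
set.seed(6)
airq <- na.omit(airquality)                  # complete cases of a built-in data set
ct <- ctree(Ozone ~ ., data = airq)          # conditional inference tree (unbiased variable selection)
cf <- cforest(Ozone ~ ., data = airq)        # conditional inference forest
pm <- pam(scale(airq[, c("Solar.R", "Wind", "Temp")]), k = 4)   # PAM with four clusters
table(pm$clustering)                         # cluster sizes
```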
