About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Goodness-of-Fit and Change-Point Tests for Functional Data

Gabrys, Robertas 01 May 2010 (has links)
A test for independence and identical distribution of functional observations is proposed in this thesis. To reduce dimension, curves are projected on the most important functional principal components. Then a test statistic based on lagged cross-covariances of the resulting vectors is constructed. We show that this dimension reduction step introduces asymptotically negligible terms, i.e. the projections behave asymptotically as iid vector-valued observations. A complete asymptotic theory based on correlations of random matrices, functional principal component expansions, and Hilbert space techniques is developed. The test statistic has a chi-square asymptotic null distribution. Two inferential tests for error correlation in the functional linear model are put forward. To construct them, finite-dimensional residuals are computed in two different ways, and their autocorrelations are then suitably defined. From these autocorrelation matrices, two quadratic forms are constructed whose limiting distributions are chi-squared with known numbers of degrees of freedom (different for the two forms). A test for detecting a change point in the mean of functional observations is developed. The null distribution of the test statistic is asymptotically pivotal with a well-known asymptotic distribution. A comprehensive asymptotic theory for the estimation of a change point in the mean function of functional observations is developed. The procedures developed in this thesis can be readily computed using the R package fda. All theoretical insights obtained in this thesis are confirmed by simulations and illustrated by real-data examples.
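A minimal numpy sketch of the kind of test described above: curves are projected onto their leading principal components, and a portmanteau statistic is built from the lagged cross-covariances of the resulting score vectors. This is an illustration of the general idea, not the thesis's exact statistic; the grid size, number of components, and number of lags are arbitrary choices.

```python
import numpy as np

def portmanteau_stat(scores, H=3):
    """Multivariate portmanteau statistic on n x p score vectors.

    Under the null of iid scores, the statistic is approximately
    chi-squared with H * p**2 degrees of freedom.
    """
    n, p = scores.shape
    x = scores - scores.mean(axis=0)
    c0 = x.T @ x / n                        # lag-0 covariance of scores
    c0_inv = np.linalg.inv(c0)
    stat = 0.0
    for h in range(1, H + 1):
        ch = x[h:].T @ x[:-h] / n           # lag-h cross-covariance
        stat += n * np.trace(c0_inv @ ch @ c0_inv @ ch.T)
    return stat, H * p * p                  # statistic, degrees of freedom

rng = np.random.default_rng(0)
curves = rng.normal(size=(200, 50))         # 200 iid "curves" on a 50-point grid
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :3] * s[:3]                   # scores on the top 3 components
stat, df = portmanteau_stat(scores, H=3)
```

For iid curves, `stat` should look like a draw from a chi-squared distribution with `df` degrees of freedom; serially dependent curves inflate it.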
142

Formulation of Error Structures Under Non-Orthogonal Situations

Seely, Justus Frandsen 01 May 1965 (has links)
To appreciate the title of this study we must first understand what the terms "non-orthogonal" and "error structure" mean. To understand the term non-orthogonal, consider an experiment where differing treatments are applied to groups of experimental units in order to observe the differential treatment responses. If an equal number of experimental units is in each group, we say we have an orthogonal situation. This means that when equal numbers exist among the experimental units, the variability associated with the individual sources of variation can be orthogonally partitioned, so that the sources of variability add to the total source of variation. However, if unequal numbers exist among the experimental units, we say we have a non-orthogonal situation. This implies that we can no longer obtain a completely orthogonal partition, and that the sources of variability associated with the individual sources of variation do not add to the total source of variation. The phrase "error structure" can best be described with reference to the statistical technique known as the analysis of variance. For any typical analysis of variance, there exists a one-to-one correspondence between the mean squares and the recognized sources of variation in the underlying model.
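The orthogonal versus non-orthogonal contrast can be demonstrated numerically: in a balanced two-way layout the sum of squares attributed to a factor is the same whether or not the other factor is fitted first, while in an unbalanced layout it is not. The sketch below uses a hypothetical 2x2 design and least-squares fits; it is an illustration of the concept, not an analysis from the thesis.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def ss_increment(y, base, extra):
    """Reduction in RSS when `extra` columns are added to `base`."""
    return rss(base, y) - rss(np.hstack([base, extra]), y)

rng = np.random.default_rng(1)

def demo(n00, n01, n10, n11):
    """Two-way layout with the given cell counts; returns the SS for
    factor A fitted first versus fitted after factor B."""
    a = np.r_[np.zeros(n00 + n01), np.ones(n10 + n11)]
    b = np.r_[np.zeros(n00), np.ones(n01), np.zeros(n10), np.ones(n11)]
    y = 1.0 + 0.5 * a + 0.8 * b + rng.normal(size=a.size)
    one = np.ones((a.size, 1))
    A, B = a[:, None], b[:, None]
    return (ss_increment(y, one, A),                  # A adjusted for mean only
            ss_increment(y, np.hstack([one, B]), A))  # A adjusted for mean and B

bal = demo(10, 10, 10, 10)   # balanced: the two SS values coincide
unb = demo(20, 5, 5, 20)     # unbalanced: they differ
```

In the balanced case the centered factor columns are orthogonal, so the partition is order-invariant; with unequal cell counts the orthogonality, and with it the additive partition, is lost.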
143

The Effectiveness of Categorical Variables in Discriminant Function Analysis

Waite, Preston Jay 01 May 1971 (has links)
A preliminary study of the feasibility of using categorical variables in discriminant function analysis was performed. Data including both continuous and categorical variables were used and predictive results examined. The discriminant function techniques were found to be robust enough to accommodate categorical variables. Some problems were encountered when using the trace criterion for selecting the most discriminating variables when these variables are categorical: no monotonic relationship was found to exist between the trace and the number of correct predictions. This study did show that the use of categorical variables has much potential as a statistical tool in classification procedures. (50 pages)
144

A Fortran List Processor (FLIP)

Fugal, Karl A. 01 May 1970 (has links)
A series of Basic Assembler Language subroutines was developed and made available to the FORTRAN IV language processor, making list processing possible in a flexible and easily understood way. The subroutines create and maintain list structures in the computer's core storage and are sufficiently general to permit FORTRAN programmers to tailor list processing routines to their own individual requirements. List structure sizes are limited only by the amount of core storage available. (61 pages)
145

Statistical Analysis for Tolerances of Noxious Weed Seeds

Dodge, Yadolah 01 May 1971 (has links)
An analysis of the previous method for testing tolerances of noxious weed seeds was performed. Problems with the current techniques were discussed, and solutions to these problems were given. A new technique of testing through the sequential test ratio was developed, and the results examined. The sequential test was found to be useful enough to include in determining tolerances for noxious weed seeds. This study showed that sequential tests have excellent potential and flexibility as a statistical tool for setting tolerances of noxious weed seeds. (75 pages)
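Assuming the sequential test meant here is in the spirit of Wald's sequential probability ratio test (an assumption; the abstract does not give the exact formulation), a minimal sketch for a binomial seed-tolerance decision looks like:

```python
import math

def sprt(seeds, p0, p1, alpha=0.05, beta=0.05):
    """Wald-style sequential probability ratio test for a binomial rate.

    seeds yields 1 if a seed is noxious, 0 otherwise. H0: rate = p0
    (lot within tolerance) vs H1: rate = p1. The log-likelihood ratio
    is accumulated seed by seed and sampling stops at a boundary.
    """
    upper = math.log((1 - beta) / alpha)    # cross it: reject H0
    lower = math.log(beta / (1 - alpha))    # cross it: accept H0
    llr = 0.0
    for i, x in enumerate(seeds, 1):
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0", i
        if llr <= lower:
            return "accept H0", i
    return "continue sampling", len(seeds)

# A lot with no noxious seeds should be accepted well before all
# 200 seeds are inspected.
decision, n_inspected = sprt([0] * 200, p0=0.01, p1=0.05)
```

The appeal for tolerance testing is exactly this early stopping: clear lots are accepted after far fewer seeds than a fixed-sample test would require.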
146

Evaluation of Multivariate Homogeneous ARMA Model

Tseng, Lucy Chienhua 01 May 1980 (has links)
The purpose of this thesis is to study a restricted multivariate ARMA model, called the homogeneous model. This model is defined as one in which each univariate component of the multivariate model is of the same order in p and q as it is in the multivariate model. From a mathematical perspective, a multivariate ARMA model is homogeneous if, and only if, its coefficient matrices are diagonal. From a physical perspective, the present observation of a phenomenon can be modeled only by its own past observations and its present and past "errors." The estimation procedures are developed based on the maximum likelihood method and on O'Connell's method for the univariate model. The homogeneous model is evaluated on four types of data, generated to reflect different degrees of nonhomogeneity. It is found that the homogeneous model is sensitive to departures from the homogeneity assumptions: small departures cause no serious problem, but large departures are serious.
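The homogeneity condition (diagonal coefficient matrices) can be illustrated with a simulated diagonal VAR(1): each component is driven only by its own past, and an unrestricted least-squares fit of the full coefficient matrix recovers near-zero off-diagonal entries. The coefficients and sample size below are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = np.diag([0.6, -0.3])          # diagonal AR coefficient matrix: homogeneous
n, k = 5000, 2
x = np.zeros((n, k))
for t in range(1, n):
    x[t] = Phi @ x[t - 1] + rng.normal(size=k)

# Unrestricted least-squares estimate of the full coefficient matrix:
# Y = X Phi' + E, so lstsq returns Phi transposed.
X, Y = x[:-1], x[1:]
Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

For data generated from a genuinely homogeneous model the estimated off-diagonal entries shrink toward zero; sizable off-diagonal estimates on real data signal the departures from homogeneity the thesis evaluates.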
147

Bayesian Models for Repeated Measures Data Using Markov Chain Monte Carlo Methods

Li, Yuanzhi 01 May 2016 (has links)
Bayesian models for repeated measures data are fitted to three different data analysis projects. Markov chain Monte Carlo (MCMC) methodology is applied in each case, with Gibbs sampling and/or an adaptive Metropolis-Hastings (MH) algorithm used to simulate the posterior distribution of parameters. We implement a Bayesian model with different variance-covariance structures for an audit fee data set. Block structures and linear models for variances are used to examine the linear trend and the different behaviors before and after the regulatory change during 2004-2005. We propose a Bayesian hierarchical model with latent teacher effects to determine whether teacher professional development (PD) utilizing cyber-enabled resources leads to meaningful student learning outcomes, measured by 8th grade student end-of-year scores (CRT scores) for students whose teachers underwent PD. Bayesian variable selection methods are applied to select teacher learning instrument variables to predict teacher effects. We fit a Bayesian two-part model, with a multivariate probit model as the first part and a log-normal regression as the second part, to a repeated measures health care data set to analyze the relationship between Body Mass Index (BMI) and health care expenditures, and the correlation between the probability of expenditures and the dollar amount spent given expenditures. Models were fitted to a training set and predictions were made on both the training set and the test set.
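A minimal sketch of the random-walk Metropolis-Hastings machinery behind such MCMC fits, applied to a toy normal-mean posterior rather than any of the thesis's three models; the step size and iteration count are arbitrary illustration choices.

```python
import numpy as np

def metropolis_mean(y, n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for the mean of a N(mu, 1) model
    with a flat prior; the exact posterior is N(ybar, 1/n)."""
    rng = np.random.default_rng(seed)
    n, ybar = len(y), np.mean(y)

    def log_post(mu):
        return -0.5 * n * (mu - ybar) ** 2   # log posterior, up to a constant

    mu, draws = ybar, []
    for _ in range(n_iter):
        prop = mu + step * rng.normal()      # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
            mu = prop                        # accept; otherwise keep current mu
        draws.append(mu)
    return np.array(draws[n_iter // 2:])     # drop first half as burn-in

y = np.random.default_rng(3).normal(loc=2.0, scale=1.0, size=100)
draws = metropolis_mean(y)
```

An adaptive MH variant, as used in the thesis, would additionally tune `step` during burn-in to target a sensible acceptance rate.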
148

Combinatorial Games on Graphs

Williams, Trevor K. 01 May 2017 (has links)
Combinatorial games are intriguing and have a tendency to engross students and lead them into a serious study of mathematics. The engaging nature of games is the basis for this thesis. Two combinatorial games, along with some educational tools, were developed in the pursuit of the solution of these games. The game of Nim is at least centuries old, possibly originating in China, but noted in the 16th century in European countries. It consists of several stacks of tokens; two players alternate taking one or more tokens from one of the stacks, and the player who cannot make a move loses. The formal and intense study of Nim culminated in the celebrated Sprague-Grundy Theorem, which is now one of the centerpieces in the theory of impartial combinatorial games. We study a variation on Nim played on a graph. Graph Nim, for which the Sprague-Grundy theory does not provide a clear strategy, was originally developed at the University of Colorado Denver. Graph Nim was first played on graphs of three vertices. The winning strategy, and losing position, of three-vertex Graph Nim has been discovered, but we will expand the game to four vertices and develop the winning strategies for four-vertex Graph Nim. Graph theory is a markedly visual field of mathematics, and it is extremely useful for graph theorists and students to visualize the graphs they are studying. Software exists to visualize and analyze graphs, such as SAGE, but it is often extremely difficult to learn how to use such programs. The tools in GeoGebra make pretty graphs, but there is no automated way to make a graph or analyze a graph that has been built. Fortunately, GeoGebra allows the use of JavaScript in the creation of buttons, which lets us build useful graph theory tools in GeoGebra. We will discuss two applets we have created that can be used to help students learn some of the basics of graph theory.
The game of thrones is a two-player impartial combinatorial game played on an oriented complete graph (or tournament), named after the popular fantasy book and TV series. The game relies on a special type of vertex called a king. A king is a vertex k in a tournament T such that for all x in T, either k beats x or there exists a vertex y such that k beats y and y beats x. Players take turns removing vertices from a given tournament until there is only one king left in the resulting tournament. The winning player is the one who makes the final move. We develop a winning position and classify those tournaments that are optimal for the first- or second-moving player.
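The king definition above translates directly into code. A small sketch (not from the thesis) that finds all kings of a tournament given as a boolean beats-matrix:

```python
def kings(beats):
    """Return all kings of a tournament.

    beats[i][j] is True when vertex i beats vertex j. Vertex k is a
    king if every other vertex x is beaten by k directly, or through
    some intermediate y with k -> y -> x.
    """
    n = len(beats)
    result = []
    for k in range(n):
        def reaches(x):
            if beats[k][x]:
                return True
            return any(beats[k][y] and beats[y][x] for y in range(n))
        if all(reaches(x) for x in range(n) if x != k):
            result.append(k)
    return result

# 3-cycle: 0 beats 1, 1 beats 2, 2 beats 0 -- every vertex is a king.
cycle = [[False, True, False],
         [False, False, True],
         [True, False, False]]

# Transitive tournament 0 > 1 > 2 -- only the top vertex is a king.
transitive = [[False, True, True],
              [False, False, True],
              [False, False, False]]
```

The 3-cycle shows why the game is interesting: kings need not be unique, so removing a vertex can change which vertices are kings.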
149

Feature Screening of Ultrahigh Dimensional Feature Spaces With Applications in Interaction Screening

Reese, Randall D. 01 August 2018 (has links)
Data for which the number of predictors exponentially exceeds the number of observations is becoming increasingly prevalent in fields such as bioinformatics, medical imaging, computer vision, and social network analysis. One of the leading questions statisticians must answer when confronted with such "big data" is how to reduce a set of exponentially many predictors down to a mere few that have a truly causative effect on the response being modeled. This process is often referred to as feature screening. In this work we propose three new methods for feature screening. The first method (TC-SIS) is specifically intended for use with data having both categorical response and predictors. The second method (JCIS) is meant for screening for interactions between predictors. JCIS is rare among interaction screening methods in that it does not require first finding a set of causative main effects before screening for interactive effects. Our final method (GenCorr) is intended for use with data having a multivariate response. GenCorr is the only multivariate screening method which can screen for both causative main effects and causative interactions. Each of these methods will be shown to possess both theoretical robustness and empirical agility.
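Marginal feature screening in the spirit of sure independence screening (SIS) can be sketched as follows. This is a generic correlation-based screen for illustration, not an implementation of TC-SIS, JCIS, or GenCorr.

```python
import numpy as np

def sis_screen(X, y, d):
    """Rank predictors by absolute marginal correlation with the
    response and keep the indices of the top d."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
        (Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.argsort(-np.abs(corr))[:d]

rng = np.random.default_rng(4)
n, p = 200, 2000                      # far more predictors than observations
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(size=n)   # only columns 0 and 5 matter
kept = sis_screen(X, y, d=10)
```

Screening 2000 predictors down to 10 while retaining the two truly active ones is the basic promise of such methods; the thesis's contributions extend this idea to categorical data, interactions, and multivariate responses.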
150

Statistical Analysis and Modeling of Cyber Security and Health Sciences

Pokhrel, Nawa Raj 29 May 2018 (has links)
In the era of information technology, the importance and applicability of analytical statistical models in interdisciplinary settings of modern statistics have increased significantly. Understanding vulnerabilities from a statistical perspective helps to develop a set of modern statistical models and bridges the gap between cybersecurity and abstract statistical/mathematical knowledge. In this dissertation, our primary goal is to develop a series of strong statistical models for software vulnerability in conjunction with the Common Vulnerability Scoring System (CVSS) framework. In a nutshell, the overall research lies at the intersection of statistical modeling, cybersecurity, and data mining. Furthermore, we generalize the software vulnerability models to health science, particularly to stomach cancer data. In the context of cybersecurity, we have applied the well-known Markovian process in combination with the CVSS framework to determine overall network security risk. The developed model can be used to identify critical nodes in the host access graph where attackers may be most likely to focus. Based on that information, a network administrator can make appropriate, prioritized decisions for system patching. Further, a flexible risk ranking technique is described, where the decisions made by an attacker can be adjusted using a bias factor, and the model can be generalized for use with complicated network environments. We have further proposed a vulnerability analytic prediction model based on linear and non-linear approaches via time series analysis. Using currently available data from the National Vulnerability Database (NVD), this study develops and presents a set of predictive models utilizing AutoRegressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN), and Support Vector Machine (SVM) settings. The model that provides the minimum error rate is selected for prediction of future vulnerabilities.
In addition, we propose a new philosophy of the software vulnerability life cycle. It says that vulnerability saturation is a local phenomenon, and that vulnerability discovery possesses an increasing cyclic behavior within the software vulnerability life cycle. Based on this new philosophy, we propose a new, effective differential equation model to predict future software vulnerabilities, utilizing the vulnerability datasets of three major operating systems: Windows 7, Linux Kernel, and Mac OS X. The proposed analytical model is compared with existing models in terms of fitting and prediction accuracy. Finally, the predictive model is not only applicable to predicting future vulnerabilities but can also be used in various domains such as engineering, finance, business, and health science. For instance, we extended the idea to health science to predict the malignant tumor size of stomach cancer as a function of age, based on historical data from Surveillance, Epidemiology, and End Results (SEER).
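The Markov-chain risk ranking described in this abstract can be sketched as a stationary-distribution computation over a host access graph. The 3-host transition matrix below is invented for illustration and is not from the dissertation; in the dissertation's setting, transition probabilities would be derived from CVSS scores and an attacker bias factor.

```python
import numpy as np

def risk_rank(P, tol=1e-12):
    """Rank hosts by the stationary distribution of a Markov chain
    whose row-stochastic transition matrix P models attacker movement
    between hosts: higher stationary mass = more critical node."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    while True:
        nxt = pi @ P                   # one step of power iteration
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

# Hypothetical 3-host access graph: most attacker traffic flows to host 2.
P = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.2, 0.6],
              [0.3, 0.3, 0.4]])
pi = risk_rank(P)
```

The administrator would patch hosts in decreasing order of `pi`; here host 2 carries half the stationary mass and is patched first.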
