41

Smoothing Spline Analysis of Variance Models On Accelerometer Data

Chen, Lulu 01 January 2023 (has links) (PDF)
In this thesis, the basics of smoothing spline analysis of variance are first introduced. Regular physical activity has been shown to reduce the risk of chronic diseases in older adults, such as heart disease, stroke, diabetes, and certain forms of cancer. Accurate measurement of physical activity levels in older adults is crucial to identify those who may require interventions to increase their activity levels and prevent functional decline. In our study, we collected data on the physical activity of older individuals using accelerometer devices. To estimate the underlying patterns related to each covariate, we apply smoothing spline analysis of variance (SSANOVA) methods to two types of measurements from the accelerometer device. We investigate the underlying patterns of different participant groups and compare the patterns among groups. The thesis reveals clear patterns of activity levels throughout the day and across days, with differences observed among groups. Additionally, the study compares the mean curve method and the SSANOVA model, and shows that the SSANOVA model is a more suitable method for analyzing physical activity data. The study provides valuable insights into daily physical activity patterns in older people and highlights the usefulness of the SSANOVA model for such data analysis.
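
A minimal sketch of the modeling step this abstract describes, using the R package gss (which provides ssanova). The covariates `hour` and `day` and the simulated activity counts are placeholders, not the thesis data.

```r
# Fit a smoothing spline ANOVA model to (simulated) activity counts,
# with main effects for time of day and day of week plus their
# interaction, as in a two-way functional ANOVA decomposition.
library(gss)

set.seed(1)
dat <- data.frame(
  hour  = runif(500, 0, 24),
  day   = factor(sample(1:7, 500, replace = TRUE)),
  count = rpois(500, 10)
)
dat$y <- log1p(dat$count)  # tame the skew of raw activity counts

fit <- ssanova(y ~ hour * day, data = dat)

# Estimated within-day pattern for one day of the week, on a grid,
# with Bayesian standard errors for pointwise intervals.
grid <- expand.grid(hour = seq(0, 24, length.out = 97),
                    day  = factor(3, levels = 1:7))
pred <- predict(fit, grid, se.fit = TRUE)
```
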
42

Comparing process capability : a Cpk ratio approach

Minardi, Michael 01 January 2002 (has links)
Quality control is an important aspect of the production of goods, and it relies heavily on statistical methods. There are many approaches to testing the quality level of a single production characteristic; this thesis gives a brief description of some of these, including Shewhart control charts, Cp, and Cpk. These measures are used quite frequently with single-line production. This thesis focuses on two same-product factory lines by analyzing the ratio Cpk(1)/Cpk(2). Using this ratio, it is possible to test whether both factory lines are equally capable of producing the product. The thesis centers on a program that generates the distribution of Cpk(1)/Cpk(2) using a simulation approach with 2500 observations. The thesis explains how to use the program as well as the logic behind it.
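
To make the simulation idea concrete, here is a hedged re-creation (not the thesis program) of how the distribution of the ratio Cpk(1)/Cpk(2) could be generated; the specification limits, sample size, and process parameters are assumed values.

```r
# Cpk = min(USL - mean, mean - LSL) / (3 * sd), estimated from a sample.
cpk <- function(x, lsl, usl) {
  m <- mean(x); s <- sd(x)
  min(usl - m, m - lsl) / (3 * s)
}

set.seed(42)
lsl <- 9; usl <- 11; n <- 50   # assumed spec limits and per-line sample size
ratios <- replicate(2500, {
  line1 <- rnorm(n, mean = 10.0, sd = 0.15)
  line2 <- rnorm(n, mean = 10.0, sd = 0.15)
  cpk(line1, lsl, usl) / cpk(line2, lsl, usl)
})

# Under equal capability the ratio concentrates around 1; quantiles of
# the simulated distribution supply critical values for a test.
quantile(ratios, c(0.025, 0.975))
```
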
43

Statistical Analysis for Pollutants in Precipitation Observed in a Selected Region

Malik, Rajat January 2010 (has links)
The study of pollution and its effects on the environment has long been of interest to many researchers. Short- and long-term effects, as well as future predictions based on past observations, are important factors to consider in such a study. The purpose of this thesis is to observe the long-term changes and trends of pollutants in precipitation in a selected region of the north-eastern United States and Canada. The data were collected weekly from 1995 to 2006 on the pollutants ammonium, nitrate, excess sulphate, and hydrogen ion (NH4, NO3, XSO4, and H+, respectively). In total, 19 different stations were investigated. Two types of statistical models were fit to the data: a gamma regression and a random effects model. The gamma regression assumed independence of any spatial and temporal factors and was used to conceptualize the overall trend and yearly fit. The preliminary analysis found strong evidence of spatial dependence, but temporal dependence was weak enough to be ignored. The random effects model was adopted to handle dependencies caused by any underlying mechanisms. For pollutant NH4, fitting the random effects model produced no significant factors, and the Year effect was non-significant. For pollutant NO3, the coefficient of Year was significant and decreasing. Pollutant XSO4 also showed a significant and decreasing Year effect. The random effects model did not produce any significant factors for pollutant H+. Overall, the random effects model is a more reasonable approach to fitting these data because it accounts for spatial dependence, whereas the gamma regression assumes independent responses. / Master of Science (MS)
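
As an illustration of the two model types described, a minimal R sketch under assumed variable names (`conc`, `year`, `station`), with simulated stand-in data; the mixed model uses lme4.

```r
library(lme4)

# Stand-in data: 19 stations, weekly-style records over 1995-2006.
set.seed(1)
precip <- data.frame(
  station = factor(rep(1:19, each = 60)),
  year    = rep(seq(1995, 2006, length.out = 60), times = 19)
)
precip$conc <- rgamma(nrow(precip), shape = 2,
                      rate = 0.1 * (precip$year - 1990))  # declining mean

# Gamma regression, ignoring spatial and temporal structure.
fit_glm <- glm(conc ~ year, family = Gamma(link = "log"), data = precip)

# Random intercept per station to absorb the spatial dependence.
fit_mix <- glmer(conc ~ year + (1 | station),
                 family = Gamma(link = "log"), data = precip)
summary(fit_mix)   # significance of the Year trend
```
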
44

SELECTING THE MOST PROBABLE CATEGORY: THE R PACKAGE RS

Jin, Hong 10 1900 (has links)
Selecting the most probable multinomial or multivariate hypergeometric category is a multiple-decision selection problem. In this package, fixed sampling and inverse sampling are used for selecting the most probable category. The package aims at providing functionality to calculate, display, and plot the probabilities of correctly selecting the most probable category under the least favorable configuration for these two sampling types. A function for finding the specified smallest acceptable sample size (or cell quota and expected sample size) is included as well. / Master of Science (MSc)
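
The RS package itself may not be readily available, so as an illustration only, here is a small Monte Carlo sketch of the quantity described: the probability of correctly selecting the most probable multinomial cell under a least favorable configuration, for fixed sampling with an assumed ratio theta.

```r
# Least favorable configuration: the best cell's probability exceeds each
# of the others by the factor theta, with the remaining cells equal.
pcs_fixed <- function(n, k = 3, theta = 1.6, reps = 10000) {
  p <- c(theta, rep(1, k - 1)) / (theta + k - 1)
  wins <- replicate(reps, {
    counts <- rmultinom(1, size = n, prob = p)
    which.max(counts) == 1      # cell 1 is the truly most probable one
  })
  mean(wins)                    # estimated P(correct selection)
}

pcs_fixed(n = 50)   # P(CS) estimate for a fixed sample of 50 observations
```
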
45

An Exploratory Statistical Analysis of NASDAQ Provided Trade Data

Foley, Michael 01 January 2014 (has links)
Since Benoit Mandelbrot's discovery of the fractal nature of financial price series, the quantitative analysis of financial markets has been an area of increasing interest for scientists, traders, and regulators. Further, major technological advances over this time have facilitated not only financial innovations, but also the computational ability to analyze and model markets. The stylized facts are qualitative statistical signatures of financial market data that hold true across different stocks and over many different timescales. In pursuit of a mechanistic understanding of markets, we look to quantify such statistics accurately. With this quantification, we can test computational market models against the stylized facts and run controlled experiments. This requires both the discovery of new stylized facts and persistent testing of old stylized facts on new data. Using NASDAQ-provided data covering the years 2008-2009, we analyze the trades of 120 stocks. An analysis of the stylized facts guides our exploration of the data, where our results corroborate other findings in the existing body of literature. In addition, we search for statistical indicators of market instability in our data sets. We find promising areas for further study and obtain three key results. Throughout our sample data, high-frequency trading plays a larger role in rapid price changes of all sizes than would be randomly expected, but plays a smaller role than usual during rapid price changes of large magnitude. Our analysis also yields further evidence of the long-term persistence in the autocorrelations of signed order flow, as well as evidence of long-range dependence in price returns.
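
One of the stylized facts mentioned, the slow decay of autocorrelations in signed order flow, can be checked schematically as below; the trade-sign series here is a simulated placeholder for the NASDAQ data, and the power-law exponent estimate is only meaningful on real order flow.

```r
# Placeholder trade signs (+1 buyer-initiated, -1 seller-initiated).
# An i.i.d. series has ACF ~ 0; real order flow stays positively
# autocorrelated over hundreds of trades.
set.seed(7)
signs <- sign(rnorm(1e4))
ac <- acf(signs, lag.max = 300, plot = FALSE)

# For real order flow the ACF decays roughly as a power law lag^(-gamma);
# a log-log regression on the positive-ACF lags estimates gamma.
lags <- 1:300
pos  <- ac$acf[-1] > 0
gamma_hat <- -coef(lm(log(ac$acf[-1][pos]) ~ log(lags[pos])))[2]
gamma_hat
```
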
46

Algorithms for operations on probability distributions in a computer algebra system

Evans, Diane Lynn 01 January 2001 (has links)
In mathematics and statistics, the desire to eliminate mathematical tedium and facilitate exploration has led to computer algebra systems. These computer algebra systems allow students and researchers to perform more of their work at a conceptual level. The design of generic algorithms for tedious computations allows modelers to push current modeling boundaries outward more quickly.

Probability theory, with its many theorems and symbolic manipulations of random variables, is a discipline in which automation of certain processes is highly practical, functional, and efficient. There are many existing statistical software packages, such as SPSS, SAS, and S-Plus, that have numeric tools for statistical applications. There is potential for a probability package analogous to these statistical packages for the manipulation of random variables. The software package being developed as part of this dissertation, referred to as "A Probability Programming Language" (APPL), is a random variable manipulator and is proposed to fill a technology gap that exists in probability theory.

My research involves developing algorithms for the manipulation of discrete random variables. By defining data structures for random variables and writing algorithms for implementing common operations, more interesting and mathematically intractable probability problems can be solved, including those not attempted in undergraduate statistics courses because they were deemed too mechanically arduous. Algorithms for calculating the probability density function of order statistics, transformations, convolutions, products, and minimums/maximums of independent discrete random variables are included in this dissertation.
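
APPL itself is a computer-algebra package, but the flavor of one operation the abstract lists, the convolution of two independent discrete random variables, can be sketched in R; the support-vector-plus-probability-vector representation is an assumption for illustration, not APPL's actual data structure.

```r
# Convolution (sum) of two independent discrete random variables, each
# given as a support vector and a matching probability vector.
convolve_discrete <- function(x1, p1, x2, p2) {
  s <- outer(x1, x2, `+`)                   # all pairwise sums
  p <- outer(p1, p2, `*`)                   # corresponding probabilities
  tapply(as.vector(p), as.vector(s), sum)   # merge duplicate support points
}

# Example: the sum of two fair dice recovers the familiar triangular PMF.
die <- 1:6
convolve_discrete(die, rep(1/6, 6), die, rep(1/6, 6))
```
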
47

Nonconvex selection in nonparametric additive models

Zhang, Xiangmin 01 December 2014 (has links)
High-dimensional data offers researchers increased ability to find useful factors in predicting a response. However, determining the most important factors requires careful selection of the explanatory variables. To tackle this challenge, much work has been done on single or grouped variable selection under the penalized regression framework. Although the topic of variable selection has been extensively studied under the parametric framework, its application to more flexible nonparametric models is yet to be explored. To implement variable selection in nonparametric additive models, I introduce and study two nonconvex selection methods under the penalized regression framework, namely the group MCP and the adaptive group LASSO, aiming at improving on the selection performance of the more widely known group LASSO method in such models. One major part of the dissertation focuses on the theoretical properties of the group MCP and the adaptive group LASSO: I derive their selection and estimation properties. The application of the proposed methods to nonparametric additive models is further examined using simulation, and their applications to areas such as economics and genomics are presented as well. In both the simulation studies and the data applications, the group MCP and the adaptive group LASSO show advantages over the more traditionally used group LASSO method. For the proposed adaptive group LASSO, which uses newly proposed weights whose recursive application has not been studied before, I also derive theoretical properties under a very general framework; simulation studies under linear regression are included. In addition to the theoretical and empirical investigations, several other important issues are briefly discussed throughout the dissertation, including computing algorithms and different ways of selecting tuning parameters.
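
A hedged illustration of the group MCP idea in this setting, using the R package grpreg: each additive component is expanded in a spline basis and treated as one group, so zeroing a group removes the whole nonparametric component. The data, basis dimension, and lambda value are arbitrary choices, not those of the dissertation.

```r
library(grpreg)
library(splines)

# Additive model with two active components and one inactive predictor.
set.seed(1)
n <- 200
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n)
y  <- sin(2 * pi * x1) + (x2 - 0.5)^2 + rnorm(n, sd = 0.3)

# Spline basis expansion: 5 columns per predictor, grouped by predictor.
X <- cbind(bs(x1, df = 5), bs(x2, df = 5), bs(x3, df = 5))
group <- rep(1:3, each = 5)

fit <- grpreg(X, y, group = group, penalty = "grMCP")
coef(fit, lambda = 0.05)  # zeroed groups = excluded additive components
```
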
48

A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

Smith, Laura 01 January 2001 (has links)
No description available.
49

Statistical problems involving compositions in a covariate role

Li, Kin-tak, Christopher. January 1986 (has links)
Thesis--M. Phil., University of Hong Kong, 1987.
50

The statistical analysis of compositional data

Shen, Shir-ming. January 1983 (has links)
Thesis (Ph. D.)--University of Hong Kong, 1984.
