About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Space-time clustering : finding the distribution of a correlation-type statistic.

Siemiatycki, Jack. January 1971.
No description available.
32

The use of digital computers for statistical analysis in textiles

Sarvate, Sharad Ramchandra. 05 1900.
No description available.
33

A criterion for selecting the probability density function of best fit for hydrologic data

Donthamsetti, Veerabhadra Rao. 05 1900.
No description available.
34

New residuals in multivariate bilinear models : testing hypotheses, diagnosing models and validating model assumptions

Seid Hamid, Jemila. January 2005.
Doctoral dissertation (summary). Uppsala: Sveriges lantbruksuniversitet, 2005. Accompanied by 3 papers.
35

Coefficients of variation : an approximate F-test

Forkman, F. Johannes. January 2005.
Licentiate thesis. Uppsala: Sveriges lantbruksuniversitet.
36

Digital computers and geodetic computation : solution of normal equations and error analysis of geodetic networks

Ashkenazi, V. January 1965.
No description available.
37

Variable selection in high dimensional semi-varying coefficient models

Chen, Chi. 6 September 2013.
With the development of computing and sampling technologies, high dimensionality has become an important characteristic of commonly used scientific data, such as data from bioinformatics, information engineering, and the social sciences. The varying coefficient model is a flexible and powerful statistical model for exploring dynamic patterns in many scientific areas. It is a natural extension of classical parametric models with good interpretability, and it is becoming increasingly popular in data analysis. The main objective of this thesis is to apply the varying coefficient model to high-dimensional data and to investigate the properties of regularization methods for high-dimensional varying coefficient models.

We first discuss how to apply local polynomial smoothing and the smoothly clipped absolute deviation (SCAD) penalty to estimate varying coefficient models when the dimension of the model diverges with the sample size. Based on the nonconcave penalized method and local polynomial smoothing, we propose a regularization method that selects significant variables and estimates the corresponding coefficient functions simultaneously; importantly, it also identifies constant coefficients at the same time. We investigate the asymptotic properties of the proposed method and show that it has the so-called "oracle property."

We then apply the nonparametric independence screening (NIS) method to varying coefficient models with ultra-high-dimensional data. Based on marginal varying coefficient model estimation, we establish the sure screening property of the proposed screening method under regularity conditions. Combined with our regularization method, this lets us deal systematically with high-dimensional or ultra-high-dimensional data using varying coefficient models.

The nonconcave penalized method is a very effective variable selection method, but maximizing such a penalized likelihood is computationally challenging because the objective function is nondifferentiable and nonconcave. The local linear approximation (LLA) and local quadratic approximation (LQA) are two popular algorithms for such optimization problems. In this thesis we revisit both: we investigate the convergence rate of LLA and show that it is linear, and we study the statistical properties of the one-step estimator based on LLA under a generalized statistical model with a diverging number of dimensions. We also propose a modified version of LQA that overcomes its drawback in high-dimensional models by avoiding the inversion of the Hessian matrix in the LQA-based modified Newton-Raphson algorithm. The proposed methods are investigated through numerical studies and a real case study in Chapter 5.
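The SCAD penalty central to this abstract has a standard closed form due to Fan and Li (2001); as a point of reference, a minimal sketch in Python, using the conventional choice a = 3.7, with function and variable names that are illustrative rather than taken from the thesis:

    import numpy as np

    def scad_penalty(theta, lam, a=3.7):
        """SCAD penalty of Fan and Li (2001), evaluated elementwise.
        Linear near zero (like the lasso), quadratic in a middle band,
        then constant, so large coefficients are not over-shrunk."""
        t = np.abs(np.asarray(theta, dtype=float))
        small = t <= lam                        # lasso-like region
        mid = (t > lam) & (t <= a * lam)        # quadratic taper
        out = np.empty_like(t)
        out[small] = lam * t[small]
        out[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
        out[~small & ~mid] = lam ** 2 * (a + 1) / 2   # flat region
        return out

The flat tail is what drives the "oracle property": sufficiently large coefficients incur a constant penalty and are therefore estimated essentially without shrinkage bias.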
38

Probabilistic modelling of genomic trajectories

Campbell, Kieran. January 2017.
The recent advancement of whole-transcriptome gene expression quantification technology, particularly at the single-cell level, has created a wealth of biological data. An increasingly popular unsupervised analysis is to find one-dimensional manifolds, or trajectories, through such data that track the development of some biological process. Such methods may be necessary due to the lack of explicit time-series measurements or due to asynchronicity of the biological process at a given time. This thesis aims to recast trajectory inference from high-dimensional "omics" data as a statistical latent variable problem. We begin by examining sources of uncertainty in current approaches and the consequences of propagating such uncertainty to downstream analyses. We also introduce a model of switch-like differentiation along trajectories. Next, we consider inferring such trajectories through parametric nonlinear factor analysis models and demonstrate that incorporating information about gene behaviour as informative Bayesian priors improves inference. We then consider the case of bifurcations in the data and demonstrate the extent to which they may be modelled using a hierarchical mixture of factor analysers. Finally, we propose a novel type of latent variable model that infers such trajectories in the presence of heterogeneous genetic and environmental backgrounds. We apply this to both single-cell and population-level cancer datasets and propose a nonparametric extension similar to Gaussian process latent variable models.
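As a concrete illustration of the kind of switch-like expression model the abstract describes, a minimal Python sketch; the sigmoid form and the parameter names mu0, k, and t0 are assumptions made here for illustration, not the thesis's exact specification:

    import numpy as np

    def switch_expression(t, mu0, k, t0):
        """Illustrative switch-like mean expression along a trajectory.
        t   : pseudotime of each cell (1-D array)
        mu0 : expression at the switch point (half the saturating level)
        k   : activation strength (its sign gives on/off switching)
        t0  : pseudotime at which the switch occurs"""
        return 2 * mu0 / (1 + np.exp(-k * (np.asarray(t) - t0)))

    # Example: a gene switching on halfway along a unit-length trajectory
    t = np.linspace(0.0, 1.0, 200)
    mu = switch_expression(t, mu0=5.0, k=20.0, t0=0.5)

Fitting such a curve per gene, with the pseudotimes t treated as latent variables, is one way trajectory inference becomes a statistical latent variable problem.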
39

A statistical continuum approach for mass transport in fractured media

Robertson, Mark Donald. January 1990.
The stochastic-continuum model developed by Schwartz and Smith [1988] is a new approach to the traditional continuum methods for solute transport in fractured media. Instead of trying to determine dispersion coefficients and an effective porosity for the hydraulic system, statistics on particle motion (direction, velocity, and fracture length) collected from a discretely modeled sub-domain network are used to recreate particle motion in a full-domain continuum model. The discrete sub-domain must be large enough that representative statistics can be collected, yet small enough to be modeled with available resources. Statistics are collected in the discrete sub-domain model as the solute, represented by discrete particles, is moved through the network of fractures. The domain of interest, which is typically too large to be modeled discretely, is represented by a continuum distribution of the hydraulic head. A particle-tracking method is used to move the solute through the continuum model, sampling from the distributions for direction, velocity, and fracture length.

This thesis documents extensions and further testing of the two-dimensional stochastic-continuum model and initial work on a three-dimensional version. The model was tested by comparing its mass distribution to the mass distribution from the same domain modeled discretely. Analysis of the velocity statistics collected in the two-dimensional model suggested changing the fitted velocity distribution from a Gaussian to a gamma distribution and adding a velocity correlation function. These changes improved the match of the spatial mass-distribution moments between the stochastic-continuum and discrete models. The extended two-dimensional model was then tested under a wide range of network conditions: the differences in the first spatial moments of the discrete and stochastic-continuum models were less than 10%, while the differences in the second spatial moments ranged from 6% to 30%. Initial results from the three-dimensional model showed that statistics similar to those used in two dimensions can recreate the nature of three-dimensional discrete particle motion.

Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences; Graduate.
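A minimal Python sketch of the particle-tracking step described above; the gamma velocity parameters, the exponential segment lengths, and the two fracture-set orientations are placeholders for illustration, not the statistics fitted in the thesis (which also adds a velocity correlation function omitted here):

    import numpy as np

    rng = np.random.default_rng(0)

    def track_particle(n_steps, shape, scale, mean_length, directions, probs):
        """Random-walk one particle: each step samples a fracture-set
        direction, a gamma-distributed velocity, and a segment length,
        then accumulates displacement and travel time."""
        pos = np.zeros(2)
        time = 0.0
        for _ in range(n_steps):
            theta = rng.choice(directions, p=probs)   # fracture-set orientation
            v = rng.gamma(shape, scale)               # velocity (gamma-distributed)
            length = rng.exponential(mean_length)     # fracture segment length
            pos += length * np.array([np.cos(theta), np.sin(theta)])
            time += length / v
        return pos, time

    # Placeholder example: two orthogonal fracture sets
    final_pos, travel_time = track_particle(
        n_steps=50, shape=2.0, scale=1e-5, mean_length=0.5,
        directions=np.array([0.0, np.pi / 2]), probs=[0.6, 0.4])

Repeating this for many particles and recording their positions at fixed times yields the spatial mass-distribution moments that the thesis compares against the discrete model.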
40

Information and distance measures with application to feature evaluation and to heuristic sequential classification

Vilmansen, Toomas Rein. January 1974.
Two different aspects of the problem of selecting measurements for statistical pattern recognition are investigated. First, the evaluation of features for multiclass recognition problems using measures of probabilistic dependence is examined. Second, the evaluation and selection of features for a general tree-type classifier is investigated.

Measures of probabilistic dependence are derived from pairwise distance measures such as the Bhattacharyya distance, divergence, Matusita's distance, and discrimination information. The properties of these dependence measures are developed in the context of feature-class dependency, inequalities relating the measures are derived, and upper and lower bounds on error probability are derived and compared for the different measures. Feature-ordering experiments are performed to compare the measures to error probability and to each other.

A fairly general tree-type sequential classifier is then examined. An algorithm for constructing the decision tree is derived which uses distance measures for clustering probability distributions and dependence and distance measures for ordering features. The concept of confidence in a decision, in conjunction with backtracking, is introduced to make decisions at any node of the tree tentative and reversible, and the idea of re-introducing classes at any stage is discussed. Experiments are performed to determine the storage and processing requirements of the classifier, the effects of various parameters on performance, and the usefulness of the backtracking and class re-introduction procedures.

Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate.
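For reference, the Bhattacharyya distance that anchors the abstract's list of pairwise distance measures has a simple closed form for discrete distributions, D_B = -ln(sum_i sqrt(p_i * q_i)); a minimal Python sketch:

    import numpy as np

    def bhattacharyya_distance(p, q):
        """Bhattacharyya distance between two discrete distributions:
        -ln of the Bhattacharyya coefficient; zero iff p equals q."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        bc = np.sum(np.sqrt(p * q))     # Bhattacharyya coefficient, in (0, 1]
        return -np.log(bc)

    # Example: class-conditional feature distributions over three bins
    print(bhattacharyya_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))

Larger distances correspond to better class separability, which is what makes such measures useful for ordering candidate features.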
