41

Getting Things in Order: An Introduction to the R Package seriation

Hahsler, Michael; Hornik, Kurt; Buchta, Christian. 18 March 2008 (has links) (PDF)
Seriation, i.e., finding a suitable linear order for a set of objects given data and a loss or merit function, is a basic problem in data analysis. Owing to the problem's combinatorial nature, it is hard to solve for all but very small sets. Nevertheless, both exact solution methods and heuristics are available. In this paper we present the package seriation, which provides an infrastructure for seriation with R. The infrastructure comprises data structures to represent linear orders as permutation vectors, a wide array of seriation methods using a consistent interface, a method to calculate the value of various loss and merit functions, and several visualization techniques which build on seriation. To illustrate how easily the package can be applied to a variety of applications, a comprehensive collection of examples is presented.
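To make the underlying problem concrete, here is a minimal sketch in Python (not the R package itself, just the idea it implements): for a small set of objects described by a dissimilarity matrix, it searches all permutations for the order minimizing one common loss, the sum of consecutive dissimilarities (Hamiltonian path length). The function names and toy data are illustrative only.

    import itertools
    import numpy as np

    def path_length(d, order):
        # loss: sum of dissimilarities between consecutive objects in the order
        return sum(d[order[i], order[i + 1]] for i in range(len(order) - 1))

    def brute_force_seriation(d):
        # exact seriation for very small n: try every permutation, keep the best
        best_order, best_loss = None, np.inf
        for perm in itertools.permutations(range(d.shape[0])):
            loss = path_length(d, perm)
            if loss < best_loss:
                best_order, best_loss = perm, loss
        return best_order, best_loss

    # toy example: Euclidean dissimilarities between five random 2-D points
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 2))
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    print(brute_force_seriation(d))

For realistic set sizes the factorial search is replaced by heuristics (for example, ordering the leaves of a hierarchical clustering), which is the kind of method collection the package bundles behind a consistent interface.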
42

Estimation of survival of left truncated and right censored data under increasing hazard

Shinohara, Russell. January 2007 (has links)
No description available.
43

Analysis of longitudinal failure time data

Hasan, Md. Tariqul, January 2004 (has links)
Thesis (Ph.D.)--Memorial University of Newfoundland, 2005. / Bibliography: leaves 151-153.
44

Roof Maintenance Record Analysis Toward Proactive Maintenance Policies

Khuncumchoo, Non 04 April 2007 (has links)
The objective of this study is to propose an approach that assists facility managers in obtaining the information needed to establish a proactive roof maintenance plan. Two main methodologies are used in this research. The first approach, Historical Maintenance Data Analysis (HMDA), investigates and pinpoints the root causes of roof leaks by thoroughly collecting and analyzing roof maintenance records. HMDA hypothesizes that a mathematical model can be developed to reveal relationships between potential roof leak causes and leak incidences. The second approach, Roof Service Life Prediction (RSLP), investigates the applicability of the Factor Method in roof maintenance. The use of RSLP for leak prediction is based on the assumption that the first-time leak has a linear relationship with the estimated service life (ESL) of the roof. This research demonstrates that roof maintenance records can be used, in a mathematical causal model, to predict and identify major factors that are likely causes of roof leaks. Roof leaks are not totally random events and can be predicted. In this study, three parameters (Age, Workmanship, and Roof Repair) have a significant impact on the probability of roof leaks within the first three years of a roof's life. A unit change in workmanship or roof age increases the odds of a roof leak, whereas changes in roof repair decrease the odds of a roof leak. The Factor Method performed in the RSLP confirms the existence of a relationship between the ESL and the first-time leak. The correlations discovered are positive and range from significant to highly significant, and the extent of correlation is found to be low to medium. The findings also illustrate a relatively simple and useful Factor Method technique that can be applied to the roof maintenance decision-making process. The estimated service life of a roof provides a reasonable estimate of a maintenance-free period. When ESL information is used in conjunction with knowledge obtained from HMDA, the new synthesis of knowledge will expand facility maintenance professionals' ability to develop and schedule a proactive roof maintenance plan.
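The odds statements above describe a logistic-regression-style model. As a hedged illustration only (not the dissertation's actual data, variables, or coefficients), the sketch below fits such a model to simulated leak records with hypothetical Age, Workmanship, and Repair covariates and reports odds ratios.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    age = rng.uniform(0, 3, n)            # roof age in years (hypothetical scale)
    workmanship = rng.integers(0, 5, n)   # recorded workmanship issues (hypothetical)
    repairs = rng.integers(0, 4, n)       # number of prior repairs (hypothetical)

    # simulate leaks so that age and workmanship raise, and repairs lower, the odds
    logit = -2.0 + 0.8 * age + 0.5 * workmanship - 0.6 * repairs
    leak = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([age, workmanship, repairs]))
    fit = sm.Logit(leak, X).fit(disp=0)
    print(np.exp(fit.params))  # odds ratios: >1 raises leak odds, <1 lowers them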
45

Analyzing longitudinally correlated failure time data: a generalized estimating equation approach

Hasan, Md. Tariqul, January 2001 (has links)
Thesis (M.Sc.)--Memorial University of Newfoundland, 2001. / Bibliography: leaves 87-89.
46

Instrumental variables in survival analysis

Harvey, Danielle J. January 2001 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Statistics, August 2001. / Includes bibliographical references. Also available on the Internet.
47

Analysis of interval-censored failure time data with long-term survivors

Wong, Kin-yau, 黃堅祐. January 2012 (has links)
Failure time data analysis, or survival analysis, arises in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, the presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model that does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data. The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model in which the degree of association between each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual, leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to a breast cancer data set and the multivariate models are applied to hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets. / Statistics and Actuarial Science / Master of Philosophy
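As a hedged sketch of the general structure such a frailty-Cox cure model takes (notation chosen here for illustration, not necessarily the thesis's): conditional on a non-negative frailty Z the hazard has Cox form, the population survival function is the Laplace transform of Z, and a compound-Poisson frailty puts positive mass at Z = 0, which yields the cure fraction:

    \lambda(t \mid Z, x) = Z\,\lambda_0(t)\exp(\beta^\top x),
    \qquad
    S(t \mid x) = \mathrm{E}\bigl[\exp\{-Z\,\Lambda_0(t)\exp(\beta^\top x)\}\bigr],
    \qquad
    \lim_{t \to \infty} S(t \mid x) = \Pr(Z = 0) > 0,
    \quad \text{where } \Lambda_0(t) = \int_0^t \lambda_0(u)\,du .

For a compound-Poisson frailty, Pr(Z = 0) is strictly positive, so the limit above is exactly the long-term-survivor (cure) fraction; a gamma frailty, by contrast, has Pr(Z = 0) = 0 and hence no cured subgroup.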
48

Numerical Simulations of Galaxy Formation: Angular Momentum Distribution and Phase Space Structure of Galactic Halos

Sharma, Sanjib January 2005 (has links)
Within the past decade, the CDM model has emerged as a standard paradigm of structure formation. While it has been very successful in explaining the structure of the Universe on large scales, on smaller (galactic) scales problems have surfaced. In this thesis, we investigate several of these problems in more detail. The thesis is organized as follows. In Chapter 1, we give a brief introduction to structure formation in the universe and discuss some of the problems faced by the current CDM paradigm of galaxy formation. In Chapter 2, we analyze the angular momentum properties of virialized halos obtained from hydrodynamical simulations. We present an analytical function that can be used to describe a wide variety of angular momentum distributions (AMDs) with just one parameter α. About 90-95% of halos turn out to have α < 1.3, while exponential disks in cosmological halos would require 1.3 < α < 1.6. This implies that a typical halo in simulations has an excess of low angular momentum material compared to observed exponential disks, a result which is consistent with the findings of earlier works. In Chapter 3, we perform controlled numerical experiments of merging galactic halos in order to shed light on the results obtained in cosmological simulations. We explore the properties of the shape parameter α of AMDs and the spin ratio λ_Gas/λ_DM in merger remnants, and also their dependence on orbital parameters. We find that the shape parameter α is typically close to 1 for a wide range of orbital parameters, less than what is needed to form an exponential disk. The last chapter of the thesis (Chapter 4) is devoted to the analysis of the phase space structure of dark matter halos. We first present a method to numerically estimate the densities of discretely sampled data based on a binary space partitioning tree. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. We use this technique to analyze the phase space structure of halos.
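Purely as an illustration of the density-estimation idea in Chapter 4 (this toy uses simple median splits rather than the thesis's entropy-based splitting criterion, and all names and data are made up), a binary space partition can estimate densities by recursively cutting the sample and assigning each leaf the density count/(N × leaf volume):

    import numpy as np

    def bsp_density(points, lo, hi, leaf_size=32, n_total=None):
        # recursively bisect the longest axis at the sample median; in a leaf,
        # estimate the density as (points in leaf) / (total points * leaf volume)
        if n_total is None:
            n_total = len(points)
        if len(points) <= leaf_size:
            volume = np.prod(hi - lo)
            return [(lo, hi, len(points) / (n_total * volume))]
        dim = int(np.argmax(hi - lo))
        cut = float(np.median(points[:, dim]))
        left, right = points[points[:, dim] <= cut], points[points[:, dim] > cut]
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[dim] = cut
        lo_right[dim] = cut
        return (bsp_density(left, lo, hi_left, leaf_size, n_total)
                + bsp_density(right, lo_right, hi, leaf_size, n_total))

    pts = np.random.default_rng(0).normal(size=(2000, 3))
    cells = bsp_density(pts, pts.min(0) - 1e-3, pts.max(0) + 1e-3)
    print(len(cells), max(c[2] for c in cells))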
49

Bivariate B-splines and its Applications in Spatial Data Analysis

Pan, Huijun 1987- 16 December 2013 (has links)
In the field of spatial statistics, it is often desirable to generate a smooth surface for a region over which only noisy observations of the surface are available at some locations, or even across time. Kriging and kernel estimation are two of the most popular methods. However, these two methods become problematic when the domain is irregular, for example when it is neither rectangular nor convex. Bivariate B-splines developed by mathematicians provide a useful nonparametric tool in bivariate surface modeling. They inherit several appealing properties of univariate B-splines and are applicable in various modeling problems. More importantly, bivariate B-splines have advantages over kriging and kernel estimation when dealing with complicated domains. The purpose of this dissertation is to develop a nonparametric surface fitting method using bivariate B-splines that can handle complex spatial domains. The dissertation consists of four parts. The first part explains the challenges of smoothing over complicated domains and reviews existing methods. The second part introduces bivariate B-splines and explains their properties and implementation techniques. The third and fourth parts discuss applications of bivariate B-splines in two nonparametric spatial surface fitting problems. In particular, the third part develops a penalized B-splines method to reconstruct a smooth surface from noisy observations. A numerical algorithm is derived, implemented, and applied to simulated and real data. The fourth part develops a reduced-rank mixed-effects model for functional principal components analysis of sparsely observed spatial data. A numerical algorithm is used to implement the method and is tested on simulated and real data.
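As a rough, hedged illustration of spline-based smoothing of noisy scattered observations (using SciPy's tensor-product smoothing splines on a simple rectangular domain, not the triangulation-based bivariate B-splines the dissertation develops, and with made-up data):

    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 400), rng.uniform(0, 1, 400)
    z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) + rng.normal(0, 0.1, 400)

    # s trades off fidelity against smoothness (larger s gives a smoother surface)
    surface = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=4.0)
    grid = np.linspace(0, 1, 50)
    z_hat = surface(grid, grid)   # evaluate the fitted surface on a 50 x 50 grid
    print(z_hat.shape)

The advantage of the triangulation-based B-splines discussed in the abstract is precisely that they are not tied to such a rectangular tensor-product layout and so can respect complicated domain boundaries.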
50

Functional data analysis with application to MS and cervical vertebrae data

Yaraee, Kate Unknown Date
No description available.
