61

Contributions to the theory of arrangement increasing functions

A function $f(\underline{x})$ which increases each time we transpose an out of order pair of coordinates, $x\sb{j} > x\sb{k}$ for some $j < k$, is called arrangement increasing (AI); that is, an AI function increases when we correct an out of order pair $x\sb{j} > x\sb{k}$ by transposing the two x coordinates. The theory of AI functions is tailor-made for ranking and selection problems, in which case we assume that the density $f(\underline{\theta},\underline{x})$ of observations with respective parameters $\theta\sb1,\ldots,\theta\sb{n}$ is AI, and the goal is to determine the largest or smallest parameters. / In this dissertation we present new applications of AI functions in such areas as biology and reliability, and we generalize the notion of AI functions. We consider multivector extensions, some with and one without respect to parameter vectors, and we connect these. Another generalization (TEGO) is motivated by the connection between total positivity (TP) and AI; TEGO results are shown to imply both AI and TP results. We also define and develop a partial ordering on densities of rank vectors. The theory, which involves finding the extreme points of the convex set of AI rank densities, is then used to establish some power results of rank tests. / Source: Dissertation Abstracts International, Volume: 50-08, Section: B, page: 3563. / Major Professor: Fred Leysieffer. / Thesis (Ph.D.)--The Florida State University, 1989.
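The transposition definition above is easy to check numerically. The following is a minimal Python sketch (ours, not the dissertation's) that tests the AI property of a function at a single point by examining every out-of-order coordinate pair; the example $f(\underline{x}) = \sum\sb{i} i\,x\sb{i}$ is AI by the rearrangement inequality.

```python
import itertools
import numpy as np

def is_arrangement_increasing(f, x, tol=1e-12):
    """Numerically check the AI property of f at the point x: correcting
    any out-of-order pair x_j > x_k (j < k) by transposition must not
    decrease f. A check at one point, not a proof over all of R^n."""
    x = np.asarray(x, dtype=float)
    for j, k in itertools.combinations(range(len(x)), 2):
        if x[j] > x[k]:                       # out-of-order pair
            y = x.copy()
            y[j], y[k] = y[k], y[j]           # transpose to correct it
            if f(y) < f(x) - tol:             # f must (weakly) increase
                return False
    return True

# f(x) = sum_i i * x_i is AI: moving larger values to larger indices
# increases the sum (rearrangement inequality).
f = lambda x: float(sum(i * xi for i, xi in enumerate(x)))
print(is_arrangement_increasing(f, [3.0, 1.0, 2.0]))   # True
```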
62

Nonparametric methods for imperfect repair models

Under the imperfect repair model of Brown and Proschan (1983), a failed item is replaced by a new item (perfect repair) with probability p, and with probability 1 $-$ p, a minimal repair is performed; that is, the failed item is replaced by a working item of the same age. This procedure is repeated at each subsequent failure. Block, Borges, and Savits (1985) extend this model by allowing p to be a function of the age of the item. In both of these models, imperfect repairs are thus assumed to be minimal. Whitaker and Samaniego (1989) propose an estimator for the life distribution, F, of a new item, when either of these processes is observed until the time of the $m\sp{\rm th}$ perfect repair. / In this dissertation, we extend a result of Block, Borges, and Savits identifying the distribution of the waiting time until the first perfect repair to the case of possibly discontinuous F. We then use a martingale approach to rederive and extend the weak convergence results of Whitaker and Samaniego. These results are used to derive asymptotic confidence bands for F, and an extension of the Wilcoxon two-sample test for data collected under these models. Finally, we propose a test of the minimal repair assumption, and the limiting distribution of the proposed test statistic is derived. / Source: Dissertation Abstracts International, Volume: 51-02, Section: B, page: 0831. / Major Professors: Myles Hollander; Jayaram Sethuraman. / Thesis (Ph.D.)--The Florida State University, 1989.
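The Brown-Proschan process described above is straightforward to simulate. Below is a minimal sketch (our illustration, with a Weibull lifetime assumed for concreteness): at each failure a perfect repair renews the item with probability p, while a minimal repair leaves its age unchanged, so the next residual life is drawn conditional on survival to the current age.

```python
import numpy as np

rng = np.random.default_rng(0)

def brown_proschan_failures(p, shape, scale, horizon, rng=rng):
    """Simulate failure times of the Brown-Proschan process: at each
    failure a perfect repair (new item, age 0) occurs with probability p;
    otherwise a minimal repair leaves the item at its current age."""
    S = lambda x: np.exp(-(x / scale) ** shape)          # Weibull survival
    S_inv = lambda u: scale * (-np.log(u)) ** (1.0 / shape)
    times, t, age = [], 0.0, 0.0
    while True:
        # Residual life r solves S(age + r)/S(age) = U, U ~ Uniform(0,1).
        r = S_inv(rng.uniform() * S(age)) - age
        t += r
        if t > horizon:
            return times
        times.append(t)
        age = 0.0 if rng.uniform() < p else age + r      # perfect vs minimal

print(brown_proschan_failures(p=0.3, shape=2.0, scale=1.0, horizon=5.0))
```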
63

Analysis of cross-classified data using negative binomial models

Several procedures are available for analyzing cross-classified data under the Poisson model. When data suggest the presence of "non-Poisson" variation, an alternative model is desirable; a negative binomial model is often a useful alternative. In this dissertation, methodology for analyzing data under a two-parameter negative binomial model is provided. A conditional likelihood approach is suggested to simplify estimation and inference procedures. Large sample properties of the conditional likelihood approach are derived, and these properties are examined for small samples through simulation. The suggested methodology is applied to two sets of data from ecological research studies. / Source: Dissertation Abstracts International, Volume: 51-02, Section: B, page: 0832. / Major Professor: Duane Meeter. / Thesis (Ph.D.)--The Florida State University, 1989.
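To see the kind of "non-Poisson" variation the negative binomial accommodates, here is a hedged sketch (not the dissertation's conditional likelihood method, just an ordinary two-parameter NB fit) in the mean-size parameterization, where Var = mu + mu^2/r exceeds the Poisson's Var = mu:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Overdispersed counts: NB with size r and mean mu has Var = mu + mu^2/r.
r_true, mu_true = 2.0, 5.0
y = stats.nbinom.rvs(r_true, r_true / (r_true + mu_true),
                     size=500, random_state=rng)

def neg_loglik(params):
    """Negative log-likelihood of the two-parameter NB in (log r, log mu)."""
    r, mu = np.exp(params)
    return -np.sum(stats.nbinom.logpmf(y, r, r / (r + mu)))

res = optimize.minimize(neg_loglik, x0=np.log([1.0, y.mean()]),
                        method="Nelder-Mead")
r_hat, mu_hat = np.exp(res.x)
print(f"r = {r_hat:.2f}, mu = {mu_hat:.2f}; "
      f"sample var {y.var():.2f} vs mean {y.mean():.2f}")
```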
64

Conditional bootstrap methods for censored data

We first consider the random censorship model of survival analysis. The pairs of positive random variables $(X\sb{i},Y\sb{i})$, $i = 1,\ldots,n$, are independent and identically distributed, with distribution functions $F(t) = P(X\sb{i} \le t)$ and $G(t) = P(Y\sb{i} \le t)$, and the Y's are independent of the X's. We observe only $(T\sb{i},\delta\sb{i})$, $i = 1,\ldots,n$, where $T\sb{i} = \min(X\sb{i},Y\sb{i})$ and $\delta\sb{i} = I(X\sb{i} \le Y\sb{i})$. The X's represent survival times, the Y's represent censoring times. Efron (1981) proposed two bootstrap methods for the random censorship model and showed that they are distributionally the same. Akritas (1986) established the weak convergence of the bootstrapped Kaplan-Meier estimator of F when bootstrapping is done by this method. Let us now consider bootstrapping more closely. Suppose that we wish to estimate the variance of the Kaplan-Meier estimator $\hat F(t)$. If we knew the Y's then we would condition on them by the ancillarity principle, since the distribution of the Y's does not depend on F. That is, we would want to estimate Var$\{\hat F(t)\vert Y\sb1,\ldots,Y\sb{n}\}$. Unfortunately, in the random censorship model we do not see all the Y's. If $\delta\sb{i}$ = 0 we see the exact value of $Y\sb{i}$, but if $\delta\sb{i}$ = 1 we know only that $Y\sb{i} > T\sb{i}$. Let us denote this information on the Y's by ${\cal C}$. Thus, what we want to estimate is Var$\{\hat F(t)\vert{\cal C}\}$. Efron's scheme is appropriate for estimating the unconditional variance. We propose a new bootstrap method which provides an estimate of Var$\{\hat F(t)\vert{\cal C}\}$. / In this research we show that the Kaplan-Meier estimator of F formed by the new bootstrap method has the same limiting distribution as the one formed by Efron's approach. The results of simulation studies assessing the small sample performance of the two bootstrap methods are reported. We also consider the model in which the $X\sb{i}$'s are censored by the $Y\sb{i}$'s and also by known fixed constants, and propose an appropriate bootstrap method for that model; it is a readily modified version of the new bootstrap method above. / Source: Dissertation Abstracts International, Volume: 51-12, Section: B, page: 5959. / Major Professor: Hani Doss. / Thesis (Ph.D.)--The Florida State University, 1990.
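Efron's pair-resampling scheme, one of the two equivalent methods mentioned above, is simple to sketch. The code below (ours; continuous data assumed, so ties arise only from duplicated resampled pairs) resamples the pairs $(T\sb{i},\delta\sb{i})$ and recomputes the Kaplan-Meier estimate to approximate the unconditional variance of $\hat F(t\sb0)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def kaplan_meier_F(t_obs, delta, t):
    """Kaplan-Meier estimate of F(t) = P(X <= t) from censored pairs
    (T_i, delta_i): each observed death contributes a factor
    (1 - 1/at_risk) to the survival product."""
    order = np.argsort(t_obs)
    T, D = t_obs[order], delta[order]
    n = len(T)
    s = 1.0
    for i in range(n):
        if T[i] > t:
            break
        if D[i] == 1:
            s *= 1.0 - 1.0 / (n - i)
    return 1.0 - s

# Simulated data: X ~ Exp(rate 1) censored by Y ~ Exp(rate 0.5).
n = 100
x, y = rng.exponential(1.0, n), rng.exponential(2.0, n)
t_obs, delta = np.minimum(x, y), (x <= y).astype(int)

# Efron's scheme: resample the pairs (T_i, delta_i) with replacement and
# recompute the estimator; this targets the unconditional variance.
t0, B = 1.0, 1000
boot = np.array([kaplan_meier_F(t_obs[idx], delta[idx], t0)
                 for idx in rng.integers(0, n, (B, n))])
print("bootstrap variance of the KM estimate at t0:", boot.var())
```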
65

A hypothesis test of cumulative sums of multinomial parameters

Consider $N$ times to repair, $T\sb1,T\sb2,\cdots,T\sb{N}$, from a repair time distribution function $F(\cdot)$. Let $p\sb{01},p\sb{02},\cdots,p\sb{0K}$ be $K$ proportions with $\sum\sbsp{\nu=1}{K}p\sb{0\nu} < 1$. We wish to have at least $100(\sum\sbsp{\nu=1}{i}p\sb{0\nu})$% of items repaired by time $L\sb{i}$, $1 \le i \le K$, $K \ge 2$. Denote the unknown quantity $F(L\sb{i}) - F(L\sb{i-1})$ as $p\sb{i}$, $1 \le i \le K$ (with $L\sb0 = 0$). Thus we wish to test the hypothesis $H\sb0$: $\sum\sbsp{\nu=1}{i}p\sb{\nu} \ge \sum\sbsp{\nu=1}{i}p\sb{0\nu}$ for all $1 \le i \le K$, against $H\sb1$: $\sum\sbsp{\nu=1}{i}p\sb{\nu} < \sum\sbsp{\nu=1}{i}p\sb{0\nu}$ for some $i$. / A simple procedure is to test this hypothesis with the $K$ statistics $N\sb1$, $\sum\sbsp{\nu=1}{2}N\sb{\nu},\cdots,\sum\sbsp{\nu=1}{K}N\sb{\nu}$, where $\sum\sbsp{\nu=1}{i}N\sb{\nu}$ = the number of repairs that take place on or before $L\sb{i}$, $1 \le i \le K$. Each $\sum\sbsp{\nu=1}{i}N\sb{\nu}$ is a binomial random variable with unknown parameter $\sum\sbsp{\nu=1}{i}p\sb{\nu}$. The hypothesis H$\sb0$ is rejected if any of the $\sum\sbsp{\nu=1}{i}N\sb{\nu}$ $\le$ $n\sbsp{i}{0}$, where the $n\sbsp{i}{0}$ are chosen from binomial tables. This test is shown to have several deficiencies. We construct an alternative procedure with which to test this hypothesis. / The generalized likelihood ratio test (GLRT) statistic is based on the multinomial random variable $(N\sb1,N\sb2,\cdots,N\sb{K})$, with parameter $(p\sb1,p\sb2,\cdots,p\sb{K})$. The parameter space is $\{(p\sb1,\cdots,p\sb{K})$: $p\sb{\nu} \ge 0$, $\sum\sbsp{\nu=1}{K}p\sb{\nu} \le 1\}$. / An algorithm is constructed and computer code supplied to calculate $\lambda(N)$ efficiently for any finite $N$. / For small samples, computer code is given to calculate exactly $\delta$ or a p-value for an observed value of $\lambda(N(K))$, 2 $\le$ $K$ $\le$ 5, and $K\ \le\ N\ \le\ N(K)$. / For large $N$, we apply a theorem by Feder (1968) to evaluate the asymptotic critical values and power. / The GLRT statistic, $\lambda(N)$, is shown to be approximately a union-intersection test and thus is approximated by a collection of uniformly most powerful unbiased tests of binomial parameters. The GLRT is shown empirically in the case of $K$ = 3 to have higher power than competing union-intersection tests. / Two power estimation techniques are described and compared empirically. / Reference: Feder, Paul J. (1968), "On the distribution of the log likelihood ratio test statistic when the true parameter is 'near' the boundaries of the hypothesis regions," Annals of Mathematical Statistics, 39, 2044-2055. / Source: Dissertation Abstracts International, Volume: 49-08, Section: B, page: 3283. / Major Professor: Duane A. Meeter. / Thesis (Ph.D.)--The Florida State University, 1988.
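The "simple procedure" criticized above is easy to reproduce. A sketch (with illustrative proportions of our choosing) computes each binomial critical value $n\sbsp{i}{0}$; the per-test level is controlled, but the joint behavior of the $K$ dependent statistics is not, which is one of the deficiencies motivating the GLRT:

```python
import numpy as np
from scipy import stats

# Requirements: at least 100*(p01 + ... + p0i)% repaired by time L_i.
p0 = np.array([0.5, 0.2, 0.1])            # illustrative p_{0,nu}; sum < 1
cum_req = np.cumsum(p0)                   # required F(L_i)
N, alpha = 50, 0.05

# "Simple procedure": reject H0 if any cumulative repair count falls at
# or below its binomial critical value n_i^0.
for i, q in enumerate(cum_req, start=1):
    k = int(stats.binom.ppf(alpha, N, q))          # candidate critical value
    if stats.binom.cdf(k, N, q) > alpha:           # enforce P(count <= k) <= alpha
        k -= 1
    print(f"i={i}: reject if cumulative count <= {k} "
          f"(per-test level {stats.binom.cdf(k, N, q):.4f})")
# The K tests are dependent, so the overall level is neither alpha nor
# K*alpha -- one reason the GLRT of the abstract is preferable.
```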
66

A comparison of two methods of bootstrapping in a reliability model

We consider bootstrapping in the following reliability model, which was considered by Doss, Freitag, and Proschan (1987). Available for testing is a sample of $n$ iid systems, each having the same structure of m independent components. Each system is continuously observed until it fails. For every component in each system, either a failure time or a censoring time is recorded: a failure time if the component fails before or at the time of system failure, and a censoring time otherwise. To estimate the distributions F$\sb1,\ldots$,F$\sb{\rm m}$ of the component lifelengths, one can formally compute the Kaplan-Meier estimates $\hat F\sb1,\ldots,\hat F\sb{\rm m}$. Various quantities of interest, such as the probability that a new system will survive time t$\sb0$, may then be estimated by combining $\hat F\sb1,\ldots,\hat F\sb{\rm m}$ in a suitable way. In this model, bootstrapping can be carried out in two different ways. One can resample n systems at random from the original n systems. Alternatively, one can construct artificial systems by generating independent random lifelengths from the Kaplan-Meier estimates $\hat F\sb{\rm j}$, and from those form artificial data. The two methods are distinct. We show that asymptotically, bootstrapping by either method yields correct answers. We also compare the two methods via simulation studies. / Source: Dissertation Abstracts International, Volume: 49-12, Section: B, page: 5385. / Major Professor: Hani Doss. / Thesis (Ph.D.)--The Florida State University, 1988.
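The two resampling schemes can be sketched as follows (our illustration; a series structure and exponential component lifelengths are assumed for concreteness, and a full implementation of the second method would re-censor the artificial lifelengths at the artificial system failure times to form artificial data):

```python
import numpy as np

rng = np.random.default_rng(3)

def km_curve(t_obs, delta):
    """Kaplan-Meier survival curve as (jump times, survival values);
    distinct failure times and at least one observed failure assumed."""
    order = np.argsort(t_obs)
    t_s, d_s = t_obs[order], delta[order]
    n = len(t_s)
    times, surv, s = [], [], 1.0
    for i in range(n):
        if d_s[i] == 1:
            s *= 1.0 - 1.0 / (n - i)
            times.append(t_s[i])
            surv.append(s)
    return np.array(times), np.array(surv)

def sample_km(times, surv, size, rng):
    """Inverse-transform sampling from a KM estimate; any mass beyond
    the last jump is placed at the last jump (a common convention)."""
    cdf = 1.0 - surv
    cdf[-1] = 1.0
    return times[np.searchsorted(cdf, rng.uniform(size=size))]

# Data: n series systems of m independent exponential components; every
# component is censored at the system failure time.
n, m = 50, 3
lives = rng.exponential([1.0, 1.5, 2.0], size=(n, m))
sys_fail = lives.min(axis=1, keepdims=True)
t_obs = np.minimum(lives, sys_fail)
delta = (lives <= sys_fail).astype(int)

# Method 1: resample whole systems (rows) with replacement.
idx = rng.integers(0, n, n)
t_boot, d_boot = t_obs[idx], delta[idx]

# Method 2: build artificial systems from the component-wise KM estimates.
curves = [km_curve(t_obs[:, j], delta[:, j]) for j in range(m)]
art = np.column_stack([sample_km(t, s, n, rng) for t, s in curves])
print("Method 2 estimate of P(system survives 0.5):",
      (art.min(axis=1) > 0.5).mean())
```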
67

Estimation of the number of classes of objects through presence/absence data

This research involves the estimation of the total number of classes of objects in a region by sampling sectors or quadrats. For each selected quadrat, the classes are recorded. From these data, estimates and/or confidence limits for the number of classes in the region are developed. Models which differ in their methods of sampling (simple random sampling or stratified random sampling) and in their assumptions concerning the classes are investigated. / We present three simple random sampling models: a mixture model, a Bayesian lower limit model, and a j$\sp{\rm th}$-order bootstrap bias-correction model. For the mixture model, we develop an asymptotic confidence relation for the number of classes as well as discuss optimal sampling designs. For the next model, we obtain an asymptotic Bayesian lower limit for the expected number of unobserved classes with the limit being robust to the prior on $\theta$, the number of classes. Our j$\sp{\rm th}$-order bootstrap bias-corrected estimator of $\theta$ extends the (first-order) bootstrap estimator reported by Smith and van Belle (1984). / Then we contrast stratified random sampling with simple random sampling and demonstrate that the expected number of observed classes can be greatly increased by stratification. We also extend some components of the simple random sampling models to stratified random sampling. / Source: Dissertation Abstracts International, Volume: 51-07, Section: B, page: 3444. / Major Professor: Duane Anthony Meeter. / Thesis (Ph.D.)--The Florida State University, 1990.
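The first-order bootstrap bias correction $\theta$-hat $= 2S - E\sp*(S\sp*)$ has a closed form under quadrat resampling, since a class occurring in a fraction $q\sb{j}$ of the $n$ quadrats is missed by a bootstrap resample with probability $(1 - q\sb{j})\sp{n}$. A sketch on synthetic presence/absence data (all detection rates are illustrative assumptions, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Presence/absence data: rows = sampled quadrats, columns = classes.
n_quadrats, n_classes = 40, 30
detect = rng.uniform(0.02, 0.4, n_classes)       # per-quadrat detection rates
data = rng.uniform(size=(n_quadrats, n_classes)) < detect

q = data.mean(axis=0)            # fraction of quadrats containing each class
observed = q > 0
s_obs = int(observed.sum())      # classes seen at least once

# First-order bootstrap bias correction: theta_hat = 2*S - E*(S*), which
# reduces in closed form to S + sum over observed classes of (1 - q_j)^n.
theta_hat = s_obs + ((1.0 - q[observed]) ** n_quadrats).sum()
print(f"classes observed: {s_obs}; bias-corrected estimate: {theta_hat:.1f} "
      f"(true number: {n_classes})")
```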
68

Likelihood ratio based confidence bands in survival analysis

Thomas and Grunkemeier (1975) introduced a nonparametric likelihood ratio approach to confidence interval estimation of survival probabilities based on right censored data. We construct simultaneous confidence bands for survival, cumulative hazard rate and quantile functions using this approach. The boundaries of the bands for survival functions are contained within (0,1). A procedure essentially equivalent to a bias correction is developed; the resulting increase in coverage accuracy is illustrated by an example and a simulation study. We examine various versions of likelihood ratio based (LR) confidence bands for the survival function and compare them with the Hall-Wellner band and Nair's equal precision band. We show that LR bands for the cumulative hazard rate function and the quantile function can be obtained by applying, respectively, a functional transformation and the inverse transformation of the survival function to an LR band for the survival function. At the same time, the test-based and reflected methods are shown to be valid for constructing bands for the quantile function. The various confidence bands for the quantile function are illustrated through an example. / Source: Dissertation Abstracts International, Volume: 56-08, Section: B, page: 4414. / Major Professors: Myles Hollander; Ian W. McKeague. / Thesis (Ph.D.)--The Florida State University, 1995.
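The pointwise building block of these bands is the Thomas-Grunkemeier interval itself: the constrained survival estimator tilts each hazard $d\sb{i}/n\sb{i}$ to $d\sb{i}/(n\sb{i} + \lambda)$, and the interval collects all p whose likelihood ratio statistic stays below the $\chi\sp2\sb1$ critical value. A sketch of the pointwise interval (ours; distinct failure times and at least one failure before $t\sb0$ assumed; the dissertation's simultaneous bands require different critical values):

```python
import numpy as np
from scipy import optimize, stats
from scipy.special import xlogy

def tg_interval(t_obs, delta, t0, alpha=0.05):
    """Pointwise Thomas-Grunkemeier likelihood-ratio confidence interval
    for S(t0). The constrained NPMLE replaces each hazard d_i/n_i with
    d_i/(n_i + lam), with lam tuned so the survival product equals p."""
    order = np.argsort(t_obs)
    T, D = t_obs[order], delta[order]
    n = len(T)
    mask = (D == 1) & (T <= t0)
    ni = (n - np.arange(n))[mask].astype(float)    # at-risk counts
    di = np.ones_like(ni)                          # one death per time

    def surv(lam):                                 # constrained survival
        return np.prod((ni - di + lam) / (ni + lam))

    def neg2logR(p):
        lam = optimize.brentq(lambda l: surv(l) - p,
                              -(ni - di).min() + 1e-9, 1e9)
        lt, lh = di / (ni + lam), di / ni          # tilted vs empirical hazards
        return 2.0 * np.sum(xlogy(di, lh / lt)
                            + xlogy(ni - di, (1 - lh) / (1 - lt)))

    p_hat = surv(0.0)                              # Kaplan-Meier value
    crit = stats.chi2.ppf(1 - alpha, 1)
    f = lambda p: neg2logR(p) - crit
    return p_hat, optimize.brentq(f, 1e-6, p_hat), \
        optimize.brentq(f, p_hat, 1 - 1e-6)

rng = np.random.default_rng(5)
x, c = rng.exponential(1.0, 80), rng.exponential(2.0, 80)
p_hat, lo, hi = tg_interval(np.minimum(x, c), (x <= c).astype(int), t0=1.0)
print(f"S(1.0) = {p_hat:.3f}, 95% LR interval ({lo:.3f}, {hi:.3f})")
```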
69

Statistical Shape Analysis on Manifolds with Applications to Planar Contours and Structural Proteomics

The technological advances in recent years have produced a wealth of intricate digital imaging data that is analyzed effectively using the principles of shape analysis. Such data often lies on either high-dimensional or infinite-dimensional manifolds. With computing power also now strong enough to handle this data, it is necessary to develop theoretically sound methodology to perform the analysis in a computationally efficient manner. In this dissertation, we propose approaches for doing so for planar contours and the three-dimensional atomic structures of protein binding sites. First, we adapt Kendall's definition of direct similarity shapes of finite planar configurations to shapes of planar contours under certain regularity conditions and utilize Ziezold's nonparametric view of Fréchet mean shapes. The space of direct similarity shapes of regular planar contours is embedded in a space of Hilbert-Schmidt operators in order to obtain the Veronese-Whitney extrinsic mean shape. For computations, it is necessary to use discrete approximations of both the contours and the embedding. For cases when landmarks are not provided, we propose an automated, randomized landmark selection procedure that is useful for contour matching within a population and is consistent with the underlying asymptotic theory. For inference on the extrinsic mean direct similarity shape, we consider a one-sample neighborhood hypothesis test and the use of the nonparametric bootstrap to approximate confidence regions. Bandulasiri et al. (2008) suggested using extrinsic reflection size-and-shape analysis to study the relationship between the structure and function of protein binding sites. In order to obtain meaningful results for this approach, it is necessary to identify the atoms common to a group of binding sites with similar functions and obtain proper correspondences for these atoms. We explore this problem in depth and propose an algorithm for simultaneously finding the common atoms and their respective correspondences based upon the Iterative Closest Point algorithm. For a benchmark data set, our classification results compare favorably with those of leading established methods. Finally, we discuss current directions in the field of statistics on manifolds, including a computational comparison of intrinsic and extrinsic analysis for various applications and a brief introduction to sample spaces with manifold stratification. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Summer Semester, 2011. / Date of Defense: May 26, 2011. / Intrinsic Analysis, Extrinsic Analysis, Landmark Selection, Shape Analysis, Statistics on Manifolds, Active Sites, Binding Sites, Proteomics, Planar Contours / Includes bibliographical references. / Vic Patrangenaru, Professor Directing Thesis; Washington Mio, University Representative; Jinfeng Zhang, Committee Member; Xufeng Niu, Committee Member.
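For finite planar configurations, the Veronese-Whitney extrinsic mean has a simple computational form: center and scale each configuration of complex landmarks, average the rank-one projections zz*/||z||^2, and take the leading eigenvector. A sketch of this finite-dimensional version (ours; the dissertation works with discretized contours in a Hilbert-Schmidt operator setting):

```python
import numpy as np

rng = np.random.default_rng(6)

def vw_extrinsic_mean(configs):
    """Veronese-Whitney extrinsic mean shape of planar landmark
    configurations (complex vectors). Each configuration is centered and
    scaled to a preshape z, mapped to the projection z z^*, and the mean
    shape is the top eigenvector of the averaged projections."""
    M = 0.0
    for z in configs:
        z = z - z.mean()                   # remove translation
        z = z / np.linalg.norm(z)          # remove scale
        M = M + np.outer(z, z.conj())      # rotation-invariant projection
    M /= len(configs)
    vals, vecs = np.linalg.eigh(M)         # Hermitian eigendecomposition
    return vecs[:, -1]                     # eigenvector of largest eigenvalue

# Noisy rotated/scaled copies of a triangular template.
template = np.array([0 + 0j, 1 + 0j, 0.5 + 1j])
configs = [np.exp(1j * rng.uniform(0, 2 * np.pi)) * rng.uniform(0.5, 2.0)
           * (template + 0.05 * (rng.normal(size=3) + 1j * rng.normal(size=3)))
           for _ in range(100)]
print(np.round(vw_extrinsic_mean(configs), 3))
```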
70

Inference for a nonlinear semimartingale regression model

Consider the semimartingale regression model $X(t)$ = $X(0)$ + $\int\sbsp{0}{t}$ $Y(s)\alpha(s,Z(s))$ $ds + M(t)$, where $Y, Z$ are observable covariate processes, $\alpha$ is a (deterministic) function of both time and the covariate process $Z$, and $M$ is a square integrable martingale. Under the assumption that i.i.d. copies of $X, Y, Z$ are observed continuously over a finite time interval, inference for the function $\alpha(t,z)$ is investigated. Applications of this model include hazard function estimation for survival analysis and inference for the drift function of a diffusion process. / An estimator $\hat A$ for the time integrated $\alpha(t,z)$ and a kernel estimator of $\alpha(t,z)$ itself are introduced. For $X$ a counting process, $\hat A$ reduces to the Nelson-Aalen estimator when $Z$ is not present in the model. Various forms of consistency are proved, rates of convergence and asymptotic distributions of the estimators are derived. Asymptotic confidence bands for the time integrated $\alpha(t,z)$ and a Kolmogorov-Smirnov-type test of equality of $\alpha$ at different levels of the covariate are given. / For the case $Y$ $\equiv$ 1 we introduce an estimator $\hat{\cal A}$ of the time and space integrated $\alpha(t,z)$. The asymptotic distribution of the estimator $\hat{\cal A}$ is derived under the assumption that the covariate process $Z$ is ${\cal F}\sb0$-adapted, where $({\cal F}\sb{t})$ is the filtration with respect to which $M$ is a martingale. In the counting process case this amounts to assuming that $X$ is a doubly stochastic Poisson process. Weak convergence of the appropriately normalized time and state indexed process $\hat{\cal A}$ to a Gaussian random field is shown. As an application of this result, confidence bands for the covariate state integrated hazard function of a doubly stochastic Poisson process whose intensity does not explicitly depend on time are derived. / Source: Dissertation Abstracts International, Volume: 49-03, Section: B, page: 0816. / Major Professor: Ian W. McKeague. / Thesis (Ph.D.)--The Florida State University, 1987.
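In the counting process case without covariates, $\hat A$ is the Nelson-Aalen estimator, which accumulates $d\sb{i}$/(number at risk) at each observed failure time. A minimal sketch (ours; distinct failure times assumed):

```python
import numpy as np

def nelson_aalen(t_obs, delta):
    """Nelson-Aalen estimate of the cumulative hazard A(t) from censored
    data (the no-covariate special case noted above): at each observed
    failure time, add 1/(number still at risk)."""
    order = np.argsort(t_obs)
    T, D = t_obs[order], delta[order]
    n = len(T)
    times, A, a = [], [], 0.0
    for i in range(n):
        if D[i] == 1:
            a += 1.0 / (n - i)            # distinct failure times assumed
            times.append(T[i])
            A.append(a)
    return np.array(times), np.array(A)

rng = np.random.default_rng(7)
x, c = rng.exponential(1.0, 200), rng.exponential(3.0, 200)
times, A = nelson_aalen(np.minimum(x, c), (x <= c).astype(int))
# For Exp(1) survival times the cumulative hazard is A(t) = t.
print(np.interp(1.0, times, A))           # roughly 1.0
```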
