
Progress against cancer: A new measure

Brauner, Christopher Mark, January 1995
Measures of the impact of cancer on survival are often incomplete and subject to biases that cloud the assessment of progress against the disease. A new measure, the proportion diagnosed with cancer and dead by a particular age, is proposed. This measure incorporates incidence, survival, and mortality, and improves upon other measures in several ways. The measure is examined separately by sex/race combinations for three periods of diagnosis. To calculate the measure, long-term survival must be known or estimated. Only a limited period of follow-up is available for the population studied; therefore, a model expressing survival time as a function of age and diagnosis period is sought. The accelerated failure model is considered, but is poor at predicting later survival from early experience. Estimation is accomplished by projecting short-term survival experience from early diagnosis periods to later periods, and using the accelerated failure model to predict long-term survival.
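
As a worked illustration of the proposed quantity, the sketch below computes the proportion of a cohort diagnosed with cancer and dead by a given age from an age-specific diagnosis hazard and a post-diagnosis survival curve. All rates here are hypothetical placeholders, and the computation assumes post-diagnosis survival depends only on time since diagnosis; it is one reading of the definition, not the thesis's projection-based estimation procedure.

```python
import numpy as np

ages = np.arange(86)                          # ages 0..85
incidence = 1e-4 * np.exp(0.07 * ages)        # hypothetical diagnosis hazard by age

def surv_after_dx(t):
    """Hypothetical survival probability t years after diagnosis."""
    return np.exp(-0.15 * t)

def prop_diagnosed_and_dead(a):
    """P(diagnosed with cancer by age a AND dead by age a)."""
    total, alive_undiagnosed = 0.0, 1.0
    for age in range(a):
        p_dx = alive_undiagnosed * incidence[age]     # newly diagnosed at this age
        total += p_dx * (1.0 - surv_after_dx(a - age))
        alive_undiagnosed *= 1.0 - incidence[age]
    return total

print(f"proportion diagnosed and dead by age 75: {prop_diagnosed_and_dead(75):.4f}")
```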

A test of mode existence with applications to multimodality

Minnotte, Michael C., January 1993
Modes, or local maxima, are often among the most interesting features of a probability density function. Given a set of data drawn from an unknown density, it is frequently desirable to know whether or not the density is multimodal, and various procedures have been suggested for investigating the question of multimodality in the context of hypothesis testing. Available tests, however, suffer from the encumbrance of testing the entire density at once, frequently through the use of nonparametric density estimates using a single bandwidth parameter. Such a procedure puts the investigator examining a density with several modes of varying sizes at a disadvantage. A new test is proposed involving testing the reality of individual observed modes, rather than directly testing the number of modes of the density as a whole. The test statistic used is a measure of the size of the mode, the absolute integrated difference between the estimated density and the same density with the mode in question excised at the level of the higher of its two surrounding antimodes. Samples are simulated from a conservative member of the composite null hypothesis to estimate p-values within a Monte Carlo setting. Such a test can be combined with the graphical notion of a "mode tree," in which estimated mode locations are plotted over a range of kernel bandwidths. In this way, one can obtain a procedure for examining, in an adaptive fashion, not only the reality of individual modes, but also the overall number of modes of the density. A proof of consistency of the test statistic is offered, simulation results are presented, and applications to real data are illustrated.
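
The mode-size statistic lends itself to a direct sketch: estimate the density with a kernel estimator, locate modes and antimodes, and integrate the part of the density that rises above the higher flanking antimode. The data, bandwidth, and grid below are illustrative choices, and the Monte Carlo p-value step is omitted.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 100)])

grid = np.linspace(x.min() - 1, x.max() + 1, 2000)
f = gaussian_kde(x, bw_method=0.3)(grid)      # kernel density estimate on a grid

modes = argrelextrema(f, np.greater)[0]       # indices of local maxima
antimodes = argrelextrema(f, np.less)[0]      # indices of local minima

def mode_size(m):
    """Integrated |f - f_excised| for the mode at grid index m."""
    left = [a for a in antimodes if a < m]
    right = [a for a in antimodes if a > m]
    # excision level: the higher of the two flanking antimodes (0 at boundaries)
    level = max(f[left[-1]] if left else 0.0, f[right[0]] if right else 0.0)
    lo = left[-1] if left else 0
    hi = right[0] if right else len(grid) - 1
    excess = np.clip(f[lo:hi + 1] - level, 0.0, None)   # density above the level
    return np.sum(excess) * (grid[1] - grid[0])          # simple numeric integral

for m in modes:
    print(f"mode at {grid[m]:.2f}: size statistic = {mode_size(m):.4f}")
```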

Multi-stage designs in dose-response studies

Spears, Floyd Martin, January 1993
Designs are explored that minimize the asymptotic variance of a single parameter in a dose-response study designed to estimate that parameter; an example is a design to find the dose producing a 50% response. Uncertainty about the parameter values of the dose-response curve is represented as a normal prior distribution. Because integrating the criterion over the prior distribution is analytically intractable, numerical methods are used to find good designs. The extension to multi-stage experiments is straightforward: the normal prior distribution coupled with the asymptotically normal likelihood yields a normal posterior distribution that is used to optimize the succeeding stage. Simulation results suggest that the asymptotic methods are a good reflection of the small-sample properties of the designs, even for modest-sized experiments. If the initial uncertainty about the parameters is large, two-stage designs can produce accuracy that would require a fifty percent greater sample size with a single-stage design.
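
A minimal sketch of the design criterion, assuming a two-parameter logistic dose-response curve: for a candidate set of doses, compute the asymptotic variance of the estimated ED50 via the Fisher information and the delta method, then average that criterion over draws from a normal prior. The prior, doses, and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ed50_avar(doses, n_per_dose, alpha, beta):
    """Asymptotic variance of ED50 = -alpha/beta via the delta method."""
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * doses)))
    w = n_per_dose * p * (1 - p)                   # Fisher information weights
    X = np.column_stack([np.ones_like(doses), doses])
    cov = np.linalg.inv(X.T @ (w[:, None] * X))    # asymptotic covariance of (alpha, beta)
    g = np.array([-1.0 / beta, alpha / beta**2])   # gradient of -alpha/beta
    return g @ cov @ g

# normal prior on (alpha, beta); average the criterion over prior draws
prior_draws = rng.multivariate_normal([0.0, 1.0], np.diag([0.5**2, 0.2**2]), 500)
for doses in [np.array([-2.0, 0.0, 2.0]), np.array([-1.0, 0.0, 1.0])]:
    crit = np.mean([ed50_avar(doses, 20, a, b) for a, b in prior_draws])
    print(doses, f"expected avar(ED50) = {crit:.4f}")
```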

A time series approach to quality control

Dittrich, Gayle Lynn, January 1991
One way that a process may be said to be "out-of-control" is when a cyclical pattern exists in the observations over time, so an accurate control chart is needed to signal when a cycle is present in the process. Two control charts have recently been developed for this problem: one, based on the periodogram, tests a finite number of frequencies; the other estimates a statistic that covers all frequency values. Both methods, however, fail to estimate the frequency of the cycle and are computationally difficult. A new control chart is proposed that not only covers a continuous range of frequency values but also estimates the frequency of the cycle; in addition, it is easier to understand and compute than the other two methods.
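
The thesis's chart is not specified in enough detail here to reproduce, but the ingredients can be sketched with a standard periodogram-based test: Fisher's g statistic flags a dominant cycle, and the location of the periodogram peak estimates its frequency. This is a stand-in for, not a copy of, the proposed chart.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
t = np.arange(n)
x = 0.8 * np.sin(2 * np.pi * 0.12 * t) + rng.normal(size=n)   # cycle at f = 0.12

freqs = np.fft.rfftfreq(n)[1:]                     # Fourier frequencies, excluding 0
pgram = np.abs(np.fft.rfft(x - x.mean()))[1:]**2 / n

g = pgram.max() / pgram.sum()                      # Fisher's g statistic
f_hat = freqs[pgram.argmax()]                      # estimated cycle frequency

# approximate p-value for Fisher's test (first term of the exact series)
m = len(pgram)
p_value = min(1.0, m * (1 - g)**(m - 1))
print(f"estimated frequency = {f_hat:.3f}, g = {g:.3f}, p ~ {p_value:.2g}")
```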

An automatic algorithm for the estimation of mode location and numerosity in general multidimensional data

Elliott, Mark Nathan, January 1995
Exploratory data analysis in four or more dimensions presents many challenges that are unknown in lower dimensions. The emptiness of high-dimensional space makes merely locating the regions in which data is concentrated a nontrivial task. A nonparametric algorithm has been developed that determines the number and location of modes in a multidimensional data set; it appears to be free of the major disadvantages of standard methods. The procedure can be used in data exploration and can also automatically and nonparametrically test for multimodality. The algorithm performs well in several applications. In particular, the algorithm suggests that the Fisher-Anderson iris data, which contains three species, has four modes.
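
The abstract does not spell out the algorithm itself, so as a rough stand-in the sketch below counts modes in the Fisher-Anderson iris data with mean-shift clustering, a standard nonparametric mode-seeking method. The count it returns depends on the bandwidth choice, which is exactly the sensitivity an automatic algorithm must manage.

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import load_iris

X = load_iris().data                          # 150 observations in 4 dimensions
bw = estimate_bandwidth(X, quantile=0.2)      # data-driven bandwidth guess
ms = MeanShift(bandwidth=bw).fit(X)           # each cluster center is a mode
print(f"modes found at this bandwidth: {len(ms.cluster_centers_)}")
```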

A new method for robust nonparametric regression

Wang, Ferdinand Tsihung, January 1990
Consider the problem of estimating the mean function underlying a set of noisy data. Least squares is appropriate if the error distribution of the noise is Gaussian and if there is good reason to believe that the underlying function has some particular form. But what if these two assumptions fail to hold? In this regression setting, a robust method is one that is resistant to outliers, while a nonparametric method is one that allows the data to dictate the shape of the curve (rather than choosing the best parameters for a fit from a particular family). Although it is easy to find estimators that are either robust or nonparametric, the literature reveals very few that are both. In this thesis, a new method is proposed that uses the fact that the $L_1$ norm naturally leads to a robust estimator. In spite of the $L_1$ norm's reputation for being computationally intractable, solving the least absolute deviations problem leads to a linear program with special structure. By exploiting this property over local neighborhoods, a method that is also nonparametric is obtained. Additionally, the new method generalizes naturally to higher dimensions, where to date smoothing efforts have met with little success. A proof of consistency is presented, and results from simulated data are shown.
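
In the local-constant case, the local least-absolute-deviations fit reduces to a running median, which already shows the robustness the $L_1$ norm buys; the sketch below uses that simplest instance with an invented test function and injected outliers. The thesis itself solves local linear programs rather than plain medians.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
y[::25] += 5.0                                 # inject gross outliers

def running_median(x, y, h):
    """Local-constant LAD fit: the median of responses within bandwidth h."""
    return np.array([np.median(y[np.abs(x - x0) <= h]) for x0 in x])

smooth = running_median(x, y, h=0.05)
truth = np.sin(2 * np.pi * x)
print(f"max abs error vs. truth: {np.max(np.abs(smooth - truth)):.3f}")
```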

Visual estimation of structure in ranked data

Baggerly, Keith Alan, January 1995
Ranked data arise when some group of judges is asked to rank order a set of n items according to some preference function. A judge's ranking is denoted by a vector $x = (x_1,\ldots,x_n)$, where $x_i$ is the rank assigned to item i. If we treat these vectors as points in $\Re^n$, we are led to consider the geometric structure encompassing the collection of all such vectors: the convex hull of the n! points in $\Re^n$ whose coordinates are permutations of the first n integers. These structures are known as permutation polytopes. The use of such structures for the analysis of ranked data was first proposed by Schulman [65]. Geometric constraints on the shapes of the permutation polytopes were later noted by McCullagh [56]. Thompson [77] advocated using the permutation polytopes as outlines for high-dimensional "histograms", and generalized the class of polytopes to deal with partial rankings (ties allowed). Graphical representation of ranked data can be achieved by putting varying masses at the vertices of the generalized permutation polytopes. Each face of the permutation polytope has a specific interpretation; for example, item i being ranked first. The estimation of structure in ranked data can thus be transformed into geometric (visual) problems, such as the location of faces with the highest concentrations of mass. This thesis addresses various problems in the context of such a geometric framework: the automation of graphical displays of the permutation polytopes; illustration and estimation of parametric models; and smoothing methods using duality--where every face is replaced with a point. A new way of viewing the permutation polytopes as projections of high-dimensional hypercubes is also given. The hypercubes are built as Cartesian products of the $\binom{n}{2}$ possible paired comparisons, and as such lead to methods for building rankings from collections of paired comparisons.
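
A small sketch of the setup, assuming simulated judges rather than real panel data: enumerate the n! rank vectors that form the polytope's vertices, tally the mass each observed ranking places on its vertex, and read a face's mass off the matching coordinate constraint (for example, item 1 ranked first).

```python
from collections import Counter
from itertools import permutations

import numpy as np

n = 4
vertices = list(permutations(range(1, n + 1)))     # the 4! = 24 rank vectors

rng = np.random.default_rng(4)
# 400 uniform judges plus 100 who prefer the identity ranking 1,2,3,4
judges = [tuple(int(r) for r in rng.permutation(range(1, n + 1)))
          for _ in range(400)]
judges += [tuple(range(1, n + 1))] * 100

mass = Counter(judges)                             # mass at each polytope vertex
print(f"{len(vertices)} vertices; heaviest:", mass.most_common(3))

# the face "item 1 ranked first" collects every ranking with x_1 == 1
face = sum(c for r, c in mass.items() if r[0] == 1) / len(judges)
print(f"mass on the face 'item 1 ranked first': {face:.3f}")
```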

An examination of some open problems in time series analysis

Davis, Ginger Michelle, January 2005
We investigate two open problems in the area of time series analysis. The first is developing a methodology for multivariate time series whose components are both continuous and categorical. Our specific contribution is a logistic smooth transition regression (LSTR) model whose transition variable is related to a categorical variable; this methodology is necessary for series that exhibit nonlinear behavior dependent on a categorical variable. The estimation procedure is investigated both with simulation and with an economic example. The second contribution is examining evolving structure in multivariate time series, with financial time series as the application area. Many models exist for the joint analysis of several financial instruments, such as securities, because they are not independent. These models often assume some type of constant behavior between the instruments over the period of analysis. Instead of imposing this assumption, we are interested in understanding the dynamic covariance structure of a multivariate financial time series, which provides an understanding of changing market conditions. To achieve this, we first develop a multivariate model for the conditional covariance and then examine that estimate for changing structure using multivariate techniques. Specifically, we simultaneously model individual stock data belonging to one of three market sectors and examine the behavior of the market as a whole as well as the behavior of the sectors. Our aims are detecting and forecasting unusual changes in the system, such as market collapses and outliers, and understanding portfolio diversification in multivariate financial series from different industry sectors. The motivation for this research concerns portfolio diversification: rather than making the false assumption that investments in different industry sectors are uncorrelated, we assume that the comovement of stocks within and between sectors changes with market conditions, including market crashes or collapses and common external influences.
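
A sketch of the LSTR form described above, with invented coefficients: the series mixes two autoregressive regimes through a logistic transition function whose transition variable is driven by a binary categorical state. Estimation is omitted; this only simulates the model.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 300
cat = rng.integers(0, 2, T)                  # categorical state, e.g. a policy regime
s = np.where(cat == 1, 1.0, -1.0)            # transition variable from the category
gamma, c = 4.0, 0.0                          # smoothness and location of the transition
G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))   # logistic transition function in [0, 1]

y = np.zeros(T)
for t in range(1, T):
    regime1 = 0.8 * y[t - 1]                 # AR dynamics in regime 1
    regime2 = -0.4 * y[t - 1]                # AR dynamics in regime 2
    y[t] = (1 - G[t]) * regime1 + G[t] * regime2 + rng.normal(0, 0.5)

print(f"sample variance by category: "
      f"{y[cat == 0].var():.3f} vs {y[cat == 1].var():.3f}")
```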

Robust modeling

Wojciechowski, William Conrad, January 2001
In this data-rich age, datasets often contain many observations and variables. Verifying the quality of a large dataset is a formidable task that cannot be completed by manual inspection, so methods are needed that automatically perform well even when the dataset contains anomalous data points. Robust procedures are designed to have this type of stability. A new general-purpose robust estimator is introduced. This Bayesian procedure applies Gibbs sampling and data augmentation to achieve robustness by weighting the observations in the likelihood of Bayes' theorem. Because this new estimator relies upon simulation, it has several advantages over existing robust methods. The derivation of the new method is presented, along with examples that compare it to existing procedures.
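
The abstract's device, weighting observations through data augmentation, can be sketched with the standard scale-mixture-of-normals construction of Student-t errors: a Gibbs sampler alternates between per-observation latent weights, which shrink for outlying points, and the location parameter. This is the textbook construction, not necessarily the thesis's exact estimator; the data, degrees of freedom, and flat prior on the mean are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(10, 1, 95), rng.normal(30, 1, 5)])  # 5 gross outliers
n, nu, sigma2 = len(y), 4.0, 1.0             # t degrees of freedom, error scale

mu, draws = y.mean(), []
for _ in range(2000):
    # weights | mu: Gamma full conditional; small weight for points far from mu
    w = rng.gamma((nu + 1) / 2, 2 / (nu + (y - mu)**2 / sigma2))
    # mu | weights: normal around the weighted mean (flat prior on mu)
    mu = rng.normal((w * y).sum() / w.sum(), np.sqrt(sigma2 / w.sum()))
    draws.append(mu)

print(f"posterior mean of mu: {np.mean(draws[500:]):.2f} "
      f"(raw sample mean {y.mean():.2f})")
```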

Robust empirical likelihood

Glenn, Nancy Louise, January 2002
This research introduces a new nonparametric technique: robust empirical likelihood. Robust empirical likelihood employs the empirical likelihood method to compute robust parameter estimates and confidence intervals. The technique uses constrained optimization to solve a robust version of the empirical likelihood function, thus allowing data analysts to estimate parameters accurately despite any potential contamination. Empirical likelihood combines the utility of a parametric likelihood with the flexibility of a nonparametric method. Parametric likelihoods are valuable because they have a wide variety of uses; in particular, they are used to construct confidence intervals. Nonparametric methods are flexible because they produce accurate results without requiring knowledge about the data's distribution. Robust empirical likelihood's applications include regression models, hypothesis testing, and all areas that use likelihood methods.
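
The baseline computation that robust empirical likelihood builds on can be sketched for a mean: maximize $\prod_i np_i$ subject to the weights summing to one and $\sum_i p_i(x_i - \mu) = 0$, which reduces to a one-dimensional root-finding problem for a Lagrange multiplier. The robust version modifies this optimization; the sketch below is only the plain method on simulated data.

```python
import numpy as np
from scipy.optimize import brentq

def neg2_log_el_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    z = x - mu
    # find lambda with sum(z / (1 + lambda z)) = 0, keeping all 1 + lambda z > 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

rng = np.random.default_rng(7)
x = rng.normal(5, 2, 100)
# compare to the chi-square(1) 95% cutoff of 3.84 to test H0: mean = 5
print(f"-2 log ELR at mu = 5: {neg2_log_el_ratio(x, 5.0):.3f}")
```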
