41

New Algorithms in Rigid-Body Registration and Estimation of Registration Accuracy

Hedjazi Moghari, Mehdi 28 September 2008
Rigid-body registration is an important research area with major applications in computer-assisted and image-guided surgery. In these surgeries, the relationship between preoperative and intraoperative images of a patient must often be established. This relationship is computed through a registration process, which finds the transformation parameters that map point fiducials measured on the patient's anatomy to a preoperative model. Because of point measurement error introduced by medical measurement instruments, the estimated registration parameters are imperfect, which reduces the accuracy of the performed registrations. Medical measurement instruments often perturb the points collected from the patient's anatomy with heterogeneous noise. If the noise characteristics are known, they can be incorporated into the registration algorithm to estimate the registration parameters and their variances more reliably and accurately. Current rigid-body registration techniques are primarily based on the well-known Iterative Closest Point (ICP) algorithm; such techniques are sensitive to noise in the data sets and to initial alignment errors, and the literature offers no analytical method for estimating the accuracy of a registration in the presence of heterogeneous noise. To alleviate these problems, we propose and validate several novel registration techniques based on the Unscented Kalman Filter (UKF), a filter generally employed to analyze nonlinear systems corrupted by additive heterogeneous Gaussian noise. First, we propose a new registration algorithm that fits two data sets in the presence of arbitrary Gaussian noise when the corresponding points between the two data sets are known. Next, we extend this algorithm to surface-based registration, where point correspondences are not available but the data sets are roughly aligned. We then propose a UKF-based solution to the multi-body point- and surface-based registration problem. Finally, the outputs of the proposed UKF registration algorithms are used to estimate the accuracy of the performed registration: for the first time, derivations are presented that estimate the distribution of registration error at a target in the presence of arbitrary Gaussian noise.
Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2008.
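The core idea — treating the six rigid-body parameters as the state of a UKF and feeding it the noisy fiducials one at a time, each with its own noise covariance — can be sketched briefly. This is a minimal illustration assuming the filterpy library and known point correspondences; the data, noise levels, and tuning constants below are invented for the example, not taken from the thesis.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def euler_to_rot(rx, ry, rz):
    """Rotation matrix from Euler angles (ZYX convention)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def fx(x, dt):
    return x                                       # the pose is static: no dynamics

def hx(x, point=None):
    return euler_to_rot(*x[:3]) @ point + x[3:]    # predicted fiducial position

rng = np.random.default_rng(0)
model = rng.uniform(0, 100, size=(20, 3))          # preoperative model fiducials
true_R, true_t = euler_to_rot(0.1, -0.05, 0.2), np.array([5.0, -3.0, 1.0])
noise_sd = 0.5 * (1 + rng.uniform(size=20))        # heterogeneous, per-point noise
measured = model @ true_R.T + true_t + noise_sd[:, None] * rng.standard_normal((20, 3))

sigmas = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=1.0, hx=hx, fx=fx, points=sigmas)
ukf.x = np.zeros(6)                                # rough initial alignment
ukf.P = np.diag([0.5] * 3 + [100.0] * 3)
ukf.Q = np.eye(6) * 1e-9                           # parameters do not evolve

for z, sd, p in zip(measured, noise_sd, model):    # process fiducials sequentially
    ukf.predict()
    ukf.update(z, R=np.eye(3) * sd**2, point=p)    # known per-point noise covariance

print("estimated [rx ry rz tx ty tz]:", ukf.x)
print("parameter variances:", np.diag(ukf.P))
```

The filter's parameter covariance (`ukf.P` above) is the kind of output the thesis builds on when estimating the distribution of registration error at a target.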
42

Comparison of Two Samples by a Nonparametric Likelihood-Ratio Test

Barton, William H. 01 January 2010
In this dissertation we present a novel computational method, together with its software implementation, for comparing two samples by a nonparametric likelihood-ratio test whose basis of comparison is a mean-type hypothesis. The software is written in the R language [4]. The two samples are assumed to be independent; their distributions, which are assumed unknown, may be discrete or continuous, and the samples may be uncensored, right-censored, left-censored, or doubly censored. Two programs are offered. The first covers the case of a single mean-type hypothesis and calculates an approximate p-value based on the premise that -2 log(likelihood ratio) is asymptotically distributed as χ²(1). The second covers the case of multiple mean-type hypotheses and calculates an approximate p-value for p hypotheses based on the premise that -2 log(likelihood ratio) is asymptotically distributed as χ²(p). In addition, we present a proof concerning the use of a hazard-type hypothesis as the basis of comparison, showing that -2 log(likelihood ratio) is asymptotically distributed as χ²(1) for this hypothesis as well. The R programs can be downloaded free of charge from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org under the package name emplik2; R itself is available free of charge at the same site.
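The calibration step the two programs share — converting -2 log(likelihood ratio) into a p-value via its asymptotic χ² distribution — is a one-liner. A sketch with invented log-likelihood values (the real maximization is done by the emplik2 package):

```python
from scipy.stats import chi2

# Illustrative values: constrained and unconstrained maximized
# empirical log-likelihoods for a single mean-type hypothesis.
loglik_constrained = -104.2
loglik_unconstrained = -101.9

test_stat = -2 * (loglik_constrained - loglik_unconstrained)
p_value = chi2.sf(test_stat, df=1)   # df = number of hypotheses (here 1)
print(f"-2 log LR = {test_stat:.3f}, p = {p_value:.4f}")
```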
43

Jackknife Empirical Likelihood Inferences for the Skewness and Kurtosis

Zhang, Yan 10 May 2014
Skewness and kurtosis are measures used to describe the shape characteristics of distributions. In this thesis, we examine interval estimates of the skewness and kurtosis using jackknife empirical likelihood (JEL), adjusted JEL, extended JEL, traditional bootstrap, percentile bootstrap, and BCa bootstrap methods. The limiting distribution of the JEL ratio is the standard chi-squared distribution. A simulation study compares the methods in terms of coverage probabilities and interval lengths under the standard normal and exponential distributions; the proposed adjusted JEL and extended JEL perform better than the other methods. Finally, we illustrate the proposed JEL methods and the bootstrap methods on three real data sets.
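The building block of every JEL variant is the vector of jackknife pseudo-values, to which empirical likelihood for a mean is then applied. A minimal sketch of the pseudo-value construction, with an invented exponential sample:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def jackknife_pseudovalues(x, stat):
    """Pseudo-values n*T(x) - (n-1)*T(x_{-i}); JEL treats these as
    approximately i.i.d. and applies empirical likelihood to their mean."""
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * loo

rng = np.random.default_rng(1)
x = rng.exponential(size=200)
pv_skew = jackknife_pseudovalues(x, skew)
pv_kurt = jackknife_pseudovalues(x, kurtosis)   # scipy's excess kurtosis
print(pv_skew.mean(), pv_kurt.mean())           # jackknife point estimates
```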
44

The Introduction of Online Patient Records - An Analysis of the Attitudes of Vårdförbundet's Members

Nilsson Hörnell, Sara, Ström, Jonas January 2014
This thesis presents a survey of the attitudes of Vårdförbundet's members toward online patient records. Its purpose is to examine the relationship between the members' attitude toward online records as a reform and how they believe online records will affect patients and their own work. A model, developed through exploratory and confirmatory factor analysis, expresses this relationship as a structural equation model. The survey presents the relationship by the members' county of affiliation: the counties are grouped by the share of members negative toward online records as a reform, and a cluster analysis distinguishes four groups. The results show an association between the attitude toward online records as a reform and beliefs about how online records will affect patients and the members' own work. In all groups, the questions concerning patients and how they are affected by online records relate most strongly to the attitude toward the reform. The differences between the county groups appear in five individual questions: "Will your way of writing in the record change with Journal på nätet?", "Will patients be harmed by the information they obtain through access to their records?", "Will unauthorized parties be able to access patient data through Journal på nätet?", "Does Journal på nätet produce a more informed patient?", and "Does patients' adherence to treatment increase with access to Journal på nätet?"
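The county-grouping step can be illustrated with an off-the-shelf clustering routine. This sketch uses hypothetical shares of negative responses for 21 counties and scikit-learn's KMeans; the study's actual four groups were derived from its survey data, not from these numbers.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-county share of members negative toward the reform.
share_negative = rng.uniform(0.1, 0.6, size=21).reshape(-1, 1)  # 21 Swedish counties

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(share_negative)
for g in range(4):
    members = np.where(km.labels_ == g)[0]
    print(f"group {g}: counties {members}, mean share {share_negative[members].mean():.2f}")
```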
45

Beyond Geometric Models: Multivariate Statistical Ecology with Likelihood Functions

Walker, Steven C. 23 February 2011
Ecological problems often require multivariate analyses. Ever since Bray and Curtis (1957) drew an analogy between Euclidean distance and community dissimilarity, most multivariate ecological inference has been based on geometric ideas. For example, ecologists routinely use distance-based ordination methods (e.g. multidimensional scaling) to enhance the interpretability of multivariate data, and distance-based diversity indices that account for functional differences between species have more recently come into routine use. But in most other areas of science, inference is based on Fisher's (1922) likelihood concept, and statisticians view likelihood as an advance over purely geometric approaches. Nevertheless, likelihood-based reasoning is rare in multivariate statistical ecology. Using ordination and functional diversity as case studies, my thesis addresses the questions: Why is likelihood rare in multivariate statistical ecology? Can likelihood be of practical use in multivariate analyses of real ecological data? Should the likelihood concept replace multidimensional geometry as the foundation for multivariate statistical ecology? I trace the history of quantitative plant ecology to argue that the geometric focus of contemporary multivariate statistical ecology is a legacy of an early 20th-century debate on the nature of plant communities. Using the Rao-Blackwell and Lehmann-Scheffé theorems, both of which depend on the likelihood concept, I show how to reduce bias and sampling variability in estimators of functional diversity. I also show how to use likelihood-based information criteria to select among ordination methods, and I use computationally intensive Markov chain Monte Carlo methods to expand the range of likelihood-based ordination procedures that are computationally feasible. Finally, drawing on philosophical ideas from formal measurement theory, I argue that a likelihood-based multivariate statistical ecology outperforms the geometry-based alternative by providing a stronger connection between analysis and the real world. Likelihood should be used more often in multivariate ecology.
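One of the case studies — using likelihood-based information criteria to select among ordinations — can be illustrated with probabilistic PCA, whose per-sample log-likelihood scikit-learn exposes directly. The data and the rough parameter count below are illustrative assumptions, not the thesis's analysis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.poisson(3.0, size=(60, 10)).astype(float)   # invented site-by-species table

n, d = X.shape
for k in range(1, 5):                               # candidate numbers of ordination axes
    pca = PCA(n_components=k).fit(X)
    total_ll = pca.score(X) * n                     # score() is mean log-likelihood (PPCA)
    n_par = d + (d * k - k * (k - 1) // 2) + 1      # mean, loadings, noise variance
    aic = 2 * n_par - 2 * total_ll
    print(f"{k} axes: logL = {total_ll:.1f}, AIC = {aic:.1f}")
```

The axis count with the lowest AIC is the likelihood-preferred ordination dimensionality, replacing the usual visual scree-plot judgment with an explicit model comparison.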
46

Longitudinal Data Analysis with Composite Likelihood Methods

Li, Haocheng January 2012
Longitudinal data arise commonly in many fields, including public health studies and survey sampling, and valid inference methods for such data are of great importance in scientific research. In longitudinal studies, data collection is typically designed to follow all the information of interest on individuals at scheduled times, and the analysis usually focuses on how the data change over time and how they are associated with risk factors or covariates. Various statistical models and methods have been developed over the past few decades; however, these methods can become invalid when the data possess additional features.

First, incompleteness of the data presents considerable complications for standard modeling and inference methods. Although we hope each individual completes all of the scheduled measurements, missing observations occur commonly in longitudinal studies, and it is well documented that biased results can arise if this feature is not properly accounted for in the analysis. A large body of methods exists for handling missingness in either response components or covariate variables, but relatively little attention has been directed to missingness in both responses and covariates simultaneously, largely because of the substantially increased complexity of modeling and the attendant computational difficulties. In Chapters 2 and 3 of the thesis, I develop methods to handle incomplete longitudinal data using the pairwise likelihood formulation. The proposed methods can handle longitudinal data with missing observations in both response and covariate variables, and a unified framework accommodates various types of missing data patterns. The performance of the proposed methods is carefully assessed under a variety of circumstances, with particular attention to efficiency and robustness, and longitudinal survey data from the National Population Health Study are analyzed with the proposed methods.

The other difficulty in longitudinal data is model selection. Incorporating a large number of irrelevant covariates in a model can create computational, interpretive, and predictive difficulties, so parsimonious models are typically desirable. The penalized likelihood method is commonly employed for this purpose, but applying it in longitudinal studies may involve high-dimensional integrals that are computationally expensive. I propose an alternative based on the composite likelihood formulation, which requires only a partial structure of the correlated data, such as marginal or pairwise distributions; this strategy offers modeling tractability and computational economy in model selection. In Chapter 4 of this thesis, I therefore propose a composite likelihood approach with a penalty function to handle model selection. In practice, the model selection problem involves not only choosing proper covariates for the regression predictor but also the components of the random effects, and the specification of the random effects distribution can be crucial to the validity of statistical inference; Chapter 4 accordingly also discusses selecting both covariates and random effects, as well as misspecification of the random effects.
Chapter 5 of this thesis addresses the joint features of missingness and model selection, for which I propose a specific composite likelihood method. A key advantage of the approach is that the inference procedure requires neither explicit assumptions about the missingness process nor estimation of nuisance parameters.
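The pairwise formulation at the heart of the thesis replaces the full joint likelihood with a sum of bivariate log-densities over all within-subject pairs. A minimal sketch under an exchangeable-correlation Gaussian working model, with invented data:

```python
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def pairwise_loglik(y_subjects, mu, sigma2, rho):
    """Pairwise (composite) log-likelihood: sum bivariate Gaussian
    log-densities over all pairs of repeated measures within each subject."""
    cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
    ll = 0.0
    for y in y_subjects:                    # subjects may have unequal visit counts
        for i, j in combinations(range(len(y)), 2):
            ll += multivariate_normal.logpdf([y[i], y[j]], mean=[mu, mu], cov=cov)
    return ll

# Illustrative data: three subjects with different numbers of visits.
data = [np.array([1.2, 0.8, 1.5]),
        np.array([0.3, 0.9]),
        np.array([1.1, 1.4, 0.7, 1.0])]
print(pairwise_loglik(data, mu=1.0, sigma2=0.25, rho=0.4))
```

Because only bivariate margins are specified, the sum stays well defined even when some visits are missing: absent pairs simply drop out, which is one reason the formulation handles incomplete data gracefully.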
47

Novel Mathematical Aspects of Phylogenetic Estimation

Fischer, Mareike January 2009
In evolutionary biology, genetic sequences carry with them a trace of the underlying tree that describes their evolution from a common ancestral sequence, and inferring this underlying tree is challenging. We investigate some curious cases in which different methods, such as Maximum Parsimony, Maximum Likelihood, and distance-based methods, lead to different trees. Moreover, we show that in some cases ancestral sequences can be reconstructed more reliably when some of the leaves of the tree are ignored, even if those leaves are close to the root. While all these findings expose problems inherent to either the assumed model or the applied method, sometimes an inaccurate tree reconstruction is simply due to insufficient data. This is particularly problematic when a rapid divergence event occurred in the distant past. We analyze an idealized form of this problem and determine a tight lower bound on the growth rate of the sequence length required to resolve the tree (independent of any particular branch lengths). Finally, we investigate the problem of intermediates in the fossil record. The extent of 'gaps' (missing transitional stages) has been used to argue against gradual evolution from a common ancestor. We take an analytical approach and demonstrate why, under certain sampling conditions, we may not expect intermediates to be found.
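To make the Maximum Parsimony criterion concrete: Fitch's algorithm scores a single alignment column on a fixed tree by counting forced state changes. A minimal sketch on a hypothetical four-taxon tree (the tree and states are invented for illustration):

```python
def fitch(node):
    """Fitch's small-parsimony algorithm for one alignment column.
    A node is either a leaf state set like {'A'} or a (left, right) tuple.
    Returns (candidate state set, parsimony score)."""
    if isinstance(node, set):
        return node, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    common = ls & rs
    if common:
        return common, lc + rc
    return ls | rs, lc + rc + 1   # disagreement forces one substitution

# Tree ((A,C),(A,G)) with one observed site per leaf.
tree = (({'A'}, {'C'}), ({'A'}, {'G'}))
states, score = fitch(tree)
print(states, score)              # {'A'} with a parsimony score of 2
```

Maximum Parsimony repeats this score over all sites and all candidate trees and keeps the tree with the smallest total; the thesis's examples show how that optimum can differ from the Maximum Likelihood and distance-based answers.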
48

New integer-valued autoregressive and regression models with state-dependent parameters

Triebsch, Lea Kartika January 2008
Also published as: doctoral dissertation, Technische Universität Kaiserslautern, 2008.
49

Covariate measurement error methods in failure time regression

Xie, Xiangwen, January 1997
Thesis (Ph.D.), University of Washington, 1997. Vita. Includes bibliographical references (leaves [76]-79).
50

Marginal regression modelling of weakly dependent data

Lumley, Thomas, January 1998
Thesis (Ph.D.), University of Washington, 1998. Vita. Includes bibliographical references (p. [100]-109).
