31

The effectiveness of citric acid as an adjunct to surgical reattachment procedures in humans a thesis submitted in partial fulfillment ... periodontics ... /

Mason, William E. January 1984 (has links)
Thesis (M.S.)--University of Michigan, 1984.
33

Biometry of the crystalline lens during accommodation

Rabie, E. P. January 1986 (has links)
No description available.
34

Proof of concept Iraqi enrollment via voice authentication project

Lee, Samuel K. 09 1900 (has links)
This thesis documents the findings of the Naval Postgraduate School (NPS) research team's efforts on the initial phase of the Iraqi Enrollment via Voice Authentication Project (IEVAP). The IEVAP is an Office of the Secretary of Defense-sponsored research project commissioned to study the feasibility of speaker verification technology in support of Global War on Terrorism security requirements. The intent of this project is to contribute toward the future employment of speech technologies in a variety of coalition military operations by developing a pilot proof-of-concept system that integrates speaker verification and automated speech recognition technology into a mobile platform to enhance warfighting capabilities. In this first phase of the IEVAP, NPS developed, with the assistance of Nuance Communications, Inc. and the Defense Language Institute, a bilingual (English and Jordanian-Arabic) speech application that demonstrates the viability of speaker verification technology for use in operations in Iraq. Additionally, NPS conducted a test to assess the accuracy claim of Nuance's packaged speaker verification application, Nuance Caller Authentication 1.0 (for North American English). The NPS test consisted of 68 speaker enrollments and 411 speaker verification attempts. Upon completion of the test, NPS conducted a single data-point analysis yielding a system accuracy of 95.87%.
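The 95.87% figure above is a single point estimate from 411 verification attempts; a binomial confidence interval gives a sense of its precision. A small sketch (the Wilson interval is our addition, not part of the NPS analysis; the number of correct verifications is back-calculated from the quoted accuracy):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Counts from the abstract: 411 attempts at 95.87% accuracy
trials = 411
successes = round(0.9587 * trials)  # back-calculated: 394 correct verifications
lo, hi = wilson_interval(successes, trials)
print(f"accuracy = {successes/trials:.4f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Even with a fairly large test set, the interval spans roughly 93% to 97%, which is worth keeping in mind when comparing against a vendor's accuracy claim.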
35

Bioinformatics-driven development of a queryable cardiometabolic database and its application in a biological setting

Hendry, Liesl Mary January 2017 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg in fulfilment of the requirements for the degree of Doctor of Philosophy. June 2017, Johannesburg / As sequencing and genotyping technologies advance, larger and more complex sets of biological data are being produced. Databases can be used to efficiently store and manage these data. Typically, publicly available datasets are accessed through web browsers that offer a user-friendly interface to a database, making complex queries simple to execute. However, research project-specific data are not commonly stored in this way. In this research, a database (designed in MySQL) and an accompanying interface (developed using PHP, HTML and CSS) were designed for the storage and querying of the quality-controlled data from the current project, using Metabochip-genotyped Birth to Twenty (Bt20) cohort participants and their female caregivers. Users can easily access the data to generate summary statistics on the phenotype data and download phenotype, single nucleotide polymorphism (SNP) annotation and association analysis data that match user-supplied criteria. Some of the data from the database were used to investigate the genetics of blood pressure (BP) in black South African individuals. Hypertension is a major risk factor for cardiovascular diseases (CVDs). BP variation is known to have a genetic component, but genetic studies in indigenous Africans have been limited. Association analysis, carried out in a merged sample of caregivers and participants, pointed to novel regions of interest in the NOS1AP (DBP and SBP), MYRF (SBP) and POC1B (SBP) genes and two intergenic regions (DACH1|LOC440145 (DBP and SBP) and INTS10|LPL (SBP)). Two SNPs in the MYRF gene met the calculated "array-wide" significance threshold for multiple testing (p < 6.7×10⁻⁷ for the merged dataset). Genotype imputation is a useful addition to association studies to increase the SNP panel for association testing.
An investigation into the efficiency of imputation in this dataset using a mixed population reference panel was carried out. Imputation was achieved with high confidence in all genes, but a more detailed view of the region was only seen in NOS1AP (DBP and SBP in both the merged and female caregiver datasets) and POC1B (Bt20 participant dataset only). Overall, the research contributed a useful tool for the efficient management of project-specific biological data. The analysis and genotype imputation, which is a promising tool in future studies in this or other African datasets, also provided some insight into the genetics of blood pressure in black South Africans with further functional and replication studies in larger samples required to confirm and explain the findings. / MT 2017
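The "array-wide" threshold quoted in the abstract (p < 6.7×10⁻⁷) has the shape of a Bonferroni-style correction, alpha divided by the number of effective tests. A back-of-envelope sketch (the implied test count is our back-calculation, not a figure taken from the thesis):

```python
def bonferroni(alpha, n_tests):
    """Per-test significance threshold controlling the family-wise error rate."""
    return alpha / n_tests

alpha = 0.05
threshold = 6.7e-7  # the "array-wide" threshold quoted in the abstract
implied_tests = alpha / threshold
print(f"implied effective number of tests: {implied_tests:,.0f}")      # ~74,627
print(f"threshold for 74,627 tests: {bonferroni(alpha, 74_627):.1e}")  # 6.7e-07
```

A threshold of this size is consistent with testing on the order of tens of thousands of effectively independent Metabochip SNPs rather than the genome-wide 5×10⁻⁸ convention.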
36

Parma: Applications of Vector-Autoregressive Models to Biological Inference with an Emphasis on Procrustes-Based Data

Unknown Date (has links)
Many phenomena in ecology, evolution, and organismal biology relate to how a system changes through time. Unfortunately, most of the statistical methods common in these fields represent samples as static scalars or vectors. Since variables in temporally dynamic systems do not have stable values, this representation is not ideal. Differential equation and basis function representations provide alternative systems for description, but they have drawbacks of their own. Differential equations are typically outside the scope of statistical inference, and basis function representations rely on functions that relate to the original data only in qualitative appearance, not in any property of the original system. In this dissertation, I propose that vector autoregressive-moving average (VARMA) and vector autoregressive (VAR) processes can represent temporally dynamic systems. Under this strategy, each sample is a time series instead of a scalar or vector. Unlike differential equations, these representations facilitate statistical description and inference, and, unlike basis function representations, these processes directly relate to an emergent property of dynamic systems: their cross-covariance structure. In the first chapter, I describe how VAR representations for biological systems lead both to a metric for the difference between systems, the Euclidean process distance, and to a statistical test of whether two time series may have originated from a single VAR process, the likelihood ratio test for a common process. Using simulated time series, I demonstrate that the likelihood ratio test for a common process has a true Type I error rate close to the pre-specified nominal error rate, regardless of the number of subseries in the system or the order of the processes. Further, using the Euclidean process distance as a measure of difference, I establish power curves for the test using logistic regression.
The test has a high probability of rejecting a false null hypothesis, even for modest differences between series. In addition, I illustrate that if two competitors follow the Lotka-Volterra equations for competition with some additional white noise, the system deviates from VAR assumptions. Yet the test can still differentiate between a simulation based on these equations in which the constraints on the system change and a simulation in which the constraints do not change. Although the Type I error rate is inflated in this scenario, the degree of inflation does not appear to be larger when the system deviates more noticeably from model assumptions. In the second chapter, I investigate the performance of the likelihood ratio test for a common process with shape trajectory data. Shape trajectories are an extension of geometric morphometric data in which a sample is a set of temporally ordered shapes rather than a single static shape. Like all geometric morphometric data, each shape in a trajectory is inherently high-dimensional. Since the number of parameters in a VAR representation grows quadratically with the number of subseries, shape trajectory data will often require dimension reduction before a VAR representation can be estimated, but the effects of this reduction on subsequent inferences remain unclear. In this study, I simulated shape trajectories based on the movements of roundworms. I then reduced the number of variables describing each shape using principal component analysis. Based on these lower-dimensional representations, I estimated the likelihood ratio test's Type I error rate and power with the simulated trajectories. In addition, I applied the same workflow to an empirical dataset of women walking (originally from Morris13), with varying amounts of preprocessing before applying the workflow.
The likelihood ratio test's Type I error rate was mildly inflated with the simulated shape trajectories, but the test had a high probability of rejecting false null hypotheses. Without preprocessing, the likelihood ratio test for a common process had a highly inflated Type I error rate with the empirical data, but when the sampling density is lowered and the number of cycles is standardized within a comparison, the degree of inflation becomes comparable to that of the simulated shape trajectories. These preprocessing steps do not appear to negatively impact the test's power. Visualization is a crucial step in geometric morphometric studies, but there are currently few, if any, methods to visualize differences in shape trajectories. To address this absence, I propose an extension of the classic vector-displacement diagram. In this new procedure, the VAR representations of two trajectories' processes generate two simulated trajectories that share the same shocks. Then, a vector-displacement diagram compares the simulated shapes at each time step. The set of all diagrams then illustrates the difference between the trajectories' processes. I assessed the validity of this procedure using two simulated shape trajectories, one based on the movements of roundworms and the other on the movements of earthworms. The assessment produced mixed results: some diagrams show comparisons between shapes that are similar to those in the original trajectories, but others do not. Of particular note, the diagrams show a bias towards whichever trajectory's process was used to generate pseudo-random shocks. This implies that the shocks to the system are just as crucial a component of a trajectory's behavior as the VAR model itself. Finally, in the third chapter I discuss a new R library, iPARMA, for studying dynamic systems and representing them as VAR and VARMA processes.
Since certain processes can have multiple VARMA representations, the routines in this library place an emphasis on the reverse echelon format: for every process, there is only one VARMA model in reverse echelon format. The routines in iPARMA cover a diverse set of topics, but they generally fall into one of four categories: simulation and study, model estimation, hypothesis testing, and visualization methods for shape trajectories. Within the chapter, I discuss highlights and features of key routines' algorithms, as well as how they differ from analogous routines in the R package MTS. In many regards, this dissertation is foundational, so it provides a number of avenues for future research. One major area for further work involves alternative ways to represent a system as a VAR or VARMA process. For example, the parameter estimates in a VAR or VARMA model could depict a process as a point in parameter space. Other potentially fruitful areas include the extension of representational applications to other families of time series models, such as co-integrated models, and altering the generalized Procrustes algorithm to better suit shape trajectories. Based on these extensions, it is my hope that statistical inference based on stochastic process representations will help to expand what systems biologists are able to study and what questions they are able to answer. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2017. / May 3, 2017. / Function-valued Trait, Geometric morphometrics, Shape trajectory, Stochastic process, Time series analysis, Vector autoregressive-moving average (VARMA) model / Includes bibliographical references. / Dennis E. Slice, Professor Directing Dissertation; Paul M.
Beaumont, University Representative; Peter Beerli, Committee Member; Anke Meyer-Baese, Committee Member; Sachin Shanbhag, Committee Member.
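The "Euclidean process distance" in the abstract above can be read as a Euclidean distance between fitted parameter sets of two VAR processes. A toy sketch under that reading (pure Python, 2-variable VAR(1); the coefficient matrices are our own illustrative values, not figures from the dissertation):

```python
import math
import random

def simulate_var1(A, n, seed=0):
    """Simulate a 2-variable VAR(1): x_t = A x_{t-1} + white noise."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    series = []
    for _ in range(n):
        noise = [rng.gauss(0, 1), rng.gauss(0, 1)]
        x = [A[0][0] * x[0] + A[0][1] * x[1] + noise[0],
             A[1][0] * x[0] + A[1][1] * x[1] + noise[1]]
        series.append(x)
    return series

def euclidean_param_distance(A, B):
    """Euclidean distance between two VAR coefficient matrices."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(A, B) for a, b in zip(ra, rb)))

A1 = [[0.5, 0.1], [0.0, 0.4]]  # illustrative stable coefficients
A2 = [[0.6, 0.1], [0.0, 0.4]]  # same process with one perturbed entry
print(euclidean_param_distance(A1, A2))  # ≈ 0.1
```

In the dissertation's setting one would fit the coefficient matrices from observed trajectories before taking the distance; here they are specified directly to keep the sketch short.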
37

Developing SRSF Shape Analysis Techniques for Applications in Neuroscience and Genomics

Unknown Date (has links)
This dissertation focuses on exploring the capabilities of the SRSF statistical shape analysis framework through various applications. Each application gives rise to a specific mathematical shape analysis model. The theoretical investigation of these models, driven by real data problems, gives rise to new tools and theorems necessary to conduct sound inference in the space of shapes. From a theoretical standpoint, robustness results are provided for the estimation of model parameters, and an ANOVA-like statistical testing procedure is discussed. The projects resulted from collaboration between theoretical and application-focused research groups: the Shape Analysis Group at the Department of Statistics at Florida State University, the Center of Genomics and Personalized Medicine at FSU, and FSU's Department of Neuroscience. As a consequence, each project consists of two aspects: the theoretical investigation of the mathematical model and the application driven by a real-life problem. The application components are similar from the data modeling standpoint: in each case the problem is set in an infinite-dimensional space whose elements are experimental data points that can be viewed as shapes. The three projects are: "A new framework for Euclidean summary statistics in the neural spike train space". This project provides a statistical framework for analyzing spike train data and a new noise removal procedure for neural spike trains. The framework adapts the SRSF elastic metric in the space of point patterns to provide a new notion of distance. "SRSF shape analysis for sequencing data reveal new differentiating patterns". This project uses a shape interpretation of next generation sequencing data to provide a new view of exon-level gene activity. The novel approach reveals differential gene behavior that cannot be captured by state-of-the-art techniques. Code is available online in a GitHub repository. "How changes in shape of nucleosomal DNA near TSS influence changes of gene expression". The result of this work is a novel shape analysis model explaining the relation between changes in the arrangement of DNA on nucleosomes and changes in differential gene expression. / A Dissertation submitted to the Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2017. / October 30, 2017. / Functional Data Analysis, Genomics, Neuroscience, Next Generation Sequencing, Shape Analysis, Statistics / Includes bibliographical references. / Wei Wu, Professor Co-Directing Dissertation; Richard Bertram, Professor Co-Directing Dissertation; Anuj Srivastava, University Representative; Peter Beerli, Committee Member; Washington Mio, Committee Member; Giray Ökten, Committee Member.
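The SRSF (square-root slope function) transform at the core of the framework above maps a curve f to q(t) = f'(t)/√|f'(t)|, under which the elastic metric reduces to the ordinary L2 distance between q-functions. A minimal numerical sketch (the forward-difference scheme and the test function f(t) = t² are our own illustrative choices):

```python
import math

def srsf(samples, dt):
    """Square-root slope function q(t) = f'(t) / sqrt(|f'(t)|),
    approximated by forward differences on a uniformly sampled curve."""
    q = []
    for a, b in zip(samples, samples[1:]):
        slope = (b - a) / dt
        q.append(math.copysign(math.sqrt(abs(slope)), slope))
    return q

def l2_distance(q1, q2, dt):
    """Discrete L2 distance between two SRSFs (the elastic distance proxy)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q1, q2)) * dt)

# Illustrative: f(t) = t^2 on [0, 1] has f'(t) = 2t, so q(t) ≈ sqrt(2t)
n, dt = 100, 0.01
f = [(i * dt) ** 2 for i in range(n + 1)]
q = srsf(f, dt)
```

For instance, q at t = 0.5 comes out close to √1.01 ≈ 1.005, matching the analytic √(2t) up to the discretization offset of half a step.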
38

Obesity and Aggressive Prostate Cancer: Bias and Biology

McBride, Russell Bailey January 2012 (has links)
Obesity is suspected to be a risk factor for aggressive prostate cancer (PC) due to its associations with altered circulating levels of the metabolic and sex steroid hormones involved in prostate development and oncogenesis. However, the current observational evidence linking obesity to aggressive PC is inconsistent or conflicting, and there is growing concern that much of the heterogeneity across studies may be the result of obesity interfering with PC screening, diagnosis, and treatment. We performed a critical review of studies analyzing the association between anthropometric measures and overall PC risk, as well as risk of aggressive disease, and illustrate how unique aspects of PC diagnosis and treatment render its risk factor associations unusually susceptible to selection biases that are largely unabated by conventional statistical adjustment. Using a counterfactual framework to describe the selection processes that give rise to these biases, we demonstrate instances in which marginal structural models (MSM) and inverse probability weighting (IPW) may be able to address such biases. Using data collected on a series of patients referred for prostate biopsy and found to have PC, we examined the associations between BMI and clinical and pathological characteristics. We found evidence of differential receipt of radical prostatectomy (RP) by BMI category and history of obesity, which, in the latter case, partially attenuated the association between obesity and high-grade biopsy. After multivariate statistical adjustment and IPW, obesity was associated with increased odds of higher pathological grade and stage after RP, associations that were not apparent without the use of IPW.
We also examined history of obesity (BMI measured at age 20, age 40, and near the time of diagnosis) and found that men with a BMI ≥30 at all three measures had increased odds of high pathologic stage (≥pT3), tumor volume >30 mm³, and positive surgical margins, compared to never-obese men. In the multivariate models that did not use inverse probability weights, only the association between chronic obesity and high pathological grade reached statistical significance. These findings suggest that treatment selection factors biased our estimates of the associations between history of obesity and adverse tumor characteristics toward the null, and would have substantively altered the overall findings of the study. We then conducted multiplex immunofluorescence immunohistochemistry on tissue microarrays (TMA) made from representative cores of tumor tissue from RP specimens. Using a semi-automated fluorescence microscopy and imaging technique, we measured nuclear expression of androgen receptor (AR), epithelial insulin-like growth factor I receptor (IGF-IR), and the proliferation marker Ki67 in 357 cases who received an RP. We then tested for associations between patient history of obesity and other demographic and clinical characteristics. Expression of AR and Ki67 was positively associated with tumor grade and stage, while Ki67 and IGF-IR were associated with tumor volume in excess of 30 mm³. We also found an inverse association between IGF-IR and tumor grade. We did not, however, find that history of obesity was significantly associated with expression of any of the biomarkers. Thus we found no evidence that the association between chronic obesity and aggressive disease is mediated by differential expression of androgen or IGF-I receptor, or by greater tumor proliferation (Ki67).
As researchers continue to investigate the underlying causes of aggressive PC and pursue the goal of personalized medicine, studies such as these become increasingly important: they have the potential to reduce the biases inherent in these datasets and to explore important interactions between risk factors and tumor phenotypes that may point the way to new preventive and treatment strategies.
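The inverse probability weighting used in the study above can be illustrated with a minimal sketch: each subject is weighted by the inverse of the probability of the treatment they actually received, so the weighted sample behaves as if treatment receipt were independent of the measured covariates. The propensity values below are invented for illustration, not taken from the study:

```python
def ipw_weights(treated, propensity):
    """Inverse probability of treatment weights:
    1/p for treated subjects, 1/(1-p) for untreated ones."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Hypothetical subjects: treatment indicator and estimated P(treated | covariates)
treated    = [1, 1, 0, 0]
propensity = [0.8, 0.4, 0.5, 0.2]
print(ipw_weights(treated, propensity))  # [1.25, 2.5, 2.0, 1.25]
```

Note how the treated subject with a low propensity (0.4) and the untreated subject with a high one (0.8) get the largest weights: they are the "unexpected" outcomes that must be up-weighted to undo differential treatment receipt.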
39

Sparse selection in Cox models with functional predictors

Zhang, Yulei January 2012 (has links)
This thesis investigates sparse selection in Cox regression models with functional predictors. Interest in sparse selection with functional predictors (Lindquist and McKeague, 2009; McKeague and Sen, 2010) can arise in biomedical studies. A functional predictor is a predictor with a trajectory, usually indexed by time, location or other factors. When the trajectory of a covariate is observed for each subject and we need to identify a common "sensitive" point of these trajectories that drives the outcome, the problem can be formulated as sparse selection with functional predictors. For example, we may wish to locate, along a chromosome, a gene associated with cancer risk. The functional linear regression method is widely used for the analysis of functional covariates; however, it can lack interpretability. The method we develop in this thesis has a straightforward interpretation, since it relates the hazard to sensitive components of the functional covariates. The Cox regression model has been extensively studied in the analysis of time-to-event data. In this thesis, we extend it to allow for sparse selection with functional predictors. Using the partial likelihood as the criterion function, and following the three-step procedure for M-estimators established in van der Vaart and Wellner (1996), we obtain the consistency, rate of convergence and asymptotic distribution of the M-estimators of the sensitive point and the regression coefficients. To study these large-sample properties, a fractional Brownian motion assumption is imposed on the trajectories for mathematical tractability. Simulations are conducted to evaluate the finite-sample performance of the methods, and a method for constructing a confidence interval for the location parameter, i.e., the sensitive point, is proposed.
The proposed method is applied to an adult brain cancer study and a breast cancer study to find the sensitive point, here the locus of a chromosome, which is closely related to cancer mortality. Since the breast cancer dataset has missing values, we also investigate the impact of varying proportions of missingness in the data on the accuracy of our estimator.
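The "sensitive point" idea above — relating the hazard to the value of a functional covariate at one common point — can be caricatured as a grid search that profiles the Cox partial likelihood over candidate points. This is a hypothetical toy illustration (pure Python, no ties in event times, invented data), not the M-estimation procedure developed in the thesis:

```python
import math

def cox_partial_loglik(beta, x, time, event):
    """Cox partial log-likelihood for one scalar covariate (no tied event times)."""
    ll = 0.0
    for i in range(len(x)):
        if not event[i]:
            continue  # censored observations contribute only via risk sets
        risk = [math.exp(beta * x[j]) for j in range(len(x)) if time[j] >= time[i]]
        ll += beta * x[i] - math.log(sum(risk))
    return ll

# Toy functional predictors: 4 subjects, trajectories sampled at 5 points.
X = [[0.1, 0.9, 0.2, 0.1, 0.0],
     [0.2, 0.8, 0.1, 0.3, 0.1],
     [0.0, 0.1, 0.2, 0.2, 0.1],
     [0.1, 0.2, 0.1, 0.1, 0.0]]
time  = [2.0, 3.0, 5.0, 7.0]
event = [1, 1, 1, 0]

# Crude search: which sample point (and coefficient) best explains the hazard?
best = max(((p, b) for p in range(5) for b in [-2, -1, 0, 1, 2]),
           key=lambda pb: cox_partial_loglik(pb[1], [row[pb[0]] for row in X],
                                             time, event))
print("sensitive point index, beta:", best)
```

The thesis replaces this naive grid with formal M-estimation, which is what makes consistency and asymptotic-distribution results possible.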
40

Haplotype Inference through Sequential Monte Carlo

Iliadis, Alexandros January 2013 (has links)
Technological advances in the last decade have given rise to large genome-wide studies, which have helped researchers gain better insight into the genetic basis of many common diseases. As the number of samples and genome coverage have increased dramatically, it is now typical for individuals to be genotyped on high-throughput platforms at more than 500,000 single nucleotide polymorphisms (SNPs). At the same time, theoretical and empirical arguments have been made for the use of haplotypes, i.e., combinations of alleles at multiple loci on individual chromosomes, as opposed to genotypes, so the problem of haplotype inference is particularly relevant. Existing haplotyping methods include population-based methods, methods for pooled DNA samples, and methods for family and pedigree data. Furthermore, the vast amount of available data poses new challenges for haplotyping algorithms: candidate methods should scale well to the size of current datasets, as both the number of loci and the number of individuals run well into the thousands. In addition, as genotyping can be performed routinely, researchers encounter a number of new scenarios that can be seen as hybrids between the population and pedigree inference settings and that require special care to incorporate the maximum amount of information. In this thesis we present a Sequential Monte Carlo framework (TDS) and tailor it to address instances of the haplotype inference and frequency estimation problems. Specifically, we first adapt our framework to perform haplotype inference in trio families, resulting in a methodology that demonstrates an excellent tradeoff between speed and accuracy. We then extend the method to handle general nuclear families and demonstrate the gain from our approach over the alternatives. We further address the problem of haplotype inference from pooled data, where we show that our method achieves improved performance over existing approaches on datasets with a large number of markers.
Finally, we present a framework to handle the haplotype inference problem in regions of CNV/SNP data. Using our approach we can phase datasets where the ploidy of an individual varies along the region and each individual can have different breakpoints.
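The sequential Monte Carlo idea behind a framework like TDS — particles as partial haplotype assignments, extended locus by locus and re-weighted — can be sketched in miniature. The weighting scheme below is a deliberately crude, LD-like stand-in of our own invention, not the actual TDS algorithm; genotypes are coded as alternate-allele counts (0, 1, 2):

```python
import random

def smc_phase(genotypes, n_particles=100, seed=1):
    """Toy sequential Monte Carlo phasing: each particle is a pair of
    haplotype prefixes, extended one locus at a time. Heterozygous loci
    (genotype 1) branch randomly; weights reward each haplotype for
    repeating its previous allele (a crude stand-in for linkage)."""
    rng = random.Random(seed)
    particles = [([], [], 1.0) for _ in range(n_particles)]
    for g in genotypes:
        extended = []
        for h1, h2, w in particles:
            if g == 0:
                a, b = 0, 0          # homozygous reference: forced
            elif g == 2:
                a, b = 1, 1          # homozygous alternate: forced
            else:
                a = rng.randint(0, 1)  # heterozygous: phase is ambiguous
                b = 1 - a
            if h1:  # re-weight by agreement with the previous locus
                w *= 1.5 if h1[-1] == a else 0.5
                w *= 1.5 if h2[-1] == b else 0.5
            extended.append((h1 + [a], h2 + [b], w))
        particles = extended
    best = max(particles, key=lambda p: p[2])
    return best[0], best[1]

h1, h2 = smc_phase([1, 1, 2, 0, 1])
print(h1, h2)  # alleles at each locus sum to the input genotype
```

A real implementation would add resampling when weights degenerate and a population-genetic weighting model; the sketch only shows the particle-extension skeleton.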
