  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Statistical tests of complementary palindromes: An application of searching virus origin of replication

Chen, Chun-Lin, 19 July 2009
Human cytomegalovirus (CMV) is one of the most widespread viruses in the world. To grow and reproduce, CMV invades particular host cells and alters their behavior. The origin of replication (also called the replication origin) is a particular sequence in the CMV DNA genome at which replication is initiated. In this study, we develop statistical tests of complementary palindromes that can be applied to narrow the search for the replication origin in the CMV DNA sequence. Let X_(2k) be the number of complementary palindromes of length 2k and Y_(2k) be the number of non-covered complementary palindromes of length 2k in a given DNA sequence. Consider the null hypothesis that the marginal probabilities of the four nucleotides remain the same (1/4) over the given sequence, versus the alternative hypothesis that the marginal probabilities differ. The likelihood ratio test based on the joint distributions of Y_(18) and Y_(2k) | (Y_(2(k+1)), ..., Y_(18)), where k=1, ..., 8, is derived under the null and alternative hypotheses. The null distribution of the test statistic is approximated by a scaled chi-squared distribution, whose scale parameter and degrees of freedom are estimated by the method of moments. A Pearson's chi-squared test based on the marginal distributions of X_(2k), where k=1, ..., 9, is constructed similarly; its null distribution is also approximated by a scaled chi-squared distribution. A further focus is the ratio statistics X_(2k)/X_(2(k+1)) and Y_(2k)/Y_(2(k+1)), which approach a specific value under the null hypothesis. Simulation studies are performed to confirm the theoretical findings.
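The counting statistic X_(2k) described above has a direct implementation: a window of length 2k is a complementary palindrome exactly when it equals its own reverse complement. A minimal sketch (illustrative code, not the thesis's software):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def is_complementary_palindrome(window: str) -> bool:
    """True if the window equals its reverse complement."""
    return window == "".join(COMPLEMENT[b] for b in reversed(window))

def count_palindromes(seq: str, k: int) -> int:
    """X_(2k): the number of length-2k complementary palindromes in seq."""
    length = 2 * k
    return sum(
        is_complementary_palindrome(seq[i:i + length])
        for i in range(len(seq) - length + 1)
    )

# "GAATTC" (the EcoRI site) reads the same as its reverse complement,
# so it is the single length-6 complementary palindrome in this sequence.
print(count_palindromes("GGAATTCC", 3))  # → 1
```

Unusually high local counts of such palindromes are the signal the tests above are designed to assess.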
32

Automated Discovery of Pedigrees and Their Structures in Collections of STR DNA Specimens Using a Link Discovery Tool

Haun, Alex Brian, 01 May 2010
In instances of mass fatality, such as plane crashes, natural disasters, or terrorist attacks, investigators may encounter hundreds or thousands of DNA specimens representing victims. For example, during the January 2010 Haiti earthquake, entire communities were destroyed, resulting in the loss of thousands of lives. With such a large number of victims, the discovery of family pedigrees is possible, but it often requires the manual application of analytical methods, which are tedious, time-consuming, and expensive. The method presented in this thesis allows for automated pedigree discovery by extending the Link Discovery Tool (LDT), a graph visualization tool designed for discovering linkages in large criminal networks. The proposed algorithm takes advantage of spatial clustering of graphs of DNA specimens to discover pedigree structures in large collections of specimens, saving both time and money in the identification process.
33

Mixture distributions with application to microarray data analysis

Lynch, O'Neil, 01 June 2009
The main goal in analyzing microarray data is to determine the genes that are differentially expressed across two types of tissue samples or samples obtained under two experimental conditions. In this dissertation we propose two methods to determine differentially expressed genes. For the penalized normal mixture model (PMMM), we penalized both the variance and the mixing proportion parameters simultaneously. The variance parameter was penalized so that the log-likelihood is bounded, while the mixing proportion parameter was penalized so that its estimates are not on the boundary of its parametric space. The null distribution of the likelihood ratio test statistic (LRTS) was simulated so that we could perform a hypothesis test for the number of components of the penalized normal mixture model. In addition to simulating the null distribution of the LRTS for the penalized normal mixture model, we showed that the maximum likelihood estimates are asymptotically normal, a necessary first step toward proving the asymptotic null distribution of the LRTS. This result is a significant contribution to the field of normal mixture models. The modified p-value approach for detecting differentially expressed genes is also discussed in this dissertation. The modified p-value approach was implemented so that a hypothesis test for the number of components could be conducted using the modified likelihood ratio test. In the modified p-value approach, we penalized the mixing proportion so that its estimates are not on the boundary of its parametric space. The null distribution of the LRTS was simulated so that the number of components of the uniform-beta mixture model could be determined. Finally, both methods, the penalized normal mixture model and the modified p-value approach, were applied to simulated and real data.
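The penalized normal mixture model builds on the plain EM fit of a two-component normal mixture. The sketch below shows only that unpenalized building block, with a crude variance floor standing in for the variance penalty; all data and values are illustrative, not the dissertation's:

```python
import math
import random

def em_two_normals(data, iters=200):
    """Fit pi*N(mu1, s1^2) + (1-pi)*N(mu2, s2^2) by plain EM."""
    pi, mu1, mu2 = 0.5, min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = pi * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = (1 - pi) * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted means, variances, and mixing proportion;
        # the 1e-3 floor crudely prevents the degenerate zero-variance
        # solutions that motivate the variance penalty discussed above.
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        s1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1), 1e-3)
        s2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / (len(data) - n1)), 1e-3)
        pi = n1 / len(data)
    return pi, mu1, mu2

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(4, 1) for _ in range(200)]
pi, mu1, mu2 = em_two_normals(data)
print(round(pi, 2), round(mu1, 1), round(mu2, 1))
```

On this synthetic two-mode sample, the fitted mixing proportion lands near 0.5 and the component means near 0 and 4.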
34

Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

Liang, Yuli, January 2015
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in balanced random effects models with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
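The spectral decomposition exploited above is easiest to see in the simplest case: a symmetric circulant (circular Toeplitz) matrix is diagonalized by the discrete Fourier basis, so its eigenvalues are the DFT of its first row. A small sketch of that fact, for a plain circulant rather than the full block structure of the thesis:

```python
import cmath

def circulant_eigenvalues(first_row):
    """Eigenvalues of a circulant matrix: the DFT of its first row.

    For a symmetric circulant the eigenvalues are real, so the tiny
    imaginary round-off is dropped via .real.
    """
    n = len(first_row)
    return [
        sum(first_row[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n)).real
        for m in range(n)
    ]

# A 4x4 symmetric circular Toeplitz covariance with first row (2, 1, 0.5, 1)
eigs = sorted(circulant_eigenvalues([2.0, 1.0, 0.5, 1.0]))
print([round(e, 6) for e in eigs])  # → [0.5, 1.5, 1.5, 4.5]
```

Because the eigenvectors are known in advance, maximum likelihood estimation can work directly on these eigenvalues, which is the idea behind the explicit estimators discussed above.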
35

Essays on regime switching and DSGE models with applications to U.S. business cycle

Zhuo, Fan, 09 November 2016
This dissertation studies various issues related to regime switching and DSGE models. The methods developed are used to study U.S. business cycles. Chapter one considers and derives the limit distributions of likelihood ratio based tests for Markov regime switching in multiple parameters in the context of a general class of nonlinear models. The analysis simultaneously addresses three difficulties: (1) some nuisance parameters are unidentified under the null hypothesis, (2) the null hypothesis yields a local optimum, and (3) the conditional regime probabilities follow stochastic processes that can only be represented recursively. When applied to US quarterly real GDP growth rates, the tests suggest strong evidence favoring the regime switching specification over a range of sample periods. Chapter two develops a modified likelihood ratio (MLR) test to detect regime switching in state space models. I apply the filtering algorithm introduced in Gordon and Smith (1988) to construct a modified likelihood function under the alternative hypothesis of two regimes and I extend the analysis in Chapter one to establish the asymptotic distribution of the MLR statistic under the null hypothesis of a single regime. I also apply the test to a simple model of the U.S. unemployment rate. This contribution is the first to develop a test based on the likelihood ratio principle to detect regime switching in state space models. The final chapter estimates a search and matching model of the aggregate labor market with sticky price and staggered wage negotiation. It starts with a partial equilibrium search and matching model and expands into a general equilibrium model with sticky price and staggered wage. I study the quantitative implications of the model. 
The results show that (1) the price stickiness and staggered wage structure are quantitatively important for the search and matching model of the aggregate labor market; (2) relatively high outside option payments to the workers, such as unemployment insurance payments, are needed to match the data; and (3) workers have lower bargaining power relative to firms, which contrasts with the assumption in the literature that workers and firms share equally the surplus generated from their employment relationship.
36

Accelerated Life testing of Electronic Circuit Boards with Applications in Lead-Free Design

January 2012
abstract: This dissertation presents methods for addressing research problems that currently can only be solved adequately using Quality Reliability Engineering (QRE) approaches, especially accelerated life testing (ALT) of electronic printed wiring boards, with applications to avionics circuit boards. The methods presented in this research are generally applicable to circuit boards, but the data generated and analyzed here are for high performance avionics. Aircraft equipment manufacturers typically require avionics equipment to have an expected life of 20 years, and therefore ALT is the only practical way of performing life estimates. Both thermal and vibration ALT induced failures are performed and analyzed to resolve industry questions relating to the introduction of lead-free solder products and processes into high reliability avionics. In chapter 2, thermal ALT using an industry-standard failure machine implementing the Interconnect Stress Test (IST), which simulates circuit board life data, is compared to real production failure data by likelihood ratio tests to arrive at a mechanical theory. This mechanical theory results in a statistically equivalent energy bound such that failure distributions below a specific energy level are considered to be from the same distribution, thus allowing testers to quantify parameter settings in IST prior to life testing. In chapter 3, vibration ALT comparing tin-lead and lead-free circuit board solder designs involves the use of the likelihood ratio (LR) test to assess both complete failure data and S-N curves to present methods for analyzing data. Failure data are analyzed using regression and two-way analysis of variance (ANOVA) and reconciled with the LR test results, indicating that a costly aging pre-process may be eliminated in certain cases. In chapter 4, vibration ALT for side-by-side tin-lead and lead-free solder black box designs are life tested.
Commercial models based on strain data do not exist at the low strain levels associated with life testing and need to be developed, because the testing performed and presented here indicates that tin-lead and lead-free solders behave similarly. In addition, earlier vibration-induced failures, such as connector failure modes, will occur before solder interconnect failures. / Dissertation/Thesis / Ph.D. Industrial Engineering 2012
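The likelihood ratio comparisons described above can be illustrated with a deliberately simplified model: testing whether two failure-time samples share a common exponential failure rate. The dissertation's actual life distributions and data are not reproduced here; the cycle counts below are made up:

```python
import math

def exp_loglik(data, rate):
    """Exponential log-likelihood at the given failure rate."""
    return len(data) * math.log(rate) - rate * sum(data)

def lr_test_common_rate(sample_a, sample_b):
    """-2 log(LR) for H0: one common rate vs H1: separate rates.

    Under H0 this statistic is approximately chi-squared with 1 df.
    """
    rate_a = len(sample_a) / sum(sample_a)     # MLE of an exponential rate: n / sum(x)
    rate_b = len(sample_b) / sum(sample_b)
    pooled = sample_a + sample_b
    rate_0 = len(pooled) / sum(pooled)
    return -2 * (
        exp_loglik(pooled, rate_0)
        - exp_loglik(sample_a, rate_a)
        - exp_loglik(sample_b, rate_b)
    )

tin_lead = [120.0, 150.0, 90.0, 200.0, 170.0]   # hypothetical cycles to failure
lead_free = [60.0, 45.0, 80.0, 70.0, 55.0]
stat = lr_test_common_rate(tin_lead, lead_free)
print(round(stat, 2))  # compare against the chi-squared(1) 5% critical value, 3.84
```

The same principle — nested maximized likelihoods compared via -2 log(LR) — underlies the tests of Weibull and S-N curve equivalence in the chapters above, just with richer life distributions.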
37

Statistical Signal Processing of ESI-TOF-MS for Biomarker Discovery

January 2012
abstract: Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has been one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time of flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful approach to decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and ML estimation is evaluated with Monte Carlo simulation, which shows promising results.
An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating between type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
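The Neyman-Pearson LRT detection idea referenced above reduces, for a known peak shape in white Gaussian noise, to thresholding a matched-filter correlation. A hedged sketch with a synthetic Gaussian-shaped peak (not the dissertation's ESI-TOF-MS signal model):

```python
import math
import random

def matched_filter_statistic(x, s, sigma):
    """Log-likelihood ratio for H1: x = s + noise vs H0: x = noise only.

    For white Gaussian noise this is (<x, s> - ||s||^2 / 2) / sigma^2,
    i.e. a matched-filter correlation with a fixed offset.
    """
    energy = sum(si * si for si in s)
    corr = sum(xi * si for xi, si in zip(x, s))
    return (corr - energy / 2) / (sigma ** 2)

random.seed(1)
sigma = 1.0
s = [math.exp(-((t - 10) ** 2) / 8.0) for t in range(21)]  # known Gaussian-shaped peak
noise_only = [random.gauss(0, sigma) for _ in range(21)]
with_peak = [a + b for a, b in zip(s, noise_only)]

# Adding the peak raises the statistic by exactly ||s||^2 / sigma^2,
# so the peak-bearing window always scores higher on the same noise.
print(matched_filter_statistic(with_peak, s, sigma) > matched_filter_statistic(noise_only, s, sigma))  # → True
```

Choosing the threshold for this statistic to fix the false-alarm rate is exactly the Neyman-Pearson step; the dissertation's contribution is in deriving realistic signal and noise models to plug into this framework.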
38

Problémy znaleckého dokazování v trestním řízení / The problems of judicial expertise in criminal proceedings

Chmel, Jan, January 2021
The problems of judicial expertise in criminal proceedings Abstract Judicial expertise is a substantial and irreplaceable part of criminal proceedings. Its legal regulation is required to fulfil high demands. Firstly, it must provide an effective platform for the usage of expert evidence in criminal proceedings. Secondly, it ought to ensure that experts provide quality outcomes in compliance with lege artis. Thirdly, it should offer satisfactory conditions for experts' activities. This thesis selects a few of the current issues originating from the aforementioned requirements. It analyses their origin and evaluates how the Czech legal regulation solves them. At first, the thesis defines the fundamental institutes which create a base for an expert's function in criminal proceedings. Subsequently, it offers an overview of the statutory regulation of judicial expertise in criminal proceedings. It deals with both the special regulation in criminal law and the general regulation of Act No. 254/2019 Sb., on judicial experts, expert offices and expert institutes, together with relevant ordinances. Chapter three deals with legislative changes in the field of judicial experts effective from 1st January 2021. It focuses on the appointment of new experts, their remuneration, and the supervision of experts' activities. It analyses and compares how these issues...
39

An analysis of bulletproof as probabilistic genotyping software for forensic DNA analysis casework

Randolph, Brianna, 14 June 2019
Using computer systems for probabilistic genotyping of DNA evidence in forensic casework is beneficial, as it allows a complete analysis of the data available for a wide range of profiles, a range that is limited when analyzed manually. One such software package, Bulletproof, uses the exact method as the statistical foundation of its web-based interface to estimate the likelihood ratio of two hypotheses that explain the given evidence. In this investigation, the capability of Bulletproof was examined by analyzing the effects of evidence and reference sample template amount, injection time, and stutter filter utilization on the likelihood ratio. In terms of the likelihood ratio, deconvolution by the software is more effective when evidence samples have high contributor ratios (such as 1:9 rather than 1:1), low contributor counts, and high template amounts, and when sample injection times are low. The template amount and injection time of reference samples are less impactful than those of evidentiary samples. As with unknown samples, reference samples should be analyzed beforehand and artifacts removed for better deconvolution.
40

A density for a Generalized Likelihood-Ratio Test When the Sample Size is a Random Variable

Neville, Raymond H., 01 May 1966
The main objective of this work will be to examine the hypothesis that all the treatment means are the same and equal to some unknown quantity, when we know that the variance is the same for each sample, and to determine whether the conventional method for making this test (the F-test) is applicable when the sample sizes are assumed to be random variables. This hypothesis can be tested by using a likelihood-ratio test. To do this, a density function or distribution has to be found for this ratio, thus permitting us to make probability statements about the occurrence of this ratio under the null hypothesis.
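The question posed above can be probed by simulation: draw the group sample sizes at random, compute the one-way ANOVA F statistic under the null of equal means, and compare the rejection rate against a fixed critical value. All distributional choices below are illustrative, not the thesis's derivation:

```python
import random

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

random.seed(2)
rejections = 0
trials = 2000
for _ in range(trials):
    # Random sample sizes between 2 and 9 per group, all means equal (null true)
    sizes = [2 + random.randrange(8) for _ in range(3)]
    groups = [[random.gauss(0, 1) for _ in range(m)] for m in sizes]
    # A fixed cutoff chosen for one particular denominator df; with random
    # sizes the denominator df varies, so the realized level drifts from 5%,
    # which is precisely the complication random sample sizes introduce.
    if f_statistic(groups) > 5.14:
        rejections += 1
rate = rejections / trials
print(rate)
```

A proper test with random sample sizes needs the density of the likelihood-ratio statistic marginalized over the sample-size distribution, which is the density the thesis sets out to find.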
