71. Longitudinal Data Analysis Using Multilevel Linear Modeling (MLM): Fitting an Optimal Variance-Covariance Structure
Lee, Yuan-Hsuan, August 2010
This dissertation addresses fitting an optimal variance-covariance (V-C) structure in the multilevel linear modeling framework through two Monte Carlo simulation studies. In the first study, the author evaluated the performance of common fit statistics, the likelihood ratio test (LRT), Akaike information criterion (AIC), and Bayesian information criterion (BIC), alongside a newly proposed method, the standardized root mean square residual (SRMR), for selecting the correct within-subject covariance structure. Results from the simulated data suggested that SRMR performed best in selecting the optimal covariance structure. A pharmaceutical example was also used to evaluate these fit statistics empirically. The LRT could not adjudicate between the candidate models because it applies only to nested models; SRMR, on the other hand, agreed with AIC and BIC in choosing ARMA(1,1) as the optimal variance-covariance structure. In the second study, the author adopted a first-order autoregressive structure as the true within-subject V-C structure, with variability in the intercept and slope (estimating [tau]00 and [tau]11 only), and investigated the consequences of simultaneously misspecifying different levels/types of the V-C matrices on the estimation and significance testing of the growth/fixed-effect and random-effect parameters, varying the size of the autoregressive parameter, the magnitude of the fixed-effect parameters, the number of cases, and the number of waves. The simulation showed that the commonly used identity within-subject structure with an unstructured between-subject matrix performed as well as the true model on the criterion variables. Other misspecified conditions, such as the Under G & Over R and the Generally misspecified G & R conditions, produced biased standard error estimates for the fixed effects and led to inflated Type I error rates or lowered statistical power.
The two studies bridge the gap between theory and practical application in the current literature. Further research could test the effectiveness of the proposed SRMR in searching for the optimal V-C structure under different conditions and evaluate the impact of different types/levels of misspecification with various simultaneous specifications of the within- and between-level V-C structures.
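As an illustrative sketch only (not the dissertation's simulation design), the kind of comparison described above can be reproduced in miniature: simulate longitudinal data with a true AR(1) within-subject covariance, fit two candidate structures by maximum likelihood, and compare them by AIC and by an SRMR-style residual index on the correlation scale. The sample sizes, parameter values, and the exact SRMR formula here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, T = 200, 6                        # subjects and measurement waves (hypothetical)
idx = np.arange(T)
lag = np.abs(idx[:, None] - idx[None, :])

# True within-subject structure: AR(1) with rho = 0.6, sigma^2 = 1
Sigma_true = 0.6 ** lag
Y = rng.multivariate_normal(np.zeros(T), Sigma_true, size=n)

def neg_ll(Sigma):
    """Gaussian negative log-likelihood of all n subjects under Sigma."""
    try:
        L = np.linalg.cholesky(Sigma)
    except np.linalg.LinAlgError:
        return 1e10                  # penalize non-positive-definite candidates
    half = np.linalg.solve(L, Y.T)
    logdet = 2 * np.log(np.diag(L)).sum()
    return 0.5 * ((half ** 2).sum() + n * (logdet + T * np.log(2 * np.pi)))

# Candidate 1: AR(1); parameters are (log sigma^2, atanh rho)
nll_ar1 = lambda p: neg_ll(np.exp(p[0]) * np.tanh(p[1]) ** lag)
res_ar1 = minimize(nll_ar1, x0=[0.0, 0.5], method="Nelder-Mead")

# Candidate 2: compound symmetry; parameters are (log sigma^2, atanh r)
def cs(p):
    s2, r = np.exp(p[0]), np.tanh(p[1])
    return s2 * ((1 - r) * np.eye(T) + r * np.ones((T, T)))
res_cs = minimize(lambda p: neg_ll(cs(p)), x0=[0.0, 0.1], method="Nelder-Mead")

aic_ar1 = 2 * res_ar1.fun + 2 * 2    # both candidates have 2 parameters
aic_cs = 2 * res_cs.fun + 2 * 2

# SRMR-style index: RMS gap between sample and model-implied correlations
S = np.corrcoef(Y, rowvar=False)
tri = np.tril_indices(T, -1)
srmr = lambda R: np.sqrt(np.mean((S[tri] - R[tri]) ** 2))
rho_hat = np.tanh(res_ar1.x[1])
srmr_ar1 = srmr(rho_hat ** lag)
r_hat = np.tanh(res_cs.x[1])
srmr_cs = srmr(np.where(np.eye(T, dtype=bool), 1.0, r_hat))

print(f"AIC  AR(1)={aic_ar1:.1f}  CS={aic_cs:.1f}")
print(f"SRMR AR(1)={srmr_ar1:.3f}  CS={srmr_cs:.3f}")
```

With AR(1) as the generating structure, both AIC and the residual index favor the correctly specified candidate, which is the pattern of agreement between SRMR and the information criteria that the abstract reports.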
72. Resampling Methodology in Spatial Prediction and Repeated Measures Time Series
Rister, Krista Dianne, December 2010
In recent years, the application of resampling methods to dependent data, such as time series or spatial data, has been a growing field in statistics. In this dissertation, we discuss two such applications.
In spatial statistics, the reliability of Kriging prediction methods depends on the observations coming from an underlying Gaussian process. When the observed data set is not from a multivariate Gaussian distribution, but rather is a transformation of Gaussian data, Kriging methods can produce biased predictions. Bootstrap resampling methods present a potential bias correction. We propose a parametric bootstrap methodology for calculating either a multiplicative or an additive bias correction factor when dealing with Trans-Gaussian data. Furthermore, we investigate the asymptotic properties of the new bootstrap-based predictors. Finally, we present results for both simulated and real-world data.
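A minimal sketch of the idea, under strong assumptions (one-dimensional sites, a known zero-mean exponential covariance, and a lognormal transformation): krige on the Gaussian scale, note that the naive back-transform exp(Zhat) is biased, and use a parametric bootstrap under the fitted model to estimate a multiplicative correction factor. All site locations and covariance parameters below are hypothetical; the dissertation's actual methodology estimates the model rather than assuming it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential covariance on the line (assumed known for this sketch)
def cov(a, b, sigma2=1.0, range_=2.0):
    return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / range_)

s_obs = np.linspace(0.0, 10.0, 6)    # observation sites (hypothetical)
s0 = np.array([5.3])                 # prediction site
s_all = np.concatenate([s0, s_obs])

C = cov(s_all, s_all)
C00, C0o, Coo = C[0, 0], C[:1, 1:], C[1:, 1:]
w = np.linalg.solve(Coo, C0o.ravel())       # simple-kriging weights (zero mean)

# One realization of the latent Gaussian field, observed as Y = exp(Z)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(s_all)))
z = L @ rng.standard_normal(len(s_all))
y_obs = np.exp(z[1:])

zhat = w @ np.log(y_obs)             # krige on the Gaussian scale
naive = np.exp(zhat)                 # naive back-transform (biased low)

# Parametric bootstrap: resimulate (Z0, Z_obs) under the fitted model and
# estimate the multiplicative bias of the naive predictor
B = 5000
zb = L @ rng.standard_normal((len(s_all), B))
c = np.exp(zb[0]).mean() / np.exp(w @ zb[1:]).mean()
corrected = c * naive
print(f"bias factor c = {c:.3f}, naive = {naive:.3f}, corrected = {corrected:.3f}")
```

For the lognormal case the bootstrap factor can be checked against the closed-form correction exp((C00 - w'Coo w)/2) > 1, which is why the naive predictor is biased low here.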
In time series analysis, the estimation of covariance parameters is often of utmost importance. Furthermore, understanding the distributional behavior of parameter estimates, particularly the variance, is useful but often difficult. Block bootstrap methods have been particularly useful in such analyses. We introduce a new procedure for estimating covariance parameters for replicated time series data.
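For flavor, here is a simplified stand-in for resampling replicated series (treating each whole replicate as one "block", which is a simplification of a true block bootstrap): pool a Yule-Walker estimate of an AR(1) coefficient across replicates, then resample replicates to approximate the estimator's sampling variability. The replicate count, series length, and AR model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
R, T, phi = 40, 100, 0.5             # hypothetical replicates, length, AR coefficient

def ar1(T, phi, rng):
    """Simulate one AR(1) series with standard-normal innovations."""
    x = np.zeros(T)
    eps = rng.standard_normal(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + eps[t]
    return x

series = np.array([ar1(T, phi, rng) for _ in range(R)])

def phi_hat(s):
    """Pooled Yule-Walker estimate of the AR coefficient across replicates."""
    num = sum((x[1:] * x[:-1]).sum() for x in s)
    den = sum((x ** 2).sum() for x in s)
    return num / den

est = phi_hat(series)

# Resample whole replicates to estimate the sampling variability of est
B = 1000
boot = np.array([phi_hat(series[rng.integers(0, R, R)]) for _ in range(B)])
se = boot.std(ddof=1)
print(f"phi_hat = {est:.3f}, bootstrap SE = {se:.4f}")
```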
73. A study of statistical distribution of a nonparametric test for interval censored data
Chang, Ping-chun, 05 July 2005
A nonparametric test for interval-censored failure time data is proposed for determining whether p lifetime populations come from the same distribution. For this comparison problem based on interval-censored failure time data, Sun proposed several nonparametric test procedures in recent years. In this paper, we present simulation procedures to verify the test proposed by Sun. The simulation results indicate that the proposed test statistic does not approximately follow a chi-square distribution with p-1 degrees of freedom, but rather a chi-square distribution with p-1 degrees of freedom scaled by a constant.
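A scaled chi-square can be diagnosed from simulated statistics by moment matching: if T ~ c * chi2 with df = p-1, then E[T] = c(p-1) and Var[T] = 2c^2(p-1), so the constant is recoverable from either moment and the two estimates should agree. This sketch uses a hypothetical p and scaling constant, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4                       # hypothetical number of lifetime populations
df = p - 1
c_true = 1.7                # hypothetical scaling constant

# Stand-in for simulated values of the test statistic under the null
stats = c_true * rng.chisquare(df, size=20000)

# Recover the constant from the mean and from the variance, and cross-check
c_from_mean = stats.mean() / df
c_from_var = np.sqrt(stats.var(ddof=1) / (2 * df))
print(f"c from mean: {c_from_mean:.3f}, c from variance: {c_from_var:.3f}")
```

If the two moment-based estimates disagreed, the scaled chi-square description itself would be suspect, so the cross-check is part of the diagnostic.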
74. Latent models for cross-covariance
Wegelin, Jacob A., January 2001
Thesis (Ph.D.), University of Washington, 2001. Vita. Includes bibliographical references (p. 139-145).
75. Generation of high fidelity covariance data sets for the natural molybdenum isotopes including a series of molybdenum sensitive critical experiment designs
Van der Hoeven, Christopher Ainslie, 15 October 2013
Quantification of uncertainty in computational models of nuclear systems is required for assessing margins of safety for both design and operation of those systems. The largest source of uncertainty in such models derives from the nuclear cross section data used for modeling. There are two parts to cross section uncertainty data: the relative uncertainty in the cross section at a particular energy, and the correlation of that uncertainty with the uncertainty at all other energies. This uncertainty and its correlation structure are compiled as covariance data. High fidelity covariance data exists for a few key isotopes; however, the covariance data available for many structural materials is considered low fidelity, derived primarily from integral measurements with little meaningful correlation between energy regions. Low fidelity covariance data is acceptable for materials to which the operating characteristics of the modeled nuclear system are insensitive. In some cases, however, nuclear systems can be sensitive to isotopes with only low fidelity covariance data. Such is the case for the new U(19.5%)-10Moly foil fuel form to be produced at the Y-12 National Security Complex for use in research and test reactors. This fuel is ten weight percent molybdenum, the isotopes of which have only low fidelity covariance data. Improvements to the molybdenum isotope covariance data would benefit the modeling of systems using the new fuel form. This dissertation provides a framework for deriving high fidelity molybdenum isotope covariance data from a set of elemental molybdenum experimental cross section results. Additionally, a series of critical experiments featuring the new Y-12 fuel form was designed to address deficiencies in the critical experiment library with respect to the molybdenum isotopes.
Along with existing molybdenum sensitive critical experiments, these proposed experiments were used as a basis to compare the performance of the new high fidelity molybdenum covariance data set with the existing low fidelity covariance data set using the nuclear modeling code SCALE. The use of the high fidelity covariance data was found to result in reduced overall bias, reduced bias due to the molybdenum isotopes, and improved goodness-of-fit of computational results to experimental results.
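The two-part description of covariance data above (per-energy uncertainty plus cross-energy correlation) corresponds to a standard matrix decomposition: the diagonal of the covariance matrix gives the group-wise standard deviations, and rescaling by them yields the correlation matrix. The numbers below are hypothetical, not actual molybdenum data.

```python
import numpy as np

# Hypothetical 4-group covariance matrix for a cross section (barns^2)
C = 1e-4 * np.array([[4.0, 1.2, 0.4, 0.1],
                     [1.2, 2.5, 0.9, 0.3],
                     [0.4, 0.9, 1.6, 0.6],
                     [0.1, 0.3, 0.6, 1.0]])
xs = np.array([2.0, 1.5, 1.1, 0.8])   # hypothetical group cross sections (barns)

sigma = np.sqrt(np.diag(C))           # absolute 1-sigma uncertainty per group
rel = sigma / xs                      # relative uncertainty per group
corr = C / np.outer(sigma, sigma)     # correlation between energy groups

print("relative uncertainties:", np.round(rel, 4))
print("correlation matrix:\n", np.round(corr, 3))
```

The decomposition is lossless: multiplying the correlation matrix back by the outer product of the standard deviations recovers the covariance matrix exactly, which is why evaluated libraries can store either form.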
76. Phylogenetic analysis of multiple genes based on spectral methods
Abeysundera, Melanie, 28 October 2011
Multiple gene phylogenetic analysis is of interest since single gene analysis often results in poorly resolved trees. Here the use of spectral techniques for analyzing multi-gene data sets is explored. The protein sequences are treated as categorical time series, and a measure of similarity between a pair of sequences, the spectral covariance, is used to build trees. Unlike other methods, the spectral covariance method focuses on the relationship between the sites of genetic sequences.
We consider two methods for combining the dissimilarity or distance matrices of multiple genes. The first method involves properly scaling the dissimilarity measures derived from different genes between a pair of species and using the mean of these scaled dissimilarity measures as a summary statistic of the taxonomic distance across multiple genes. We introduce two criteria for computing the scale coefficients used to combine information across genes: the minimum variance (MinVar) criterion and the minimum coefficient of variation squared (MinCV) criterion. The scale coefficients obtained with the MinVar and MinCV criteria can then be used to derive a combined-gene tree from the weighted average of the distance or dissimilarity matrices of multiple genes.
The second method is based on the singular value decomposition of a matrix made up of the p-vectors of pairwise distances for k genes. By decomposing such a matrix, we extract the common signal present in multiple genes to obtain a single tree representation of the relationship between a given set of taxa. Influence functions for the components of the singular value decomposition are derived to determine which genes are most influential in determining the combined-gene tree.
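The second method can be sketched as follows: stack each gene's condensed vector of pairwise distances as a column of a p-by-k matrix, take its SVD, and read the shared distance pattern off the leading singular vector. The taxa, clade structure, and noise model here are hypothetical, and the influence-function analysis is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
m, k = 6, 3                              # taxa and genes (hypothetical sizes)

# Hypothetical common signal: two clades {0,1,2} and {3,4,5}
D_common = np.full((m, m), 1.0)
D_common[:3, :3] = 0.2
D_common[3:, 3:] = 0.2
np.fill_diagonal(D_common, 0.0)

# Column g holds the p-vector of pairwise distances for gene g
p = m * (m - 1) // 2
X = np.empty((p, k))
for g in range(k):
    noise = rng.uniform(0.0, 0.05, size=(m, m))
    noise = (noise + noise.T) / 2        # keep the perturbed matrix symmetric
    np.fill_diagonal(noise, 0.0)
    X[:, g] = squareform(D_common + noise)

# The leading singular vector pair captures the signal shared across genes
U, s, Vt = np.linalg.svd(X, full_matrices=False)
d = np.abs(U[:, 0] * s[0] * Vt[0].mean())  # rank-1 combined distances
                                           # (abs: singular-vector sign is arbitrary)

Z = linkage(d, method="average")           # tree from the combined distances
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster labels:", labels)
```

Because the per-gene noise is small relative to the clade separation, clustering the rank-1 combined distances recovers the two clades that all three genes share.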
77. Carbon dynamics of perennial grassland conversion for annual cropping
Fraser, Trevor James, 20 August 2012
Sequestering atmospheric carbon in soil is an attractive option for mitigating rising atmospheric carbon dioxide concentrations through agriculture. Perennial crops are more likely to gain carbon, while annual crops are more likely to lose carbon. A pair of eddy covariance towers was set up near Winnipeg, Manitoba, Canada to measure carbon flux over adjacent fertilized long-term perennial grass hay fields with high soil organic carbon. In 2009 the forage stand of one field (Treatment) was sprayed with herbicide, cut, and baled, after which cattle manure was applied and the land was tilled. The forage stand in the other field (Control) continued to be cut and baled. Differences between the net ecosystem productivity of the fields were mainly due to gross primary productivity; ecosystem respiration was similar for both fields. When biomass removals and manure applications are included in the carbon balance, the Treatment conversion lost 149 g C m^(-2) whereas the Control sequestered 96 g C m^(-2), for a net loss, relative to the Control, of 245 g C m^(-2) over the June to December period (210 days). This suggests that perennial grass converted for annual cropping can lose more carbon than perennial grasses can sequester in a season.
78. Analysis of MIMO systems for single-carrier transmitters in frequency-selective channel context
Dupuy, Florian, 16 December 2011
For fifteen years, many studies have used MIMO systems to increase the Shannon capacity of traditional SISO systems. To this end, a crucial problem is the design of transmitters that are optimal with respect to Shannon capacity, through the use of space-time codes or of prior knowledge of the transmission channel. These problems have been addressed by many studies for frequency-flat MIMO channels but are far less mature for frequency-selective MIMO channels. The first part of this thesis focuses on optimizing, with respect to the ergodic capacity, the covariance of the transmitted vector, via random matrix theory. Using multiple transmit antennas also gives rise to diversity, which improves receiver performance. In the second part, we therefore focus on diversity in the specific case of an MMSE receiver. Unlike the ML receiver, this receiver is suboptimal but very simple to implement. We first study the diversity at high SNR for frequency-selective channels. We then focus on a source of diversity, space-time block codes (STBC), specifically the Alamouti code. Finally, we propose and analyze, in the multiuser context, a new MMSE receiver that is robust to interference thanks to its ability to optimally use the degrees of freedom available in the channel.
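The capacity objective being optimized over the input covariance can be illustrated in the simplest setting, i.i.d. Rayleigh flat fading (not the frequency-selective, large-random-matrix regime the thesis actually treats): estimate the ergodic capacity E[log2 det(I + H Q H^H)] by Monte Carlo for two choices of the transmit covariance Q. Antenna counts, SNR, and both covariance choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
t, r = 4, 4               # transmit / receive antennas (hypothetical)
snr = 10.0                # linear SNR (10 dB)
N = 2000                  # Monte Carlo channel draws

def ergodic_capacity(Q, N=N):
    """Monte Carlo estimate of E[log2 det(I_r + H Q H^H)], i.i.d. Rayleigh H."""
    total = 0.0
    for _ in range(N):
        H = (rng.standard_normal((r, t))
             + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(r) + H @ Q @ H.conj().T)
        total += logdet / np.log(2)
    return total / N

# Isotropic input (no channel knowledge) vs. all power on one antenna
Q_iso = (snr / t) * np.eye(t)
Q_one = np.diag([snr, 0.0, 0.0, 0.0])
C_iso = ergodic_capacity(Q_iso)
C_one = ergodic_capacity(Q_one)
print(f"ergodic capacity: isotropic {C_iso:.2f} b/s/Hz, single-antenna {C_one:.2f}")
```

Spreading power isotropically across the antennas clearly beats concentrating it on one antenna here, which is the kind of gap that motivates optimizing Q against the ergodic capacity.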
80. Application of discrete choice models to samples drawn by complex endogenous sampling (複雑な内生抽出法に基づく標本への離散選択モデルの適用)
KITAMURA, Ryuichi; SAKAI, Hiroshi; YAMAMOTO, Toshiyuki
No description available.