61
Multisample analysis of structural equation models with stochastic constraints. January 1992.
Wai-tung Ho. Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. Includes bibliographical references (leaves 81-83).
CHAPTER 1 --- OVERVIEW OF CONSTRAINED ESTIMATION OF STRUCTURAL EQUATION MODEL --- p.1
CHAPTER 2 --- MULTISAMPLE ANALYSIS OF STRUCTURAL EQUATION MODELS WITH STOCHASTIC CONSTRAINTS --- p.4
2.1 --- The Basic Model --- p.4
2.2 --- Bayesian Approach to Nuisance Parameters --- p.5
2.3 --- Estimation and Algorithm --- p.8
2.4 --- Asymptotic Properties of the Bayesian Estimate --- p.11
CHAPTER 3 --- MULTISAMPLE ANALYSIS OF STRUCTURAL EQUATION MODELS WITH EXACT AND STOCHASTIC CONSTRAINTS --- p.17
3.1 --- The Basic Model --- p.17
3.2 --- Bayesian Approach to Nuisance Parameters and Estimation Procedures --- p.18
3.3 --- Asymptotic Properties of the Bayesian Estimate --- p.20
CHAPTER 4 --- SIMULATION STUDIES AND NUMERICAL EXAMPLE --- p.24
4.1 --- Simulation Study for Identified Models with Stochastic Constraints --- p.24
4.2 --- Simulation Study for Non-identified Models with Stochastic Constraints --- p.29
4.3 --- Numerical Example with Exact and Stochastic Constraints --- p.32
CHAPTER 5 --- DISCUSSION AND CONCLUSION --- p.34
APPENDICES --- p.36
TABLES --- p.66
REFERENCES --- p.81

62
A Three-Paper Dissertation on Longitudinal Data Analysis in Education and Psychology. Ahmadi, Hedyeh. January 2019.
In longitudinal settings, modeling the covariance structure of repeated-measures data is essential for proper analysis. The first paper in this three-paper dissertation presents a survey of four journals in the fields of Education and Psychology to identify the most commonly used methods for analyzing longitudinal data. It provides literature reviews and statistical details for each identified method. This paper also offers a summary table giving the benefits and drawbacks of all the surveyed methods, in order to help researchers choose the optimal model according to the structure of their data. Finally, this paper highlights that even when scholars do use more advanced methods for analyzing repeated-measures data, they very rarely report (or explore in their discussions) the covariance structure implemented in their choice of modeling. This suggests that, at least in some cases, researchers may not be taking advantage of the optimal covariance patterns. This paper identifies a gap in the standard statistical practices of the fields of Education and Psychology, namely that researchers are not modeling the covariance structure as an extension of fixed/random effects modeling. The second paper introduces the General Serial Covariance (GSC) approach, an extension of Linear Mixed Modeling (LMM) or Hierarchical Linear Model (HLM) techniques that models the covariance structure using spatial correlation functions such as the Gaussian, the Exponential, and other patterns. These spatial correlations model the covariance structure in a continuous manner and can therefore deal with missingness and imbalanced data in a straightforward way. A simulation study in the second paper reveals that when data are consistent with the GSC model, basic HLMs are not optimal for the estimation and testing of the fixed effects. The third paper is a tutorial that uses a real-world data set from a drug abuse prevention intervention to demonstrate the use of the GSC and basic HLM models in the R programming language. This paper utilizes variograms (a visualization tool borrowed from geostatistics), among other exploratory tools, to determine the covariance structure of the repeated-measures data. This paper aims to introduce the GSC model and variogram plots to Education and Psychology, where, according to the survey in the first paper, they are not in use. This paper can also help scholars seeking guidance on interpreting fixed-effect parameters.
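To make the serial-correlation idea concrete, the minimal sketch below (in Python rather than the R used in the tutorial paper, with hypothetical measurement times and a hypothetical range parameter `phi`) builds the correlation matrix that an exponential or Gaussian correlation function implies for irregularly spaced repeated measures:

```python
import numpy as np

def serial_corr(times, phi, kind="exponential"):
    """Serial correlation matrix for (possibly irregular) measurement times.

    Exponential: rho(d) = exp(-d/phi); Gaussian: rho(d) = exp(-(d/phi)**2),
    where d is the time lag between two measurement occasions.
    """
    d = np.abs(np.subtract.outer(times, times))  # pairwise time lags
    if kind == "exponential":
        return np.exp(-d / phi)
    if kind == "gaussian":
        return np.exp(-((d / phi) ** 2))
    raise ValueError(f"unknown correlation kind: {kind}")

# Hypothetical, unevenly spaced visit times for one subject (in years):
times = np.array([0.0, 0.5, 1.0, 2.5])
print(np.round(serial_corr(times, phi=1.0), 3))
```

Because the correlation is a continuous function of the time lag, subjects measured at different occasions or with missing visits simply contribute differently indexed blocks of such a matrix, which is how this family of models accommodates missingness and imbalance.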

63
Estimation of two-level structural equation models with constraints. January 1997.
by Sin Yu Tsang. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 40-42).
Chapter 1. --- Introduction --- p.1
Chapter 2. --- Two-level structural equation model --- p.5
Chapter 3. --- Estimation of the model under general constraints --- p.11
Chapter 4. --- Estimation of the model under linear constraints --- p.22
Chapter 5. --- Simulation results --- p.27
5.1 --- Artificial examples for "modified" EM algorithm --- p.27
5.2 --- Artificial examples for "restricted" EM algorithm --- p.34
Chapter 6. --- Discussion and conclusion --- p.38
References --- p.40
Tables --- p.43

64
Testemunha de emaranhamento generalizada / Generalized entanglement witness. Lima, Rafael Bruno Barbosa. 19 February 2015.
Since the advent of quantum mechanics in the early twentieth century, it has been the subject of numerous studies, and its features require a description entirely different from that of classical theory. As these areas deepened, new concepts emerged, and the understanding of information theory and quantum computing was radically changed by a basic property of quantum mechanics: entanglement. The popularization of the idea of the quantum computer has thus brought a wealth of research related to quantum information and its real-world applications. In this thesis we present a study on the construction of a general entanglement criterion that can be applied to any system whose Hamiltonian is described by a spin chain, whether bipartite or multipartite. The criterion is based on the covariance of a general observable, which may or may not include interaction terms between the spins. The criterion can easily be reduced to one based on the variance, which is much better suited to application in physical systems. In this way, magnetic susceptibility and specific heat can be used as entanglement witnesses for the criterion, owing to the ease with which they are measured experimentally.
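As an illustration of how a variance-based witness is evaluated in practice, the sketch below applies one well-known criterion of this family, the collective-variance bound of Tóth (for N spin-1/2 particles, separable states satisfy (ΔJx)² + (ΔJy)² + (ΔJz)² ≥ N/2), to a two-qubit singlet and a product state. This particular bound is chosen for illustration only and is not necessarily the criterion derived in the thesis.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def collective(op):
    """Collective spin component J = (op x I + I x op) / 2 for two spin-1/2."""
    return 0.5 * (np.kron(op, I2) + np.kron(I2, op))

def variance(op, psi):
    """Var(op) = <op^2> - <op>^2 in the pure state psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ (op @ psi)).real - mean**2

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=complex)                # |00>

for name, psi in [("singlet", singlet), ("product", product)]:
    total = sum(variance(collective(s), psi) for s in (sx, sy, sz))
    # Separable two-qubit states satisfy total >= N/2 = 1, so total < 1 witnesses entanglement.
    print(f"{name}: sum of collective variances = {total:.3f}")
```

The singlet gives 0.000, violating the separable bound of 1 and thus witnessing entanglement, while the product state sits exactly at the bound.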

65
Annual carbon balance of an intensively grazed pasture: magnitude and controls. Mudge, Paul Lawrence. January 2009.
Soil carbon (C) is important because even small changes in soil C can affect atmospheric concentrations of CO₂, which in turn can influence global climate. Adequate soil carbon is also required to maintain soil quality, which is important if agricultural production is to be sustained. The soil carbon balance of New Zealand's pastoral soils is poorly understood, with recent research showing that soils under dairy pasture have lost large amounts of C during the past few decades. The main objective of this research was to determine an annual farm-scale C budget for an intensively grazed dairy farm; a second objective was to determine the amount of CO₂-C lost following cultivation for pasture renewal and soil pugging by dairy cattle, and a third was to investigate the environmental controls of CO₂ exchange in a dairy farm pasture system. Net ecosystem exchange (NEE) of CO₂ was measured using an eddy covariance (EC) system from 15 December 2007 to 14 December 2008. Closed-chamber techniques were used to measure CO₂ emissions from three cultivated paddocks and three adjacent pasture paddocks between 26 January 2008 and 5 March 2008. CO₂ emissions were also measured using chambers on pugged and control plots between 25 June and 5 August. Coincidentally, this research was carried out in a year with a severe summer/autumn drought and a wetter than usual winter. Annual NEE measured with the eddy covariance system was -1,843 kg C ha⁻¹ (a C gain by the land surface). Accounting for C in supplement import, milk export, pasture export and losses in methane, the dairy pasture system was a net sink of -880±500 kg C ha⁻¹. This C sequestration occurred despite severe drought during the study, in contrast to other studies of grasslands during drought. Cultivation under dry conditions did not increase cumulative CO₂-C emissions compared to adjacent pasture paddocks. However, when C inputs to pasture paddocks via photosynthesis were included in the calculations, net C loss from the cultivated paddocks (during the 39-day study) was estimated to be 622 kg C ha⁻¹ more than from the pasture paddocks. CO₂ emissions were lower from pugged plots than from control plots, probably because of decreased microbial and root respiration under wetter soil conditions and lowered root respiration resulting from lower pasture production. Volumetric soil moisture content (soil moisture) had a dominant effect on CO₂ exchange at a range of temporal scales. Respiration and photosynthesis were both reduced when soil moisture was below 43% (approximately the lower limit of readily available water), and photosynthesis virtually ceased when soil moisture declined below 24% (approximately wilting point). Soil moisture also influenced the relationship between temperature and respiration, and between photosynthetic photon flux density (PPFD) and NEE. These results suggest that the management-related soil disturbances of occasional cultivation for pasture renewal and soil pugging are unlikely to cause large losses of soil C. Further, even a severe drought did not cause CO₂-C losses from the land surface to the atmosphere on an annual scale, in contrast to previous studies.
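The farm-scale budget is simple bookkeeping once a sign convention is fixed (negative = carbon gained by the farm system). The sketch below makes the arithmetic explicit; the annual NEE of -1,843 kg C ha⁻¹ and the net result of about -880 kg C ha⁻¹ come from the thesis, but the individual import/export values are invented placeholders for illustration only.

```python
# Sign convention: negative = net C gain by the farm system (kg C ha^-1 yr^-1).
nee = -1843.0              # annual NEE from the eddy covariance system (thesis value)

# The four component fluxes below are HYPOTHETICAL placeholders chosen only to
# make the arithmetic concrete; the abstract does not report their values.
supplement_import = 300.0  # C imported onto the farm in supplementary feed
milk_export = 900.0        # C leaving the farm in milk
pasture_export = 100.0     # C leaving the farm in harvested pasture
methane_loss = 263.0       # C lost to the atmosphere as CH4 from the herd

net = nee + milk_export + pasture_export + methane_loss - supplement_import
print(f"net farm C balance: {net:.0f} kg C ha^-1 yr^-1")  # -880: a net C sink
```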

66
Robust Real-Time Estimation of Region Displacements in Video Sequences. Skoglund, Johan. January 2007.
The possibility of using real-time computer vision in video sequences opens many opportunities for a system to interact with the environment. Possible applications include augmented reality, as in the MATRIS project, where the purpose is to add new objects into the video sequence, or surveillance, where the purpose is to find abnormal events.

The increase in computer speed in recent years has simplified this process, and it is now possible to use at least some of the more advanced computer vision algorithms available. Computational speed is still a limiting factor, however; an efficient real-time system requires efficient code and methods. This thesis deals with both problems: one part is about efficient implementations using single instruction, multiple data (SIMD) instructions, and one part is about robust tracking.

An efficient real-time system requires efficient implementations of the computer vision methods used, which in turn requires knowledge of the CPU and the possibilities it offers. This thesis explains one such technique, SIMD. SIMD is useful when the same operation is applied to multiple data, which is usually the case in computer vision, where the same operation is executed on each pixel.

Following the position of a feature or object through a video sequence is called tracking, and it can be used in a number of applications. The application in this thesis is pose estimation. One way to do tracking is to cut out a small region around the feature, creating a patch, and to find the position of this patch in the other frames. To find the position, a measure of the difference between the patch and the image at a given position is used. This thesis thoroughly investigates the sum of absolute differences (SAD) error measure. The investigation covers different ways to improve robustness and to decrease the average error. A method to estimate the average error, the covariance of the position error, is proposed; an estimate of the average error is needed when different measurements are combined.

Finally, a system for camera pose estimation is presented. The computer vision part of this system is based on the results in this thesis, and the presentation includes a discussion of the system's results. / Report code: LIU-TEK-LIC-2007:5. The report code in the thesis is incorrect.
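As a minimal sketch of the SAD matching just described (plain Python/NumPy rather than the thesis's SIMD implementation; function and variable names are illustrative), the code below slides a patch over a search window and returns the displacement with the smallest sum of absolute differences:

```python
import numpy as np

def sad_match(patch, image, center, radius):
    """Exhaustive SAD search: find the displacement (dy, dx) within
    +/- radius of `center` minimizing sum(|window - patch|)."""
    ph, pw = patch.shape
    cy, cx = center
    best_disp, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + ph > image.shape[0] or x + pw > image.shape[1]:
                continue  # window would fall outside the image
            window = image[y:y + ph, x:x + pw].astype(np.int32)
            sad = np.abs(window - patch.astype(np.int32)).sum()
            if sad < best_sad:
                best_disp, best_sad = (dy, dx), sad
    return best_disp, best_sad

# Toy usage: track an 8x8 patch from frame 0 into a shifted frame 1.
rng = np.random.default_rng(1)
frame0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))       # known shift (2, -1)
patch = frame0[20:28, 20:28]
print(sad_match(patch, frame1, center=(20, 20), radius=4))  # -> ((2, -1), 0)
```

The inner per-pixel absolute-difference accumulation is exactly the kind of same-operation-on-every-pixel workload that SIMD instructions accelerate.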

67
None. Kuo, Jui-Lin. 01 August 2007.
Skincare products were once deemed socially undesirable for men: men were expected to be tough, independent, and hard-boiled, and skincare products were considered too feminine. However, with the metrosexual fashion trend, which encourages men to groom themselves, men nowadays are starting to care about the outfits, appearance, and facial skin condition to which women have long paid attention.
The market for men's skincare products in Taiwan is now growing rapidly and is no longer confined to a small number of male users; it is widely accepted that men need skincare products. Given this movement from a few early adopters to widespread adoption among men as a whole, the purpose of this study is to investigate the correlations between demographics, media usage, personal characteristics, impulsive consumption behavior, and adoption stage.
Using a survey, this study explores whether demographics, media usage, and personal characteristics significantly affect men's adoption of skincare products. Diffusion of Innovation theory, however, rests on the premise of social learning theory, which presupposes that adopting an innovation is a rational decision-making process for the consumer. Such a presupposition disregards the possibility of impulsive consumption. In addition to demographics, media usage, and personal characteristics, this study therefore takes impulsive consumption into consideration as an independent variable affecting adoption.
The results show that, compared with analyses that do not take impulsive consumption into consideration, demographics, media usage, and personal characteristics show significant effects on men's adoption of skincare products after impulsive consumption is controlled for as an independent variable. The results indicate that impulsive consumption is an indispensable variable in research on the diffusion of innovations.

68
Bayesian Gaussian Graphical models using sparse selection priors and their mixtures. Talluri, Rajesh. August 2011.
We propose Bayesian methods for estimating the precision matrix in Gaussian graphical models. The methods lead to sparse and adaptively shrunk estimators of the precision matrix, and thus conduct model selection and estimation simultaneously. Our methods are based on selection and shrinkage priors leading to parsimonious parameterization of the precision (inverse covariance) matrix, which is essential in several applications involving learning relationships among the variables. In Chapter I, we employ the Laplace prior on the off-diagonal elements of the precision matrix, which is similar to the lasso model in a regression context. This type of prior encourages sparsity while providing shrinkage estimates. Second, we introduce a novel type of selection prior that develops a sparse structure of the precision matrix by making most of the elements exactly zero, while ensuring positive-definiteness.
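For intuition, the Laplace-prior formulation is the Bayesian counterpart of the graphical lasso: the MAP estimate of the precision matrix under a Laplace prior on the off-diagonal elements solves the l1-penalized likelihood problem. A minimal frequentist sketch using scikit-learn's GraphicalLasso is shown below (an illustration chosen by the editor, with hypothetical data; the chapter's actual method is a fully Bayesian sampler).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical data: five variables with a chain of dependencies,
# so the true precision matrix is sparse (tridiagonal).
n = 500
X = rng.standard_normal((n, 5))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]
X[:, 3] += 0.8 * X[:, 2]

# alpha plays the role of the Laplace prior's scale: larger alpha, sparser estimate.
model = GraphicalLasso(alpha=0.1).fit(X)
print(np.round(model.precision_, 2))  # shrunk, near-sparse inverse covariance
```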
In Chapter II we extend the above methods to perform classification. Reverse-phase protein array (RPPA) analysis is a powerful, relatively new platform that allows for high-throughput, quantitative analysis of protein networks. One of the challenges that currently limits the potential of this technology is the lack of methods that allows for accurate data modeling and identification of related networks and samples. Such models may improve the accuracy of biological sample classification based on patterns of protein network activation, and provide insight into the distinct biological relationships underlying different cancers. We propose a Bayesian sparse graphical modeling approach motivated by RPPA data using selection priors on the conditional relationships in the presence of class information. We apply our methodology to an RPPA data set generated from panels of human breast cancer and ovarian cancer cell lines. We demonstrate that the model is able to distinguish the different cancer cell types more accurately than several existing models and to identify differential regulation of components of a critical signaling network (the PI3K-AKT pathway) between these cancers. This approach represents a powerful new tool that can be used to improve our understanding of protein networks in cancer.
In Chapter III we extend these methods to mixtures of Gaussian graphical models for clustered data, with each mixture component assumed Gaussian with an adaptive covariance structure. We model the data using Dirichlet processes and finite mixture models, and discuss appropriate posterior simulation schemes for inference in the proposed models, including the evaluation of normalizing constants that are functions of the parameters of interest and arise from the restrictions on the correlation matrix. We evaluate the operating characteristics of our method via simulations, and discuss examples based on several real data sets.

69
Bayesian Variable Selection for Logistic Models Using Auxiliary Mixture Sampling. Tüchler, Regina. January 2006.
The paper presents a Markov chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities, with no additional tuning needed. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix. For logistic mixed effects models, prior determination of the explanatory variables and random effects is no longer a prerequisite, since the definite structure is chosen in a data-driven manner in the course of the modeling procedure. As an illustration, two real-data examples from finance and tourism studies are given. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
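A minimal sketch of the stochastic-search idea is given below, for a Gaussian linear model with known noise variance, i.e., a deliberate simplification of the paper's logistic mixed-effects setting, and following George and McCulloch's continuous spike-and-slab formulation rather than the paper's auxiliary mixture sampler. All tuning values are illustrative assumptions.

```python
import numpy as np

def ssvs_gibbs(X, y, n_iter=2000, tau=0.1, c=10.0, sigma2=1.0, p_incl=0.5, seed=0):
    """Stochastic search variable selection (George & McCulloch, 1993):
    beta_j ~ (1 - gamma_j) N(0, tau^2) + gamma_j N(0, (c*tau)^2)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    gamma = np.ones(p, dtype=int)
    incl_count = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # beta | gamma, y: Gaussian, with spike or slab prior variance per coordinate
        prior_prec = np.where(gamma == 1, 1.0 / (c * tau) ** 2, 1.0 / tau**2)
        post_cov = np.linalg.inv(XtX / sigma2 + np.diag(prior_prec))
        beta = rng.multivariate_normal(post_cov @ Xty / sigma2, post_cov)
        # gamma_j | beta_j: Bernoulli with odds = slab density / spike density
        slab = np.exp(-0.5 * (beta / (c * tau)) ** 2) / (c * tau)
        spike = np.exp(-0.5 * (beta / tau) ** 2) / tau
        prob = p_incl * slab / (p_incl * slab + (1 - p_incl) * spike)
        gamma = rng.binomial(1, prob)
        incl_count += gamma
    return incl_count / n_iter  # posterior inclusion probabilities (no burn-in)

# Toy usage: 3 of 6 predictors truly active.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
y = X @ np.array([1.5, 0.0, -1.0, 0.0, 0.8, 0.0]) + rng.standard_normal(100)
print(np.round(ssvs_gibbs(X, y), 2))  # high inclusion probs for predictors 0, 2, 4
```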

70
Streamline Assisted Ensemble Kalman Filter - Formulation and Field Application. Devegowda, Deepak. August 2009.
The goal of any data assimilation or history matching algorithm is to enable better reservoir management decisions through the construction of reliable reservoir performance models and the assessment of the underlying uncertainties. A considerable body of research work and enhanced computational capabilities have led to an increased application of robust and efficient history matching algorithms to condition reservoir models to dynamic data. Moreover, there has been a shift towards generating multiple plausible reservoir models in recognition of the significance of the associated uncertainties. This provides for uncertainty analysis in reservoir performance forecasts, enabling better management decisions for reservoir development. Additionally, the increased deployment of permanent well sensors and downhole monitors has led to an increasing interest in maintaining 'live' models that are current and consistent with historical observations.
One such data assimilation approach that has gained popularity in the recent past is the Ensemble Kalman Filter (EnKF) (Evensen 2003). It is a Monte Carlo approach to generate a suite of plausible subsurface models conditioned to previously obtained measurements. One advantage of the EnKF is its ability to integrate different types of data at different scales thereby allowing for a framework where all available dynamic data is simultaneously or sequentially utilized to improve estimates of the reservoir model parameters. Of particular interest is the use of partitioning tracer data to infer the location and distribution of target un-swept oil. Due to the difficulty in differentiating the relative effects of spatial variations in fractional flow and fluid saturations and partitioning coefficients on the tracer response, interpretation of partitioning tracer responses is particularly challenging in the presence of mobile oil saturations.
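A minimal sketch of the stochastic (perturbed-observation) EnKF analysis step is given below, with all sizes and names hypothetical; the gain is built entirely from ensemble sample covariances, which is what keeps the method inexpensive for large models. The thesis's streamline-assisted variant further conditions this generic update with streamline-derived information, which the sketch does not attempt to reproduce.

```python
import numpy as np

def enkf_analysis(X, y_obs, H, R, rng):
    """One EnKF analysis step with perturbed observations (Burgers et al., 1998).

    X: (n_state, n_ens) forecast ensemble of model parameters/states
    y_obs: (n_obs,) observed data; H: (n_obs, n_state) linear observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # predicted-data anomalies
    PHt = A @ HA.T / (n_ens - 1)                 # sample cov(state, data)
    S = HA @ HA.T / (n_ens - 1) + R              # innovation covariance
    K = np.linalg.solve(S, PHt.T).T              # Kalman gain K = PHt S^-1
    # Each member assimilates a perturbed copy of the observations so that
    # the analysis ensemble retains the correct posterior spread.
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens).T
    return X + K @ (Y - HX)

# Toy usage: 50-member ensemble, 10 unknowns, 3 observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))
H = np.zeros((3, 10)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0
R = 0.1 * np.eye(3)
Xa = enkf_analysis(X, y_obs=np.array([1.0, -0.5, 0.2]), H=H, R=R, rng=rng)
print(Xa.mean(axis=1).round(2))  # analysis mean pulled toward the observations
```

With small ensembles, the sample covariances in this update are noisy, which is precisely the problem the streamline-derived information in this research is meant to mitigate.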
The purpose of this research is to improve the performance of the EnKF in parameter estimation for reservoir characterization studies without the use of a large ensemble size so as to keep the algorithm efficient and computationally inexpensive for large, field-scale models. To achieve this, we propose the use of streamline-derived information to mitigate problems associated with the use of the EnKF with small sample sizes and non-linear dynamics in non-Gaussian settings. Following this, we present the application of the EnKF for interpretation of partitioning tracer tests specifically to obtain improved estimates of the spatial distribution of target oil.