11
Some Problems in the Determination of the Latent Heat of Liquid Helium / Latent Heat of Liquid Helium / Waimsley, David 05 1900 (has links)
A cryostat has been constructed for the determination of the latent heat of evaporation of liquid helium-4 from 1.8°K to the critical point. It has not yet proved possible to stabilize the behaviour of the cryostat sufficiently, and thermal oscillations of the type reported by Taconis are strongly suspected as the source of the difficulty. Modifications to the design of the equipment intended to improve its behaviour met with limited success, so two conclusions were reached. Firstly, a separate investigation of the problem of Taconis resonances is necessary before reliable results can be obtained. Secondly, the existing cryostat could in the meantime readily be converted to other cryogenic uses. / Thesis / Master of Science (MS)
12
A latent variable approach to impute missing values: with application in air pollution data. / January 1999 (has links)
Wing-Yeong Lee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 73-75). / Abstracts in English and Chinese.
Contents:
Chapter 1 --- Introduction --- p.1
  1.1 --- Introduction --- p.1
  1.2 --- The observed data --- p.3
  1.3 --- Outline of the thesis --- p.8
Chapter 2 --- Modeling using Latent Variable --- p.9
Chapter 3 --- Imputation Procedure --- p.16
  3.1 --- Introduction --- p.16
  3.2 --- Introduction to Metropolis-Hastings algorithm --- p.18
  3.3 --- Introduction to Gibbs sampler --- p.19
  3.4 --- Imputation step --- p.21
  3.5 --- Initialization of the missing values by regression --- p.23
  3.6 --- Initialization of the parameters and creating the latent variable and noises --- p.27
  3.7 --- Simulation of Y's --- p.30
  3.8 --- Simulation of the parameters --- p.34
  3.9 --- Simulation of T by use of the Metropolis-Hastings algorithm --- p.41
  3.10 --- Distribution of Vij's given all other values --- p.44
  3.11 --- Simulation procedure of Vij's --- p.46
Chapter 4 --- Data Analysis of the Pollutant Data --- p.48
  4.1 --- Convergence of the process --- p.48
  4.2 --- Data analysis --- p.53
Chapter 5 --- Conclusion --- p.69
REFERENCES --- p.73
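The contents above list Metropolis-Hastings and Gibbs steps at the core of the imputation procedure. Purely as a generic illustration, and not the thesis's actual model or sampler, a minimal random-walk Metropolis-Hastings sampler in Python might look like the following; the target density, step size, and sample count are arbitrary placeholders.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=5000, step=0.5, seed=0):
    """Generic random-walk Metropolis-Hastings sampler for a one-dimensional target."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + rng.normal(scale=step)
        # accept with probability min(1, target(proposal) / target(current))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# usage: draw from a standard normal target and check the first two moments
draws = metropolis_hastings(lambda z: -0.5 * z ** 2, x0=0.0)
print(draws.mean(), draws.std())
```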
13
Statistical analysis for transformation latent variable models with incomplete data. / CUHK electronic theses & dissertations collection / January 2013 (has links)
潜变量模型作为处理多元数据的一种有效的方法，在行为学、教育学、社会心理学以及医学等各个领域都受到了广泛关注。在分析潜变量模型时，大多数现有的统计方法和软件都是基于响应变量为正态分布的假设。尽管一些最近发展的方法可以处理部分的非正态数据，但在分析高度非正态的数据时依然存在问题。此外，在实际研究中还经常会遇到不完全数据，如缺失数据和删失数据。简单地忽略或错误地处理不完全数据可能会严重扭曲统计结果。在本文中，我们发展了贝叶斯惩罚样条方法，同时采用马尔科夫链蒙特卡洛方法，用以分析存有高度非正态和不完全数据的变换潜变量模型。我们在变换潜变量模型中讨论了不同类型的不完全数据，如完全随机缺失数据、随机缺失数据、不可忽略的缺失数据以及删失数据。我们还利用离差信息准则来选择正确的模型和数据缺失机制。我们通过许多模拟研究论证了我们提出的方法。此方法被应用于关于工作满意度、家庭生活、工作态度的研究，以及香港地区2型糖尿病患者心血管疾病的研究。

Latent variable models (LVMs), as useful multivariate techniques, have attracted significant attention from various fields, including the behavioral, educational, social-psychological, and medical sciences. In the analysis of LVMs, most existing statistical methods and software have been developed under the normal assumption of response variables. While some recent developments can partially address the non-normality of data, they are still problematic in dealing with highly non-normal data. Moreover, the presence of incomplete data, such as missing data and censored data, is a practical issue in substantive research. Simply ignoring or wrongly managing incomplete data might seriously distort statistical inference results. In this thesis, we develop a Bayesian P-spline approach, coupled with Markov chain Monte Carlo (MCMC) methods, to analyze transformation LVMs with highly non-normal and incomplete data. Different types of incomplete data, such as missing completely at random data, missing at random data, nonignorable missing data, as well as censored data, are discussed in the context of transformation LVMs. The deviance information criterion is proposed to conduct model comparison and select an appropriate missing mechanism. The empirical performance of the proposed methodologies is examined via many simulation studies. Applications to a study concerning people's job satisfaction, home life, and work attitude, as well as a study on cardiovascular diseases for type 2 diabetic patients in Hong Kong, are presented.

Detailed summary in vernacular field only. / Liu, Pengfei. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 115-127). / Electronic reproduction. Hong Kong: Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
Contents:
Abstract --- p.ii
Acknowledgement --- p.v
Chapter 1 --- Introduction --- p.1
  1.1 --- Latent Variable Models --- p.1
  1.2 --- Missing Data --- p.4
  1.3 --- Censoring Data --- p.5
  1.4 --- Penalized B-splines --- p.6
  1.5 --- Bayesian Methods --- p.7
  1.6 --- Outline of the Thesis --- p.8
Chapter 2 --- Transformation Structural Equation Models --- p.9
  2.1 --- Introduction --- p.9
  2.2 --- Model Description --- p.11
  2.3 --- Bayesian Estimation --- p.12
    2.3.1 --- Bayesian P-splines --- p.12
    2.3.2 --- Identifiability Constraints --- p.15
    2.3.3 --- Prior Distributions --- p.16
    2.3.4 --- Posterior Inference --- p.18
  2.4 --- Bayesian Model Selection via DIC --- p.20
  2.5 --- Simulation Studies --- p.23
    2.5.1 --- Simulation 1 --- p.23
    2.5.2 --- Simulation 2 --- p.26
    2.5.3 --- Simulation 3 --- p.27
  2.6 --- Conclusion --- p.28
Chapter 3 --- Transformation SEMs with Missing Data that are Missing At Random --- p.43
  3.1 --- Introduction --- p.43
  3.2 --- Model Description --- p.45
  3.3 --- Bayesian Estimation and Model Selection --- p.46
    3.3.1 --- Modeling Transformation Functions --- p.46
    3.3.2 --- Identifiability Constraints --- p.47
    3.3.3 --- Prior Distributions --- p.48
    3.3.4 --- Bayesian Estimation --- p.49
    3.3.5 --- Model Selection via DIC --- p.52
  3.4 --- Simulation Studies --- p.53
    3.4.1 --- Simulation 1 --- p.54
    3.4.2 --- Simulation 2 --- p.56
  3.5 --- Conclusion --- p.57
Chapter 4 --- Transformation SEMs with Nonignorable Missing Data --- p.65
  4.1 --- Introduction --- p.65
  4.2 --- Model Description --- p.67
  4.3 --- Bayesian Inference --- p.68
    4.3.1 --- Model Identification and Prior Distributions --- p.68
    4.3.2 --- Posterior Inference --- p.69
  4.4 --- Selection of Missing Mechanisms --- p.71
  4.5 --- Simulation Studies --- p.73
    4.5.1 --- Simulation 1 --- p.73
    4.5.2 --- Simulation 2 --- p.76
  4.6 --- A Real Example --- p.77
  4.7 --- Conclusion --- p.79
Chapter 5 --- Transformation Latent Variable Models with Multivariate Censored Data --- p.86
  5.1 --- Introduction --- p.86
  5.2 --- Model Description --- p.88
  5.3 --- Bayesian Inference --- p.90
    5.3.1 --- Model Identification and Bayesian P-splines --- p.90
    5.3.2 --- Prior Distributions --- p.91
    5.3.3 --- Posterior Inference --- p.93
  5.4 --- Simulation Studies --- p.96
    5.4.1 --- Simulation 1 --- p.96
    5.4.2 --- Simulation 2 --- p.99
  5.5 --- A Real Example --- p.100
  5.6 --- Conclusion --- p.103
Chapter 6 --- Conclusion and Further Development --- p.113
Bibliography --- p.115
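The abstract and contents above revolve around Bayesian P-splines for the unknown transformation functions, estimated within an MCMC scheme. As a rough sketch of the underlying ingredient only, the code below builds a B-spline basis with a second-order difference penalty and performs a frequentist penalized least-squares fit (Eilers-Marx style); the knot count, spline degree, penalty weight, and toy data are all invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_knots=20, degree=3):
    """Evaluate a B-spline basis with equally spaced knots at the points x."""
    xl, xr = x.min(), x.max()
    dx = (xr - xl) / (n_knots - 1)
    # extend the knot sequence beyond the data range so every basis function is defined
    knots = np.linspace(xl - degree * dx, xr + degree * dx, n_knots + 2 * degree)
    n_basis = n_knots + degree - 1
    return np.column_stack([
        BSpline(knots, (np.arange(n_basis) == j).astype(float), degree)(x)
        for j in range(n_basis)
    ])

def pspline_fit(x, y, n_knots=20, degree=3, lam=10.0):
    """Penalized least-squares P-spline fit with a second-order difference penalty."""
    B = bspline_basis(x, n_knots, degree)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

# toy usage: smooth a noisy nonlinear relationship
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2.0, 2.0, size=200))
y = np.sinh(x) + rng.normal(scale=0.3, size=200)
y_hat = pspline_fit(x, y)
```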
14
Three Contributions to Latent Variable Modeling / Liu, Xiang January 2019 (has links)
The dissertation comprises three papers that address theoretical and technical issues in latent variable models. The first paper extends the uniformly most powerful test approach for testing the person parameter in item response theory (IRT) to two-parameter logistic models, and proposes an efficient branch-and-bound algorithm for computing the exact p-value. The second paper proposes a reparameterization of the log-linear cognitive diagnosis model (CDM) and develops a Gibbs sampler for posterior computation. The third paper proposes an ordered latent class model with infinitely many classes, using a stochastic process prior; a nonparametric IRT application is also discussed.
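To make the exact p-value idea concrete: in a two-parameter logistic (2PL) model with known item parameters, a weighted score sums the discrimination parameters of the correctly answered items, and an exact tail probability can be obtained by summing the probabilities of all response patterns at least as extreme as the observed one. The sketch below does this by brute-force enumeration, which is feasible only for a handful of items; it is not the branch-and-bound algorithm proposed in the first paper, and the item parameters are made up.

```python
import itertools
import numpy as np

def exact_pvalue_2pl(responses, a, b, theta0):
    """Exact one-sided p-value for H0: theta = theta0 vs. theta > theta0 in a 2PL model
    with known item parameters, using the weighted score T = sum_j a_j * x_j."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    x_obs = np.asarray(responses, dtype=float)
    p = 1.0 / (1.0 + np.exp(-a * (theta0 - b)))   # P(X_j = 1 | theta0)
    t_obs = a @ x_obs
    pval = 0.0
    for pattern in itertools.product((0.0, 1.0), repeat=len(a)):
        x = np.array(pattern)
        if a @ x >= t_obs - 1e-12:                # pattern lies in the tail {T >= t_obs}
            pval += np.prod(np.where(x == 1.0, p, 1.0 - p))
    return pval

# made-up item parameters and a short response vector
a_par = [1.2, 0.8, 1.5, 1.0, 0.9]
b_par = [-0.5, 0.0, 0.3, 1.0, -1.2]
print(exact_pvalue_2pl([1, 1, 0, 0, 1], a_par, b_par, theta0=0.0))
```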
15
Latent models for cross-covariance / Wegelin, Jacob A. January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. / Vita. Includes bibliographical references (p. 139-145).
16
Search-based learning of latent tree models / Chen, Tao. January 2009 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2009. / Includes bibliographical references (p. 95-99).
17
The attentional deficit in schizophrenia: a neurobiological account / Gray, Nicola Susan January 1991 (has links)
No description available.
18
The solution structure and surface properties of TB3 of LTBP-1 / Lack, Jeremy David January 2001 (has links)
No description available.
19
Improving Image Classification Performance using Joint Feature Selection / Maboudi Afkham, Heydar January 2014 (has links)
In this thesis, we focus on the problem of image classification and investigate how its performance can be systematically improved. Improving the performance of computer vision methods has been the subject of many studies; here, we address the problem by investigating the relevance of the statistics collected from the image. We propose a framework for gradually improving the quality of an already existing image descriptor. In our studies, we employ a descriptor composed of the responses of a series of discriminative components that summarize each image. As we show, this descriptor has an ideal form in which all categories become linearly separable. While reaching this form is not possible, we argue that by replacing a small fraction of these components it is possible to obtain a descriptor that is, on average, closer to this ideal form. To do so, we first identify the components that do not contribute to the quality of the descriptor and replace them with more robust ones. As we show, this replacement has a positive effect on the quality of the descriptor.

While there are many ways of obtaining more robust components, we introduce a joint feature selection problem to obtain image features that retain class-discriminative properties while generalising across within-class variations. Our approach is based on the concept of a joint feature, in which several small features are combined in a spatial structure. The proposed framework automatically learns the structure of the joint constellations in a class-dependent manner, improving the generalisation and discrimination capabilities of the local descriptor while still retaining a low-dimensional representation.

The joint feature selection problem discussed in this thesis belongs to a class of latent variable models that assume each labeled sample is associated with a set of different features, with no prior knowledge of which feature is the most relevant. Deformable Part Models (DPMs) are good examples of such models. These models are usually considered expensive to train and very sensitive to initialization. Here, we focus on the learning of such models by introducing a topological framework and show how it is possible both to reduce the learning complexity and to produce more robust decision boundaries. We also argue that our framework can produce robust decision boundaries without exploiting dataset bias or relying on accurate annotations. To examine the hypothesis of this thesis, we evaluate different parts of our framework on several challenging datasets and demonstrate how it gradually improves image classification performance by collecting more robust statistics from the image and improving the quality of the descriptor.
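The framework above starts by identifying which descriptor components contribute least before replacing them with more robust ones. The thesis has its own criterion and learning procedure for this; as a loose stand-in, the sketch below scores each descriptor dimension with a simple Fisher-style ratio of between-class to within-class variance and flags the lowest-scoring fraction as replacement candidates. The data, dimensionality, and threshold are invented for illustration.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-dimension ratio of between-class variance to pooled within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

def weakest_components(X, y, fraction=0.1):
    """Indices of the least discriminative descriptor dimensions (candidates for replacement)."""
    scores = fisher_scores(X, y)
    k = max(1, int(fraction * X.shape[1]))
    return np.argsort(scores)[:k]

# toy usage: 500 images, 128-dimensional descriptors, 5 classes,
# with only the first 8 dimensions carrying class information
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 5, size=500)
X[:, :8] += 2.0 * y[:, None]
print(weakest_components(X, y, fraction=0.1))
```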
20
Efficient Estimation of the Expectation of a Latent Variable in the Presence of Subject-Specific Ancillaries / Mittel, Louis Buchalter January 2017 (has links)
Latent variables are often included in a model in order to capture the diversity among subjects in a population. Sometimes the distribution of these latent variables is of principal interest. In studies where sequences of observations are taken from subjects, ancillary variables, such as the number of observations provided by each subject, usually also vary between subjects. The goal here is to understand efficient estimation of the expectation of the latent variable in the presence of these subject-specific ancillaries.
Unbiased and efficient estimation of the expectation of the latent variable depends on the dependence structure of three subject-specific components: the latent variable, the sequence of observations, and the ancillary. This dissertation considers estimation under two dependence configurations. In Chapter 3, efficiency is studied under a model in which no assumptions are made about the joint distribution of the latent variable and the subject-specific ancillary. Chapter 4 treats the setting where the ancillary variable and the latent variable are independent.
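As a toy illustration only, and not the dissertation's efficient estimator, the simulation below shows why this dependence structure matters: when the number of observations a subject provides depends on that subject's latent value, pooling all observations gives a biased estimate of the expectation of the latent variable, whereas the unweighted average of per-subject means remains unbiased. The particular distributions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5000                                             # number of subjects
theta = rng.normal(size=m)                           # latent variable, E[theta] = 0
n_obs = 1 + rng.poisson(np.exp(theta))               # ancillary: observation count depends on theta
subject_means = np.array([
    rng.normal(loc=t, scale=1.0, size=n).mean() for t, n in zip(theta, n_obs)
])

pooled = np.sum(subject_means * n_obs) / np.sum(n_obs)   # same as averaging all raw observations
unweighted = subject_means.mean()                        # averages the per-subject means

print(f"pooled estimate (biased under dependence):  {pooled:.3f}")
print(f"mean of per-subject means (unbiased):       {unweighted:.3f}")
```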