About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Positioning patterns from multidimensional data and its applications in meteorology

Wong, Ka-yan, 王嘉欣 January 2008 (has links)
published_or_final_version / abstract / Computer Science / Doctoral / Doctor of Philosophy
2

Automatic model selection on local Gaussian structures with priors: comparative investigations and applications. / 基於帶先驗的局部高斯結构的自動模型選擇: 比較性分析及應用研究 / CUHK electronic theses & dissertations collection / Ji yu dai xian yan de ju bu Gaosi jie gou de zi dong mo xing xuan ze: bi jiao xing fen xi ji ying yong yan jiu

January 2012 (has links)
Model selection, an important topic in machine learning, aims to determine an appropriate model scale given a limited number of samples.
As one type of efficient solution, automatic model selection starts from a sufficiently large model scale, with an intrinsic mechanism that pushes redundant structures to become ineffective, and thus be discarded automatically, during learning. Priors are usually imposed on parameters to facilitate automatic model selection. Systematic comparisons of automatic model selection approaches with priors are still lacking, and this thesis provides such a study based on models with local Gaussian structures. / Particularly, we compare the relative strengths and weaknesses of three typical automatic model selection approaches, namely Variational Bayesian (VB), Minimum Message Length (MML), and Bayesian Ying-Yang (BYY) harmony learning, on models with local Gaussian structures. First, we consider the Gaussian Mixture Model (GMM), for which the number of Gaussian components is to be determined. Further assuming that each Gaussian component has a subspace structure, we extend the study to two models, namely Mixture of Factor Analyzers (MFA) and Local Factor Analysis (LFA), for both of which the component number and local subspace dimensionalities are to be determined. / Two types of priors are imposed on parameters, namely a conjugate form prior and a Jeffreys prior. The conjugate form prior is chosen as a Dirichlet-Normal-Wishart (DNW) prior for GMM, and as a Dirichlet-Normal-Gamma (DNG) prior for both MFA and LFA. The Jeffreys prior and the MML approach are not considered on MFA/LFA due to the difficulty of deriving the corresponding Fisher information matrix. Via extensive simulations and applications, we compare the automatic model selection algorithms (six for GMM and four for MFA/LFA) and obtain the following main findings: 1. Considering priors on all parameters makes each approach perform better than considering priors merely on the mixing weights. 2. For all three approaches on GMM, the performance with the DNW prior is better than with the Jeffreys prior.
Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3. As the DNW prior hyper-parameters on GMM are changed from being fixed to being freely optimized by each approach's own learning principle, BYY improves its performance, while the performances of VB and MML deteriorate. This observation remains the same when we compare BYY and VB on either MFA or LFA with the DNG prior. In fact, VB and MML lack a good guide for optimizing prior hyper-parameters. 4. For both GMM and MFA/LFA, BYY considerably outperforms both VB and MML, for any type of prior and whether or not hyper-parameters are optimized. Unlike VB and MML, which rely on appropriate priors, BYY does not depend highly on the type of prior: it already performs well without priors and improves further when a Jeffreys or a conjugate form prior is imposed. 5. Despite their equivalence in maximum likelihood parameter learning, MFA and LFA lead to different performances of VB and BYY in automatic model selection. Particularly, both BYY and VB perform better on LFA than on MFA, and the superiority of LFA is reliable and robust. / In addition to adopting existing algorithms either directly or with some modifications, this thesis develops five new algorithms to fill the gap. Particularly, on GMM, the VB algorithm with the Jeffreys prior and the BYY algorithm with the DNW prior are developed; in the latter, a multivariate Student's T-distribution is obtained as the posterior via marginalization. On MFA and LFA, BYY algorithms with DNG priors are developed, where products of multiple Student's T-distributions are obtained as posteriors via approximated marginalization. Moreover, a VB algorithm on LFA is developed as an alternative to the existing VB algorithm on MFA.
Shi, Lei. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 153-166). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.3 / Chapter 1.2 --- Main Contributions of the Thesis --- p.11 / Chapter 1.3 --- Outline of the Thesis --- p.14 / Chapter 2 --- Automatic Model Selection on GMM --- p.16 / Chapter 2.1 --- Introduction --- p.17 / Chapter 2.2 --- Gaussian Mixture, Model Selection, and Priors --- p.21 / Chapter 2.2.1 --- Gaussian Mixture Model and EM algorithm --- p.21 / Chapter 2.2.2 --- Three automatic model selection approaches --- p.22 / Chapter 2.2.3 --- Jeffreys prior and Dirichlet-Normal-Wishart prior --- p.24 / Chapter 2.3 --- Algorithms with Jeffreys Priors --- p.25 / Chapter 2.3.1 --- Bayesian Ying-Yang learning and BYY-Jef algorithms --- p.25 / Chapter 2.3.2 --- Variational Bayesian and VB-Jef algorithms --- p.29 / Chapter 2.3.3 --- Minimum Message Length and MML-Jef algorithms --- p.33 / Chapter 2.4 --- Algorithms with Dirichlet and DNW Priors --- p.35 / Chapter 2.4.1 --- Algorithms BYY-Dir(α), VB-Dir(α) and MML-Dir(α) --- p.35 / Chapter 2.4.2 --- Algorithms with DNW priors --- p.40 / Chapter 2.5 --- Empirical Analysis on Simulated Data --- p.44 / Chapter 2.5.1 --- With priors on mixing weights: a quick look --- p.44 / Chapter 2.5.2 --- With full priors: extensive comparisons --- p.51 / Chapter 2.6 --- Concluding Remarks --- p.55 / Chapter 3 --- Applications of GMM Algorithms --- p.57 / Chapter 3.1 --- Face and Handwritten Digit Images Clustering --- p.58 / Chapter 3.2 --- Unsupervised Image Segmentation --- p.59 / Chapter 3.3 --- Image Foreground Extraction --- p.62 / Chapter 3.4 --- Texture Classification --- 
p.68 / Chapter 3.5 --- Concluding Remarks --- p.71 / Chapter 4 --- Automatic Model Selection on MFA/LFA --- p.73 / Chapter 4.1 --- Introduction --- p.74 / Chapter 4.2 --- MFA/LFA Models and the Priors --- p.78 / Chapter 4.2.1 --- MFA and LFA models --- p.78 / Chapter 4.2.2 --- The Dirichlet-Normal-Gamma priors --- p.79 / Chapter 4.3 --- Algorithms on MFA/LFA with DNG Priors --- p.82 / Chapter 4.3.1 --- BYY algorithm on MFA with DNG prior --- p.83 / Chapter 4.3.2 --- BYY algorithm on LFA with DNG prior --- p.86 / Chapter 4.3.3 --- VB algorithm on MFA with DNG prior --- p.89 / Chapter 4.3.4 --- VB algorithm on LFA with DNG prior --- p.91 / Chapter 4.4 --- Empirical Analysis on Simulated Data --- p.93 / Chapter 4.4.1 --- On the “chair” data: a quick look --- p.94 / Chapter 4.4.2 --- Extensive comparisons on four series of simulations --- p.97 / Chapter 4.5 --- Concluding Remarks --- p.101 / Chapter 5 --- Applications of MFA/LFA Algorithms --- p.102 / Chapter 5.1 --- Face and Handwritten Digit Images Clustering --- p.103 / Chapter 5.2 --- Unsupervised Image Segmentation --- p.105 / Chapter 5.3 --- Radar HRRP based Airplane Recognition --- p.106 / Chapter 5.3.1 --- Background of HRRP radar target recognition --- p.106 / Chapter 5.3.2 --- Data description --- p.109 / Chapter 5.3.3 --- Experimental results --- p.111 / Chapter 5.4 --- Concluding Remarks --- p.113 / Chapter 6 --- Conclusions and Future Works --- p.114 / Chapter A --- Referred Parametric Distributions --- p.117 / Chapter B --- Derivations of GMM Algorithms --- p.119 / Chapter B.1 --- The BYY-DNW Algorithm --- p.119 / Chapter B.2 --- The MML-DNW Algorithm --- p.124 / Chapter B.3 --- The VB-DNW Algorithm --- p.127 / Chapter C --- Derivations of MFA/LFA Algorithms --- p.130 / Chapter C.1 --- The BYY Algorithms with DNG Priors --- p.130 / Chapter C.1.1 --- The BYY-DNG-MFA algorithm --- p.130 / Chapter C.1.2 --- The BYY-DNG-LFA algorithm --- p.137 / Chapter C.2 --- The VB Algorithms with DNG Priors --- p.145 / 
Chapter C.2.1 --- The VB-DNG-MFA algorithm --- p.145 / Chapter C.2.2 --- The VB-DNG-LFA algorithm --- p.149 / Bibliography --- p.152
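The pruning mechanism this abstract describes — initializing with more Gaussian components than needed and letting redundant ones become ineffective during learning — can be illustrated with a deliberately simple sketch: plain EM for a one-dimensional GMM that discards any component whose mixing weight collapses below a threshold. This is a hypothetical stand-in for intuition only, not an implementation of the BYY, VB, or MML algorithms the thesis develops.

```python
import numpy as np

def em_gmm_prune(x, k0=8, thresh=0.02, iters=200, seed=0):
    """EM for a 1-D Gaussian mixture that starts over-sized (k0 components)
    and discards components whose mixing weight falls below `thresh`.
    A crude illustration of automatic model selection, not BYY/VB/MML."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k0, replace=False)      # init means at data points
    var = np.full(k0, x.var())
    w = np.full(k0, 1.0 / k0)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)     # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-6
        # prune components that have become ineffective
        keep = w > thresh
        if not keep.all():
            mu, var, w = mu[keep], var[keep], w[keep]
            w = w / w.sum()
    return w, mu, var
```

On well-separated clusters, components that end up sharing a cluster shrink in weight and may be pruned; the surviving means track the cluster centers.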
3

Spectral Filtering for Spatio-temporal Dynamics and Multivariate Forecasts

Meng, Lu January 2016 (has links)
Due to the increasing availability of massive spatio-temporal data sets, modeling high dimensional data has become quite challenging. A large number of research questions are rooted in identifying underlying dynamics in such spatio-temporal data. For many applications, the science suggests that the intrinsic dynamics are smooth and of low dimension. To reduce the variance of estimates and increase computational tractability, dimension reduction is also quite necessary in the modeling procedure. In this dissertation, we propose a spectral filtering approach for dimension reduction and forecast amelioration, and apply it to multiple applications. We show the effectiveness of dimension reduction via our method and also illustrate its power for prediction in both simulation and real data examples. The resultant lower dimensional principal component series has a diagonal spectral density at each frequency whose diagonal elements are in descending order, which is not well motivated and can be hard to interpret. Therefore we propose a phase-based filtering method to create principal component series with interpretable dynamics in the time domain. Our method is based on structural decomposition and phase-aligned construction in the frequency domain, identifying lower-rank dynamics and their components embedded in a high dimensional spatio-temporal system. In both our simulated examples and real data applications, we illustrate that the proposed method is able to separate and identify meaningful lower-rank movements. Benefiting from the zero-coherence property of the principal component series, we subsequently develop a predictive model for high-dimensional forecasting via lower-rank dynamics. Our modeling approach reduces the multivariate modeling task to multiple univariate modeling tasks and is flexible in combining with regularization techniques to obtain more stable estimates and improved interpretability.
The simulation results and real data analysis show that our model achieves superior forecast performance compared to the class of autoregressive models.
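The frequency-domain PCA baseline that this abstract builds on can be sketched in a few lines: estimate a cross-spectral density matrix at each frequency by averaging segment periodograms, then eigendecompose each matrix so the implied principal component series have a diagonal spectral density with descending diagonal. This is a minimal sketch of the standard construction under simplifying assumptions (non-overlapping segments, no taper), not the author's phase-based filtering method.

```python
import numpy as np

def cross_spectra(X, nseg=8):
    """Welch-style estimate of the cross-spectral density matrices of a
    multivariate series X (n_samples x p): average raw periodograms over
    non-overlapping segments. S[f] is the p x p Hermitian spectral matrix
    at the f-th Fourier frequency of a segment."""
    n, p = X.shape
    seg = n // nseg
    F = np.fft.rfft(X[: nseg * seg].reshape(nseg, seg, p), axis=1)
    return np.einsum("sfi,sfj->fij", F, F.conj()) / (nseg * seg)

def spectral_pcs(S):
    """Eigendecompose each frequency's spectral matrix. Flipping eigh's
    ascending order gives eigenvalues in descending order per frequency,
    mirroring the diagonal-descending spectral density of the principal
    component series mentioned in the abstract."""
    vals, vecs = np.linalg.eigh(S)           # batched over frequencies
    return vals[:, ::-1], vecs[:, :, ::-1]
```

A one-dimensional latent driver shared by all coordinates shows up as a dominant first eigenvalue at every frequency.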
4

Customer-centric data analysis. / 以顧客為本的數據分析 / CUHK electronic theses & dissertations collection / Yi gu ke wei ben de shu ju fen xi

January 2008 (has links)
With the advancement of information technology and declining hardware prices, organizations and companies are able to collect large amounts of personal data. Individual health records, product preferences and membership information are all converted into digital format. The ability to store and retrieve large amounts of electronic records benefits many parties. Useful knowledge often hides in a large pool of raw data. In many customer-centric applications, customers want to find the "best" services according to their needs. However, since different customers may have different preferences regarding what the "best" services are, different services are suggested to different customers. In this thesis, we study models for different customer needs. Besides, customers also want to protect their individual privacy in many applications. In this thesis, we also study how individual privacy can be protected. / Wong, Chi Wing. / "June 2008." / Adviser: Ada Wai-Chee Fu. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1770. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 133-137). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
5

Investigation on Bayesian Ying-Yang learning for model selection in unsupervised learning. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2005 (has links)
For factor analysis models, we develop an improved BYY harmony data smoothing learning criterion BYY-HDS by considering the dependence between the factors and observations. We make empirical comparisons of the BYY harmony empirical learning criterion BYY-HEC, BYY-HDS, the BYY automatic model selection method BYY-AUTO, AIC, CAIC, BIC, and CV for selecting the number of factors not only on simulated data sets of different sample sizes, noise variances, data dimensions and factor numbers, but also on two real data sets from air pollution data and sport track records, respectively. / Model selection is a critical issue in unsupervised learning. Conventionally, model selection is implemented in two phases by some statistical model selection criterion such as Akaike's information criterion (AIC), Bozdogan's consistent Akaike's information criterion (CAIC), Schwarz's Bayesian inference criterion (BIC), which formally coincides with the minimum description length (MDL) criterion, and the cross-validation (CV) criterion. These methods are very time-intensive and may become problematic when the sample size is small. Recently, Bayesian Ying-Yang (BYY) harmony learning has been developed as a unified framework with new mechanisms for model selection and regularization. In this thesis we make a systematic investigation of BYY learning as well as several typical model selection criteria for model selection on factor analysis models, Gaussian mixture models, and factor analysis mixture models. / The most remarkable finding of our study is that BYY-HDS is superior to its counterparts, especially when the sample size is small. AIC, BYY-HEC, BYY-AUTO and CV have a risk of overestimating, while BIC and CAIC have a risk of underestimating in most cases. BYY-AUTO is superior to other methods from a computational cost point of view. The cross-validation method requires the highest computing cost. (Abstract shortened by UMI.) / Hu Xuelei. / "November 2005." / Adviser: Lei Xu.
/ Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 131-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
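The two-phase, criterion-based selection that this thesis compares against can be sketched with the closed-form maximized log-likelihood of probabilistic PCA (a close cousin of factor analysis with isotropic noise — an assumption made here so no EM loop is needed): fit every candidate factor number, then pick the one minimizing AIC or BIC.

```python
import numpy as np

def ppca_loglik(evals, n, q):
    """Maximized log-likelihood of a probabilistic PCA model with q factors,
    computed from the descending eigenvalues `evals` of the sample
    covariance (Tipping & Bishop's closed form)."""
    p = len(evals)
    sigma2 = evals[q:].mean()                 # ML isotropic noise variance
    logdet = np.log(evals[:q]).sum() + (p - q) * np.log(sigma2)
    return -0.5 * n * (p * np.log(2 * np.pi) + logdet + p)

def select_q(X):
    """Two-phase model selection: score every candidate factor number q
    with AIC and BIC, and return each criterion's argmin."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(Xc.T @ Xc / n))[::-1]
    aic, bic = {}, {}
    for q in range(p):                        # q = p would leave no noise term
        k = p + p * q - q * (q - 1) // 2 + 1  # mean + loadings (minus rotation) + noise
        ll = ppca_loglik(evals, n, q)
        aic[q] = -2 * ll + 2 * k
        bic[q] = -2 * ll + k * np.log(n)
    return min(aic, key=aic.get), min(bic, key=bic.get)
```

Because AIC's penalty (2k) is lighter than BIC's (k ln n) for any realistic n, AIC tends to select at least as many factors — consistent with the abstract's finding that AIC risks overestimating while BIC risks underestimating.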
6

A comparison of support vector machines and traditional techniques for statistical regression and classification

Hechter, Trudie 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression, and it is found that support vector machines generally perform well compared to the traditional techniques.
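The comparison design is easy to reproduce in miniature. The sketch below pits a support vector classifier against a classification tree on synthetic data; the thesis uses life assurance data, so a synthetic two-class problem stands in, and scikit-learn's implementations are an assumption, not the tools used in the thesis.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a binary task such as lapse prediction:
# two Gaussian classes in 5 dimensions, separated along the mean vector.
rng = np.random.default_rng(0)
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(1.2, 1.0, (n, 5))])
y = np.repeat([0, 1], n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm_acc = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr).score(X_te, y_te)
tree_acc = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
print(f"SVM accuracy:  {svm_acc:.3f}")
print(f"Tree accuracy: {tree_acc:.3f}")
```

On real data the comparison would of course also cross-validate hyper-parameters (C, kernel, tree depth) rather than fix them as done here.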
7

A study on model selection of binary and non-Gaussian factor analysis.

January 2005 (has links)
An, Yujia. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 71-76). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.1.1 --- Review on BFA --- p.2 / Chapter 1.1.2 --- Review on NFA --- p.3 / Chapter 1.1.3 --- Typical model selection criteria --- p.5 / Chapter 1.1.4 --- New model selection criterion and automatic model selection --- p.6 / Chapter 1.2 --- Our contributions --- p.7 / Chapter 1.3 --- Thesis outline --- p.8 / Chapter 2 --- Combination of B and BI architectures for BFA with automatic model selection --- p.10 / Chapter 2.1 --- Implementation of BFA using BYY harmony learning with automatic model selection --- p.11 / Chapter 2.1.1 --- Basic issues of BFA --- p.11 / Chapter 2.1.2 --- B-architecture for BFA with automatic model selection --- p.12 / Chapter 2.1.3 --- BI-architecture for BFA with automatic model selection
--- p.14 / Chapter 2.2 --- Local minima in B-architecture and BI-architecture --- p.16 / Chapter 2.2.1 --- Local minima in B-architecture --- p.16 / Chapter 2.2.2 --- One unstable result in BI-architecture --- p.21 / Chapter 2.3 --- Combination of B- and BI-architecture for BFA with automatic model selection --- p.23 / Chapter 2.3.1 --- Combine B-architecture and BI-architecture --- p.23 / Chapter 2.3.2 --- Limitations of BI-architecture --- p.24 / Chapter 2.4 --- Experiments --- p.25 / Chapter 2.4.1 --- Frequency of local minima occurring in B-architecture --- p.25 / Chapter 2.4.2 --- Performance comparison for several methods in B-architecture --- p.26 / Chapter 2.4.3 --- Comparison of local minima in B-architecture and BI- architecture --- p.26 / Chapter 2.4.4 --- Frequency of unstable cases occurring in BI-architecture --- p.27 / Chapter 2.4.5 --- Comparison of performance of three strategies --- p.27 / Chapter 2.4.6 --- Limitations of BI-architecture --- p.28 / Chapter 2.5 --- Summary --- p.29 / Chapter 3 --- A Comparative Investigation on Model Selection in Binary Factor Analysis --- p.31 / Chapter 3.1 --- Binary Factor Analysis and ML Learning --- p.32 / Chapter 3.2 --- Hidden Factors Number Determination --- p.33 / Chapter 3.2.1 --- Using Typical Model Selection Criteria --- p.33 / Chapter 3.2.2 --- Using BYY harmony Learning --- p.34 / Chapter 3.3 --- Empirical Comparative Studies --- p.36 / Chapter 3.3.1 --- Effects of Sample Size --- p.37 / Chapter 3.3.2 --- Effects of Data Dimension --- p.37 / Chapter 3.3.3 --- Effects of Noise Variance --- p.39 / Chapter 3.3.4 --- Effects of hidden factor number --- p.43 / Chapter 3.3.5 --- Computing Costs --- p.43 / Chapter 3.4 --- Summary --- p.46 / Chapter 4 --- A Comparative Investigation on Model Selection in Non-gaussian Factor Analysis --- p.47 / Chapter 4.1 --- Non-Gaussian Factor Analysis and ML Learning --- p.48 / Chapter 4.2 --- Hidden Factor Determination --- p.51 / Chapter 4.2.1 --- Using typical model 
selection criteria --- p.51 / Chapter 4.2.2 --- BYY harmony Learning --- p.52 / Chapter 4.3 --- Empirical Comparative Studies --- p.55 / Chapter 4.3.1 --- Effects of Sample Size on Model Selection Criteria --- p.56 / Chapter 4.3.2 --- Effects of Data Dimension on Model Selection Criteria --- p.60 / Chapter 4.3.3 --- Effects of Noise Variance on Model Selection Criteria --- p.64 / Chapter 4.3.4 --- Discussion on Computational Cost --- p.64 / Chapter 4.4 --- Summary --- p.68 / Chapter 5 --- Conclusions --- p.69 / Bibliography --- p.71
8

Designing and analyzing test programs with censored data for civil engineering applications

Finley, Cynthia 28 August 2008 (has links)
Not available
9

The robustness of LISREL estimates in structural equation models with categorical data

Ethington, Corinna A. January 1985 (has links)
This study was an examination of the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical manifest variables. Two types of correlation matrices were analyzed; one containing Pearson product-moment correlations and one containing tetrachoric, polyserial, and product-moment correlations as appropriate. Using continuous variables generated according to the equations defining the population model, three cases were considered by dichotomizing some of the variables with varying degrees of skewness. When Pearson product-moment correlations were used to estimate associations involving dichotomous variables, the structural parameter estimates were biased when skewness was present in the dichotomous variables. Moreover, the degree of bias was consistent for both the maximum likelihood and unweighted least squares estimates. The standard errors of the estimates were found to be inflated, making significance tests unreliable. The analysis of mixed matrices produced average estimates that more closely approximated the model parameters except in the case where the dichotomous variables were skewed in opposite directions. However, since goodness-of-fit statistics and standard errors are not available in LISREL when tetrachoric and polyserial correlations are used, the unbiased estimates are not of practical significance. Until alternative computer programs are available that employ distribution-free estimation procedures that consider the skewness and kurtosis of the variables, researchers are ill-advised to employ LISREL in the estimation of structural equation models containing skewed categorical manifest variables. / Ph. D.
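The attenuation this study documents is easy to reproduce: when a bivariate normal with latent correlation ρ is dichotomized at its medians, the Pearson (phi) coefficient of the resulting binary variables shrinks to (2/π)·arcsin(ρ), while inverting that relation (the tetrachoric correlation in the median-split case) recovers ρ. The simulation below is a hedged illustration of that bias, not a reanalysis of the study's LISREL models.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
n = 200_000

# Bivariate normal with latent correlation rho
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Dichotomize both variables at their medians (zero thresholds, no skewness)
b1, b2 = (z1 > 0).astype(float), (z2 > 0).astype(float)

r_cont = np.corrcoef(z1, z2)[0, 1]   # ~ rho
phi = np.corrcoef(b1, b2)[0, 1]      # attenuated: ~ (2/pi) * arcsin(rho)
r_tet = np.sin(np.pi * phi / 2)      # tetrachoric correction (median-split case)

print(f"latent rho ~ {r_cont:.3f}, phi = {phi:.3f}, tetrachoric = {r_tet:.3f}")
```

With skewed splits (non-zero thresholds), the closed-form inversion above no longer applies and phi attenuates even more strongly — which is precisely the setting where the study found Pearson-based LISREL estimates to be biased.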
10

A Cox proportional hazard model for mid-point imputed interval censored data

Gwaze, Arnold Rumosa January 2011 (has links)
There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to happen between two examination times. However, because research has focused largely on right-censored data, many statistical tests and techniques are available for right-censoring, and methods for interval-censored data are not as abundant. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, a method which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
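Mid-point imputation as described here reduces interval-censored records to ordinary right-censored ones: a finite interval (L, R] becomes an exact event at (L+R)/2, and a right-open interval (L, ∞) stays censored at L. A minimal sketch of that transformation follows (the record layout is hypothetical); its output can then be passed to any standard Cox proportional hazards routine.

```python
import math

def midpoint_impute(intervals):
    """Convert interval-censored observations (left, right) -- with
    right = inf meaning 'event not yet observed' -- into right-censored
    (time, event) pairs via mid-point imputation."""
    out = []
    for left, right in intervals:
        if math.isinf(right):
            out.append((left, 0))                 # censored at last exam time
        else:
            out.append(((left + right) / 2, 1))   # event at interval midpoint
    return out

# Example: three subjects with an event between exams, one still event-free
records = [(2.0, 4.0), (1.0, 3.0), (5.0, 7.0), (6.0, math.inf)]
imputed = midpoint_impute(records)
print(imputed)   # [(3.0, 1), (2.0, 1), (6.0, 1), (6.0, 0)]
```

As the abstract cautions, this shortcut treats imputed midpoints as exact event times, so its adequacy depends on the censoring mechanism and on the interval lengths being short relative to the hazard's variation.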
