1

Survey error modelling and benchmarking with monthly-quarterly data.

January 2004 (has links)
Shea Hon-Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 51-52). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Review of benchmarking methods --- p.6 / Chapter 2.1 --- Denton method --- p.7 / Chapter 2.2 --- Regression method --- p.9 / Chapter 2.3 --- Signal extraction method --- p.11 / Chapter 3 --- Survey error modelling by using benchmarks --- p.14 / Chapter 4 --- A simulation study on benchmarking methods --- p.25 / Chapter 4.1 --- Model assumptions --- p.25 / Chapter 4.2 --- Simulation procedures --- p.27 / Chapter 4.3 --- Simulation results --- p.29 / Chapter 5 --- A simulation study on signal extraction with a nonparametric approach --- p.35 / Chapter 5.1 --- Introduction of the nonparametric method --- p.35 / Chapter 5.2 --- Simulation results --- p.38 / Chapter 6 --- Example: An application to the Danish unemployment series --- p.42 / Chapter 7 --- Conclusion --- p.49 / Reference --- p.51
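The outline above lists three benchmarking families (Denton, regression, signal extraction). As a rough illustration of the Denton idea only — not code from the thesis — the additive first-difference variant can be written as a small constrained least-squares problem (the function name and setup below are my own):

```python
import numpy as np

def denton_additive(monthly, quarterly):
    """Additive first-difference Denton benchmarking (sketch).

    Finds the monthly series x closest to the preliminary series in
    first differences, subject to each quarter's three months summing
    to the quarterly benchmark.
    """
    s = np.asarray(monthly, dtype=float)
    b = np.asarray(quarterly, dtype=float)
    n, q = len(s), len(b)
    assert n == 3 * q, "three months per quarter"

    # First-difference operator D ((n-1) x n) and smoothness penalty M
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    M = D.T @ D

    # Aggregation matrix A: row i sums the three months of quarter i
    A = np.zeros((q, n))
    for i in range(q):
        A[i, 3 * i:3 * i + 3] = 1.0

    # KKT system for: min (x-s)' M (x-s)  subject to  A x = b
    K = np.block([[2.0 * M, A.T], [A, np.zeros((q, q))]])
    rhs = np.concatenate([2.0 * M @ s, b])
    return np.linalg.solve(K, rhs)[:n]
```

Solving the KKT system directly keeps the sketch short; production benchmarking code would instead use sparse solvers and handle the proportional variant.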
2

Power computation for multiple comparisons with a control procedures in two-way designs.

January 2005 (has links)
Cheung Ching Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 64-65). / Abstracts in English and Chinese. / Acknowledgement --- p.i / Abstract --- p.ii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Multiple Comparison Procedures --- p.1 / Chapter 1.2 --- Multiple Comparisons with a control --- p.2 / Chapter 1.3 --- Multiple Comparisons with a control in two-way designs --- p.5 / Chapter 1.4 --- Example --- p.12 / Chapter 1.5 --- Thesis Objectives --- p.13 / Chapter 2 --- Evaluation of Power (Homogeneous Variance) --- p.14 / Chapter 2.1 --- Definition and the use of power --- p.14 / Chapter 2.2 --- Setup and Notations --- p.15 / Chapter 2.3 --- Evaluation of power --- p.16 / Chapter 2.4 --- Computational Details --- p.19 / Chapter 2.4.1 --- Algorithm --- p.19 / Chapter 2.4.2 --- Results --- p.20 / Chapter 2.5 --- Numerical Example --- p.39 / Chapter 3 --- Evaluation of Power (Heterogeneous Variances) --- p.42 / Chapter 3.1 --- Setup and Notations --- p.42 / Chapter 3.2 --- Evaluation of power --- p.43 / Chapter 3.3 --- Results --- p.45 / Chapter 3.3.1 --- All-pairs Power --- p.46 / Chapter 3.3.2 --- Any-pair Power --- p.53 / Chapter 3.4 --- Numerical Example --- p.60 / Chapter 4 --- Conclusions --- p.63 / References --- p.64
3

Power computation for multiple comparisons with a control in directional-mixed families.

January 2010 (has links)
Lau, Sin Yi. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 64-66). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Multiple Comparison Procedures --- p.1 / Chapter 1.2 --- Multiple Comparisons with a control --- p.2 / Chapter 1.3 --- Multiple Comparisons with a control in directional-mixed families --- p.5 / Chapter 1.4 --- Examples --- p.8 / Chapter 1.5 --- Thesis Objectives --- p.10 / Chapter 2 --- Evaluation of Power --- p.12 / Chapter 2.1 --- Definition and the Use of Power --- p.12 / Chapter 2.2 --- Computational Details --- p.13 / Chapter 2.3 --- All-pairs Power --- p.13 / Chapter 2.4 --- Any-pair Power --- p.15 / Chapter 2.5 --- Average Power --- p.16 / Chapter 2.6 --- Algorithm --- p.16 / Chapter 2.7 --- Results --- p.19 / Chapter 2.7.1 --- All-pairs Power --- p.20 / Chapter 2.7.2 --- Any-pair Power --- p.23 / Chapter 2.7.3 --- Average Power --- p.26 / Chapter 3 --- Sample Size Determination --- p.29 / Chapter 3.1 --- The required sample size for a pre-assigned all-pairs power --- p.31 / Chapter 3.2 --- The required sample size for a pre-assigned any-pair power --- p.41 / Chapter 3.3 --- The required sample size for a pre-assigned average power --- p.51 / Chapter 4 --- An Illustrative Example --- p.61 / Chapter 5 --- Conclusions --- p.63 / References --- p.64
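Records 2 and 3 both evaluate all-pairs and any-pair power for multiple comparisons with a control. The two notions can be illustrated with a crude Monte Carlo sketch; the balanced design, known-variance statistics, and critical value below are illustrative assumptions, not the theses' exact procedures:

```python
import numpy as np

def mc_power(delta, k=3, n=20, sigma=1.0, crit=2.2, reps=2000, seed=0):
    """Monte Carlo estimate of all-pairs and any-pair power for
    one-sided comparisons of k treatments with a control (sketch).

    delta : true treatment-minus-control mean shift (same for all k)
    crit  : critical value for the one-sided statistics (illustrative;
            exact Dunnett-type values depend on k and the error df)
    """
    rng = np.random.default_rng(seed)
    all_pairs = any_pair = 0
    se = sigma * np.sqrt(2.0 / n)   # known sigma, for simplicity
    for _ in range(reps):
        control = rng.normal(0.0, sigma, n)
        treats = rng.normal(delta, sigma, (k, n))
        t = (treats.mean(axis=1) - control.mean()) / se
        rejected = t > crit
        all_pairs += rejected.all()   # every comparison significant
        any_pair += rejected.any()    # at least one significant
    return all_pairs / reps, any_pair / reps
```

By construction any-pair power always dominates all-pairs power, which is why the two are tabulated separately in the theses.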
4

Maximizing the use of blocking in record linkage : theory and simulation

Khan, Mahmudul Huq January 1991 (has links)
Thesis (Ph. D.)--University of Hawaii at Manoa, 1991. / Includes bibliographical references (leaves 128-132) / Microfiche. / xiii, 132 leaves, bound ill. 29 cm
5

On Recovering the Best Rank-r Approximation from Few Entries

Xu, Shun January 2022 (has links)
In this thesis, we investigate how well we can reconstruct the best rank-r approximation of a large matrix from a small number of its entries. We show that even if a data matrix is of full rank and cannot be approximated well by a low-rank matrix, its best low-rank approximations may still be reliably computed or estimated from a small number of its entries. This is especially relevant from a statistical viewpoint: the best low-rank approximations to a data matrix are often of more interest than the matrix itself because they capture the more stable and oftentimes more reproducible properties of an otherwise complicated data-generating model. In particular, we investigate two agnostic approaches: the first is based on spectral truncation, and the second is a projected-gradient-descent-based optimization procedure. We argue that, while the first approach is intuitive and reasonably effective, the latter has far superior performance in general. We show that the error depends on how close the matrix is to being of low rank. Our results can be generalized to the spectral and entrywise errors and provide flexible tools for the error analysis of follow-up computations. Moreover, we derive a high-order decomposition of the error. With an explicit expression of the main error source, we obtain an improved estimate of the linear form. Both theoretical and numerical evidence is presented to demonstrate the effectiveness of the proposed approaches.
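The first approach mentioned in the abstract, spectral truncation, is commonly rendered as: zero-fill the unobserved entries, rescale by the inverse sampling rate, and truncate the SVD. A minimal sketch under that reading (my own simplification, not the thesis's exact estimator):

```python
import numpy as np

def spectral_truncation(M_obs, mask, r):
    """Estimate the best rank-r approximation of a matrix from a
    subset of its entries via spectral truncation (sketch).

    M_obs : matrix with unobserved entries set to 0
    mask  : boolean matrix, True where an entry was observed
    r     : target rank
    """
    p = mask.mean()                  # observed fraction (sampling rate)
    Y = M_obs / p                    # entrywise unbiased for the full matrix
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]   # keep the top-r spectral part
```

For matrices that are exactly or nearly low rank and a moderate sampling rate, this recovers the best rank-r approximation reasonably well; the abstract argues a projected gradient descent refinement does better in general.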
6

Positioning patterns from multidimensional data and its applications in meteorology

Wong, Ka-yan, 王嘉欣 January 2008 (has links)
published_or_final_version / abstract / Computer Science / Doctoral / Doctor of Philosophy
7

Automatic model selection on local Gaussian structures with priors: comparative investigations and applications. / 基於帶先驗的局部高斯結构的自動模型選擇: 比較性分析及應用研究 / CUHK electronic theses & dissertations collection / Ji yu dai xian yan de ju bu Gaosi jie gou de zi dong mo xing xuan ze: bi jiao xing fen xi ji ying yong yan jiu

January 2012 (has links)
Model selection aims to determine an appropriate model scale given a small sample size, which is an important topic in machine learning.
As one type of efficient solution, an automatic model selection starts from a large enough model scale and has an intrinsic mechanism that pushes redundant structures to become ineffective, so that they are discarded automatically during learning. Priors are usually imposed on the parameters to facilitate automatic model selection. Systematic comparisons of automatic model selection approaches with priors are still lacking, and this thesis is motivated by such a study based on models with local Gaussian structures. / Particularly, we compare the relative strengths and weaknesses of three typical automatic model selection approaches, namely Variational Bayesian (VB), Minimum Message Length (MML) and Bayesian Ying-Yang (BYY) harmony learning, on models with local Gaussian structures. First, we consider the Gaussian Mixture Model (GMM), for which the number of Gaussian components is to be determined. Further assuming that each Gaussian component has a subspace structure, we extend the comparison to two models, namely Mixture of Factor Analyzers (MFA) and Local Factor Analysis (LFA), for both of which the component number and the local subspace dimensionalities are to be determined. / Two types of priors are imposed on the parameters, namely a conjugate-form prior and a Jeffreys prior. The conjugate-form prior is chosen as a Dirichlet-Normal-Wishart (DNW) prior for GMM, and as a Dirichlet-Normal-Gamma (DNG) prior for both MFA and LFA. The Jeffreys prior and the MML approach are not considered on MFA/LFA due to the difficulty of deriving the corresponding Fisher information matrix. Via extensive simulations and applications comparing the automatic model selection algorithms (six for GMM and four for MFA/LFA), we obtain the following main findings: 1. Considering priors on all parameters makes each approach perform better than considering priors merely on the mixing weights. 2. For all three approaches on GMM, the performance with the DNW prior is better than with the Jeffreys prior. Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3. As the DNW prior hyper-parameters on GMM are changed from fixed to freely optimized by each approach's own learning principle, BYY improves its performance, while VB and MML deteriorate. This observation remains the same when we compare BYY and VB on either MFA or LFA with the DNG prior. In fact, VB and MML lack a good guide for optimizing prior hyper-parameters. 4. For both GMM and MFA/LFA, BYY considerably outperforms both VB and MML, for any type of prior and whether or not the hyper-parameters are optimized. Unlike VB and MML, which rely on appropriate priors, BYY does not depend heavily on the type of prior: it already performs well without priors and improves further when a Jeffreys or a conjugate-form prior is imposed. 5. Despite their equivalence in maximum likelihood parameter learning, MFA and LFA lead to different performance of VB and BYY in automatic model selection. Particularly, both BYY and VB perform better on LFA than on MFA, and the superiority of LFA is reliable and robust. / In addition to adopting the existing algorithms either directly or with some modifications, this thesis develops five new algorithms to fill the missing gaps. Particularly for GMM, the VB algorithm with a Jeffreys prior and the BYY algorithm with a DNW prior are developed; in the latter, a multivariate Student's T-distribution is obtained as the posterior via marginalization. For MFA and LFA, BYY algorithms with DNG priors are developed, where products of multiple Student's T-distributions are obtained in the posteriors via approximate marginalization. Moreover, a VB algorithm on LFA is developed as an alternative to the existing VB algorithm on MFA.
/ Detailed summary in vernacular field only. / Shi, Lei. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 153-166). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.3 / Chapter 1.2 --- Main Contributions of the Thesis --- p.11 / Chapter 1.3 --- Outline of the Thesis --- p.14 / Chapter 2 --- Automatic Model Selection on GMM --- p.16 / Chapter 2.1 --- Introduction --- p.17 / Chapter 2.2 --- Gaussian Mixture, Model Selection, and Priors --- p.21 / Chapter 2.2.1 --- Gaussian Mixture Model and EM algorithm --- p.21 / Chapter 2.2.2 --- Three automatic model selection approaches --- p.22 / Chapter 2.2.3 --- Jeffreys prior and Dirichlet-Normal-Wishart prior --- p.24 / Chapter 2.3 --- Algorithms with Jeffreys Priors --- p.25 / Chapter 2.3.1 --- Bayesian Ying-Yang learning and BYY-Jef algorithms --- p.25 / Chapter 2.3.2 --- Variational Bayesian and VB-Jef algorithms --- p.29 / Chapter 2.3.3 --- Minimum Message Length and MML-Jef algorithms --- p.33 / Chapter 2.4 --- Algorithms with Dirichlet and DNW Priors --- p.35 / Chapter 2.4.1 --- Algorithms BYY-Dir(α), VB-Dir(α) and MML-Dir(α) --- p.35 / Chapter 2.4.2 --- Algorithms with DNW priors --- p.40 / Chapter 2.5 --- Empirical Analysis on Simulated Data --- p.44 / Chapter 2.5.1 --- With priors on mixing weights: a quick look --- p.44 / Chapter 2.5.2 --- With full priors: extensive comparisons --- p.51 / Chapter 2.6 --- Concluding Remarks --- p.55 / Chapter 3 --- Applications of GMM Algorithms --- p.57 / Chapter 3.1 --- Face and Handwritten Digit Images Clustering --- p.58 / Chapter 3.2 --- Unsupervised Image Segmentation --- p.59 / Chapter 3.3 --- Image Foreground Extraction --- p.62 / Chapter 3.4 --- Texture Classification --- 
p.68 / Chapter 3.5 --- Concluding Remarks --- p.71 / Chapter 4 --- Automatic Model Selection on MFA/LFA --- p.73 / Chapter 4.1 --- Introduction --- p.74 / Chapter 4.2 --- MFA/LFA Models and the Priors --- p.78 / Chapter 4.2.1 --- MFA and LFA models --- p.78 / Chapter 4.2.2 --- The Dirichlet-Normal-Gamma priors --- p.79 / Chapter 4.3 --- Algorithms on MFA/LFA with DNG Priors --- p.82 / Chapter 4.3.1 --- BYY algorithm on MFA with DNG prior --- p.83 / Chapter 4.3.2 --- BYY algorithm on LFA with DNG prior --- p.86 / Chapter 4.3.3 --- VB algorithm on MFA with DNG prior --- p.89 / Chapter 4.3.4 --- VB algorithm on LFA with DNG prior --- p.91 / Chapter 4.4 --- Empirical Analysis on Simulated Data --- p.93 / Chapter 4.4.1 --- On the “chair” data: a quick look --- p.94 / Chapter 4.4.2 --- Extensive comparisons on four series of simulations --- p.97 / Chapter 4.5 --- Concluding Remarks --- p.101 / Chapter 5 --- Applications of MFA/LFA Algorithms --- p.102 / Chapter 5.1 --- Face and Handwritten Digit Images Clustering --- p.103 / Chapter 5.2 --- Unsupervised Image Segmentation --- p.105 / Chapter 5.3 --- Radar HRRP based Airplane Recognition --- p.106 / Chapter 5.3.1 --- Background of HRRP radar target recognition --- p.106 / Chapter 5.3.2 --- Data description --- p.109 / Chapter 5.3.3 --- Experimental results --- p.111 / Chapter 5.4 --- Concluding Remarks --- p.113 / Chapter 6 --- Conclusions and Future Works --- p.114 / Chapter A --- Referred Parametric Distributions --- p.117 / Chapter B --- Derivations of GMM Algorithms --- p.119 / Chapter B.1 --- The BYY-DNW Algorithm --- p.119 / Chapter B.2 --- The MML-DNW Algorithm --- p.124 / Chapter B.3 --- The VB-DNW Algorithm --- p.127 / Chapter C --- Derivations of MFA/LFA Algorithms --- p.130 / Chapter C.1 --- The BYY Algorithms with DNG Priors --- p.130 / Chapter C.1.1 --- The BYY-DNG-MFA algorithm --- p.130 / Chapter C.1.2 --- The BYY-DNG-LFA algorithm --- p.137 / Chapter C.2 --- The VB Algorithms with DNG Priors --- p.145 / 
Chapter C.2.1 --- The VB-DNG-MFA algorithm --- p.145 / Chapter C.2.2 --- The VB-DNG-LFA algorithm --- p.149 / Bibliography --- p.152
8

Spectral Filtering for Spatio-temporal Dynamics and Multivariate Forecasts

Meng, Lu January 2016 (has links)
Due to the increasing availability of massive spatio-temporal data sets, modeling high dimensional data becomes quite challenging. A large number of research questions are rooted in identifying underlying dynamics in such spatio-temporal data. For many applications, the science suggests that the intrinsic dynamics be smooth and of low dimension. To reduce the variance of estimates and increase the computational tractability, dimension reduction is also quite necessary in the modeling procedure. In this dissertation, we propose a spectral filtering approach for dimension reduction and forecast amelioration, and apply it to multiple applications. We show the effectiveness of dimension reduction via our method and also illustrate its power for prediction in both simulation and real data examples. The resultant lower dimensional principal component series has a diagonal spectral density at each frequency whose diagonal elements are in descending order, which is not well motivated and can be hard to interpret. Therefore we propose a phase-based filtering method to create principal component series with interpretable dynamics in the time domain. Our method is based on an approach of structural decomposition and phase-aligned construction in the frequency domain, identifying lower-rank dynamics and their components embedded in a high dimensional spatio-temporal system. In both our simulated examples and real data applications, we illustrate that the proposed method is able to separate and identify meaningful lower-rank movements. Benefiting from the zero-coherence property of the principal component series, we subsequently develop a predictive model for high-dimensional forecasting via lower-rank dynamics. Our modeling approach reduces the multivariate modeling task to multiple univariate modeling tasks and can flexibly be combined with regularization techniques to obtain more stable estimates and improve interpretability. 
The simulation results and real data analysis show that our model achieves superior forecast performance compared to the class of autoregressive models.
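Frequency-domain principal components of the kind described can be sketched by eigendecomposing a smoothed periodogram estimate of the spectral density matrix at each frequency. The following toy function (my own construction, not the dissertation's estimator) reports how much power the leading eigenvector carries, on average across frequencies:

```python
import numpy as np

def leading_spectral_share(X, bandwidth=5):
    """For a multivariate time series X (T x d), estimate the spectral
    density matrix at each Fourier frequency by locally averaging the
    cross-periodogram, eigendecompose it, and return the average share
    of power carried by the leading eigenvector (sketch)."""
    T, d = X.shape
    Z = np.fft.fft(X - X.mean(0), axis=0)              # T x d DFT
    # raw cross-periodogram at each frequency: d x d, Hermitian
    I = np.einsum('ti,tj->tij', Z, Z.conj()) / T
    shares = []
    for t in range(1, T // 2):
        lo, hi = max(1, t - bandwidth), min(T // 2, t + bandwidth + 1)
        S = I[lo:hi].mean(0)                           # smoothed estimate
        eig = np.linalg.eigvalsh(S)                    # real, ascending
        shares.append(eig[-1] / eig.sum())
    return float(np.mean(shares))
```

A series driven by one common factor gives a share near 1, while independent noise spreads power across all d eigenvectors; the dissertation's contribution is to make such lower-rank components interpretable and usable for forecasting.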
9

Customer-centric data analysis. / 以顧客為本的數據分析 / CUHK electronic theses & dissertations collection / Yi gu ke wei ben de shu ju fen xi

January 2008 (has links)
With the advancement of information technology and declining hardware prices, organizations and companies are able to collect large amounts of personal data. Individual health records, product preferences and membership information are all converted into digital format. The ability to store and retrieve large amounts of electronic records benefits many parties. Useful knowledge often hides in a large pool of raw data. In many customer-centric applications, customers want to find some "best" services according to their needs. However, since different customers may have different preferences for "best" services, different services are suggested to different customers accordingly. In this thesis, we study models for different customer needs. Besides, customers also want to protect their individual privacy in many applications. In this thesis, we also study how individual privacy can be protected. / Wong, Chi Wing. / "June 2008." / Adviser: Ada Wai-Chee Fu. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1770. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 133-137). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
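Privacy protection for published customer data is often formalized via k-anonymity: every combination of quasi-identifier values must be shared by at least k records. A minimal checker, as a generic illustration only (the thesis's actual privacy models may differ):

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check whether a table satisfies k-anonymity: each combination
    of quasi-identifier values must appear in at least k records.

    records   : list of dicts (one dict per row)
    quasi_ids : attribute names treated as quasi-identifiers
    """
    counts = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())
```

In practice an anonymization algorithm generalizes or suppresses quasi-identifier values (e.g. exact age to an age range) until a check like this passes.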
10

Investigation on Bayesian Ying-Yang learning for model selection in unsupervised learning. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2005 (has links)
For factor analysis models, we develop an improved BYY harmony data smoothing learning criterion BYY-HDS by considering the dependence between the factors and observations. We make empirical comparisons of the BYY harmony empirical learning criterion BYY-HEC, BYY-HDS, the BYY automatic model selection method BYY-AUTO, AIC, CAIC, BIC, and CV for selecting the number of factors not only on simulated data sets of different sample sizes, noise variances, data dimensions and factor numbers, but also on two real data sets from air pollution data and sport track records, respectively. / Model selection is a critical issue in unsupervised learning. Conventionally, model selection is implemented in two phases by some statistical model selection criterion such as Akaike's information criterion (AIC), Bozdogan's consistent Akaike's information criterion (CAIC), Schwarz's Bayesian inference criterion (BIC), which formally coincides with the minimum description length (MDL) criterion, and the cross-validation (CV) criterion. These methods are very time-intensive and may become problematic when the sample size is small. Recently, the Bayesian Ying-Yang (BYY) harmony learning has been developed as a unified framework with new mechanisms for model selection and regularization. In this thesis we make a systematic investigation of BYY learning as well as several typical model selection criteria for model selection on factor analysis models, Gaussian mixture models, and factor analysis mixture models. / The most remarkable finding of our study is that BYY-HDS is superior to its counterparts, especially when the sample size is small. AIC, BYY-HEC, BYY-AUTO and CV have a risk of overestimating, while BIC and CAIC have a risk of underestimating in most cases. BYY-AUTO is superior to the other methods from a computational cost point of view. The cross-validation method requires the highest computing cost. (Abstract shortened by UMI.) / Hu Xuelei. / "November 2005." / Adviser: Lei Xu. 
/ Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 131-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
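The conventional criteria compared in this record (AIC, CAIC, BIC/MDL) all take the form of a penalized maximized log-likelihood, with lower values preferred. Their standard textbook formulas, as a quick reference (not taken from the thesis):

```python
import numpy as np

def info_criteria(loglik, k, n):
    """Standard penalized-likelihood model selection criteria.

    loglik : maximized log-likelihood of the candidate model
    k      : number of free parameters
    n      : sample size
    Lower values indicate preferred models.
    """
    return {
        'AIC':  -2.0 * loglik + 2.0 * k,
        'BIC':  -2.0 * loglik + k * np.log(n),         # coincides with MDL
        'CAIC': -2.0 * loglik + k * (np.log(n) + 1.0),
    }
```

Two-phase selection fits each candidate (e.g. each number of factors) by maximum likelihood and then picks the minimizer of one criterion, which is exactly the computational burden that automatic approaches such as BYY-AUTO aim to avoid.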
