1 |
Regularized models and algorithms for machine learning / Shen, Chenyang, 31 August 2015 (has links)
Multi-label learning (ML), multi-instance multi-label learning (MIML), large network learning, and random under-sampling are four active research topics in machine learning that have recently been studied intensively, and many open problems in them continue to attract worldwide attention. This thesis focuses on several novel methods designed for these research tasks. The main difference between ML learning and the traditional classification task is that in ML learning one object can be characterized by several different labels (or classes). One important observation is that the labels received by similar objects in ML data are usually highly correlated. To exploit this correlation between labels, which may be a key issue in ML learning, we require the resulting label indicator matrix to be low rank. In the proposed model, the nuclear norm, a well-known convex relaxation of the intractable matrix rank, is imposed on the label indicator in order to exploit the underlying correlation in the label domain. Motivated by spectral clustering, we also incorporate information from the feature domain by constructing a graph among objects based on their features. With partial label information available, we integrate these components into a convex low-rank model for ML learning. The proposed model can be solved efficiently by the alternating direction method of multipliers (ADMM). We test the performance on several benchmark ML data sets and compare with state-of-the-art algorithms. The classification results demonstrate the efficiency and effectiveness of the proposed low-rank methods. 
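The nuclear-norm subproblem at the heart of such an ADMM solver has a closed-form proximal step, singular value thresholding. A minimal generic sketch (not the thesis's exact model; the toy matrix and threshold are illustrative):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm).
    This is the low-rank subproblem an ADMM iteration must solve."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# toy example: denoise a nearly rank-1 "label indicator" matrix
rng = np.random.default_rng(0)
L = np.outer(rng.random(6), rng.random(4))                  # rank 1
Z = svt(L + 0.001 * rng.standard_normal(L.shape), tau=0.02)
```

Soft-thresholding the singular values zeroes out the small, noise-driven ones, so the output is low rank by construction.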
Going one step further, we consider the MIML learning problem, which is usually more complicated than ML learning: besides possibly having multiple labels, each object can be described by multiple instances simultaneously, which may significantly increase the size of the data. To handle the MIML learning problem, we first propose and develop a novel sparsity-based MIML learning algorithm. Our idea is to formulate a transductive objective function for the label indicator to be learned, using random walk with restart to exploit the relationships among instances and labels of objects and to compute the affinities among the objects. Sparsity is then introduced into the label indicator of the objective function so that relevant and irrelevant objects with respect to a given class can be distinguished. The resulting sparsity-based MIML model is a constrained convex optimization problem that can be solved very efficiently by the augmented Lagrangian method (ALM). Experimental results on benchmark data show that the proposed sparse-MIML algorithm is computationally efficient and effective in label prediction for MIML data, and that its performance is better than the other MIML learning algorithms tested. Moreover, a major concern for any MIML learning algorithm is computational efficiency, especially on large data sets. Most existing methods for solving MIML problems in the literature may require long computation times and huge storage for large MIML data sets. In this thesis, our main aim is to propose and develop an efficient Markov-chain-based learning algorithm for MIML problems. Our idea is to perform label classification among objects and feature identification iteratively through two Markov chains constructed from the objects and the features, respectively. 
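The random-walk-with-restart affinity computation can be sketched as follows (a generic RWR iteration; the transition matrix and restart vector here are toy stand-ins for the instance/label relationships used in the thesis):

```python
import numpy as np

def rwr(W, seed, c=0.15, iters=1000, tol=1e-10):
    """Random walk with restart: fixed point of r = c*seed + (1-c)*W @ r,
    where W is column-stochastic and seed is the restart distribution."""
    r = seed.copy()
    for _ in range(iters):
        r_new = c * seed + (1 - c) * (W @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# toy affinity graph over 4 objects, restarting at object 0
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                      # column-normalize
scores = rwr(W, np.array([1.0, 0.0, 0.0, 0.0]))
```

The scores form a probability vector whose mass concentrates near the restart node, which is what makes RWR a useful affinity measure.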
The classification of objects is obtained by label propagation through the training data in the iterative method. Because it is not necessary to compute and store a huge affinity matrix among objects/instances, both the storage and the computational time can be reduced significantly. For instance, on an MIML image data set of 10,000 objects and 250,000 instances, the proposed algorithm takes about 71 seconds. Experimental results on benchmark data sets illustrate the effectiveness of the proposed method in one-error, ranking loss, coverage, and average precision, and show that it is competitive with the other methods. In addition, we consider module identification in large biological networks. The interactions among different genes, proteins, and other small molecules are increasingly significant and have been studied intensively. One general way to understand these interactions is to analyze networks constructed from genes/proteins. In particular, module structure, a common property of most biological networks, has drawn much attention from researchers in different fields. However, biological networks may be corrupted by noise in the data, which often leads to misidentification of module structure. Moreover, some edges in the network may be removed (or some nodes misconnected) when improper parameters are selected, which may also significantly affect the modules identified. In short, module identification is sensitive both to noise and to the parameter settings of the network. In this thesis, we employ multiple networks for consistent module detection in order to reduce the effects of noise and parameter settings. Instead of studying different networks separately, our idea is to combine multiple networks by assembling them into tensor-structured data. 
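The storage argument, label propagation without ever forming the n-by-n affinity matrix, can be illustrated with an edge-list transition step (toy random graph; sizes and weights are hypothetical):

```python
import numpy as np

# Label propagation over an edge list: storage and per-step cost scale with the
# number of edges rather than n^2, sketching why no dense affinity matrix is
# ever formed (toy random graph; sizes and weights are hypothetical).
n = 1000
rng = np.random.default_rng(1)
src = rng.integers(0, n, 5000)
dst = rng.integers(0, n, 5000)

deg = np.bincount(src, minlength=n).astype(float)
w = 1.0 / np.maximum(deg[src], 1.0)      # row-stochastic edge weights

y = np.zeros(n)
y[:10] = 1.0                             # a few labeled training objects
f = y.copy()
for _ in range(50):
    pf = np.zeros(n)
    np.add.at(pf, src, w * f[dst])       # sparse transition step (P @ f)
    f = 0.9 * pf + 0.1 * y               # propagate, clamped to the labels
```

Each iteration touches only the 5,000 edges, so memory stays linear in the number of edges instead of quadratic in the number of objects.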
Given any node as prior label information, tensor-based Markov chains are then constructed iteratively to identify the modules shared by the multiple networks. In addition, the proposed tensor-based Markov chain algorithm is capable of simultaneously evaluating the contribution of each network, which is useful for measuring the consistency of modules across the multiple networks. In the experiments, we test our method on two groups of human gene co-expression networks and validate the biological meaning of the modules identified by the proposed method. Finally, we introduce random under-sampling techniques with application to X-ray computed tomography (CT). Under-sampling techniques are recognized as powerful tools for reducing the scale of a problem, especially in large-data analysis. However, some information loss seems unavoidable, which motivates different under-sampling strategies for preserving more useful information. Here we focus on under-sampling for the real-world CT reconstruction problem. The main motivation is to reduce the total radiation dose delivered to the patient, which has raised significant clinical concern in CT imaging. We compare two popular regular CT under-sampling strategies with random ray under-sampling. The results support the conclusion that random under-sampling consistently outperforms regular strategies, especially at high down-sampling ratios. Moreover, based on the random ray under-sampling strategy, we propose a novel scatter removal method that further improves the performance of random ray under-sampling in CT reconstruction.
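The two sampling schemes being compared can be sketched in a few lines (hypothetical sinogram dimensions; the reconstruction and scatter-removal steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_rays = 180, 256
sino = rng.random((n_views, n_rays))       # stand-in for measured projections

keep = 4                                    # 4x down-sampling ratio
regular = sino[::keep, :]                   # regular: every 4th view
idx = np.sort(rng.choice(n_views, size=n_views // keep, replace=False))
random_sub = sino[idx, :]                   # random: a uniform random subset
```

Both schemes retain the same number of measurements; the difference studied in the thesis is how the resulting aliasing artifacts behave under reconstruction.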
|
2 |
Machine learning models on random graphs. / CUHK electronic theses & dissertations collection / January 2007 (has links)
In summary, the viewpoint of random graphs indeed provides an opportunity to improve some existing machine learning algorithms. / In this thesis, we establish three machine learning models on random graphs: Heat Diffusion Models on Random Graphs, Predictive Random Graph Ranking, and Random Graph Dependency. The heat diffusion models on random graphs lead to Graph-based Heat Diffusion Classifiers (G-HDC) and a novel ranking algorithm for Web pages called DiffusionRank. For G-HDC, a random graph is constructed on the data points. The generated random graph can be considered a representation of the underlying geometry, and the heat diffusion model on it an approximation of the way heat flows on a geometric structure. Experiments show that G-HDC can achieve better accuracy on some benchmark datasets. For DiffusionRank, we show theoretically that it generalizes PageRank as the heat diffusion coefficient tends to infinity, and empirically that it resists manipulation. / Predictive Random Graph Ranking (PRGR) incorporates DiffusionRank. PRGR aims to solve the problem that incomplete information about the Web structure causes inaccurate results from various ranking algorithms. The Web structure is predicted as a random graph, on which ranking algorithms are expected to be more accurate. Experimental results show that the PRGR framework can improve the accuracy of ranking algorithms such as PageRank and Common Neighbor. / Three special forms of the novel Random Graph Dependency measure on two random graphs are investigated. The first special form can improve the speed of the C4.5 algorithm and achieves better results on attribute selection than the gamma measure used in Rough Set Theory. 
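The heat-diffusion ranking idea can be sketched with a truncated matrix-exponential series (a generic discretization; the thesis's exact diffusion operator and its random-graph prediction step may differ):

```python
import numpy as np

def heat_rank(P, r0, gamma=1.0, n_terms=30):
    """Scores exp(-gamma*(I - P)) @ r0 = e^{-gamma} * sum_k (gamma^k/k!) P^k r0.
    As gamma grows, the scores approach the stationary (PageRank-like) vector
    of the column-stochastic matrix P."""
    term = r0.copy()
    r = np.exp(-gamma) * term
    for k in range(1, n_terms):
        term = (gamma / k) * (P @ term)     # next Taylor term, applied to r0
        r = r + np.exp(-gamma) * term
    return r

# toy 3-page link graph, column-stochastic
P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
r = heat_rank(P, np.ones(3) / 3)
```

Because P is column-stochastic, the diffusion conserves total score mass (up to series truncation), so the result remains a probability vector over pages.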
The second special form of the general random graph dependency measure generalizes the conditional entropy, because it becomes equivalent to the conditional entropy when the random graphs take a special form: equivalence relations. Experiments demonstrate that the second form is an informative measure, showing its success in decision trees on small-sample-size problems. The third special form can help to search for two parameters in G-HDC faster than cross-validation. / Yang, Haixuan. / "August 2007." / Advisers: Irwin King; Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1125. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 184-197). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
|
3 |
Exploring attributes and instances for customized learning based on support patterns. / CUHK electronic theses & dissertations collection / January 2005 (has links)
Both the learning model and the learning process of CSPL are customized to different query instances. CSPL can use the characteristics of the query instance to explore a focused hypothesis space effectively during classification. Unlike many existing learning methods, CSPL conducts learning from specific to general, effectively avoiding the horizon effect. Empirical investigation demonstrates that learning from specific to general can discover more useful patterns. Experimental results on benchmark data sets and real-world problems demonstrate that our CSPL framework has prominent learning performance in comparison with existing learning methods. / CSPL integrates the attributes and instances in a query matrix model under a customized learning framework. Within this query matrix model, it can be demonstrated that attributes and instances have a useful symmetry property for learning. This symmetry property leads to a way of counteracting the negative effect of sparse instances with the abundance of attribute information, which was previously viewed as a curse of dimensionality for common learning methods. Given this symmetry property, we propose to use support patterns as the basic learning unit of CSPL, i.e., the patterns to be explored. Generally, a support pattern can be viewed as a sub-matrix of the query matrix, consisting of its associated support instances and attribute values. CSPL discovers useful support patterns and combines their statistics to classify unseen instances. / The development of machine learning techniques still faces a number of challenges. Real-world problems often require a more flexible and dynamic learning method, customized to the learning scenario and user demand. For example, real-world applications quite often require making a critical decision with only limited data but a huge number of potentially relevant attributes. 
Therefore, we propose a novel customized learning framework called Customized Support Pattern Learner (CSPL), which exploits a tradeoff between instance-based learning and attribute-based learning. / Han Yiqiu. / "October 2005." / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3898. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 99-104). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
|
4 |
Learning from data locally and globally. / CUHK electronic theses & dissertations collection / Digital dissertation consortium / January 2004 (has links)
Huang Kaizhu. / "July 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 176-194) / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
|
5 |
Generalized regularized learning. / 廣義正則化學習 / CUHK electronic theses & dissertations collection / Guang yi zheng ze hua xue xi / January 2007 (has links)
A classical algorithm in classification is the support vector machine (SVM). Based on Vapnik's statistical learning theory, it tries to find a linear boundary with maximum margin to separate the given data into different classes. In the non-separable case, SVM uses the kernel trick to map the data into a feature space and finds a linear boundary in that new space. / Different algorithms are derived from the framework. When the empirical error is defined by a quadratic loss, we obtain a generalized regularized least-squares learning algorithm. When the idea is applied to SVM, we obtain a semi-parametric SVM algorithm. We also derive a third algorithm that generalizes the kernel logistic regression algorithm. / How should non-regularized features be chosen? We give some empirical studies: we use dimensionality reduction techniques in text categorization, extract non-regularized intrinsic features for the high-dimensional data, and report improved results. / Instead of understanding SVM's behavior through Vapnik's theory, our work follows the regularized learning viewpoint. In regularized learning, one seeks a solution from a function space that has small empirical error in explaining the input-output relationship of the training data, while keeping the solution simple. / To enforce this simplicity, the complexity of the solution is penalized, involving all features in the function space. An equal penalty, as in standard regularized learning, is reasonable when the significance of individual features is unknown. But what if we have prior knowledge that some features are more important than others? Instead of penalizing all features, we study a generalized regularized learning framework in which part of the function space is not penalized, and we derive its corresponding solution. / The two generalized algorithms need to solve positive definite linear systems to obtain their parameters. How can a large-scale linear system be solved efficiently? 
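In the quadratic-loss case, leaving some features unpenalized amounts to a selective ridge penalty. A finite-dimensional (linear) sketch follows; the thesis works in a kernel space, and all data and coefficients here are synthetic:

```python
import numpy as np

# Generalized regularized least squares with a selective penalty: features
# marked 0 in `penalized` are left unpenalized, as in the generalized
# framework (a finite-dimensional sketch with synthetic data).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)

lam = 10.0
penalized = np.array([0.0, 1.0, 1.0, 1.0, 0.0])   # features 0 and 4 free
D = np.diag(penalized)
# minimize ||y - X w||^2 + lam * w' D w  =>  (X'X + lam D) w = X' y
w = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
```

The system matrix is positive definite whenever X has full column rank, which is the linear analogue of the positive definite systems the thesis solves by domain decomposition.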
Different from previous work in machine learning, where people generally resort to the conjugate gradient method, our work proposes a domain decomposition approach. New interpretations and improved results are reported accordingly. / Li, Wenye. / "September 2007." / Advisers: Kwong-Sak Leung; Kin-Hong Lee. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4850. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 101-109). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
|
6 |
Hierarchical average reward reinforcement learning / Seri, Sandeep, 15 March 2002 (has links)
Reinforcement Learning (RL) is the study of agents that learn optimal
behavior by interacting with and receiving rewards and punishments from an unknown
environment. RL agents typically do this by learning value functions that
assign a value to each state (situation) or to each state-action pair. Recently,
there has been a growing interest in using hierarchical methods to cope with the
complexity that arises due to the huge number of states found in most interesting
real-world problems. Hierarchical methods seek to reduce this complexity by the
use of temporal and state abstraction. Like most RL methods, most hierarchical
RL methods optimize the discounted total reward that the agent receives. However,
in many domains, the proper criterion to optimize is the average reward per
time step.
In this thesis, we adapt the concepts of hierarchical and recursive optimality,
which are used to describe the kind of optimality achieved by hierarchical methods,
to the average reward setting and show that they coincide under a condition called
Result Distribution Invariance. We present two new model-based hierarchical RL
methods, HH-learning and HAH-learning, that are intended to optimize the average
reward. HH-learning is a hierarchical extension of the model-based, average-reward RL method, H-learning. Like H-learning, HH-learning requires exploration
in order to learn correct domain models and an optimal value function. HH-learning
can be used with any exploration strategy whereas HAH-learning uses the principle
of "optimism under uncertainty", which gives it a built-in "auto-exploratory"
feature. We also give the hierarchical and auto-exploratory hierarchical versions
of R-learning, a model-free average reward method, and a hierarchical version of
ARTDP, a model-based discounted total reward method.
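For concreteness, the flat average-reward update underlying these methods (R-learning's value and gain updates) can be sketched on a toy two-state MDP; the dynamics, learning rates, and step counts here are hypothetical:

```python
import random

# Flat R-learning on a toy 2-state MDP (the hierarchical versions in the thesis
# extend this average-reward update with task decomposition).
random.seed(0)
states, actions = [0, 1], [0, 1]
R = {(s, a): 0.0 for s in states for a in actions}
rho, alpha, beta = 0.0, 0.1, 0.01           # gain estimate and learning rates

def step(s, a):
    # hypothetical dynamics: action 1 switches state; being in state 1 pays 1
    s2 = 1 - s if a == 1 else s
    return s2, (1.0 if s2 == 1 else 0.0)

s = 0
for _ in range(20000):
    if random.random() < 0.1:               # epsilon-greedy exploration
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda b: R[(s, b)])
    s2, r = step(s, a)
    best_next = max(R[(s2, b)] for b in actions)
    best_here = max(R[(s, b)] for b in actions)
    if R[(s, a)] == best_here:              # update gain on greedy steps only
        rho += beta * (r + best_next - best_here - rho)
    R[(s, a)] += alpha * (r - rho + best_next - R[(s, a)])
    s = s2
```

Under the greedy policy the agent stays in state 1, so the gain estimate rho should settle near the optimal average reward of 1 per step.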
We compare the performance of the "flat" and hierarchical methods in the
task of scheduling an Automated Guided Vehicle (AGV) in a variety of settings.
The results show that hierarchical methods can take advantage of temporal and
state abstraction and converge in fewer steps than the flat methods. The exception
is the hierarchical version of ARTDP. We give an explanation for this anomaly.
Auto-exploratory hierarchical methods are faster than the hierarchical methods
with ε-greedy exploration. Finally, hierarchical model-based methods are faster
than hierarchical model-free methods. / Graduation date: 2003
|
7 |
Learning non-Gaussian factor analysis with different structures: comparative investigations on model selection and applications. / 基於多種結構的非高斯因數分析的模型選擇學習演算法比較研究及其應用 / CUHK electronic theses & dissertations collection / Ji yu duo zhong jie gou de fei Gaosi yin shu fen xi de mo xing xuan ze xue xi yan suan fa bi jiao yan jiu ji qi ying yong / January 2012 (has links)
Mining the underlying structure from high-dimensional observations is of critical importance in machine learning, pattern recognition, and bioinformatics. In this thesis, we investigate, both empirically and theoretically, non-Gaussian Factor Analysis (NFA) models with different underlying structures. 
We focus on the problem of determining the number of latent factors in NFA, from two-stage model selection to automatic model selection, with real applications in pattern recognition and bioinformatics. / We start with a degenerate case of NFA, the conventional Factor Analysis (FA) model with latent Gaussian factors. Many model selection methods have been proposed and used for FA, and it is important to examine their relative strengths and weaknesses. We develop an empirical analysis tool to facilitate a systematic comparison of the model selection performance of classical criteria (e.g., Akaike's information criterion, or AIC), recently developed methods (e.g., Kritchman & Nadler's hypothesis tests), and the Bayesian Ying-Yang (BYY) harmony learning. We also prove a theoretical relative ordering of the underestimation tendencies of four classical criteria. / Then, we investigate how parameterization affects model selection performance, an issue that has been ignored or seldom studied, since traditional model selection criteria such as AIC perform identically on different parameterizations with equivalent likelihood functions. Focusing on two typical parameterizations of FA, we find through extensive experiments on synthetic and real data that one is better than the other under both Variational Bayes (VB) and BYY. Moreover, a family of FA parameterizations with equivalent likelihood functions is presented, where each member is indexed by an integer r, with the two known parameterizations at the two ends as r varies from zero to its upper bound. Investigations on this FA family not only confirm the significant difference between the two parameterizations in terms of model selection performance, but also provide insights into what makes a better parameterization. 
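A minimal sketch of the two-stage route in the Gaussian case: score each candidate factor number k with BIC, using the probabilistic-PCA profile likelihood as a simple stand-in for the FA criteria and algorithms compared in the thesis (synthetic data with a known true dimension):

```python
import numpy as np

# Two-stage model selection sketch: fit each candidate factor number k and
# score it by BIC via the probabilistic-PCA profile likelihood (a stand-in
# for the FA criteria compared in the thesis; data are synthetic).
rng = np.random.default_rng(0)
N, d, k_true = 2000, 8, 3
A = rng.standard_normal((d, k_true))
X = rng.standard_normal((N, k_true)) @ A.T + 0.1 * rng.standard_normal((N, d))

lam = np.linalg.eigvalsh(np.cov(X.T))[::-1]        # eigenvalues, descending
def bic(k):
    sigma2 = lam[k:].mean()                         # ML noise variance
    ll = -0.5 * N * (d * np.log(2 * np.pi) + np.log(lam[:k]).sum()
                     + (d - k) * np.log(sigma2) + d)
    n_params = d * k - k * (k - 1) / 2 + 1 + d      # loadings + noise + mean
    return -2 * ll + n_params * np.log(N)

k_hat = min(range(1, d), key=bic)                   # BIC-selected factor number
```

With the strong signal-to-noise separation used here, BIC recovers the true latent dimension; the thesis studies how such criteria behave when the separation is weaker.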
With a Bayesian treatment of the new FA family, alternative VB algorithms for FA are derived, and BYY algorithms for FA are extended to incorporate prior distributions on the parameters. A systematic comparison shows that BYY generally outperforms VB under various scenarios, including varying simulation configurations, incrementally adding priors to parameters, and automatic model selection. / To describe binary latent features, we proceed to binary factor analysis (BFA), which considers Bernoulli factors. First, we introduce a canonical dual approach to tackle a difficult Binary Quadratic Programming (BQP) problem encountered as a computational bottleneck in BFA learning. Although it is not an exact BQP solver, it improves the learning speed and the model selection accuracy, which indicates that some amount of error in solving the BQP, a problem nested within the whole learning process, can benefit both computational efficiency and model selection performance. The results also imply that optimization is important in learning, but learning is not simply optimization. Second, we develop BFA algorithms under VB and BYY that incorporate Bayesian priors on the parameters to improve automatic model selection, and again show that BYY is superior to VB in a systematic comparison. Third, for binary observations, we propose a Bayesian Binary Matrix Factorization (BMF) algorithm under the BYY framework. The performance of the BMF algorithm is guaranteed by theoretical proofs and verified by experiments. We apply it to discovering protein complexes from protein-protein interaction (PPI) networks, an important problem in bioinformatics, where it outperforms other related methods. / Furthermore, we investigate NFA under a semi-blind learning framework. In practice, there are many scenarios in which either or both of the system and the input are partially known. 
Here, we modify Network Component Analysis (NCA) to model gene transcriptional regulation in systems biology via NFA. The previous hard-cut NFA algorithm is extended into a sparse BYY-NFA by considering either or both of a priori connectivity and a priori sparsity constraints. Therefore, the a priori knowledge of the TF-gene regulatory network topology required by NCA is not necessary for our NFA algorithm. The sparse BYY-NFA can be further modified into a sparse BYY-BFA algorithm, which directly models the switching patterns of latent transcription factor (TF) activities in gene regulation, e.g., whether or not a TF is activated. Mining switching patterns provides insights into the regulation mechanisms of many biological processes. / Finally, the semi-blind NFA learning is applied to identify single nucleotide polymorphisms (SNPs) significantly associated with a disease or a complex trait from exome sequencing data. By encoding each exon/gene (which may contain multiple SNPs) as a vector, an NFA classifier, trained in a supervised way on a training set, is used for prediction on a testing set. Genes are selected according to the p-values of Fisher's exact test on the confusion tables collected from the prediction results. The genes selected on a real dataset from an exome sequencing project on psoriasis are partly consistent with published results, and some are probably novel susceptibility genes of the disease according to the validation results. / Tu, Shikui. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 196-212). / Electronic reproduction. 
Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.1.1 --- Motivations --- p.1 / Chapter 1.1.2 --- Independent Factor Analysis (IFA) --- p.2 / Chapter 1.1.3 --- Learning Methods --- p.6 / Chapter 1.2 --- Related Work --- p.14 / Chapter 1.2.1 --- Learning Gaussian FA --- p.14 / Chapter 1.2.2 --- Learning NFA --- p.16 / Chapter 1.2.3 --- Learning Semi-blind NFA --- p.18 / Chapter 1.3 --- Main Contribution of the Thesis --- p.18 / Chapter 1.4 --- Thesis Organization --- p.25 / Chapter 1.5 --- Publication List --- p.27 / Chapter 2 --- FA comparative analysis --- p.31 / Chapter 2.1 --- Determining the factor number --- p.32 / Chapter 2.2 --- Model Selection Methods --- p.34 / Chapter 2.2.1 --- Two-Stage Procedure and Classical Model Selection Criteria --- p.34 / Chapter 2.2.2 --- Kritchman&Nadler's Hypothesis Test (KN) --- p.35 / Chapter 2.2.3 --- Minimax Rank Estimation (MM) --- p.37 / Chapter 2.2.4 --- Minka's Criterion (MK) for PCA --- p.38 / Chapter 2.2.5 --- Bayesian Ying-Yang (BYY) Harmony Learning --- p.39 / Chapter 2.3 --- Empirical Analysis --- p.42 / Chapter 2.3.1 --- A New Tool for Empirical Comparison --- p.42 / Chapter 2.3.2 --- Investigation On Model Selection Performance --- p.44 / Chapter 2.4 --- A Theoretic Underestimation Partial Order --- p.49 / Chapter 2.4.1 --- Events of Estimating the Hidden Dimensionality --- p.49 / Chapter 2.4.2 --- The Structural Property of the Criterion Function --- p.49 / Chapter 2.4.3 --- Experimental Justification --- p.54 / Chapter 2.5 --- Concluding Remarks --- p.58 / Chapter 3 --- FA parameterizations affect model selection --- p.70 / Chapter 3.1 --- Parameterization Issue in Model Selection --- p.71 / Chapter 3.2 --- FAr: ML-equivalent Parameterizations of FA --- p.72 / Chapter 3.3 --- 
Variational Bayes on FAr --- p.74 / Chapter 3.4 --- Bayesian Ying-Yang Harmony Learning on FAr --- p.77 / Chapter 3.5 --- Empirical Analysis --- p.82 / Chapter 3.5.1 --- Three levels of investigations --- p.82 / Chapter 3.5.2 --- FA-a vs FA-b: performances of BYY, VB, AIC, BIC, and DNLL --- p.84 / Chapter 3.5.3 --- FA-r: performances of VB versus BYY --- p.87 / Chapter 3.5.4 --- FA-a vs FA-b: automatic model selection performance of BYYandVB --- p.90 / Chapter 3.5.5 --- Classification Performance on Real World Data Sets --- p.92 / Chapter 3.6 --- Concluding remarks --- p.93 / Chapter 4 --- BFA learning versus optimization --- p.104 / Chapter 4.1 --- Binary Factor Analysis --- p.105 / Chapter 4.2 --- BYY Harmony Learning on BFA --- p.107 / Chapter 4.3 --- Empirical Analysis --- p.108 / Chapter 4.3.1 --- BIC and Variational Bayes (VB) on BFA --- p.108 / Chapter 4.3.2 --- Error in solving BQP affects model selection --- p.110 / Chapter 4.3.3 --- Priors over parameters affect model selection --- p.114 / Chapter 4.3.4 --- Comparisons among BYY, VB, and BIC --- p.115 / Chapter 4.3.5 --- Applications in recovering binary images --- p.116 / Chapter 4.4 --- Concluding Remarks --- p.117 / Chapter 5 --- BMF for PPI network analysis --- p.124 / Chapter 5.1 --- The problem of protein complex prediction --- p.125 / Chapter 5.2 --- A novel binary matrix factorization (BMF) algorithm --- p.126 / Chapter 5.3 --- Experimental Results --- p.130 / Chapter 5.3.1 --- Other methods in comparison --- p.130 / Chapter 5.3.2 --- Data sets --- p.131 / Chapter 5.3.3 --- Evaluation criteria --- p.131 / Chapter 5.3.4 --- On altered graphs by randomly adding and deleting edges --- p.132 / Chapter 5.3.5 --- On real PPI data sets --- p.137 / Chapter 5.3.6 --- On gene expression data for biclustering --- p.137 / Chapter 5.4 --- A Theoretical Analysis on BYY-BMF --- p.138 / Chapter 5.4.1 --- Main results --- p.138 / Chapter 5.4.2 --- Experimental justification --- p.140 / Chapter 5.4.3 --- Proofs --- 
p.143 / Chapter 5.5 --- Concluding Remarks --- p.147 / Chapter 6 --- Semi-blind NFA: algorithms and applications --- p.148 / Chapter 6.1 --- Determining transcription factor activity --- p.148 / Chapter 6.1.1 --- A brief review on NCA --- p.149 / Chapter 6.1.2 --- Sparse NFA --- p.150 / Chapter 6.1.3 --- Sparse BFA --- p.156 / Chapter 6.1.4 --- On Yeast cell-cycle data --- p.160 / Chapter 6.1.5 --- On E. coli carbon source transition data --- p.166 / Chapter 6.2 --- Concluding Remarks --- p.170 / Chapter 7 --- Applications on Exome Sequencing Data Analysis --- p.172 / Chapter 7.1 --- From GWAS to Exome Sequencing --- p.172 / Chapter 7.2 --- Encoding An Exon/Gene --- p.173 / Chapter 7.3 --- An NFA Classifier --- p.175 / Chapter 7.4 --- Results --- p.176 / Chapter 7.4.1 --- Simulation --- p.176 / Chapter 7.4.2 --- On a real exome sequencing data set: AHMUe --- p.177 / Chapter 7.5 --- Concluding Remarks --- p.186 / Chapter 8 --- Conclusion and FutureWork --- p.187 / Chapter A --- Derivations of the learning algorithms on FA-r --- p.190 / Chapter A.1 --- The VB learning algorithm on FA-r --- p.190 / Chapter A.2 --- The BYY learning algorithm on FA-r --- p.193 / Bibliography --- p.195
|
8 |
Discretization for Naive-Bayes learning. Yang, Ying. January 2003 (has links)
Abstract not available
|
9 |
Efficient portfolio optimisation by hybridised machine learning. 26 March 2015 (has links)
D.Ing. / The task of managing an investment portfolio is one that continues to challenge both professionals and private individuals on a daily basis. Contrary to popular belief, the desire of these actors is not in all (or even most) instances to generate the highest profits imaginable, but rather to achieve an acceptable return for a given level of risk. In other words, the investor desires to have his funds generate money for him, while not feeling that he is gambling away his (or his clients') funds. The reasons for a given risk tolerance (or risk appetite) are as varied as the clients themselves: some clients will simply have their own arbitrary risk appetites, others may need to maintain certain values to satisfy their mandates, and still others may need to meet regulatory requirements. In order to accomplish this task, many measures and representations of performance data are employed to both communicate and understand the risk-reward trade-offs involved in the investment process. In light of the recent economic crisis, greater understanding and control of investment is being clamoured for around the globe, along with the concomitant finger-pointing and blame-assignation that inevitably follows such turmoil and such heavy costs. The reputation of the industry, always dubious in the best of times, has also taken a significant knock after these events, and while this author would not like to point fingers, clearly the managers of funds, custodians of other people's money, are in no small measure responsible for the loss of the funds under their care. It is with these concerns in mind that this thesis explores the potential for utilising the powerful tools found within the disciplines of artificial intelligence and machine learning to aid fund managers in the balancing of portfolios, tailored specifically to their clients' individual needs.
These fields hold particular promise due to their focus on generalised pattern recognition, multivariable optimisation and continuous learning. With these tools in hand, a fund manager is able to continuously rebalance a portfolio for a client, given the client's specific needs, and achieve optimal results while staying within the client's risk parameters (in other words, keeping within the client's comfort zone in terms of price/value fluctuations). This thesis will first explore the drivers and constraints behind the investment process, as well as the process undertaken by the fund manager as recommended by the CFA (Chartered Financial Analyst) Institute. The thesis will then elaborate on the existing theory behind modern investment theory, and the mathematics and statistics that underlie the process. Some common tools from the field of Technical Analysis will be examined, and their implicit assumptions and limitations will be shown, both for understanding and to demonstrate how they can still be utilised once their limitations are explicitly known. Thereafter the thesis will present the various tools from within the fields of machine learning and artificial intelligence that form the heart of the work herein. Particular attention will be paid to data structuring, and the inherent dangers to be aware of when structuring data representations for computational use. The thesis will then illustrate how to create an optimiser using a genetic algorithm for the purpose of balancing a portfolio. Lastly, it will be shown how to create a learning system that continues to update its own understanding, and how to combine these into a hybrid learning optimiser that enables fund managers to do their job effectively and safely.
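The genetic-algorithm portfolio optimiser described above can be sketched in miniature. The asset returns, risk figures, fitness function, and crossover/mutation scheme below are all illustrative assumptions, not the thesis's actual implementation:

```python
import random

# Hypothetical per-asset expected returns and risk figures (illustrative only).
EXPECTED_RETURNS = [0.08, 0.12, 0.05, 0.10]
RISK = [0.10, 0.25, 0.05, 0.18]
RISK_AVERSION = 2.0  # stands in for the client's risk appetite


def normalise(w):
    # Keep the weights a valid allocation: positive and summing to one.
    s = sum(w)
    return [x / s for x in w]


def fitness(w):
    # Reward expected return, penalise a crude risk proxy (assumed form).
    ret = sum(wi * ri for wi, ri in zip(w, EXPECTED_RETURNS))
    risk = sum(wi * si for wi, si in zip(w, RISK))
    return ret - RISK_AVERSION * risk ** 2


def evolve(pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [normalise([rng.random() for _ in EXPECTED_RETURNS])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
            i = rng.randrange(len(child))
            child[i] = max(1e-6, child[i] + rng.gauss(0, 0.05))    # mutation
            children.append(normalise(child))
        pop = survivors + children
    return max(pop, key=fitness)


best = evolve()
```

A real system would replace the toy fitness with the client's actual risk-return mandate and re-run the evolution as new market data arrive.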
|
10 |
Sparse learning under regularization framework. / 正則化框架下的稀疏學習 / CUHK electronic theses & dissertations collection / Zheng ze hua kuang jia xia de xi shu xue xi. January 2011 (has links)
Regularization is a dominant theme in machine learning and statistics due to its prominent ability to provide an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become popular, developing efficient algorithms and parsimonious models becomes promising and necessary for these applications. Aiming at solving large-scale learning problems, this thesis tackles key research problems ranging from feature selection to learning with unlabeled data and learning data similarity representation. More specifically, we focus on problems in three areas: online learning, semi-supervised learning, and multiple kernel learning. / The first part of this thesis develops a novel online learning framework to solve group lasso and multi-task feature selection. To the best of our knowledge, the proposed online learning framework is the first for the corresponding models. The main advantages of the online learning algorithms are that (1) they can work in applications where training data arrive sequentially, so the training procedure can be started at any time; and (2) they can handle data of any size with any number of features. The efficiency of the algorithms is attained because we derive closed-form solutions to update the weights of the corresponding models. At each iteration, the online learning algorithms need only O(d) time complexity and memory cost for group lasso, and O(d × Q) for multi-task feature selection, where d is the number of dimensions and Q is the number of tasks. Moreover, we provide theoretical analysis of the average regret of the online learning algorithms, which also guarantees their convergence rate. In addition, we extend the online learning framework to solve several related models which yield sparser solutions.
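The closed-form weight updates behind the O(d)-per-iteration cost claimed above can be illustrated with a standard online group soft-thresholding step. The squared loss, step size, and regularisation constant below are illustrative assumptions, not the thesis's exact algorithm:

```python
import math

def group_soft_threshold(w, groups, thresh):
    """Shrink each group of weights toward zero in closed form; a group whose
    Euclidean norm falls below `thresh` is zeroed out entirely."""
    out = list(w)
    for g in groups:
        norm = math.sqrt(sum(w[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - thresh / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * w[i]
    return out


def online_group_lasso_step(w, x, y, groups, eta=0.1, lam=0.05):
    # Squared-loss gradient step on a single streamed example (x, y) ...
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [(pred - y) * xi for xi in x]
    u = [wi - eta * gi for wi, gi in zip(w, grad)]
    # ... followed by the closed-form group shrinkage; both steps cost O(d).
    return group_soft_threshold(u, groups, eta * lam)


# One pass over a toy stream with two feature groups; only the first group
# ever receives signal, so the second is driven to exact zero.
groups = [[0, 1], [2, 3]]
w = [0.0] * 4
stream = [([1.0, 0.5, 0.0, 0.0], 1.0), ([0.8, 0.4, 0.0, 0.0], 0.9)]
for x, y in stream:
    w = online_group_lasso_step(w, x, y, groups)
```

The multi-task variant would apply the same shrinkage across the Q task-specific weight vectors for each feature, giving the O(d × Q) cost mentioned above.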
/ The second part of this thesis addresses a general scenario of semi-supervised learning for the binary classification problem, where the unlabeled data may be a mixture of relevant and irrelevant data with respect to the target binary classification task. Without specifying the relatedness in the unlabeled data, we develop a novel maximum margin classifier, named the tri-class support vector machine (3C-SVM), to seek an inductive rule that can separate these data into three categories: -1, +1, or 0. This is achieved by adopting a novel min loss function and following the maximum entropy principle. For the implementation, we approximate the problem and solve it by a standard concave-convex procedure (CCCP). The approach is very efficient, making it possible to handle large-scale datasets. / The third part of this thesis focuses on multiple kernel learning (MKL) to address the insufficiency of the L1-MKL and the Lp-MKL models. Hence, we propose a generalized MKL (GMKL) model by introducing an elastic-net-type constraint on the kernel weights. More specifically, it is an MKL model with a constraint on a linear combination of the L1-norm and the square of the L2-norm on the kernel weights to seek the optimal kernel combination weights. Therefore, previous MKL problems based on the L1-norm or the L2-norm constraints can be regarded as its special cases. Moreover, our GMKL enjoys a favorable sparsity property in the solution and also facilitates the grouping effect. In addition, the optimization of our GMKL is a convex optimization problem, where a local solution is the globally optimal solution. We further derive the level method to efficiently solve the optimization problem. / Yang, Haiqin. / Advisers: Kuo Chin Irwin King; Michael Rung Tsong Lyu. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 152-173). / Electronic reproduction.
Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
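The elastic-net-type constraint on the kernel weights in the GMKL model above can be made concrete with a toy feasibility check and kernel combination. The mixing parameter theta, the weight vector, and the base kernels below are illustrative assumptions, not values from the thesis:

```python
def elastic_net_feasible(mu, theta=0.5, budget=1.0, tol=1e-9):
    """Check the constraint theta*||mu||_1 + (1-theta)*||mu||_2^2 <= budget.
    theta=1 recovers the L1-MKL constraint, theta=0 an L2-style constraint."""
    l1 = sum(abs(m) for m in mu)
    sq_l2 = sum(m * m for m in mu)
    return theta * l1 + (1 - theta) * sq_l2 <= budget + tol


def combine_kernels(mu, kernels):
    """Weighted sum of base kernel matrices (given as lists of lists)."""
    n = len(kernels[0])
    K = [[0.0] * n for _ in range(n)]
    for m, Kb in zip(mu, kernels):
        for i in range(n):
            for j in range(n):
                K[i][j] += m * Kb[i][j]
    return K


# Two toy 2x2 base kernels and a sparse weight vector that satisfies the
# constraint with theta=0.5: 0.5*0.9 + 0.5*0.81 = 0.855 <= 1.
K1 = [[1.0, 0.2], [0.2, 1.0]]
K2 = [[1.0, 0.8], [0.8, 1.0]]
mu = [0.9, 0.0]

K = combine_kernels(mu, [K1, K2])
```

In the full model the weights mu would be optimised jointly with the classifier (e.g. via the level method mentioned above) rather than fixed by hand; the sketch only shows the constraint geometry and the kernel combination step.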
|