11.
Incremental nonparametric discriminant analysis based active learning and its applications: a thesis submitted to Auckland University of Technology in partial fulfillment [sic] of the requirements for the degree of Master of Computer and Information Sciences (MCIS), 18th March 2010 / Dhoble, Kshitij. January 2010
Thesis (MCIS)--AUT University, 2010. / Includes bibliographical references. Also held in print (leaves : ill. ; 30 cm.) in the Archive at the City Campus (T 006.31 DHO)
12.
Statistical learning algorithms : multi-class classification and regression with non-i.i.d. sampling / Pan, Zhiwei. January 2009
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [65]-75)
13.
Creating diverse ensemble classifiers to reduce supervision / Melville, Prem Noel, January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2005. / Vita. Includes bibliographical references.
14.
Regularized adaptation : theory, algorithms, and applications / Li, Xiao, January 2007
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 132-146).
15.
Task-oriented learning of structured probability distributions / Bouchacourt, Diane. January 2017
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multi-dimensional structures. In many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address the aforementioned deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider that the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty in the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of choice, so our objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is implicitly expressed. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference. We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
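A dissimilarity coefficient of the kind described in this abstract is typically built from the expected task loss between the two distributions minus the within-distribution expected losses. The display below is a sketch of that generic form, with an assumed equal weighting of the two self-terms; it is not necessarily the exact objective used in the thesis.

```latex
% Sketch of a generic dissimilarity coefficient between the data
% distribution P and the model distribution Q_theta under a task loss
% Delta; the 1/2 weights are an assumption, not the thesis's exact choice.
\[
  \mathrm{DISC}_{\Delta}(P, Q_\theta) \;=\;
  \mathbb{E}_{y \sim P,\; \hat{y} \sim Q_\theta}\!\left[\Delta(y, \hat{y})\right]
  \;-\; \tfrac{1}{2}\,\mathbb{E}_{y, y' \sim P}\!\left[\Delta(y, y')\right]
  \;-\; \tfrac{1}{2}\,\mathbb{E}_{\hat{y}, \hat{y}' \sim Q_\theta}\!\left[\Delta(\hat{y}, \hat{y}')\right]
\]
```

Because the loss Δ appears directly in the objective, training can be steered toward whatever evaluation loss the task specifies.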
16.
Learning in the real world environment: a classification model based on sensitivity to within-dimension and between-category variation of feature frequencies / Lam, Newman Ming Ki. 22 June 2018
Research on machine learning has taken numerous different directions. The present study focussed on the microstructural characteristics of learning systems. It was postulated that learning systems consist of a macrostructure, which controls the flow of information, and a microstructure, which manipulates information for decision making. A review of the literature suggested that the basic function of the microstructure of learning systems was to make a choice among a set of alternatives. This decision function was then equated with the task of making classification decisions. On the basis of the requirements for practical learning systems, the feature frequency approach was chosen for model development. An analysis of the feature frequency approach indicated that an effective model must be sensitive to both within-dimension and between-category variations in frequencies. A model was then developed to provide for such sensitivities. The model was based on Bayes' theorem with an assumption of uniform prior probability of occurrence for the categories. This model was tested using data collected for neuropsychological diagnosis of children. Results of the tests showed that the model was capable of learning and provided a satisfactory level of performance. The performance of the model was compared with that of other models designed for the same purpose. The other models included NEXSYS, a rule-based system specially designed for this type of diagnosis; discriminant analysis, a statistical technique widely used for pattern recognition; and neural networks, which attempt to simulate the neural activities of the brain. Results of the tests showed that the model's performance was comparable to that of the other models. Further analysis indicated that the model has certain advantages in that it has a simple structure, is capable of explaining its decisions, and is more efficient than the other models. / Graduate
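As a concrete illustration of the kind of classifier the abstract describes (Bayes' rule with uniform category priors driven by per-dimension feature frequencies), here is a minimal Python sketch. The class name, smoothing choice, and toy data are assumptions for illustration, not the thesis's actual model or data.

```python
# Minimal sketch of a feature-frequency classifier: Bayes' rule with uniform
# category priors, where each category's evidence comes from per-dimension
# feature-value frequencies. Illustrative only.
from collections import Counter, defaultdict


class UniformPriorFrequencyClassifier:
    def __init__(self):
        self.freq = defaultdict(Counter)   # (category, dimension) -> value counts
        self.totals = Counter()            # (category, dimension) -> total count
        self.categories = set()

    def fit(self, X, y):
        for features, category in zip(X, y):
            self.categories.add(category)
            for dim, value in enumerate(features):
                self.freq[(category, dim)][value] += 1
                self.totals[(category, dim)] += 1

    def predict(self, features):
        # Uniform priors: the decision depends only on the likelihood of the
        # observed feature values under each category.
        def score(category):
            s = 1.0
            for dim, value in enumerate(features):
                # Add-one smoothing over the seen values plus one unseen slot.
                count = self.freq[(category, dim)][value] + 1
                total = self.totals[(category, dim)] + len(self.freq[(category, dim)]) + 1
                s *= count / total
            return s
        return max(self.categories, key=score)


clf = UniformPriorFrequencyClassifier()
clf.fit([("low", "high"), ("high", "low"), ("low", "low")], ["A", "B", "A"])
print(clf.predict(("low", "high")))  # expected: "A"
```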
17.
Machine learning approaches to manufacturing and materials: Applications to semi-supervised, unbalanced and heterogeneous data problems / Karkare, Rasika S. 30 July 2019
The objective of this thesis is to use machine learning and deep learning techniques for the quality assurance of metal casting processes. Metal casting can be defined as a process in which liquid metal is poured into a mold of a desired shape and allowed to solidify. The process is completed after ejection of the final solidified component, also known as a casting, out of the mold. There may be undesired irregularities in the metal casting process known as casting defects. Among the defects that are found, porosity is considered to be a major defect that is difficult to detect until the end of the manufacturing cycle. A porosity defect occurs when small voids, holes or pockets form within the metal. It is important to control and keep porosity below certain permissible thresholds, depending on the product that is being manufactured. If the foundry process can be modeled using machine learning approaches to predict the state of the casting prior to completion of the casting process, the foundry would be spared the inspection and testing of the casting, which require dedicated staff attention and expensive testing machinery. Moreover, if the casting fails the quality test, it is rendered useless. This is one of the major issues for foundries today. The main aim of this project is to make predictions about the quality of metal cast components. We determine whether, under given conditions and parameters, a cast component would pass or fail the quality test. Although this thesis focuses on porosity defects, machine learning and deep learning techniques can be used to model other kinds of defects such as shrinkage defects, metal pouring defects or any metallurgical defects. The other important objective is to identify the parameters in the casting process that are most responsible for porosity control and, ultimately, the quality of the cast component. The challenges faced during data analysis when dealing with a small, unbalanced, heterogeneous and semi-supervised dataset such as this one are also covered. We compare the results obtained using different machine learning techniques in terms of F1 score, precision and recall, among other metrics, on unseen test data after cross-validation. Finally, conclusions and the scope for future work are also discussed.
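As a rough illustration of the workflow the abstract describes (class weighting for an unbalanced pass/fail problem, cross-validated F1, precision and recall, and a look at influential parameters), here is a scikit-learn sketch on synthetic data. The model choice, sample sizes, and all names are illustrative assumptions, not the thesis's actual pipeline or dataset.

```python
# Illustrative sketch only: class-weighted classification of an unbalanced
# pass/fail dataset, with cross-validated F1, precision and recall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for casting process parameters with a rare "fail" class.
X, y = make_classification(n_samples=500, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=0)
scores = cross_validate(model, X, y, cv=5,
                        scoring=("f1", "precision", "recall"))

for metric in ("test_f1", "test_precision", "test_recall"):
    print(metric, np.mean(scores[metric]).round(3))

# Feature importances from a fit on all data hint at which process
# parameters most influence porosity-related failures.
importances = model.fit(X, y).feature_importances_
print("top features:", np.argsort(importances)[::-1][:3])
```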
18.
Apply Machine Learning on Cattle Behavior Classification Using Accelerometer Data / Zhao, Zhuqing. 15 April 2022
We used a 50 Hz sampling frequency to collect tri-axial acceleration from the cows. For the traditional machine learning approach, we segmented the data to calculate features, selected the important features, and applied machine learning algorithms for classification. We compared the performance of various models and found a robust model with relatively low computation and high accuracy. For the deep learning approach, we designed an end-to-end trainable convolutional neural network (CNN) to predict activities for given segments, and applied distillation and quantization to reduce model size. In addition to the fixed-window-size approach, we used the CNN to predict dense labels, where each data point has an individual label, inspired by semantic segmentation. In this way, we could obtain a more precise measurement of the composition of activities. In summary, physically monitoring the well-being of crowded animals is labor-intensive, so we proposed a solution for timely and efficient measurement of cattle’s daily activities using wearable sensors and machine learning models. / M.S. / Animal agriculture has intensified over the past several decades, and animals are managed increasingly as large groups. This group-based management has significantly increased productivity. However, animals are often located remotely on large expanses of pasture, which makes continuous monitoring of daily activities to assess animal health and well-being labor-intensive and challenging [37]. Remote monitoring of animal activities with wireless sensor nodes integrated with machine learning algorithms is a promising solution. The machine learning models predict the activities of given accelerometer segments, and the predicted results are uploaded to the cloud. The main challenges are the limits on power consumption and computation. To provide a precise measurement of individual cattle in the herd, we experimented with several types of machine learning methods with different advantages and drawbacks in performance and efficiency.
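The segment-and-featurize step mentioned for the traditional machine learning approach can be illustrated as follows, assuming 50 Hz tri-axial samples; the window length and feature set are illustrative choices rather than the thesis's exact configuration.

```python
# Minimal sketch: slice 50 Hz tri-axial accelerometer data into fixed windows
# and compute simple per-window features. Window size and features are
# illustrative assumptions.
import numpy as np

FS = 50                 # sampling frequency (Hz)
WINDOW_S = 5            # window length in seconds
WINDOW = FS * WINDOW_S  # samples per segment


def segment_features(acc: np.ndarray) -> np.ndarray:
    """acc: (n_samples, 3) x/y/z acceleration -> (n_windows, 8) feature matrix."""
    n_windows = len(acc) // WINDOW
    feats = []
    for w in range(n_windows):
        seg = acc[w * WINDOW:(w + 1) * WINDOW]
        magnitude = np.linalg.norm(seg, axis=1)
        feats.append(np.concatenate([
            seg.mean(axis=0),                      # mean per axis
            seg.std(axis=0),                       # variability per axis
            [magnitude.mean(), magnitude.std()],   # overall movement intensity
        ]))
    return np.asarray(feats)


# Fake 10 minutes of data; in practice each window's features would feed a
# classifier (or the raw windows would feed the CNN) to label the activity.
acc = np.random.randn(FS * 600, 3)
print(segment_features(acc).shape)  # (120, 8)
```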
19.
Deep Representation Learning on Labeled Graphs / Fan, Shuangfei. 27 January 2020
We introduce recurrent collective classification (RCC), a variant of the iterative classification algorithm (ICA) analogous to recurrent neural network prediction. RCC accommodates any differentiable local classifier and relational feature functions. We provide gradient-based strategies for optimizing over model parameters to more directly minimize the loss function. In our experiments, this direct loss minimization translates to improved accuracy and robustness on real network data. We demonstrate the robustness of RCC in settings where local classification is very noisy, settings that are particularly challenging for ICA. As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach on various types of graph datasets, such as collections of citation networks and protein graphs. Experimental results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all baselines in terms of quality, generality, and scalability. To further evaluate the quality of the generated graphs, we apply them to a downstream graph classification task, and the results show that LGGAN can better capture the important aspects of the graph structure. / Doctor of Philosophy / Graphs are one of the most important and powerful data structures for conveying complex and correlated information among data points. In this research, we aim to provide more robust and accurate models for some graph-specific tasks, such as collective classification and graph generation, by designing deep learning models to learn better task-specific representations for graphs. First, we studied the collective classification problem in graphs and proposed recurrent collective classification, a variant of the iterative classification algorithm that is more robust to situations where predictions are noisy or inaccurate. Then we studied the problem of graph generation using deep generative models. We first proposed a deep generative model using the GAN framework that generates labeled graphs. Then, in order to support more applications and gain more control over the generated graphs, we extended the problem of graph generation to conditional graph generation, which can then be applied to various applications for modeling graph evolution and transformation.
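To make the collective-classification setting concrete, the following is a minimal sketch of the plain iterative classification (ICA) pattern that RCC builds on, with an assumed logistic-regression local classifier and neighbor-label-count relational features on a toy graph; it is not the RCC or LGGAN model from the dissertation.

```python
# Minimal sketch of iterative classification (plain ICA): a local classifier
# is repeatedly re-applied with relational features recomputed from the
# current label predictions of each node's neighbors. Toy data; trained and
# evaluated on the same graph purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 60, 2

# Tiny synthetic graph: homophilous edges, node features weakly tied to labels.
y = rng.integers(0, n_classes, size=n)
X = y.reshape(-1, 1) + rng.normal(scale=1.5, size=(n, 1))
A = ((y[:, None] == y[None, :]) & (rng.random((n, n)) < 0.15)).astype(float)
np.fill_diagonal(A, 0)


def relational(labels):
    # Relational features: counts of each predicted label among neighbors.
    return A @ np.eye(n_classes)[labels]


# Train the local classifier with ground-truth relational features.
clf = LogisticRegression().fit(np.hstack([X, relational(y)]), y)

# Inference: bootstrap with empty relational features, then iterate,
# refreshing the relational features from the current predictions.
pred = clf.predict(np.hstack([X, np.zeros((n, n_classes))]))
for _ in range(10):
    pred = clf.predict(np.hstack([X, relational(pred)]))

print("accuracy:", (pred == y).mean())
```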
20.
Solution path algorithms : an efficient model selection approach / Wang, Gang. January 2007
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 102-108). Also available in electronic version.