1 
Creating diverse ensemble classifiers to reduce supervision / Melville, Prem Noel. 28 August 2008
Not available / text

2 
Learning in the real world environment: a classification model based on sensitivity to within-dimension and between-category variation of feature frequencies / Lam, Newman Ming Ki. 22 June 2018
Research on machine learning has taken numerous different directions. The present study focussed on the microstructural characteristics of learning systems. It was postulated that learning systems consist of a macrostructure, which controls the flow of information, and a microstructure, which manipulates information for decision making. A review of the literature suggested that the basic function of the microstructure of learning systems was to make a choice among a set of alternatives. This decision function was then equated with the task of making classification decisions. On the basis of the requirements for practical learning systems, the feature frequency approach was chosen for model development. An analysis of the feature frequency approach indicated that an effective model must be sensitive to both within-dimension and between-category variations in frequencies. A model was then developed to provide for such sensitivities. The model was based on Bayes' theorem with an assumption of uniform prior probability of occurrence for the categories. This model was tested using data collected for neuropsychological diagnosis of children. Results of the tests showed that the model was capable of learning and provided a satisfactory level of performance. The performance of the model was compared with that of other models designed for the same purpose. The other models included NEXSYS, a rule-based system specially designed for this type of diagnosis; discriminant analysis, a statistical technique widely used for pattern recognition; and neural networks, which attempt to simulate the neural activities of the brain. Results of the tests showed that the model's performance was comparable to that of the other models. Further analysis indicated that the model has certain advantages: it has a simple structure, is capable of explaining its decisions, and is more efficient than the other models. / Graduate
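As a concrete illustration of the decision rule the abstract describes (not the thesis's actual implementation), a feature-frequency Bayes classifier with uniform category priors can be sketched in a few lines of Python. The feature counts and category names below are invented:

```python
import math

# Sketch of a feature-frequency Bayes classifier with uniform category
# priors, in the spirit of the model described above. Data are made up.

def fit(X, y, n_features, alpha=1.0):
    """Estimate smoothed per-category feature frequencies."""
    counts = {}
    for xi, yi in zip(X, y):
        c = counts.setdefault(yi, [alpha] * n_features)
        for j, v in enumerate(xi):
            c[j] += v
    # Normalise within each category so the frequencies sum to 1,
    # making the model sensitive to between-category variation.
    return {cat: [v / sum(c) for v in c] for cat, c in counts.items()}

def predict(model, x):
    """With uniform priors, the prior term cancels and the decision
    reduces to picking the category with the highest likelihood."""
    def log_likelihood(p):
        return sum(v * math.log(pj) for v, pj in zip(x, p))
    return max(model, key=lambda cat: log_likelihood(model[cat]))

# Hypothetical feature-frequency observations for two categories.
X = [[3, 0, 1], [2, 1, 0], [0, 4, 2], [1, 3, 3]]
y = ["A", "A", "B", "B"]
model = fit(X, y, n_features=3)
```

Here `predict(model, [2, 0, 1])` picks the category whose frequency profile best matches the observation.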

3 
Task-oriented learning of structured probability distributions / Bouchacourt, Diane. January 2017
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multidimensional structures. In many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address the aforementioned deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider that the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty in the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of choice, so our objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is implicitly expressed. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference.
We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
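The general idea of tailoring prediction to the evaluation loss can be illustrated, in a far simpler setting than the dissimilarity-coefficient objective above, by minimum-risk decoding over a candidate set. The candidates, probabilities and losses below are invented for illustration:

```python
# Toy illustration of loss-tailored prediction: instead of returning the
# most probable output, pick the one minimising expected task loss under
# the model's distribution. Candidates and probabilities are hypothetical.

def min_risk_predict(candidates, probs, loss):
    def expected_loss(y_hat):
        return sum(p * loss(y_hat, y) for y, p in zip(candidates, probs))
    return min(candidates, key=expected_loss)

candidates = [0, 1, 10]
probs = [0.40, 0.35, 0.25]

# Under 0-1 loss the minimiser is simply the mode of the distribution...
mode = min_risk_predict(candidates, probs, lambda a, b: a != b)
# ...but under absolute-error loss a different output wins, because it
# hedges against the distant candidate 10.
abs_best = min_risk_predict(candidates, probs, lambda a, b: abs(a - b))
```

The two losses select different outputs from the same distribution, which is exactly why a training or prediction objective agnostic of the evaluation loss can be suboptimal.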

4 
Classical and quantum data sketching with applications in communication complexity and machine learning / CUHK electronic theses & dissertations collection. January 2014
Liu, Yang. / Thesis (Ph.D.) Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 163-188). / Abstracts also in Chinese. / Title from PDF title page (viewed on 25 October 2016).

5 
Fast training of SVM with β-neighbor editing. January 2003
Wan Zhang. / Thesis (M.Phil.) Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 91-103). / Abstracts in English and Chinese. / Abstract -- p.ii / Acknowledgement -- p.v / Chapter 1 -- Introduction -- p.1 / Chapter 1.1 -- Introduction to Classification -- p.1 / Chapter 1.2 -- Problem Definition -- p.4 / Chapter 1.3 -- Major Contributions -- p.6 / Chapter 1.4 -- Thesis Organization -- p.7 / Chapter 2 -- Literature Review -- p.8 / Chapter 2.1 -- Fisher's Linear Discriminant -- p.8 / Chapter 2.2 -- Radial Basis Function Networks -- p.9 / Chapter 2.3 -- Decision Tree -- p.10 / Chapter 2.4 -- Nearest Neighbor -- p.12 / Chapter 2.5 -- Support Vector Machine -- p.13 / Chapter 2.5.1 -- Linear Separable Case -- p.14 / Chapter 2.5.2 -- Non-linear Separable Case -- p.15 / Chapter 2.5.3 -- Nonlinear Case -- p.18 / Chapter 2.5.4 -- Multi-class SVM -- p.19 / Chapter 2.5.5 -- RSVM -- p.21 / Chapter 2.6 -- Summary -- p.23 / Chapter 3 -- Computational Geometry -- p.25 / Chapter 3.1 -- Convex Hull -- p.26 / Chapter 3.1.1 -- Separable Case -- p.26 / Chapter 3.1.2 -- Inseparable Case -- p.28 / Chapter 3.2 -- Proximity Graph -- p.32 / Chapter 3.2.1 -- Voronoi Diagram and Delaunay Triangulation -- p.32 / Chapter 3.2.2 -- Gabriel Graph and Relative Neighborhood Graph -- p.34 / Chapter 3.2.3 -- β-skeleton -- p.36 / Chapter 4 -- Data Editing -- p.39 / Chapter 4.1 -- Hart's Condensed Rule and Its Relatives -- p.39 / Chapter 4.2 -- Order-independent Subsets -- p.40 / Chapter 4.3 -- Minimal-size Training-set Consistent Subsets -- p.40 / Chapter 4.4 -- Proximity Graph Methods -- p.41 / Chapter 4.5 -- Comparing Results of Different Classifiers with Edited Dataset as the Training Set -- p.42 / Chapter 4.5.1 -- Time Complexity -- p.47 / Chapter 4.5.2 -- Editing Size of Training Data -- p.48 / Chapter 4.5.3 -- Accuracy -- p.50 / Chapter 4.5.4 -- Efficiency -- p.54 / Chapter 4.5.5 -- Summary -- p.58 / Chapter 5 -- Techniques Speeding Up Data Editing -- p.60 / Chapter 5.1 -- Parallel Computing -- p.61 / Chapter 5.1.1 -- Basic Idea of Parallelism -- p.61 / Chapter 5.1.2 -- Details of Parallel Technique -- p.63 / Chapter 5.1.3 -- Comparing Effects of the Choice of Number of Threads on Efficiency -- p.64 / Chapter 5.2 -- Tree Indexing Structure -- p.67 / Chapter 5.2.1 -- R-tree and R*-tree -- p.67 / Chapter 5.2.2 -- SS-tree -- p.69 / Chapter 5.2.3 -- SR-tree -- p.70 / Chapter 5.2.4 -- β-neighbor Algorithm Based on SR-tree Structure -- p.71 / Chapter 5.2.5 -- Pruning Search Space for β-neighbor Algorithm -- p.72 / Chapter 5.2.6 -- Comparing Results of Non-index Methods with Those of Methods with Indexing -- p.80 / Chapter 5.3 -- Combination of Parallelism and SR-tree Indexing Structure -- p.83 / Chapter 5.3.1 -- Comparing Results with Both Techniques Applied -- p.84 / Chapter 5.4 -- Summary -- p.87 / Chapter 6 -- Conclusion -- p.89 / Bibliography -- p.91
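Chapter 4 of this thesis opens with Hart's condensed rule. A simplified one-nearest-neighbour version of that editing idea can be sketched as follows; the 2-D points are invented, and this is not the β-neighbor method the thesis itself develops:

```python
import math

def condense(X, y):
    """Simplified Hart-style condensing: grow a subset of the training
    data that classifies every training point correctly under 1-NN."""
    keep = [0]  # seed the subset with the first point
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            nearest = min(keep, key=lambda k: math.dist(X[i], X[k]))
            if y[nearest] != y[i]:   # misclassified: add to the subset
                keep.append(i)
                changed = True
    return keep

# Two well-separated hypothetical clusters.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
kept = condense(X, y)
```

Training an SVM (or any classifier) on only the kept points is the kind of trade-off Chapter 4.5 evaluates: a much smaller training set at little cost in accuracy.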

6 
Machine learning approaches to manufacturing and materials: Applications to semi-supervised, unbalanced and heterogeneous data problems / Karkare, Rasika S. 30 July 2019
The objective of this thesis is to use machine learning and deep learning techniques for the quality assurance of metal casting processes. Metal casting can be defined as a process in which liquid metal is poured into a mold of a desired shape and allowed to solidify. The process is completed after ejection of the final solidified component, also known as a casting, out of the mold. There may be undesired irregularities in the metal casting process, known as casting defects. Among the defects that are found, porosity is considered a major one, difficult to detect until the end of the manufacturing cycle. A porosity defect occurs when small voids, holes or pockets form within the metal. It is important to control porosity and keep it below certain permissible thresholds, depending on the product being manufactured. If the foundry process can be modeled using machine learning approaches to predict the state of the casting prior to completion of the casting process, it would save the foundry the inspection and testing of the casting, which require the specific attention of staff and expensive testing machinery. Moreover, if the casting fails the quality test, it is rendered useless. This is one of the major issues for foundries today. The main aim of this project is to make predictions about the quality of metal cast components: we determine whether, under certain given conditions and parameters, a cast component would pass or fail the quality test. Although this thesis focuses on porosity defects, machine learning and deep learning techniques can be used to model other kinds of defects, such as shrinkage defects, metal pouring defects or other metallurgical defects. The other important objective is to identify the most important parameters in the casting process that are responsible for porosity control and ultimately the quality of the cast component.
The challenges faced during data analysis while dealing with a small, unbalanced, heterogeneous and semi-supervised dataset, such as this one, are also covered. We compare the results obtained using different machine learning techniques in terms of F1 score, precision and recall, among other metrics, on unseen test data after cross-validation. Finally, conclusions and the scope for future work are discussed.
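Since the comparison above is reported in terms of F1 score, precision and recall, it may help to recall how those metrics relate. The confusion-matrix counts below are invented:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from confusion-matrix counts.
    On unbalanced data (e.g. few failing castings), these metrics are
    far more informative than raw accuracy."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts for the minority "fails quality test" class.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
```

The identity F1 = 2·tp / (2·tp + fp + fn) makes it easy to check the harmonic-mean computation by hand.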

7 
Deep Representation Learning on Labeled Graphs / Fan, Shuangfei. 27 January 2020
We introduce recurrent collective classification (RCC), a variant of the iterative classification algorithm (ICA) analogous to recurrent neural network prediction. RCC accommodates any differentiable local classifier and relational feature functions. We provide gradient-based strategies for optimizing over model parameters to more directly minimize the loss function. In our experiments, this direct loss minimization translates to improved accuracy and robustness on real network data. We demonstrate the robustness of RCC in settings where local classification is very noisy, settings that are particularly challenging for ICA. As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach on various types of graph datasets, such as collections of citation networks and protein graphs. Experimental results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperform all baselines in terms of quality, generality, and scalability. To further evaluate the quality of the generated graphs, we apply them to a downstream graph classification task, and the results show that LGGAN can better capture the important aspects of the graph structure. / Doctor of Philosophy / Graphs are one of the most important and powerful data structures for conveying complex and correlated information among data points. In this research, we aim to provide more robust and accurate models for graph-specific tasks, such as collective classification and graph generation, by designing deep learning models that learn better task-specific representations of graphs.
First, we studied the collective classification problem in graphs and proposed recurrent collective classification, a variant of the iterative classification algorithm that is more robust to situations where predictions are noisy or inaccurate. Then we studied the problem of graph generation using deep generative models. We first proposed a deep generative model using the GAN framework that generates labeled graphs. Then, in order to support more applications and gain more control over the generated graphs, we extended the problem of graph generation to conditional graph generation, which can be applied to various applications for modeling graph evolution and transformation.
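For readers unfamiliar with the iterative classification algorithm (ICA) that RCC builds on, a bare-bones variant with a majority-vote local classifier looks roughly like this. The graph and seed labels are invented, and a real ICA uses a learned local classifier over node and relational features rather than a simple vote:

```python
from collections import Counter

def iterative_classification(adj, seeds, n_iters=10):
    """ICA-style sketch: each unlabeled node repeatedly adopts the
    majority label among its neighbours' current predictions."""
    labels = dict(seeds)
    unlabeled = [v for v in adj if v not in seeds]
    for v in unlabeled:
        labels[v] = None
    for _ in range(n_iters):
        for v in unlabeled:
            votes = Counter(labels[u] for u in adj[v]
                            if labels[u] is not None)
            if votes:
                labels[v] = votes.most_common(1)[0][0]
    return labels

# Two hypothetical communities bridged by the edge 2-4.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 4],
       3: [4, 5], 4: [2, 3, 5], 5: [3, 4]}
labels = iterative_classification(adj, seeds={0: "A", 5: "B"})
```

When the local votes are noisy, early mistakes propagate through later iterations, which is the fragility of ICA that RCC's direct loss minimization is designed to mitigate.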

8 
Reasoning and learning for intelligent agents / Sioutis, Christos. Unknown date
Intelligent Agents that operate in dynamic, real-time domains are required to embody complex but controlled behaviours, some of which may not be easily implementable. This thesis investigates the difficulties presented by implementing Intelligent Agents for such environments and makes contributions in the fields of Agent Reasoning, Agent Learning and Agent-Oriented Design in order to overcome some of these difficulties. / The thesis explores the need for incorporating learning into agents. This is done through a comprehensive review of complex application domains where current agent development techniques are insufficient to provide a system of acceptable standard. The theoretical foundations of agent reasoning and learning are reviewed, and a critique of reasoning techniques illustrates how humans make decisions. Furthermore, a number of learning and adaptation methods are introduced. The concepts behind Intelligent Agents and the reasons why researchers have recently turned to this technology for implementing complex systems are then reviewed. Overviews of different agent-oriented development paradigms are explored, including relevant development platforms available for each one. / Previous research on modelling how humans make decisions is investigated; in particular, three models are described in detail. A new cognitive, hybrid reasoning model is presented that fuses the three models together to offset the demerits of one model with the merits of another. Due to the additional elements available in the new model, it becomes possible to define how learning can be integrated into the reasoning process. In addition, an abstract framework that implements the reasoning and learning model is defined. This framework hides the complexity of learning and allows for designing agents based on the new reasoning model. / Finally, the thesis contributes the design of an application where learning agents are faced with a rich, real-time environment and are required to work as a team to achieve a common goal. Detailed algorithmic descriptions of the agent's behaviours as well as a subset of the source code are included in the thesis. The empirical results obtained validate all contributions within the domain of Unreal Tournament. Ultimately, this dissertation demonstrates that if agent reasoning is implemented using a cognitive reasoning model with defined learning goals, an agent can operate effectively in a complex, real-time, collaborative and adversarial environment. / Thesis (PhD Computer Systems Eng) University of South Australia, 2006.

9 
Reinforcement learning and approximation complexity / McDonald, Matthew A. F. Unknown date
Many tasks can easily be posed as the problem of responding to the states of an external world with actions that maximise the reward received over time. Algorithms that reliably solve such problems exist. However, their worst-case complexities are typically more than proportional to the size of the state space in which a task is to be performed. Many simple tasks involve enormous numbers of states, which can make the application of such algorithms impractical. This thesis examines reinforcement learning algorithms which effectively learn to perform tasks by constructing mappings from states to suitable actions. In problems involving large numbers of states, these algorithms usually must construct approximate, rather than exact, solutions and the primary issue examined in the thesis is the way in which the complexity of constructing adequate approximations scales as the size of a state space increases. The vast majority of reinforcement learning algorithms operate by constructing estimates of the long-term value of states and using these estimates to select actions. The potential effects of errors in such estimates are examined and shown to be severe. Empirical results are presented which suggest that minor errors are likely to result in significant losses in many problems, and where such losses are most likely to occur. The complexity of constructing estimates accurate enough to prevent significant losses is also examined empirically and shown to be substantial.
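The long-term value estimates discussed above are the fixed point of the Bellman optimality equations; on a tiny, hand-made MDP, value iteration computes them as follows (the two-state chain is invented purely for illustration):

```python
# Value iteration on a toy MDP. P[s][a] is a list of
# (probability, next_state) pairs and R[s][a] the immediate reward;
# the MDP itself is made up for illustration.

def value_iteration(P, R, gamma=0.9, iters=200):
    V = [0.0] * len(P)
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                 for a in range(len(P[s])))
             for s in range(len(P))]
    return V

# State 0: action 0 stays put (reward 0), action 1 jumps to the
# absorbing state 1 (reward 1). State 1: one action, stay, reward 0.
P = [[[(1.0, 0)], [(1.0, 1)]],   # state 0
     [[(1.0, 1)]]]               # state 1
R = [[0.0, 1.0],
     [0.0]]
V = value_iteration(P, R)
```

Note that in state 0 the greedy action wins by a margin of only 0.1 (1.0 versus 0.9 for staying), so a value-estimate error of that order is already enough to flip the selected action; this is the kind of sensitivity to estimation error the abstract describes.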

10 
Apply Machine Learning on Cattle Behavior Classification Using Accelerometer Data / Zhao, Zhuqing. 15 April 2022
We used a 50 Hz sampling frequency to collect tri-axial acceleration from the cows. For the traditional machine learning approach, we segmented the data to calculate features, selected the important features, and applied machine learning algorithms for classification. We compared the performance of various models and found a robust model with relatively low computation and high accuracy. For the deep learning approach, we designed an end-to-end trainable convolutional neural network (CNN) to predict activities for given segments, and applied distillation and quantization to reduce model size. In addition to the fixed-window-size approach, we used the CNN to predict dense labels, such that each data point has an individual label, inspired by semantic segmentation. In this way, we could measure the composition of activities more precisely. In summary, physically monitoring the well-being of crowded animals is labor-intensive, so we proposed a solution for timely and efficient measurement of cattle's daily activities using wearable sensors and machine learning models. / M.S. / Animal agriculture has intensified over the past several decades, and animals are managed increasingly as large groups. This group-based management has significantly increased productivity. However, animals are often located remotely on large expanses of pasture, which makes continuous monitoring of daily activities to assess animal health and well-being labor-intensive and challenging [37]. Remote monitoring of animal activities with wireless sensor nodes integrated with machine learning algorithms is a promising solution. The machine learning models predict the activities of given accelerometer segments, and the predicted results are uploaded to the cloud. The challenges are the limitations in power consumption and computation. To provide a precise measurement of individual cattle in the herd, we experimented with several types of machine learning methods with different advantages and drawbacks in performance and efficiency.
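The fixed-window segmentation and feature step of the traditional pipeline described above can be sketched as below; the window length, overlap and feature set are plausible choices for 50 Hz data, not necessarily the ones used in the thesis:

```python
import math

def windows(signal, size, step):
    """Slice a list of (x, y, z) samples into fixed-size segments."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def segment_features(window):
    """Per-axis mean and standard deviation: 6 features per window."""
    feats = []
    for axis in zip(*window):
        mean = sum(axis) / len(axis)
        std = math.sqrt(sum((v - mean) ** 2 for v in axis) / len(axis))
        feats.extend([mean, std])
    return feats

# At 50 Hz, a 100-sample window covers 2 s; a 50-sample step gives 50%
# overlap. The synthetic signal here stands in for real sensor data.
signal = [(0.1 * i % 1.0, 0.0, 1.0) for i in range(200)]
segments = [segment_features(w) for w in windows(signal, size=100, step=50)]
```

Each feature vector then feeds the classifier; the dense-labeling CNN variant instead skips this aggregation and emits one activity label per raw sample.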
