1 
Fast training of SVM with β-neighbor editing. January 2003 (has links)
Wan Zhang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 91-103). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Introduction to Classification --- p.1 / Chapter 1.2 --- Problem Definition --- p.4 / Chapter 1.3 --- Major Contributions --- p.6 / Chapter 1.4 --- Thesis Organization --- p.7 / Chapter 2 --- Literature Review --- p.8 / Chapter 2.1 --- Fisher's Linear Discriminant --- p.8 / Chapter 2.2 --- Radial Basis Function Networks --- p.9 / Chapter 2.3 --- Decision Tree --- p.10 / Chapter 2.4 --- Nearest Neighbor --- p.12 / Chapter 2.5 --- Support Vector Machine --- p.13 / Chapter 2.5.1 --- Linear Separable Case --- p.14 / Chapter 2.5.2 --- Non-linear Separable Case --- p.15 / Chapter 2.5.3 --- Nonlinear Case --- p.18 / Chapter 2.5.4 --- Multi-class SVM --- p.19 / Chapter 2.5.5 --- RSVM --- p.21 / Chapter 2.6 --- Summary --- p.23 / Chapter 3 --- Computational Geometry --- p.25 / Chapter 3.1 --- Convex Hull --- p.26 / Chapter 3.1.1 --- Separable Case --- p.26 / Chapter 3.1.2 --- Inseparable Case --- p.28 / Chapter 3.2 --- Proximity Graph --- p.32 / Chapter 3.2.1 --- Voronoi Diagram and Delaunay Triangulation --- p.32 / Chapter 3.2.2 --- Gabriel Graph and Relative Neighborhood Graph --- p.34 / Chapter 3.2.3 --- β-skeleton --- p.36 / Chapter 4 --- Data Editing --- p.39 / Chapter 4.1 --- Hart's Condensed Rule and Its Relatives --- p.39 / Chapter 4.2 --- Order-independent Subsets --- p.40 / Chapter 4.3 --- Minimal Size Training-set Consistent Subsets --- p.40 / Chapter 4.4 --- Proximity Graph Methods --- p.41 / Chapter 4.5 --- Comparing Results of Different Classifiers with Edited Dataset as the Training Set --- p.42 / Chapter 4.5.1 --- Time Complexity --- p.47 / Chapter 4.5.2 --- Editing Size of Training Data --- p.48 / Chapter 4.5.3 --- Accuracy --- p.50 / Chapter 4.5.4 --- Efficiency --- p.54 / Chapter 4.5.5 --- Summary --- p.58 / Chapter 5 --- Techniques Speeding Up Data Editing --- p.60 / Chapter 5.1 --- Parallel Computing --- p.61 / Chapter 5.1.1 --- Basic Idea of Parallelism --- p.61 / Chapter 5.1.2 --- Details of Parallel Technique --- p.63 / Chapter 5.1.3 --- Comparing Effects of the Choice of Number of Threads on Efficiency --- p.64 / Chapter 5.2 --- Tree Indexing Structure --- p.67 / Chapter 5.2.1 --- R-tree and R*-tree --- p.67 / Chapter 5.2.2 --- SS-tree --- p.69 / Chapter 5.2.3 --- SR-tree --- p.70 / Chapter 5.2.4 --- β-neighbor Algorithm Based on SR-tree Structure --- p.71 / Chapter 5.2.5 --- Pruning Search Space for β-neighbor Algorithm --- p.72 / Chapter 5.2.6 --- Comparing Results of Non-index Methods with Those of Methods with Indexing --- p.80 / Chapter 5.3 --- Combination of Parallelism and SR-tree Indexing Structure --- p.83 / Chapter 5.3.1 --- Comparing Results of Both Techniques Applied --- p.84 / Chapter 5.4 --- Summary --- p.87 / Chapter 6 --- Conclusion --- p.89 / Bibliography --- p.91
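The chapter listing above (Chapters 3-4) indicates that the thesis speeds up SVM training by editing the training set with proximity graphs such as the Gabriel graph: only points with an opposite-class graph neighbor (i.e. points near the decision boundary) are kept. As a minimal illustration of that general idea, not the thesis's own algorithm, the brute-force sketch below keeps exactly those points; the function name `gabriel_edit` and the O(n³) edge test are my assumptions for illustration.

```python
import numpy as np

def gabriel_edit(X, y):
    """Keep only training points that have a Gabriel-graph neighbour of the
    opposite class, i.e. points lying near the class boundary."""
    n = len(X)
    # pairwise squared Euclidean distances via broadcasting
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    keep = np.zeros(n, dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) is a Gabriel edge iff no third point c lies inside the
            # circle with diameter ab: d(a,c)^2 + d(b,c)^2 >= d(a,b)^2 for all c
            if all(d2[a, c] + d2[b, c] >= d2[a, b]
                   for c in range(n) if c != a and c != b):
                if y[a] != y[b]:  # the edge crosses the class boundary
                    keep[a] = keep[b] = True
    return X[keep], y[keep]
```

On a 1-D example with classes {0, 0, 1, 1} at positions 0, 1, 2, 3, the only Gabriel edge crossing the boundary is (1, 2), so just those two boundary points survive; an SVM trained on the edited set would then see far fewer points. A practical implementation would build the Gabriel graph from a Delaunay triangulation instead of testing all triples.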

2 
Classical and quantum data sketching with applications in communication complexity and machine learning / CUHK electronic theses & dissertations collection. January 2014 (has links)
Liu, Yang. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 163-188). / Abstracts also in Chinese. / Title from PDF title page (viewed on 25 October 2016).

3 
Reinforcement learning and approximation complexity. McDonald, Matthew A. F. Unknown Date (has links)
Many tasks can easily be posed as the problem of responding to the states of an external world with actions that maximise the reward received over time. Algorithms that reliably solve such problems exist. However, their worst-case complexities are typically more than proportional to the size of the state space in which a task is to be performed. Many simple tasks involve enormous numbers of states, which can make the application of such algorithms impractical. This thesis examines reinforcement learning algorithms which effectively learn to perform tasks by constructing mappings from states to suitable actions. In problems involving large numbers of states, these algorithms usually must construct approximate, rather than exact, solutions, and the primary issue examined in the thesis is the way in which the complexity of constructing adequate approximations scales as the size of a state space increases. The vast majority of reinforcement learning algorithms operate by constructing estimates of the long-term value of states and using these estimates to select actions. The potential effects of errors in such estimates are examined and shown to be severe. Empirical results are presented which suggest that minor errors are likely to result in significant losses in many problems, and which indicate where such losses are most likely to occur. The complexity of constructing estimates accurate enough to prevent significant losses is also examined empirically and shown to be substantial.
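The abstract above centers on algorithms that estimate the long-term value of states and act greedily on those estimates. As a minimal sketch of that idea (not the thesis's own algorithms), the code below runs value iteration on a tiny deterministic chain MDP; the chain layout, reward of 1 at the terminal state, and discount of 0.9 are assumptions chosen for illustration.

```python
def value_iteration(n_states=5, gamma=0.9, tol=1e-9):
    """Value iteration on a toy chain MDP: from each non-terminal state the
    agent may step left or right; reaching the rightmost (terminal) state
    yields reward 1, every other transition yields 0."""
    V = [0.0] * n_states  # V[s] estimates the long-term value of state s
    while True:
        delta = 0.0
        for s in range(n_states - 1):          # rightmost state is terminal
            best = float("-inf")
            for step in (-1, +1):              # left / right actions
                s2 = min(max(s + step, 0), n_states - 1)
                r = 1.0 if s2 == n_states - 1 else 0.0
                v2 = 0.0 if s2 == n_states - 1 else V[s2]
                best = max(best, r + gamma * v2)
            delta = max(delta, abs(best - V[s]))
            V[s] = best                        # Bellman optimality backup
        if delta < tol:
            return V
```

On this chain the estimates converge to V = (γ³, γ², γ, 1) for the non-terminal states. The abstract's central concern can be seen directly here: if an estimation error at some state exceeds the value gap between its neighbors, the greedy policy flips to the wrong action, and the table of estimates grows with the state space, which is why approximation is unavoidable in large problems.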

4 
Methods for cost-sensitive learning / Margineantu, Dragos D. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2002. / Typescript (photocopy). Includes bibliographical references (leaves 122-138). Also available on the World Wide Web.

5 
Creating diverse ensemble classifiers to reduce supervision. Melville, Prem Noel. 28 August 2008 (has links)
Not available / text

6 
Reasoning and learning for intelligent agents / Sioutis, Christos. Unknown Date (has links)
Intelligent Agents that operate in dynamic, real-time domains are required to embody complex but controlled behaviours, some of which may not be easily implementable. This thesis investigates the difficulties presented with implementing Intelligent Agents for such environments and makes contributions in the fields of Agent Reasoning, Agent Learning and Agent-Oriented Design in order to overcome some of these difficulties. / The thesis explores the need for incorporating learning into agents. This is done through a comprehensive review of complex application domains where current agent development techniques are insufficient to provide a system of acceptable standard. The theoretical foundations of agent reasoning and learning are reviewed, and a critique of reasoning techniques illustrates how humans make decisions. Furthermore, a number of learning and adaptation methods are introduced. The concepts behind Intelligent Agents and the reasons why researchers have recently turned to this technology for implementing complex systems are then reviewed. Overviews of different agent-oriented development paradigms are explored, including relevant development platforms available for each one. / Previous research on modeling how humans make decisions is investigated; in particular, three models are described in detail. A new cognitive, hybrid reasoning model is presented that fuses the three models together to offset the demerits of one model by the merits of another. Due to the additional elements available in the new model, it becomes possible to define how learning can be integrated into the reasoning process. In addition, an abstract framework that implements the reasoning and learning model is defined. This framework hides the complexity of learning and allows for designing agents based on the new reasoning model.
/ Finally, the thesis contributes the design of an application where learning agents are faced with a rich, real-time environment and are required to work as a team to achieve a common goal. Detailed algorithmic descriptions of the agent's behaviours as well as a subset of the source code are included in the thesis. The empirical results obtained validate all contributions within the domain of Unreal Tournament. Ultimately, this dissertation demonstrates that if agent reasoning is implemented using a cognitive reasoning model with defined learning goals, an agent can operate effectively in a complex, real-time, collaborative and adversarial environment. / Thesis (PhD Computer Systems Eng)--University of South Australia, 2006.

8 
Developing machine learning techniques for real world applications. Yao, Jian. January 2006 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Computer Science Department, 2006. / Includes bibliographical references.

9 
The solution paths of multicategory support vector machines: algorithm and applications / Cui, Zhenhuan. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 66-68).

10 
A study of distance-based machine learning algorithms / Wettschereck, Dietrich. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 1995. / Typescript (photocopy). Includes bibliographical references (leaves 141-151). Also available on the World Wide Web.
