171 |
Knowledge frontier discovery a thesis presented to the faculty of the Graduate School, Tennessee Technological University / Honeycutt, Matthew Burton, January 2009 (has links)
Thesis (M.S.)--Tennessee Technological University, 2009. / Title from title page screen (viewed on Feb. 24, 2010). Bibliography: leaves 78-83.
|
172 |
Using semantic role labels to reorder statistical machine translation output / Lo, Chi Kiu. January 2009 (has links)
Includes bibliographical references (p. 78-84).
|
173 |
Counter-spaces and notation machines / Shivers, Christina Nicole 08 June 2015 (has links)
The modern American city is organized into a multitude of spaces based upon function and use. These organized spaces dictate a prescribed behavior and social awareness, resulting in a landscape of ill-fitting and awkward territories existing in opposition to one another. An unintended byproduct of these collisions is the counter-space. Akin to the slag, sludge, and waste resulting from modern industrial processes, the counter-space is the left-over and neglected space of the city resulting from the ever-increasing hegemony of society. Hidden in plain sight, abandoned and unused, these spaces exist everywhere.
This thesis seeks to understand and reveal these counter-spaces and their subsequent populations within the city of Atlanta in order to bring awareness to the design of the city for all populations. The spatial-temporalities of counter-spaces will be understood through a de-territorialization of representation through notation and mapping. Through this act, a “cartography of events” will be created for each counter-space using a series of notation machines, in which temporal stimuli from each counter-space site will be used as inputs for the machines.
|
174 |
Scalable kernel methods for machine learning / Kulis, Brian Joseph 09 October 2012 (has links)
Machine learning techniques are now essential for a diverse set of applications in computer vision, natural language processing, software analysis, and many other domains. As more applications emerge and the amount of data continues to grow, there is a need for increasingly powerful and scalable techniques. Kernel methods, which generalize linear learning methods to non-linear ones, have become a cornerstone for much of the recent work in machine learning and have been used successfully for many core machine learning tasks such as clustering, classification, and regression. Despite the recent popularity of kernel methods, a number of issues must be tackled in order for them to succeed on large-scale data. First, kernel methods typically require memory that grows quadratically in the number of data objects, making it difficult to scale to large data sets. Second, kernel methods depend on an appropriate kernel function--an implicit mapping to a high-dimensional space--and it is not obvious how to choose one, since the choice depends on the data. Third, in the context of data clustering, kernel methods have not been demonstrated to be practical for real-world clustering problems. This thesis explores these questions, offers some novel solutions to them, and applies the results to a number of challenging applications in computer vision and other domains. We explore two broad fundamental problems in kernel methods. First, we introduce a scalable framework for learning kernel functions based on incorporating prior knowledge from the data. This framework scales to very large data sets of millions of objects, can be used for a variety of complex data, and outperforms several existing techniques. In the transductive setting, the method can be used to learn low-rank kernels, whose memory requirements are linear in the number of data points.
We also explore extensions of this framework and applications to image search problems, such as object recognition, human body pose estimation, and 3-d reconstructions. As a second problem, we explore the use of kernel methods for clustering. We show a mathematical equivalence between several graph cut objective functions and the weighted kernel k-means objective. This equivalence leads to the first eigenvector-free algorithm for weighted graph cuts, which is thousands of times faster than existing state-of-the-art techniques while using significantly less memory. We benchmark this algorithm against existing methods, apply it to image segmentation, and explore extensions to semi-supervised clustering. / text
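The weighted kernel k-means connection described above rests on the fact that distances to each cluster's implicit centroid can be computed entirely from the kernel matrix. The following is a minimal sketch of (unweighted) kernel k-means, not the thesis's implementation; the function name and toy setup are illustrative only:

```python
import numpy as np

def kernel_kmeans(K, k, assign, n_iter=20):
    """Unweighted kernel k-means on a precomputed kernel matrix K.

    A point's squared distance to cluster c's implicit centroid is
        K[i,i] - 2 * mean_{j in c} K[i,j] + mean_{j,l in c} K[j,l],
    so no explicit feature vectors are ever needed.
    """
    n = K.shape[0]
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(assign == c)[0]
            if len(idx) == 0:          # guard against empty clusters
                dist[:, c] = np.inf
                continue
            Kc = K[np.ix_(idx, idx)]
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + Kc.mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, assign):  # converged
            break
        assign = new
    return assign
```

With a linear kernel (K = X Xᵀ) this reduces to ordinary k-means; swapping in a Gaussian or learned low-rank kernel changes only how K is built.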
|
175 |
Structured exploration for reinforcement learning / Jong, Nicholas K. 18 December 2012 (has links)
Reinforcement Learning (RL) offers a promising approach towards achieving the dream of autonomous agents that can behave intelligently in the real world. Instead of requiring humans to determine the correct behaviors or sufficient knowledge in advance, RL algorithms allow an agent to acquire the necessary knowledge through direct experience with its environment. Early algorithms guaranteed convergence to optimal behaviors in limited domains, giving hope that simple, universal mechanisms would allow learning agents to succeed at solving a wide variety of complex problems. In practice, the field of RL has struggled to apply these techniques successfully to the full breadth and depth of real-world domains.
This thesis extends the reach of RL techniques by demonstrating the synergies among certain key developments in the literature. The first of these developments is model-based exploration, which facilitates theoretical convergence guarantees in finite problems by explicitly reasoning about an agent's certainty in its understanding of its environment. A second branch of research studies function approximation, which generalizes RL to infinite problems by artificially limiting the degrees of freedom in an agent's representation of its environment. The final major advance that this thesis incorporates is hierarchical decomposition, which seeks to improve the efficiency of learning by endowing an agent's knowledge and behavior with the gross structure of its environment.
Each of these ideas has intuitive appeal and sustains substantial independent research efforts, but this thesis defines the first RL agent that combines all their benefits in the general case. In showing how to combine these techniques effectively, this thesis investigates the twin issues of generalization and exploration, which lie at the heart of efficient learning. This thesis thus lays the groundwork for the next generation of RL algorithms, which will allow scientific agents to know when it suffices to estimate a plan from current data and when to accept the potential cost of running an experiment to gather new data. / text
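The model-based exploration idea in the first development can be sketched with an R-max-style rule on a toy bandit. This is an illustration of the general principle, not this thesis's agent, and all names and parameters are hypothetical: state-actions visited fewer than m times are optimistically assumed to pay the maximum reward, so the agent provably tries everything before it commits.

```python
import numpy as np

def rmax_bandit(true_rewards, m=3, steps=30, rmax=1.0):
    """R-max-style exploration on a deterministic multi-armed bandit.

    An arm pulled fewer than m times is 'unknown' and optimistically
    valued at rmax; once known, it is valued by its empirical mean.
    """
    n = len(true_rewards)
    counts = np.zeros(n, dtype=int)
    totals = np.zeros(n)
    pulls = []
    for _ in range(steps):
        # Optimistic value estimate: empirical mean if known, rmax if not.
        est = np.where(counts >= m,
                       np.divide(totals, np.maximum(counts, 1)),
                       rmax)
        a = int(np.argmax(est))
        counts[a] += 1
        totals[a] += true_rewards[a]
        pulls.append(a)
    return pulls, counts
```

Each arm is sampled m times before the agent settles on the empirically best one, which is the certainty-driven behavior the convergence guarantees rely on.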
|
176 |
Machine learning methods for computational biology / Li, Limin, 李丽敏 January 2010 (has links)
published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy
|
177 |
Cross-domain subspace learning / Si, Si, 斯思 January 2010 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
|
178 |
Improving the safety and efficiency of rail yard operations using robotics / Boddiford, Andrew Shropshire 10 March 2015 (has links)
Significant efforts have been expended by the railroad industry to make operations safer and more efficient through the intelligent use of sensor data. This work proposes to take the technology one step further and use this data for the control of physical systems designed to automate hazardous railroad operations, particularly those that require humans to interact with moving trains. To accomplish this, application-specific requirements must be established to design self-contained machine vision and robotic solutions that eliminate the risks associated with existing manual operations. Present-day rail yard operations have been identified as good candidates to begin development. Manual uncoupling of rolling stock in classification yards, in particular, has been investigated. To automate this process, an intelligent robotic system must be able to detect, track, approach, contact, and manipulate constrained objects on equipment in motion. This work presents multiple prototypes capable of autonomously uncoupling full-scale freight cars using feedback from their surrounding environment. Geometric image processing algorithms and machine learning techniques were implemented to accurately identify cylindrical objects in point clouds generated in real time. Unique methods fusing velocity and vision data were developed to synchronize a pair of moving rigid bodies in real time. Multiple custom end-effectors with in-built compliance and fault tolerance were designed, fabricated, and tested for grasping and manipulating cylindrical objects. Finally, an event-driven robotic control application was developed to safely and reliably uncouple freight cars using data from 3D cameras, velocity sensors, force/torque transducers, and intelligent end-effector tooling. Experimental results in a lab setting confirm that modern robotic and sensing hardware can be used to reliably separate pairs of rolling stock moving at up to two miles per hour.
Additionally, subcomponents of the autonomous pin-pulling system (APPS) were designed to be modular to the point where they could be used to automate other hazardous, labor-intensive tasks found in U.S. classification yards. Overall, this work supports the deployment of autonomous robotic systems in semi-unstructured yard environments to increase the safety and efficiency of rail operations. / text
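The velocity-and-vision synchronization step can be caricatured as a small 1-D tracking loop. This is a toy sketch under simplified dynamics, not the APPS controller; the gains, names, and units are illustrative assumptions:

```python
def synchronize(robot_v, car_v, gap, dt=0.05, kp=2.0, kv=1.5, steps=200):
    """Toy 1-D tracking loop: command an acceleration that closes the
    position gap to the moving coupler while matching the freight car's
    velocity. Semi-implicit Euler integration keeps the simulation stable.
    """
    for _ in range(steps):
        # PD-style command: close the gap, match the velocity.
        accel = kp * gap + kv * (car_v - robot_v)
        robot_v += accel * dt
        gap += (car_v - robot_v) * dt  # gap shrinks as the robot catches up
    return robot_v, gap
```

Starting from rest with a 0.5 m gap to a car moving at roughly two miles per hour (about 0.9 m/s), the loop converges to matched velocity and near-zero gap within a few simulated seconds, which is the precondition for contacting and manipulating hardware on equipment in motion.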
|
179 |
Anomaly detection with Machine learning : Quality assurance of statistical data in the Aid community / Blomquist, Hanna; Möller, Johanna January 2015 (has links)
The overall purpose of this study was to find a way to identify incorrect data in Sida’s statistics about their contributions. A contribution is the financial support given by Sida to a project. The goal was to build an algorithm that determines whether a contribution is at risk of being inaccurately coded, based on supervised classification methods within the area of machine learning. A thorough data analysis process was carried out in order to train a model to find hidden patterns in the data. Descriptive features containing important information about the contributions were successfully selected and used for this task. These included keywords retrieved from descriptions of the contributions. Two machine learning methods, AdaBoost and Support Vector Machines, were tested across ten classification models. Each model was evaluated on its accuracy in predicting the correct class of the target variable. A misclassified component was more likely to be incorrectly coded and was therefore treated as an anomaly. The AdaBoost method performed better and more consistently on the majority of the models. Six classification models built with the AdaBoost method were combined into one final ensemble classifier. This classifier was verified on new, unseen data, and an anomaly score was calculated for each component. The higher the score, the higher the risk of being anomalous. The result was a ranked list in which the most anomalous components were prioritized for further investigation by staff at Sida.
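The ensemble scoring step can be sketched in a few lines. This is a minimal illustration of the idea, not Sida's actual pipeline; the keyword rules stand in for trained AdaBoost models, and all names are hypothetical: each ensemble member predicts a class from a contribution's description, and the anomaly score is the fraction of members that disagree with the recorded code.

```python
def anomaly_scores(descriptions, recorded_codes, classifiers):
    """Fraction of ensemble members whose prediction disagrees with the
    recorded code; higher scores suggest a likely coding error."""
    scores = []
    for desc, code in zip(descriptions, recorded_codes):
        wrong = sum(1 for clf in classifiers if clf(desc) != code)
        scores.append(wrong / len(classifiers))
    return scores

# Three toy keyword "classifiers" standing in for trained models.
classifiers = [
    lambda d: "health" if "clinic" in d else "education",
    lambda d: "health" if "hospital" in d else "education",
    lambda d: "health" if "vaccine" in d else "education",
]
```

Sorting components by score in descending order then yields the prioritized list described above.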
|
180 |
Design-by-analogy and representation in innovative engineering concept generation / Linsey, Julie Stahmer, 1979- 29 August 2008 (has links)
Design-by-analogy is an important tool for engineers seeking innovative solutions to design problems. A new method for systematically guiding designers in seeking analogies, the WordTree Design-by-Analogy Method, was created based on knowledge gained from a series of experiments and prior literature. The WordTree Method linguistically re-represents the design problem and leads the designer to unexpected, novel analogies and analogous domains. A controlled experiment and applications of the method to a number of engineering projects demonstrate the method's value. Designers implementing the method identify a greater number of analogies. Application of the method to a set of engineering projects resulted in unexpected, novel analogies and solutions. A set of experiments to more deeply understand the individual cognitive and group social processes employed during analogical design guided the development of the WordTree Design-by-Analogy Method. A series of three experiments shows the effects of the problem representation, and of how the analogy is initially learned, on a designer's ability to use the analogy to solve a future design problem. The effect of the problem representation depends on how the analogy is initially learned. Learning analogies in more domain-general representations facilitates later retrieval and use. A fourth experiment explored group brainwriting idea generation techniques, including 6-3-5, Gallery, C-Sketch, and Brainsketching, through a 3 × 2 factorial experiment. The first factor controls how teams represent their ideas to each other: words alone, sketches alone, or a combination. The second factor determines how teams exchange ideas: either all ideas are displayed on the wall, or sets of ideas are rotated between team members. The number, quality, novelty, and variety of ideas are measured. The greatest quantity of ideas is produced when teams use a combination of words and sketches to represent their ideas and then rotationally exchange them.
This corresponds to a hybrid 6-3-5/C-Sketch method.
|