About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1001

Applying inter-layer conflict resolution to hybrid robot control architectures

Powers, Matthew D. 20 January 2010 (has links)
In this document, we propose and examine the novel use of a learning mechanism between the reactive and deliberative layers of a hybrid robot control architecture. To balance the need to achieve complex goals against real-time constraints, many modern mobile robot navigation control systems make use of a hybrid deliberative-reactive architecture. In this paradigm, a high-level deliberative layer plans routes or actions toward a known goal based on accumulated world knowledge. A low-level reactive layer selects motor commands based on current sensor data and the deliberative layer's plan. The desired system-level effect of this architecture is that the robot is able to combine complex reasoning toward global objectives with quick reaction to local constraints. Implicit in this type of architecture is the assumption that both layers use the same model of the robot's capabilities and constraints. For example, differences in how the robot's kinematic constraints are represented may lead the deliberative layer to create a plan that the reactive layer cannot follow. This sort of conflict may degrade system-level performance, if not cause complete navigational deadlock. Traditionally, it has been the task of the robot designer to ensure that the layers operate in a compatible manner. However, this is a complex, empirical task. Working to improve system-level performance and navigational robustness, we propose introducing a learning mechanism between the reactive layer and the deliberative layer, allowing the deliberative layer to learn a model of the reactive layer's execution of its plans. First, we focus on detecting this inter-layer conflict and acting based on a corrected model. This is demonstrated on a physical robotic platform in an unstructured outdoor environment. Next, we focus on learning a model to predict instances of inter-layer conflict and planning to act with respect to this model. This is demonstrated using supervised learning in a physics-based simulation environment. Results and algorithms are presented.
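The abstract does not give the algorithmic details, but the loop it describes can be sketched in a few lines. The following Python sketch is purely illustrative and not Powers's method: all class names, thresholds, and the simple frequency-based learner are assumptions. It shows the general idea of a deliberative planner scoring candidate plan segments with a conflict model that is updated from the reactive layer's actual execution outcomes, so that planning gradually comes to respect constraints the planner's own model misses.

```python
# Hypothetical sketch (not the author's code): a hybrid architecture in which the
# deliberative layer learns a model of when the reactive layer fails to execute
# its plan segments, and avoids segments with high predicted conflict when planning.

import random
from dataclasses import dataclass

@dataclass
class Segment:
    heading_change: float   # radians the plan asks the robot to turn
    clearance: float        # metres of free space around the segment

class ReactiveLayer:
    """Stand-in for the low-level controller; it rejects segments that violate
    its (unmodelled) kinematic limits -- the source of inter-layer conflict."""
    MAX_TURN = 0.8
    MIN_CLEARANCE = 0.5

    def execute(self, seg: Segment) -> bool:
        return seg.heading_change <= self.MAX_TURN and seg.clearance >= self.MIN_CLEARANCE

class ConflictModel:
    """Learns, from execution outcomes, which plan segments the reactive layer
    is likely to reject (a frequency table stands in for the supervised learner)."""
    def __init__(self):
        self.stats = {}   # discretised segment -> [failures, trials]

    def _key(self, seg: Segment):
        return (round(seg.heading_change, 1), round(seg.clearance, 1))

    def update(self, seg: Segment, succeeded: bool):
        failures, trials = self.stats.get(self._key(seg), [0, 0])
        self.stats[self._key(seg)] = [failures + (0 if succeeded else 1), trials + 1]

    def conflict_probability(self, seg: Segment) -> float:
        failures, trials = self.stats.get(self._key(seg), [0, 0])
        return failures / trials if trials else 0.0

class DeliberativeLayer:
    """Chooses, among candidate segments toward the goal, the one with the
    lowest predicted conflict -- i.e. it plans with respect to the learned model."""
    def __init__(self, model: ConflictModel):
        self.model = model

    def plan(self, candidates):
        return min(candidates, key=self.model.conflict_probability)

if __name__ == "__main__":
    reactive, model = ReactiveLayer(), ConflictModel()
    planner = DeliberativeLayer(model)
    for _ in range(200):
        candidates = [Segment(random.uniform(0, 1.5), random.uniform(0, 2)) for _ in range(5)]
        seg = planner.plan(candidates)
        ok = reactive.execute(seg)
        model.update(seg, ok)   # close the loop: deliberation learns from execution
```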
1002

MaltParser -- An Architecture for Inductive Labeled Dependency Parsing

Hall, Johan January 2006 (has links)
This licentiate thesis presents a software architecture for inductive labeled dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The architecture is based on the theoretical framework of inductive dependency parsing by Nivre (2006) and has been realized in MaltParser, a system that supports several parsing algorithms and learning methods, and for which complex feature models can be defined in a special description language. Special attention is given in this thesis to learning methods based on support vector machines (SVM).

The implementation is validated in three sets of experiments using data from three languages (Chinese, English and Swedish). First, we check whether the implementation realizes the underlying architecture. The experiments show that the MaltParser system outperforms the baseline and satisfies the basic constraints of well-formedness. Furthermore, the experiments show that it is possible to vary parsing algorithm, feature model and learning method independently. Second, we focus on the special properties of the SVM interface. It is possible to reduce the learning and parsing time without sacrificing accuracy by dividing the training data into smaller sets, according to the part-of-speech of the next token in the current parser configuration. Third, the last set of experiments presents a broad empirical study that compares SVM to memory-based learning (MBL) with five different feature models, where all combinations have gone through parameter optimization for both learning methods. The study shows that SVM outperforms MBL for more complex and lexicalized feature models with respect to parsing accuracy. There are also indications that SVM, with a splitting strategy, can achieve faster parsing than MBL. The parsing accuracy achieved is the highest reported for the Swedish data set and very close to the state of the art for Chinese and English.

/ This licentiate thesis presents a software architecture for data-driven dependency parsing, that is, for automatically producing a syntactic analysis in the form of dependency graphs for sentences in natural language text. The architecture is built on the idea that parsing algorithm, feature model and learning method should be variable independently of one another. As the foundation for this architecture we have used the theoretical framework of inductive dependency parsing presented by Nivre (2006). The architecture has been realized in the MaltParser software, in which complex feature models can be defined in a special description language. In this thesis we place particular emphasis on describing how we have integrated the learning method support vector machines (SVM).

MaltParser is validated in three series of experiments using data from three languages (Chinese, English and Swedish). The first series of experiments checks whether the implementation realizes the underlying architecture. The experiments show that MaltParser clearly outperforms a trivial baseline method for dependency parsing and that the basic requirements on well-formed dependency graphs are fulfilled. Moreover, the experiments show that it is possible to vary parsing algorithm, feature model and learning method independently of one another. The second series of experiments focuses on the special properties of the SVM interface. The experiments show that it is possible to reduce learning and parsing time without losing parsing accuracy by splitting the training data according to the part-of-speech tag of the next word in the current parser configuration. The third and final series of experiments presents an empirical study comparing SVM with memory-based learning (MBL). The study uses five feature models, where all combinations of language, learning method and feature model have undergone extensive parameter optimization. The experiments show that SVM outperforms MBL for more complex and lexicalized feature models with respect to parsing accuracy. There are also some indications that SVM, with a splitting strategy, can parse text faster than MBL. For Swedish we can report the highest parsing accuracy so far, and for Chinese and English the results are close to the best that have been reported.
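As an illustration of the splitting strategy described above, the sketch below trains one classifier per part-of-speech tag of the next input token and falls back to a default transition for unseen tags. It is not MaltParser's implementation: the feature vectors and transition labels are made up, and scikit-learn's LinearSVC stands in for the SVM package actually used.

```python
# Hypothetical sketch: split transition-classification training data by the
# part-of-speech of the next token, then train one SVM per split.
from collections import defaultdict
from sklearn.svm import LinearSVC

def train_split_svms(instances):
    """instances: iterable of (pos_of_next_token, feature_vector, transition)."""
    buckets = defaultdict(list)
    for pos, feats, transition in instances:
        buckets[pos].append((feats, transition))
    models = {}
    for pos, data in buckets.items():
        X = [feats for feats, _ in data]
        y = [transition for _, transition in data]
        if len(set(y)) < 2:          # a classifier needs at least two classes
            models[pos] = y[0]       # degenerate bucket: always predict that transition
        else:
            models[pos] = LinearSVC().fit(X, y)
    return models

def predict_transition(models, pos, feats, default="SHIFT"):
    model = models.get(pos)
    if model is None:
        return default               # POS tag unseen in training: fall back
    if isinstance(model, str):
        return model
    return model.predict([feats])[0]

# Toy usage with made-up features (word-form id, head distance, stack depth):
train = [
    ("NN", [1, 0, 2], "SHIFT"), ("NN", [1, 1, 1], "RIGHT-ARC"),
    ("VB", [2, 1, 2], "LEFT-ARC"), ("VB", [3, 0, 1], "SHIFT"),
]
models = train_split_svms(train)
print(predict_transition(models, "NN", [1, 0, 2]))
```

Because each bucket is smaller than the full training set, each SVM trains (and at parse time is consulted) faster, which matches the time reductions reported in the abstract.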
1003

Approximation methods for efficient learning of Bayesian networks

Riggelsen, Carsten. January 1900 (has links)
Thesis (Ph.D.)--Utrecht University, 2006. / Includes bibliographical references (p. [133]-137).
1004

Physically interpretable machine learning methods for transcription factor binding site identification using principled energy thresholds and occupancy

Drawid, Amar Mohan. January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Computational Biology and Molecular Biophysics." Includes bibliographical references (p. 210-226).
1005

Adaptive temporal difference learning of spatial memory in the water maze task

Stone, Erik E. Skubic, Marge. January 2009 (has links)
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on January 22, 2010). Thesis advisor: Dr. Marjorie Skubic. Includes bibliographical references.
1006

Classifying atomicity violation warnings using machine learning

Li, Hongjiang. January 2008 (has links)
Thesis (M.S.)--University of Wyoming, 2008. / Title from PDF title page (viewed on August 5, 2009). Includes bibliographical references (p. 38-39).
1007

Modular learning through output space decomposition

Kumar, Shailesh, January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 196-213). Available also in a digital version from Dissertation Abstracts.
1008

Recommender system for audio recordings: a thesis

Lee, Long Seo. Dekhtyar, Alexander. January 1900 (has links)
Thesis (M.S.)--California Polytechnic State University, 2010. / Title from PDF title page; viewed on March 18, 2010. Major professor: Alexander Dekhtyar, Ph.D. "Presented to the faculty of California Polytechnic State University, San Luis Obispo." "In partial fulfillment of the requirements for the degree [of] Master of Science in Computer Science." "January 2010." Includes bibliographical references (p. 72-74).
1009

A unifying framework for computational reinforcement learning theory

Li, Lihong, January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Computer Science." Includes bibliographical references (p. 238-261).
1010

FALCONET: force-feedback approach for learning from coaching and observation using natural and experiential training

Stein, Gary. January 2009 (has links)
Thesis (Ph.D.)--University of Central Florida, 2009. / Adviser: Avelino Gonzalez. Includes bibliographical references (p. 275-288).
