151

Probabilistic Shape Parsing and Action Recognition Through Binary Spatio-Temporal Feature Description

Whiten, Christopher J. 09 April 2013 (has links)
In this thesis, contributions are presented in the areas of shape parsing for view-based object recognition and spatio-temporal feature description for action recognition. A probabilistic model for parsing shapes into several distinguishable parts for accurate shape recognition is presented. This approach is based on robust geometric features that permit high recognition accuracy. As the second contribution in this thesis, a binary spatio-temporal feature descriptor is presented. Recent work shows that binary spatial feature descriptors are effective for increasing the efficiency of object recognition while retaining performance comparable to state-of-the-art descriptors. An extension of these approaches to action recognition is presented, yielding large gains in efficiency due to the computational advantage of computing a bag-of-words representation with the Hamming distance. A scene's motion and appearance are encoded with a short binary string. Exploiting the binary makeup of this descriptor greatly increases efficiency while retaining competitive recognition performance.
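The efficiency claim rests on the fact that the Hamming distance between packed binary descriptors reduces to an XOR followed by a bit count. The sketch below is a minimal illustration of nearest-word assignment for a bag-of-words vocabulary under this metric, not the thesis's implementation; the descriptor size and all names are assumptions.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two packed binary descriptors.

    a, b: uint8 arrays holding packed bits, e.g. a 256-bit
    descriptor packed into 32 bytes.
    """
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def assign_to_vocabulary(descriptor: np.ndarray, vocabulary: np.ndarray) -> int:
    """Index of the nearest visual word under the Hamming distance."""
    distances = [hamming_distance(descriptor, word) for word in vocabulary]
    return int(np.argmin(distances))

# Toy example: one 256-bit descriptor and a 3-word vocabulary.
rng = np.random.default_rng(0)
descriptor = rng.integers(0, 256, size=32, dtype=np.uint8)
vocabulary = rng.integers(0, 256, size=(3, 32), dtype=np.uint8)
print(assign_to_vocabulary(descriptor, vocabulary))
```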
152

Houses in reach: A personal real estate monitoring and mining application

Gupta, Sweta 05 May 2008 (has links)
Information technology, which is inescapably penetrating all facets of industry and our lives, is propelling the real estate industry into unknown territory. The number of websites providing access to information about property and its environs is steadily increasing. By automating various processes, cost-effective websites offer high-quality services. Despite on-line access, one has to analyze a seemingly endless variety of information to find desired houses or properties. Most buyers who use the web to investigate real estate log in to many websites again and again to keep track of houses, which is rather time consuming. Sometimes properties of interest have already sold while a buyer is still waiting for the price to come down. Moreover, real estate websites typically provide large lists of houses without taking the preferences of a particular customer into account. Some websites use proprietary protocols and formats to store and publish house listing data. While these formats are easy to read, there is no common format, which makes it difficult for web developers to consolidate data from different sources. The goal of our research is to design a flexible and extensible architecture for a real estate engine that can draw data from different real estate sources effectively. In addition, it is important to design this engine to display properties based on user preferences, rather than merely providing a list of all properties currently available. In this thesis, we present the design and implementation of the components of our “Houses In Reach” real estate engine, which addresses the problems mentioned above. The main components of this web engine include an architectural model, search engines that look for pertinent information on the web based on user preferences, and a visualization engine that displays houses on Google Maps. The thesis concludes with a discussion of our experience building these components.
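As a rough illustration of the preference-driven display the thesis argues for (a sketch of the general idea, not the actual engine; all field names are hypothetical), listings consolidated from different source formats could be normalized into uniform records and filtered against a stored buyer profile:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    price: int      # asking price in dollars
    bedrooms: int
    city: str

@dataclass
class Preferences:
    max_price: int
    min_bedrooms: int
    cities: set

def matches(listing: Listing, prefs: Preferences) -> bool:
    """True if a consolidated listing satisfies the buyer's stored preferences."""
    return (listing.price <= prefs.max_price
            and listing.bedrooms >= prefs.min_bedrooms
            and listing.city in prefs.cities)

# Records normalized from several proprietary source formats,
# then filtered per user instead of shown as one long list.
listings = [Listing(350_000, 3, "Regina"), Listing(500_000, 2, "Saskatoon")]
prefs = Preferences(max_price=400_000, min_bedrooms=3, cities={"Regina"})
print([l for l in listings if matches(l, prefs)])
```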
153

Application for Debugging and Calibration of an Underwater Robot

Lannebjer, Patrik, Forssman, Alexander January 2014 (has links)
In this thesis we present a suitable way of calibrating and debugging an autonomous underwater vehicle (AUV). The issues that arise when working with an AUV are the inconvenience of having to constantly recompile the software to change the AUV's behavior and the lack of feedback received. If the vehicle does not behave as it should, the information needed to trace and fix the problems is in general difficult to retrieve. To tackle this problem, a literature study was made of logging libraries, communication protocols, and AUVs in general. This resulted in identifying a set of existing logging libraries and possible communication protocols. From testing and analyzing these results, Zlog was chosen as the logging library and UDP as the communication protocol. Zlog is used in the AUV application to log relevant information on the AUV, and UDP carries this logging information from the AUV to a desktop program created for Windows. The desktop program also allows filtering of incoming logs with the use of a parser. This has been an essential part of the solution, making it possible to identify specific logging data and present it in a convenient way. To accommodate changes in the log file format, the parser has been given a grammar which can be adjusted to adapt to a different log file. Additionally, the desktop application can send commands to the AUV application via UDP to change the behavior of the AUV live.
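The thesis pairs Zlog (a C logging library) with UDP transport; the Python sketch below only illustrates the transport-and-filter idea on both ends of the link. The port number, log line format, and level names are assumptions, not the format actually used on the AUV.

```python
import socket

AUV_LOG_PORT = 5005  # hypothetical port; not specified in the abstract

def send_log(sock: socket.socket, host: str, level: str, message: str) -> None:
    """Vehicle side: ship one formatted log line to the desktop tool over UDP."""
    line = f"{level}|{message}"
    sock.sendto(line.encode("utf-8"), (host, AUV_LOG_PORT))

def receive_logs(min_level: str = "WARN") -> None:
    """Desktop side: receive datagrams and filter them by a simple level parser."""
    order = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", AUV_LOG_PORT))
    while True:
        datagram, _addr = sock.recvfrom(4096)
        level, _, message = datagram.decode("utf-8").partition("|")
        if order.get(level, 0) >= order[min_level]:
            print(level, message)
```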
154

Large Vocabulary Continuous Speech Recognition for Turkish Using HTK

Comez, Murat Ali 01 January 2003 (has links) (PDF)
This study aims to build a new language model that can be used in a Turkish large vocabulary continuous speech recognition system. Turkish is a very productive language in terms of word forms because of its agglutinative nature. For agglutinative languages like Turkish, a full-form vocabulary grows far beyond an acceptable size: from a single stem, thousands of new word forms can be generated using inflectional or derivational suffixes. In this thesis, words are parsed into their stems and endings, where an ending comprises the suffixes attached to the associated root. A search network based on bigrams is then constructed, with bigrams obtained either over stems and endings or over stems only. The language model proposed is based on bigrams obtained using only stems. All work is done in the HTK (Hidden Markov Model Toolkit) environment, except parsing and network transformation. Besides offering a new language model for Turkish, this study includes a comprehensive examination of the concepts underlying state-of-the-art speech recognition systems. To acquire a good command of these concepts and processes, isolated-word, connected-word, and continuous speech recognition tasks are performed, and the experimental results associated with these tasks are given.
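A toy sketch of the stem-only bigram estimation, outside HTK and with illustrative Turkish stems and endings (the parsing into stems and endings is assumed to have already happened), might look as follows:

```python
from collections import Counter

def stem_bigram_model(sentences):
    """Estimate bigram probabilities over stems by relative frequency.

    sentences: lists of (stem, ending) pairs produced by a
    morphological parser; only stems enter the model, matching
    the stem-only bigram variant described above.
    """
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        stems = ["<s>"] + [stem for stem, _ending in sent]
        unigrams.update(stems)
        bigrams.update(zip(stems, stems[1:]))
    return {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

# Toy corpus: each word already parsed into (stem, ending).
corpus = [[("ev", "ler"), ("git", "ti")], [("ev", "de"), ("git", "ti")]]
model = stem_bigram_model(corpus)
print(model[("ev", "git")])  # P(git | ev) = 1.0 in this toy corpus
```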
155

Lexical approaches to backoff in statistical parsing

Lakeland, Corrin, n/a January 2006 (has links)
This thesis develops a new method for predicting probabilities in a statistical parser so that more sophisticated probabilistic grammars can be used. A statistical parser uses a probabilistic grammar derived from a training corpus of hand-parsed sentences. The grammar is represented as a set of constructions - in a simple case these might be context-free rules. The probability of each construction in the grammar is then estimated by counting its relative frequency in the corpus. A crucial problem when building a probabilistic grammar is to select an appropriate level of granularity for describing the constructions being learned. The more constructions we include in our grammar, the more sophisticated a model of the language we produce. However, if too many different constructions are included, then our corpus is unlikely to contain reliable information about the relative frequency of many constructions. In existing statistical parsers two main approaches have been taken to choosing an appropriate granularity. In a non-lexicalised parser constructions are specified as structures involving particular parts-of-speech, thereby abstracting over individual words. Thus, in the training corpus two syntactic structures involving the same parts-of-speech but different words would be treated as two instances of the same event. In a lexicalised grammar the assumption is that the individual words in a sentence carry information about its syntactic analysis over and above what is carried by its part-of-speech tags. Lexicalised grammars have the potential to provide extremely detailed syntactic analyses; however, Zipf's law makes it hard for such grammars to be learned. In this thesis, we propose a method for optimising the trade-off between informative and learnable constructions in statistical parsing. We implement a grammar which works at a level of granularity in between single words and parts-of-speech, by grouping words together using unsupervised clustering based on bigram statistics. We begin by implementing a statistical parser to serve as the basis for our experiments. The parser, based on that of Michael Collins (1999), contains a number of new features of general interest. We then implement a model of word clustering, which we believe is the first to deliver vector-based word representations for an arbitrarily large lexicon. Finally, we describe a series of experiments in which the statistical parser is trained using categories based on these word representations.
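As a toy illustration of deriving word representations from bigram statistics (a sketch of the general idea, not the clustering model developed in the thesis), each word can be represented by the relative frequencies of the words that immediately follow it:

```python
from collections import Counter, defaultdict

def bigram_vectors(tokens, vocab):
    """Map each word to a vector of relative frequencies of its
    immediate successors, a simple stand-in for the bigram
    statistics used to cluster words."""
    follows = defaultdict(Counter)
    for left, right in zip(tokens, tokens[1:]):
        follows[left][right] += 1
    vectors = {}
    for word, counter in follows.items():
        total = sum(counter.values())
        vectors[word] = [counter[v] / total for v in vocab]
    return vectors

tokens = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(tokens))
print(bigram_vectors(tokens, vocab)["the"])
# Words with similar successor distributions get similar
# vectors and would fall into the same unsupervised cluster.
```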
156

ETRANS : an English-Thai translator /

Warote, Nuntaporn. January 1991 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1991. / Typescript. Includes bibliography: leaf [53].
157

A maximum entropy approach to Chinese language parsing /

Yang, Yongsheng. January 2002 (has links)
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002. / Includes bibliographical references (leaves 54-55). Also available in electronic version. Access restricted to campus users.
158

Semantic lexicon acquisition for learning natural language interfaces /

Thompson, Cynthia Ann, January 1998 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1998. / Vita. Includes bibliographical references (leaves 134-145). Available also in a digital version from Dissertation Abstracts.
159

On comprehending sentences: syntactic parsing strategies /

Frazier, Lyn. January 1979 (has links)
Originally presented as the author's thesis. / Bibliography: 159-165.
160

Complexities of Parsing in the Presence of Reordering

Berglund, Martin January 2012 (has links)
The work presented in this thesis discusses various formalisms for representing the addition of order-controlling and order-relaxing mechanisms to existing formal language models. An immediate example is shuffle expressions, which can represent not only all regular languages (a regular expression is a shuffle expression), but also feature additional operations that generate arbitrary interleavings of their argument strings. This defines a language class which, on the one hand, does not contain all context-free languages but, on the other hand, contains an infinite number of languages that are not context-free. Shuffle expressions are, however, not themselves the main interest of this thesis. Instead we consider several formalisms that share many of their properties, where some are direct generalisations of shuffle expressions, while others feature very different methods of controlling order. Notably, all formalisms studied here have a semi-linear Parikh image; are structured so that each derivation step generates at most a constant number of symbols (as opposed to the parallel derivations in, for example, Lindenmayer systems); feature interesting ordering characteristics, created either by derivation steps that may generate symbols in multiple places at once or by multiple generating processes that produce output independently in an interleaved fashion; and are all limited enough to make the question of efficient parsing an interesting and reasonable goal. This vague description already hints towards the formalisms considered: the different classes of mildly context-sensitive devices and concurrent finite-state automata. This thesis first explains and discusses these formalisms, and then focuses primarily on the associated membership problem (or parsing problem). Several parsing results are discussed here, and the papers in the appendix give a more complete picture of these problems and some related ones.
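To make the shuffle operation concrete, the following minimal sketch enumerates every interleaving of two strings. The number of interleavings grows combinatorially with the string lengths, which already hints at why parsing in the presence of such reordering is hard.

```python
def shuffles(u: str, v: str):
    """Yield every interleaving of u and v, i.e. the shuffle of u and v.

    Each result preserves the internal order of u and of v but
    mixes the two freely, matching the shuffle operation described above.
    """
    if not u:
        yield v
        return
    if not v:
        yield u
        return
    for rest in shuffles(u[1:], v):
        yield u[0] + rest
    for rest in shuffles(u, v[1:]):
        yield v[0] + rest

print(sorted(shuffles("ab", "cd")))
# ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```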
