  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Reengineering human performance and fatigue research through use of physiological monitoring devices, web-based and mobile device data collection methods, and integrated data storage techniques /

O'Connor, Maureen J.; Patillo, Paul J. January 2003 (has links) (PDF)
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, December 2003. / Thesis advisor(s): Nita L. Miller, Thomas J. Housel. Includes bibliographical references (p. 115-117). Also available online.
262

Learning for information extraction: from named entity recognition and disambiguation to relation extraction

Bunescu, Razvan Constantin, 1975- 28 August 2008 (has links)
Information Extraction, the task of locating textual mentions of specific types of entities and their relationships, aims at representing the information contained in text documents in a structured format that is more amenable to applications in data mining, question answering, or the semantic web. The goal of our research is to design information extraction models that obtain improved performance by exploiting types of evidence that have not been explored in previous approaches. Since designing an extraction system through introspection by a domain expert is a laborious and time consuming process, the focus of this thesis will be on methods that automatically induce an extraction model by training on a dataset of manually labeled examples.

Named Entity Recognition is an information extraction task that is concerned with finding textual mentions of entities that belong to a predefined set of categories. We approach this task as a phrase classification problem, in which candidate phrases from the same document are collectively classified. Global correlations between candidate entities are captured in a model built using the expressive framework of Relational Markov Networks. Additionally, we propose a novel tractable approach to phrase classification for named entity recognition based on a special Junction Tree representation.

Classifying entity mentions into a predefined set of categories achieves only a partial disambiguation of the names. This is further refined in the task of Named Entity Disambiguation, where names need to be linked to their actual denotations. In our research, we use Wikipedia as a repository of named entities and propose a ranking approach to disambiguation that exploits learned correlations between words from the name context and categories from the Wikipedia taxonomy.

Relation Extraction refers to finding relevant relationships between entities mentioned in text documents.
Our approaches to this information extraction task differ in the type and the amount of supervision required. We first propose two relation extraction methods that are trained on documents in which sentences are manually annotated for the required relationships. In the first method, the extraction patterns correspond to sequences of words and word classes anchored at two entity names occurring in the same sentence. These are used as implicit features in a generalized subsequence kernel, with weights computed through training of Support Vector Machines. In the second approach, the implicit extraction features are focused on the shortest path between the two entities in the word-word dependency graph of the sentence. Finally, in a significant departure from previous learning approaches to relation extraction, we propose reducing the amount of required supervision to only a handful of pairs of entities known to exhibit or not exhibit the desired relationship. Each pair is associated with a bag of sentences extracted automatically from a very large corpus. We extend the subsequence kernel to handle this weaker form of supervision, and describe a method for weighting features in order to focus on those correlated with the target relation rather than with the individual entities. The resulting Multiple Instance Learning approach offers a competitive alternative to previous relation extraction methods, at a significantly reduced cost in human supervision. / text
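The second relation-extraction method above builds features from the shortest path between two entities in the sentence's word-word dependency graph. A minimal sketch of that path computation, assuming a toy sentence and hand-written dependency edges (not taken from the thesis):

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """Breadth-first search for the shortest path between two entity
    tokens in an undirected word-word dependency graph."""
    graph = {}
    for head, dep in edges:
        graph.setdefault(head, set()).add(dep)
        graph.setdefault(dep, set()).add(head)
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Hypothetical dependency edges for: "protesters seized several stations"
edges = [("seized", "protesters"), ("seized", "stations"),
         ("stations", "several")]
print(shortest_dependency_path(edges, "protesters", "stations"))
# → ['protesters', 'seized', 'stations']
```

In the thesis's approach the words and word classes along such paths become implicit kernel features; the sketch covers only the graph-search step.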
263

A standardized language for a military intelligence information system

Harrison, Harry Clifford, 1941- January 1971 (has links)
No description available.
264

On the design of an information retrieval system for Patent Office novelty searching

Griffiths, Samuel Ernest, 1928- January 1962 (has links)
No description available.
265

The development of a code dictionary for the placement of sociological documents into an information retrieval system

Petroni, Frank Anthony, 1936- January 1963 (has links)
No description available.
266

Modeling information-seeking expertise on the Web

Tabatabai, Diana January 2002 (has links)
Searching for information pervades a wide spectrum of human activity, including learning and problem solving. With recent changes in the amount of information available and the variety of means of retrieval, there is even more need to understand why some searchers are more successful than others. This study was undertaken to advance our understanding of expertise in seeking information on the Web by identifying strategies and attributes that will increase the chance of a successful search on the Web. A model that illustrated the relationship between strategies and attributes and a successful search was also created. The strategies were: Evaluation, Navigation, Affect, Metacognition, Cognition, and Prior knowledge. Attributes included Age, Sex, Years of experience, Computer knowledge, and Info-seeking knowledge. Success was defined as finding a target topic within 30 minutes.

Participants were from three groups. Novices were 10 undergraduate pre-service teachers who were trained in pedagogy but not specifically in information seeking. Intermediates were nine final-year master's students who had received training on how to search but typically had not put their knowledge into extensive practice. Experts were 10 highly experienced professional librarians working in a variety of settings including government, industry, and university. Participants' verbal protocols were transcribed verbatim into a text file and coded. These codes, along with Internet temporary files, a background questionnaire, and a post-task interview were the sources of the data.

Since the variable of interest was the time to finding the topic, in addition to ANOVA and Pearson correlation, survival analysis was used to explore the data. The most significant differences in patterns of search between novices and experts were found in the Cognitive, Metacognitive, and Prior Knowledge strategies. Based on the fitted survival model, Typing Keyword, Criteria to evaluate sites, and Information-Seeking Knowledge …
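The survival analysis mentioned in this abstract treats time-to-success as the outcome, with searchers who never find the topic censored at the 30-minute cutoff. A minimal sketch of the standard Kaplan-Meier product-limit estimator on hypothetical data (the figures below are illustrative, not from the study):

```python
def kaplan_meier(durations, observed):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    durations: minutes until the target topic was found, or until the
    30-minute cutoff for searchers who never found it (censored).
    observed: 1 if the topic was found, 0 if censored."""
    times = sorted(set(t for t, e in zip(durations, observed) if e))
    survival, s = [], 1.0
    for t in times:
        # Searchers still "at risk" (still searching) just before time t.
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, e in zip(durations, observed) if d == t and e)
        s *= 1 - events / at_risk
        survival.append((t, s))
    return survival

# Five hypothetical searchers: three succeed, two are censored at 30 min.
durations = [12, 18, 25, 30, 30]
observed = [1, 1, 1, 0, 0]
for t, s in kaplan_meier(durations, observed):
    print(f"t={t:2d} min  S(t)={s:.2f}")
```

Here S(t) is the estimated probability of still searching (not yet successful) at time t, which is what lets censored participants contribute information without biasing the estimate.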
267

An empirical evaluation of computational and perceptual multi-label genre classification on music / Christopher Sanden

Sanden, Christopher, University of Lethbridge. Faculty of Arts and Science January 2010 (has links)
Automatic music genre classification is a high-level task in the field of Music Information Retrieval (MIR). It refers to the process of automatically assigning genre labels to music for various tasks, including, but not limited to categorization, organization and browsing. This is a topic which has seen an increase in interest recently as one of the cornerstones of MIR. However, due to the subjective and ambiguous nature of music, traditional single-label classification is inadequate. In this thesis, we study multi-label music genre classification from perceptual and computational perspectives. First, we design a set of perceptual experiments to investigate the genre-labelling behavior of individuals. The results from these experiments lead us to speculate that multi-label classification is more appropriate for classifying music genres. Second, we design a set of computational experiments to evaluate multi-label classification algorithms on music. These experiments not only support our speculation but also reveal which algorithms are more suitable for music genre classification. Finally, we propose and examine a group of ensemble approaches for combining multi-label classification algorithms to further improve classification performance. / viii, 87 leaves ; 29 cm
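One simple way to combine multi-label classifiers, as the ensemble approaches in this abstract do in spirit, is a label-wise vote: keep a genre label when enough of the base classifiers predict it. A minimal sketch (the classifiers and genre labels below are hypothetical, and the thesis's actual combination schemes may differ):

```python
def majority_vote(predictions, threshold=0.5):
    """Combine the label sets predicted by several multi-label
    classifiers: keep a genre label if at least `threshold` of the
    classifiers predicted it for the track."""
    all_labels = set().union(*predictions)
    n = len(predictions)
    return {label for label in all_labels
            if sum(label in p for p in predictions) / n >= threshold}

# Three hypothetical base classifiers labelling one track.
predictions = [{"rock", "blues"}, {"rock"}, {"rock", "jazz"}]
print(sorted(majority_vote(predictions)))
# → ['rock']
```

Lowering `threshold` trades precision for recall, which matters when genre boundaries are as subjective as the perceptual experiments suggest.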
268

Context-sensitive asynchronous memory : a general experience-based method for managing information access in cognitive agents

Francis, Anthony G., Jr. 08 1900 (has links)
No description available.
269

An exploration of feature selection as a tool for optimizing musical genre classification /

Fiebrink, Rebecca. January 2006 (has links)
The computer classification of musical audio can form the basis for systems that allow new ways of interacting with digital music collections. Existing music classification systems suffer, however, from inaccuracy as well as poor scalability. Feature selection is a machine-learning tool that can potentially improve both accuracy and scalability of classification. Unfortunately, there is no consensus on which feature selection algorithms are most appropriate or on how to evaluate the effectiveness of feature selection. Based on relevant literature in music information retrieval (MIR) and machine learning and on empirical testing, the thesis specifies an appropriate evaluation method for feature selection, employs this method to compare existing feature selection algorithms, and evaluates an appropriate feature selection algorithm on the problem of musical genre classification. The outcomes include an increased understanding of the potential for feature selection to benefit MIR and a new technique for optimizing one type of classification-based system.
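Filter-style feature selection of the kind compared in this thesis ranks each feature by how much it reduces uncertainty about the class. A minimal information-gain scorer, on toy discrete data (the features and labels are illustrative; the thesis's algorithms and audio features are more elaborate):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label distribution, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(feature_values, labels):
    """Filter-style feature score: reduction in class entropy after
    splitting the instances on a discrete feature's values."""
    total = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / total * entropy(subset)
    return entropy(labels) - remainder

# Toy genre data: one informative feature, one uninformative.
labels = ["rock", "rock", "jazz", "jazz"]
f_informative = ["hi", "hi", "lo", "lo"]
f_noise = ["a", "b", "a", "b"]
print(information_gain(f_informative, labels))  # → 1.0
print(information_gain(f_noise, labels))        # → 0.0
```

Ranking features by such a score and keeping only the top few is what can improve both accuracy (less noise) and scalability (fewer dimensions) in the classification systems the abstract describes.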
270

A Hybrid Scavenger Grid Approach to Intranet Search

Nakashole, Ndapandula 01 February 2009 (has links)
According to a 2007 global survey of 178 organisational intranets, 3 out of 5 organisations are not satisfied with their intranet search services. However, as intranet data collections become large, effective full-text intranet search services are needed more than ever before. To provide an effective full-text search service based on current information retrieval algorithms, organisations have to deal with the need for greater computational power. Hardware architectures that can scale to large data collections and can be obtained and maintained at a reasonable cost are needed. Web search engines address scalability and cost-effectiveness by using large-scale centralised cluster architectures. The scalability of cluster architectures is evident in the ability of Web search engines to respond to millions of queries within a few seconds while searching very large data collections. Though more cost-effective than high-end supercomputers, cluster architectures still have relatively high acquisition and maintenance costs. Where information retrieval is not the core business of an organisation, a cluster-based approach may not be economically viable. A hybrid scavenger grid is proposed as an alternative architecture — it consists of a combination of dedicated and dynamic resources in the form of idle desktop workstations. From the dedicated resources, the architecture gets predictability and reliability whereas from the dynamic resources it gets scalability. An experimental search engine was deployed on a hybrid scavenger grid and evaluated. Test results showed that the resources of the grid can be organised to deliver the best performance by using the optimal number of machines and scheduling the optimal combination of tasks that the machines perform. 
A system efficiency and cost-effectiveness comparison of a grid and a multi-core machine showed that for workloads of modest to large sizes, the grid architecture delivers better throughput per unit cost than the multi-core, at a system efficiency that is comparable to that of the multi-core. The study has shown that a hybrid scavenger grid is a feasible search engine architecture that is cost-effective and scales to medium- to large-scale data collections.
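The throughput-per-unit-cost comparison above reduces to simple arithmetic: documents processed per second, divided by hardware cost. A sketch with purely hypothetical figures (the thesis's actual workloads, timings, and prices are not reproduced here):

```python
def throughput_per_cost(docs_indexed, wall_seconds, cost):
    """Documents indexed per second, per unit of hardware cost."""
    return docs_indexed / wall_seconds / cost

# Hypothetical figures, purely illustrative: the grid pairs one cheap
# dedicated node with idle desktops already owned by the organisation,
# while the multi-core is a single, more expensive machine.
grid = throughput_per_cost(docs_indexed=1_000_000, wall_seconds=3600, cost=2000)
multicore = throughput_per_cost(docs_indexed=1_000_000, wall_seconds=2400, cost=8000)
print(f"grid: {grid:.4f}  multi-core: {multicore:.4f}")
```

Even when the multi-core machine finishes sooner in wall-clock time, the grid can come out ahead once cost is in the denominator, which is the sense in which the study finds it more cost-effective.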
