531

Development of a learning management system for UCAR-COMET

Riter, Dan. January 2006 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on Apr. 7, 2006). Includes bibliographical references.
532

Development of database and web site for D3Multisport

Garrison, Jay T. January 2006 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on May 25, 2006). Includes bibliographical references.
533

Clustering of database query results

Daniels, Kristine Jean, January 2006 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Computer Science, 2006. / Includes bibliographical references (p. 41-44).
534

Enhanced classification through exploitation of hierarchical structures

Punera, Kunal Vinod Kumar, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
535

A methodology for domain-specific conceptual data modeling and querying

Tian, Hao. January 2007 (has links)
Thesis (Ph. D.)--Georgia State University, 2007. / Rajshekhar Sunderraman, committee chair; Paul S. Katz, Yanqing Zhang, Ying Zhu, committee members. Electronic text (128 p. : ill.) : digital, PDF file. Description based on contents viewed Oct. 15, 2007; title from file title page. Includes bibliographical references (p. 124-128).
536

Constraint processing alternatives in an engineering design database

Schaefer, Michael Joseph. January 1982 (has links)
Thesis (M.S.)--Carnegie-Mellon University, 1983. / Includes bibliographical references (p. 121-123).
537

Keyword Join: Realizing Keyword Search for Information Integration

Yu, Bei, Liu, Ling, Ooi, Beng Chin, Tan, Kian Lee 01 1900 (has links)
Information integration has been widely studied over the last several decades. However, it is far from solved, owing to the complexity of resolving schema and data heterogeneities. In this paper, we present our attempt to alleviate this difficulty by providing keyword search functionality for integrating information from heterogeneous databases. Our solution does not require a predefined global schema or any mappings between databases. Rather, it relies on an operator called keyword join, which takes as input a set of lists of partial answers from different data sources and outputs a list of integrated results formed by joining tuples from the input lists according to predefined similarity measures. Our system allows the source databases to remain autonomous while keeping the overall system dynamic and extensible. We have tested the system on a real dataset and benchmark, and the results show that the proposed method is practical and effective. / Singapore-MIT Alliance (SMA)
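The abstract describes the keyword join operator only at a high level. The sketch below is a minimal illustration, under assumed inputs, of how partial answers from several sources could be combined using a pairwise similarity test; the tuple format, the token-overlap similarity, and the threshold are assumptions made for the sketch, not the authors' actual implementation.

```python
from itertools import product
from typing import Callable

# Hypothetical partial answer: a tuple of field values returned by one source,
# e.g. ("J. Smith", "Database Systems", "2004").
PartialAnswer = tuple

def keyword_join(
    answer_lists: list[list[PartialAnswer]],
    similarity: Callable[[PartialAnswer, PartialAnswer], float],
    threshold: float = 0.8,
) -> list[tuple[PartialAnswer, ...]]:
    """Join partial answers from several sources into integrated results.

    A combination of one partial answer per source is kept when every pair of
    its members is at least `threshold`-similar -- an assumed stand-in for the
    paper's predefined similarity measures.
    """
    results = []
    for combo in product(*answer_lists):
        if all(
            similarity(a, b) >= threshold
            for i, a in enumerate(combo)
            for b in combo[i + 1:]
        ):
            results.append(combo)
    return results

def token_overlap(a: PartialAnswer, b: PartialAnswer) -> float:
    """Jaccard overlap of the word tokens in two partial answers (illustrative)."""
    ta = set(" ".join(a).lower().split())
    tb = set(" ".join(b).lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Example: partial answers about the same entity coming from two sources.
src1 = [("John Smith", "Database Systems")]
src2 = [("J. Smith", "Database Systems", "2004"), ("A. Jones", "Networks", "2001")]
print(keyword_join([src1, src2], token_overlap, threshold=0.3))
```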
538

LabelMe: a database and web-based tool for image annotation

Russell, Bryan C., Torralba, Antonio, Murphy, Kevin P., Freeman, William T. 08 September 2005 (has links)
Research in object detection and recognition in cluttered scenes requires large image collections with ground-truth labels. The labels should provide information about the object classes present in each image, as well as their shapes and locations, and possibly other attributes such as pose. Such data is useful for testing as well as for supervised learning. This project provides a web-based annotation tool that makes it easy to annotate images and to instantly share such annotations with the community. This tool, plus an initial set of 10,000 images (3,000 of which have been labeled), can be found at http://www.csail.mit.edu/~brussell/research/LabelMe/intro.html
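LabelMe-style annotation attaches named polygon outlines to each image. The structures below are a rough sketch of what such a record might look like, assumed for illustration only; they are not the tool's actual storage schema.

```python
from dataclasses import dataclass, field

@dataclass
class PolygonAnnotation:
    """One labeled object: a class name plus its outline in image coordinates."""
    object_name: str                     # e.g. "car", "person"
    polygon: list[tuple[int, int]]       # (x, y) vertices of the outline

@dataclass
class ImageAnnotation:
    """All ground-truth labels attached to a single image."""
    filename: str
    width: int
    height: int
    objects: list[PolygonAnnotation] = field(default_factory=list)

# Example: annotating one object in an image.
ann = ImageAnnotation(filename="street.jpg", width=640, height=480)
ann.objects.append(
    PolygonAnnotation(object_name="car",
                      polygon=[(12, 300), (120, 300), (120, 380), (12, 380)])
)
```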
539

Roteamento de consultas em banco de dados peer-to-peer utilizando colônias de formigas e ontologias [Query routing in peer-to-peer databases using ant colonies and ontologies]

Costa, Leandro Rincon. January 2009 (has links)
Advisor: Carlos Roberto Valêncio / Committee: Pedro Luiz Pizzigatti Corrêa / Committee: Rogéria Cristiane Gratão de Souza / Abstract: Peer-to-peer systems became popular in the 1990s and, since then, major advances and new applications have been developed that exploit the characteristics of this kind of computer network. Initially such networks were used only in simple applications such as file sharing, but they are now found in applications of ever-increasing complexity. Among these newer systems, the sharing of information stored in databases stands out as a rapidly developing segment. In peer-to-peer databases, a rich and widely distributed knowledge base is created, based on the sharing of semantically related but syntactically heterogeneous information. One of the challenges of this class of application is to guarantee an efficient way of searching for information without compromising the autonomy of each node or the flexibility of the network. This work explores that challenge and proposes search support through path optimization, seeking to reduce the number of messages sent over the network without significantly affecting the number of answers obtained per query. To that end, it proposes a strategy based on concepts from the ant colony algorithm and on the classification of information using ontologies. This made it possible to add semantic support to ease the search process in peer-to-peer databases, while reducing message traffic and even allowing more results to be reached without compromising network performance. / Master's
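The abstract describes ant-colony-inspired query routing over ontology-classified peers without giving algorithmic detail. The following sketch shows, under assumed data structures, the pheromone-weighted neighbor selection and reinforcement steps such a scheme typically uses; the class, the field names, and the evaporation rule are illustrative and are not taken from the thesis.

```python
import random

class Peer:
    """A node that forwards queries to neighbors using pheromone trails per topic."""

    def __init__(self, name: str, topics: set[str]):
        self.name = name
        self.topics = topics                       # ontology classes this peer answers
        self.neighbors: list["Peer"] = []
        # pheromone[(neighbor_name, topic)] -> learned desirability of that route
        self.pheromone: dict[tuple[str, str], float] = {}

    def choose_next_hop(self, topic: str) -> "Peer | None":
        """Pick a neighbor with probability proportional to its pheromone for `topic`."""
        if not self.neighbors:
            return None
        weights = [self.pheromone.get((n.name, topic), 1.0) for n in self.neighbors]
        return random.choices(self.neighbors, weights=weights, k=1)[0]

    def reinforce(self, neighbor: "Peer", topic: str, reward: float,
                  evaporation: float = 0.1):
        """Evaporate old pheromone and deposit new pheromone after a successful answer."""
        key = (neighbor.name, topic)
        old = self.pheromone.get(key, 1.0)
        self.pheromone[key] = (1 - evaporation) * old + reward

# Example: peer A learns that peer B is a good route for "sports" queries.
a, b = Peer("A", {"sports"}), Peer("B", {"sports"})
a.neighbors.append(b)
a.reinforce(b, "sports", reward=0.5)
next_hop = a.choose_next_hop("sports")
```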
540

Data Driven Framework for Prognostics

January 2010 (has links)
abstract: Prognostics and health management (PHM) is a method that permits the reliability of a system to be evaluated under its actual application conditions. This work involved developing a robust system to determine the advent of failure. Using the data from the PHM experiment, a model was developed to estimate prognostic features and to build a condition-based monitoring system driven by the measured prognostics. To enable prognostics, a framework was developed to extract the load parameters required for damage assessment from irregular time-load data. As part of the methodology, a database engine was built to maintain and monitor the experimental data. This framework significantly reduces the time-load data without compromising the features that are essential for damage estimation. A failure-precursor-based approach was used for remaining-life prognostics. The developed system has a throughput of 4 MB/s, with 90% of operations completing within 100 ms. This work thus provides a survey of prognostic frameworks, a prognostics framework architecture and design approach, and a robust system implementation. / Dissertation/Thesis / M.S. Computer Science 2010
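The abstract mentions reducing irregular time-load data while preserving the features needed for damage estimation. One common reduction of this kind keeps only the turning points (local extrema) of the load history, since cycle-counting damage methods work from peaks and valleys; the sketch below illustrates that idea under assumed inputs and is not the thesis's actual algorithm.

```python
def turning_points(times: list[float], loads: list[float]) -> list[tuple[float, float]]:
    """Reduce an irregular time-load series to its turning points (local extrema).

    Peaks and valleys carry the damage-relevant information used by cycle-counting
    methods, so dropping the monotonic samples in between shrinks the data without
    losing the features needed for damage estimation.
    """
    if len(loads) < 3:
        return list(zip(times, loads))

    reduced = [(times[0], loads[0])]             # always keep the first sample
    for i in range(1, len(loads) - 1):
        prev_slope = loads[i] - loads[i - 1]
        next_slope = loads[i + 1] - loads[i]
        if prev_slope * next_slope < 0:          # sign change => peak or valley
            reduced.append((times[i], loads[i]))
    reduced.append((times[-1], loads[-1]))       # always keep the last sample
    return reduced

# Example: irregularly sampled load trace.
t = [0.0, 0.3, 0.9, 1.0, 1.7, 2.2, 3.0]
f = [0.0, 2.0, 5.0, 3.0, 1.0, 4.0, 2.5]
print(turning_points(t, f))   # keeps the extrema at t=0.9, 1.7, 2.2 plus both endpoints
```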
