About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
271

Automated Extraction and Retrieval of Metadata by Data Mining : a Case Study of Mining Engine for National Land Survey Sweden

Dong, Zheng January 2010 (has links)
Metadata is the information that describes geographical data resources and their key elements; it is used to guarantee the availability and accessibility of the data. ISO 19115 is a metadata standard for geographical information that makes geographical metadata shareable, retrievable, and understandable at a global level. To cope with the massive, high-dimensional, and highly diverse nature of geographical data, data mining is a suitable method for discovering metadata. This thesis develops and evaluates an automated mining method for extracting metadata from the data environment on the Local Area Network at the National Land Survey of Sweden (NLS). These metadata are prepared and provided across Europe according to the metadata implementing rules of the Infrastructure for Spatial Information in Europe (INSPIRE). The metadata elements are defined according to the formats of four different data entities: document data, time-series data, webpage data, and spatial data. To evaluate the method for further improvement, selected attributes and corresponding metadata of geographical data files are extracted automatically as metadata records during testing and stored in a database. Based on the extracted metadata schema, a retrieval function finds the files that contain a keyword entered by the user. Overall, the average success rate of metadata extraction and retrieval is 90.0%. The mining engine is developed in C# on top of a SQL Server 2005 database, and Lucene.Net is integrated in Visual Studio 2005 to build an indexing framework for extracting and accessing metadata in the database.
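As a rough illustration of the extract-index-retrieve pipeline described above (the thesis implements it in C# with SQL Server 2005 and Lucene.Net; the file attributes, record layout, and index below are simplified assumptions sketched in Python):

```python
import os
import time
from collections import defaultdict

def extract_metadata(root):
    """Walk a directory tree and build one metadata record per file.

    The attribute set (title, format, size, modified date) is an assumption;
    the thesis defines its elements according to the INSPIRE implementing rules.
    """
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            records.append({
                "title": os.path.splitext(name)[0],
                "format": os.path.splitext(name)[1].lstrip(".").lower(),
                "path": path,
                "size_bytes": stat.st_size,
                "modified": time.strftime("%Y-%m-%d", time.localtime(stat.st_mtime)),
            })
    return records

def build_index(records):
    """Build a simple inverted index (token -> record positions),
    standing in for the Lucene.Net index used in the thesis."""
    index = defaultdict(set)
    for i, record in enumerate(records):
        for value in record.values():
            for token in str(value).lower().replace("_", " ").split():
                index[token].add(i)
    return index

def search(index, records, keyword):
    """Return the metadata records whose indexed tokens contain the keyword."""
    return [records[i] for i in sorted(index.get(keyword.lower(), set()))]

if __name__ == "__main__":
    recs = extract_metadata(".")   # scan the current directory as a stand-in for the NLS LAN
    idx = build_index(recs)
    for hit in search(idx, recs, "survey"):
        print(hit["path"], hit["modified"])
```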
272

Implementation and Evaluation of Image Retrieval Method Utilizing Geographic Location Metadata

Lundstedt, Magnus January 2009 (has links)
Multimedia retrieval systems are very important today, with millions of content creators all over the world generating huge multimedia archives. Recent developments allow for content-based image and video retrieval, but these methods are often quite slow, especially when applied to a library of millions of media items. In this research a novel image retrieval method is proposed which utilizes spatial metadata on images. By finding clusters of images based on their geographic location (the spatial metadata) and combining this information with existing content-based image retrieval algorithms, the proposed method enables efficient presentation of high-quality image retrieval results to system users. Clustering methods considered include Vector Quantization, Vector Quantization LBG, and DBSCAN. Clustering was performed on three different similarity measures: spatial metadata, histogram similarity, and texture similarity. For histogram similarity there are many different distance metrics available for comparing histograms; Euclidean, Quadratic Form, and Earth Mover's Distance were studied, as well as three different color spaces: RGB, HSV, and CIE Lab.
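A minimal sketch of the two-stage idea, clustering on spatial metadata and then ranking within a cluster by color-histogram similarity; the coordinates, histogram size, and DBSCAN parameters are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical photo records: (latitude, longitude) plus a normalized color histogram.
locations = np.array([
    [59.33, 18.06], [59.34, 18.07], [59.33, 18.05],   # one spatial cluster
    [57.70, 11.97], [57.71, 11.98],                    # another spatial cluster
])
histograms = np.random.default_rng(0).dirichlet(np.ones(64), size=len(locations))

# Step 1: cluster images on spatial metadata (DBSCAN, one of the methods considered).
labels = DBSCAN(eps=0.05, min_samples=2).fit(locations).labels_

# Step 2: within the query's cluster, rank images by Euclidean distance
# between color histograms (one of the studied histogram metrics).
def euclidean_histogram_distance(h1, h2):
    return float(np.linalg.norm(h1 - h2))

query = histograms[0]
cluster_of_query = labels[0]
candidates = [i for i, lab in enumerate(labels) if lab == cluster_of_query]
ranked = sorted(candidates, key=lambda i: euclidean_histogram_distance(query, histograms[i]))

print("spatial cluster labels:", labels.tolist())
print("images ranked by histogram similarity within the query's cluster:", ranked)
```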
273

Collaborative tagging : folksonomy, metadata, visualization, e-learning, thesis

Bateman, Scott 12 December 2007 (has links)
Collaborative tagging is a simple and effective method for organizing and sharing web resources using human-created metadata. It has arisen out of the need for an efficient method of personal organization as the number of digital resources in everyday life increases. While tagging has become a proven organizational scheme through its popularity and widespread use on the Web, little is known about its implications and how it may be applied effectively in different situations. This is because tagging has evolved through several iterations of use on social software websites rather than through a scientific or engineering design process. The research presented in this thesis, through investigations in the domain of e-learning, seeks to understand more about the scientific nature of collaborative tagging through a number of human subject studies. While broad in scope, touching on issues in human-computer interaction, knowledge representation, Web system architecture, e-learning, metadata, and information visualization, this thesis focuses on how collaborative tagging can supplement the growing metadata requirements of e-learning. I conclude by looking at how the findings may be used in future research, using information based in the emergent social networks of social software to adapt automatically to the needs of individual users.
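As a small, hypothetical illustration of tagging as human-created metadata (the users and tags below are invented; the thesis studies tagging behaviour rather than prescribing a data model), a folksonomy for one resource can be aggregated from per-user tags like this:

```python
from collections import Counter
from typing import Dict, List

# Hypothetical tagging data: each user assigns free-form tags to one learning resource.
user_tags: Dict[str, List[str]] = {
    "alice": ["metadata", "folksonomy", "e-learning"],
    "bob":   ["tagging", "metadata"],
    "carol": ["folksonomy", "visualization", "metadata"],
}

def build_folksonomy(tags_by_user: Dict[str, List[str]]) -> Counter:
    """Aggregate per-user tags into collective, emergent metadata (a folksonomy)."""
    counts = Counter()
    for tags in tags_by_user.values():
        counts.update(tag.lower() for tag in tags)
    return counts

folksonomy = build_folksonomy(user_tags)
# Tags weighted by how many users applied them, e.g. for a tag-cloud visualization.
for tag, count in folksonomy.most_common():
    print(f"{tag}: {count}")
```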
274

Evaluation of a System Layer Design for the Visual Knowledge Builder

Gomathinayagam, Arun Bharath 2011 December 1900 (has links)
When users are searching for documents, they must sift through a collection of potentially relevant documents, assessing, categorizing, and prioritizing them based on the task at hand, a process we refer to as document triage. Since users' time is precious, as much information as possible should be presented to them to aid the process of document triage. This thesis presents a simple visualization and a set of features that can help users identify information of interest. As part of this thesis, the System Layer of the Visual Knowledge Builder (VKB) was developed as a tab strip container. Each of the tabs presents a different type of information about Web documents. The types of information currently included in VKB are: a summary of the Web document, keywords based on users' interests provided by the Interest Profile Manager (IPM), popular keywords from a social bookmarking site, metadata about the Web document, a list of outgoing links of the Web document, and the history of the Web document. We performed a heuristic evaluation to assess the usefulness of the new visualization and features. During the evaluation, participants were asked to rate the usefulness of each of the new Web document features on a scale of 1 to 7, where a value of 1 indicated strong disagreement and 7 indicated strong agreement. Our results indicate that the document summary, the keywords from IPM, popular tags, and the history of the Web document are expected to be most useful during the process of document triage.
275

Case study: Extending content metadata by appending user context

Svensson, Martin, Pettersson, Oskar January 2006 (has links)
Recent developments in modern computing and wireless networks allow mobile devices to be connected to the Internet regardless of their physical location. These mobile devices, such as smartphones and PDAs, have turned into powerful multimedia units that allow users to become producers of rich media content. This development contributes to the ever-growing amount of digital material on the World Wide Web and at the same time creates a new information landscape that combines content from both the wired and the mobile Internet. It is therefore important to understand the context, or setting, in which mobile devices are used and what digital content is produced by different users. To gain more knowledge about this domain, we have investigated how to extend the standard content metadata with a metadata domain describing the context, or setting, in which the content was created.

To limit the scope of our work, we have focused our efforts on a specific case within a project called AMULETS. The AMULETS project contains all of the elements we need to represent the contextual setting in a metadata model. Combined with the technical metadata associated with the digital content, we try to show the benefits of capturing the different attributes of the context that were present when the content was generated. Additionally, we have created a proof-of-concept Entity-Relationship (ER) diagram that proposes how the metadata models can be implemented in a relational database. As the nature of the thesis is design-oriented, a model has been developed and is illustrated throughout this report. The aim of the thesis is to show how it is possible to design new metadata models that combine relevant attributes of both context and content in order to develop new educational activities supported by location-based services.
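As an illustrative sketch only, the idea of appending user context to content metadata could be modelled as two record types joined per item; the field names below are assumptions and do not reproduce the thesis's ER diagram:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContentMetadata:
    """Standard technical metadata about a media item."""
    file_name: str
    mime_type: str
    size_bytes: int
    created: datetime

@dataclass
class ContextMetadata:
    """The setting in which the item was produced (user context)."""
    device: str
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    activity: Optional[str] = None   # e.g. the learning activity under way

@dataclass
class AnnotatedItem:
    """Content metadata appended with user context, as in the case study's proposal."""
    content: ContentMetadata
    context: ContextMetadata

item = AnnotatedItem(
    content=ContentMetadata("photo_0042.jpg", "image/jpeg", 182_044,
                            datetime(2006, 5, 12, 14, 3)),
    context=ContextMetadata(device="smartphone", latitude=56.88, longitude=14.81,
                            activity="field trip checkpoint"),
)
print(item.context.activity, "->", item.content.file_name)
```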
276

Standardization of the Instrumentation Hardware Abstraction Language in IRIG 106

Hamilton, John, Fernandes, Ronald, Darr, Timothy, Jones, Charles H., Faulstich, Ray 10 1900 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / Previously, we have presented an approach to achieving standards-based multi-vendor hardware configuration using the Instrumentation Hardware Abstraction Language (IHAL) and an associated Application Programming Interface (API) specification. In this paper we describe the current status of the IHAL standard. Since the first introduction of IHAL at ITC 2006, the language has undergone a number of additions and improvements. Currently, IHAL is nearing the end of a two-year standardization task with the Range Commanders Council Telemetry Group (RCC TG). This paper describes the standardization process in addition to providing an overview of the current state of IHAL. The standard consists of two key components: (1) the IHAL language and (2) the IHAL API specification.
277

Supporting Metadata Management for Data Curation: Problem and Promise

Westbrooks, Elaine L. 02 May 2008 (has links)
Breakout session from the Living the Future 7 Conference, April 30-May 3, 2008, University of Arizona Libraries, Tucson, AZ. / Research communities and libraries are on the verge of reaching a saturation point with regard to the number of published reports documenting, planning, and defining e-science, e-research, cyberscholarship, and data curation. Despite the volume of literature, little research is devoted to metadata maintenance and infrastructure. Libraries are poised to contribute metadata expertise to campus-wide data curation efforts; however, traditional and costly library methods of metadata creation and management must be replaced with cost-effective models that focus on the researcher's data collection and analysis process. In such a model, library experts collaborate with researchers in building tools for metadata creation and maintenance, which in turn contribute to the long-term sustainability, organization, and preservation of data. This presentation will introduce one of Cornell University Library's collaborative efforts curating the 2003 Northeast Blackout data. The goal of the project is to make the Blackout data accessible so that it can serve as a catalyst for innovative cross-disciplinary research that will produce a better scientific understanding of the technology and communications that failed during the Blackout. Library staff collaborated with three groups: engineering faculty at Cornell, government power experts, and power experts in the private sector. Finally, the core components of the metadata management methodology will be outlined and defined. Rights management emerged as the biggest challenge for the Blackout project.
278

Redesign of Library Workflows: Experimental Models for Electronic Resource Description

Calhoun, Karen January 2000 (has links)
This paper explores the potential for and progress of a gradual transition from a highly centralized model for cataloging to an iterative, collaborative, and broadly distributed model for electronic resource description. The author's purpose is to alert library managers to some experiments underway and to help them conceptualize new methods for defining, planning, and leading the e-resource description process under moderate to severe time and staffing constraints. To build a coherent library system for discovery and retrieval of networked resources, librarians and technologists are experimenting with team-based efforts and new workflows for metadata creation. In an emerging new service model for e-resource description, metadata can come from selectors, public service librarians, information technology staff, authors, vendors, publishers, and catalogers. Arguing that e-resource description demands a level of cross-functional collaboration and creative problem-solving that is often constrained by libraries' functional organizational structures, the author calls for reuniting functional groups into virtual teams that can integrate the e-resource description process, speed up operations, and provide better service. The paper includes an examination of the traditional division of labor for producing catalogs and bibliographies, a discussion of experiments that deploy a widely distributed e-resource description process (e.g., the use of CORC at Cornell and Brown), and an exploration of the results of a brief study of selected ARL libraries' e-resource discovery systems.
279

A Comparison of Web Resource Access Experiments: Planning for the New Millennium

Greenberg, Jane January 2000 (has links)
Over the last few years the bibliographic control community has initiated a series of experiments that aim to improve access to the growing number of valuable information resources being placed on the World Wide Web (hereafter referred to as Web resources). Much has been written about these experiments, mainly describing their implementation and features, and there has been some evaluative reporting, but there has been little comparison among these initiatives. The research reported in this paper addresses this limitation by comparing five leading experiments in this area. The objective was to identify characteristics of success and considerations for improvement in experiments providing access to Web resources via bibliographic control methods. The experiments examined include OCLC's CORC project; UKOLN's BIBLINK, ROADS, and DESIRE projects; and the NORDIC project. The research used a multi-case study methodology and a framework comprising five evaluation criteria: the experiment's organizational structure, reception, duration, application of computing technology, and use of human resources. This paper defines the Web resource access experimentation environment, reviews the study's research methodology, and highlights key findings. The paper concludes by initiating a strategic plan and by inviting conference participants to contribute their ideas and expertise to an effort that will improve experimental initiatives which ultimately aim to improve access to Web resources in the new Millennium.
280

Extending MARC for Bibliographic Control in the Web Environment: Challenges and Alternatives

McCallum, Sally January 2000 (has links)
This paper deconstructs the "MARC format" and similar newer tools like DC, XML, and RDF, separating structural issues from content-driven issues. Against that it examines the pressures from new types of digital resources, the responses to these pressures in format and content terms, and the transformations that may take place. The conflicting desires coming from users and librarians, the plethora of solutions to problems that constantly appear (some of which just might work), and the traditional access expectations are considered.

Footnotes

There are a large number of terms being used in the broader information community that often mean approximately the same thing but relate concepts to the different backgrounds of the players. For example, librarians are sometimes confused that metadata is something new and a replacement for either cataloging or MARC. Metadata is cataloging and not MARC. In this article terms based on library specialist terminology are used, with occasional use of alternative terms indicated below, depending on context. No difference in meaning is intended by the use of alternative terminology. The descriptions of the terms are indicative, not strict.

• cataloging data or cataloging content = metadata - used broadly, in this context, for all data (descriptive, administrative, and structural) that relates to the resources being described.
• content rules - rules for formulation of the data, including controlled lists and codes.
• data elements - the individual identifiable pieces of cataloging data (e.g., name, title, subtitle), including elements that are often called attributes or qualifiers (since generally this paper does not need to isolate data elements into subtypes).
• relationships - the semantics that relate data elements, e.g., name is author of title, title has subtitle.
• content rules - the rules for formulating data element content.
• structure = syntax - the physical arrangement of the parts of an entity.
• record - the bundle of information that describes a resource.
• format = DTD - a defined specification of structure and markup.
• markup = tag set = content designation - a system of symbols used to identify in some way the data that follows.

References cited in the footnotes:
• ANSI/NISO Z39.2, Record Interchange Format, and ISO 2709, Format for Data Interchange. The two standards are essentially identical in specification; ANSI/NISO has a few provisions where the ISO standard is not specific, but there is no conflict between the two standards.
• Functional Requirements for Bibliographic Records. IFLA Study Group on the Functional Requirements for the Bibliographic Record. Munich: Saur, 1998.
• ISO 8879, Standard Generalized Markup Language (SGML).
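To make the structure/content distinction concrete, here is a minimal sketch (not from the paper) that serializes one small set of data elements into two different structures, a simplified MARC-style tagged record and a simplified Dublin Core XML record; the record layouts are illustrative assumptions rather than conforming implementations:

```python
# Cataloging content: data elements, independent of any record structure.
data_elements = {
    "creator": "McCallum, Sally",
    "title": "Extending MARC for Bibliographic Control in the Web Environment",
}

def as_marc_like(elements):
    """Render the elements in a simplified MARC-style tagged structure."""
    return [
        f"100 1# $a {elements['creator']}",   # main entry, personal name
        f"245 10 $a {elements['title']}",     # title statement
    ]

def as_dublin_core_xml(elements):
    """Render the same elements as simplified Dublin Core XML."""
    return (
        "<record xmlns:dc=\"http://purl.org/dc/elements/1.1/\">\n"
        f"  <dc:creator>{elements['creator']}</dc:creator>\n"
        f"  <dc:title>{elements['title']}</dc:title>\n"
        "</record>"
    )

# Same content, two structures: the data elements do not change, only the syntax does.
print("\n".join(as_marc_like(data_elements)))
print(as_dublin_core_xml(data_elements))
```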
