  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Metadata Quality and the Use of Hierarchical Schemes to Determine Meta Keywords: An Exploration

Emily S. Fidelman, 12 April 2006
This study explores the impact of vocabulary scheme arrangement on the quality of author-generated metadata, specifically the specificity and frequency of the vocabulary terms chosen from schemes to describe websites. By evaluating vocabulary assigned using hierarchical and flat schemes, and by comparing these evaluations, the study seeks to isolate the arrangement of the scheme from other variables, such as the skill level and intentions of metadata generators, which have been the focus of previous research into the viability of author-generated metadata. The results suggest a relationship between term specificity and scheme arrangement, and possible relationships between term frequency and scheme arrangement; the study therefore submits that non-professional status, lack of skill, and intent to misrepresent web page content via metadata may not be the sole factors determining the quality of author-generated metadata. New methods for researching metadata quality are tested and their validity discussed.
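The abstract does not spell out how term specificity is measured; as a rough, hypothetical sketch, depth within a hierarchical scheme is one way it could be operationalised. The scheme, terms, and measure below are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: scheme, terms, and the depth-based specificity
# measure are invented for illustration, not taken from the study.

HIERARCHICAL_SCHEME = {
    "Arts": {
        "Music": {
            "Jazz": {},
        },
    },
}
FLAT_SCHEME = ["Arts", "Music", "Jazz"]  # same terms, no structure

def term_depth(scheme, term, depth=1):
    """Depth of a term in a hierarchical scheme (1 = top level)."""
    for name, children in scheme.items():
        if name == term:
            return depth
        found = term_depth(children, term, depth + 1)
        if found is not None:
            return found
    return None

# One crude way to operationalise specificity: deeper terms are more
# specific. In the flat scheme every term is equally unranked, so
# depth can only be read off the hierarchy.
for term in ["Arts", "Jazz"]:
    print(term, "depth:", term_depth(HIERARCHICAL_SCHEME, term))
```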
12

Understanding Metadata

National Information Standards Organization (NISO), January 2004
NISO, a non-profit association accredited by the American National Standards Institute (ANSI), identifies, develops, maintains, and publishes technical standards to manage information in our changing and ever-more-digital environment. NISO standards apply both traditional and new technologies to the full range of information-related needs, including retrieval, re-purposing, storage, metadata, and preservation. NISO standards and information about NISO's activities and membership are featured on the NISO website <http://www.niso.org>.
13

Metadata Quality for Digital Libraries

Chan, Chu-hsiang, January 2008
The quality of metadata in a digital library is an important factor in ensuring access for end-users. Several studies have tried to define quality frameworks and assess metadata, but there is little user feedback about these in the literature. As collections grow in size, maintaining quality through manual methods becomes increasingly difficult for repository managers. This research presents the design and implementation of a web-based metadata analysis tool for digital repositories. The tool is built as an extension to the Greenstone3 digital library software. We present examples of the tool in use on real-world data and provide feedback from repository managers. The evidence from our studies shows that automated quality analysis tools are a useful and valued service for digital libraries.
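The abstract does not detail the tool's checks; the following is a minimal sketch of one kind of automated quality analysis such a tool might run, a field-completeness report. The field names are standard Dublin Core; the records and the flagging threshold are invented.

```python
# Minimal sketch of a field-completeness check over Dublin Core records.
# Field names are standard Dublin Core; records and the 50% flagging
# threshold are invented.
from collections import Counter

DC_FIELDS = ["dc.title", "dc.creator", "dc.date", "dc.subject", "dc.description"]

records = [
    {"dc.title": "Thesis A", "dc.creator": "Chan, C.", "dc.date": "2008"},
    {"dc.title": "Thesis B", "dc.creator": "", "dc.subject": "metadata"},
]

def completeness_report(records):
    """Fraction of records with a non-empty value for each field."""
    counts = Counter()
    for rec in records:
        for field in DC_FIELDS:
            if rec.get(field, "").strip():
                counts[field] += 1
    return {field: counts[field] / len(records) for field in DC_FIELDS}

for field, score in completeness_report(records).items():
    flag = "  <-- low" if score < 0.5 else ""
    print(f"{field}: {score:.0%}{flag}")
```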
14

Meta-Metadata: An Information Semantic Language and Software Architecture for Collection Visualization Application

Mathur, Abhinav, December 2009
Information collection and discovery tasks involve aggregation and manipulation of information resources. An information resource is a location from which a human gathers data to contribute to his/her understanding of something significant. Repositories of information resources include the Google search engine, the ACM Digital Library, Wikipedia, Flickr, and IMDB. Information discovery tasks involve having new ideas in contexts of information collecting. The information one needs to collect is large and diverse and hard to keep track of; this heterogeneity and scale also make it difficult to write software to support information collection and discovery tasks. Metadata is a structured means for describing information resources; it forms the basis of digital libraries and search engines. Since metadata is often called "data about data," we define meta-metadata as a formal, XML-based language for describing metadata. We consider the lifecycle of metadata in information collection and discovery tasks and develop a meta-metadata architecture that deals with the data structures for representing metadata inside programs, extraction from information resources, rules for presentation to users, and logic that defines how an application needs to operate on metadata. Semantic actions for an information resource collection are steps taken to generate representative objects associated with metadata, including the formation of iconographic image and text surrogates. The meta-metadata language serves as a layer of abstraction between information resources, power users, and application developers. A power user can enhance an existing collection visualization application by authoring meta-metadata for a new information resource without modifying the application source code. The architecture provides a set of interfaces for semantic actions which different information discovery and visualization applications can implement according to their own custom requirements. Application developers can modify the implementation of these semantic actions to change the behavior of their application, regardless of the information resource. We have used our architecture in combinFormation, an information discovery and collection visualization application, and validated it through a user study.
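A hypothetical fragment in the spirit of the meta-metadata idea follows: an XML description telling an application how to extract and present metadata for one resource type. The element and attribute names are invented for illustration; they are not the thesis's actual language.

```python
# Invented XML fragment in the spirit of meta-metadata: it tells an
# application how to extract and present metadata for one resource type.
# Element/attribute names are illustrative, not the thesis's language.
import xml.etree.ElementTree as ET

MMD = """
<meta_metadata name="acm_portal">
  <selector url_prefix="https://dl.acm.org/doi/"/>
  <field name="title"   xpath="//h1[@class='citation__title']"   presentation="heading"/>
  <field name="authors" xpath="//span[@class='loa__author-name']" presentation="list"/>
</meta_metadata>
"""

root = ET.fromstring(MMD)
print("resource type:", root.get("name"))
print("applies to URLs under:", root.find("selector").get("url_prefix"))
for field in root.findall("field"):
    # A real application would apply each XPath to the fetched page,
    # then render the extracted value according to its presentation rule.
    print(f"  extract {field.get('name')!r} via {field.get('xpath')!r} "
          f"-> present as {field.get('presentation')}")
```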
15

Obsahová metadata vytvářená uživateli a informačními profesionály: srovnávací analýza / Content metadata created by users and information professionals: a comparative analysis

Světelská, Hana, January 2020
This master's thesis focuses on professional and user-generated metadata, with an emphasis on content metadata. It describes metadata in general, including their important characteristics, functions, and types, and defines related terms; professional content metadata and user-generated metadata are then described in more detail. The analytical part compares professional and user-generated content metadata, using a dataset of metadata statements on fiction books collected from dozens of databases. It also aims to determine whether user-generated metadata add information beyond the metadata created by professionals.
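As a sketch of the underlying comparison, under entirely invented data: given subject terms assigned by professionals and by users for the same book, simple set operations expose the overlap and whatever the users add.

```python
# Invented data: subject terms for the same novel as assigned by
# professionals and by users. Set operations make the comparison concrete.
professional = {"fantasy", "epic", "quest narrative"}
user_generated = {"fantasy", "dragons", "found family", "slow burn"}

overlap = professional & user_generated
added_by_users = user_generated - professional
jaccard = len(overlap) / len(professional | user_generated)

print("shared terms:", overlap)
print("added by users:", added_by_users)  # the "additional information" question
print(f"Jaccard similarity: {jaccard:.2f}")
```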
16

Exploring multi-granular documentation strategies for the representation, discovery and use of geographic information

Batcheller, James Kenneth, January 2009
This thesis explores how digital representations of geography and Geographic Information (GI) may be described, and how these descriptions facilitate the use of the resources they depict. More specifically, it critically examines existing geospatial documentation practices and aims to identify opportunities for refinement therein, whether used to signpost the data assets documented, to manage and maintain information assets, or to assist in resource interpretation and discrimination. Documentation of GI can therefore facilitate its utilisation; it can reasonably be expected that by refining documentation practices, GI holds the potential for being better exploited. The underpinning theme connecting the individual papers of the thesis is one of multi-granular documentation. GI may be recorded at varying degrees of granularity, and yet traditional documentation efforts have predominantly focussed on a solitary level (that of the geospatial data layer). Developing documentation practices to account for other granularities permits the description of GI at different levels of detail and can further assist in realising its potential through better discovery, interpretation and use. One of the aims of the current work is to establish the merit of such multi-granular practices. Over the course of four research papers and a short research article, proprietary as well as open source software approaches are accordingly presented; these provide proof-of-concept and conceptual solutions that aim to enhance GI utilisation through improved documentation practices. Presented in the context of an existing body of research, the proposed approaches focus on the technological infrastructure supporting data discovery, the automation of documentation processes and the implications of describing geospatial information resources of varying granularity. Each paper successively contributes to the notion that geospatial resources are potentially better exploited when documentation practices account for the multi-granular aspects of GI, and the varying ways in which such documentation may be used. In establishing the merit of multi-granular documentation, it is nevertheless recognised that instituting a comprehensive documentation strategy at several granularities may be unrealistic for some geospatial applications: pragmatically, the level of effort required would be excessive, making universal adoption impractical. Considering, however, the ever-expanding volumes of geospatial data gathered and the demand for ways of managing and maintaining the usefulness of potentially unwieldy repositories, improved documentation practices are required. A system of hierarchical documentation, of self-documenting information, would provide for information discovery and retrieval from such expanding resource pools at multiple granularities, improve the accessibility of GI and, ultimately, its utilisation.
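A minimal sketch of what multi-granular documentation could look like in practice, with an invented structure and field names: descriptions attached above and below the data-layer level, so that discovery can operate at each granularity.

```python
# Invented structure and field names: documentation attached at three
# granularities (series, layer, feature) instead of the layer alone.
catalogue = {
    "level": "series",
    "title": "National hydrology datasets",
    "children": [{
        "level": "layer",
        "title": "Rivers 1:50 000",
        "children": [{
            "level": "feature",
            "title": "River Tweed centreline",
            "children": [],
        }],
    }],
}

def search(node, term, path=()):
    """Walk every granularity, not just the layer level."""
    path = path + (node["title"],)
    if term.lower() in node["title"].lower():
        print(f"[{node['level']}] " + " > ".join(path))
    for child in node["children"]:
        search(child, term, path)

search(catalogue, "tweed")  # hits at feature level, invisible to layer-only search
```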
17

Authority Control and Digital Commons: Why Bother?

Edwards, Laura, 01 June 2018
Authority control provided by Digital Commons is basic. Other than author names, Digital Commons does not provide much in the way of authority control for other fields, such as faculty advisor/mentor names or department names. Standardizing name fields has several benefits, not least of which is the increased precision of the reports institutions can create to highlight the impact of faculty mentorship activities and the scholarship output of departmental entities on campus. Institutions that want to ensure the consistency of names across submissions to their Digital Commons repository, especially for self-submitted items, must develop their own methods for maintaining authority control. The presenter, a librarian wearing many hats in her position at Eastern Kentucky University Libraries, will talk about strategies she has developed for streamlining authority control work in EKU Libraries' Digital Commons repository, Encompass Digital Archive.
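A sketch of the kind of local authority-control pass described here, with an invented authority file: name variants in self-submitted records are mapped to a preferred form, and unrecognised names fall through for manual review.

```python
# Invented authority file: variant advisor-name strings are mapped to a
# single preferred form; unrecognised names pass through for review.
AUTHORITY = {
    "smith, jane": "Smith, Jane A.",
    "smith, jane a.": "Smith, Jane A.",
    "j. smith": "Smith, Jane A.",
}

def normalize_advisor(raw_name):
    key = " ".join(raw_name.lower().split())  # collapse case and spacing
    return AUTHORITY.get(key, raw_name)

for name in ["Smith,  Jane", "J. Smith", "Doe, John"]:
    print(f"{name!r} -> {normalize_advisor(name)!r}")
```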
18

A Futurist Vision for Instrumentation

Jones, Charles H., October 2011
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / For those of us who are Trekkies as well as techies, having Geordi's computer that can answer detailed system status questions in real time is something of a holy grail. Indeed, who doesn't like the idea of being able to ask a question and almost instantaneously get an answer? Fortunately, this basic functionality of being able to query an instrumentation system and have it return any level of detail about the system is within reach. Borrowing from another science fiction show, we might say: "We have the technology ..." The ability to network complex systems together - even to the point of having devices autonomously link into the system - is commonplace. Devices that can report their status, test themselves for failures, and self-calibrate are also common. Certainly software interfaces into complex systems, including the graphics for hierarchical 3-D displays, can be created. Unfortunately, we do not currently have all of the different technologies needed for a fully automated instrumentation support system integrated into our particular domain. This paper looks at why we don't have this now and where we are in terms of getting there. This includes discussions of networking, metadata, smart instrumentation, standardization, the role manufacturers need to play, and a little historical perspective.
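As a sketch of the "ask the system a question" idea, assuming invented device names and status values: devices report their own status into a hierarchy that can then be queried at any level of detail.

```python
# Invented device names and status values: each device reports into a
# hierarchy that can be queried at any level of detail.
system = {
    "name": "telemetry-system", "status": "ok",
    "children": [
        {"name": "pcm-encoder", "status": "ok", "children": []},
        {"name": "sensor-bus", "status": "degraded", "children": [
            {"name": "accel-03", "status": "fault: not calibrated", "children": []},
        ]},
    ],
}

def walk(node, path=""):
    """Yield (path, status) for every device in the hierarchy."""
    path = f"{path}/{node['name']}"
    yield path, node["status"]
    for child in node["children"]:
        yield from walk(child, path)

# "What is failing right now?"
for path, status in walk(system):
    if status != "ok":
        print(path, "->", status)
```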
19

A Metadata Approach to Preservation of Digital Resources: The University of North Texas Libraries' Experience

Alemneh, Daniel Gelaw; Hastings, Samantha Kelly; Hartman, Cathy N., 08 1900
Preserving long-term access to digital information resources is one of the key challenges facing libraries and information centers today. The University of North Texas (UNT) Libraries have entered into partnership agreements with federal and state agencies to ensure permanent storage of, and public access to, a variety of government information sources. As digital resource preservation encompasses a wide variety of interrelated activities, the UNT Libraries are taking a phased approach to ensuring long-term access to their digital resources. Formulating preservation policy and creating preservation metadata for electronic files and digital collections are among the most important steps. This paper discusses the issues related to digital resource preservation and demonstrates the role of preservation metadata in facilitating preservation activities in general. In particular, it describes the efforts being made by the UNT Libraries to ensure the long-term access and preservation of various digital information resources.
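A minimal sketch of one core preservation-metadata activity the paper alludes to: recording fixity information so that later audits can detect silent corruption. The field names loosely follow common preservation practice; the record structure is an assumption, not UNT's actual schema.

```python
# Field names loosely follow common preservation practice; the record
# structure is an assumption, not UNT's actual schema.
import hashlib
from datetime import date

def make_preservation_record(path):
    """Record fixity information for one file at ingest time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "identifier": path,
        "fixity_algorithm": "SHA-256",
        "fixity_value": digest,
        "date_recorded": date.today().isoformat(),
    }

def verify_fixity(record):
    """A later audit recomputes the digest to detect silent corruption."""
    with open(record["identifier"], "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["fixity_value"]
```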
20

Alt er metadata: Bruk av metadata i et integrert brukersystem / Everything is metadata: The use of metadata in an integrated user system

Oppedal, Anita Iren, January 2000
This master's thesis focuses on how metadata are used in an integrated user system in a company.

In an information space, information resources from different media are integrated, and a common "link" is needed to support better retrieval of, and access to, information in that space. The problem is often that the different media use different formats to describe their information resources, which hampers interoperability between them. If the different media can use the same metadata format to describe their information resources, interoperability improves.

The Dublin Core Metadata Element Set (DC) is a format developed for publishing information resources via intranets and the Internet. DC is the common link in the virtual information space on which this thesis is based.

Central to the thesis is an assessment of how the indexing needs of Adresseavisen can be met in DC for information resources such as articles, images/illustrations, and film. A core format for Adresseavisen's information resources, with media-dependent variations, is proposed; these are information resources whose context of use is the newspaper. The proposal accommodates the results of the user study, together with information about, and observation of, how the existing indexing formats are used.

The study produced the following findings:

• Most users choose free-text search over metadata search
• Training affects the use of metadata
• Work tasks and information needs influence the use of metadata
• Experience with the database system and frequency of searching in the database can influence the use of metadata
• Some metadata elements are better suited to searching than others

The study also yields recommendations that can be useful when naming metadata elements:

• Abbreviations in metadata names should be avoided, to make the names more self-explanatory
• Ambiguous terms in metadata names make the intended content less intuitive to grasp

The results are presented in bar charts and tables, methods that can also be used for qualitative analyses.
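A sketch of the proposed core format in data form, assuming invented media-dependent extensions: the elements in CORE are genuine Dublin Core terms, while the extensions stand in for the media-dependent variations the thesis proposes.

```python
# The elements in CORE are genuine Dublin Core terms; the extensions are
# invented stand-ins for the media-dependent variations proposed.
CORE = ["title", "creator", "date", "subject", "description", "type"]

MEDIA_EXTENSIONS = {
    "article":      ["section", "edition"],
    "illustration": ["photographer", "caption"],
    "film":         ["duration", "clip_source"],
}

def fields_for(media_type):
    """Shared DC core first, then the media-dependent variations."""
    return CORE + MEDIA_EXTENSIONS.get(media_type, [])

print(fields_for("film"))
# ['title', 'creator', 'date', 'subject', 'description', 'type',
#  'duration', 'clip_source']
```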
