121

Automatic Language Identification for Metadata Records: Measuring the Effectiveness of Various Approaches

Knudson, Ryan Charles 05 1900
Automatic language identification has been applied to short texts such as queries in information retrieval, but it has not yet been applied to metadata records. Applying this technology to metadata records, particularly their title elements, would enable creators of metadata records to supply a value for the language element, which is often left blank due to a lack of linguistic expertise. It would also enable the addition of a language value to existing metadata records that currently lack one. Titles are a challenging case for language identification mainly because of their shortness, a factor that makes accurate identification more difficult. This study implemented four proven approaches to language identification, as well as one open-source approach, on a collection of multilingual titles of books and movies. Of the five approaches considered, a reduced N-gram frequency profile and distance measure approach outperformed all others, accurately identifying over 83% of all titles in the collection. Future work will offer this technology to curators of digital collections.
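The winning approach resembles the classic rank-order N-gram profiling technique of Cavnar and Trenkle. The Python sketch below illustrates that general idea only; it is not the study's implementation, and the training texts are toy stand-ins for real corpus-derived profiles.

```python
from collections import Counter

def ngram_profile(text, n_max=3, top_k=300):
    """Build a rank-ordered character n-gram profile (Cavnar-Trenkle style)."""
    text = " " + text.lower() + " "
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    ranked = [gram for gram, _ in counts.most_common(top_k)]
    return {gram: rank for rank, gram in enumerate(ranked)}

def out_of_place_distance(doc_profile, lang_profile):
    """Sum of rank displacements; n-grams unseen in the language profile get a maximum penalty."""
    max_penalty = len(lang_profile)
    return sum(abs(rank - lang_profile.get(gram, max_penalty))
               for gram, rank in doc_profile.items())

def identify(title, lang_profiles):
    doc = ngram_profile(title)
    return min(lang_profiles, key=lambda lang: out_of_place_distance(doc, lang_profiles[lang]))

# Illustrative training texts; real profiles are built from large corpora.
train = {"en": "the library of the university", "de": "die bibliothek der universitaet"}
profiles = {lang: text_ for lang, text_ in ((k, ngram_profile(v)) for k, v in train.items())}
print(identify("Geschichte der deutschen Literatur", profiles))  # -> 'de' (toy example)
```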
122

Exploration of RDA-Based MARC21 Subject Metadata in Worldcat Database and Its Readiness to Support Linked Data Functionality

Zavalin, Vyacheslav I. 08 1900
The subject of an information entity is one of the fundamental concepts in the field of information science. The subject of a document represents its intellectual potential -- the 'aboutness' of the document. Traditionally, subject (along with title and author) is one of the three major ways to access information, so subject metadata plays a central role in this process, and that role is constantly growing. Previous research concluded that the larger a bibliographic database is, the richer the subject vocabularies and classification schemes needed to support information discovery. Further, a high proportion of information objects are unretrievable without subject headings in metadata records. This exploratory study analyzes the subject metadata in MARC 21 bibliographic records created in 2020 and develops an understanding of the level and patterns of 'aboutness' representation in those records. The study also examines how these records apply the recent RDA and MARC 21 guidelines and features intended to support functionality in a Linked Data environment. Methods of social network analysis were applied, along with content analysis, to answer the research questions of this study. Suggestions for future research, implications for education, and practical recommendations for library metadata creation and management are discussed.
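For readers unfamiliar with MARC 21 subject metadata, here is a minimal sketch of tallying subject access points with the pymarc library; the input file name is hypothetical, and a simple tally like this is only a first step toward the content and network analyses the study performed.

```python
from collections import Counter
from pymarc import MARCReader

subject_counts = Counter()
with open("records.mrc", "rb") as fh:  # hypothetical input file of MARC 21 records
    for record in MARCReader(fh):
        if record is None:  # MARCReader yields None for unreadable records
            continue
        # 6XX fields carry subject access points; 650 is topical terms,
        # 651 geographic names, 655 genre/form terms.
        for field in record.get_fields("650", "651", "655"):
            for term in field.get_subfields("a"):
                subject_counts[term.strip(" .")] += 1

for term, count in subject_counts.most_common(10):
    print(f"{count:5d}  {term}")
```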
123

Use and Evaluation of Component-Based Metadata in a Federated Infrastructure for Language Resources: The Example of CMDI

Eckart, Thomas 29 July 2016
This thesis examines the use of the Component Metadata Infrastructure (CMDI) within the federated CLARIN infrastructure, identifying a number of concrete problem cases. To develop corresponding solution strategies, various techniques are adapted and applied to the quality analysis of metadata and to the optimization of its use in a federated environment. Specifically, this concerns above all the adoption of modelling strategies from the Linked Data community, the adoption of principles and quality metrics from object-oriented programming for CMD metadata components, and the use of centrality measures from graph and network analysis to assess the cohesion of the entire metadata federation. Throughout, the thesis foregrounds the analysis of the schemas and schema components in use, as well as the examination of the instance vocabularies employed, across the interplay of all participating centres.
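A minimal sketch of the centrality idea, using networkx on a toy centre-to-component usage graph; the centre and component names below are invented, not real CMDI components.

```python
import networkx as nx

# Toy bipartite graph: CLARIN-style centres linked to the metadata
# components they use (all names illustrative).
G = nx.Graph()
usage = {
    "centre_A": ["cmd:Title", "cmd:Actor", "cmd:Licence"],
    "centre_B": ["cmd:Title", "cmd:Licence"],
    "centre_C": ["cmd:Title", "cmd:MediaFile"],
}
for centre, components in usage.items():
    G.add_edges_from((centre, comp) for comp in components)

# Betweenness centrality indicates which components hold the federation
# together; components shared across many centres score high.
for comp, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1])[:3]:
    print(f"{score:.3f}  {comp}")
```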
124

MINING IRIG-106 CHAPTER 10 AND HDF-5 DATA

Lockard, Michael T., Rajagopalan, R., Garling, James A. 10 2006
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Rapid access to ever-increasing amounts of test data is becoming a problem. The authors have developed a data-mining methodology to catalog test files, search metadata attributes to identify test data files of interest, and query test data measurements using a web-based engine that produces results in seconds. Generated graphs allow the user to visualize an overview of the entire test for a selected set of measurements, with areas highlighted where the query conditions were satisfied. The user can then zoom into areas of interest and export selected information.
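A minimal sketch of the catalog-then-query pattern, using an in-memory SQLite table with a hypothetical schema; the paper's actual IRIG-106 Chapter 10 / HDF5 attribute model is richer than this.

```python
import sqlite3

# Hypothetical catalog of test-file metadata; column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_files (
    path TEXT, test_article TEXT, test_date TEXT, measurement TEXT)""")
conn.executemany(
    "INSERT INTO test_files VALUES (?, ?, ?, ?)",
    [("run1.ch10", "aircraft_7", "2006-05-01", "engine_temp"),
     ("run2.ch10", "aircraft_7", "2006-05-02", "vibration")])

# Attribute search narrows many files down to the few of interest
# before any bulk measurement data is touched.
rows = conn.execute(
    "SELECT path FROM test_files WHERE test_article = ? AND measurement = ?",
    ("aircraft_7", "engine_temp")).fetchall()
print(rows)  # [('run1.ch10',)]
```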
125

iNET System Operational Flows

Grace, Thomas B., Abbott, Ben A., Moodie, Myron L. 10 2010
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / The integrated Network-Enhanced Telemetry (iNET) project is transitioning from standards development to the deployment of systems. In fielding a Telemetry Network System (TmNS) demonstration system, one must choose and integrate technological building blocks from the suite of standards to implement new test capabilities. This paper describes the operation of a TmNS and identifies the management, configuration, control, acquisition, and distribution of information and operational flows. These items are discussed using a notional system to walk through the mechanisms identified by the iNET standards. Note that at the time of this paper, the efforts discussed are only at the very beginning of the design process and will likely evolve as the design matures.
126

IHAL and Web Service Interfaces to Vendor Configuration Engines

Hamilton, John, Darr, Timothy, Fernandes, Ronald, Sulewski, Joe, Jones, Charles 10 2010
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / In this paper, we present an approach to achieving standards-based multi-vendor hardware configuration. This approach uses the Instrumentation Hardware Abstraction Language (IHAL) and a standardized web service Application Programming Interface (API) specification to allow any Instrumentation Support System (ISS) to control instrumentation hardware in a vendor-neutral way without requiring non-disclosure agreements or knowledge of proprietary information. Additionally, we describe a real-world implementation of this approach using KBSI's InstrumentMap application and an implementation of the web service API by L-3 Communications Telemetry East.
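A hedged sketch of what a vendor-neutral configuration call might look like; the endpoint path, payload shape, and base URL below are invented for illustration and are not the paper's published API specification.

```python
import json
import urllib.request

# Hypothetical REST endpoint exposed by a vendor's configuration engine.
BASE = "http://vendor-engine.example/ihal/v1"

def set_parameter(device_id, parameter, value):
    """Ask a vendor's configuration engine to apply one IHAL-described setting."""
    body = json.dumps({"parameter": parameter, "value": value}).encode()
    req = urllib.request.Request(
        f"{BASE}/devices/{device_id}/configuration",
        data=body, headers={"Content-Type": "application/json"}, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The ISS stays vendor-neutral: it speaks in IHAL terms, and each vendor's
# engine translates them to proprietary settings behind the service boundary.
```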
127

A Model-Based Methodology for Managing T&E Metadata

Hamilton, John, Fernandes, Ronald, Darr, Timothy, Graul, Michael, Jones, Charles, Weisenseel, Annette 10 2009
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / In this paper, we present a methodology for managing diverse sources of T&E metadata. Central to this methodology is the development of a T&E Metadata Reference Model, which serves as the standard model for T&E metadata types, their proper names, and their relationships to each other. We describe how this reference model can be mapped to a range's own T&E data and process models to provide a standardized view into each organization's custom metadata sources and procedures. Finally, we present an architecture that uses these models and mappings to support cross-system metadata management tasks and makes these capabilities accessible across the network through a single portal interface.
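A minimal sketch of the mapping idea: re-keying one range's custom metadata field names onto reference-model names. All names below are invented for illustration; the actual T&E Metadata Reference Model terms are defined in the paper.

```python
# Hypothetical mapping from a range's custom field names to
# reference-model names.
RANGE_TO_REFERENCE = {
    "msmt_name": "Measurement.Name",
    "xducer_sn": "Transducer.SerialNumber",
    "cal_date":  "Calibration.Date",
}

def to_reference_view(range_record: dict) -> dict:
    """Re-key a range-specific record into the standardized reference-model view."""
    return {RANGE_TO_REFERENCE.get(k, k): v for k, v in range_record.items()}

print(to_reference_view({"msmt_name": "engine_temp", "cal_date": "2009-06-01"}))
```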
128

METADATA MODELING FOR AIRBORNE DATA ACQUISITION SYSTEMS

Kupferschmidt, Benjamin, Pesciotta, Eric 10 2007
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Many engineers express frustration with the multitude of vendor-specific tools required to describe measurements and configure data acquisition systems. In general, tools are incompatible between vendors, forcing the engineer to enter the same or similar data multiple times. With the emergence of XML technologies, user-centric data modeling for the flight test community is now possible. With this new class of technology, a vendor-neutral, standard language to define measurements and configure systems may finally be realized. However, the allure of such a universal language can easily become too abstract, making it untenable for hardware configuration and resulting in a low vendor adoption rate. Conversely, a language that caters too much to vendor-specific configuration will defeat its purpose. Achieving this careful balance is not trivial, but it is possible. Doing so will produce a useful standard without putting it out of reach of equipment vendors. This paper discusses the concept, merits, and possible solutions for a standard measurement metadata model. Practical solutions using XML and related technologies are discussed.
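As a hedged illustration of the kind of vendor-neutral XML measurement description the paper argues for, here is a small example parsed with Python's standard library; the element and attribute names are invented for this sketch, not drawn from any published standard.

```python
import xml.etree.ElementTree as ET

# Illustrative vendor-neutral measurement description (names invented).
doc = """
<measurement name="engine_temp">
  <units>degC</units>
  <sampleRate>100</sampleRate>
  <source channel="A3" device="PCM-encoder-1"/>
</measurement>
"""
m = ET.fromstring(doc)
# Any vendor's tool could read the same description and map it to
# its own hardware settings.
print(m.get("name"), m.findtext("units"), m.find("source").get("channel"))
```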
129

OntoStudyEdit

Uciteli, Alexandr, Herre, Heinrich 10 February 2016
Background: The specification of metadata in clinical and epidemiological study projects consumes significant effort. The validity and quality of the collected data depend heavily on a precise and semantically correct representation of their metadata. In the various research organizations that plan and coordinate studies, the required metadata are specified differently, depending on many conditions, e.g., on the study management software used. The latter does not always meet the needs of a particular research organization, e.g., with respect to the relevant metadata attributes and structuring possibilities. Methods: The objective of the research set forth in this paper is the development of a new approach for the ontology-based representation and management of metadata. The basic features of this approach are demonstrated by the software tool OntoStudyEdit (OSE). The OSE is designed and developed according to the three-ontology method. This method for developing software is based on the interactions of three different kinds of ontologies: a task ontology, a domain ontology (DO) and a top-level ontology. Results: The OSE can be easily adapted to different requirements, and it supports an ontologically founded representation and efficient management of metadata. Metadata specifications can be imported from various sources; they can be edited with the OSE, and they can be exported to several formats, which are used, e.g., by different study management software. Conclusions: Advantages of this approach are the adaptability of the OSE by integrating suitable domain ontologies, the ontological specification of mappings between the import/export formats and the DO, the specification of study metadata in a uniform manner and its reuse in different research projects, and intuitive data entry for non-expert users.
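A minimal sketch of ontologically represented study metadata using rdflib; the class and property names below are invented for illustration and are not the OSE's actual task, domain, or top-level ontologies.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and terms for a study-metadata domain ontology.
EX = Namespace("http://example.org/study#")
g = Graph()
g.bind("ex", EX)

# One metadata item described as an ontology individual with typed relations.
item = EX.bloodPressureItem
g.add((item, RDF.type, EX.DataItem))
g.add((item, EX.hasLabel, Literal("Systolic blood pressure")))
g.add((item, EX.hasUnit, Literal("mmHg")))
g.add((item, EX.partOfForm, EX.baselineVisitForm))

# A uniform graph representation like this can be exported to the
# differing formats expected by study management software.
print(g.serialize(format="turtle"))
```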
130

Evaluation and improvement of semantically-enhanced tagging system

Alsharif, Majdah Hussain January 2013
The Social Web, or 'Web 2.0', is focused on interaction and collaboration between the users of web sites. It is credited with the existence of tagging systems, among other things such as blogs and wikis. Tagging systems like YouTube and Flickr offer their users simplicity and freedom in creating and sharing their own content, and folksonomy is therefore a very active research area in which many improvements have been proposed to overcome existing disadvantages such as lack of semantic meaning, ambiguity, and inconsistency. TE is a tagging system that proposes solutions to the problems of multilingualism, lack of semantic meaning and shorthand writing (which is very common on the social web) with the aid of semantic and social resources. The current research presents an addition to the TE system in the form of an embedded stemming component that addresses the problem of differing lexical forms. Prior to this, the TE system had to be explored thoroughly and its efficiency determined in order to decide on the practicality of embedding any additional components as performance enhancements. This involved analysing the algorithm's efficiency using an analytical approach to determine its time and space complexity. The TE algorithm has a time growth rate of O(N²), which is polynomial, so the algorithm is considered efficient; nonetheless, recommended modifications like patch SQL execution could improve this. Regarding space complexity, the number of tags per photo represents the problem size: as it grows, the required memory space increases linearly. Based on the findings above, the TE system was re-implemented on Flickr instead of YouTube because of a recent YouTube restriction; this is of greater benefit in a multilingual tagging system, since the language barrier is immaterial in this case. The re-implementation was achieved using 'flickrj' (a Java interface to the Flickr APIs). Next, the stemming component was added to perform tag normalisation prior to querying the ontologies. The component was embedded using the Java encoding of the Porter2 stemmer, which supports many languages including Italian. The impact of the stemming component on the performance of the TE system, in terms of the size of the index table and the number of retrieved results, was investigated in an experiment that showed a 48% reduction in the size of the index table. This also means that search queries have fewer system tags to compare against the search keywords, which can speed up the search. Furthermore, the experiment ran similar search trials on two versions of the TE system, one without the stemming component and one with it, and found that the latter produced more results when working with valid words and valid stems. Embedding the stemming component in the new TE system lessened the storage overhead of the generated system tags by reducing the size of the index table, making the system well suited to many applications such as text classification, summarization, email filtering, and machine translation.
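A minimal sketch of the stemming step using NLTK's Snowball stemmers (Snowball English is the Porter2 algorithm, and an Italian stemmer is also available); the tags below are illustrative, not the study's data, and the original system was implemented in Java rather than Python.

```python
from nltk.stem.snowball import SnowballStemmer  # Snowball English = Porter2

# Illustrative tags; the study worked with real Flickr tags in several languages.
english_tags = ["running", "runs", "runner"]
italian_tags = ["corsa", "correre"]

en = SnowballStemmer("english")
it = SnowballStemmer("italian")

# Normalising tags to stems before indexing collapses lexical variants,
# which is the mechanism behind the reported 48% index-table reduction.
index_terms = {en.stem(t) for t in english_tags} | {it.stem(t) for t in italian_tags}
print(index_terms)  # five raw tags collapse to fewer index entries
```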
