About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

The development of an effective co-training framework for adapting metasearch engine rankers

Tan, Qingzhao. January 2004 (has links)
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004. / Includes bibliographical references (leaves 65-70). Also available in electronic version. Access restricted to campus users.
62

A framework for capturing, querying, and restructuring metadata in XML data

Jin, Hao, January 2005 (has links) (PDF)
Thesis (Ph.D.)--Washington State University. / Includes bibliographical references.
63

A Metadata Inference Framework to Provide Operational Information Support for Fault Detection and Diagnosis Applications in Secondary HVAC Systems

Gao, Jingkun 01 December 2017 (has links)
As the cost of hardware decreases and software technology advances, building automation systems (BAS) have been widely deployed in new buildings or as part of retrofits to replace old control systems. Though they are becoming more prevalent and promise important benefits to society, such as improved energy efficiency and occupant comfort, many of their benefits remain unreachable. Research suggests that this is because of the heterogeneous, fragmented and non-standardized nature of existing BASs. One of the purported benefits of these systems is the ability to reduce energy consumption through the application of automated approaches such as fault detection and diagnosis (FDD) algorithms. Savings of up to 0.16 quadrillion BTUs per year could be obtained in the US alone through the use of these approaches, which are just software applications running on BAS hardware. However, deploying these applications in buildings remains a challenge due to the non-trivial effort of organizing, managing and extracting the metadata associated with sensors (e.g., information about their type, function, etc.) that these applications require. One reason for this problem is that varying conventions, acronyms, and standards are used to define this metadata. Though standards and government-mandated policies may lift these obstacles and enable these software-based improvements to our building stock, this effort could take years to come to fruition, and there are alternative technical solutions, such as automated metadata inference techniques, that could help rein in the non-standardized nature of today's BASs. This thesis sheds light on the viability of this alternative approach by answering three key questions, which are then validated using data from more than 400 buildings in the US: (a) What is the specific operational information required by FDD approaches for secondary heating, ventilation, and air conditioning (HVAC) systems found in the existing literature? (b) How is the performance of existing metadata inference approaches affected by changes in building characteristics, weather conditions, building usage patterns, and geographical locations? (c) What is an approach that can provide physical interpretations in the case of incorrectly inferred metadata? We find that: (a) the BAS points required by more than 30% of FDD approaches include six sensors in AHUs monitoring supply air temperature, outside air temperature, chilled water valve position, return air temperature, supply air flow rate, and mixed air temperature; (b) the average performance of existing inference approaches in terms of accuracy is similar across building sites, though there is significant variance, and the expected accuracy of classifying the type of points required by a particular FDD application for a new, unseen building is, on average, 75%; (c) a new approach based on physical models is developed and validated on both simulation data and real-world data to infer the point types confused by data-driven models, with an accuracy ranging from 73% to 100%, and this approach can provide physical interpretations in the case of incorrect inference. Our results provide a foundation and starting point for inferring the metadata required by FDD approaches and minimizing the implementation cost of deploying FDD applications across multiple buildings.
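The abstract does not include implementation details; as a rough illustration of the kind of data-driven point-type inference it evaluates, the sketch below trains a classifier on simple statistical features of sensor time series. All point names, features, and training data are invented for illustration, and the feature set is far simpler than anything a real approach would use.

```python
# Hypothetical sketch of data-driven BAS point-type inference, loosely in
# the spirit of the approaches the thesis evaluates; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(ts: np.ndarray) -> np.ndarray:
    """Summarize one sensor's time series with simple statistics."""
    return np.array([ts.mean(), ts.std(), ts.min(), ts.max(),
                     np.median(np.abs(np.diff(ts)))])

rng = np.random.default_rng(0)
# Toy training data: (time series, point-type label) pairs.
train = [(rng.normal(55, 2, 1440), "supply_air_temp"),
         (rng.normal(70, 8, 1440), "outside_air_temp"),
         (rng.uniform(0, 100, 1440), "chw_valve_position")] * 20

X = np.stack([features(ts) for ts, _ in train])
y = [label for _, label in train]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
unseen = rng.normal(54, 2, 1440)           # a point from a "new" building
print(clf.predict([features(unseen)])[0])  # e.g. "supply_air_temp"
```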
64

Semantic Web Technologies for T&E Metadata Verification and Validation

Darr, Timothy, Fernandes, Ronald, Hamilton, John, Jones, Charles, Weisenseel, Annette 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The vision of the semantic web is to unleash the next generation of information sharing and interoperability by encoding meaning into the symbols that are used to describe various computational capabilities within the World Wide Web or other networks. This paper describes the application of semantic web technologies to Test and Evaluation (T&E) metadata verification and validation. Verification is a quality process that is used to evaluate whether a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase or existing in the organization. Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. While this often involves acceptance and suitability testing with external customers, automation can provide significant assistance in this process.
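As a toy illustration of rule-based verification over semantically encoded T&E metadata (not the authors' actual tooling), the sketch below loads a few RDF triples with rdflib and queries for measurements that are missing a required unit declaration. The tmd: vocabulary is invented for illustration.

```python
# Minimal sketch of semantic verification of T&E metadata using rdflib.
# The tmd: vocabulary is invented for illustration, not from the paper.
from rdflib import Graph

data = """
@prefix tmd: <http://example.org/tmd#> .
tmd:alt1 a tmd:Measurement ; tmd:units "feet" .
tmd:spd1 a tmd:Measurement .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Verification rule: every Measurement must declare its units.
query = """
PREFIX tmd: <http://example.org/tmd#>
SELECT ?m WHERE {
    ?m a tmd:Measurement .
    FILTER NOT EXISTS { ?m tmd:units ?u }
}
"""
for row in g.query(query):
    print(f"verification failure: {row.m} has no units")
```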
65

Einsatz und Bewertung komponentenbasierter Metadaten in einer föderierten Infrastruktur für Sprachressourcen am Beispiel der CMDI [Use and evaluation of component-based metadata in a federated infrastructure for language resources, using CMDI as an example]

Eckart, Thomas 02 August 2016 (has links) (PDF)
This thesis examines the use of the Component Metadata Infrastructure (CMDI) within the federated infrastructure CLARIN and identifies a number of concrete problem cases. To develop corresponding solution strategies, various techniques are adapted and applied to the quality analysis of metadata and to the optimization of its use in a federated environment. Specifically, this concerns above all the adoption of modelling strategies from the Linked Data community, the transfer of principles and quality metrics from object-oriented programming to CMD metadata components, and the use of centrality measures from graph and network analysis to assess the cohesion of the entire metadata federation. The thesis focuses on the analysis of the schemas and schema components in use, and on the individual vocabularies employed, in the interplay of all participating centres.
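As a small illustration of the network-analysis idea described in the abstract (not the author's actual code), the sketch below models metadata components and the profiles that reuse them as a graph and ranks components by betweenness centrality; the profile and component names are invented.

```python
# Sketch: rank CMD metadata components by centrality in a reuse graph,
# in the spirit of the cohesion analysis described above. Names invented.
import networkx as nx

G = nx.Graph()
# Edges connect metadata profiles to the components they reuse.
reuse = [("profile:TextCorpus", "comp:GeneralInfo"),
         ("profile:TextCorpus", "comp:Licence"),
         ("profile:Lexicon",    "comp:GeneralInfo"),
         ("profile:Lexicon",    "comp:Contact"),
         ("profile:AudioData",  "comp:Contact"),
         ("profile:AudioData",  "comp:Licence")]
G.add_edges_from(reuse)

# Components that many otherwise-unconnected profiles pass through score
# high, hinting at how strongly they tie the metadata federation together.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    if node.startswith("comp:"):
        print(f"{node:22s} {score:.3f}")
```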
66

Where Next for XidML

Cooke, Alan 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / XidML is an open, vendor-neutral, XML-based standard for the FTI community, used to capture the metadata associated with flight test instrumentation and data acquisition networks. This paper discusses the difference between metadata and meta-metadata and how these concepts apply to XidML and an optional schema, XdefML. The paper also describes how Settings and their validation may be applied to a Generic package definition. Some examples are given of how the current version of XdefML can be leveraged to auto-generate graphical user interfaces that are both feature-rich and incorporate sophisticated error checking and data validation.
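The paper itself contains no code; the sketch below imagines how a tool might read an XdefML-like setting definition and derive a validation rule from it, in the spirit of the auto-generated, error-checked interfaces described above. The XML structure shown is invented for illustration and is not the actual XdefML schema.

```python
# Hypothetical sketch: derive a validated input field from an XdefML-like
# setting definition. The XML layout is invented, not the real schema.
import xml.etree.ElementTree as ET

definition = """
<Setting name="SampleRate" type="int">
    <Range min="1" max="10000"/>
    <Units>Hz</Units>
</Setting>
"""

def build_validator(xml_text: str):
    setting = ET.fromstring(xml_text)
    rng = setting.find("Range")
    lo, hi = int(rng.get("min")), int(rng.get("max"))
    name = setting.get("name")

    def validate(value: int) -> None:
        if not lo <= value <= hi:
            raise ValueError(f"{name} must be in [{lo}, {hi}], got {value}")
    return name, validate

name, validate = build_validator(definition)
validate(5000)       # ok
try:
    validate(20000)  # out of range -> caught by the generated check
except ValueError as e:
    print(e)
```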
67

Flexible Authoring of Metadata for Learning: Assembling forms from a declarative data and view model

Enoksson, Fredrik January 2011 (has links)
With the vast amount of information in various formats that is produced today, it becomes necessary for consumers of this information to be able to judge if it is relevant for them. One way to enable that is to provide information about each piece of information, i.e. to provide metadata. When metadata is to be edited by a human being, a metadata editor needs to be provided. This thesis describes the design and practical use of a configuration mechanism for metadata editors called annotation profiles, which is intended to enable a flexible metadata editing environment. An annotation profile is an instance of an Annotation Profile Model (APM), an information model that can gather information from many sources. This model has been developed by the author together with colleagues at the Royal Institute of Technology and Uppsala University in Sweden. It is designed so that an annotation profile can hold enough information for an application to generate a customized metadata editor from it. The APM works with metadata expressed in a format called RDF (Resource Description Framework), which forms the technical basis for the Semantic Web, as well as with metadata expressed using a model similar to RDF. The RDF model provides a simple way of combining metadata standards, which makes it possible for the resulting metadata editor to combine different metadata standards into one metadata description. Resources that are meant to be used in a learning situation can be of various media types (audio or video files, documents, etc.), which gives rise to a situation where different metadata standards have to be used in combination. Such a resource would typically contain educational metadata from one standard, but for each media type a different metadata standard might be used for the technical description. Combining all the metadata into a single metadata record is desirable and is made possible when using RDF. The focus in this thesis is on metadata for resources that can be used in such learning contexts.

One of the major advantages of using annotation profiles is that they enable the metadata editor to be changed without modifying the code of an application; instead, the annotation profile is updated to fit the required changes. In this way, the programmer of an application can avoid the responsibility of deciding which metadata can be edited, as well as its structure. Such decisions can instead be left to the metadata specialist who creates the annotation profiles.

The Annotation Profile Model can be divided into two models: the Graph Pattern Model, which holds information on what parts of the metadata can be edited, and the Form Template Model, which provides information about how the different parts of the metadata editor should be structured. An instance of the Graph Pattern Model is called a graph pattern, and it defines which parts of the metadata the annotation profile makes editable. The author has developed an approach to how this information can be used when the RDF metadata to edit is stored on a remote system, e.g. a system that can only be accessed over a network. In such cases the graph pattern cannot be used directly, even though it defines the structures that can be affected in the editing process. The method developed describes how the specific parts of metadata are extracted for editing and updated when the metadata author has finished editing.

A situation where annotation profiles have proven valuable is presented in chapter 6. Here the author has taken part in developing a portfolio system for learning resources in the area of blood diseases, hematology. A set of annotation profiles was developed in order to adapt the portfolio system for this particular community. The annotation profiles made use of an existing curriculum for hematology that provides a competence profile of the field. The annotation profiles use this curriculum in two ways:
1. As part of the personal profile for each user, i.e. metadata about a person. Through the editor, created from an annotation profile, the user can express his/her skill, knowledge and competence in the field of hematology.
2. To associate a learning resource with certain parts of the competence description, thus expressing that the learning resource deals with a specific part of the competence profile. This provides a mechanism for matching learning needs with available learning resources.

As the field of hematology evolves, the competence profile will need to be updated. Because annotation profiles are used, the metadata editors in question can be updated simply by changing the corresponding annotation profiles. This is an example of the benefits of annotation profiles within an installed application. Annotation profiles can also be used by applications that aim to support different metadata expressions, since the set of metadata editors can easily be changed. The portfolio system mentioned above provides this flexibility in metadata expression, and it has successfully been configured to work with resources from other domain areas, notably organic farming, by using another set of annotation profiles. Hence, using annotation profiles has proven useful in these settings due to the flexibility that the Annotation Profile Model enables. Plans for the future include developing an editor for annotation profiles in order to provide a simple way to create such profiles. / QC 20110426
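As a rough sketch of the graph-pattern idea (a simplified stand-in for the APM machinery, with an invented dc:/ex: dataset), the code below uses a SPARQL CONSTRUCT query as the "graph pattern" to extract just the editable part of a metadata record, which an editor could then present to the user and write back.

```python
# Simplified sketch of extracting an editable subgraph via a graph pattern,
# loosely following the remote-editing approach described above.
from rdflib import Graph

remote = Graph()
remote.parse(data="""
@prefix dc: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/> .
ex:res1 dc:title "Intro to hematology" ; dc:creator ex:author7 .
ex:res2 dc:title "Organic farming 101" .
""", format="turtle")

# The "graph pattern": only titles are editable in this annotation profile.
pattern = """
PREFIX dc: <http://purl.org/dc/terms/>
CONSTRUCT { ?r dc:title ?t } WHERE { ?r dc:title ?t }
"""

editable = Graph()
for triple in remote.query(pattern):
    editable.add(triple)  # this subgraph is what the editor form shows

print(editable.serialize(format="turtle"))
```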
68

History and Evolution of Metadata Standards for the FTI Community

Cooke, Alan 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / The paper discusses the history and background of metadata standards for the FTI community over the last 20 years and speculates on how they may develop in the future. It starts by highlighting the deficiencies of proprietary formats and the resulting problems. It then discusses the characteristics and features of specific industry-standard metadata descriptions such as TMATS, iHAL, MDL and XidML, in addition to their levels of maturity. The attributes of a fully mature FTI metadata standard are then discussed. It is suggested that any standard must serve at least two functions, Configuration and Validation, and what exactly each of these means is outlined. Finally, it is argued that there is now a significant level of convergence and consensus in both the scope and application of metadata, and in the associated concept of operations (ConOps). The details of this concept of operations are then discussed, along with suggestions as to how it may evolve in the coming years.
69

REFERENCE DESIGN FOR A SQUADRON LEVEL DATA ARCHIVAL SYSTEM

Ferrill, Paul 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / As more aircraft are fitted with solid state memory recording systems, the need for a large data archival storage system becomes increasingly important. In addition, there is a need to keep classified and unclassified data separate but available to the aircrews for training and debriefing along with some type of system for cataloging and searching for specific missions. This paper will present a novel approach along with a reference design for using commercially available hardware and software and a minimal amount of custom programming to help address these issues.
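The paper describes the design at a high level only; as a hedged illustration of the cataloging-and-search piece, the sketch below uses an SQLite table with a classification column so that queries can filter on unclassified missions. The schema, field names, and paths are invented, and real classified/unclassified separation would of course go far beyond a column filter.

```python
# Toy mission catalog in the spirit of the archival system described above.
# Table layout and fields are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE missions (
    tail_number TEXT, flown_on TEXT,
    classification TEXT,          -- e.g. 'UNCLASSIFIED' or 'CLASSIFIED'
    archive_path TEXT)""")
db.executemany("INSERT INTO missions VALUES (?, ?, ?, ?)", [
    ("165431", "2006-03-14", "UNCLASSIFIED", "/archive/u/165431_0314.ch10"),
    ("165431", "2006-03-15", "CLASSIFIED",   "/archive/c/165431_0315.ch10"),
])

# Aircrew search: unclassified missions for one aircraft, newest first.
rows = db.execute("""SELECT flown_on, archive_path FROM missions
                     WHERE tail_number = ? AND classification = 'UNCLASSIFIED'
                     ORDER BY flown_on DESC""", ("165431",)).fetchall()
print(rows)
```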
70

APPLICATIONS OF A HARDWARE SPECIFICATION FOR INSTRUMENTATION METADATA

Hamilton, John, Fernandes, Ronald, Graul, Mike, Jones, Charles H. 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / In this paper, we discuss the benefits of maintaining a neutral-format hardware specification alongside the telemetry metadata specification. We present several reasons and methods for maintaining hardware specifications, as well as several potential uses for them. These uses include cross-validation with the telemetry metadata and automatic generation of both metadata and instrumentation networks.
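As a minimal, hypothetical sketch of the cross-validation use the paper mentions, the code below checks that every measurement in a telemetry metadata description maps to a channel that actually exists in the hardware specification and that the units agree. Both dictionaries are invented stand-ins for the neutral-format specifications.

```python
# Hypothetical cross-validation of telemetry metadata against a hardware
# specification; both structures are invented stand-ins for neutral formats.
hardware_spec = {  # channel -> units, as declared by the instrumentation
    "ADC-1/ch0": "volts",
    "ADC-1/ch1": "volts",
    "TC-3/ch0":  "degC",
}
telemetry_metadata = {  # measurement -> (source channel, units)
    "BusVoltage":  ("ADC-1/ch0", "volts"),
    "EngineTemp":  ("TC-3/ch0",  "degF"),     # unit mismatch
    "StrainGauge": ("DAU-9/ch2", "ustrain"),  # unknown channel
}

for name, (channel, units) in telemetry_metadata.items():
    if channel not in hardware_spec:
        print(f"{name}: references unknown hardware channel {channel}")
    elif hardware_spec[channel] != units:
        print(f"{name}: units {units!r} disagree with "
              f"hardware {hardware_spec[channel]!r}")
```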
