  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

En jämförande studie av metadatascheman vid två statliga myndigheter (A comparative study of metadata schemas at two government agencies)

Troselius, Nils January 2010 (has links)
The purpose of this case study has been to investigate the development and implementation of metadata schemas at two Swedish governmental agencies. Interest in metadata in relation to records management has grown significantly in recent years due to the challenges posed by the digital environment. This has been manifested in international standards such as ISO/TS 23081 Metadata for records and the MoReq2 metadata specification issued by the DLM Forum. The extent to which these standards have influenced practice seems less clear, however, which has been a main motivation for this study.

The research questions addressed are: What have been the main purposes for the development of the metadata schemas? Which standards and role models have influenced the schemas? How has the implementation of the schemas been carried out? The case study has been conducted using a qualitative research method; the primary methods of data collection have been qualitative interviews and analysis of documentation. The results have been analysed using the records continuum model, according to which records can serve different purposes across time and space.

The results show that the motives behind the development of the schemas have been much the same, for example improved description and access. The choice of standards influencing the schemas has differed, however, which has affected the agencies' possibilities to describe records in context and safeguard their evidential value. The metadata schema developed for Skatteverket has mainly followed a record-centric model of description; the underlying assumption has been a retrospective stance in which records are regarded as existing objects that pass through a priori defined life stages. In contrast, the metadata schema developed for Trafikverket emphasizes the relationship between records and the business processes in which they are created.
The study confirms the findings of previous research that certain metadata types, such as metadata describing the logical structure of records and process metadata relating to the subsequent use and handling of records, are used to a lesser extent. ISO 23081-1 Metadata for records was issued during the development of the metadata framework for Trafikverket and influenced the further work to some extent. Investigating the actual impact of this standard would require further quantitatively oriented studies.
82

Individualized selection of learning objects

Liu, Jian 15 May 2009
Rapidly evolving Internet and web technologies, together with international efforts to standardize learning object metadata, give learners in web-based educational systems ubiquitous access to multiple learning resources. It is becoming both more necessary and more feasible to provide individualized help with selecting learning materials, so that the most suitable choice can be made among many alternatives.

A framework for individualized learning object selection, called Eliminating and Optimized Selection (EOS), is presented in this thesis. The framework suggests an extension to learning object metadata specifications and presents an approach to selecting a short list of learning objects appropriate for an individual learner in a particular learning context. The key features of the EOS approach are to evaluate the suitability of a learning object in its situated context and to refine the evaluation using available historical usage information about the learning object. A Learning Preference Survey was conducted to determine the relationships between the importance of learning object attributes and learner characteristics. Two weight models, a Bayesian Network Weight Model and a Naïve Bayes Model, were derived from the data collected in the survey. Given a particular learner, both models provide a set of personal weights for the learning object features required by the individualized selection.

The optimized selection approach was demonstrated and verified using simulated selections: seventy simulated learning objects were evaluated for three simulated learners within simulated learning contexts, using both the Bayesian Network Weight Model and the Naïve Bayes Model. The results produced by the two algorithms were compared and found to be highly correlated in the domain where the testing was conducted. A Learning Object Selection Study was then performed to validate the selection algorithms against human experts; the agreement between machine selection and human expert selection was higher than the agreement among the human experts themselves.
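The per-learner weighted evaluation at the core of this kind of selection can be sketched as follows. The feature names, weights, and scoring rule here are illustrative assumptions, not the thesis's actual EOS models:

```python
# Illustrative sketch (not the thesis's actual EOS algorithm): each learning
# object is scored as a weighted sum of feature suitabilities, where the
# weights are personal to the learner (e.g. produced by a weight model).

def score(obj_features, learner_weights):
    """Weighted suitability of one learning object for one learner."""
    return sum(learner_weights.get(f, 0.0) * v for f, v in obj_features.items())

def select_top_k(objects, learner_weights, k=3):
    """Return the k highest-scoring learning objects (a 'short list')."""
    ranked = sorted(objects, key=lambda o: score(o["features"], learner_weights),
                    reverse=True)
    return [o["id"] for o in ranked[:k]]

# Hypothetical feature suitabilities in [0, 1] and hypothetical weights.
objects = [
    {"id": "lo-1", "features": {"interactivity": 0.9, "difficulty": 0.4}},
    {"id": "lo-2", "features": {"interactivity": 0.2, "difficulty": 0.8}},
    {"id": "lo-3", "features": {"interactivity": 0.6, "difficulty": 0.6}},
]
weights = {"interactivity": 0.7, "difficulty": 0.3}  # from a weight model

print(select_top_k(objects, weights, k=2))  # → ['lo-1', 'lo-3']
```

Swapping in a different weight model (Bayesian network vs. naïve Bayes) only changes how `weights` is produced; the selection step stays the same.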
83

Inter-university Upper atmosphere Global Observation NETwork (IUGONET) project

Hashiguchi, N.O., Yatagai, Akiyo, Kaneda, Naoki, Umemura, Norio, UeNo, Satoru, Yagi, Manabu, Koyama, Yukinobu, Sato, Yuka, Shinbori, Atsuki, Tanaka, Yoshimasa, Abe, Shuji, Hori, Tomoaki 25 June 2013 (has links)
International Living With a Star Workshop 2013, 2013/06/24-6/28, Irkutsk, Russia
84

Automatic and semi-automatic methods for metadata creation and maintenance : long term implementation of the INSPIRE directive

Norlund, Petra January 2010 (has links)
Metadata is an important part of any Spatial Data Infrastructure (SDI). Without proper and sufficient documentation of spatial data, resources are wasted when pre-existing data has to be recreated or when data sets overlap. At the same time, creating and updating metadata can be a resource-intensive task. Lantmäteriet seeks to optimize the creation and updating of metadata in accordance with the new INSPIRE directive as well as the Swedish National Geodata Strategy. INSPIRE (Infrastructure for Spatial Information in Europe) seeks to increase cooperation between European nations through the harmonization of certain spatial data themes, increased data and software interoperability, and the creation of a European spatial data infrastructure; the directive lays the legal foundation for this cooperation. Sweden has been involved with INSPIRE since May 15th, 2009, and the European Commission (EC) INSPIRE directive will be fully implemented in 2010. This thesis aims to develop an optimal business process model for how the Swedish Mapping, Cadastral, and Land Registration Authority (Lantmäteriet) can create and update metadata according to the new INSPIRE directive, based on best-practice case studies and an extensive literature review. Furthermore, a survey of current metadata practices has been carried out to establish a starting point for metadata creation at Lantmäteriet, as well as a best-practice business process model using ArcGIS Desktop.
86

A Study on the Mechanism of Geographic Data Searching and Clearinghouse on the Internet

Wei, Ko-Ming 31 August 2002 (has links)
The Internet has become the most widely used medium for data exchange and communication as computing technology has become ubiquitous, and Geographic Information Systems (GIS) are now also deployed on the Internet. However, existing Internet search mechanisms cannot search the data held in Web GIS: users can browse data but not access it, leaving each Web GIS as an isolated island. Users cannot find out where data are provided or what kinds of data are available, and the data cannot be shared. The main objective of this research is therefore to build an effective mechanism for geographic data searching and clearinghouse services on the Internet. Such a mechanism helps computers overcome the difficulty of reading and understanding geographic data composed of maps and images, so that geographic data can be searched and shared as easily as text data. The research creates metadata in XML that comply with the FGDC standard. By exploiting two characteristics of XML, user-defined tags and self-describing data, a computer can retrieve information automatically from metadata on the Internet. Finally, the geographic search engine and clearinghouse built in this research collect and integrate geographic metadata to help users systematically find the geographic data they need through the Internet, achieving the objectives of geographic data search and clearinghouse services.
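The tag-based harvesting the study describes can be sketched with a minimal FGDC-style record. The element names below (`idinfo`, `citation`, `citeinfo`, `title`, `spdom`, `bounding`) follow the FGDC CSDGM convention, but the record itself is invented for illustration, not taken from the study:

```python
# Minimal sketch: harvesting fields from an FGDC-style XML metadata record.
# The record below is invented; real FGDC CSDGM records are far richer.
import xml.etree.ElementTree as ET

record = """
<metadata>
  <idinfo>
    <citation><citeinfo>
      <title>Kaohsiung road network</title>
      <pubdate>2002</pubdate>
    </citeinfo></citation>
    <spdom><bounding>
      <westbc>120.17</westbc><eastbc>120.43</eastbc>
      <southbc>22.50</southbc><northbc>22.76</northbc>
    </bounding></spdom>
  </idinfo>
</metadata>
"""

root = ET.fromstring(record)
# Because the tags describe their contents, a harvester can pull out the
# fields a search engine needs without understanding the map data itself.
title = root.findtext("./idinfo/citation/citeinfo/title")
west = float(root.findtext("./idinfo/spdom/bounding/westbc"))
print(title, west)  # → Kaohsiung road network 120.17
```

A clearinghouse crawler would apply the same XPath-style lookups to every harvested record, indexing titles and bounding boxes so users can search spatially.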
87

Collaborative annotation, analysis, and presentation interfaces for digital video

Diakopoulos, Nicholas A. January 2009 (has links)
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2010. / Committee Chair: Essa, Irfan; Committee Member: Abowd, Gregory; Committee Member: Bolter, Jay; Committee Member: Lampe, Cliff; Committee Member: Stasko, John. Part of the SMARTech Electronic Thesis and Dissertation Collection.
88

Learning path optimization with incomplete learning object metadata

Fung, Sze-tat., 馮思達. January 2011 (has links)
One of the fundamental concerns of instructional design is pedagogical sequencing, the practice of organizing course materials according to the underlying knowledge structure and concept dependencies. In conventional settings, such as secondary schools or tertiary institutions, instructors interpret learning materials using their own domain knowledge. In many online learning systems, however, analyzing and interpreting learning materials is very challenging due to the lack of instructional context and pedagogical attributes for the learning units. Learning objects and learning object metadata (LOM) are learning technologies that formalize the concept of a learning unit and standardize the specification of learning object annotation. Learning objects aim to provide a solution for the reuse and sharing of learning materials, and an infrastructure for pedagogical design. LOM has been widely adopted in various learning systems, and methodologies and system frameworks have been proposed to solve the instructional design problem based on the pedagogical information provided in LOM. However, an empirical study showed that most real-life learning objects do not provide the necessary pedagogical information, and it is not clear how incomplete metadata, and hence incomplete pedagogical information, affects those LOM-based methods. This thesis proposes a new approach that reconstructs the underlying knowledge structure from information extracted from LOM using data mining techniques. The main idea is to reconstruct the knowledge structure from the content of the learning materials: the vector space model and the k-means clustering algorithm are applied to reconstruct a knowledge graph based on keyword extraction, and concept dependency relations are extracted from the obtained graph.
A genetic algorithm is then applied to optimize for a learning path that satisfies most of the obtained concept dependencies. The performance of different semantic interpreters and rule extraction methodologies is also tested and compared. Experimental results revealed that learning paths generated by the proposed approach are very similar to learning paths designed by human instructors. / Electrical and Electronic Engineering / Master of Philosophy
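The clustering step of such a pipeline can be sketched as follows. This is a toy illustration on invented data, using keyword-presence vectors and a minimal k-means, not the thesis's actual implementation:

```python
# Toy sketch of the clustering step: represent learning objects as
# keyword-presence vectors (vector space model) and group them with a
# minimal k-means. The vocabulary and objects are invented for illustration.
import math

def vectorize(keywords, vocab):
    return [1.0 if w in keywords else 0.0 for w in vocab]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, centers, iters=10):
    """Assign each vector to its nearest center, then recompute centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in vectors:
            i = min(range(len(centers)), key=lambda c: dist(v, centers[c]))
            clusters[i].append(v)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

vocab = ["loop", "recursion", "matrix", "vector"]
objs = {
    "intro-loops":    {"loop"},
    "while-loops":    {"loop", "recursion"},
    "linear-algebra": {"matrix", "vector"},
}
vectors = [vectorize(kw, vocab) for kw in objs.values()]
clusters = kmeans(vectors, centers=[vectors[0], vectors[2]])
print([len(c) for c in clusters])  # → [2, 1]
```

Each resulting cluster stands for a topic node in the knowledge graph; concept dependencies between clusters would then constrain the genetic algorithm's search for a learning path.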
89

Separating data from metadata for robustness and scalability

Wang, Yang, active 21st century 09 February 2015 (has links)
When building storage systems that aim to simultaneously provide robustness, scalability, and efficiency, one faces a fundamental tension: higher robustness typically incurs higher costs and thus hurts both efficiency and scalability. My research shows that an approach to storage system design based on a simple principle, separating data from metadata, can yield systems that elegantly and effectively address that tension in a variety of settings. One observation motivates our approach: much of the cost paid by many strong protection techniques is incurred to detect errors. This observation suggests an opportunity: if we can build a low-cost oracle to detect errors and identify correct data, it may be possible to reduce the cost of protection without weakening its guarantees. This dissertation shows that metadata, if carefully designed, can serve as such an oracle and help a storage system protect its data at minimal cost. It shows how to effectively apply this idea in three very different systems: Gnothi, a storage replication protocol that combines the high availability of asynchronous replication and the low cost of synchronous replication for small-scale block storage; Salus, a large-scale block store with unprecedented guarantees in terms of consistency, availability, and durability in the face of a wide range of server failures; and Exalt, a tool to emulate a large storage system with 100 times fewer machines.
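The "metadata as a low-cost oracle" idea can be illustrated with a simple sketch in which small checksums, stored separately from the data, identify which replica holds correct contents. This is an invented illustration of the general principle, not Gnothi's or Salus's actual protocol:

```python
# Invented illustration of metadata-as-oracle: keep a small checksum per
# block in separately stored metadata. On read, the checksum cheaply
# detects a corrupt replica and identifies which copy is correct.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class MetadataOracle:
    def __init__(self):
        self.sums = {}                       # block id -> expected checksum

    def record_write(self, bid, data):
        self.sums[bid] = checksum(data)

    def pick_correct(self, bid, replicas):
        """Return the first replica whose contents match the metadata."""
        for data in replicas:
            if checksum(data) == self.sums[bid]:
                return data
        raise IOError(f"no correct replica for block {bid}")

oracle = MetadataOracle()
oracle.record_write("b1", b"payload")
# One replica was corrupted in flight; the metadata picks out the good copy.
good = oracle.pick_correct("b1", [b"payXoad", b"payload"])
print(good)  # → b'payload'
```

The point of the separation is that the oracle's state is tiny compared to the data, so it can be replicated strongly and cheaply while the bulk data uses lighter-weight protection.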
90

Transitioning from NTSC to HD Digital Video Vol. 2

Hightower, Paul 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / In our first installment, analog and HD video terms were compared. It was shown that resolution is three-dimensional in HD, and that high HD data rates force the use of video compression in order to transport video. Compression compromises video detail and introduces latency, and latency affects the overlay of time-critical measurements; it is therefore important to time stamp at the source. In this volume, the focus is on the key regions of the HD video frame and on metadata. SMPTE specifications are the foundation on which MISB builds its use of metadata. It is shown that only two KLV packets can hold all TSPI and calibration data with frame-by-frame updates, a capacity that is new in HD. Metadata is permanently merged with the images and with the time at which both were collected. We show how employing the KLV metadata packet can produce a single video record in which image capture, metadata, and time stamps are all in lockstep. Using KLV metadata makes it possible to record clean video while retaining the ability to place crosshairs and data overlays during playback.
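KLV (Key-Length-Value) packets follow the general SMPTE ST 336 layout: a 16-byte Universal Label key, a BER-encoded length, and the value. The sketch below encodes one packet under that layout; the key bytes and payload are placeholders, not an actual MISB-registered label or TSPI format:

```python
# Sketch of KLV (Key-Length-Value) encoding per the general SMPTE ST 336
# layout: 16-byte key, BER length, value. The key below is a placeholder,
# not a real MISB-registered Universal Label.

def ber_length(n: int) -> bytes:
    """BER length: one byte if n < 128, else 0x80|byte-count then n big-endian."""
    if n < 128:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_encode(key: bytes, value: bytes) -> bytes:
    assert len(key) == 16, "SMPTE ST 336 Universal Labels are 16 bytes"
    return key + ber_length(len(value)) + value

PLACEHOLDER_KEY = bytes(range(16))            # stand-in for a real UL
packet = klv_encode(PLACEHOLDER_KEY, b"TSPI:lat=36.1,lon=-115.2")
print(len(packet))  # 16 (key) + 1 (length) + 24 (value) → 41
```

Because the length field is explicit, a decoder can skip packets it does not understand, which is what lets TSPI and calibration metadata ride alongside the video essence without disturbing it.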
