261.
Managing Geographic Data as an Asset: A Case Study in Large Scale Data Management
Smithers, Clay, 21 November 2008
Geographic data is a hallowed element within the Geographic Information Systems (GIS) discipline. As geographic data sees increased use in distributed and mobile environments, accessing and maintaining that data can become challenging. Traditional methods of data management, such as file storage, databases, and data catalog software, are valuable for organizing data, but they provide little information about how the data was collected, how often it is updated, and what value it holds for an organization. Defining geographic data as an asset makes it a valuable resource that requires acquisition, maintenance, and sometimes retirement over its lifetime. To understand why geographic data differs from other types of data, we must examine its many components, and specifically how that data is gathered and organized.
To align geographic data with the asset management discipline, this thesis focuses on six key dimensions, established through the work of Vanier (2000, 2001), for evaluating asset management systems. Using a conceptual narrative linked to an environmental-analysis case study, this research identifies strategies for efficiently managing geospatial data resources. These resources gain value through the context supplied by a standard structure and by methodologies from the asset management field. The result of this thesis is a determination of the extent to which geographic data can be considered an asset, which asset management strategies are applicable to geographic data, and what the requirements are for geographic data asset management systems.
262.
The USA PATRIOT Act and Punctuated Equilibrium
Sanders, Michael, 01 January 2016
Currently, Title II of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT Act) Act of 2001 appears to be stalled as a result of controversy over the intent and meaning of the law. Proponents of the title advocate its necessity for combating modern terrorism, whereas opponents warn of circumventions of the Fourth Amendment of the U.S. Constitution. Using punctuated equilibrium theory (PET) as the theoretical foundation, the purpose of this case study was to explore the dialogue and legal exchanges between the American Civil Liberties Union (ACLU) and the Department of Justice related to the National Security Agency's metadata collection program. Specifically, the study explored the nature of resistance to the changes needed to mollify the controversies associated with Title II. Data were acquired through publicly available documents and artifacts, including transcripts of Congressional hearings, legal documents, and briefing statements from the U.S. Department of Justice and the ACLU. These data were deductively coded according to the elements of PET and then subjected to thematic analysis. Findings indicate that supporters and opponents of the law are locked in persistent ideological polarization: supporters tout the necessity of the authorizations in combating terrorism, while opponents argue that the law violates civil liberties. Neither side displayed a willingness to compromise or to acknowledge the legitimacy of the other viewpoint. Legislators who accept the legitimacy of both viewpoints could create positive social change by refining the law to meet national security needs while preserving constitutional protections.
263.
Metadata Validation Using a Convolutional Neural Network: Detection and Prediction of Fashion Products
Nilsson Harnert, Henrik, January 2019
In the e-commerce industry, importing data from third-party clothing brands requires validation of that data. Done manually, this validation is a tedious and time-consuming task. Part of it can be replaced or assisted by using computer vision to automatically find clothing types, such as T-shirts and pants, within imported images. Once a clothing type has been detected, the likelihood that catalogue products correlate with the imported data can be recommended with a certain accuracy. This was done alongside a prototype interface that can be used to start training, to find clothing types in an image, and to mask annotations of products. Annotations are areas describing different clothing types and are used to train an object detection model. A model for finding clothing types is trained with the Mask R-CNN object detector and achieves 0.49 mAP. A detection takes just over one second on an Nvidia GTX 1070 8 GB graphics card. Recommending one or several products based on a detection takes 0.5 seconds, using the k-nearest neighbors algorithm. If prediction is done on the products that were used to build the prediction model, almost perfect accuracy is achieved, whereas images of other products yield considerably worse results.
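The detection-plus-recommendation pipeline described above lends itself to a compact illustration. The sketch below is not the thesis's code: it assumes the detector yields a fixed-length feature vector per detected garment, and it uses scikit-learn's NearestNeighbors for the k-nearest-neighbors lookup. All names, dimensions, and the random stand-in data are illustrative.

```python
# Minimal sketch: recommend the k catalogue products most similar to a
# detected garment, given per-product feature vectors. Feature extraction
# from the Mask R-CNN detector is stubbed out with random vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical catalogue: one 256-d feature vector per known product.
catalog_features = np.random.rand(1000, 256)      # stand-in for real embeddings
catalog_ids = [f"product-{i}" for i in range(1000)]

index = NearestNeighbors(n_neighbors=5, metric="euclidean")
index.fit(catalog_features)

def recommend(detection_feature: np.ndarray, k: int = 5) -> list[str]:
    """Return the k catalogue products closest to a detected garment."""
    _, indices = index.kneighbors(detection_feature.reshape(1, -1), n_neighbors=k)
    return [catalog_ids[i] for i in indices[0]]

# Example: recommend products for one (random stand-in) detection.
print(recommend(np.random.rand(256)))
```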
264.
Automated Extraction and Retrieval of Metadata by Data Mining: A Case Study of Mining Engine for National Land Survey Sweden
Dong, Zheng, January 2010
Metadata is the information that describes geographical data resources and their key elements; it is used to guarantee the availability and accessibility of the data. ISO 19115 is a metadata standard for geographic information that makes geographical metadata shareable, retrievable, and understandable at the global level. To cope with the massive, high-dimensional, and highly diverse nature of geographical data, data mining is an applicable method for discovering metadata.

This thesis develops and evaluates an automated mining method for extracting metadata from the data environment on the Local Area Network at the National Land Survey of Sweden (NLS). These metadata are prepared and provided across Europe according to the metadata implementing rules of the Infrastructure for Spatial Information in Europe (INSPIRE). The metadata elements are defined according to the formats of four different data entities: document data, time-series data, webpage data, and spatial data. To evaluate the method for further improvement, selected attributes and the corresponding metadata of geographical data files are extracted automatically as metadata records during testing and arranged in a database. Based on the extracted metadata schema, a retrieval function finds the files containing a keyword supplied by the user. Overall, the average success rate of metadata extraction and retrieval is 90.0%.

The mining engine is developed in the C# programming language on top of a SQL Server 2005 database. Lucene.net is integrated with Visual Studio 2005 to build an indexing framework for extracting and accessing the metadata in the database.
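As a rough illustration of the extract-index-retrieve cycle the abstract describes, the following Python sketch builds a minimal metadata record from file-system attributes, indexes it, and retrieves files by keyword. The thesis's actual engine is written in C# against SQL Server 2005 with Lucene.net; this stand-in uses only the Python standard library, and every field name is an assumption.

```python
# Schematic rendering of the mining engine's core loop: extract a few file
# attributes as a metadata record, index the record, retrieve by keyword.
import os
from collections import defaultdict
from datetime import datetime, timezone

index: dict[str, set[str]] = defaultdict(set)   # keyword -> file paths
records: dict[str, dict] = {}                   # file path -> metadata record

def extract_metadata(path: str) -> dict:
    """Build a minimal metadata record from file-system attributes."""
    stat = os.stat(path)
    return {
        "title": os.path.basename(path),
        "format": os.path.splitext(path)[1].lstrip(".") or "unknown",
        "size_bytes": stat.st_size,
        "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }

def index_file(path: str) -> None:
    """Extract a record for one file and add its title tokens to the index."""
    record = extract_metadata(path)
    records[path] = record
    for token in record["title"].lower().replace(".", " ").split():
        index[token].add(path)

def search(keyword: str) -> list[dict]:
    """Return the metadata records of files whose title contains the keyword."""
    return [records[p] for p in index.get(keyword.lower(), ())]
```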
265.
The Transformation of the North Carolina Government Information Locator Service, 1995-2005
James T. Wellman, 2005
This paper is a study of the transformation of the North Carolina Government Information Locator Service (NCGILS) in the decade following its creation in 1995. The changes that NCGILS has undergone mirror the changes in the world of metadata and government information. North Carolina started NCGILS as a librarian-influenced attempt to engage all information creators in producing quality metadata. As a result of several obstacles and issues encountered during the past decade, North Carolina has essentially put NCGILS into hibernation. Today North Carolina relies on automatic harvesting of metadata and on centralized efforts by state library staff rather than on NCGILS code. This shift to an information-science-driven model underscores the general difficulty of applying librarian-influenced models in the practical world of government information. The changes, challenges, and issues encountered by NCGILS provide a valuable guide for government agencies and academic students of metadata.
266.
Knowledge Discovery In Microarray Data Of Bioinformatics
Kocabas, Fahri, 01 June 2012
This thesis analyzes major microarray repositories and presents a metadata framework both to address current issues and to promote core operations such as knowledge discovery, sharing, integration, and exchange. The proposed framework is demonstrated in a case study on real data and can be used for other high-throughput repositories in the biomedical domain.

Not only does the number of microarray experiments increase, but the size and complexity of the results also rise in response to biomedical inquiries. Experiment results are significant when examined in batches and placed in a biological context. There have been standardization initiatives on content, object model, exchange format, and ontology; however, each has its own proprietary information space. There are backlogs, and data cannot be exchanged among the repositories. A format and data management standard is needed.

We introduced a metadata framework comprising metadata cards and semantic nets to make experiment results visible, understandable, and usable. They are encoded in standard syntax encoding schemes and represented in XML/RDF. They can be integrated with other metadata cards and semantic nets, and they can be queried, exchanged, and shared. We demonstrated the performance and potential benefits with a case study on a microarray repository.

This study does not replace any existing repository product. A metadata framework is required to manage such huge data. We state that with this metadata framework, backlogs can be reduced, and complex knowledge discovery queries and the exchange of information become possible.
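To make the metadata-card idea concrete, here is a toy sketch in Python with rdflib: two cards encoded as RDF, merged by graph union, and queried with SPARQL. The namespace, property names, and experiment identifiers are invented for illustration and do not reproduce the thesis's actual schema.

```python
# Toy illustration of metadata cards as RDF: encode, merge, query.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/microarray#")   # invented namespace

card1 = Graph()
card1.add((URIRef(EX["experiment-1"]), EX.organism, Literal("Homo sapiens")))
card1.add((URIRef(EX["experiment-1"]), EX.platform, Literal("GPL570")))

card2 = Graph()
card2.add((URIRef(EX["experiment-2"]), EX.organism, Literal("Mus musculus")))

# Cards integrate by simple graph union.
merged = card1 + card2

# Query across both cards for human experiments.
results = merged.query(
    """SELECT ?exp WHERE {
           ?exp <http://example.org/microarray#organism> "Homo sapiens" .
       }"""
)
for row in results:
    print(row.exp)
```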
267.
Exploring the multiple dimensions of context: Implications for the design and development of innovative technology-enhanced learning environments
Kurti, Arianit, January 2009
Technology evolution throughout history has initiated many changes in different aspects of human activity. Learning, as one of the most representative human activities, has also been subject to these changes. Nowadays, the use of information and communication technologies has considerably changed the way people learn and collaborate. These changes have been accompanied by new approaches to supporting learning with a wide range of mobile devices, software applications, and communication platforms. In these technology-rich landscapes, the notion of context emerges as a crucial component in the design and technical implementation of technology-enhanced learning environments. The main research question investigated in this thesis concerns the use of different context instantiations in the design and development of innovative technology-enhanced learning environments.

This thesis is a collection of eight papers describing the results of research conducted in four experimental cases over a period of four years; the experiments were designed and developed as part of two research projects. The theoretical foundations were based on views of context and interaction from learning theory and human-computer interaction, together with dimensional data modeling techniques. Different methodological approaches (action-oriented research, design-based research, and case study) were used to investigate the main research question.

The main contribution this thesis offers the research community is a conceptual context model, accompanied by a dimensional data model, that can be used as a design tool for embedding learning activities in context. In the four trials that make up the empirical work, the conceptual model guided the design and technical development of novel technology-enhanced learning activities. The outcomes provided insights into the use of different context instantiations, with implications for the design and development of these environments. This thesis advocates that computational context attributes should be used as metadata descriptors, which would potentially promote personalization and interoperability of digital learning content. Content personalization offers opportunities for personalized learning that increases learners' engagement and could eventually lead to better learning results. Furthermore, the research and industrial communities could use the context model developed in this thesis as a guide for creating new ways to personalize services and technologies.
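One way to picture "computational context attributes as metadata descriptors" is sketched below: a learning object carries descriptors of the contexts it suits, and a delivery system filters content against the learner's current situation. All field names are invented for illustration; the thesis's conceptual model is considerably richer than this.

```python
# Hypothetical context descriptors attached to learning content, enabling
# simple context-based filtering (a minimal stand-in for personalization).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ContextDescriptor:
    location: str   # e.g. "museum", "classroom", "field-trip"
    device: str     # e.g. "smartphone", "desktop"
    activity: str   # e.g. "group-inquiry", "individual-review"

@dataclass
class LearningObject:
    title: str
    contexts: list[ContextDescriptor] = field(default_factory=list)

def matches(obj: LearningObject, current: ContextDescriptor) -> bool:
    """Select content whose declared context matches the learner's situation."""
    return current in obj.contexts

quiz = LearningObject(
    title="Pond ecosystem quiz",
    contexts=[ContextDescriptor("field-trip", "smartphone", "group-inquiry")],
)
now = ContextDescriptor("field-trip", "smartphone", "group-inquiry")
print(matches(quiz, now))   # True: deliver this object in the current context
```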
268.
From Interoperability to Harmonization in Metadata Standardization: Designing an Evolvable Framework for Metadata Harmonization
Nilsson, Mikael, January 2010
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen a growth in interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason, the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata.

This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries, and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM, and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed.

The thesis presents an analytical framework of concepts and principles for understanding the issues that arise when interfacing multiple standardization communities. The framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues:

- Metadata syntaxes as a tool for metadata exchange. Syntaxes are shown to be of secondary importance in harmonization.
- Metadata semantics as a cornerstone for interoperability. This thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization.
- Abstract models for metadata as a tool for designing metadata standards. Such models are shown to be pivotal in understanding harmonization problems.
- Vocabularies as carriers of meaning in metadata. The thesis shows how portable vocabularies can carry semantics from one standard to another, enabling harmonization.
- Application profiles as a method for combining metadata standards. While application profiles have been put forward as a powerful tool for interoperability, the thesis concludes that they have only a marginal role to play in harmonization.

The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues form the basis of a metadata harmonization framework in which a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification with the right characteristics to serve as a practical basis for such a harmonization framework, and that it must therefore be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standards development is derived, and a roadmap for harmonization improvements of the analyzed standards is presented.
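The harmonization argument for RDF can be illustrated in a few lines: because RDF identifies properties by global URIs, terms from independent vocabularies can describe one resource in a single graph without collision. The Python sketch below mixes Dublin Core terms with an invented learning-object vocabulary standing in for a LOM-style standard; the namespace and property names on the invented side are assumptions, not any real binding.

```python
# Two independent vocabularies describing the same resource in one RDF graph.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

LOM = Namespace("http://example.org/lom#")   # illustrative LOM-style vocabulary
resource = URIRef("http://example.org/course/algebra-101")

g = Graph()
g.add((resource, DCTERMS.title, Literal("Introduction to Algebra")))
g.add((resource, DCTERMS.creator, Literal("A. Teacher")))
g.add((resource, LOM.typicalLearningTime, Literal("PT10H")))  # coexists with DC terms

print(g.serialize(format="turtle"))
```

The point of the sketch is that neither vocabulary needs to know about the other: the graph model, not an application profile or a shared syntax, is what lets the two sets of statements combine cleanly.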
269.
Collaborative tagging: folksonomy, metadata, visualization, e-learning
Bateman, Scott, 12 December 2007
Collaborative tagging is a simple and effective method for organizing and sharing web resources using human-created metadata. It arose from the need for an efficient method of personal organization as the number of digital resources in everyday life increases. While tagging has become a proven organization scheme through its popularity and widespread use on the Web, little is known about its implications and how it may be applied effectively in different situations. This is because tagging has evolved through several iterations of use on social software websites, rather than through a scientific or engineering design process. The research presented in this thesis, through investigations in the domain of e-learning, seeks to understand more about the scientific nature of collaborative tagging through a number of human subject studies. While broad in scope, touching on issues in human-computer interaction, knowledge representation, Web system architecture, e-learning, metadata, and information visualization, this thesis focuses on how collaborative tagging can supplement the growing metadata requirements of e-learning. I conclude by looking at how the findings may be used in future research, using information from the emergent social networks of social software to adapt automatically to the needs of individual users.
270.
Arbitrary borders? New partnerships for cultural heritage siblings – libraries, archives and museums: creating integrated descriptive systems
Timms, Katherine V., 18 September 2007
This thesis explores the convergence of descriptive systems across different cultural heritage institutions: libraries, archives, and museums. The primary purpose of integrated descriptive systems is to enable researchers to access cultural heritage information through a single portal. Beginning with definitions of each type of cultural heritage institution and a historical overview of their evolution, the thesis provides an analysis of similarities and differences between these institutions with respect to purpose, procedures, and perspective. The latter half first gives a historical overview of each discipline's descriptive practices, with a brief comparative analysis, before discussing various methods by which these institutions can create integrated descriptive systems. The overall emphasis is on complementary similarities between the institutions and the potential for cross-sectoral collaboration that these similarities enable. The thesis concludes that creating integrated descriptive systems is desirable and well within current technological capabilities.