471

Modeling of census data in a multidimensional environment

Günzel, Holger, Lehner, Wolfgang, Eriksen, Stein, Folkedal, Jon 13 June 2023 (has links)
The general aim of the KOSTRA project, initiated by Statistics Norway, is to set up a data reporting chain from the Norwegian municipalities to a central database at Statistics Norway. In this paper, we present an innovative data model for supporting a data analysis process consisting of two sequential data production phases using two conceptual database schemas. The first schema must provide a sound basis for efficient analysis, reflecting a multidimensional view of the data. The second schema must cover all structural information, which is essential both for generating electronic forms and for performing consistency checks on the gathered information. The resulting modeling approach provides a seamless solution to both challenges. Based on the relational model, both schemas are powerful enough to cover the heterogeneity of the data sources, handle complex structural information, and provide a versioning mechanism for long-term analysis.
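The two-schema idea can be sketched in a few lines: a star-style analysis schema (a fact table joined to dimension tables) alongside a structural schema whose validity intervals support versioning. All table names, columns, and figures below are invented for illustration and are not taken from the actual KOSTRA schemas.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Analysis schema: multidimensional view of reported figures.
CREATE TABLE dim_municipality (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_year         (id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_report (
    municipality_id INTEGER REFERENCES dim_municipality(id),
    year_id         INTEGER REFERENCES dim_year(id),
    indicator       TEXT,
    value           REAL
);

-- Structural schema: versioned form definitions; the validity interval
-- is what makes long-term analysis across structural changes possible.
CREATE TABLE form_field (
    field_id   INTEGER,
    label      TEXT,
    valid_from INTEGER,
    valid_to   INTEGER
);
""")

cur.execute("INSERT INTO dim_municipality VALUES (1, 'Oslo')")
cur.execute("INSERT INTO dim_year VALUES (1, 1999)")
cur.execute("INSERT INTO fact_report VALUES (1, 1, 'population', 507467.0)")

# A typical multidimensional query: slice the fact table by dimensions.
row = cur.execute("""
    SELECT m.name, y.year, f.value
    FROM fact_report f
    JOIN dim_municipality m ON m.id = f.municipality_id
    JOIN dim_year y ON y.id = f.year_id
    WHERE f.indicator = 'population'
""").fetchone()
print(row)  # ('Oslo', 1999, 507467.0)
```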
472

Metadata Quality Assurance for Audiobooks: An explorative case study on how to measure, identify and solve metadata quality issues

Carlsson, Patrik January 2023 (has links)
Metadata is essential to how (digital) archives, collections and databases operate. It is the backbone for organising different types of content, making them discoverable, and preserving digital records' authenticity, integrity and meaning over time. For that reason, it is also important to assess iteratively whether the metadata is of high quality. Despite its importance, there is an acknowledged lack of research verifying whether existing assessment frameworks and methodologies actually work and, if so, how well, especially in fields outside libraries. This thesis therefore conducted an exploratory case study, applying existing frameworks in a new context by evaluating the metadata quality of audiobooks. The Information Continuum Model was used to capture the metadata quality needs of customers and end users who search for and listen to audiobooks. Using a mixed-methods approach, the results showed that the frameworks can indeed be generalised and adapted to a new context. Although the frameworks helped measure, identify and find potential solutions to the problems, they could be better adjusted to the context, and more metrics and information could be added. There can thus be a generalised method for assessing metadata quality, but the method needs improvement and must be used by people who understand the data and the processes in order to reach its full potential.
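One of the simplest measures in common metadata-quality frameworks is completeness: the fraction of required fields that are actually filled in. The sketch below illustrates the idea; the field names and records are invented for the example, not drawn from the thesis's data set.

```python
# Required fields a hypothetical audiobook record should carry.
REQUIRED_FIELDS = ["title", "author", "narrator", "language", "duration"]

def completeness(record):
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return filled / len(REQUIRED_FIELDS)

records = [
    {"title": "Example Audiobook", "author": "A. Writer",
     "narrator": "N. Reader", "language": "sv", "duration": "07:45:00"},
    {"title": "Another Title", "author": "", "language": "en"},  # gaps
]

scores = [completeness(r) for r in records]
print(scores)  # [1.0, 0.4]
```

Flagging records below a chosen threshold is then a one-liner, which is how such a metric feeds into the "identify and solve" part of an assessment.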
473

Best Practices in Digital Asset Management for Electronic Texts in Academic Research Libraries

Cleland, William A. 28 June 2007 (has links)
No description available.
474

Linked data for improving student experience in searching e-learning resources

Castellanos Ardila, Julieth Patricia January 2012 (has links)
Collecting and using data from the internet for e-learning purposes are everyday tasks for many people in their roles as teachers or students. The web provides many data sources with relevant information that could be used in educational environments, but the information is widely distributed or poorly structured, and resources on the web are of uneven quality. As a result, searching for e-learning resources is difficult and time-consuming, because the search process (typing, reasoning, selecting, using resources, bookmarking, and so forth) is executed entirely by humans, even though some of these steps could be executed by computers. Linked data provides established practices for organizing and discovering information using the processing power of computers, and the linked data community provides data sets that are already connected, which people can consume at any time as e-learning resources.
This thesis investigates linked data techniques as well as the techniques students use when searching for e-learning resources on the internet. The resources students use, and the sources they prefer, are compared with the resources currently offered by the linked data community. The strategies and techniques chosen by the students are likewise taken into account in order to establish the basic requirements of a prototype e-learning collaborative environment.
The outline of the thesis is as follows. Chapter 2 discusses the research methodology and the construction and administration of the survey on which the requirements elicitation is based.
Chapter 3 lays the groundwork for the rest of the thesis by presenting the principles and terminology of linked data, along with related work on the internet in education, the availability of e-learning resources, and surveys on how the internet is used when searching for e-learning resources. Chapter 4 investigates the methods students use to explore and discover e-learning resources, through analysis and interpretation of the survey data. Chapter 5 introduces the prototype design: the prototype idea, the requirements specification derived from the survey analysis, and the architecture of the e-learning collaborative environment, based on the conclusions of the literature review and the dereferenceable URIs found in the linked open data cloud diagram. The design of the environment's components is expressed in UML diagrams. Chapter 6 validates the requirements of the prototype. Chapter 7 presents the conclusions of the thesis as input for further research in the area; the evaluation of the contribution to e-learning is based on the benefits identified in using this approach, with indications of future work to improve the results. The findings inform the design of a new generation of e-learning environments that improve the selection of e-learning resources, taking into account the available technology and the information on the World Wide Web.
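At its core, the linked data model the thesis builds on is a set of triples: resources identified by URIs and connected by typed links. The sketch below shows the idea with plain Python; the URIs are invented, the predicates borrow Dublin Core terms for flavour, and a real implementation would use an RDF store and SPARQL rather than list comprehensions.

```python
# A tiny triple set: (subject URI, predicate URI, object).
triples = [
    ("http://example.org/resource/linear-algebra",
     "http://purl.org/dc/terms/subject", "mathematics"),
    ("http://example.org/resource/linear-algebra",
     "http://purl.org/dc/terms/format", "video"),
    ("http://example.org/resource/graph-theory",
     "http://purl.org/dc/terms/subject", "mathematics"),
]

def find(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# All e-learning resources tagged with the subject "mathematics":
math_resources = [s for s, _, _ in
                  find(predicate="http://purl.org/dc/terms/subject",
                       obj="mathematics")]
print(math_resources)
```

Because every resource has a dereferenceable URI, the same pattern-matching step can be delegated to a machine, which is precisely the part of the student's search process the prototype aims to automate.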
475

An Application-Attuned Framework for Optimizing HPC Storage Systems

Paul, Arnab Kumar 19 August 2020 (has links)
High performance computing (HPC) is routinely employed in diverse domains such as the life sciences and geology to simulate and understand the behavior of complex phenomena. Big-data-driven scientific simulations are resource intensive and require both computing and I/O capabilities at scale. There is a crucial need to revisit the HPC I/O subsystem to better optimize for, and manage, the increased pressure that big data processing puts on the underlying storage systems. Extant HPC storage systems are designed and tuned for a specific set of applications targeting a range of workload characteristics, but they lack the flexibility to adapt to ever-changing application behaviors. The complex nature of modern HPC storage systems, together with these ever-changing behaviors, presents unique opportunities and engineering challenges. In this dissertation, we design and develop a framework for optimizing HPC storage systems by making them application-attuned. We select three different kinds of HPC storage systems: in-memory data analytics frameworks, parallel file systems, and object storage. We first analyze HPC application I/O behavior by studying real-world I/O traces. Next we optimize parallelism for applications running in memory, then we design data management techniques for HPC storage systems, and finally we focus on low-level I/O load balance to improve the efficiency of modern HPC storage systems. / Doctor of Philosophy / Clusters of computers connected through a network are often deployed in industry and laboratories for large-scale data processing or computation that cannot be handled by standalone computers. In such a cluster, resources such as CPUs, memory, and disks are integrated to work together. With the growing popularity of applications that read and write tremendous amounts of data, such clusters need a large number of disks that can interact effectively. These form part of high performance computing (HPC) storage systems.
Such HPC storage systems are used by a diverse set of applications from organizations in a vast range of domains, from earth sciences, financial services, and telecommunications to the life sciences. The HPC storage system should therefore perform well for the different read and write (I/O) requirements of all these applications. But current HPC storage systems do not cater to such varied I/O requirements. To this end, this dissertation designs and develops an application-attuned framework for HPC storage systems that provides much better performance than state-of-the-art HPC storage systems without such optimizations.
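A common building block for the low-level I/O load balance mentioned above is greedy placement: assign each file stripe to the currently least-loaded storage target. This is a generic illustration of the technique, not the dissertation's actual algorithm; the stripe sizes and target count are invented.

```python
import heapq

def balance(stripe_sizes, n_targets):
    """Greedily place each stripe on the least-loaded target.

    Maintains a min-heap of (accumulated load, target id) so each
    placement costs O(log n_targets).
    """
    heap = [(0, t) for t in range(n_targets)]
    heapq.heapify(heap)
    placement = []
    for size in stripe_sizes:
        load, target = heapq.heappop(heap)   # least-loaded target
        placement.append(target)
        heapq.heappush(heap, (load + size, target))
    return placement

stripes = [64, 8, 32, 16, 64, 8]  # stripe sizes in MiB, example workload
print(balance(stripes, 3))        # [0, 1, 2, 1, 1, 2]
```

The resulting loads (64, 88, and 40 MiB) are far more even than naive round-robin would give for skewed stripe sizes, which is the imbalance an application-attuned layer tries to avoid.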
476

TENA Performance in a Telemetry Network System

Saylor, Kase J., Wood, Paul B., Malatesta, William A., Abbott, Ben A. 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The integrated Network-Enhanced Telemetry (iNET) project conducted an assessment to determine how the Test and Training Enabling Architecture (TENA) would integrate into an iNET Telemetry Network System (TmNS), particularly across constrained environments on a resource constrained platform. Some of the key elements investigated were quality of service measures (throughput, latency, and reliability) in the face of projected characteristics of iNET Data Acquisition Unit (DAU) devices including size, weight, and power (SWAP), and processing capacity such as memory size and processor speed. This paper includes recommendations for both the iNET and TENA projects.
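The quality-of-service measures named above (throughput, latency, and reliability) can each be computed from a simple message log. The log entries below are invented for illustration and are not iNET or TENA measurements.

```python
import statistics

# Each entry: (sent_time_s, received_time_s or None if dropped, size_bytes).
log = [
    (0.000, 0.012, 1024),
    (0.100, 0.115, 1024),
    (0.200, None,  1024),   # dropped message
    (0.300, 0.309, 2048),
]

delivered = [(s, r, b) for s, r, b in log if r is not None]

reliability = len(delivered) / len(log)                 # fraction delivered
latencies = [r - s for s, r, _ in delivered]            # per-message delay
span = max(r for _, r, _ in delivered) - min(s for s, _, _ in delivered)
throughput = sum(b for _, _, b in delivered) / span     # bytes per second

print(f"reliability={reliability:.2f}")
print(f"mean latency={statistics.mean(latencies) * 1000:.1f} ms")
```

On a SWAP-constrained DAU, the interesting question is how these numbers degrade as message rate approaches the platform's processing capacity, which is what the assessment measured.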
477

Trismegistos

Gheldof, Tom 20 April 2016 (has links) (PDF)
Trismegistos (TM, http://www.trismegistos.org) is a metadata platform for the study of texts from the Ancient World, coordinated and maintained by the KU Leuven research group of Ancient History. Originating from the Prosopographia Ptolemaica, TM was developed in 2005 as a database containing information about people mentioned in papyrus documents from Ptolemaic Egypt. Other related databases held additional information about these texts: when they were written (dates), where they are stored (collections) and to which archive they belong (archives). In the following years, epigraphic data were also added to these databases. The TM platform has two important goals: firstly, it functions as an aggregator of metadata, for which it also links to other projects (e.g. Papyrological Navigator, Epigraphic Database Heidelberg); secondly, it can be used as an identifying tool for all of its content, such as Ancient World texts, places and people. With its unique identifying numbers and stable URIs, TM sets standards for, and bridges the gap between, different digital representations of Ancient World texts. In the future, TM aims not only to expand its coverage but also to provide new ways to study these ancient sources, for example via social network analysis through its latest addition, Trismegistos networks (http://www.trismegistos.org/network).
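The value of a stable numeric identifier is that any project can reconstruct the canonical URI from the number alone. A minimal sketch of that idea, assuming a `/text/` URI pattern and with the index entries invented for illustration:

```python
TM_BASE = "http://www.trismegistos.org/text/"

def tm_uri(tm_number: int) -> str:
    """Build the canonical URI for a TM text number (assumed pattern)."""
    return f"{TM_BASE}{tm_number}"

# A project-local index keyed by TM number; entries are placeholders.
local_index = {
    1234: "example papyrus record",
    5678: "example inscription record",
}

for tm_number in local_index:
    print(tm_number, "->", tm_uri(tm_number))
```

Because every cooperating database keys its records on the same TM number, a join across projects reduces to a dictionary lookup, which is what makes TM useful as a bridge between digital representations.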
478

A Study of the Knowledge Structure of a Knowledge Repository: The Case of an Administrative Department

盧美惠 Unknown Date (has links)
With the rapid development of information technology and the spread of information through digital media, enterprises and organizations have accumulated large volumes of internal information. A knowledge repository is a store for documents of every type, used to manage and organize information such as databases, reports, documents and forms, all of which can be stored digitally. Its function is to manage the knowledge content of an organization's documents and, in turn, to support the organization's web services, including retrieval services (catalogues and indexes that help users find information) and addressing services that identify and confirm where information resides. As knowledge is continuously produced by organizational operations, the volume of knowledge and information keeps growing, and managing this knowledge, including its representation, structure, storage and access, becomes increasingly important.
This thesis surveys the background knowledge relevant to the knowledge structure of a knowledge repository, builds a knowledge map from the cross-references among documents and from a thesaurus, and stores domain knowledge such as terminology and term relations as structured knowledge. This domain knowledge is used to attach semantic relations to document content so that, at retrieval time, the domain knowledge structure helps users query and retrieve useful or relevant information accurately.
Applying the fonds principle and control levels from archival management, the study proposes an archival directory structure that accommodates organizational change, divided into fonds, series, file and item levels. The knowledge repository system maps a document's virtual address (DL) to its physical address (URL) to manage documents dynamically as the organization's structure changes. The study further classifies related documents within a file, uses the analyzed file-type structure to describe both single and compound documents (e.g. meeting minutes, laws and regulations), applies the Dublin Core to build a metadata structure describing documents, and uses the semantic relations of a thesaurus to support concept-based semantic retrieval.
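The combination described above, Dublin Core records plus thesaurus-based query expansion, can be sketched in a few lines. The records, thesaurus entries and field values below are invented for the example; a real system would use the full fifteen-element Dublin Core set and a proper thesaurus with broader/narrower/related distinctions.

```python
# Document descriptions using a few Dublin Core elements.
records = [
    {"identifier": "doc-001", "title": "Meeting minutes, budget review",
     "subject": ["budget"], "type": "meeting minutes"},
    {"identifier": "doc-002", "title": "Regulation on records retention",
     "subject": ["records management"], "type": "regulation"},
]

# Thesaurus: a preferred term mapped to its related/narrower terms.
thesaurus = {
    "finance": ["budget", "accounting"],
    "archives": ["records management", "appraisal"],
}

def search(query):
    """Expand the query with thesaurus relations, then match subjects."""
    terms = {query, *thesaurus.get(query, [])}
    return [r["identifier"] for r in records
            if terms & set(r["subject"])]

print(search("finance"))  # ['doc-001']
```

A query for "finance" finds the budget-review minutes even though the record never uses that word, which is the concept-based retrieval the thesis aims at.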
479

Procedures for the Processing, Cataloging, and Classification of a Non-Circulating Historical Art Print Collection

Ray, Linda 01 April 1975 (has links)
In order to establish specific procedures for processing, cataloging and classifying the art print collection at the Kentucky Library, Western Kentucky University, data were gathered from three sources: (1) information on current procedures used in the Kentucky Library, obtained through an interview with Riley Handy, the Kentucky Librarian; (2) a search of the related literature; and (3) a questionnaire survey of other institutions having art print collections. It was found that historically valuable art prints, which are used primarily as documentary resources, need to be carefully processed and stored so as to preserve and protect them from the damaging effects of light, temperature change, humidity and dust. Effective preservation techniques include: (1) controlling the light, temperature and humidity in the building through installation of modern air conditioning units, electronic air filters and artificial light filters; (2) making photographs of the art prints so that the copies, rather than the original prints, can be used by patrons; (3) placing the prints in all-rag paper or Mylar plastic folders, which are then stored flat in dustproof, acid-free paper boxes or steel map cases; and (4) having damaged prints restored only by professional print conservationists who use reversible restoration techniques. In regard to cataloging, it was found that historically valuable prints should be individually cataloged and organized in either a classified system or a numerical arrangement. Although no central reporting agency exists for disseminating information about art prints, the findings indicate the advisability of each institution publicizing its own art print holdings.
480

LORESA: a learning object recommender system based on semantic annotations

Benlizidia, Sihem January 2007 (has links)
Thesis digitized by the Records Management and Archives Division of the Université de Montréal.
