
An investigation into tools and protocols for commercial audio web-site creation

Ndinga, S'busiso Simon January 2000
This thesis presents a feasibility study of a Web-based digital music library and purchasing system, and investigates the current status of the enabling technologies for developing such a system. An analysis of various Internet audio codecs, streaming audio protocols, Internet credit card payment security methods, and ways of accessing remote Web databases is presented; its objective is to determine the viability and the economic benefits of using these technologies when developing systems that facilitate music distribution over the Internet. A prototype of a distributed digital music library and purchasing system, named WAPS (Web-based Audio Purchasing System), was developed and implemented in the Java programming language. Both the physical and the logical components of WAPS are explored in depth, providing insight into the inherent problems of creating such a system as well as the overriding benefits it provides.

Spatializing science and technology studies : exploring the role of GIS and interactive social research

Talwar, Sonia 05 1900
This thesis is an interdisciplinary study of the interplay between science, technology and society, undertaken to inform the design of knowledge exploration systems. It provides a rationale for integrating science knowledge and geographic information with digital libraries to build knowledge and awareness about sustainability. A theoretical reconceptualization of knowledge building is offered that favours interactive engagement with information and argues against the traditional model of science production and communication, which is linear and unidirectional. The elements of contextualization, classification and communication form the core of this reconceptualization. Since many information systems entrench the traditional model of science production, the three elements are considered in light of library and information science and geographic information science. The use of geographic information systems is examined to identify how they can serve as part of a social learning model for scientific, social, cultural, and environmental issues, further assisting people in connecting to place and sustainability. Empirical data were collected from four case studies. One centred on the design and development of a web-based digital library called the Georgia Basin Digital Library; two focused on the use of part of this digital library with youth, senior and environmental groups in south-western British Columbia; and the remaining case study observed a community deliberation, to consider how knowledge exploration systems might support deliberation in future processes. The case study research confirms that collaborative research with communities is a fruitful way to engage with sustainability issues. Such collaborations require attention to institutional arrangements, information collections, relationship building, technology transfer and capacity building. / Faculty of Arts / Department of Geography / Graduate

An exploratory study of factors that influence student user success in an academic digital library

Rahman, Faizur 12 1900
The complex nature of digital libraries calls for appropriate models to study user success, and calls have been made to incorporate into these models factors that capture the interplay between people, organizations, and technology. To address this, two research questions were formulated: (1) To what extent does the comprehensive digital library user success model (DLUS), based on a combination of the EUCS and flow models, describe overall user success in a prototype digital library environment? (2) To what extent does a combined model of DeLone & McLean's reformulated information system success model and the DLUS explain digital library user success in a prototype digital library environment? Participants were asked to complete an online survey questionnaire; 160 completed and usable questionnaires were obtained. Data analyses through exploratory and confirmatory factor analyses and structural equation modeling produced results that support the two models, although some relationships between latent variables hypothesized in the models were not confirmed. A modified version of the proposed combined user success model in a digital library environment was then tested and supported through model-fit statistics, and is recommended as a possible alternative model of user success. The dissertation also makes a number of recommendations for future research.

Metadata for phonograph records : facilitating new forms of use and access

Lai, Catherine Wanwen. January 2007
No description available.

Epidemiology Experimentation and Simulation Management through Scientific Digital Libraries

Leidig, Jonathan Paul 05 September 2012
Advances in scientific data management, discovery, dissemination, and sharing are changing the manner in which scientific studies are being conducted and repurposed. Data-intensive scientific practices increasingly require data management related services not available in existing digital libraries. Complicating the issue are the diversity of functional requirements and content in scientific domains as well as scientists' lack of expertise in information and library sciences. Researchers that utilize simulation and experimentation systems need digital libraries to maintain datasets, input configurations, results, analyses, and related documents. A digital library may be integrated with simulation infrastructures to provide automated support for research components, e.g., simulation interfaces to models, data warehouses, simulation applications, computational resources, and storage systems. Managing and provisioning simulation content allows streamlined experimentation, collaboration, discovery, and content reuse within a simulation community. Formal definitions of this class of digital libraries provide a foundation for producing a software toolkit and the semi-automated generation of digital library instances. We present a generic, component-based SIMulation-supporting Digital Library (SimDL) framework. The framework is formally described and provides a deployable set of domain-free services, schema-based domain knowledge representations, and extensible lower and higher level service abstractions. Services in SimDL are specialized for semi-structured simulation content and large-scale data producing infrastructures, as exemplified in data storage, indexing, and retrieval service implementations. Contributions to the scientific community include previously unavailable simulation-specific services, e.g., incentivizing public contributions, semi-automated content curating, and memoizing simulation-generated data products. 
The practicality of SimDL is demonstrated through several case studies in computational epidemiology and network science as well as performance evaluations. / Ph. D.
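One SimDL capability named above, memoizing simulation-generated data products, can be illustrated with a short sketch. This is not SimDL code; the class, function, and configuration names are hypothetical. The idea is to key stored results by a canonical hash of the input configuration, so a repeated run of a logically identical configuration reuses the cached output instead of re-simulating.

```python
import hashlib
import json

class SimulationCache:
    """Toy memoization store: results are keyed by a canonical hash of the config."""

    def __init__(self):
        self._store = {}

    def _key(self, config: dict) -> str:
        # Canonicalize so logically equal configs (any key order) hash identically.
        canonical = json.dumps(config, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def run(self, config: dict, simulate):
        """Return a cached result if this configuration was already simulated."""
        key = self._key(config)
        if key not in self._store:
            self._store[key] = simulate(config)
        return self._store[key]

calls = []

def fake_epidemic_model(config):
    # Stand-in for an expensive simulation; records how often it actually runs.
    calls.append(config)
    return {"peak_infections": config["population"] * config["beta"]}

cache = SimulationCache()
first = cache.run({"population": 1000, "beta": 0.3}, fake_epidemic_model)
# Same configuration with reordered keys: served from the cache, no second run.
second = cache.run({"beta": 0.3, "population": 1000}, fake_epidemic_model)
```

The canonicalization step (sorted-key JSON) is what makes memoization robust to incidental differences in how a configuration is written down.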

A Java-based Smart Object Model for use in Digital Learning Environments

Pushpagiri, Vara Prashanth 16 October 2003
The last decade has seen the scope of digital library usage extend from data warehousing and other common library services to building quality collections of electronic resources and providing web-based information retrieval mechanisms for distributed learning. This is clear from the number of ongoing research initiatives aiming to provide dynamic learning environments. A major task in providing learning environments is to define a resource model (learning object). The flexibility of the learning object model determines the quality of the learning environment. Further, dynamic environments can be realized by changing the contents and structure of the learning object, i.e., by making it mutable. Most existing models are immutable after creation and require the library to support operations that help in creating these environments, leaving the learning object at the mercy of the parent library's functionality. This thesis work extends an existing model so that a learning object can function independently of the operational constraints of a digital library, by equipping learning objects with software components, called methods, that influence their operation and structure even after deployment. It provides a reference implementation of an aggregate, intelligent, self-sufficient, object-oriented, platform-independent learning object model that conforms to popular digital library standards. It also presents a Java-based development tool for creating and modifying smart objects, capable of performing content aggregation, metadata harvesting and user repository maintenance, in addition to supporting the addition and removal of methods on a smart object. The current smart object implementation and the development tool have been deployed successfully on two platforms (Windows and Linux), where their operation was found to be satisfactory. / Master of Science
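The central idea of this entry, a learning object that can gain and lose behavior after deployment, can be sketched compactly. The thesis implementation is in Java; this is a hypothetical Python analogue (all names invented here) showing only the attach/detach mechanic.

```python
import types

class SmartObject:
    """Minimal analogue of a mutable learning object: content plus attachable behaviors."""

    def __init__(self, metadata):
        self.metadata = metadata

    def attach_method(self, name, func):
        # Bind func as an instance method, so the object gains behavior at runtime.
        setattr(self, name, types.MethodType(func, self))

    def detach_method(self, name):
        # Remove a previously attached behavior.
        delattr(self, name)

obj = SmartObject({"title": "Intro to Digital Libraries"})

def summarize(self):
    return f"Learning object: {self.metadata['title']}"

obj.attach_method("summarize", summarize)
summary = obj.summarize()   # behavior available after deployment
obj.detach_method("summarize")  # and removable again
```

The point of the sketch is that the object itself carries the attach/detach operations, so it does not depend on the hosting library to support mutation.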

Arabic News Text Classification and Summarization: A Case of the Electronic Library Institute SeerQ (ELISQ)

Kan'an, Tarek Ghaze 21 July 2015
Arabic news articles in heterogeneous electronic collections are difficult for users to work with. Two problems are: that they are not categorized in a way that would aid browsing, and that there are no summaries or detailed metadata records that could be easier to work with than full articles. To address the first problem, schema mapping techniques were adapted to construct a simple taxonomy for Arabic news stories that is compatible with the subject codes of the International Press Telecommunications Council. Automatic classification methods were then investigated so that each article could be labeled with the proper taxonomy category. Experiments showed that the best features for classification resulted from a new tailored stemming approach (i.e., a new Arabic light stemmer called P-Stemmer). When coupled with binary classification using SVM, the newly developed approach proved superior to state-of-the-art techniques. To address the second problem, i.e., summarization, preliminary work was done with English corpora, in the context of a new Problem Based Learning (PBL) course wherein students produced template summaries of big text collections. The techniques used in the course were then extended to work with Arabic news. Due to the lack of high-quality tools for Named Entity Recognition (NER) and topic identification for Arabic, two new tools were constructed: RenA, for Arabic NER, and ALDA, for Arabic topic extraction (using Latent Dirichlet Allocation). Controlled experiments with RenA and ALDA, involving Arabic speakers and a randomly selected corpus of 1000 Qatari news articles, showed that the tools produced very good results (i.e., names, organizations, locations, and topics).
The categorization, NER, topic identification, and additional information extraction techniques were then combined to produce approximately 120,000 summaries for Qatari news articles, which are searchable, along with the articles, using LucidWorks Fusion, which builds upon Solr software. Evaluation of the summaries showed high ratings based on the 1000-article test corpus. The contributions of this research with Arabic news articles thus include a new test corpus, taxonomy, light stemmer, classification approach, NER tool, topic identification tool, and template-based summarizer, all shown through experimentation to be highly effective. / Ph. D.
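The light-stemming idea behind P-Stemmer (stripping common Arabic affixes before feature extraction) can be sketched as follows. The affix lists and length thresholds below are illustrative only, not the thesis's actual P-Stemmer rules.

```python
def light_stem(token: str) -> str:
    """Illustrative Arabic light stemmer: strip at most one common prefix
    and one common suffix, keeping at least a 3-letter stem.
    NOT the P-Stemmer from the thesis; affix lists are a simplification."""
    prefixes = ("وال", "بال", "كال", "فال", "ال", "و")
    suffixes = ("ات", "ون", "ين", "ها", "ية", "ة")
    for p in prefixes:
        if token.startswith(p) and len(token) - len(p) >= 3:
            token = token[len(p):]
            break
    for s in suffixes:
        if token.endswith(s) and len(token) - len(s) >= 3:
            token = token[:-len(s)]
            break
    return token

# "والكتاب" ("and the book") loses its conjunction+article prefix.
stemmed = light_stem("والكتاب")
```

Light stemming of this kind reduces surface-form sparsity, which is why it helps the downstream SVM classifier described above.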

Prototyping Digital Libraries Handling Heterogeneous Data Sources - An ETANA-DL Case Study

Ravindranathan, Unnikrishnan 06 May 2004
Information systems used in archaeological research have several needs: interoperability among diverse, heterogeneous systems; making information available without significant delay; a sustainable approach to long-term preservation of data; and a suite of services for users of the system. In this thesis, we describe how digital library techniques can provide solutions to these problems and relate our experiences in creating a prototype for ETANA-DL. ETANA-DL is a model-based, componentized, extensible, archaeological digital library that manages complex information sources using the client-server paradigm of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). We designed and developed the prototype system with the following main goals: 1) to achieve information sharing between different heterogeneous archaeological systems, 2) to make primary archaeological data rapidly available to users, 3) to provide useful services to users of the DL, 4) to elicit requirements that users of the system will have beyond the services it supports, and 5) to provide a sustainable solution to long-term preservation of valuable archaeological data. Accordingly, we describe our approach to handling heterogeneous archaeological information from disparate sources; suggest an architecture for ETANA-DL, validated through prototyping; and show that, given a pool of components implementing common DL services, a prototype DL supporting several useful services over integrated data can be created rapidly. Finally, since understanding complex information systems is a difficult task, we describe our efforts to model complex archaeological information systems using the 5S framework, and show how we used the resulting partial models to implement ETANA-DL with cross-collection searching and browsing capabilities. / Master of Science
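A harvester built on OAI-PMH, as in ETANA-DL's integration layer, ultimately parses ListRecords responses like the truncated, hand-written example below. The record identifiers are made up for illustration; only the OAI-PMH 2.0 namespace URI is real.

```python
import xml.etree.ElementTree as ET

# Truncated, hand-written OAI-PMH ListRecords response (identifiers invented).
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:etana:find0001</identifier></header>
    </record>
    <record>
      <header><identifier>oai:etana:find0002</identifier></header>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def harvested_identifiers(xml_text: str):
    """Collect record identifiers from a ListRecords response, as a harvester would."""
    root = ET.fromstring(xml_text)
    return [h.text for h in root.findall(".//oai:header/oai:identifier", NS)]

ids = harvested_identifiers(SAMPLE)
```

A real harvester would issue the HTTP request, honor `resumptionToken` paging, and map each record's metadata into the union catalog; the namespace-aware parsing shown here is the common core.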

Figure Extraction from Scanned Electronic Theses and Dissertations

Kahu, Sampanna Yashwant 29 September 2020
The ability to extract figures and tables from scientific documents supports key use-cases such as semantic parsing, summarization, and indexing. Although a few methods have been developed to extract figures and tables from scientific documents, their performance on scanned documents is considerably lower than on born-digital ones. We therefore propose methods to effectively extract figures and tables from Electronic Theses and Dissertations (ETDs) that outperform existing methods by a considerable margin. Our contribution is three-fold. (a) We propose a system/model that improves the performance of existing figure and table extraction methods on scanned scientific documents. (b) We release a new dataset containing 10,182 labelled page-images spanning 70 scanned ETDs, with 3.3k manually annotated bounding boxes for figures and tables. (c) Lastly, we release our entire code and the trained model weights to enable further research (https://github.com/SampannaKahu/deepfigures-open). / Master of Science / Portable Document Format (PDF) is one of the most popular document formats, but parsing PDF files is not a trivial task. One use-case of parsing PDF files is the search functionality on websites hosting scholarly documents (e.g., IEEE Xplore). The ability to extract figures and tables from a scholarly document helps this use-case, among others. Methods using deep learning exist that extract figures from scholarly documents. However, a large number of scholarly documents, especially those published before the advent of computers, have been scanned from hard paper copies into PDF. In particular, we focus on scanned PDF versions of long documents, such as Electronic Theses and Dissertations (ETDs). No experiments had yet evaluated the efficacy of the above-mentioned methods on this scanned corpus.
This work explores and attempts to improve the performance of these existing methods on scanned ETDs. A new gold standard dataset is created and released as a part of this work for figure extraction from scanned ETDs. Finally, the entire source code and trained model weights are made open-source to aid further research in this field.
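Scoring predicted figure/table bounding boxes against manually annotated ground truth, as in the evaluation described above, is conventionally done with intersection-over-union (IoU). A minimal sketch of the metric, not code from the thesis:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes,
    the standard metric for matching predicted boxes to ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents clamp to zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by half their width overlap in a 5x10 strip.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A prediction is then typically counted as correct when its IoU with a ground-truth box exceeds a fixed threshold (0.5 is a common choice in detection benchmarks).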

Segmenting Electronic Theses and Dissertations By Chapters

Manzoor, Javaid Akbar 18 January 2023
Electronic theses and dissertations (ETDs) are structured documents in which chapters are major components, yet no existing repository records chapter boundary details alongside these structured documents. Revealing these details can make the documents more accessible. This research explores the manipulation of ETDs marked up using LaTeX to generate chapter boundaries, which we use to create a data set of 1,459 ETDs and their chapter boundaries. Additionally, for the task of automatically segmenting unseen documents, we prototype three deep learning models trained on this data set. We hope to encourage researchers to incorporate LaTeX manipulation techniques to create similar data sets. / Master of Science
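Deriving chapter boundaries from LaTeX sources, as this entry describes, can be approximated by scanning for \chapter commands. A rough sketch only; the thesis's actual pipeline is more involved than a line scan.

```python
import re

def chapter_starts(tex_source: str):
    """Return (line_number, title) pairs for \\chapter commands in a LaTeX source.
    A crude sketch of recovering chapter boundaries from LaTeX markup."""
    boundaries = []
    for lineno, line in enumerate(tex_source.splitlines(), start=1):
        # Matches \chapter{...} and starred \chapter*{...} at the start of a line.
        m = re.match(r"\s*\\chapter\*?\{([^}]*)\}", line)
        if m:
            boundaries.append((lineno, m.group(1)))
    return boundaries

tex = ("\\documentclass{report}\n"
       "\\begin{document}\n"
       "\\chapter{Introduction}\n"
       "Text.\n"
       "\\chapter{Methods}\n"
       "\\end{document}\n")
marks = chapter_starts(tex)
```

Consecutive entries then delimit chapters: each chapter runs from its own start line to the line before the next chapter's start (or to the end of the document).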
