  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Web-based library for student projects/theses and faculty research papers

Senjaya, Rudy 01 January 2007
The purpose of this project is to make available a Web-based Library, a web application developed for the Department of Computer Science at CSUSB to manage student projects/theses and faculty papers. The project is designed in accordance with the Model-View-Controller (MVC) design pattern, using the Jakarta Struts framework and the iBATIS Data Mapper framework from the Apache Software Foundation, JavaServer Pages (JSP), and a MySQL database.

A consignment library of reusable software components for use over the World-Wide Web

Hicklin, R. Austin 20 January 2010
This research project report discusses the development of a commercial, consignment-based library of reusable software components to be accessed using the World-Wide Web. The research project consists of two parts: the development of a prototype system that provides interface and information retrieval functionality for such a system, and an analysis of the technical and business issues involved in making the library operational as a commercial entity.

The prototype system uses a hypertext browser and a query-based search mechanism to access descriptions of reusable software components; these descriptions are structured by a variation of a faceted classification system. The issues addressed include the classification and description of reusable software components; methods of retrieval, especially library browsing methods based on component classification; and analysis of incentives for reuse. / Master of Science
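Faceted retrieval of the kind described can be sketched in a few lines of Python; the facet names (function, medium, language) and the component entries are invented examples, not the report's actual classification scheme:

```python
# A minimal sketch of faceted retrieval over component descriptions.
# Each component is described by a value for each facet; a query fixes
# some facets and leaves the rest unconstrained.

components = [
    {"name": "qsort_lib",   "function": "sort",  "medium": "array",  "language": "C"},
    {"name": "btree_store", "function": "store", "medium": "tree",   "language": "C"},
    {"name": "str_sort",    "function": "sort",  "medium": "string", "language": "Ada"},
]

def facet_search(query):
    """Return components whose facet values match every facet in the query."""
    return [c for c in components
            if all(c.get(facet) == value for facet, value in query.items())]

matches = facet_search({"function": "sort"})
```

Narrowing the query by adding facets (e.g. also fixing `language`) shrinks the result set, which is the browsing behavior a faceted scheme supports.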

Exploring the Use of Metadata Record Graphs for Metadata Assessment

Phillips, Mark Edward 08 1900
Cultural heritage institutions, including galleries, libraries, museums, and archives, are increasingly digitizing physical items, collecting born-digital items, and making these resources available on the Web. Metadata plays a vital role in the discovery and management of these collections. Existing frameworks to identify and address deficiencies in metadata rely heavily on count- and data-value-based metrics that are calculated over aggregations of descriptive metadata. There has been little research into the use of traditional network analysis to investigate the connections between metadata records based on shared data values in metadata fields such as subject or creator. This study introduces metadata record graphs as a mechanism to generate network-based statistics to support analysis of metadata. These graphs are constructed with the metadata records as the nodes and shared metadata field values as the edges in the network. By analyzing metadata record graphs with algorithms and tools common to the field of network analysis, metadata managers can develop a new understanding of their metadata that is often impossible to generate from count- and data-value-based statistics alone. This study tested the application of metadata record graphs to the analysis of metadata collections of increasing size, complexity, and interconnectedness in a series of three related stages. The findings of this research indicate the effectiveness of this new method, identify network algorithms that are useful for analyzing descriptive metadata, and suggest methods and practices for future implementations of this technique.
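The construction described above (records as nodes, with an edge wherever two records share a field value) can be sketched in plain Python; the sample records and subject values here are hypothetical, not data from the study:

```python
from itertools import combinations

# Toy descriptive metadata: each record has a set of subject values.
records = {
    "rec1": {"subject": {"history", "maps"}},
    "rec2": {"subject": {"maps", "geology"}},
    "rec3": {"subject": {"poetry"}},
}

def build_record_graph(records, field="subject"):
    """Connect two records with an edge if they share any value in `field`."""
    edges = set()
    for a, b in combinations(records, 2):
        if records[a][field] & records[b][field]:
            edges.add(frozenset((a, b)))
    return edges

edges = build_record_graph(records)
# Degree per record: how many other records it shares a value with.
degree = {r: sum(r in e for e in edges) for r in records}
```

Network statistics such as degree distribution or connected components over this graph surface isolated or weakly described records in a way per-field counts cannot.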

RESEARCH-PYRAMID BASED SEARCH TOOLS FOR ONLINE DIGITAL LIBRARIES

Bani-Ahmad, Sulieman Ahmad 03 April 2008
No description available.

Standards-based teaching and educational digital libraries as innovations: undergraduate science faculty in the adoption process

Ridgway, Judith Sulkes 02 December 2005
No description available.

Streams, Structures, Spaces, Scenarios, and Societies (5S): A Formal Digital Library Framework and Its Applications

Gonçalves, Marcos André 08 December 2004
Digital libraries (DLs) are complex information systems and therefore demand formal foundations, lest development efforts diverge and interoperability suffer. In this dissertation, we propose the fundamental abstractions of Streams, Structures, Spaces, Scenarios, and Societies (5S), which allow us to define digital libraries rigorously and usefully. Streams are sequences of arbitrary items used to describe both static and dynamic (e.g., video) content. Structures can be viewed as labeled directed graphs, which impose organization. Spaces are sets with operations that obey certain constraints. Scenarios consist of sequences of events or actions that modify states of a computation in order to accomplish a functional requirement. Societies are sets of entities and activities, and the relationships among them. Together these abstractions provide a formal foundation to define, relate, and unify concepts -- among others, of digital objects, metadata, collections, and services -- required to formalize and elucidate "digital libraries". A digital library theory based on 5S is defined by proposing a formal ontology that defines the fundamental concepts, relationships, and axiomatic rules that govern the DL domain. The ontology is an axiomatic, formal treatment of DLs, which distinguishes it from other approaches that informally define a number of architectural invariants. The applicability, versatility, and unifying power of the 5S theory are demonstrated through its use in a number of distinct applications including: 1) building and interpreting a DL taxonomy; 2) informal and formal analysis of case studies of digital libraries (NDLTD and OAI); 3) utilization as a formal basis for a DL description language, digital library visualization and generation tools, and a log format specific for DLs; and 4) defining a quality model for DLs. / Ph. D.
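As a loose illustration, the five abstractions can be rendered as simple Python types; the fields chosen here are drastic simplifications of the dissertation's formal definitions, not its actual mathematical constructs:

```python
from dataclasses import dataclass

@dataclass
class Stream:                 # a sequence of arbitrary items
    items: tuple

@dataclass
class Structure:              # a labeled directed graph imposing organization
    nodes: set
    edges: set                # set of (source, label, target) triples

@dataclass
class Space:                  # a set plus operations obeying constraints
    elements: set
    operations: dict          # name -> callable over elements

@dataclass
class Scenario:               # a sequence of state-changing events
    events: tuple

@dataclass
class Society:                # entities, activities, and their relationships
    entities: set
    activities: set
    relationships: set
```

In the 5S view, a digital object is then built from streams plus structures, a collection and its services from spaces, scenarios, and societies.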

Digital Libraries with Superimposed Information: Supporting Scholarly Tasks that Involve Fine Grain Information

Murthy, Uma 02 May 2011
Many scholarly tasks involve working with contextualized fine-grain information, such as a music professor creating a multimedia lecture on a musical style, while bringing together several snippets of compositions of that style. We refer to such contextualized parts of a larger unit of information (or whole documents), as subdocuments. Current approaches to work with subdocuments involve a mix of paper-based and digital techniques. With the increase in the volume and in the heterogeneity of information sources, the management, organization, access, retrieval, as well as reuse of subdocuments becomes challenging, leading to inefficient and ineffective task execution. A digital library (DL) facilitates management, access, retrieval, and use of collections of data and metadata through services. However, most DLs do not provide infrastructure or services to support working with subdocuments. Superimposed information (SI) refers to new information that is created to reference subdocuments in existing information resources. We combine this idea of SI with traditional DL services, to define and develop a DL with SI (an SI-DL). Our research questions are centered around one main question: how can we extend the notion of a DL to include SI, in order to support scholarly tasks that involve working with subdocuments? We pursued this question from a theoretical as well as a practical/user perspective. From a theoretical perspective, we developed a formal metamodel that precisely defines the components of an SI-DL, building upon related work in DLs, SI, annotations, and hypertext. From the practical/user perspective, we developed prototype superimposed applications and conducted user studies to explore the use of SI in scholarly tasks. We developed SuperIDR, a prototype SI-DL, which enables users to mark up subimages, annotate them, and retrieve information in multiple ways, including browsing, and text- and content-based image retrieval. 
We explored the use of subimages and evaluated the use of SuperIDR in fish species identification, a scholarly task that involves working with subimages. Findings from the user studies and other work in our research lead to theory- and experiment-based enhancements that can guide design of digital libraries with superimposed information. / Ph. D.
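The core idea of superimposed information, new information that references a subdocument inside an existing resource, can be sketched as follows; the class names, fields, and the sample mark are illustrative assumptions, not SuperIDR's actual data model:

```python
from dataclasses import dataclass

# A "mark" addresses a subdocument (here, a rectangular subimage) inside
# a base resource held by the digital library; an annotation superimposes
# new information over that mark without modifying the base resource.

@dataclass(frozen=True)
class Mark:
    document_id: str          # identifier of the base resource
    region: tuple             # (x, y, width, height) of the subimage

@dataclass
class Annotation:
    mark: Mark
    text: str                 # the superimposed information

fin = Mark(document_id="img-catfish-042", region=(120, 80, 60, 40))
note = Annotation(mark=fin, text="Adipose fin shape distinguishes this species.")
```

Because the mark carries both the resource identifier and the region, an SI-DL can index annotations for retrieval while leaving the underlying documents untouched.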

Continuously Extensible Information Systems: Extending the 5S Framework by Integrating UX and Workflows

Chandrasekar, Prashant 11 June 2021
In Virginia Tech's Digital Library Research Laboratory, we support subject-matter experts (SMEs) in their pursuit of research goals. Their goals include everything from data collection to analysis to reporting. Their research commonly involves an analysis of an extensive collection of data such as tweets or web pages. Without support -- such as by our lab, developers, or data analysts/scientists -- they would undertake the data analysis themselves, using available analytical tools, frameworks, and languages. Then, to extract and produce the information needed to achieve their goals, the researchers/users would need to know what sequences of functions or algorithms to run using such tools, after considering all of their extensive functionality. Our research addresses these problems directly by designing a system that lowers the information barriers. Our approach is broken down into three parts. In the first two parts, we introduce a system that supports discovery of both information and supporting services. In the first part, we describe the methodology that incorporates User eXperience (UX) research into the process of workflow design. Through the methodology, we capture (a) the different user roles and goals, (b) how we break down the user goals into tasks and sub-tasks, and (c) what functions and services are required to solve each (sub-)task. In the second part, we identify and describe key components of the infrastructure implementation. This implementation captures the various goals/tasks/services associations in a manner that supports information inquiry of two types: (1) given an information goal as query, what is the workflow to derive this information? and (2) given a data resource, what information can we derive using this data resource as input? We demonstrate both parts of the approach, describing how we teach and apply the methodology, with three case studies. 
In the third part of this research, we rely on formalisms used in describing digital libraries to explain the components that make up the information system. The formal description serves as a guide to support the development of information systems that generate workflows to support SME information needs. We also specifically describe an information system meant to support information goals that relate to Twitter data. / Doctor of Philosophy / In Virginia Tech's Digital Library Research Laboratory, we support subject-matter-experts (SMEs) in their pursuit of research goals. This includes everything from data collection to analysis to reporting. Their research commonly involves an analysis of an extensive collection of data such as tweets or web pages. Without support -- such as by our lab, developers, or data analysts/scientists -- they would undertake the data analysis themselves, using available analytical tools, frameworks, and languages. Then, to extract and produce the information needed to achieve their goals, the researchers/users would need to know what sequences of functions or algorithms to run using such tools, after considering all of their extensive functionality. Further, as more algorithms are being discovered and datasets are getting larger, the information processing effort is getting more and more complicated. Our research aims to address these problems directly by attempting to lower the barriers, through a methodology that integrates the full life cycle, including the activities carried out by User eXperience (UX), analysis, development, and implementation experts. We devise a three part approach to this research. The first two parts concern building a system that supports discovery of both information and supporting services. First, we describe the methodology that introduces UX research into the process of workflow design. Second, we identify and describe key components of the infrastructure implementation. 
We demonstrate both parts of the approach, describing how we teach and apply the methodology, with three case studies. In the third part of this research, we extend formalisms used in describing digital libraries to encompass the components that make up our new type of extensible information system.
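The two inquiry types can be illustrated with a toy goal-to-workflow mapping; the goals, services, and data types below are hypothetical stand-ins for the system's actual associations:

```python
# Each workflow maps an information goal to an ordered list of services;
# each service declares the type of data resource it consumes.

workflows = {
    "top_hashtags": ["clean_tweets", "extract_hashtags", "rank_by_count"],
    "url_archive":  ["clean_tweets", "extract_urls", "fetch_pages"],
}
service_inputs = {
    "clean_tweets": "tweets",
    "extract_hashtags": "tweets",
    "extract_urls": "tweets",
    "rank_by_count": "hashtags",
    "fetch_pages": "urls",
}

def workflow_for(goal):
    """Inquiry type 1: given an information goal, return its workflow."""
    return workflows.get(goal)

def derivable_from(resource_type):
    """Inquiry type 2: given a data resource type, list goals reachable from it."""
    return [g for g, steps in workflows.items()
            if service_inputs[steps[0]] == resource_type]
```

An SME holding a tweet collection can thus ask what goals it supports, or start from a goal and obtain the service sequence to run.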

Intelligent Event Focused Crawling

Farag, Mohamed Magdy Gharib 23 September 2016
There is a need for an integrated event focused crawling system to collect Web data about key events. When an event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collecting and archiving anywhere of information about events. We propose intelligent event focused crawling for automatic event tracking and archiving, as well as effective access. We extend the traditional focused (topical) crawling techniques in two directions, modeling and representing: events, and webpage source importance. We developed an event model that can capture key event information (topical, spatial, and temporal). We incorporated that model into the focused crawler algorithm. For the focused crawler to leverage the event model in predicting a webpage's relevance, we developed a function that measures the similarity between two event representations, based on textual content. Although the textual content provides a rich set of features, we proposed an additional source of evidence that allows the focused crawler to better estimate the importance of a webpage by considering its website. We estimated webpage source importance by the ratio of the number of relevant to non-relevant webpages found while crawling a website. We combined the textual content information and the source importance into a single relevance score. For the focused crawler to work well, it needs a diverse set of high-quality seed URLs (URLs of relevant webpages that link to other relevant webpages). Although manual curation of seed URLs guarantees quality, it requires exhaustive manual labor. We proposed an automated approach for curating seed URLs using social media content. We leveraged the richness of social media content about events to extract URLs that can be used as seed URLs for further focused crawling. 
We evaluated our system through four series of experiments, using recent events: the Orlando shooting, the Ecuador earthquake, the Panama papers, the California shooting, the Brussels attack, the Paris attack, and the Oregon shooting. In the first experiment series, our proposed event model representation, used to predict webpage relevance, outperformed the topic-only approach, showing better results in precision, recall, and F1-score. In the second series, using harvest ratio to measure the ability to collect relevant webpages, our event model-based focused crawler outperformed the state-of-the-art focused crawler (best-first search). The third series evaluated the effectiveness of our proposed webpage source importance for collecting more relevant webpages. The focused crawler with webpage source importance managed to collect roughly the same number of relevant webpages as the focused crawler without webpage source importance, but from a smaller set of sources. The fourth series provides guidance to archivists regarding the effectiveness of curating seed URLs from social media content (tweets) using different methods of selection. / Ph. D.
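One way to sketch the combined relevance score described in this record is shown below; the bag-of-words cosine similarity, the smoothed relevant/non-relevant ratio, and the weight `alpha` are illustrative assumptions, since the dissertation's exact formulas are not reproduced here:

```python
from collections import Counter
from math import sqrt

def cosine_sim(text_a, text_b):
    """Cosine similarity between two texts as term-frequency vectors."""
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def source_importance(relevant, non_relevant):
    """Ratio of relevant to non-relevant pages seen so far on a website."""
    return relevant / (non_relevant + 1)   # +1 avoids division by zero

def relevance(page_text, event_text, relevant, non_relevant, alpha=0.7):
    """Weighted combination of textual event similarity and source importance."""
    return (alpha * cosine_sim(page_text, event_text)
            + (1 - alpha) * min(1.0, source_importance(relevant, non_relevant)))
```

A frontier of candidate URLs would then be prioritized by this score, so pages from consistently productive websites are fetched ahead of equally similar pages from noisy ones.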

Practical Digital Library Generation into DSpace with the 5S Framework

Gorton, Douglas Christopher 30 April 2007
In today's ever-changing world of technology and information, a growing number of organizations and universities seek to store digital documents in an online, accessible manner. These digital library (DL) repositories are powerful systems that allow institutions to store their digital documents while permitting interaction and collaboration among users in their organizations. Despite the continual work on DL systems that can produce out-of-the-box online repositories, the installation, configuration, and customization processes of these systems are still far from straightforward. Motivated by the arduous process of designing digital library instances; installing software packages like DSpace and Greenstone; and configuring, customizing, and populating such systems, we have developed an XML-based model for specifying the nature of DSpace digital libraries. The ability to map out a digital library to be created in a straightforward, XML-based way allows for the integration of such a specification with other DL tools. To make use of DL specifications for DSpace, we create a DL generator that uses these models of digital library systems to create, configure, customize, and populate DLs as specified. We draw heavily on previous work in understanding the nature of digital libraries from the 5S framework for digital libraries. This framework divides the concerns of digital libraries into a complex, formal representation of the elements that are basic to any minimal digital library system, including Streams, Structures, Spaces, Scenarios, and Societies. We reflect on this previous work and provide a fresh application of the 5S framework to practical DL systems. Given our specification and generation process, we draw conclusions towards a more general model that would be suitable to characterize any DL platform with one specification. 
We present this DSpace DL specification language and generator as an aid to DL designers and others interested in easing the specification of DSpace digital libraries. We believe that our method will not only enable users to create DLs more easily, but also help them gain a greater understanding of their desired DL structure, their software, and digital libraries in general. / Master of Science
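An XML specification of this kind might look like the following; the schema and element names are hypothetical, loosely mirroring DSpace's community/collection hierarchy rather than reproducing the project's actual specification language:

```python
import xml.etree.ElementTree as ET

# A toy DL specification and the first step a generator would take:
# parsing it into a structure from which DSpace communities and
# collections could be created and configured.

spec = """
<dspace-dl name="CS Department Library">
  <community name="Theses">
    <collection name="Masters Projects"/>
    <collection name="PhD Dissertations"/>
  </community>
</dspace-dl>
"""

def parse_spec(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "communities": {
            c.get("name"): [col.get("name") for col in c.findall("collection")]
            for c in root.findall("community")
        },
    }

dl = parse_spec(spec)
```

A generator would walk this parsed structure and issue the corresponding repository-creation calls, so the same declarative file can rebuild or re-populate a DL instance.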
