About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Querying, Exploring and Mining the Extended Document

Sarkas, Nikolaos 31 August 2011 (has links)
The evolution of the Web into an interactive medium that encourages active user engagement has ignited a huge increase in the amount, complexity and diversity of available textual data. This evolution forces us to re-evaluate our view of documents as simple pieces of text and of document collections as immutable and isolated. Extended documents published in the context of blogs, micro-blogs, on-line social networks and customer feedback portals can be associated with a wealth of meta-data in addition to their textual component: tags, links, sentiment, entities mentioned in text, etc. Collections of user-generated documents grow, evolve, co-exist and interact: they are dynamic and integrated. These unique characteristics of modern documents and document collections present us with exciting opportunities for improving the way we interact with them. At the same time, this additional complexity, combined with the vast amounts of available textual data, presents us with formidable computational challenges. In this context, we introduce, study and extensively evaluate an array of effective and efficient solutions for querying, exploring and mining extended documents and dynamic, integrated document collections. For collections of socially annotated extended documents, we present an improved probabilistic search and ranking approach based on our growing understanding of the dynamics of the social annotation process. For extended documents, such as blog posts, associated with entities extracted from text and categorical attributes, we enable interactive exploration through the efficient computation of strong entity associations; associated entities are computed for all possible attribute-value restrictions of the document collection. For extended documents, such as user reviews, annotated with a numerical rating, we introduce a keyword-query refinement approach that enables the interactive navigation and exploration of large result sets. We extend the skyline query to document streams, such as news articles, associated with categorical attributes and partially ordered domains; the technique incrementally maintains a small set of recent, uniquely interesting extended documents from the stream. Finally, we introduce a solution for the scalable integration of structured data sources into Web search. Queries are analysed in order to determine what structured data, if any, should be used to augment Web search results.
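The skyline extension lends itself to a compact illustration. Below is a minimal sketch of incremental skyline maintenance over documents with partially ordered categorical attributes; the attribute names, orderings and documents are invented, and the thesis's handling of recency (expiring old documents from the stream) is not modeled here.

```python
# Partial order per attribute: maps a value to the set of values it
# strictly dominates; unrelated values are simply incomparable.
# Attribute names and orderings are invented for illustration.
ORDERS = {
    "relevance": {"high": {"medium", "low"}, "medium": {"low"}, "low": set()},
    "source": {"wire": {"blog"}, "blog": set()},  # e.g. prefer wire reports
}

def dominates(a: dict, b: dict) -> bool:
    """True if document a is at least as good as b on every attribute and
    strictly better on at least one, w.r.t. the partial orders above."""
    strictly_better = False
    for attr, order in ORDERS.items():
        if a[attr] == b[attr]:
            continue
        if b[attr] in order[a[attr]]:
            strictly_better = True   # a strictly dominates b on this attribute
        else:
            return False             # b is better or incomparable here
    return strictly_better

def update_skyline(skyline: list, new: dict) -> list:
    """Incremental maintenance: drop the arriving document if dominated,
    otherwise evict every skyline member it dominates and add it."""
    if any(dominates(s, new) for s in skyline):
        return skyline
    return [s for s in skyline if not dominates(new, s)] + [new]

skyline = []
for doc in [{"relevance": "medium", "source": "blog"},
            {"relevance": "high", "source": "blog"},
            {"relevance": "high", "source": "wire"}]:
    skyline = update_skyline(skyline, doc)
print(skyline)  # only the last document survives; it dominates the others
```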
12

The Design Change Process in a Product Data Management System

Chung, Hsin-Yuan 27 July 2000 (has links)
To meet a fast-changing market, products have to be improved continuously through design change processes; the design change process is the most frequent activity in a product's life cycle. When a change to a design is initiated, its related components need to be changed, which in turn forces changes to other related components, starting a chain reaction. Product data usually include data generated throughout the product's life cycle, such as design charts, manufacturing/production information, user information, etc. The data formats vary at each stage of the product's life, and the amount of data is too large for a simple database to handle; a system to manage these data is called a Product Data Management (PDM) system. When a chain reaction occurs, it may cause a series of changes across enterprises, so we need a method to constrain the affected domain and provide the data necessary to carry out the change process. In this research, we deal with design change issues by sharing product information stored in PDM systems. The purpose of this research is to construct a product data-sharing framework so that the design change process can be carried out over a computer network. A product data search engine (PDES) is used as the core of this framework. The PDES consists of several algorithms and assembly rules; its major function is to find suitable parts and then retrieve their data through the network to meet the design change requirements. Design change processes usually result in decision-making problems caused by distributed data and inconsistent data formats; these problems can be solved by the proposed framework. The configurations of a personal computer and a bicycle are used as examples to demonstrate the analysis and modeling procedures.
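The chain-reaction idea can be pictured as a traversal of a component dependency graph. The sketch below, using a hypothetical bicycle-style graph, shows how one might constrain the affected domain of a change; the thesis's PDES additionally applies assembly rules and retrieves part data over the network, which is not modeled here.

```python
from collections import deque

# Hypothetical dependency graph: an edge u -> v means that a change to
# component u may force a change to component v.
DEPENDENTS = {
    "frame": ["fork", "wheel_front", "wheel_rear"],
    "fork": ["handlebar"],
    "wheel_front": ["tire_front"],
    "wheel_rear": ["tire_rear"],
    "handlebar": [], "tire_front": [], "tire_rear": [],
}

def affected_domain(changed: str, graph: dict) -> set:
    """Breadth-first traversal from the changed component, collecting
    every component the chain reaction can reach."""
    seen = {changed}
    queue = deque([changed])
    while queue:
        part = queue.popleft()
        for dep in graph.get(part, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - {changed}

print(sorted(affected_domain("frame", DEPENDENTS)))
# ['fork', 'handlebar', 'tire_front', 'tire_rear', 'wheel_front', 'wheel_rear']
```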
13

Data Quality Through Active Constraint Discovery and Maintenance

Chiang, Fei Yen 10 December 2012 (has links)
Although integrity constraints are the primary means for enforcing data integrity, there are cases in which they are not defined or are not strictly enforced. This leads to inconsistencies in the data, causing poor data quality. In this thesis, we leverage the power of constraints to improve data quality. To ensure that the data conforms to the intended application domain semantics, we develop two algorithms focusing on constraint discovery. The first algorithm discovers a class of conditional constraints, which hold over a subset of the relation, under specific conditional values. The second algorithm discovers attribute domain constraints, which bind specific values to the attributes of a relation for a given domain. These two types of constraints have been shown to be useful for data cleaning. In practice, weak enforcement of constraints often occurs for performance reasons. This leads to inconsistencies between the data and the set of defined constraints. To resolve this inconsistency, we must determine whether it is the constraints or the data that is incorrect, and then make the necessary corrections. We develop a repair model that considers repairs to the data and repairs to the constraints on an equal footing. We present repair algorithms that find the necessary repairs to bring the data and the constraints back to a consistent state. Finally, we study the efficiency and quality of our techniques. We show that our constraint discovery algorithms find meaningful constraints with good precision and recall. We also show that our repair algorithms resolve many inconsistencies with high quality repairs, and propose repairs that previous algorithms did not consider.
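To make the notion of a conditional constraint concrete, here is a minimal sketch of checking one such constraint over a toy relation; the column names, condition and data are invented, and the thesis's discovery algorithms (which search for such constraints rather than merely verifying them) are not reproduced.

```python
# A constraint in the spirit of a conditional functional dependency:
# within the rows where `condition` holds, `lhs` should determine `rhs`.
rows = [
    {"country": "US", "zip": "10001", "state": "NY"},
    {"country": "US", "zip": "10001", "state": "NJ"},  # violation
    {"country": "US", "zip": "90210", "state": "CA"},
    {"country": "CA", "zip": "10001", "state": "ON"},  # outside the condition
]

def violations(rows, condition, lhs, rhs):
    """Return pairs of row indices that agree on `lhs` but disagree on
    `rhs`, restricted to the rows satisfying `condition`."""
    seen = {}   # lhs value -> (index, rhs value) of the first witness
    bad = []
    for i, row in enumerate(rows):
        if not condition(row):
            continue
        key, val = row[lhs], row[rhs]
        if key in seen and seen[key][1] != val:
            bad.append((seen[key][0], i))
        else:
            seen.setdefault(key, (i, val))
    return bad

print(violations(rows, lambda r: r["country"] == "US", "zip", "state"))
# [(0, 1)] -- rows 0 and 1 agree on zip but not on state
```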
14

Analysis of small businesses' perspective on the Electronic Data Interchange Acquisition Reform.

Hagen, Paul W. January 1997 (has links)
Thesis (M.S. in Management), Naval Postgraduate School, June 1997. Thesis advisors: Mark W. Stone, Sandra M. Desbrow. Includes bibliographical references (p. 83-85). Also available online.
15

An Evaluation of a structured training event aimed at enhancing the Research Data Management Knowledge and Skills of Library and Information Science Professionals in South African Higher Education Institutions

Matlatse, Refiloe January 2016 (has links)
Research Data Management (RDM) has received a lot of attention recently. In South Africa, the importance of RDM has grown since the release of the National Research Foundation's (NRF) open access statement. According to the statement, researchers who receive funding from the NRF must deposit their research output in an open access (OA) repository. In addition, the data supporting the research should be deposited in an accredited OA repository with a Digital Object Identifier (DOI) for future citations (NRF, 2015: online). The mandate, along with other drivers such as research data re-use, increased impact and validation of research findings, has forced institutions to investigate the possibility of offering RDM services (Ashley, 2012). It is expected that libraries and Library and Information Science (LIS) professionals will initiate and support RDM in their institutions. LIS professionals will need to upgrade or obtain new skills and knowledge to fulfil their new roles and responsibilities. Various training opportunities are available to interested professionals to improve their knowledge and skills related to RDM; these can be as simple as a workshop or as complex as a university degree. The objective of this research was to identify and evaluate an RDM training intervention to determine whether it could enhance the knowledge and skills of LIS professionals in South African (SA) Higher Education Institutions (HEIs). An embedded research design was used to investigate whether an RDM workshop, hosted by the Network for Data and Information Curation Communities (NeDICC), could enhance the participating LIS professionals' perception of their RDM understanding, knowledge and skills. The research found that the RDM workshop was highly successful in enhancing the participants' perception of their RDM understanding and knowledge, but less successful in enhancing their perception of their RDM skills. It was recommended that LIS professionals (1) take advantage of the online RDM training material available to enhance their understanding and knowledge of RDM; (2) attend face-to-face training interventions to enhance or develop their RDM skills; and (3) enrol in university-level educational programmes to gain a qualification in RDM if they qualify. It was also recommended that institutions providing RDM training focus on specific aspects of RDM instead of offering a general overview. This research can be used to inspire larger studies or studies that compare two or more RDM training interventions. / Mini Dissertation (MIT)--University of Pretoria, 2016. / Carnegie Corporation of New York / University of Pretoria / Information Science / MIT / Unrestricted
16

The Role of Digital Transformation and Specification Data Management in Streamlining Supply Chains

Klemm, Daniel 01 September 2021 (has links) (PDF)
The packaging and product supply chain is currently undergoing a digital transformation that changes the way organizations manage data. Cloud-based software is an emerging technological innovation entirely focused on harnessing the power of packaging/product specifications to create efficiencies. These applications coordinate the data that millions of SKUs worldwide generate into a harmonized system that not only organizes the SKUs but also creates valuable information, allowing stakeholders to make decisions based upon data-driven insights. Specifications are the DNA-level information of packaging and products. Items such as the bill of materials, technical drawings, and inventory are stored together to create a traceable trail of information for stakeholders along the supply chain to refer to in the face of recalls, sustainability reports, and root cause analysis of procurement delays. Organizations have gone from storing this data on paper to creating a digital trail with manual processes and legacy systems on the computer. However, these systems cannot contend with the sheer amount of data companies now possess, wasting time and money trying to organize it all. In addition, packaging and product specification data lacks a common language that would create consistency and reduce errors; tracing an item to its source is a laborious endeavor; and resources are invested in trying to solve the problem with existing manual processes and legacy systems when funding should go towards an innovative, cloud-based solution. Such a system would be able to process data and create a standardized template for specifications. This organization would allow for fast querying and advanced analytics that turn into visualizations illustrating the insights. This framework would create a single point of truth for specifications that would enhance how companies along the supply chain collaborate and share information, and streamline the packaging and product creation workflow. Software solutions for specification data management exist at varying levels of involvement and installation, allowing stakeholders to find a model that fits their needs in an ever-changing supply chain management landscape.
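As a rough illustration of what a standardized specification record might look like, here is a minimal sketch with an invented schema; real specification-management platforms use far richer, versioned data models.

```python
from dataclasses import dataclass, field

@dataclass
class PackagingSpec:
    """A hypothetical harmonized specification record for one SKU."""
    sku: str
    description: str
    bill_of_materials: dict = field(default_factory=dict)  # material -> grams
    suppliers: list = field(default_factory=list)

    def uses_material(self, material: str) -> bool:
        """Support recall/root-cause queries: does this SKU use the material?"""
        return material in self.bill_of_materials

specs = [
    PackagingSpec("SKU-001", "500 ml bottle", {"PET": 18.0, "HDPE": 2.1}, ["AcmePlastics"]),
    PackagingSpec("SKU-002", "Carton sleeve", {"paperboard": 12.5}, ["BoxCo"]),
]

# Fast querying across the harmonized records, e.g. for a PET recall:
print([s.sku for s in specs if s.uses_material("PET")])  # ['SKU-001']
```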
17

Comparison of Product Information Management software tools

Vytiska, Tomáš January 2008 (has links)
This diploma thesis deals with Product Information Management (PIM) and compares PIM software tools. Its goal is to introduce the area of PIM systems in the Czech language; further goals are to define a system of evaluation criteria and to analyze and compare PIM software tools against it. The methods used are the exploration of information sources, the gathering of information through email communication, and the use of empirical knowledge to define the system of criteria. The contribution of this work mirrors its goals. The work is divided into two parts. The first, theoretical part deals with PIM definitions, context, functionality, architectures and the development of the PIM market. The second, practical part involves selecting particular PIM software tools, defining the system of criteria and comparing the tools against it.
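A criteria-based comparison of this kind often boils down to weighted scoring. The sketch below uses invented criteria, weights and scores; the thesis defines its own criteria system.

```python
# Hypothetical criteria and weights (must sum to 1.0) and example scores
# on a 0-10 scale; all values here are illustrative, not the thesis's.
WEIGHTS = {"functionality": 0.4, "architecture": 0.3, "price": 0.3}

tools = {
    "PIM Tool A": {"functionality": 8, "architecture": 7, "price": 6},
    "PIM Tool B": {"functionality": 6, "architecture": 9, "price": 8},
}

def weighted_score(scores: dict) -> float:
    """Aggregate per-criterion scores into one comparable number."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(tools.items(), key=lambda t: -weighted_score(t[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
# PIM Tool B: 7.5
# PIM Tool A: 7.1
```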
18

LHCb data management on the computing grid

Smith, Andrew Cameron January 2009 (has links)
The LHCb detector is one of the four experiments being built to harness the proton-proton collisions provided by the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN). The data rate expected, when the LHC experiments are fully operational, eclipses that of any previous scientific experiment and has motivated the adoption of a grid computing paradigm to store and process the data. Managing petabytes of data in a distributed environment provides a rich set of challenges related to scalability, reliability and performance. This thesis presents the data management requirements for executing the workload of the LHCb collaboration. We present the systems designed to support all aspects of grid data management for LHCb, from data transfer to data integrity and efficient data access. The distributed computing environment is inherently unstable, and much focus has been placed on providing systems that are robust and resilient to observed failures.
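Data integrity in such a setting typically rests on checksum comparison after transfer. The sketch below illustrates the general pattern; the hash choice and the catalogue of expected checksums are assumptions for illustration, not LHCb's actual bookkeeping system.

```python
import hashlib
from pathlib import Path

def file_checksum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_replica(path: Path, expected: str) -> bool:
    """Compare a transferred replica against the checksum recorded at the
    source; a mismatch flags the transfer for retry."""
    return path.exists() and file_checksum(path) == expected
```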
19

Research Data Services Maturity in Academic Libraries

Kollen, Christine, Kouper, Inna, Ishida, Mayu, Williams, Sarah, Fear, Kathleen 01 1900 (has links)
An ACRL white paper from 2012 reported that, at that time, only a small number of academic libraries in the United States and Canada offered research data services (RDS), but many were planning to do so within the next two years (Tenopir, Birch, and Allard, 2012). By 2013, 74% of the Association of Research Libraries (ARL) survey respondents offered RDS and an additional 23% were planning to do so (Fearon, Gunia, Pralle, Lake, and Sallans, 2013). Academic libraries recognize that the landscape of services changes quickly and that they need to support the changing needs of research and instruction. In their efforts to implement RDS, libraries often respond to pressures originating outside the library, such as national or funder mandates for data management planning and data sharing. To provide effective support for researchers and instructors, though, libraries must be proactive and develop new services that look forward yet accommodate the existing human, technological, and intellectual capital accumulated over the decades. Setting the stage for data curation in libraries means creating visionary approaches that supersede institutional differences while still accommodating diversity in implementation. How do academic libraries work towards that? This chapter combines a historical overview of RDS thinking and implementations based on the existing literature with an empirical analysis of ARL libraries' current RDS goals and activities. The latter is based on a study we conducted in 2015 that included a content analysis of North American research library web pages and interviews with library leaders and administrators of ARL libraries. Using historical and our own data, we synthesize the current state of RDS implementation across ARL libraries. Further, we examine models of research data management maturity (see, for example, Qin, Crowston and Flynn, 2014) and discuss how such models compare to our own three-level classification of services and activities offered at libraries: basic, intermediate, and advanced. Our analysis concludes with a set of recommendations for next steps, i.e., actions and resources that a library might consider to expand its RDS to the next maturity level.

References:
Fearon, D. Jr., Gunia, B., Pralle, B.E., Lake, S., & Sallans, A.L. (2013). Research data management services (ARL SPEC Kit 334). Washington, D.C.: ARL. Retrieved from http://publications.arl.org/Research-Data-Management-Services-SPEC-Kit-334/
Tenopir, C., Birch, B., & Allard, S. (2012). Academic libraries and research data services: Current practices and plans for the future. ACRL. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/whitepapers/Tenopir_Birch_Allard.pdf
Qin, J., Crowston, K., & Flynn, C. (2014). A capability maturity model for research data management. Retrieved from http://rdm.ischool.syr.edu/xwiki/bin/view/CMM+for+RDM/WebHome
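As a toy illustration of a three-level classification, the sketch below assigns a maturity level from a set of offered services; the level definitions and example services are invented and far simpler than the chapter's actual criteria.

```python
# Hypothetical mapping of example RDS offerings to maturity levels.
LEVELS = {
    "basic": {"data management plan consulting", "RDS web guides"},
    "intermediate": {"data deposit support", "metadata creation"},
    "advanced": {"institutional data repository", "data curation"},
}

def maturity(offered: set) -> str:
    """Highest level at which the library offers at least one service --
    a deliberately crude rule, used here only to show the idea."""
    for level in ("advanced", "intermediate", "basic"):
        if offered & LEVELS[level]:
            return level
    return "none"

print(maturity({"RDS web guides", "data deposit support"}))  # intermediate
```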
20

Information Aggregation using the Cameleon# Web Wrapper

Firat, Aykut, Madnick, Stuart, Yahaya, Nor Adnan, Kuan, Choo Wai, Bressan, Stéphane 29 July 2005 (has links)
Cameleon# is a web data extraction and management tool that provides information aggregation with advanced capabilities that are useful for developing value-added applications and services for electronic business and electronic commerce. To illustrate its features, we use an airfare aggregation example that collects data from eight online sites, including Travelocity, Orbitz, and Expedia. This paper covers the integration of Cameleon# with commercial database management systems, such as MS SQL Server, and XML query languages, such as XQuery.
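The aggregation pattern itself is simple to sketch: query several sources, union the comparable records, and rank them. The stand-in fetch functions and hard-coded fares below are purely illustrative; Cameleon#'s spec-file-driven wrapper machinery is not reproduced here.

```python
# Stand-in source functions; a real aggregator would extract these
# records from the live sites via wrappers.
def fetch_travelocity(route):
    return [{"site": "Travelocity", "route": route, "fare": 412.0}]

def fetch_orbitz(route):
    return [{"site": "Orbitz", "route": route, "fare": 398.0}]

def fetch_expedia(route):
    return [{"site": "Expedia", "route": route, "fare": 405.0}]

SOURCES = [fetch_travelocity, fetch_orbitz, fetch_expedia]

def aggregate_fares(route: str) -> list:
    """Union the per-site results and sort so the best offer comes first,
    yielding one relation that downstream SQL/XQuery tools could consume."""
    offers = [offer for fetch in SOURCES for offer in fetch(route)]
    return sorted(offers, key=lambda o: o["fare"])

print(aggregate_fares("BOS-SFO")[0])  # cheapest offer, here from Orbitz
```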
