  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

XML Schema inference with XSLT

Buntin, Scott McCollum. January 2001 (has links) (PDF)
Thesis (M.S.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains viii, 135 p.; also contains graphics. Vita. Includes bibliographical references (p. 132-134).

Order-sensitive view maintenance of materialized XQuery views

Dimitrova, Katica. January 2003 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: XML algebra; order; view maintenance; propagation rules; XML. Includes bibliographical references (p. 80-83).

A Methodology for Managing Roles in Legacy Systems

January 2003 (has links)
Role-based access control (RBAC) is well accepted as a good technology for managing and designing access control in systems with many users and many objects. Much of the research on RBAC has been done in an environment isolated from the real systems that need to be managed. In this paper, we propose a methodology for using an RBAC design tool we have developed to manage and effect changes to an underlying relational database. We also discuss how to simulate the role graph model on a Unix system, and extend the methodology just described for relational databases to managing a Unix system when changes are made to the role graph.
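The role-graph idea underlying this abstract can be sketched briefly. In a role graph, a senior role implicitly holds every privilege of the roles beneath it, so effecting a change means recomputing each role's effective privilege set. The role names, privileges, and helper below are hypothetical illustrations, not taken from the paper:

```python
# Minimal role-graph sketch: edges point from a senior role to its junior
# roles, and a role's effective privileges are its own plus everything it
# inherits transitively. All names here are illustrative examples.

ROLE_GRAPH = {
    "dba":       {"juniors": ["developer"], "privs": {"DROP TABLE"}},
    "developer": {"juniors": ["reader"],    "privs": {"INSERT", "UPDATE"}},
    "reader":    {"juniors": [],            "privs": {"SELECT"}},
}

def effective_privs(role: str) -> set[str]:
    """Collect a role's own privileges plus those inherited from juniors."""
    privs = set(ROLE_GRAPH[role]["privs"])
    for junior in ROLE_GRAPH[role]["juniors"]:
        privs |= effective_privs(junior)
    return privs
```

Managing changes then reduces to editing the graph and re-deriving the effective sets, which is the kind of propagation a design tool like the one described would have to push down into the underlying database's grants.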


MOHAMMAD, SAMIR 16 March 2011 (has links)
Extensible Markup Language (XML) is a de facto standard for data exchange in the World Wide Web. Indexing plays a key role in improving the execution of XML queries over that data. In this thesis we discuss the three main categories of indexes proposed in the literature to handle the XML semistructured data model, and identify limitations and open problems related to these indexing schemes. Based on our findings, we propose two novel XML index structures to overcome most of these limitations: a native index structure called Level-based Tree Index for XML databases (LTIX) and a relational index structure called Universal Index Structure for XML data (UISX). A proper labeling scheme is an essential part of a well-built XML index structure. We found that existing labeling schemes are not suitable for our index structures and therefore propose a novel labeling scheme, the Level-based Labeling Scheme (LLS), which has the advantages of the most popular types of labeling schemes while eliminating their main disadvantages. We then combine our LLS labeling scheme with our index structures. An evaluation shows that LLS performs well in comparison to existing labeling schemes using different mappings to relational tables. We propose the LTIX to minimize the number of joins and matches required to evaluate twig queries, and also to facilitate effective query optimization through early pruning of the search space. Our experimental results show that this approach performs well in comparison to existing state-of-the-art approaches. We propose the UISX to overcome the key problem with the state-of-the-art approaches, namely that they cannot support efficient processing of twig queries without requiring significant storage. We use a light-weight native XML engine on top of an SQL engine to perform the optimization related to the structure of the XML data prior to shredding.
Experimental results show that our approach achieves lower response times than other similar approaches while using less space to store XML data. / Thesis (Ph.D, Computing) -- Queen's University, 2011-03-15 23:03:50.15
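The abstract does not spell out how LLS works, but the job any XML labeling scheme performs can be sketched with the classic region encoding, where each node gets a (start, end, level) label and structural relationships are decided from labels alone, without re-traversing the document. This is an illustrative sketch of that classic encoding, not the LLS scheme proposed in the thesis:

```python
# Region-labeling sketch: a document-order traversal assigns each node a
# (start, end, level) triple; ancestor/descendant tests then need only
# the labels. Illustrative only -- not the thesis's LLS scheme.

from xml.etree import ElementTree as ET

def label(root):
    """Assign (start, end, level) labels to every node in document order."""
    labels, counter = {}, [0]
    def visit(node, level):
        counter[0] += 1
        start = counter[0]
        for child in node:
            visit(child, level + 1)
        counter[0] += 1
        labels[node] = (start, counter[0], level)
    visit(root, 0)
    return labels

def is_ancestor(a, d):
    """a is an ancestor of d iff a's region strictly encloses d's."""
    return a[0] < d[0] and d[1] < a[1]

doc = ET.fromstring("<book><chapter><title/></chapter><chapter/></book>")
labels = label(doc)
```

Keeping the level explicit in the label, as both this encoding and LLS do, also allows parent/child tests (ancestor plus a level difference of one) without extra lookups.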

Building Information Modeling - A Minimum Mathematical Configuration

Bhandare, Ruchika 2012 August 1900 (has links)
In the current context, the standardization of building construction is not limited to a specific country or to a specific building code. Trade globalization has emphasized the need for standardization in the exchange of design information, whether in the form of drawings or documents. Building Information Modeling is the latest transformational technology that supports interactive development of design information for buildings. No single Building Information Modeling software package dominates the Architecture, Engineering, Construction and Facilities Management industries, which is a strength as new ideas develop, but a hindrance as those ideas flow at different paces into the various programs. The divergence of standards among these software packages limits the ability to exchange data between and within projects; the difficulty is especially visible when moving data from one program to another. The Document eXchange File format represents an early attempt by Autodesk to standardize the exchange of drawing information. However, the data was limited to the geometric data required for the production of plotted drawings. Metadata in a Building Information Model provides a method to add information to the basic geometric configuration provided in a Document eXchange File. Building Information Model programs use data structures to define smart objects that encapsulate building data in a searchable and robust format. Due to the complexity of building designs, eXtensible Markup Language schemas of three-dimensional models are often large files that can contain considerable amounts of superfluous information. The aim of this research is to exclude all the superfluous information from the design information and determine the absolute minimum information required to execute the construction of a project. A plain concrete beam element was used as the case study for this research.
The results show that a minimal information schema can be developed for a simple building element. Further research is required on more complex elements.
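The notion of a "minimum configuration" for a simple element can be illustrated with a small sketch. The element and attribute names below are hypothetical, not taken from the thesis schema; the point is that for a plain concrete beam, only geometry, units, and material need survive the pruning:

```python
# Hypothetical sketch of a minimal building-element description: a plain
# concrete beam reduced to material and geometry. Element and attribute
# names are illustrative, not the schema developed in the thesis.

from xml.etree import ElementTree as ET

def minimal_beam(length_mm, width_mm, depth_mm, material="plain concrete"):
    """Build the smallest XML description that could drive construction."""
    beam = ET.Element("Beam", material=material)
    geom = ET.SubElement(beam, "Geometry")
    for name, value in (("length", length_mm), ("width", width_mm),
                        ("depth", depth_mm)):
        ET.SubElement(geom, name, unit="mm").text = str(value)
    return beam

xml_text = ET.tostring(minimal_beam(3000, 300, 450), encoding="unicode")
```

Anything a full BIM export would add beyond this (rendering data, authoring history, vendor metadata) is exactly the superfluous information the research aims to strip away.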

On efficient processing of XML data and their applications

Shui, William Miao, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
The development of high-throughput genome sequencing and protein structure determination techniques has provided researchers with a wealth of biological data. However, providing an integrated analysis can be difficult due to the incompatibilities of data formats between providers and applications, the strict schema constraints imposed by data providers, and the lack of infrastructure for easily accommodating new semantic information. To address these issues, this thesis first proposes to use Extensible Markup Language (XML) [26] and its supporting query languages as the underlying technology to facilitate seamless, integrated access to the sum of heterogeneous biological data and services. XML is used due to its semi-structured nature and its ability to easily encapsulate both contextual and semantic information. The tree representation of an XML document enables applications to easily traverse and access data within the document without prior knowledge of its schema. However, in the process of constructing the framework, we identified a number of issues related to the performance of XML technologies, more specifically the performance of the XML query processor, the data store, and the transformation processor. Hence, this thesis also focuses on finding new solutions to address these issues. For the XML query processor, we propose an efficient structural join algorithm that can be implemented on top of existing relational databases. Experiments show the proposed method outperforms previous work in both queries and updates. For complicated XML query patterns, a new twig join algorithm called CTwigStack is proposed in this thesis. In essence, the new approach only produces and merges partial solution nodes that satisfy the entire twig query pattern tree. Experiments show the proposed algorithm outperforms previous methods in most cases.
For more general cases, a mixed-mode twig join is proposed, which combines CTwigStack with existing twig join algorithms; extensive experimental results have shown the effectiveness of both CTwigStack and the mixed-mode twig join. By incorporating existing system information, the mixed-mode twig join can serve as a framework for plan selection during XML query optimization. For the XML transformation component, a novel stand-alone, memory-conscious XSLT processor is proposed in this thesis; it requires only a single pass over the input XML dataset, enabling fast transformation of streaming XML data and better handling of complicated XPath selection patterns, including aggregate predicate functions such as the XPath count function. Ultimately, based on the nature of the proposed framework, we believe that solving the performance issues related to the underlying XML components can lead to a more robust framework for integrating heterogeneous biological data sources and services.
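The structural-join idea this abstract builds on can be sketched in a few lines. Given two lists of (start, end) region labels sorted by start position, a single merge pass with a stack pairs every ancestor with the descendants its region encloses. This is an illustrative stack-based merge in the spirit of classic stack-tree joins, not the CTwigStack algorithm itself:

```python
# Stack-based structural join over (start, end) region labels, both input
# lists sorted by start. Returns (ancestor, descendant) pairs where the
# ancestor's region encloses the descendant's. Illustrative sketch only.

def structural_join(ancestors, descendants):
    result, stack = [], []
    a = 0
    for desc in descendants:
        # Push every ancestor that opens before the current descendant.
        while a < len(ancestors) and ancestors[a][0] < desc[0]:
            while stack and stack[-1][1] < ancestors[a][0]:
                stack.pop()  # that ancestor's region has already closed
            stack.append(ancestors[a])
            a += 1
        # Discard ancestors whose region closed before this descendant.
        while stack and stack[-1][1] < desc[0]:
            stack.pop()
        # Every ancestor still on the stack encloses this descendant.
        result.extend((anc, desc) for anc in stack if desc[1] < anc[1])
    return result
```

Because tree regions never partially overlap, the stack always holds a nested chain of open ancestors, so each input label is pushed and popped at most once and the join runs in time linear in input plus output.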

XML interfaces: a growing need for standardization

Jackson, Elizabeth A. January 2007 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2007. / Title from PDF title page (viewed on Jan 17, 2008). Includes bibliographical references.

Content oriented retrieval on document centric XML

Dopichaj, Philipp January 2007 (has links)
Also published as: Kaiserslautern, Technical University, dissertation, 2007

Updating views over recursive XML

Jiang, Ming. January 2007 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: database; xml; view update. Includes bibliographical references (leaves 51-53).

Descriptive types for the XML query language Xcerpt

Wilk, Artur. January 2006 (has links)
Licentiate thesis, Linköping: Linköpings universitet, 2006.
