691 |
A Comparative Study of Data Transformations for Efficient XML and JSON Data Compression: An In-Depth Analysis of Data Transformation Techniques, including Tag and Capital Conversions, Character and Word N-Gram Transformations, and Domain-Specific Data Transforms using SMILES Data as a Case Study. Scanlon, Shagufta A. January 2015 (has links)
XML is a widely used data exchange format. Its verbose nature creates a requirement to store and process this type of data efficiently using compression. Various general-purpose transforms and compression techniques exist that can be used to transform and compress XML data. Because of XML's verbosity, more compact alternatives have also been developed, most notably JSON.
Similarly, there is a requirement to efficiently store and process the SMILES data used in Chemoinformatics. General-purpose transforms and compressors can compress this type of data to a certain extent; however, these techniques are not specific to SMILES data.
The primary contribution of this research is to provide developers who use XML, JSON or SMILES data with key knowledge of the best transformation techniques to use with certain types of data, and of which compression techniques would provide the best compressed output size and processing times, depending on their requirements.
The main study in this thesis investigates the extent to which applying data transforms prior to data compression can further improve the compression of XML and JSON data. It provides a comparative analysis of applying a variety of data transforms and their variations to a number of different types of XML and equivalent JSON datasets of various sizes, and of applying different general-purpose compression techniques over the transformed data.
A case study is also conducted to investigate whether data transforms applied prior to compression can improve the compression of data within a specific domain. / The files of software accompanying this thesis are unable to be presented online with the thesis.
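To make the idea of a pre-compression data transform concrete, here is a minimal sketch of one such technique in the spirit of the tag transforms the abstract mentions: each distinct XML tag name is replaced by a short numeric token before a general-purpose compressor (zlib here) is applied. The regex, the token format, and the toy document are illustrative assumptions, not the thesis' actual transforms; real transforms also handle attributes, n-grams, and capitalisation.

```python
import re
import zlib

def tag_transform(xml_text: str):
    """Replace each distinct XML tag name with a short token (t0, t1, ...).

    A minimal illustration of a 'tag transform': verbose tag names are a
    major source of redundancy in XML, so shortening them before applying
    a general-purpose compressor can reduce the compressed output size.
    Attributes and self-closing tags are not handled in this sketch.
    """
    mapping = {}  # original tag name -> short token

    def repl(m):
        name = m.group(2)
        if name not in mapping:
            mapping[name] = f"t{len(mapping)}"
        return f"<{m.group(1)}{mapping[name]}>"

    transformed = re.sub(r"<(/?)([A-Za-z_][\w.-]*)>", repl, xml_text)
    return transformed, mapping

# A toy document; real evaluations would use large, varied XML datasets.
doc = "<catalogue>" + "<entry><title>x</title></entry>" * 50 + "</catalogue>"
transformed, mapping = tag_transform(doc)

# Sizes after zlib compression; on realistic, less repetitive documents
# the transform typically gives the compressor more to work with.
print(len(zlib.compress(doc.encode())), len(zlib.compress(transformed.encode())))
```

Decompression would reverse the token substitution using the saved `mapping`, which must be stored or transmitted alongside the compressed data.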
|
692 |
Pentagonal scheme for dynamic XML prefix labelling. Taktek, Ebtesam A.M. January 2020 (has links)
In XML databases, the indexing process is based on a labelling or numbering scheme, generally used to label an XML document so that XML queries can be performed using path node information. A labelling scheme also helps to capture structural relationships during query processing without the need to access the physical document. Two of the main problems for XML labelling schemes are duplicated labels and the cost of the labelling process in time and label size. This research presents a novel dynamic XML labelling scheme, called the Pentagonal labelling scheme, in which data are represented as ordered XML nodes with relationships between them. Updating these nodes in large-scale XML documents has been widely investigated and remains a challenging research problem, as an update can mean relabelling a whole tree. Our algorithms provide an efficient dynamic XML labelling scheme that supports data updates without duplicating labels or relabelling old nodes. Our work evaluates the labelling process in terms of size and time, and evaluates the labelling scheme's ability to handle several insertions in XML documents. The findings indicate that the Pentagonal scheme shows better initial labelling time performance than the compared schemes, particularly on large XML datasets. Moreover, it efficiently supports random skewed updates, and its fast calculations and uncomplicated implementation allow it to handle updates efficiently. It also proved its capability in terms of query performance and in determining structural relationships. / Libyan government
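The general idea behind prefix labelling, on which dynamic schemes such as the one above build, can be sketched briefly: each node receives a label that extends its parent's label, so structural relationships are decided from labels alone, without touching the document. The sketch below uses plain Dewey-style dotted labels for illustration; the Pentagonal scheme's own update-friendly label encoding is not reproduced here.

```python
import xml.etree.ElementTree as ET

def label_tree(root):
    """Assign Dewey-style prefix labels ('1', '1.1', '1.2', ...) to an XML tree.

    A child's label extends its parent's label with a sibling position,
    so ancestry and document order are recoverable from labels alone.
    """
    labels = {}

    def walk(node, label):
        labels[label] = node.tag
        for i, child in enumerate(node, start=1):
            walk(child, f"{label}.{i}")

    walk(root, "1")
    return labels

def is_ancestor(a: str, d: str) -> bool:
    # a is an ancestor of d iff a's label is a strict dotted prefix of d's.
    return d.startswith(a + ".")

doc = ET.fromstring("<a><b><d/></b><c/></a>")
labels = label_tree(doc)
print(labels)  # {'1': 'a', '1.1': 'b', '1.1.1': 'd', '1.2': 'c'}
print(is_ancestor("1.1", "1.1.1"))  # True
```

Note that inserting a new node between existing siblings is exactly where plain Dewey labels force relabelling; avoiding that relabelling is the motivation for dynamic schemes like the Pentagonal one.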
|
693 |
A Framework for XML Index Selection. Goyal, Anushree January 2013 (has links)
No description available.
|
694 |
A WEB-BASED COMMISSIONING SYSTEM. YE, LAN 16 September 2002 (has links)
No description available.
|
695 |
AN INTERACTIVE WEB-BASED MULTIMEDIA COURSEWARE WITH XML. WANG, ZHUO 22 January 2003 (has links)
No description available.
|
696 |
AN XML-BASED COURSE REGISTRATION SYSTEM. LI, JUAN January 2004 (has links)
No description available.
|
697 |
A New Architecture for Developing Component-based Distributed Applications. Zou, Li January 2000 (has links)
No description available.
|
698 |
Integration of Life Cycle Assessment within Building Information Modeling Environment. Jiayu, Cui January 2020 (has links)
Over the past several decades, increasing awareness of sustainable building has led to the development and maturation of life cycle assessment (LCA) as a method for assessing environmental impacts and resource use across a building's life cycle. Building Information Modeling (BIM) is an intelligent, 3D-model-based process that enables architecture, engineering and construction designers to collaborate. Because of these advantages and the collaborative workflow it offers, integrations of BIM and LCA have been studied and developed in many ways. However, none of the integration approaches has been widely adopted, due to interoperability issues and accuracy problems. This thesis introduces LCA and BIM in detail and then proposes a novel integration of the two: with direct access to LCA data in XML format from an EPD database, using Dynamo (a plug-in application for Revit), LCA can be conducted within the BIM environment. The results of the life cycle impact calculation can be presented instantly in diagrams, and users can visualize the results by color-coding the different materials in the BIM model. Future research could focus on how to apply this integration method widely in real projects and connect the approach to environmental certification systems, in order to demonstrate the environmental performance of buildings and projects in a standardized manner.
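The core mechanism the abstract describes, reading impact factors from EPD data in XML and combining them with quantities taken off a BIM model, can be sketched as follows. The element names, units, and quantities below are invented for illustration; real EPD XML (e.g. the ILCD/EPD format) and real BIM quantity take-offs are considerably more elaborate, and inside Dynamo this logic would run in a Python script node.

```python
import xml.etree.ElementTree as ET

# Hypothetical EPD fragment: one global-warming-potential (GWP) factor
# per material. Structure and names are illustrative assumptions.
EPD_XML = """
<epd>
  <material name="concrete"><gwp unit="kgCO2e/m3">300.0</gwp></material>
  <material name="steel"><gwp unit="kgCO2e/kg">1.85</gwp></material>
</epd>
"""

def gwp_factors(xml_text: str) -> dict:
    """Parse the EPD XML into a material -> GWP factor mapping."""
    root = ET.fromstring(xml_text)
    return {m.get("name"): float(m.find("gwp").text) for m in root.iter("material")}

# Hypothetical quantities taken off a BIM model (m3 of concrete, kg of steel).
quantities = {"concrete": 12.5, "steel": 800.0}

factors = gwp_factors(EPD_XML)
total = sum(factors[name] * qty for name, qty in quantities.items())
print(total)  # 300.0 * 12.5 + 1.85 * 800.0 = 5230.0
```

Per-material subtotals computed the same way are what would drive the color-coded visualization in the BIM model.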
|
699 |
Lock-based concurrency control for XML. Ahmed, Namiruddin January 2006 (has links)
No description available.
|
700 |
Translation of Heterogeneous High-level Models to Lower Level Design Languages. Jackson, Brian Aliston 04 May 2005 (has links)
Proceeding from a specification, one develops an abstract mathematical model of a system, or a portion of a system. This model is validated to ensure that the specification is interpreted accurately and to explore different algorithms for implementing the system behavior. We use the words "portion of a system" because only rarely are systems designed wholly using a purely top-down approach. Commonly, the design approach is a mixture of top-down and bottom-up. But even in this mixed approach, top-down techniques are critical to developing new, advanced system features and improving the performance of existing system components. An example of this style of design tool and environment is Ptolemy II, a high-level modeling tool created at UC Berkeley. It supports heterogeneous and homogeneous modeling, simulation, and design of concurrent systems, and can effectively represent high-level models of embedded systems such as digital electronics, hardware, and software.
The bottom-up design approach exploits design reuse to achieve the productivity necessary to build complex systems. Historically, chip design companies have always reused designs in going from one product generation to another, but the efficiency of bottom-up design is enhanced by the use of IP (Intellectual Property) cores that a company can buy from an outside source. Design libraries are useful for system design and are an example of IP cores.
A sound methodology for translating Ptolemy models to SystemC models would have a very beneficial effect on the CAD/EDA industry. Ptolemy II is written in Java, and its high-level designs, or abstract graph models, are represented as XML documents. Ptolemy's major emphasis is on the methodology for defining and producing embedded software together with the system in which it is embedded. SystemC is written in C++, and its industrial use is gaining momentum due to its ability to represent functionality, communication, software, and hardware at various levels of abstraction. SystemC produces synthesizable code. A methodology to convert Ptolemy models to synthesizable SystemC code would be the technical epitome of a hybrid between top-down and bottom-up design styles and methodologies. Such a methodology would enable system designers to obtain fast design exploration, efficient IP reuse, and validation. Ptolemy has various components and models of computation; a model of computation dictates how components interact with other components. SystemC has its own models of computation and design libraries. XML and Perl are both powerful tools, and we use them in this research to create a sound methodology for translating Ptolemy models (at a high level of abstraction) to synthesizable SystemC code (at a low level of abstraction), i.e., code which can serve as input to hardware tools. / Ph. D.
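The essence of the translation described above, reading the XML representation of a Ptolemy model and emitting SystemC source text, can be sketched in a few lines. The MoML-style fragment below is simplified (real models carry ports, relations, links, and a director), the generated skeletons are not synthesizable, and Python stands in for the thesis' Perl tooling purely for illustration.

```python
import xml.etree.ElementTree as ET

# A simplified MoML-style model: a composite entity containing two actors.
# Real MoML documents include an XML declaration, DOCTYPE, ports, and links.
MOML = """
<entity name="Top" class="ptolemy.actor.TypedCompositeActor">
  <entity name="Ramp" class="ptolemy.actor.lib.Ramp"/>
  <entity name="Display" class="ptolemy.actor.lib.gui.Display"/>
</entity>
"""

def emit_systemc_skeletons(moml: str) -> str:
    """Emit one SC_MODULE skeleton per inner entity of the model.

    Only the structural mapping (entity -> module) is shown; a full
    translator would also map ports, relations, and the model of
    computation onto SystemC channels and processes.
    """
    root = ET.fromstring(moml)
    out = []
    for ent in root.findall("entity"):
        name = ent.get("name")
        out.append(f"SC_MODULE({name}) {{\n  SC_CTOR({name}) {{}}\n}};")
    return "\n\n".join(out)

print(emit_systemc_skeletons(MOML))
```

The interesting design decisions lie beyond this skeleton stage: choosing which SystemC constructs faithfully realize each Ptolemy model of computation is what makes the generated code meaningful, not merely well-formed.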
|