1

Relational Views of XML for the Semantic Web

Atre, Shruti 01 October 2007 (has links)
The Semantic Web is the future of the Internet. It is an extension of the Internet in which information will be given well-defined meaning, enabling not only humans but also machines to find, share and combine information more easily. In the Semantic Web, documents are not merely pages containing a set of words that form their content; they also encode the meaning and structure of those words. This enables information retrieval techniques beyond those restricted to keywords to be performed on the documents. The goal of this research is to explore a method for querying the Semantic Web using relational database theory and source transformation techniques. We take as input documents annotated with XML mark-up, together with the information tags that we are interested in. We then extract and populate a relational view of the annotated XML documents using these tags and the implicit relations in the documents. We evaluate the feasibility of our system by testing it on a variety of inputs, and we explore the kinds of queries that can be made on the extracted relational view. / Thesis (Master, Computing) -- Queen's University, 2007-09-27 10:56:13.513
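The thesis itself realizes this pipeline with source transformation techniques; as a rough, invented illustration of the core idea only, the following Python sketch extracts a relational view from tag-annotated XML and queries it with SQL. The tag names and schema are assumptions for illustration, not the thesis's actual design.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A document annotated with hypothetical semantic tags.
doc = """<page>
  <person>Tim Berners-Lee</person> proposed the
  <concept>Semantic Web</concept> while at the
  <organization>W3C</organization>.
</page>"""

tags_of_interest = {"person", "concept", "organization"}

# Extract (tag, text) pairs: the relational view of the annotations.
root = ET.fromstring(doc)
rows = [(el.tag, el.text.strip())
        for el in root.iter() if el.tag in tags_of_interest]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE annotation (tag TEXT, value TEXT)")
con.executemany("INSERT INTO annotation VALUES (?, ?)", rows)

# Relational queries now go beyond keyword search over page text.
for (value,) in con.execute("SELECT value FROM annotation WHERE tag = 'person'"):
    print(value)  # -> Tim Berners-Lee
```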
2

Language Implementation by Source Transformation

Dayanand, Pooja 01 February 2008 (has links)
Compilation involves transforming a high-level language source program into an equivalent assembly or machine language program. Programming language implementation can therefore be viewed as a source-to-source transformation from the original high-level source code to the corresponding low-level assembly language source code. This thesis presents an experiment in implementing an entire programming language system using declarative source transformation. To this end, a complete compiler/interpreter is implemented using TXL, a source transformation system. The TXL-based PT Pascal compiler/interpreter is implemented in phases similar to those in a traditional compiler. In the lexical and syntactic analysis phase, lexical and syntactic errors are detected as the source program is parsed according to the specified TXL grammar. The semantic analysis phase then performs semantic checks on the source program, generating error messages when semantic errors are detected, and annotates the source program with type information. The typed intermediate code produced by the semantic analysis phase can be executed directly in the execution phase. Alternatively, the typed intermediate source can be transformed into a bytecode instruction sequence by running the code generation phase; this bytecode sequence is then executed by a TXL implementation of an abstract stack machine in the code simulation phase. The TXL-based PT Pascal compiler/interpreter is compared against the traditional S/SL implementation of the PT Pascal compiler. The declarative style of TXL makes the rules and functions in the TXL-based compiler/interpreter easier to understand, and the TXL implementation requires fewer lines of code than the S/SL implementation. The TXL implementation is, however, slower and less scalable. The implementation of the TXL-based PT Pascal compiler/interpreter and the advantages and disadvantages of this approach are discussed in greater detail in this thesis. / Thesis (Master, Computing) -- Queen's University, 2008-01-29 19:31:31.454
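TXL itself is a declarative rule-based language over grammars; since no TXL source appears here, the following Python sketch is only a toy analogue of the phase structure described above: a typed intermediate form that can either be evaluated directly or compiled to bytecode for a small stack machine. The miniature expression language and all names are invented.

```python
# "Semantic analysis": annotate a tiny expression AST with types.
def annotate(node):
    if isinstance(node, (int, float)):
        return {"op": "lit", "val": node, "type": type(node).__name__}
    op, lhs, rhs = node
    left, right = annotate(lhs), annotate(rhs)
    ty = "float" if "float" in (left["type"], right["type"]) else "int"
    return {"op": op, "lhs": left, "rhs": right, "type": ty}

# Execution phase: evaluate the typed intermediate form directly.
def evaluate(n):
    if n["op"] == "lit":
        return n["val"]
    a, b = evaluate(n["lhs"]), evaluate(n["rhs"])
    return a + b if n["op"] == "+" else a * b

# Code generation phase: flatten to postfix bytecode.
def codegen(n, code):
    if n["op"] == "lit":
        code.append(("PUSH", n["val"]))
    else:
        codegen(n["lhs"], code)
        codegen(n["rhs"], code)
        code.append(("ADD" if n["op"] == "+" else "MUL", None))
    return code

# Code simulation phase: a minimal abstract stack machine.
def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack.pop()

ast = annotate(("+", 1, ("*", 2, 3.0)))
print(evaluate(ast), run(codegen(ast, [])))  # -> 7.0 7.0
```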
3

Sequence Diagrams Integration via Typed Graphs: Theory and Implementation

Liang, Hongzhi 03 September 2009 (has links)
It is widely accepted within the software engineering community that support for integration is necessary for requirement models. Several methodologies that have appeared in the literature, such as role-based software development, rely on some kind of integration. However, current integration techniques and their tool support are insufficient. In this research, we discuss our solution to the problem. More precisely, we present a general integration approach for scenario-based models, particularly UML Sequence Diagrams, based on the colimit construction known from category theory. In our approach, Sequence Diagrams are represented by SD-graphs, a special kind of typed graph. The merge algorithm for SD-graphs is an extension of existing merge operations on sets and graphs. On the one hand, the merge algorithm ensures traceability and guarantees key theoretical properties (e.g., “everything is represented and nothing extra is acquired” during the merge). On the other hand, our formalization of Sequence Diagrams as SD-graphs retains their graphical nature, yet is amenable to algebraic manipulation. Another important property of our approach is that it is applicable to other kinds of models as long as they can be represented by typed graphs. A prototype Sequence Diagram integration tool following the approach has been implemented. The tool is not only a fully functional integration tool, but has also served as a test bed for our theory and provided feedback for our theoretical framework. To support the discovery and specification of model relationships, we also present a list of high-level merge patterns in this dissertation. We believe our theory and tool are beneficial to both academia and industry, as the initial evaluation has shown that the ideas presented in this dissertation represent promising steps towards more rigorous management of requirement models. We also present an approach connecting model transformation with source transformation, allowing an existing source transformation language (TXL) to be used for model transformation. Our approach leverages grammar generators to ease the task of creating model transformations and inherits many of the strengths of the underlying transformation language (e.g., efficiency and maturity). / Thesis (Ph.D, Computing) -- Queen's University, 2009-08-28 13:03:08.607
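As a loose illustration of the merge idea (not the thesis's SD-graph algorithm, which also handles typing and traceability), the sketch below unions two plain graphs while identifying nodes related by a given correspondence, in the spirit of a pushout/colimit. The example graphs are invented.

```python
def merge(g1, g2, correspondence):
    """Union g1 and g2, identifying the g2 nodes named in `correspondence`."""
    rename = {n: correspondence.get(n, ("g2", n)) for n in g2["nodes"]}
    nodes = set(g1["nodes"]) | set(rename.values())
    edges = set(g1["edges"]) | {(rename[a], rename[b]) for a, b in g2["edges"]}
    return {"nodes": nodes, "edges": edges}

# Two tiny event orderings that share the events "login" and "query".
g1 = {"nodes": {"login", "query", "report"},
      "edges": {("login", "query"), ("query", "report")}}
g2 = {"nodes": {"login", "query", "audit"},
      "edges": {("login", "query"), ("query", "audit")}}

merged = merge(g1, g2, correspondence={"login": "login", "query": "query"})
print(sorted(merged["nodes"], key=str))
# Shared events appear once, everything else is kept: "everything is
# represented and nothing extra is acquired".
```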
4

A Verification Framework for Access Control in Dynamic Web Applications

Alalfi, Manar 30 April 2010 (has links)
Current technologies such as anti-virus software and network firewalls provide reasonably secure protection at the host and network levels, but not at the application level. When network- and host-level entry points are comparatively secure, the public interfaces of web applications become the focus of malicious attacks. In this thesis, we focus on one of the most serious web application vulnerabilities: broken access control. Attackers often try to access unauthorized objects and resources other than URL pages in an indirect way, for instance through indirect access to back-end resources such as databases. The consequences of these attacks can be very destructive, especially when the web application allows administrators to remotely manage users and content over the web. In such cases, attackers are not only able to view unauthorized content, but also to take over site administration. To protect against these types of attacks, we have designed and implemented a security analysis framework for dynamic web applications. A reverse engineering process is performed on an existing dynamic web application to extract a role-based access-control security model, and a formal analysis is applied to the recovered model to check access-control security properties. This framework can be used to verify that a dynamic web application conforms to the access-control policies specified by a security engineer. Our framework provides a set of novel techniques for the analysis and modeling of web applications for the purpose of security verification and validation. It is largely language independent, and is based on adaptable model recovery that can support a wide range of security analysis tasks. / Thesis (Ph.D, Computing) -- Queen's University, 2010-04-30 14:30:53.018
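A hypothetical miniature of the verification step: given a recovered role-to-permission model, check that administrative permissions are reachable only by the admin role. The model contents and the property are invented for illustration; the thesis recovers such models from real applications and checks them with formal analysis.

```python
# Recovered model (invented): role -> set of (action, resource) permissions.
model = {
    "anonymous": {("read", "public_page")},
    "member":    {("read", "public_page"), ("post", "forum")},
    "admin":     {("read", "public_page"), ("post", "forum"),
                  ("manage", "users"), ("edit", "site_config")},
}

# Property to verify: only "admin" may hold administrative permissions.
ADMIN_ONLY = {("manage", "users"), ("edit", "site_config")}

def violations(model, restricted, allowed_roles):
    """Return every (role, permission) pair that breaks the policy."""
    return [(role, perm)
            for role, perms in model.items() if role not in allowed_roles
            for perm in perms & restricted]

bad = violations(model, ADMIN_ONLY, allowed_roles={"admin"})
print("policy holds" if not bad else f"violations: {bad}")
```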
5

Centralized and Distributed Implementations of Correct-by-Construction Component-Based Systems by Source-to-Source Transformations in BIP

Jaber, Mohamad 28 October 2010 (has links) (PDF)
This thesis studies theory and methods for automatically generating centralized and distributed implementations from a high-level model of an application software in BIP. BIP (Behavior, Interaction, Priority) is a component framework with formal operational semantics. Coordination between components is achieved by using multiparty interactions and dynamic priorities for scheduling interactions. A key idea is to use a set of correct source-to-source transformations that preserve the functional properties of a given application software. By applying these transformations we can generate a full range of implementations, from centralized to fully distributed.

Centralized implementation: the implementation method transforms the interactions of an application software described in BIP and generates a functionally equivalent program. The method is based on the successive application of three types of source-to-source transformations: flattening of components, flattening of connectors, and composition of atomic components. We show that the system of transformations is confluent and terminating. By exhaustive application of the transformations, any BIP component can be transformed into an equivalent monolithic component, from which efficient standalone C++ code can be generated.

Distributed implementation: the implementation method transforms an application software described in BIP, for a given partition of its interactions, into a Send/Receive BIP model. Send/Receive BIP models consist of components coordinated by asynchronous message passing (Send/Receive primitives). The method leads to 3-layer architectures. The bottom layer includes the components of the application software, where atomic strong synchronization is implemented by sequences of Send/Receive primitives. The second layer includes a set of interaction protocols, each handling the interactions of one class of the given partition. The third layer implements a conflict resolution protocol used to resolve conflicts between conflicting interactions of the second layer. Depending on the given partition, the execution of the obtained Send/Receive BIP model ranges from centralized (all interactions in the same class) to fully distributed (each class has a single interaction). From Send/Receive BIP models and a given mapping of their components onto a platform providing Send/Receive primitives, an implementation is generated automatically: for each class of the partition we generate C++ code implementing the global behavior of its components.

The transformations have been fully implemented and integrated into the BIP tool-set. Experimental results on non-trivial examples and case studies show the novelty and efficiency of our approach.
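As an invented toy rendering of the source-level semantics that these transformations compile away, the sketch below has a centralized engine fire a multiparty interaction only when every participating port is ready. The components and interactions are assumptions; none of the thesis's Send/Receive translation is shown.

```python
# Components expose ready ports; an interaction (a set of ports across
# components) fires only when all of its ports are ready at once.
class Component:
    def __init__(self, name, ready_ports):
        self.name = name
        self.ready = set(ready_ports)

    def fire(self, port):
        print(f"{self.name} fires {port}")

producer = Component("producer", {"put"})
buffer_  = Component("buffer", {"get", "put"})
consumer = Component("consumer", set())          # not ready yet

interactions = [
    {(producer, "put"), (buffer_, "put")},       # strong synchronization
    {(buffer_, "get"), (consumer, "get")},
]

def step(interactions):
    for inter in interactions:                   # list order = priority
        if all(port in comp.ready for comp, port in inter):
            for comp, port in inter:
                comp.fire(port)
            return inter
    return None

step(interactions)  # only the producer/buffer interaction is enabled
```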
6

SkePU 2: Language Embedding and Compiler Support for Flexible and Type-Safe Skeleton Programming

Ernstsson, August January 2016 (has links)
This thesis presents SkePU 2, the next generation of the SkePU C++ framework for programming heterogeneous parallel systems using the skeleton programming concept. SkePU 2 is presented after a thorough study of the state of parallel programming models, frameworks and tools, including other skeleton programming systems. The advancements in SkePU 2 include a modern C++11 foundation, a native syntax for skeleton parameterization with user functions, and an entirely new source-to-source translator based on Clang compiler front-end libraries. SkePU 2 extends the functionality of SkePU 1 by embracing metaprogramming techniques and C++11 features such as variadic templates and lambda expressions. The results are improved programmability and performance in many situations, as shown in both a usability survey and performance evaluations on high-performance computing hardware. SkePU’s skeleton programming model is also extended with a new construct, Call, unique in the sense that it does not impose any predefined skeleton structure and can encapsulate arbitrary user-defined multi-backend computations. We conclude that SkePU 2 is a promising new direction for the SkePU project, and a solid basis for future work, for example in performance optimization.
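SkePU's actual API is C++ (user functions become skeleton instances via the Clang-based translator); the following Python sketch only mimics the programming model, a Map skeleton parameterized by a user function with a swappable backend, and every name in it is invented.

```python
from multiprocessing.dummy import Pool  # stdlib thread pool

def Map(user_function, backend="sequential"):
    """Build a map skeleton instance from an element-wise user function."""
    def instance(*sequences):
        if backend == "sequential":
            return [user_function(*args) for args in zip(*sequences)]
        with Pool() as pool:                 # a stand-in parallel backend
            return pool.starmap(user_function, zip(*sequences))
    return instance

# User function: element-wise and side-effect free, as skeletons require.
def saxpy(x, y, a=2.0):
    return a * x + y

vector_sum = Map(saxpy, backend="parallel")
print(vector_sum([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

The point of the pattern, in SkePU as in this toy, is that the call site stays identical while the backend (CPU, OpenMP, CUDA, OpenCL in the real framework) is chosen separately.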
7

Towards putting abstract interpretation of Prolog into practice: design, implementation and evaluation of a tool to verify and optimise Prolog programs

Gobert, François 11 December 2007 (has links)
Logic programming is appealing since it allows the programmer to concentrate on the meaning of the problem to be solved. Unfortunately, for efficiency reasons, the declarative and operational natures of Prolog do not coincide. Prolog uses an incomplete depth-first search rule, unifications and negations may be unsound, and there are extralogical features like the cut or dynamic predicates. Methodologies have been proposed to construct operationally correct and efficient Prolog code. Researchers have designed methods to automate the verification of operational properties on which the optimisation of logic programs can be based. A few tools have been implemented, but there is a lack of a unified framework.

The goal of this thesis is the design, implementation and evaluation of an abstract interpretation framework for Prolog that integrates state-of-the-art techniques. The analyser is based on an original proposal that defines the notion of abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. The properties include types, modes, sharing of terms, termination, and linear relations between the sizes of input/output terms and the number of solutions to a call. A single global analysis is performed, and abstract sequences are derived at each program point.

In this thesis, we implement and evaluate the original framework and, more importantly, overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog code with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. The optimiser uses the abstract information to safely apply source-to-source transformations. Code transformations include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in the best order.
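As a tiny, invented illustration of one ingredient mentioned above, the sketch below abstracts Prolog terms to a three-value mode domain with a sound join and abstract unification; the thesis's abstract sequences track far more than this (types, sharing, term sizes, numbers of solutions).

```python
GROUND, VAR, ANY = "ground", "var", "any"

def join(m1, m2):
    """Least upper bound on the mode lattice (used where paths merge)."""
    return m1 if m1 == m2 else ANY

def abstract_unify(m1, m2):
    """Sound approximation of both modes after unifying two terms."""
    if GROUND in (m1, m2):
        return GROUND, GROUND   # unifying with a ground term grounds both
    return ANY, ANY             # otherwise no guarantee survives

# append(Xs, Ys, Zs) called with ground Xs and Ys: derive Zs's output mode.
zs, _ = abstract_unify(VAR, GROUND)
print(zs)                # -> ground: a success pattern the analyser can certify
print(join(GROUND, VAR)) # -> any: a merge point loses precision, but soundly
```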
