1

The application of the key-value-reference model in dynamic irregular parallel computation

Zhang, Yang 02 May 2009 (has links)
This dissertation studies the effects of the "key-value-ref" model on the development process of computational field simulation software. The motivation of this study is rooted in the high cost of designing and implementing high-performance simulation software that runs on modern parallel supercomputers. Unlike traditional sequential programming, where a number of effective tools exist, parallel super-cluster programming involves many low-level constructs that increase the complexity of implementing a software design. More importantly, the dynamic nature of the simulation problems brings additional challenges to the design stage. A designer often faces a number of competing factors and must devise strategies to trade them off and to find software structures that can be realized with reasonable performance and flexibility. Proper modeling can help to address many of these issues in the design and implementation stages. Using a two-phase Lagrangian particle-field simulation problem as a case study, this dissertation shows that the "key-space" concept developed within the "key-value-ref" model is able to model the essential components of available design approaches for parallel computational field simulation, helps to expose the design choices in a more sensible way, and offers guidance towards crafting a better software structure. In addition, a programming interface is designed and implemented that allows computational field simulation software to be developed using the "key-space" concept. Empirical results show that the current implementation provides reasonable performance compared to highly optimized, hand-tuned programs.
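A minimal sketch of the key-space idea as described above, not the dissertation's actual interface: keys identify simulation entities, values hold local data, and references name other keys that may be owned by a different process. The ownership rule, class names, and example data are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    value: dict                                   # local payload, e.g. field variables
    refs: list = field(default_factory=list)      # keys of neighbouring entities

class KeySpace:
    def __init__(self, rank, num_ranks):
        self.rank, self.num_ranks = rank, num_ranks
        self.local = {}                           # key -> Entry owned by this rank

    def owner(self, key):
        # invented ownership rule: distribute entities by index; a real design
        # would use the chosen domain decomposition
        _, idx = key
        return idx % self.num_ranks

    def put(self, key, value, refs=()):
        self.local[key] = Entry(value, list(refs))

    def remote_keys(self):
        # keys referenced locally but owned elsewhere -> the communication set
        needed = {r for e in self.local.values() for r in e.refs}
        return {k for k in needed if self.owner(k) != self.rank}

# Example: rank 0 registers a cell that references a neighbour owned by rank 1
ks = KeySpace(rank=0, num_ranks=4)
ks.put(("cell", 0), {"pressure": 1.0}, refs=[("cell", 1)])
print(ks.remote_keys())                           # -> {('cell', 1)}
```

The set returned by remote_keys is the kind of information a designer can use to derive the communication pattern of a dynamic, irregular computation.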
2

Optimized Composition of Parallel Components on a Linux Cluster

Al-Trad, Anas January 2012 (has links)
We develop a novel framework for the optimized composition of explicitly parallel software components with different implementation variants, given the problem size, data distribution scheme, and processor group size on a Linux cluster. We consider two approaches (two cases of the framework). In the first approach, dispatch tables are built from measurement data obtained offline by executions for some (sample) points in the ranges of the context properties. Inter-/extrapolation is then used to perform the actual variant selection for a given execution context at run time. In the second approach, a cost function for each component variant is provided by the component writer for variant selection. These cost functions can internally look up measurement tables, built either offline or at deployment time, for computation- and communication-specific primitives. In both approaches, the call to an explicitly parallel software component (with different implementation variants) is made via a dispatcher instead of calling a variant directly. As a case study, we apply both approaches to a parallel component for matrix multiplication with multiple implementation variants. We implemented our variants using the Message Passing Interface (MPI). The results show the reduction in execution time for the optimally composed applications compared to applications with hard-coded composition. In addition, the results compare estimated and measured times for each variant using different data distributions, processor group sizes, and problem sizes.
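A rough sketch of the first approach (dispatch tables with inter-/extrapolation); the variant names, sample sizes, and timings below are hypothetical, and a real deployment would obtain them from offline measurements as described above.

```python
import bisect

samples = [256, 512, 1024, 2048]                      # measured problem sizes
dispatch_table = {                                     # seconds, offline data (invented)
    "mm_block_cyclic": [0.02, 0.11, 0.80, 6.1],
    "mm_row_wise":     [0.01, 0.13, 1.10, 9.0],
}

def predict(times, n):
    """Piecewise-linear inter-/extrapolation of a variant's run time at size n."""
    i = bisect.bisect_left(samples, n)
    i = min(max(i, 1), len(samples) - 1)
    x0, x1 = samples[i - 1], samples[i]
    y0, y1 = times[i - 1], times[i]
    return y0 + (y1 - y0) * (n - x0) / (x1 - x0)

def dispatch(n):
    """Select the variant with the smallest predicted time for size n."""
    return min(dispatch_table, key=lambda v: predict(dispatch_table[v], n))

print(dispatch(700))    # the dispatcher, not the caller, picks the variant
```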
3

Round-trip Engineering of Template-based Code Generation in SkAT

Nett, Tobias 04 August 2015 (has links) (PDF)
In recent years, the development of multi-core CPUs and GPUs with many cores has taken precedence over increases in clock frequency. Therefore, writing parallel programs for multi-core and many-core systems becomes increasingly important. Due to the lack of inherently parallel language features in most programming languages, many programs today are written sequentially and then enhanced with special pragmas or framework calls that mark parallelizable parts of the code. These hints are then used to modify and extend the code with parallel constructs in a preprocessing step. If it is crucial to optimize the run time of a program, the code generated by this step has to be inspected and manually tuned. To keep the original and the transformed code artifacts synchronized, an editor with a round-trip engineering (RTE) system can be used. RTE propagates changes made in the source artifacts to the generated artifacts and vice versa. One tool that can be used to expand pragmas to parallelized source code is the invasive software composition framework SkAT. SkAT-based tools use reference attribute grammars (RAGs) to compose code fragments according to a composition program written in Java. To facilitate the creation of SkAT-based tools, the minimal composition system framework SkAT/Minimal, built on top of the SkAT core, provides mechanisms for incrementally building such tools. The principle of island parsing is employed so that only as much of a language as is necessary for composition needs to be expressed. This work targets composition systems based on SkAT/Minimal. The task is split into two parts: first, approaches to RTE are analyzed and a concept for an RTE system is created. The focus lies on the analysis of features and requirements of existing RTE approaches and a thorough investigation of all relevant steps required to implement such a system for SkAT/Minimal. The second part of the task is the creation and evaluation of a prototypical implementation of the system.
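A simplified illustration of the trace links such an RTE system needs, not SkAT's actual API: each composition step records where a source fragment ends up in the generated artifact, so that an edit of the generated code can be copied back to its originating fragment. The sketch assumes length-preserving edits; handling arbitrary edits is exactly what a full RTE system must add.

```python
class TraceLink:
    def __init__(self, source_slot, start, end):
        self.source_slot, self.start, self.end = source_slot, start, end

def compose(template, fragments):
    """Expand named slots and remember where each fragment ends up."""
    out, links, pos = [], [], 0
    for part in template:
        text = fragments.get(part, part)          # slot name or literal text
        if part in fragments:
            links.append(TraceLink(part, pos, pos + len(text)))
        out.append(text)
        pos += len(text)
    return "".join(out), links

def propagate_back(edited, links, fragments):
    """Copy (length-preserving) edits inside a traced span back to its fragment."""
    for link in links:
        fragments[link.source_slot] = edited[link.start:link.end]
    return fragments

generated, links = compose(["for(i=0;i<n;i++){ ", "body", " }"],
                           {"body": "a[i]=b[i];"})
edited = generated.replace("a[i]=b[i];", "a[i]=c[i];")       # same-length edit
print(propagate_back(edited, links, {"body": "a[i]=b[i];"}))  # -> {'body': 'a[i]=c[i];'}
```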
4

Round-trip Engineering of Template-based Code Generation in SkAT

Nett, Tobias 13 March 2015 (has links)
In recent years, the development of multi-core CPUs and GPUs with many cores has taken precedence over increases in clock frequency. Therefore, writing parallel programs for multi-core and many-core systems becomes increasingly important. Due to the lack of inherently parallel language features in most programming languages, many programs today are written sequentially and then enhanced with special pragmas or framework calls that mark parallelizable parts of the code. These hints are then used to modify and extend the code with parallel constructs in a preprocessing step. If it is crucial to optimize the run time of a program, the code generated by this step has to be inspected and manually tuned. To keep the original and the transformed code artifacts synchronized, an editor with a round-trip engineering (RTE) system can be used. RTE propagates changes made in the source artifacts to the generated artifacts and vice versa. One tool that can be used to expand pragmas to parallelized source code is the invasive software composition framework SkAT. SkAT-based tools use reference attribute grammars (RAGs) to compose code fragments according to a composition program written in Java. To facilitate the creation of SkAT-based tools, the minimal composition system framework SkAT/Minimal, built on top of the SkAT core, provides mechanisms for incrementally building such tools. The principle of island parsing is employed so that only as much of a language as is necessary for composition needs to be expressed. This work targets composition systems based on SkAT/Minimal. The task is split into two parts: first, approaches to RTE are analyzed and a concept for an RTE system is created. The focus lies on the analysis of features and requirements of existing RTE approaches and a thorough investigation of all relevant steps required to implement such a system for SkAT/Minimal.
The second part of the task is the creation and evaluation of a prototypical implementation of the system. (The record also lists the thesis's table of contents: 1 Introduction; 2 Background; 3 Analysis of RTE Approaches; 4 Tracing in SkAT; 5 Building an RTE-editor Prototype; 6 Designing an RTE-editor; 7 Evaluation and Outlook on Future Works; Appendices; Bibliography.)
5

A Lightweight Framework for Universal Fragment Composition

Henriksson, Jakob 06 January 2009 (has links) (PDF)
Domain-specific languages (DSLs) are useful tools for coping with complexity in software development. DSLs provide developers with appropriate constructs for specifying and solving the problems they are faced with. While the exact definition of DSLs can vary, they can roughly be divided into two categories: embedded and non-embedded. Embedded DSLs (E-DSLs) are integrated into general-purpose host languages (e.g., Java), while non-embedded DSLs (NE-DSLs) are standalone languages with their own tooling (e.g., compilers or interpreters). NE-DSLs can, for example, be found on the Semantic Web, where they are used for querying or describing shared domain models (ontologies). A common theme with DSLs is naturally their focused expressive power. However, in many cases they do not support non-domain-specific component-oriented constructs that can be useful for developers. Such constructs are standard in general-purpose languages (procedures, methods, packages, libraries, etc.). While E-DSLs have access to such constructs via their host languages, NE-DSLs do not have this opportunity. Instead, to support such notions, each of these languages has to be extended and its tooling updated accordingly. Such modifications can be costly and must be made individually for each language; a solution for one language cannot easily be reused for another. No appropriate technology currently exists for tackling this problem in a general manner. Apart from identifying the need for a general approach to this issue, we extend existing composition technology to provide a language-inclusive solution. We build upon fragment-based composition techniques and make them applicable to arbitrary (context-free) languages. We call this process the universalization of the composition techniques. The techniques are called fragment-based because components—reusable software units with interfaces—are viewed as pieces of source code that conform to an underlying (context-free) language grammar. The universalization process is grammar-driven: given a base language grammar and a description of the compositional needs with respect to the composition techniques, an adapted grammar is created that corresponds to the specified needs. This adapted grammar forms the foundation for defining and composing the desired fragments. We further build upon this grammar-driven universalization approach to allow developers to define the non-domain-specific component-oriented constructs that NE-DSLs need. Developers are able to define both what those constructs should be and how they are to be interpreted (via composition). Thus, developers can effectively define language extensions and their semantics. This solution is presented in a framework that can be reused for different languages, even if their notions of ‘components’ differ. To demonstrate the approach and show its applicability, we apply it to two Semantic Web-related NE-DSLs that are in need of component-oriented constructs: we introduce modules to the rule-based Web query language Xcerpt and role models to the Web Ontology Language OWL.
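A toy illustration of grammar-driven fragment composition, using made-up slot syntax and an Xcerpt-like query purely as an example: a fragment declares a slot annotated with the grammar nonterminal it expects, and composition splices in a conforming fragment. A real universalized composition system would parse and check the replacement against the adapted grammar rather than splice text.

```python
import re

def bind(fragment, slot, replacement):
    """Replace a declared slot <<slot:Nonterminal>> with a conforming fragment."""
    pattern = rf"<<{slot}:(\w+)>>"
    m = re.search(pattern, fragment)
    if not m:
        raise ValueError(f"slot {slot!r} not declared in fragment")
    # In the grammar-driven approach the replacement would be parsed and
    # checked against the nonterminal m.group(1); here we only splice text.
    return re.sub(pattern, replacement, fragment)

query_module = "CONSTRUCT result FROM <<source:QueryTerm>> END"
print(bind(query_module, "source", "book [ title [ var T ] ]"))
```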
6

A Lightweight Framework for Universal Fragment Composition

Henriksson, Jakob 19 December 2008 (has links)
Domain-specific languages (DSLs) are useful tools for coping with complexity in software development. DSLs provide developers with appropriate constructs for specifying and solving the problems they are faced with. While the exact definition of DSLs can vary, they can roughly be divided into two categories: embedded and non-embedded. Embedded DSLs (E-DSLs) are integrated into general-purpose host languages (e.g., Java), while non-embedded DSLs (NE-DSLs) are standalone languages with their own tooling (e.g., compilers or interpreters). NE-DSLs can, for example, be found on the Semantic Web, where they are used for querying or describing shared domain models (ontologies). A common theme with DSLs is naturally their focused expressive power. However, in many cases they do not support non-domain-specific component-oriented constructs that can be useful for developers. Such constructs are standard in general-purpose languages (procedures, methods, packages, libraries, etc.). While E-DSLs have access to such constructs via their host languages, NE-DSLs do not have this opportunity. Instead, to support such notions, each of these languages has to be extended and its tooling updated accordingly. Such modifications can be costly and must be made individually for each language; a solution for one language cannot easily be reused for another. No appropriate technology currently exists for tackling this problem in a general manner. Apart from identifying the need for a general approach to this issue, we extend existing composition technology to provide a language-inclusive solution. We build upon fragment-based composition techniques and make them applicable to arbitrary (context-free) languages. We call this process the universalization of the composition techniques. The techniques are called fragment-based because components—reusable software units with interfaces—are viewed as pieces of source code that conform to an underlying (context-free) language grammar. The universalization process is grammar-driven: given a base language grammar and a description of the compositional needs with respect to the composition techniques, an adapted grammar is created that corresponds to the specified needs. This adapted grammar forms the foundation for defining and composing the desired fragments. We further build upon this grammar-driven universalization approach to allow developers to define the non-domain-specific component-oriented constructs that NE-DSLs need. Developers are able to define both what those constructs should be and how they are to be interpreted (via composition). Thus, developers can effectively define language extensions and their semantics. This solution is presented in a framework that can be reused for different languages, even if their notions of ‘components’ differ. To demonstrate the approach and show its applicability, we apply it to two Semantic Web-related NE-DSLs that are in need of component-oriented constructs: we introduce modules to the rule-based Web query language Xcerpt and role models to the Web Ontology Language OWL.
7

Customizing the Composition of Web Services and Beyond

Sohrabi Araghi, Shirin 16 December 2013 (has links)
Web services provide a standardized means of publishing diverse, distributed applications. Increasingly, corporations are providing services or programs within and between organizations, either on corporate intranets or in the cloud. Many of these services can be composed together, ideally automatically, to provide a value-added service. Automated Web service composition is an example of such automation: given a specification of an objective to be realized and some knowledge of the state of the world, the problem is to automatically select, integrate, and invoke multiple services to achieve the specified objective. A popular approach to the Web service composition problem is to conceive of it as an Artificial Intelligence planning task. This enables us to bring many of the theoretical and computational advances in reasoning about actions to bear on the task of Web service composition. However, Web service composition goes far beyond the reaches of classical planning, presenting a number of interesting challenges relevant to a large body of problems related to the composition of actions, programs, and services. Among these, an important challenge is generating not just a composition, but a high-quality composition tailored to user preferences. In this thesis, we present an approach to the Web service composition problem with a particular focus on the customization of compositions. We claim that there is a correspondence between generating a customized composition of Web services and non-classical Artificial Intelligence planning, where the objective of the planning problem is specified as a form of control knowledge, such as a workflow or template, together with a set of constraints to be optimized or enforced. We further claim that techniques in (preference-based) planning can provide a computational basis for the development of effective, state-of-the-art techniques for generating customized compositions of Web services. To evaluate our claim, we characterize the Web service composition problem with customization as a non-classical planning problem, exploit and advance preference specification languages and preference-based planning, develop algorithms tailored to the Web service composition problem, prove formal properties of these algorithms, implement proof-of-concept systems, and evaluate these systems experimentally. While our research has been motivated by Web services, the theory and techniques we have developed apply to analogous problems in such diverse sectors as multi-agent systems, business process modeling, component software composition, and social and computational behaviour modeling and verification.
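A deliberately naive sketch of the idea of preference-based composition, not the thesis's algorithms: services are modeled as planning actions with preconditions and effects, and the composition with the lowest preference penalty that reaches the goal is selected. The service names, penalties, and brute-force search are invented for illustration only.

```python
from itertools import permutations

# invented services: preconditions/effects are sets of facts; the penalty encodes
# a user preference (here: the user prefers flying over taking the train)
services = {
    "book_flight": {"pre": {"dates"},     "eff": {"transport"}, "penalty": 0},
    "book_train":  {"pre": {"dates"},     "eff": {"transport"}, "penalty": 2},
    "book_hotel":  {"pre": {"transport"}, "eff": {"hotel"},     "penalty": 0},
}

def best_composition(initial, goal):
    """Brute-force search for the goal-reaching composition with the lowest penalty."""
    best = None
    for order in permutations(services):
        state, penalty, plan = set(initial), 0, []
        for name in order:
            s = services[name]
            # invoke a service only if it is executable and adds new facts
            if s["pre"] <= state and not s["eff"] <= state:
                state |= s["eff"]
                penalty += s["penalty"]
                plan.append(name)
        if goal <= state and (best is None or penalty < best[0]):
            best = (penalty, plan)
    return best

print(best_composition({"dates"}, {"hotel"}))   # -> (0, ['book_flight', 'book_hotel'])
```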
8

A VPA-based Aspect Language

Nguyen, Dong Ha 21 October 2011 (has links)
This thesis focuses on the development of an advanced history-based aspect language and approaches to related issues ranging from applications to analysis methods. The aspect language, the VPA-based Aspect Language, is defined upon visibly pushdown automata (VPAs) [21]. This language is essentially an extension of an existing framework [47] of regular aspect languages. It features VPA-based pointcuts and provides, in particular, constructors for the declarative definition of pointcuts based on regular and non-regular structures. We have also extended and developed a technique for automatically detecting potential interactions among VPA-based aspects. Despite several advantages of the class of visibly pushdown automata, no practical tool support for them has been available. Therefore, we have realized a library called VPAlib that implements the essential data structures and operations for VPAs. This library is essential for the construction and analysis of VPA-based aspects; for instance, we have used it to perform analyses for detecting interactions among aspects. To motivate the use of VPA-based aspects, we have studied two basic kinds of distributed applications, one representing typical systems with nested login sessions, and the other representing a grid computing system over a peer-to-peer network. We have shown how VPA-based aspects can be useful for realizing certain functionalities of these typical distributed applications. Thanks to their highly expressive pointcuts, another important application of VPA-based aspects is to define evolution of component-based systems, especially those with explicit component protocols. The use of aspects over component protocols, however, may break the coherence between the components of the system. We have further developed proof methods to establish the preservation of fundamental correctness properties, such as compatibility and substitutability relations between software components, after the application of VPA-based aspects. Finally, we have considered the use of model checking techniques to verify systems that are modified by aspects. The goal of the verification is to check whether an aspect violates the global properties of a base system or the properties of other aspects. We have chosen an approach in which we create an abstract model from the VPA model and then run a model checker capable of checking the abstract model against the properties. We formally define the abstraction process and demonstrate our model checking approach via examples.
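A small illustration of the flavor of VPA-based pointcuts, not the language's actual syntax or the VPAlib API: calls push onto the stack, returns pop, and the pointcut selects actions that occur while the "login" session is nested at least two levels deep, mirroring the nested-login example mentioned above. The event encoding is invented.

```python
def nested_login_pointcut(trace):
    """Yield the actions executed while at least two login sessions are open."""
    stack = []
    for kind, name in trace:
        if kind == "call":            # call symbols push onto the stack
            stack.append(name)
        elif kind == "return":        # return symbols pop from the stack
            if stack:
                stack.pop()
        else:                         # internal (plain) actions
            if stack.count("login") >= 2:
                yield name

trace = [("call", "login"), ("call", "login"),
         ("action", "read_mail"),
         ("return", "login"),
         ("action", "read_news"),
         ("return", "login")]
print(list(nested_login_pointcut(trace)))   # -> ['read_mail']
```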
9

Designing Round-Trip Systems by Change Propagation and Model Partitioning

Seifert, Mirko 26 July 2011 (has links) (PDF)
Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exposes some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE). In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization - skeletons - and parts that do not - clothings. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built. If not, manual decisions by developers may be required. Based on this conceptual framework, two concrete approaches to RTE are presented. The first one - Backpropagation-based RTE - employs change translation, traceability, and synchronization fitness functions to allow for the synchronization of artifacts that are connected by non-injective transformations. The second approach - Role-based Tool Integration - provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented; tool integration is then performed by creating role bindings between role models. In addition to the two concrete approaches to RTE, which form the main contributions of the thesis, we investigate the creation of bridges between technical spaces, which we consider an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis. The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified, and the practical feasibility of our approaches is confirmed with respect to the presented RTE applications.
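A minimal sketch of the skeleton/clothing partitioning and change propagation described above; the model elements, the mapping, and the propagation function are invented for illustration and are not the thesis's implementation.

```python
# Only skeleton elements are shared between two artifacts and must be kept in
# sync; clothing elements are artifact-local and never propagated.
skeleton_map = {"Class.name": "Table.name",          # shared (skeleton) elements
                "Attribute.id": "Column.id"}

code_model = {"Class.name": "Customer", "Attribute.id": "cid",
              "Method.body": "..."}                  # clothing: method bodies
db_model   = {"Table.name": "Customer", "Column.id": "cid",
              "Column.type": "VARCHAR"}              # clothing: SQL types

def propagate(src, dst, mapping):
    """Push changes of skeleton elements from src into dst; clothing is untouched."""
    changes = {}
    for s_key, d_key in mapping.items():
        if dst.get(d_key) != src.get(s_key):
            changes[d_key] = src[s_key]
    dst.update(changes)
    return changes

code_model["Class.name"] = "Client"                  # developer renames the class
print(propagate(code_model, db_model, skeleton_map)) # -> {'Table.name': 'Client'}
```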
