About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

1

Checking Metadata Usage for Enterprise Applications

Zhang, Yaxuan 20 May 2021
It is becoming more and more common for developers to build enterprise applications on the Spring Framework or other Java frameworks. While developers enjoy the convenient implementations these web frameworks provide, they should pay attention to configuration deployment with metadata usage (i.e., Java annotations and XML deployment descriptors). Different formats of metadata can correspond to each other, and metadata usually exist in multiple files, so maintaining such metadata is challenging and time-consuming. Current compilers and research tools rarely inspect the XML files, let alone the correspondence between Java annotations and XML files. To help developers ensure the quality of metadata, this work presents a domain-specific language, RSL, and its engine, MeEditor. RSL facilitates the definition of patterns for correct metadata usage; MeEditor takes in the specified rules and checks Java projects for rule violations. Developers define rules with RSL describing the intended metadata usage and then run the RSL script with MeEditor. Nine rules were extracted from the Spring specification and written in RSL. To evaluate MeEditor, we mined 180 plus 500 open-source projects from GitHub and conducted a two-step evaluation. First, we evaluated the effectiveness of MeEditor by constructing a known ground-truth data set; on this data set, MeEditor identified metadata misuse with 94% precision, 94% recall, and 94% accuracy. Second, we evaluated the usefulness of MeEditor by applying it to real-world projects (500 projects in total). For the latest versions of these 500 projects, MeEditor achieved 79% precision according to our manual inspection. We then applied MeEditor to the version histories of rule-adopted projects, i.e., projects that adopt the rules and whose latest versions are identified as correct. In these histories, MeEditor identified 23 bugs that were later fixed by the developers. / Master of Science
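To make the cross-format correspondence concrete, the sketch below shows a hypothetical Spring bean described twice, once with annotations and once in an XML descriptor. The abstract does not show RSL itself, so the class, bean id, and rule are invented for illustration, and the Spring Framework is assumed to be on the classpath.

    // Hypothetical Spring bean described twice: once via the annotations
    // below, once via the XML descriptor excerpted in the trailing comment.
    // Assumes the Spring Framework is on the classpath.
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    interface GreetingRepository {}               // stand-in dependency

    @Service("greetingService")                   // annotation-based metadata
    public class GreetingService {
        @Autowired
        private GreetingRepository repository;    // injected collaborator
    }

    /* Corresponding XML deployment descriptor (e.g., applicationContext.xml):

       <bean id="greetingService" class="GreetingService">
           <property name="repository" ref="greetingRepository"/>
       </bean>

       A rule in the spirit of RSL could require that the bean id declared in
       XML match the @Service value above; a mismatch between the two files is
       the kind of metadata misuse MeEditor is designed to flag. */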
2

A Domain-Specific Language for Traceability in Modeling

Rahman, Anisur 24 July 2013
Requirements are a key aspect of software development. Requirements are also related to other software artefacts, including designs, test cases and documentation. These artefacts are often captured with specialized models. However, many tools lack support for traceability relationships between requirements artefacts and model artefacts, leading to analysis issues. To establish traceability between models and other types of requirements artefacts, this thesis proposes a new Domain-Specific Language (DSL) for describing the concepts of a modeling language intended to be traced using a Requirements Management System (RMS), with tool support handling the evolution of models and of their traceability links. In the first part of this thesis, the syntax and metamodel of the Model Traceability DSL (MT-DSL) are defined, together with an editor implemented using Xtext. In the second part of the thesis, a library of import and maintenance functions is generated automatically (using Xtend) from model traceability descriptions written in MT-DSL. The target language for this library is the DOORS eXtension Language (DXL), the scripting language of a leading commercial RMS with traceability support, namely IBM Rational DOORS. The implementation has been tested successfully for import and evolution scenarios with two different modeling languages (User Requirements Notation and Finite State Machines). This work hence contributes a reliable mechanism to define and support traceability between requirements and models.
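The abstract names MT-DSL and DXL but shows neither; the following plain-Java sketch illustrates only the underlying idea, a traceability link from a requirement to a model element that becomes suspect when the element evolves. All class names are invented for the sketch.

    import java.util.ArrayList;
    import java.util.List;

    // Generic sketch of requirement-to-model traceability: when a traced
    // model element changes, its links become "suspect" and need review.
    public class TraceSketch {
        static class ModelElement {
            final String id;
            int version = 1;
            ModelElement(String id) { this.id = id; }
            void evolve() { version++; }            // simulates a model edit
        }

        static class TraceLink {
            final String requirementId;
            final ModelElement target;
            final int linkedVersion;                // element version when linked
            TraceLink(String requirementId, ModelElement target) {
                this.requirementId = requirementId;
                this.target = target;
                this.linkedVersion = target.version;
            }
            boolean suspect() { return target.version != linkedVersion; }
        }

        public static void main(String[] args) {
            ModelElement idleState = new ModelElement("FSM.State.Idle");
            List<TraceLink> links = new ArrayList<>();
            links.add(new TraceLink("REQ-42", idleState));
            idleState.evolve();                     // the model evolves
            links.forEach(l -> System.out.println(
                l.requirementId + " -> " + l.target.id + " suspect=" + l.suspect()));
        }
    }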
3

Mapping a Dataflow Programming Model onto Heterogeneous Architectures

Sbirlea, Alina 06 September 2012
This thesis describes and evaluates how extending Intel's Concurrent Collections (CnC) programming model can address the problem of hybrid programming with high performance and low energy consumption, while retaining the ease of use of data-flow programming. The CnC model is a declarative, dynamic, lightweight, task-based parallel programming model and is implicitly deterministic because it enforces the single-assignment rule; these properties ensure that problems are modelled in an intuitive way. CnC offers a separation of concerns by allowing algorithms to be expressed as a two-stage process: first by decomposing a problem into components and specifying how components interact with each other, and second by providing an implementation for each component. By facilitating the separation between a domain expert, who can provide an accurate problem specification at a high level, and a tuning expert, who can tune the individual components for better performance, we ensure that tuning and future development, such as replacement of a subcomponent with a more efficient algorithm, become straightforward. A recent trend in mainstream desktop systems is the use of graphics processing units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. In addition, the use of FPGAs has seen a significant increase for applications that can take advantage of such dedicated hardware. We see that computing is evolving from using many-core CPUs to "co-processing" on the CPU, GPU and FPGA; however, hybrid programming models that support the interaction between multiple heterogeneous components are not widely accessible to mainstream programmers and domain experts who have a real need for such resources. We propose a C-based implementation of the CnC model for enabling parallelism across heterogeneous processor components in a flexible way, with high resource utilization and high programmability. We use the task-parallel HabaneroC (HC) language as the platform for implementing CnC-HabaneroC (CnC-HC); HC is also used to implement the computation steps in CnC-HC and to interact with GPU or FPGA steps, and it offers the desired flexibility and extensibility of interacting with any other C-based language. First, we extend the CnC model with tag functions and ranges to enable automatic code generation of high-level operations for inter-task communication. This improves programmability and also makes the code more analysable, opening the door for future optimizations. Second, we introduce a way to specify steps that are data-parallel and thus fit to execute on the GPU, and the notion of task affinity, a tuning annotation in the specification language. Affinity is used by the runtime during scheduling and can be fine-tuned based on application needs to achieve better (faster, lower-power, etc.) results. Third, we introduce and develop a novel, data-driven runtime for the CnC model, using HabaneroC (HC) as a base language. In addition, we also create an implementation of the previous runtime approach and conduct a study to compare the performance. Next, we expand the HabaneroC dynamic work-stealing runtime to allow cross-device stealing based on task affinity. Cross-device dynamic work-stealing is used to achieve load balancing across heterogeneous platforms for improved performance. Finally, we implement and use a series of benchmarks for testing the model in different scenarios and show that our proposed approach can yield significant performance benefits and low power usage when using hybrid execution.
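As a hedged illustration of the single-assignment rule that makes CnC deterministic, the conceptual Java sketch below enforces write-once item collections keyed by tags; it is not the CnC-HC API, and a real item collection would also let a step block until an item becomes available.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Conceptual sketch of a CnC-style item collection: items are written
    // once (single assignment) and read by steps keyed by tags.
    final class ItemCollection<K, V> {
        private final Map<K, V> items = new ConcurrentHashMap<>();

        void put(K tag, V value) {
            // Enforce the single-assignment rule behind CnC's determinism.
            if (items.putIfAbsent(tag, value) != null)
                throw new IllegalStateException("Tag written twice: " + tag);
        }

        V get(K tag) { return items.get(tag); }  // real CnC blocks until available
    }

    public class CnCSketch {
        public static void main(String[] args) {
            ItemCollection<Integer, Double> data = new ItemCollection<>();
            data.put(0, 3.14);
            // A "step" reads items by tag and writes new single-assignment items.
            ItemCollection<Integer, Double> out = new ItemCollection<>();
            out.put(0, data.get(0) * 2.0);
            System.out.println(out.get(0));      // 6.28
        }
    }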
4

Supporting Effective Reuse and Safe Evolution in Metadata-Driven Software Development

Song, Myoungkyu 29 April 2013
In recent years, metadata-driven software development has gained prominence. In this implementation model, various application concerns are provided as third-party frameworks and libraries that the programmer configures through metadata, such as XML configuration files or Java annotations. Metadata-driven software development is a special case of declarative programming: metadata serves as a domain-specific language that the programmer uses to declare various concerns, whose implementation is provided by an elaborate ecosystem of libraries and frameworks that serve as pre-defined application building blocks. Examples abound: transparent persistence mechanisms facilitate data management; security frameworks provide access control and encryption; unit testing frameworks provide abstractions for implementing and executing unit tests, etc. Metadata-driven software development has been particularly embraced in enterprise computing as a means of providing standardized solutions to common application scenarios. Despite the conciseness and simplicity benefits of metadata-driven software development, this implementation model introduces a unique set of reuse and evolution challenges. In particular, metadata is not reusable across application modules, and program evolution causes unsafe discrepancies between the main source code and its corresponding metadata. The research described in this dissertation addresses five fundamental problems of metadata-driven software development: (1) bytecode enhancements that transparently introduce concerns hinder program understanding and debugging; (2) mainstream enterprise metadata formats are hard to understand, evolve, and reuse; (3) concerns declared via metadata cannot be reused when source-to-source compiling emerging languages to mainstream ones; (4) metadata correctness cannot be automatically ensured as application source code is being refactored and enhanced; and (5) lacking built-in metadata, JavaScript programs can be enhanced with additional concerns only through manual source code changes. The research described in this dissertation leverages domain-specific languages and automated code generation to enable effective reuse and safe evolution in metadata-driven software development. The specific innovations that address the problems outlined above are as follows: (1) a domain-specific language (DSL) describing bytecode enhancement that facilitates the understanding and debugging of additional concerns; (2) a novel metadata format expressed as a DSL that is easier to author, understand, reuse, and maintain than existing metadata formats; (3) automated metadata translation that enables effective reuse of target language additional concerns from source-to-source compiled source language programs; (4) metadata invariants, a new abstraction for expressing and verifying metadata coding conventions; and (5) a new approach to declaratively enhancing JavaScript programs with additional concerns. / Ph. D.
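As a hedged illustration of the metadata-invariant idea, an invariant such as "every @Persistent class declares a no-argument constructor" can be checked mechanically with reflection. The annotation and the convention below are invented for the sketch, not taken from the dissertation's notation.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical annotation standing in for framework metadata.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Persistent {}

    @Persistent
    class Account {
        Account() {}      // satisfies the invariant checked below
    }

    public class InvariantCheck {
        // Invariant (illustrative): every @Persistent class declares a
        // no-argument constructor, so a framework could instantiate it.
        static boolean holds(Class<?> c) {
            if (!c.isAnnotationPresent(Persistent.class)) return true;
            try {
                c.getDeclaredConstructor();       // throws if no no-arg ctor
                return true;
            } catch (NoSuchMethodException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(holds(Account.class));  // true
        }
    }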
5

WorkflowDSL: Scalable Workflow Execution with Provenance

Fernando, Tharidu January 2017
Scientific workflow systems enable scientists to perform large-scale, data-intensive scientific experiments using distributed computing resources. Due to the diversity of domains and the complexity of the technology, delivering a successful outcome efficiently requires collaboration between domain experts and technical experts. However, existing scientific workflow systems require a large investment of time to become familiar with and to adapt existing workflows. Thus, many scientific workflows are still implemented in script-based languages (such as Python and R) because of familiarity and extensive third-party library support. In this thesis, we implement a framework built around a domain-specific language that enables domain experts to collaborate on fine-tuning workflows, while technical experts use Python for task implementations. Moreover, the framework includes support for parallel execution without any specialized code. It also provides a provenance-capturing framework that enables users to analyse past executions and retrieve the complete lineage of any data item generated. Experiments performed using a real-world scientific workflow from the bioinformatics domain show that users were able to execute workflows efficiently while using our DSL for workflow composition and Python for task implementations. Moreover, we show that the captured provenance is useful for analysing past workflow executions.
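A hedged sketch of the two runtime features the abstract highlights, parallel execution and provenance capture: tasks run on a thread pool and each records its inputs and output, so the lineage of any data item can be retrieved afterwards. This is illustrative plain Java (16+ for records), not the thesis's framework.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Conceptual sketch: parallel task execution with provenance capture.
    public class ProvenanceSketch {
        record ProvenanceEntry(String task, List<Integer> inputs, int output) {}

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<ProvenanceEntry> provenance = new CopyOnWriteArrayList<>();

            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 1; i <= 4; i++) {
                final int input = i;
                results.add(pool.submit(() -> {
                    int output = input * input;    // the "task" body
                    provenance.add(new ProvenanceEntry("square", List.of(input), output));
                    return output;
                }));
            }
            for (Future<Integer> f : results) f.get();  // wait for all tasks
            pool.shutdown();

            provenance.forEach(System.out::println);    // lineage of each data item
        }
    }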
6

A compiler front end for GUARDOL -- a domain-specific language for high assurance guards

Hoag, Jonathan January 1900
Master of Science / Department of Computing and Information Sciences / John M. Hatcliff / Guardol, a domain-specific language (DSL) developed by Rockwell Collins, was designed to streamline the process of specifying, implementing, and verifying Cross Domain Solution (CDS) security policies. Guardol’s syntax and intended computational behavior closely resemble the core of many functional programming languages, but a number of features have been added to ease the development of high assurance cross domain solutions. A significant portion of the formalization and implementation of Guardol’s grammar and type system was performed by the SAnToS group at Kansas State University. This report summarizes the key conceptual components of Guardol’s grammar and toolchain architecture. The focus of the report is a detailed description of the implementation and formalization of Guardol’s type system. A great deal of effort was put into a formalization that provides a high level of assurance that the specification of types and data structures is maintained in the intended implementation.
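The report excerpt includes no Guardol code, so the following is only a hypothetical Java sketch of what a guard does conceptually: a message passes through ordered policy stages that may sanitize or reject it. Guardol itself is a functional DSL; none of this is its syntax.

    import java.util.List;
    import java.util.Optional;
    import java.util.function.Function;

    // Hypothetical guard pipeline: each stage either transforms the
    // message or rejects it (empty result ends the pipeline).
    public class GuardSketch {
        interface Stage extends Function<String, Optional<String>> {}

        static final List<Stage> POLICY = List.of(
            msg -> msg.length() <= 1024 ? Optional.of(msg) : Optional.empty(), // size check
            msg -> Optional.of(msg.replaceAll("(?i)secret:\\S+", "[REDACTED]")) // sanitize
        );

        static Optional<String> guard(String msg) {
            Optional<String> cur = Optional.of(msg);
            for (Stage s : POLICY) cur = cur.flatMap(s);   // stop on rejection
            return cur;
        }

        public static void main(String[] args) {
            System.out.println(guard("report secret:alpha ok"));
            // prints: Optional[report [REDACTED] ok]
        }
    }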
7

A generic approach to spatial and temporal modelling: application to dynamic landscape modelling

Degenne, Pascal 13 March 2012
Sciences dealing with reality, be it related to nature, society or life, work with models. Some of these models describe the relations that exist between measurable properties of that reality, without detailing the interactions between the elements that compose it. Other models describe those interactions from the point of view of the individuals that form the system, in which case the overall behaviour is not defined a priori but observed a posteriori. In both cases, the scientist has little freedom to describe the structures, especially the spatial ones, that carry these interactions. We propose a modelling approach that can be considered intermediate between the two, in which a system is studied through the nature of its interactions and the graph structures that can carry them. By placing spatial, functional, social and hierarchical relationships at the same level, we also attempt to lift the constraints induced by a form of spatial representation that is often chosen a priori. We formalized the basic concepts of this approach, and these concepts became the elements of a domain-specific language, named Ocelet, that we defined. The tools required to implement the language were developed and assembled into an integrated modelling and simulation environment. We were then able to experiment with our new modelling approach and the Ocelet language by developing models for a variety of dynamic landscape situations.
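A hedged Java sketch of the core idea, interactions carried by the edges of a relation graph (here a spatial adjacency relation along which vegetation diffuses); the entities, relation, and interaction are invented for the sketch, and Ocelet's actual syntax differs.

    import java.util.List;

    // Conceptual sketch: entities linked by a relation graph, with an
    // interaction function applied over the edges (Java 16+ for records).
    public class RelationSketch {
        static class Parcel {
            final String name;
            double vegetation;                       // state carried by the entity
            Parcel(String name, double v) { this.name = name; this.vegetation = v; }
        }

        record Edge(Parcel from, Parcel to) {}       // one relation: spatial adjacency

        // Interaction applied over every edge of the relation graph:
        // vegetation diffuses between adjacent parcels.
        static void diffuse(List<Edge> relation, double rate) {
            for (Edge e : relation) {
                double flow = rate * (e.from().vegetation - e.to().vegetation);
                e.from().vegetation -= flow;
                e.to().vegetation += flow;
            }
        }

        public static void main(String[] args) {
            Parcel a = new Parcel("a", 1.0), b = new Parcel("b", 0.0);
            diffuse(List.of(new Edge(a, b)), 0.1);
            System.out.println(a.vegetation + " " + b.vegetation);  // 0.9 0.1
        }
    }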
8

Controlling propagation and search within a constraint solver

Prud'homme, Charles 28 February 2014
Constraint programming is often described, idealistically, as a declarative paradigm in which the user describes the problem and the solver solves it. Obviously, the reality of constraint solvers is more complex, and the needs in customization of modeling and solving techniques change with the level of expertise of users. This thesis focuses on enriching the arsenal of techniques available in constraint solvers. On the one hand, we study the contribution of an explanation system to the exploration of the search space in the specific context of local search. Two generic neighborhood heuristics that exploit explanations in distinct ways are described. The first is based on the difficulty of repairing a partially destroyed solution; the second is based on the non-optimal nature of the current solution. These heuristics uncover the internal structure of the problems treated in order to build good-quality neighbors for large neighborhood search. They are complementary to other generic neighborhood heuristics, with which they can be combined effectively. In addition, we propose to make the explanation system lazy in order to minimize its footprint. On the other hand, we undertake an inventory of know-how relative to the propagation engines of constraint solvers. These data are used operationally through a domain-specific language that allows users to customize propagation within a solver, providing implementation structures and defining check points in the solver. This language offers high-level concepts that allow the user to ignore the solver's implementation details while maintaining a good level of flexibility and certain guarantees. It allows the expression of propagation schemas specific to the internal structure of each problem. Implementation and experiments were carried out in the Choco constraint solver. This thesis has resulted in a new version of the tool that is globally more efficient and natively explained.
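Since the thesis extends the Choco solver, a minimal model in the style of the Choco 4 Java API is sketched below; Choco 4 is newer than the version the thesis describes, so treat the exact calls as assumptions rather than the thesis's API.

    import org.chocosolver.solver.Model;
    import org.chocosolver.solver.variables.IntVar;

    // Minimal Choco model, sketched against the Choco 4 API (which may
    // differ from the thesis-era version): two variables, one constraint.
    public class ChocoSketch {
        public static void main(String[] args) {
            Model model = new Model("sketch");
            IntVar x = model.intVar("x", 0, 5);
            IntVar y = model.intVar("y", 0, 5);
            model.arithm(x, "+", y, "=", 7).post();   // propagation happens here
            if (model.getSolver().solve()) {
                System.out.println(x.getValue() + " + " + y.getValue() + " = 7");
            }
        }
    }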
9

Automated Synthesis of Model Comparison Benchmarks

Addazi, Lorenzo January 2019
Model-driven engineering promotes the migration from code-centric to model-based software development. Systems consist of model collections integrating different concerns and perspectives, while semi-automated model transformations generate executable code combining the information from these models. Raising the abstraction level to models requires appropriate management technologies to support the various software development activities. Among these, model comparison represents one of the most challenging tasks and plays an essential role in various modelling activities. Its hardness has led researchers to propose a multitude of approaches adopting different approximation strategies and exploiting specific knowledge of the involved models. However, almost no support is provided for their evaluation against specific scenarios and modelling practices. This thesis presents Benji, a framework for the automated generation of model comparison benchmarks. Given a set of differences and an initial model, users generate the models that result from applying the former to the latter. Differences consist of preconditions, actions and postconditions expressed using a dedicated specification language. The generator converts benchmark specifications into design-space exploration problems and produces the final solutions along with a model-based description of their differences with respect to the initial model. A set of representative use cases is used to evaluate the framework against its design principles, which resemble the essential properties expected from model comparison benchmark generators.
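A hedged sketch of the difference structure the abstract describes, a precondition, an action, and a postcondition applied to an initial model to generate a benchmark variant; the model here is just a set of class names, and Benji's actual specification language is not shown.

    import java.util.HashSet;
    import java.util.Set;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Conceptual sketch of a benchmark-generating difference: a precondition
    // that must hold on the model, an action that mutates a copy of it, and
    // a postcondition that must hold afterwards (Java 16+ for records).
    public class DifferenceSketch {
        record Difference(Predicate<Set<String>> pre,
                          Consumer<Set<String>> action,
                          Predicate<Set<String>> post) {
            Set<String> applyTo(Set<String> model) {
                Set<String> copy = new HashSet<>(model);
                if (!pre.test(copy)) throw new IllegalStateException("precondition failed");
                action.accept(copy);
                if (!post.test(copy)) throw new IllegalStateException("postcondition failed");
                return copy;              // a new model variant for the benchmark
            }
        }

        public static void main(String[] args) {
            Set<String> initial = Set.of("ClassA", "ClassB");
            Difference renameA = new Difference(
                m -> m.contains("ClassA"),
                m -> { m.remove("ClassA"); m.add("ClassA2"); },
                m -> m.contains("ClassA2"));
            System.out.println(renameA.applyTo(initial));  // [ClassB, ClassA2] (order may vary)
        }
    }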
