31

Test Automation for Grid-Based Multiagent Autonomous Systems

Entekhabi, Sina January 2024 (has links)
Traditional software testing usually relies on manually defined test cases. This manual process can be time-consuming, tedious, and incomplete in covering important but elusive corner cases that are hard to identify. Automatic generation of random test cases emerges as a strategy to mitigate the challenges associated with manual test case design. However, the effectiveness of random test cases in fault detection may be limited, leading to increased testing costs, particularly in systems where test execution demands substantial resources and time. Leveraging the domain knowledge of test experts can guide automatic random test generation towards more effective regions of the input space. In this thesis, we target quality assurance of multiagent autonomous systems and aim to automate test generation for them by applying the domain knowledge of test experts. To formalize the domain expert's knowledge, we introduce a small Domain-Specific Language (DSL) for specifying particular locality-based constraints for grid-based multiagent systems. We initially employ this DSL to filter randomly generated test inputs. Then, we evaluate the effectiveness of the generated test cases through an experiment on a case study of autonomous agents. Statistical analysis of the experimental results demonstrates that using domain knowledge to specify test selection criteria for filtering randomly generated test cases significantly reduces the number of potentially costly test executions needed to identify persisting faults. Domain knowledge of experts can also be used to generate test inputs directly with constraint solvers. We conduct a comprehensive study to compare the performance of the random-filtering and constraint-solving approaches in generating selective test cases across various test scenario parameters. The examination of these parameters provides criteria for determining the suitability of random data filtering versus constraint solving, considering the varying size and complexity of the test input generation constraint. To conduct our experiments, we use the QuickCheck tool for random test data generation with filtering, and we employ Z3 for constraint solving. The findings, supported by observations and statistical analysis, reveal that test scenario parameters affect the performance of the filtering and constraint-solving approaches differently. Specifically, the results indicate complementary strengths: the random generation and filtering approach excels for systems with many agents and long agent paths but degrades for larger grid sizes and stricter constraints. Conversely, the constraint-solving approach performs robustly for large grid sizes and strict constraints but degrades as the number of agents and the length of their paths increase. Our initially proposed DSL is limited in its features and can only specify particular locality-based constraints. To specify more elaborate test scenarios, we extend that DSL based on a more intricate model of autonomous agents and their environment. Using the extended DSL, we can specify test oracles and test scenarios for a dynamic grid environment and for agents with several attributes. To assess the extended DSL's utility, we design a questionnaire to gather opinions from several experts and also run an experiment to compare the efficiency of the extended DSL with the initially proposed one.
The questionnaire results indicate that the extended DSL successfully specified several scenarios that the experts found more useful than those specified by the initial DSL. Moreover, the experimental results demonstrate that testing with the extended DSL can significantly reduce the number of test executions needed to detect system faults, leading to a more efficient testing process. / Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
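To make the comparison above concrete, here is a minimal sketch (not code from the thesis) of the two generation strategies on a toy locality constraint: a minimum pairwise Manhattan distance between agents on a square grid. The filtering half is plain Python rather than QuickCheck, the solving half uses the z3-solver Python bindings, and the grid size, agent count, and distance bound are invented parameters.

```python
import random
from itertools import combinations
from z3 import Int, Solver, And, If, sat

GRID, AGENTS, MIN_DIST = 20, 4, 5      # hypothetical scenario parameters

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Strategy 1: random generation + filtering (QuickCheck-style precondition).
def random_filtered(max_tries=10_000):
    for _ in range(max_tries):
        pos = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(AGENTS)]
        if all(manhattan(p, q) >= MIN_DIST for p, q in combinations(pos, 2)):
            return pos                 # accepted test input
    return None                        # naive filtering struggles when the constraint is strict

# Strategy 2: direct generation with a constraint solver (Z3).
def z3_generated():
    xs = [Int(f"x{i}") for i in range(AGENTS)]
    ys = [Int(f"y{i}") for i in range(AGENTS)]
    s = Solver()
    for x, y in zip(xs, ys):
        s.add(And(0 <= x, x < GRID, 0 <= y, y < GRID))
    for i, j in combinations(range(AGENTS), 2):
        dx, dy = xs[i] - xs[j], ys[i] - ys[j]
        # |dx| + |dy| >= MIN_DIST, with abs() encoded via If
        s.add(If(dx >= 0, dx, -dx) + If(dy >= 0, dy, -dy) >= MIN_DIST)
    if s.check() == sat:
        m = s.model()
        return [(m[x].as_long(), m[y].as_long()) for x, y in zip(xs, ys)]
    return None

print("filtered:", random_filtered())
print("solved:  ", z3_generated())
```

Even in this toy form a similar trade-off shows up: the rejection loop degrades as the constraint becomes harder to satisfy by chance, while the solver is insensitive to that but its formula grows with the number of agent variables.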
32

Application of software engineering methodologies to the development of mathematical biological models

Gill, Mandeep Singh January 2013 (has links)
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid in capturing such biological models, including CellML and SBML; however, these DSLs are designed to facilitate model curation rather than simplify model development. We present research into the application of techniques from software engineering to this domain, starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding; thus, models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, which is beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
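To illustrate what an ODE-based biological model boils down to in executable form, here is a minimal sketch, not the Ode DSL itself: a reversible two-species reaction written as plain Python and integrated with a fixed-step Euler scheme, the kind of repetitive simulation code a model compiler would generate from the equations alone. The rate constants and step size are arbitrary.

```python
# dA/dt = -k1*A + k2*B,  dB/dt = k1*A - k2*B   (reversible reaction A <-> B)
def deriv(state, k1=0.3, k2=0.1):
    a, b = state
    return (-k1 * a + k2 * b, k1 * a - k2 * b)

def euler(deriv, state, dt=0.01, steps=1000):
    traj = [state]
    for _ in range(steps):
        ds = deriv(state)
        state = tuple(x + dt * dx for x, dx in zip(state, ds))
        traj.append(state)
    return traj

traj = euler(deriv, state=(1.0, 0.0))
print(traj[-1])   # approaches the equilibrium (k2/(k1+k2), k1/(k1+k2)) = (0.25, 0.75)
```

A DSL of the kind described above lets the modeller state only the equations and have the integrator, ideally a far more sophisticated one than forward Euler, generated automatically.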
33

Compilation efficace d'applications de traitement d'images pour processeurs manycore / Efficient Compilation of Image Processing Applications for Manycore Processors

Guillou, Pierre 30 November 2016 (has links)
Nous assistons à une explosion du nombre d’appareils mobiles équipés de capteurs optiques : smartphones, tablettes, drones... préfigurent un Internet des objets imminent. De nouvelles applications de traitement d’images (filtres, compression, réalité augmentée) exploitent ces capteurs mais doivent répondre à des contraintes fortes de vitesse et d’efficacité énergétique. Les architectures modernes — processeurs manycore, GPUs,... — offrent un potentiel de performance, avec cependant une hausse sensible de la complexité de programmation. L’ambition de cette thèse est de vérifier l’adéquation entre le domaine du traitement d’images et ces architectures modernes : concilier programmabilité, portabilité et performance reste encore aujourd’hui un défi. Le domaine du traitement d’images présente un fort parallélisme intrinsèque, qui peut potentiellement être exploité par les différents niveaux de parallélisme offerts par les architectures actuelles. Nous nous focalisons ici sur le domaine du traitement d’images par morphologie mathématique, et validons notre approche avec l’architecture manycore du processeur MPPA de la société Kalray. Nous prouvons d’abord la faisabilité de chaînes de compilation intégrées, composées de compilateurs, bibliothèques et d’environnements d’exécution, qui à partir de langages de haut niveau tirent parti de différents accélérateurs matériels. Nous nous concentrons plus particulièrement sur les processeurs manycore, suivant les différents modèles de programmation : OpenMP ; langage flot de données ; OpenCL ; passage de messages. Trois chaînes de compilation sur quatre ont été réalisées, et sont accessibles à des applications écrites dans des langages spécifiques au domaine du traitement d’images intégrés à Python ou C. Elles améliorent grandement la portabilité de ces applications, désormais exécutables sur un plus large panel d’architectures cibles. Ces chaînes de compilation nous ont ensuite permis de réaliser des expériences comparatives sur un jeu de sept applications de traitement d’images. Nous montrons que le processeur MPPA est en moyenne plus efficace énergétiquement qu’un ensemble d’accélérateurs matériels concurrents, et ceci particulièrement avec le modèle de programmation flot de données. Nous montrons que la compilation d’un langage spécifique intégré à Python vers un langage spécifique intégré à C permet d’augmenter la portabilité et d’améliorer les performances des applications écrites en Python. Nos chaînes de compilation forment enfin un environnement logiciel complet dédié au développement d’applications de traitement d’images par morphologie mathématique, capable de cibler efficacement différentes architectures matérielles, dont le processeur MPPA, et proposant des interfaces dans des langages de haut niveau. / Many mobile devices now integrate optical sensors; smartphones, tablets, drones... are foreshadowing an impending Internet of Things (IoT). New image processing applications (filters, compression, augmented reality) are taking advantage of these sensors under strong constraints of speed and energy efficiency. Modern architectures, such as manycore processors or GPUs, offer good performance, but are hard to program. This thesis aims at checking the adequacy between the image processing domain and these modern architectures: reconciling programmability, portability and performance is still a challenge today.
Typical image processing applications feature strong, inherent parallelism, which can potentially be exploited by the various levels of hardware parallelism inside current architectures. We focus here on image processing based on mathematical morphology, and validate our approach using the manycore architecture of the Kalray MPPA processor. We first prove that integrated compilation chains, composed of compilers, libraries and run-time systems, make it possible to take advantage of various hardware accelerators from high-level languages. We especially focus on manycore processors, through various programming models: OpenMP, a data-flow language, OpenCL, and message passing. Three out of four compilation chains have been developed, and are available to applications written in domain-specific languages (DSLs) embedded in C or Python. They greatly improve the portability of applications, which can now be executed on a wide range of target architectures. These compilation chains then allowed us to perform comparative experiments on a set of seven image processing applications. We show that the MPPA processor is on average more energy-efficient than competing hardware accelerators, especially with the data-flow programming model. We show that compiling a DSL embedded in Python to a DSL embedded in C increases both the portability and the performance of Python-written applications. Our compilation chains thus form a complete software environment dedicated to image processing application development. This environment is able to efficiently target several hardware architectures, among them the MPPA processor, and offers interfaces in high-level languages.
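The sketch below gives a reduced illustration of two ideas from this abstract: mathematical-morphology operators on a binary image, and a deep-embedded DSL style in which operations are recorded as a pipeline rather than executed immediately, so the same description could be interpreted in Python or lowered to C or accelerator code. The operator set and the Pipeline class are invented for the example; they are not the thesis tool chains.

```python
# 3x3 binary dilation/erosion, plus a tiny "pipeline" object that records
# operations instead of running them right away (deep-embedding style).
def dilate(img):
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def erode(img):
    h, w = len(img), len(img[0])
    return [[1 if all(img[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

class Pipeline:
    def __init__(self, ops=()):
        self.ops = list(ops)
    def then(self, op):
        return Pipeline(self.ops + [op])
    def run(self, img):            # reference interpreter in Python;
        for op in self.ops:        # a compiler would instead emit target code
            img = op(img)
        return img

opening = Pipeline().then(erode).then(dilate)   # morphological opening
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 0, 1],        # the isolated pixel on the right is noise
       [0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0]]
print(opening.run(img))           # the 3x3 block survives, the isolated pixel is removed
```

Recording the pipeline first is what gives a compiler the whole program to optimise before any pixel is touched.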
34

Aplicação da análise de mutantes no contexto do teste e validação de redes de Petri coloridas / The application of mutation testing in the context of testing and validation of coloured Petri nets

Simão, Adenilso da Silva 17 December 2004 (has links)
O uso de técnicas e métodos formais contribui para o desenvolvimento de sistemas confiáveis. No entanto, apesar do rigor obtido, em geral, é necessário que essas técnicas sejam complementadas com atividades de teste e validação. Deve-se ressaltar que o custo para eliminar erros encontrados nas etapas iniciais de desenvolvimento é menor do que quando esses erros são encontrados nas fases posteriores. Dessa forma, é essencial a condução de atividades de VV&T - Verificação, Validação e Teste - desde as primeiras fases de desenvolvimento. Critérios de teste, como uma forma sistemática de avaliar e/ou gerar casos de teste de qualidade e, dessa forma, contribuir para aumentar a qualidade da atividade de teste, têm sido investigados para o teste de especificação de Sistemas Reativos. A técnica Redes de Petri Coloridas tem sido constantemente utilizada para a especificação do aspecto comportamental de Sistemas Reativos. Apesar de existirem diversas técnicas de análise, um aspecto não considerado é a cobertura alcançada, visto que, em geral, a aplicação exaustiva não é viável devido ao alto custo. Considerando a relevância do estabelecimento de métodos sistemáticos para o teste e validação dessas especificações, este trabalho propõe a aplicação do critério de teste Análise de Mutantes para o teste de Redes de Petri Coloridas. Neste trabalho foram almejados três objetivos principais, os quais podem ser divididos em estudos teóricos, estudos empíricos e automatização. No contexto de estudos teóricos, foi realizada a definição e embasamento teórico para possibilitar a aplicação da Análise de Mutantes no contexto de Redes de Petri Coloridas. Além disso, investigaram-se mecanismos genéricos para a descrição e geração de mutantes. Definiu-se um algoritmo para a geração de casos de teste baseado na Análise de Mutantes. No contexto de estudos empíricos, foram conduzidos estudos de caso para avaliar a aplicabilidade e eficácia dos resultados teóricos obtidos. Finalmente, no contexto de automatização, foram desenvolvidas ferramentas de apoio à aplicação da Análise de Mutantes. / The use of formal methods and techniques contributes to the development of highly reliable systems, but, in spite of the achieved rigour, these techniques must be complemented with testing and validation activities. It should be highlighted that the cost of eliminating errors found in the early phases of development is smaller than when those errors are found in the later phases. Therefore, the accomplishment of VV&T activities - Verification, Validation and Test - starting at the first development phases is essential. Testing criteria, as a systematic way to evaluate and/or generate test cases and thereby improve the quality of the testing activity, have been proposed for testing reactive system specifications. A technique that has been steadily employed for specifying the behavioural aspect of reactive systems is coloured Petri nets. Although there are several analysis and validation techniques, a usually neglected aspect is the achieved coverage, given that, in general, exhaustive application is not feasible due to its high cost. Considering the relevance of establishing systematic methods for the testing and validation of coloured Petri net based specifications, this work investigates the viability of applying Mutation Testing to coloured Petri nets. In this work three main goals were pursued, which can be grouped into theoretical studies, empirical studies and tool development.
In the context of theoretical studies, the theoretical concepts needed to apply Mutation Testing to coloured Petri nets were defined. Moreover, a mutation-based algorithm was defined to generate test sequences for Petri nets. In the context of empirical studies, case studies were carried out to evaluate the applicability and effectiveness of the theoretical results obtained. Finally, in the context of tool development, tools supporting the application of Mutation Testing were developed.
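Mutation analysis itself is independent of the modelling formalism, and its core loop fits in a few lines. The sketch below is a generic illustration, not tied to coloured Petri nets or to the tools developed in this work: each mutant is the original specification with one small change, and the mutation score is the fraction of mutants that some test case can tell apart from the original. The guard, the mutants, and the test cases are invented.

```python
# Original "specification": a guard deciding whether a transition may fire,
# here simply: fire when there are at least 2 tokens and the colour matches.
def original(tokens, colour):
    return tokens >= 2 and colour == "red"

# Mutants: single small changes to the guard (mutation operators applied by hand).
mutants = [
    lambda tokens, colour: tokens > 2 and colour == "red",    # >= replaced by >
    lambda tokens, colour: tokens >= 2 or colour == "red",    # and replaced by or
    lambda tokens, colour: tokens >= 2 and colour != "red",   # == replaced by !=
]

test_cases = [(2, "red"), (1, "blue"), (3, "green")]

def mutation_score(original, mutants, tests):
    killed = sum(
        any(m(*t) != original(*t) for t in tests)   # some test distinguishes the mutant
        for m in mutants
    )
    return killed / len(mutants)

print(mutation_score(original, mutants, test_cases))  # 1.0: every mutant is killed
```

In the thesis this idea is lifted to coloured Petri net specifications, with mutation operators that perturb the net itself and test cases evaluated by their ability to kill the resulting mutants.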
35

Description of languages based on object-oriented meta-modelling

Scheidgen, Markus 19 May 2009 (has links)
In dieser Dissertation schaue ich auf objekt-orientierte Metamodellierung und wie sie verwendet werden kann, um Computersprachen zu beschreiben. Dabei fokussiere ich mich nicht nur auf die Beschreibung von Sprachen, sondern auch auf die Verwendung von Sprachbeschreibungen zur automatischen Erzeugung von Sprachwerkzeugen aus Sprachbeschreibungen. Ich nutze die Idee von Metasprachen und Metawerkzeugen. Metasprachen werden verwendet, um bestimmte Sprachaspekte, wie Notationen und Semantiken, zu beschreiben, und Metawerkzeuge werden verwendet, um Sprachwerkzeuge wie Editoren und Interpreter aus entsprechenden Beschreibungen zu erzeugen. Diese Kombination von Beschreibung und automatischer Entwicklung von Werkzeugen ist als Domänenspezifische Modellierung (DSM) bekannt. Ich verwende DSM basierend auf objekt-orientierter Metamodellierung zur Beschreibung der wichtigen Aspekte ausführbarer Computersprachen. Ich untersuche existierende Metasprachen und Metawerkzeuge für die Beschreibung von Sprachvorkommen, ihrer konkreten Repräsentation und Semantik. Weiter entwickle ich eine neue Plattform zur Beschreibung von Sprachen basierend auf dem CMOF-Modell der OMG MOF 2.x Empfehlungen. Ich entwickle eine Metasprache und ein Metawerkzeug für textuelle Notationen. Schlussendlich entwickle ich eine graphische Metasprache und ein Metawerkzeug zur Beschreibung von operationaler Semantik von Computersprachen. Um die Anwendbarkeit der vorgestellten Techniken zu prüfen, nehme ich SDL, die Specification and Description Language, als einen Archetypen für textuell notierte Sprachen mit ausführbaren Instanzen. Für diesen Archetyp zeige ich, dass die präsentierten Metasprachen und Metawerkzeuge es erlauben, solche Computersprachen zu beschreiben und automatisch Werkzeuge für diese Sprachen zu erzeugen. / In this thesis, I look into object-oriented meta-modelling and how it can be used to describe computer languages. In doing so, I focus not only on describing languages but also on utilising the language descriptions to automatically create language tools. I use the notion of meta-languages and meta-tools. Meta-languages are used to describe certain language aspects, such as notation or semantics, and meta-tools are used to create language tools, such as editors or interpreters, from corresponding descriptions. This combination of describing and automated development of tools is known as domain specific modelling (DSM). I use DSM based on object-oriented meta-modelling to describe all important aspects of executable computer languages. I look into existing meta-languages and meta-tools for describing language utterances, their concrete representation, and semantics. Furthermore, I develop a new platform to define languages based on the CMOF-model of the OMG MOF 2.x recommendations. I develop a meta-language and meta-tool for textual language notations. Finally, I develop a new graphical meta-language and meta-tool for describing the operational semantics of computer languages. To prove the applicability of the presented techniques, I take SDL, the Specification and Description Language, as an archetype for textually notated languages with executable instances. For this archetype, I show that the presented meta-languages and meta-tools make it possible to describe such computer languages and to automatically create tools for those languages.
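As a flavour of the approach, the sketch below plays the role of a metamodel for a tiny state-machine language (plain Python classes standing in for a CMOF metamodel) together with an operational semantics given as an interpreter over metamodel instances. It is only an illustration of the general idea; the thesis works with MOF 2.x/CMOF, dedicated meta-languages, and SDL, none of which appear here.

```python
from dataclasses import dataclass

# "Metamodel": the abstract syntax of a tiny state-machine language.
@dataclass
class State:
    name: str

@dataclass
class Transition:
    source: State
    trigger: str
    target: State

@dataclass
class StateMachine:
    states: list
    transitions: list
    initial: State

# "Operational semantics": a step function over metamodel instances.
def run(machine, events):
    current = machine.initial
    for ev in events:
        for t in machine.transitions:
            if t.source is current and t.trigger == ev:
                current = t.target
                break
    return current.name

idle, busy = State("Idle"), State("Busy")
m = StateMachine(states=[idle, busy],
                 transitions=[Transition(idle, "start", busy),
                              Transition(busy, "stop", idle)],
                 initial=idle)
print(run(m, ["start", "stop", "start"]))   # -> "Busy"
```

A meta-tool in the sense described above would generate an editor and an interpreter like this from the metamodel and the semantics description, instead of having them written by hand.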
36

Model transformation languages for domain-specific workbenches

Wider, Arif 15 December 2015 (has links)
Domänenspezifische Sprachen (DSLs) sind Software-Sprachen, die speziell für bestimmte Anwendungsdomänen entwickelt wurden. Mithilfe von DSLs können Domänenexperten ihr Domänenwissen auf einem hohen Abstraktionsniveau beschreiben. Wie andere Software-Sprachen auch, benötigen DSLs Sprachwerkzeuge, die Assistenz bei der Erstellung und Verarbeitung von domänenspezifischen Modellen bieten. Eine domänenspezifische Werkbank (DSW) ist ein Software-Werkzeug, welches mehrere solcher Sprachwerkzeuge für eine DSL miteinander integriert. Existierende Werkzeuge, die es erlauben, eine DSW aufgrund der Beschreibung einer DSL automatisch generieren zu lassen, unterstützen jedoch nicht die Beschreibung und Generierung von editierbaren Sichten. Eine Sicht ist ein Teil einer DSW, der nur einen bestimmten Aspekt eines Modells darstellt. Diese Dissertation stellt spezielle Modelltransformationssprachen (MTLs) vor, mit denen die Synchronisation von Sichten in einer generierten DSW beschrieben werden kann. Dadurch können DSWs mit editierbaren Sichten mittels existierender Werkzeuge zur Generierung von Sprachwerkzeugen erstellt werden. Dafür wird eine DSW für die Nanophysik-Domäne sowie eine Taxonomie von Synchronisationstypen vorgestellt, welche es erlaubt, genau zu bestimmen, welche Art von Modelltransformationen für die Synchronisation von Sichten in dieser Werkbank benötigt werden. Entsprechend dieser Anforderungen werden zwei MTLs entwickelt. Insbesondere wird eine bidirektionale MTL entwickelt. Mit solch einer Sprache kann man eine Relation, welche definiert, ob zwei Modelle synchron sind, so beschreiben, dass die entsprechende Synchronisationslogik automatisch abgeleitet werden kann. Die gezeigten MTLs werden als interne DSLs - das heißt eingebettet als ausdrucksstarke Bibliotheken - in der Programmiersprache Scala implementiert. Auf diese Weise kann Scalas Typprüfung genutzt werden, um Transformationen und deren Komposition statisch zu verifizieren. / Domain-specific languages (DSLs) are software languages which are tailored to a specific application domain. DSLs enable domain experts to create domain-specific models, that is, high-level descriptions of domain knowledge. Like any other software language, DSLs rely on language tools which provide assistance for processing and managing domain-specific models. A domain-specific workbench is an integrated set of such tools for a DSL. A recently proposed approach is to automatically generate a domain-specific workbench for a DSL from a description of that DSL. However, existing tools which apply this approach do not support describing and generating editable domain-specific views. A view is a part of a domain-specific workbench that presents only one aspect of a model, for example, its hierarchical structure. This dissertation presents special model transformation languages which support the description of view synchronization in a generated domain-specific workbench. This allows a multi-view domain-specific workbench to be created with existing tools for language tool generation. We present a generated domain-specific workbench for the nanophysics domain together with a taxonomy of synchronization types. This allows us to precisely define what model transformations are required for view synchronization in that workbench. According to these requirements, we develop two transformation languages by adapting existing ones. In particular, we develop a bidirectional transformation language.
With such a language one can describe a relation which defines whether two models are in sync and let the synchronization logic be inferred automatically. We implement the model transformation languages as internal DSLs - that is, embedded as expressive libraries - in the Scala programming language and use Scala's type checking for static verification of transformations and their composition.
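The bidirectional idea, describing one relation between a model and a view and obtaining both synchronization directions, can be illustrated with a classic get/put lens. The sketch below is plain Python rather than the Scala-internal DSLs developed in the thesis, and the model and view types are invented.

```python
from dataclasses import dataclass, replace

# Source model (full) and view model (one aspect of it).
@dataclass(frozen=True)
class Element:
    name: str
    position: tuple   # hidden in the view
    label: str        # shown in the view

@dataclass(frozen=True)
class ViewItem:
    name: str
    label: str

# A lens is a pair (get, put) satisfying the round-trip laws
#   get(put(s, v)) == v   and   put(s, get(s)) == s.
def get(src):
    return ViewItem(src.name, src.label)

def put(src, view):
    # write the edited view back, keeping the parts the view does not show
    return replace(src, name=view.name, label=view.label)

e = Element("gate", position=(3, 7), label="NAND")
edited = ViewItem(get(e).name, "NOR")      # user edits the label in the view
print(put(e, edited))                      # position (3, 7) is preserved

assert get(put(e, edited)) == edited       # PutGet
assert put(e, get(e)) == e                 # GetPut
```

In a bidirectional language of the kind described above, only the relation between the two types is written down and both directions are derived; embedding the language in Scala additionally lets the host type checker verify that composed transformations fit together.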
37

Analyse et compilation de langages de programmation parallèle / Analysis and Compilation of Parallel Programming Languages

Susungi, Adilla 26 November 2018 (has links)
La compilation traditionnelle est confrontée à de nombreux défis face aux besoins d'optimisations de programmes pour architectures parallèles. Un défi particulier est la conception de langages et représentations intermédiaires (RIs) appropriés. Bien que différentes RIs aient été proposées pour repousser les limites de la compilation traditionnelle, la plupart ne sont toujours pas adaptées pour appliquer des transformations de programmes pertinentes. Différentes alternatives sont donc de plus en plus exploitées, telles que l'autotuning ou la compilation interactive. Ces dernières nécessitent l'usage de langages intermédiaires fondamentalement différents, par exemple, les méta-langages pour la transformation de programmes. Dans cette thèse, centrée sur les besoins en applications numériques, nous étudions ce type de méta-langages ; nous adressons particulièrement quatre questions : (i) Comment introduire une expressivité spécifique à un domaine ? (ii) Comment repenser leur conception pour améliorer leur flexibilité dans la composition de transformations et la génération de plusieurs variantes de programme ? (iii) Jusqu'où pouvons-nous introduire du support pour le NUMA (Non-Uniform Memory Access) ? (iv) En tant que nouvelle classe de méta-langages, comment formaliser leur sémantique ? Nous répondons à ces questions au travers de la conception et la sémantique de TeML, un méta-langage pour l'optimisation d'applications tensorielles. / Traditional compilation faces numerous challenges with program optimizations for parallel architectures. A particular challenge is the design of proper intermediate languages and representations to enable the application of relevant optimization techniques. Various parallel intermediate representations and languages have been proposed, but most are still not suited to applying relevant program transformations. To overcome this issue, different alternatives are more and more exploited, such as empirical autotuning or interactive compilation. Such alternatives require fundamentally different types of intermediate languages, such as transformation meta-languages. In this thesis, we study transformation meta-languages for numerical applications; we particularly address four questions: (i) How do we introduce domain-specific expressiveness? (ii) How do we rethink their design to enhance their flexibility in composing optimization paths and generating multiple program variants? (iii) How far can we introduce NUMA (Non-Uniform Memory Access) awareness? (iv) As a new class of meta-languages, how do we formalize their semantics? We answer these questions through the design and semantics of TeML, a tensor optimization meta-language.
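To indicate what a transformation meta-language manipulates, the sketch below represents a loop nest as data and two classic transformations (interchange and tiling) as composable functions, so that different compositions yield different program variants. This is a hypothetical mini-language for illustration only; it is not TeML, whose constructs and semantics are defined in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Loop:
    index: str
    extent: int

# The "program" is the list of loops around some tensor statement; the body is elided.
nest = [Loop("i", 1024), Loop("j", 1024)]

def interchange(nest, a, b):
    nest = list(nest)
    ia = next(k for k, l in enumerate(nest) if l.index == a)
    ib = next(k for k, l in enumerate(nest) if l.index == b)
    nest[ia], nest[ib] = nest[ib], nest[ia]
    return nest

def tile(nest, idx, size):
    out = []
    for l in nest:
        if l.index == idx:                       # split one loop into outer/inner
            out.append(Loop(idx + "_o", l.extent // size))
            out.append(Loop(idx + "_i", size))
        else:
            out.append(l)
    return out

# Two optimization paths = two compositions = two program variants.
variant_1 = tile(interchange(nest, "i", "j"), "j", 32)
variant_2 = tile(tile(nest, "i", 64), "j", 64)
print([l.index for l in variant_1])   # ['j_o', 'j_i', 'i']
print([l.index for l in variant_2])   # ['i_o', 'i_i', 'j_o', 'j_i']
```

Composing such functions in different orders is exactly the multiple-optimization-paths, multiple-variants flexibility that the questions above ask for.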
38

A language-independent methodology for compiling declarations into open platform frameworks / Compilation de déclarations dans des cadriciels : une méthodologie indépendante du langage

Van der Walt, Paul 14 December 2015 (has links)
Dans le domaine des plates-formes ouvertes, l'utilisation des cadriciels (frameworks) enrichis par des déclarations pour exprimer les permissions de l'application est de plus en plus répandue. Ceci est une réaction logique au fait qu'il y a une explosion d'adoption des appareils embarqués et mobiles. Leur omniprésence dans notre vie quotidienne engendre des craintes liées à la sécurité et à la vie privée, car l'usager partage de plus en plus ses données et ressources privées avec des tiers qui développent des applications auxquelles on n'a pas de raison de faire confiance. Malheureusement, la manière dont ces langages de spécification ainsi que ces cadres d'applications sont développés est généralement assez ad hoc et repose sur un domaine d'application et un langage de programmation fixes. De plus, ces cadriciels ne sont pas assez restrictifs pour régler le problème de la fuite de données privées et ne donnent souvent pas non plus assez d'informations à l'usager sur le comportement attendu de l'application. Cette thèse présente une méthodologie généraliste pour développer des cadriciels dirigés par des déclarations, qui cible un spectre large de langages de programmation. Nous montrons comment des langages de déclaration expressifs permettent de spécifier avec modularité les droits d'accès aux ressources ainsi que le flux de contrôle d'une telle application. Ces langages peuvent ensuite être compilés en un cadriciel garantissant à l'usager final le respect de ces permissions. Par rapport aux cadriciels existants, notre méthodologie permet de guider la personne qui développe des applications à partir des spécifications ainsi que d'informer l'usager final sur l'usage des ressources sensibles. Contrairement aux travaux existants, la méthodologie présentée dans cette thèse ne repose pas sur un langage de programmation particulier. Nous montrons comment mettre en oeuvre de tels cadriciels dans un spectre de langages : des langages avec typage statique ou dynamique, et suivant le paradigme objet ou fonctionnel. L'efficacité de l'approche est montrée à travers des prototypes dans le domaine des applications mobiles dans deux langages très différents, à savoir Java et Racket, ce qui montre la généralité de notre approche. / In the domain of open platforms, it has become common to use application programming frameworks extended with declarations that express permissions of applications. This is a natural reaction to the ever more widespread adoption of mobile and pervasive computing devices. Their wide adoption raises privacy and safety concerns for users, as a result of the increasing number of sensitive resources a user is sharing with non-certified third-party application developers. However, the approach to designing these declaration languages and the frameworks that enforce their requirements is often ad hoc, and limited to a specific combination of application domain and programming language. Moreover, most widely used frameworks fail to address serious privacy leaks, and, crucially, do not provide the user with insight into application behaviour. This dissertation presents a generalised methodology for developing declaration-driven frameworks in a wide spectrum of host programming languages. We show that rich declaration languages, which express modularity, resource permissions and application control flow, can be compiled into frameworks that provide strong guarantees to end users.
Compared to other declaration-driven frameworks, our methodology provides guidance to the application developer based on the specifications, and clear insight to the end user regarding the use of their private resources. Contrary to previous work, the methodology we propose does not depend on a specific host language, or even on a specific programming paradigm. We demonstrate how to implement declaration-driven frameworks in languages with static type systems, completely dynamic languages, object-oriented languages, or functional languages. The efficacy of our approach is shown through prototypes in the domain of mobile computing, implemented in two widely differing host programming languages, demonstrating the generality of our approach.
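A very small Python rendition of the declaration-driven idea, not an artefact of the methodology itself: the application ships a declaration of the resources it may use, the enforcing layer is built from that declaration, and any undeclared access is refused. The resource names, the declaration format, and the helper names are invented for the example.

```python
# Declared permissions would normally live in a manifest shipped with the app.
DECLARATION = {"location": True, "contacts": False, "camera": False}

class UndeclaredAccess(Exception):
    pass

def make_framework(declaration):
    """Build a resource-access facade that enforces the declaration."""
    def access(resource):
        if not declaration.get(resource, False):
            raise UndeclaredAccess(f"undeclared access to {resource!r}")
        return f"<{resource} handle>"        # stand-in for the real resource
    return access

access = make_framework(DECLARATION)

def nearby_friends_app():
    loc = access("location")                 # fine: declared
    print("using", loc)
    access("contacts")                       # refused: not declared

try:
    nearby_friends_app()
except UndeclaredAccess as e:
    print("framework blocked:", e)
```

The point of the methodology is that both the declaration language and this enforcing layer are derived systematically, for statically or dynamically typed hosts alike, and the declaration doubles as information shown to the end user.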
39

A Design-Driven Methodology for the Development of Large-Scale Orchestrating Applications / Une methodologie dirigée par la conception pour le developpement d’applications d’orchestration à grande echelle

Kabac, Milan 26 September 2016 (has links)
Notre environnement est de plus en plus peuplé de grandes quantités d'objets intelligents. Certains surveillent des places de stationnement disponibles, d'autres analysent les conditions matérielles dans les bâtiments ou détectent des niveaux de pollution dangereux dans les villes. Les quantités massives de capteurs et d'actionneurs constituent des infrastructures de grande envergure qui s'étendent sur des terrains de stationnement entiers, des campus comprenant plusieurs bâtiments ou des champs agricoles. Le développement d'applications pour de telles infrastructures reste difficile, malgré des déploiements réussis dans un certain nombre de domaines. Une connaissance considérable des spécificités matériel / réseau de l'infrastructure de capteurs est requise de la part du développeur. Pour remédier à ce problème, des méthodologies et des outils de développement logiciel permettant de relever le niveau d'abstraction doivent être introduits pour que des développeurs non spécialisés puissent programmer les applications. Cette thèse présente une méthodologie dirigée par la conception pour le développement d'applications orchestrant des quantités massives d'objets communicants. La méthodologie est basée sur un langage de conception dédié, nommé DiaSwarm, qui fournit des constructions déclaratives de haut niveau permettant aux développeurs de traiter des masses d'objets en phase de conception, avant de programmer l'application. La programmation générative est utilisée pour produire des cadres de programmation spécifiques à la conception pour guider et soutenir le développement d'applications dans ce domaine. La méthodologie intègre le traitement parallèle de grandes quantités de données collectées à partir de masses de capteurs. Nous introduisons un langage de déclarations permettant de générer des cadres de programmation basés sur le modèle de programmation MapReduce. En outre, nous étudions comment la conception peut être utilisée pour rendre explicites les ressources requises par les applications ainsi que leur utilisation. Pour faire correspondre les exigences de l'application à une infrastructure de capteurs cible, nous considérons les déclarations de conception à différents stades du cycle de vie des applications. Le passage à l'échelle de cette approche est évalué dans une expérience qui montre comment les cadres de programmation générés s'appuyant sur le modèle de programmation MapReduce sont utilisés pour le traitement efficace de grands ensembles de données de relevés des capteurs. Nous examinons l'efficacité de l'approche proposée pour relever les principaux défis du génie logiciel dans ce domaine en mettant en oeuvre des scénarios d'application qui nous sont fournis par des partenaires industriels. Nous avons sollicité des programmeurs professionnels pour évaluer l'utilisabilité de notre approche et présenter des données quantitatives et qualitatives de l'expérience. / Our environment is increasingly populated with large numbers of smart objects. Some monitor free parking spaces, others analyze material conditions in buildings or detect unsafe pollution levels in cities. The massive numbers of sensing and actuation devices constitute large-scale infrastructures that span over entire parking lots, campuses of buildings or agricultural fields. Although such infrastructures have been successfully deployed in a number of domains, developing applications for them remains challenging.
Considerable knowledge about the hardware/network specificities of the sensor infrastructure is required on the part of the developer. To address this problem, software development methodologies and tools raising the level of abstraction need to be introduced to allow non-expert developers to program applications. This dissertation presents a design-driven methodology for the development of applications orchestrating massive amounts of networked objects. The methodology is based on a domain-specific design language, named DiaSwarm, which provides high-level, declarative constructs allowing developers to deal with masses of objects at design time, prior to programming the application. Generative programming is used to produce design-specific programming frameworks to guide and support the development of applications in this domain. The methodology integrates the parallel processing of large amounts of data collected from masses of sensors. We introduce specific language declarations resulting in the generation of programming frameworks based on the MapReduce programming model. We furthermore investigate how design can be used to make explicit the resources required by applications as well as their usage. To match the application requirements to a target sensor infrastructure, we consider design declarations at different stages of the application lifecycle. The scalability of this approach is evaluated in an experiment, which shows how the generated programming frameworks relying on the MapReduce programming model are used for the efficient processing of large datasets of sensor readings. We examine the effectiveness of the proposed approach in dealing with key software engineering challenges in this domain by implementing application scenarios provided to us by industrial partners. We solicited professional programmers to evaluate the usability of our approach and present quantitative and qualitative data from the experiment.
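The sketch below shows, in plain Python, the shape of the MapReduce-style processing that the generated programming frameworks target: map parking-sensor readings to (lot, occupied) pairs, group by key, and reduce to an occupancy summary per parking lot. The dataset and field names are invented; in the approach described above this structure is produced by the generated framework rather than written by hand.

```python
from collections import defaultdict

# (sensor_id, parking_lot, occupied) readings -- an invented toy dataset
readings = [
    ("s1", "lot_A", 1), ("s2", "lot_A", 0), ("s3", "lot_A", 1),
    ("s4", "lot_B", 1), ("s5", "lot_B", 1), ("s6", "lot_C", 0),
]

def map_phase(record):
    _, lot, occupied = record
    yield lot, occupied                      # key by parking lot

def reduce_phase(lot, values):
    return lot, sum(values), len(values)     # occupied count, total spaces

# Shuffle: group mapper output by key.
groups = defaultdict(list)
for record in readings:
    for key, value in map_phase(record):
        groups[key].append(value)

results = [reduce_phase(lot, vals) for lot, vals in sorted(groups.items())]
print(results)   # [('lot_A', 2, 3), ('lot_B', 2, 2), ('lot_C', 0, 1)]
```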
40

Pristup modelovanju specifikacija informacionog sistema putem namenskih jezika / An Approach to Modeling Information System Specifications based on Domain Specific Languages

Čeliković, Milan 12 July 2018 (has links)
No description available.
