About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

SimITK: Model Driven Engineering for Medical Imaging

Trezise, Melissa 06 August 2013 (has links)
The Insight Segmentation and Registration Toolkit (ITK) is a widely used open-source medical imaging library. Written in C++, ITK chiefly provides functionality to register, segment, and filter medical images. Although extremely powerful, ITK has a very steep learning curve for users with little or no programming background, and it was for this reason that SimITK was developed. SimITK wraps ITK into the model-driven engineering environment Simulink, part of the Matlab development suite. The first released version of SimITK was a proof of concept: it demonstrated that ITK could be wrapped successfully in Simulink, but very few segmentation and registration functions were available, and the system was based on ITK version 3 with a semi-automatic wrapping procedure. This thesis presents a new version of SimITK in which thirty-seven image filter, twelve optimizer, and nineteen transform classes from ITK version 4 are successfully wrapped and tested. These classes were chosen to represent a broad range of usability (in the case of the filters) and to allow greater flexibility when creating registration pipelines by offering more options for optimizers, transforms, and metrics. Several usability improvements were needed for SimITK to transition to a usable research tool: moving from wrapping ITK version 3 to a fully automated wrapping of ITK version 4, and modifications to the registration pipeline, including reporting the metric value while a registration model executes and allowing the output image size to be specified for certain filters. The automated wrapping procedure for ITK version 4 and the improved usability of the registration pipeline have put SimITK on a path towards a usable research tool. The author will release these changes and update the installation documentation and tutorials available at www.SimITKVTK.com. / Thesis (Master, Computing) -- Queen's University, 2013-08-05
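ITK composes a registration from three interchangeable pieces (a metric, an optimizer, and a transform), which is exactly the structure SimITK exposes as Simulink blocks. As a rough illustration of that pipeline, here is a minimal sketch using ITK's own Python wrapping, SimpleITK, rather than SimITK itself; the file names are placeholders and the parameter values are arbitrary.

```python
import SimpleITK as sitk

# Read the fixed and moving images (placeholder file names).
fixed = sitk.ReadImage("fixed.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii", sitk.sitkFloat32)

# Assemble a registration pipeline: metric + optimizer + transform,
# the same three ingredients SimITK exposes as separate Simulink blocks.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
reg.SetInterpolator(sitk.sitkLinear)

# Report the metric value each iteration, mirroring the metric-value
# output SimITK added to its registration models.
reg.AddCommand(sitk.sitkIterationEvent,
               lambda: print(reg.GetOptimizerIteration(),
                             reg.GetMetricValue()))

transform = reg.Execute(fixed, moving)
result = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```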
2

Interprétations de la composition d'activités / Interpretations of the composition of activities

Blay-Fornarino, Mireille 16 April 2009 (has links) (PDF)
Our research concerns the reliable composition of software activities. It falls within the general framework of software engineering and covers the design, implementation, and adaptation stages of distributed software applications. The central theme is the modelling of activities and their composition while guaranteeing various properties. The specificity and originality of this work is to have approached composition not through languages but through the key elements of composition, independently of specific implementations. By grounding this work in a formal approach, we propose a unified view of a composition process and characterize the differences in terms of interpretations. By backing this formalization with several implementations, both as programming environments and as applications, we anchor the approach in a functional reality. The main applications of this work concern the composition of interactions between heterogeneous components and the composition of workflows.
3

A framework for the analysis of failure behaviors in component-based model-driven development of dependable systems

Javed, Muhammad Atif, Faiz UL Muram January 2011 (has links)
Currently, the development of high-integrity embedded component-based software systems is not supported by well-integrated means for quality evaluation and design support within a development process. Quality, especially dependability, is very important for such systems. The CHESS (Composition with Guarantees for High-integrity Embedded Software Components Assembly) project aims at providing a new systems development methodology to capture extra-functional concerns and extend Model Driven Engineering industrial practices and technology approaches to specifically address the architectural structure, the interactions, and the behavior of system components, while guaranteeing their correctness and the level of service at run time. The CHESS methodology is expected to be supported by a tool-set consisting of a set of plug-ins integrated within the Eclipse IDE. In the framework of the CHESS project, this thesis addresses the lack of well-integrated means for quality evaluation and proposes an integrated framework to evaluate the dependability of high-integrity embedded systems. After a survey of failure behavior analysis techniques, a specific technique called Failure Propagation and Transformation Calculus (FPTC) is selected, and a plug-in called CHESS-FPTC is developed within the CHESS tool-set. The FPTC technique allows users to calculate the failure behavior of a system from the failure behavior of its building components. To fully support FPTC, the CHESS-FPTC plug-in allows users to model the failure behavior of the building components, perform the analysis automatically, and get the analysis results back into their initial models. A case study on the AAL2 signaling protocol is presented to illustrate and evaluate the CHESS-FPTC framework. / CHESS Project - http://chess-project.ning.com/
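In FPTC, each component is described by rules that map incoming failure types to outgoing ones (a component may propagate, transform, mask, or introduce failures), and the system-level behaviour is computed over the component graph. The sketch below illustrates the propagation idea on a simple pipeline, with an invented rule encoding; real FPTC notation is richer, and the analysis computes a fixed point over possibly cyclic graphs.

```python
# Minimal FPTC-style propagation sketch (hypothetical encoding).
# Each component maps an incoming failure type to an outgoing one;
# '*' means "pass whatever arrives through unchanged".
NO_FAILURE = "ok"

components = {
    "sensor":     {NO_FAILURE: NO_FAILURE, "late": "omission"},  # transforms lateness
    "controller": {"*": "*"},                                    # propagates faults
    "actuator":   {"value": NO_FAILURE},                         # masks value faults
}
connections = [("sensor", "controller"), ("controller", "actuator")]

def react(rules, incoming):
    """Apply a component's FPTC-like rules to an incoming failure type."""
    if incoming in rules:
        return rules[incoming]
    if "*" in rules:
        return incoming if rules["*"] == "*" else rules["*"]
    return incoming  # default: propagate unchanged

def propagate(first_input):
    """Push a failure type along the pipeline and report each hop."""
    token = first_input
    for src, dst in connections:
        token = react(components[src], token)
        print(f"{src} -> {dst}: {token}")
    return react(components[connections[-1][1]], token)

# Example: a 'late' failure entering the sensor becomes an omission downstream.
print("system output:", propagate("late"))
```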
4

A domain specific language to support the definition of transformation rules for software process tailoring

Silvestre Quiroga, Luis Gregorio January 2018 (has links)
Doctor en Ciencias, Mención Computación / Software process tailoring is the activity of adapting an organization's software process to the needs of particular projects. Model-driven engineering (MDE) has been applied to this end, using models to formalize the software process and the project context, and model transformations to tailor these processes. Although MDE-based tailoring has proven technically feasible, its use in practice requires knowledge of how to tailor processes and how to build models and transformations. Some proposals exist for the automatic generation of transformations as a way of reducing the complexity of tailoring software processes. These proposals usually generate transformations only partially, which must then be completed manually, and are therefore not well suited to software process tailoring because they do not fully overcome the technical barriers to adoption. To face these challenges, this thesis proposes an automatic transformation-generation approach that addresses both the formality required by MDE and the usability needed by the process engineers in charge of tailoring. To that end, we specify tailoring rules using a domain-specific language (DSL). In addition, we define a higher-order transformation (HOT) that takes the specified tailoring rules as input and automatically generates the required process-tailoring transformation. Both the DSL and the HOT are generic and can therefore be reused in any organization. To improve usability, we developed an integrated tool set (ATAGeTT) incorporating both contributions. ATAGeTT was applied in an exploratory case study in two small software companies to evaluate its capability to tailor their processes correctly. The results show that the users were able to specify all the required tailoring rules. A subsequent case study in another company validated the usability of ATAGeTT and the expressiveness of the proposed decision language: the users were able to specify all tailoring rules in a simple way and to execute the process tailoring automatically. The results show that ATAGeTT is easy to learn, usable, and useful for its potential users. Although the evidence is not yet sufficient, the results are highly positive and consistent; we therefore expect this proposal to help improve this activity, particularly in small and medium-sized organizations, which are usually more constrained in performing software process tailoring. / This work was funded by grant CONICYT-PFCHA/Doctorado Nacional para Extranjeros/2013-63130130, and partially supported by projects FONDEF D09I-1171 (ADAPTE) and FONDEF IDeA IT13I20010 (GEMS), and the NIC Chile Scholarship Program.
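The division of labour described above, declarative tailoring rules on one side and a higher-order step that turns them into an executable tailoring transformation on the other, can be sketched in a few lines. Everything below is a toy: the rule fields, the context, and the list-of-tasks process model are invented stand-ins for the thesis's DSL and model transformations.

```python
# Hypothetical tailoring rules: condition on project context -> process variation.
rules = [
    {"when": {"team_size": "small"},  "drop_task": "formal_inspection"},
    {"when": {"criticality": "high"}, "add_task": "independent_audit"},
]

process = ["requirements", "design", "formal_inspection", "coding", "testing"]
context = {"team_size": "small", "criticality": "high"}

def compile_rules(rules):
    """Higher-order step (sketch): turn declarative rules into callables
    that each transform a process model."""
    def make(rule):
        def apply(proc, ctx):
            if all(ctx.get(k) == v for k, v in rule["when"].items()):
                if "drop_task" in rule:
                    proc = [t for t in proc if t != rule["drop_task"]]
                if "add_task" in rule:
                    proc = proc + [rule["add_task"]]
            return proc
        return apply
    return [make(r) for r in rules]

tailored = process
for transform in compile_rules(rules):
    tailored = transform(tailored, context)
print(tailored)  # ['requirements', 'design', 'coding', 'testing', 'independent_audit']
```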
5

Calcul intensif pour l'évaluation de la vulnérabilité en utilisant une approche d'Ingénierie Dirigée par les Modèles : application à la vulnérabilité des prairies au changement climatique sous contraintes de plans d'expériences / Intensive calculation for vulnerability assessment using a Model Driven Engineering approach : application to prairie vulnerability to climate change under experimental design constraints

Lardy, Romain 13 May 2013 (has links)
Vulnerability is the degree to which human or environmental systems are likely to experience harm due to a perturbation or a stress. In recent years, it has become a central focus of global change research (including climate change), and assessing the vulnerability of agro-ecosystems to climate change is one of the priority areas of the French National Institute of Agronomic Research (INRA). The climate change literature contains many explanations of vulnerability, from the notion of sensitivity to more complex ideas, taking into account the exposure history of the system up to the residual impacts of climate change after adaptation. In the framework of the activities of INRA's Grassland Ecosystem Research Unit (UREP) of Clermont-Ferrand, interest is in the vulnerability of grassland and livestock systems to the risk of reduced milk and forage production, and to the problem of increased greenhouse gas emissions that accompanies the production of grassland ecosystem services. Vulnerability assessment has similarities with sensitivity analysis and is based on simulations of the target system, forced to respond to changes in stress factors. Due to the cascade of uncertainties in climate change impact assessment, a large number of simulations are necessary. In this context, the need to reduce user waiting time calls for the design of an appropriate experimental plan, as well as the use of high-performance computing. Moreover, vulnerability assessment may consist of many steps, such as designing the experiment (choice of agro-ecological model, variables of interest, scenarios, reference thresholds, parameter distributions, ...), generating the experimental plans, regressing response surfaces, computing metrics (e.g. vulnerability indices), and optimizing vulnerability (through designing and evaluating adaptation measures). To our knowledge, no specific tool has been built or validated to facilitate the implementation of most of these tasks. The goal of this thesis was therefore to propose a generic method for performing a comprehensive vulnerability analysis under climate change. The work began with a review of the concept of vulnerability and the proposal of a generic approach, based on a critical synthesis of the state of the art. Then, with a Model Driven Engineering approach, we developed a computer tool for vulnerability analysis. This tool, implemented with the Eclipse Modeling Framework (EMF), is generic and modular, and allows the distribution of simulations and the interpretation of their results. Finally, application examples of climate change vulnerability assessment were carried out with the proposed solutions. This approach relied, in particular, on the use of the grassland ecosystem biogeochemical model PaSim ([Riedo et al., 1998], [Vuichard 2007a], [Graux 2011]).
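To make the "computing metrics" step concrete: one simple vulnerability index combines how often a simulated output falls below a reference threshold with how severe the shortfall is. A minimal sketch, where the yields, the threshold, and the index definition are all illustrative assumptions rather than PaSim outputs:

```python
import statistics

# Simulated annual forage yields (t/ha) from a hypothetical climate ensemble.
yields = [6.2, 5.1, 4.3, 6.8, 3.9, 5.6, 4.0, 6.1, 4.7, 5.9]
threshold = 5.0  # reference threshold below which the system is 'harmed'

shortfalls = [threshold - y for y in yields if y < threshold]

# Two ingredients of a simple vulnerability index:
exposure_freq = len(shortfalls) / len(yields)   # how often harm occurs
mean_severity = statistics.mean(shortfalls)     # how bad it is on average

print(f"frequency of harm: {exposure_freq:.0%}")
print(f"mean shortfall:    {mean_severity:.2f} t/ha")
print(f"index (freq x severity): {exposure_freq * mean_severity:.3f}")
```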
6

Génération stratégique de code pour la maîtrise des performances de systèmes temps-réel embarqués / Strategic generation of code to master the performances of real-time embedded systems

Cadoret, Fabien 26 May 2014 (has links)
We focus on real-time embedded critical systems (RTECS), which raise problems of criticality, respect of timing constraints, and availability of resources such as memory. To master the design complexity of such systems, Model Driven Engineering (MDE) proposes to model them for analysis purposes and to generate, partially or totally, their execution code. However, these two phases must be correctly connected to ensure that the generated code always enforces the properties of the model initially analysed. In addition, the code generator must adapt to several criteria: in particular, to ensure that performance requirements are met, or to target different execution platforms, each with its own execution constraints and semantics. To realize such an adaptation, the development process requires the transformation rules to evolve according to these criteria, and its architecture must allow the selection of software components meeting them. We address this problem by proposing a generation process based on MDE. Once the user has specified and validated a high-level model, a model transformation automatically translates it into a detailed model close to the generated code. To preserve the requirements, the detailed model is expressed in the same formalism as the initial model, so that it remains analysable (by the same tools initially used). This approach determines the impact of the code-generation strategy on the performance of the final system and allows the generator to change strategy, at a given stage, to ensure that the system constraints are respected. To facilitate the development and selection of alternative strategies, we propose a methodology organized around a formalism for the orchestration of transformations, a set of transformation patterns (which factorize and generalize the transformation rules), and an adaptation of software components according to their impact on performance. We implemented this process within the OSATE environment, for which we developed the RAMSES framework (Refinement of AADL Models for Synthesis of Embedded Systems). We experimented with it on the generation of inter-task communications, for which several implementation strategies were defined.
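The central mechanism is a generator that can try alternative code-generation strategies for the same model element and keep one that the analysis accepts. A schematic sketch of that selection loop follows; the strategy names, cost estimates, and budgets are invented for illustration (RAMSES operates on AADL models, not on this toy structure).

```python
# Candidate generation strategies for inter-task communication, each with
# an analysable cost estimate (all numbers are illustrative assumptions).
def shared_buffer(task):
    return {"code": f"{task}: shared buffer + mutex", "wcet_us": 40, "ram_kb": 2}

def message_queue(task):
    return {"code": f"{task}: message queue", "wcet_us": 25, "ram_kb": 8}

STRATEGIES = [shared_buffer, message_queue]

def generate(task, wcet_budget_us, ram_budget_kb):
    """Try each strategy; keep the first whose estimated impact stays
    within the constraints re-checked on the detailed model."""
    for strategy in STRATEGIES:
        candidate = strategy(task)
        if (candidate["wcet_us"] <= wcet_budget_us
                and candidate["ram_kb"] <= ram_budget_kb):
            return candidate
    raise ValueError(f"no strategy satisfies the constraints for {task}")

print(generate("sensor_to_filter", wcet_budget_us=30, ram_budget_kb=16))
# -> the message-queue variant: shared_buffer exceeds the 30 us budget.
```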
7

Industrialising software development in systems integration

Minich, Matthias Ernst January 2013 (has links)
Compared to other disciplines, software engineering today still depends on the craftsmanship of highly skilled workers. With constantly increasing complexity and effort, however, existing software engineering approaches appear more and more inefficient, and a paradigm shift towards industrial production methods seems inevitable. Recent advances in academia and practice have led to the availability of industrial key principles in software development as well: specialization is represented in software product lines, standardization and systematic reuse are available with component-based development, and automation has become accessible through model-driven engineering. While each of the above is well researched in theory, only few cases of successful implementation in industry are known. This becomes even more evident in specialized areas of software engineering such as systems integration. Today's IT systems need to adapt quickly to new business requirements arising from mergers, acquisitions, and cooperations between enterprises. This leads to integration efforts, i.e. joining different subsystems into a cohesive whole in order to provide new functionality. In such an environment, the application of industrial methods for software development seems even more important. Unfortunately, software development in this field is a highly complex and heterogeneous undertaking, as IT environments differ from customer to customer. In such settings, existing industrialization concepts would never break even, due to one-time projects and thus insufficient economies of scale and scope. This thesis therefore describes a novel approach for a more efficient implementation of these key principles while considering the characteristics of software development for systems integration. After identifying the characteristics of the field and their effects on currently known industrialization concepts, an organizational model for industrialized systems integration is developed. It takes software product lines and adapts them in a way feasible for a systems integrator active in several business domains. The result is a three-tiered model consolidating recurring activities and reducing the effort for individual product lines. For the implementation of component-based development, the thesis assesses current component approaches and applies an integration metamodel to the most suitable one. This ensures a common understanding of systems integration across different product lines and thus eases component reuse, even across product line boundaries. The approach is furthermore aligned with the organizational model to show how component-based development may be applied in industrialized systems integration. Automating software development in systems integration with model-driven engineering was found to be insufficient in its current state, owing to immature tool chains and a lack of modelling standards. As an alternative, an XML-based configuration of products within a software product line has been developed. It models a product line and its products with the help of a domain-specific language and uses stylesheet transformations to generate compilable artefacts. The approach was tested for feasibility in an exemplary implementation following a real-world scenario. As not all aspects of industrialized systems integration could be simulated in a laboratory environment, the concept was furthermore validated in several expert interviews with industry representatives, where cultural and economic aspects could also be assessed. The thesis concludes with a detailed summary of the contributions to the field and suggests further areas of research in the context of industrialized systems integration.
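The generation mechanism described, an XML product configuration transformed by stylesheets into compilable artefacts, can be illustrated with lxml in a few lines. The product/feature vocabulary and the textual output below are invented placeholders, not the thesis's actual DSL.

```python
from lxml import etree

# Hypothetical product configuration in an XML-based DSL.
product = etree.XML("""
<product name="order-hub">
  <feature name="billing"/>
  <feature name="shipping"/>
</product>""")

# Stylesheet turning the configuration into a (toy) build artefact.
xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/product">
    <xsl:text>modules for </xsl:text>
    <xsl:value-of select="@name"/>
    <xsl:for-each select="feature">
      <xsl:text>&#10;  - include </xsl:text>
      <xsl:value-of select="@name"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
print(str(transform(product)))
# modules for order-hub
#   - include billing
#   - include shipping
```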
8

Towards Dynamic Software Product Lines: Unifying Design and Runtime Adaptations

Parra, Carlos 04 March 2011 (has links) (PDF)
To take advantage of the wide range of hardware available today, software running on mobile phones must become context-aware: it must monitor events coming from its environment and react accordingly. We argue that such software can benefit from a Software Product Line (SPL) approach. SPLs are defined to exploit commonalities through the definition of reusable artifacts. Nevertheless, SPLs do not take runtime modifications of applications into account. This thesis proposes a Dynamic Software Product Line (DSPL) that extends a classic SPL by providing mechanisms to adapt products at runtime. Our main goal is to unify design-time and runtime adaptations using high-level software artifacts. Concretely, we introduce a variability model and a composition model to modularize products as aspect models. Each aspect model has three parts: the architecture, the modifications, and the pointcut. We then propose two product derivation processes: one at design time, aimed at building a product, and one at runtime, aimed at adapting a product. This research was carried out in the context of the FUI CAPPUCINO project. We defined a DSPL for a case study of a context-aware hypermarket sales scenario. The scenario demonstrates the advantages of our approach and, in particular, the unification achieved by aspect models used both at design time and at runtime.
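An aspect model as described bundles three parts: an architecture fragment to add, modifications to existing elements, and a pointcut saying where the aspect applies. The runtime side of that unification can be sketched over a toy component dictionary; the component names and pointcut encoding below are invented, not the CAPPUCINO platform.

```python
# A running product: component name -> provided service.
product = {"catalog": "browse items", "cart": "collect items"}

# A hypothetical aspect model: pointcut (where), architecture (what to add),
# modifications (how existing components change).
discount_aspect = {
    "pointcut":      lambda components: "cart" in components,
    "architecture":  {"discount": "apply loyalty discount"},
    "modifications": {"cart": "collect items + notify discount"},
}

def weave(components, aspect):
    """Apply an aspect at runtime if its pointcut matches the product."""
    if not aspect["pointcut"](components):
        return components          # pointcut does not match: no adaptation
    adapted = dict(components)
    adapted.update(aspect["architecture"])   # add new components
    adapted.update(aspect["modifications"])  # modify existing ones
    return adapted

# A context event (e.g. customer enters a loyalty zone) triggers adaptation.
product = weave(product, discount_aspect)
print(product)
```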
9

Um método para projetar sistemas embarcados baseado na metodologia de engenharia dirigida por modelos aplicado a sistemas de processamento de imagens / A method for designing embedded systems based on the model-driven engineering methodology, applied to image processing systems

Doering, Dionísio January 2015 (has links)
Embedded systems are present in many daily activities: they are part of our means of transport, they are found in entertainment systems such as tablets and cell phones, and they can even be inside a person's body, as in pacemaker users. Embedded systems span very distinct complexity levels, from very simple to highly elaborate systems. This complexity can be attributed to many factors, including the need to execute multiple tasks on heterogeneous processing units (based on CPUs, GPUs, DSPs, and FPGAs) while fulfilling user requirements. To cope with rising system complexity while keeping the development time and cost of new embedded systems low, one solution under discussion is the use of model-driven design methodologies. These methodologies use the model as the primary source of information. The information contained in the models varies with the system's development stage: initial models are descriptive, simple, and incomplete, yet they enable simulations that can guide the development of the system itself. As the project matures, more information is added to the models, bringing them closer to the system to be implemented. This process reduces the models' abstraction level to the point where it is possible to generate, among other artifacts, source code. In this context, embedded image processing systems are examples of complex systems that perform data-intensive processing. These systems are traditionally developed at a low level of abstraction, typically with block diagrams and state charts. Managing their growing complexity within a short development time requires methods that allow them to be developed at a higher level of abstraction, together with supporting tools that, among other things, increase the reuse of parameterizable system blocks and enable design space exploration in early project phases to guide the system's own development. This work proposes a new method for developing embedded image processing systems based on model-driven engineering. The method comprises: combining the modelling of functional requirements (using object-oriented modelling) and non-functional requirements (using aspect-oriented modelling); model-based design space exploration to solve the problem of allocating tasks to the different processing units; functional validation of the models; and code generation in programming languages. The proposed method is called HIPAO, for "Hardware Image Processing system based on model driven engineering and Aspect-Oriented modeling". The development cycle of an embedded image processing system starts with the capture of technical requirements in SysML requirements diagrams. These requirements are specified through a metamodel for image processing technical requirements, developed together with the HIPAO method, and are transformed automatically into initial platform-independent models (PIM). The transformation is performed by the HIPAO tool, developed in Java to support the method. The initial PIM models evolve iteratively through information added both manually and automatically, via model weaving or composition. The method suggests that platform models (PM) be developed similarly to the PIM models, but their implementation is considered future work, and the PM models presented in this work were developed manually. Given the PIM and PM models, the method performs an optimization step through a model-to-text transformation into a format compatible with the framework that performs model-driven design space exploration (MD-DSE). The platform-specific models (PSM) are produced with the help of information presented as Pareto fronts generated by the MD-DSE framework, which allocates the tasks (PIM model) onto the platform (PM model) using heuristic algorithms. As in most model-driven engineering methods, the development cycle ends with code generation from the PSM models, and some examples are presented. Two case studies were carried out to apply and validate the proposed method. The first is an image processing system for high-resolution, high-speed scientific cameras. The second is an image processing system to be integrated into a small unmanned autonomous vehicle.
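The automatic step from technical requirements to an initial PIM is, at heart, a model-to-model transformation: each requirement entry is mapped onto a skeletal model element. A toy Python rendering of that idea follows; the requirement fields and PIM element types are invented, whereas HIPAO's actual metamodels are SysML/EMF-based.

```python
from dataclasses import dataclass, field

# Toy 'requirements model': entries as they might come from a SysML
# requirements diagram (fields are illustrative assumptions).
requirements = [
    {"id": "R1", "kind": "processing", "name": "debayer",   "rate_fps": 60},
    {"id": "R2", "kind": "processing", "name": "threshold", "rate_fps": 60},
    {"id": "R3", "kind": "constraint", "name": "latency",   "max_ms": 10},
]

@dataclass
class PimTask:                       # skeletal PIM element
    name: str
    rate_fps: int
    constraints: list = field(default_factory=list)

def requirements_to_pim(reqs):
    """Model-to-model sketch: processing requirements become PIM tasks;
    constraint requirements are attached to every task they concern."""
    tasks = [PimTask(r["name"], r["rate_fps"])
             for r in reqs if r["kind"] == "processing"]
    for r in (r for r in reqs if r["kind"] == "constraint"):
        for t in tasks:
            t.constraints.append(f'{r["name"]} <= {r["max_ms"]} ms')
    return tasks

for task in requirements_to_pim(requirements):
    print(task)
```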
10

A model-driven design-space exploration tool for the HIPAO 2 methodology / Ferramenta de exploração de espaço de projeto baseada em modelos para a metodologia HIPAO2

Lerm, Rafael Andréas Raffi January 2015 (has links)
Designers of today's embedded systems are faced with increasing complexity, both in the applications being developed and in the platforms they run on. The use of complex platforms means that engineers must make non-trivial, and often non-intuitive, decisions during the design phase. To help developers manage this complexity, model-driven techniques are gaining attention, and in this context the HIPAO2 model-driven engineering methodology is being developed at UFRGS. Among the problems designers must solve, task-to-processor mapping in heterogeneous multiprocessor systems is an NP-complete problem, and the design space quickly becomes too large to be explored adequately by hand. This work details the extension of the tools that support HIPAO2 to include semi-automatic design-space exploration capabilities for the mapping problem. The proposed tool uses a multiobjective genetic algorithm to make trade-offs explicit to designers, and synchronous dataflow analysis algorithms to evaluate potential mappings at a reasonable computational cost.
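The heart of such a tool is scoring candidate task-to-processor mappings against several objectives at once and keeping the non-dominated ones. The sketch below enumerates the mappings of a tiny problem and filters the Pareto front; the cost table is invented, and the actual tool uses a multiobjective genetic algorithm plus synchronous dataflow analysis instead of exhaustive enumeration.

```python
from itertools import product

TASKS = ["capture", "filter", "encode"]
PROCESSORS = ["cpu", "dsp"]

# Invented per-task costs: (execution_time, energy) on each processor type.
COST = {
    ("capture", "cpu"): (4, 2), ("capture", "dsp"): (3, 4),
    ("filter",  "cpu"): (9, 3), ("filter",  "dsp"): (4, 6),
    ("encode",  "cpu"): (6, 2), ("encode",  "dsp"): (5, 5),
}

def evaluate(mapping):
    """Two objectives to minimize: total time and total energy.
    A real tool would derive timing from SDF analysis instead."""
    time = sum(COST[(t, p)][0] for t, p in zip(TASKS, mapping))
    energy = sum(COST[(t, p)][1] for t, p in zip(TASKS, mapping))
    return (time, energy)

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

candidates = {m: evaluate(m) for m in product(PROCESSORS, repeat=len(TASKS))}
pareto = {m: v for m, v in candidates.items()
          if not any(dominates(w, v) for w in candidates.values())}

for mapping, (time, energy) in sorted(pareto.items(), key=lambda kv: kv[1]):
    print(dict(zip(TASKS, mapping)), f"time={time} energy={energy}")
```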
