191

Development, Modelling and Control of a Multirotor Vehicle

Mikkelsen, Markus January 2015 (has links)
Interest in drones of all kinds has exploded in recent years. The development of multirotor vehicles such as quadcopters and octocopters has reached a point where they are cheap and versatile enough to start becoming part of everyday life, and future applications seem all but limitless. This thesis goes through the steps of development, modelling and control design for an octocopter system. The developed octocopter builds on the concept of using the Raspberry Pi mini computer together with the code generation functionality of Matlab/Simulink. The mathematical model of the octocopter includes the thrust and torques generated by the propellers, together with gyroscopic torque; these are combined with the aerodynamic effects caused by incoming air. The importance of modelling these effects has increased with the demand for precisely controlled extreme manoeuvres. A full-state-feedback hybrid controller scheme, which makes use of the motor dynamics, is designed against a linearized model. The controllers show good performance in simulations and are approved for flight tests, which are conducted on two separate occasions. The octocopter makes two successful flights, proving that the concept can be applied to multirotor vehicles. However, there is a mismatch between the mathematical model and the physical octocopter, leaving questions for future work.
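As a concrete illustration of the propeller thrust-and-torque model the abstract describes, the sketch below maps eight rotor speeds to collective thrust and body torques. This is a minimal sketch under assumed parameters: the coefficients, arm length and rotor layout are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

# Hypothetical octocopter parameters (not taken from the thesis).
K_T = 8.5e-6   # thrust coefficient [N s^2 / rad^2]
K_D = 1.4e-7   # drag-torque coefficient [N m s^2 / rad^2]
L = 0.35       # arm length [m]

# Eight arms evenly spaced; rotor spin directions alternate (+1 CCW, -1 CW).
angles = np.arange(8) * np.pi / 4
spin = np.array([1, -1, 1, -1, 1, -1, 1, -1])

def thrust_and_torques(omega):
    """Map rotor angular speeds omega [rad/s] (shape (8,)) to total
    thrust T [N] and body torques (roll, pitch, yaw) [N m]."""
    f = K_T * omega**2                       # per-rotor thrust
    T = f.sum()                              # collective thrust along body z
    tau_roll = (L * np.sin(angles) * f).sum()
    tau_pitch = (L * np.cos(angles) * f).sum()
    tau_yaw = (spin * K_D * omega**2).sum()  # propeller reaction torques
    return T, np.array([tau_roll, tau_pitch, tau_yaw])

# Hover check: equal speeds give pure thrust and (ideally) zero torque.
T, tau = thrust_and_torques(np.full(8, 450.0))
print(T, tau)
```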
192

An Approach to Design, Consolidation and Transformations of Database Schema Check Constraints Based on Platform Independent Models

Obrenović Nikola 10 October 2015 (has links)
The use of platform-independent modelling and prototype generation in information systems development shortens development time and improves the quality of the process. The goal is for the development of all aspects of an information system to be supported by this approach, and this dissertation contributes towards that goal. It presents algorithms for transforming check constraint models into executable code and for consolidating subschemas with the integrated database schema, with respect to check constraints.
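To illustrate the kind of model-to-code transformation the dissertation describes, the sketch below turns a platform-independent check-constraint description into a SQL CHECK clause. The model format and generator are hypothetical simplifications for illustration, not the dissertation's actual algorithms.

```python
# A platform-independent check constraint described as plain data
# (hypothetical format, for illustration only).
constraint = {
    "table": "employee",
    "name": "chk_salary_range",
    "attribute": "salary",
    "min": 0,
    "max": 500000,
}

def to_sql_check(c):
    """Generate platform-specific DDL from the platform-independent model."""
    predicate = f"{c['attribute']} BETWEEN {c['min']} AND {c['max']}"
    return (f"ALTER TABLE {c['table']} "
            f"ADD CONSTRAINT {c['name']} CHECK ({predicate});")

print(to_sql_check(constraint))
# ALTER TABLE employee ADD CONSTRAINT chk_salary_range
#   CHECK (salary BETWEEN 0 AND 500000);
```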
193

Capturing JUnit Behavior into Static Programs : Static Testing Framework

Siddiqui, Asher January 2010 (has links)
This research paper evaluates the benefits achievable from a static testing framework by analyzing and transforming the JUnit3.8 source code and executing the transformed code statically. The static structure enables us to analyze the code during the creation and execution of test cases. Such research is by now well established in static analysis and testing, increasingly shapes the static testing process, and has proved particularly valuable for understanding the reflective behavior of the JUnit3.8 Framework.

The JUnit3.8 Framework uses the Java Reflection API to invoke its core functionality (test case creation and execution) dynamically. The Java Reflection API allows developers to access and modify the structure and behavior of a program. Reflection provides a flexible solution for creating test cases and controlling their execution: it helps encapsulate test cases in a single object representing the test suite and associate each test method with a test object. But while reflection is a powerful tool, it limits static analysis; static analysis tools often cannot work effectively with reflection.

To avoid reflection, the Static Testing Framework provides a static platform that analyzes the JUnit3.8 source code and transforms it into a non-reflective version emulating the dynamic behavior of JUnit3.8. The transformed source code replaces reflection with static code and does, in the execution environment of the Static Testing Framework, what reflection does in JUnit3.8. The transformed code also enables the execution environment of the Static Testing Framework to run test methods statically. To measure its efficiency, the implemented tool is evaluated on different Java projects, and the statistical data is compared with JUnit3.8 results. The evaluation shows that STF can be used for static creation and execution of test cases as in JUnit3.8, provided test cases are not created within a test class and the real definition of constructors is not required. These limitations can be addressed in future work by introducing a middle layer to execute test fixtures for each test method and by generating test classes according to the real definition of constructors.
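JUnit3.8 itself is Java, but the contrast between reflective and static test invocation that the paper describes can be sketched in a few lines of Python; the test class and method names below are hypothetical.

```python
class CalculatorTest:
    """A hypothetical test class with two test methods."""
    def test_add(self):
        assert 1 + 1 == 2
    def test_sub(self):
        assert 3 - 1 == 2

# Reflective style (what JUnit3.8 does via the Java Reflection API):
# test methods are discovered and invoked by name at run time,
# which is opaque to most static analysis tools.
def run_reflective(cls):
    for name in dir(cls):
        if name.startswith("test"):
            getattr(cls(), name)()      # dynamic lookup and call

# Static style (what the transformed, non-reflective version does):
# every call is spelled out explicitly, so a static analyzer sees it.
def run_static():
    t = CalculatorTest()
    t.test_add()
    t.test_sub()

run_reflective(CalculatorTest)
run_static()
```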
194

Online Auto-Tuning for Performance and Energy through Micro-Architecture Dependent Code Generation

Endo, Fernando Akira 18 September 2015 (has links)
In computing systems, energy consumption is limiting the performance growth experienced in the last decades. Consequently, computer architecture and software development paradigms will have to change if we want to avoid performance stagnation in the coming decades. In this new scenario, new architectural and micro-architectural designs can increase the energy efficiency of hardware through hardware specialization, such as heterogeneous configurations of cores, new computing units and accelerators. On the other hand, with this new trend, software development must cope with the lack of performance portability to ever-changing hardware and with the growing gap between the performance that programmers can extract and the maximum achievable performance of the hardware. To address this issue, this thesis contributes a methodology and proof of concept of a run-time auto-tuning framework for embedded systems. The proposed framework can both adapt code to a micro-architecture unknown prior to compilation and explore auto-tuning possibilities that are input-dependent.

In order to study the capability of the proposed approach to adapt code to different micro-architectural configurations, I developed a simulation framework of heterogeneous in-order and out-of-order ARM cores, based on the gem5 and McPAT simulators. Validation experiments demonstrated average absolute timing errors around 7 % compared to real ARM Cortex-A8 and A9 cores, and relative energy/performance estimations within 6 % for the Dhrystone 2.1 benchmark compared to Cortex-A7 and A15 (big.LITTLE) CPUs. The timing validation shows that gem5 is considerably more accurate than comparable existing simulators, whose average errors exceed 15 %.

An important component of the run-time auto-tuning framework is a run-time code generation tool called deGoal, which defines a low-level dynamic DSL for computing kernels. During this thesis, I ported deGoal to the ARM Thumb-2 ISA and added new features for run-time auto-tuning. A preliminary validation on ARM processors showed that deGoal can on average generate machine code of equivalent or higher quality than reference programs written in C, including manually vectorized code.

The methodology and proof of concept of run-time auto-tuning in embedded processors were developed around two kernel-based applications, extracted from the PARSEC 3.0 suite and its hand-vectorized version PARVEC. In the favorable application, average speedups of 1.26 and 1.38 were obtained on real and simulated cores, respectively, going up to 1.79 and 2.53 (all run-time overheads included). I also demonstrated through simulation that run-time auto-tuning of SIMD instructions for in-order cores can outperform the reference vectorized code run on similar out-of-order cores, with an average speedup of 1.03 and an energy efficiency improvement of 39 %. The unfavorable application was chosen to show that the proposed approach has negligible overhead when better kernel versions cannot be found. When both applications run on real hardware, the run-time auto-tuning performance is on average only 6 % away from the performance of the best statically found kernel implementations.
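The core idea of run-time auto-tuning (generate or select among several versions of a kernel, time them on the actual hardware, and keep the fastest) can be sketched in a few lines. The kernel variants below are hypothetical stand-ins for the machine-code versions that a tool like deGoal would generate; this is not the thesis's framework itself.

```python
import time

def variant_scalar(xs):
    """A naive scalar version of the kernel (sum of squares)."""
    out = 0.0
    for x in xs:
        out += x * x
    return out

def variant_builtin(xs):
    """Stand-in for a better-tuned version (e.g., vectorized machine code)."""
    return sum(x * x for x in xs)

def autotune(variants, sample_input, repeats=5):
    """Time each variant on the target hardware and return the fastest."""
    best, best_t = None, float("inf")
    for v in variants:
        t0 = time.perf_counter()
        for _ in range(repeats):
            v(sample_input)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = v, t
    return best

kernel = autotune([variant_scalar, variant_builtin], list(range(10_000)))
print("selected:", kernel.__name__)
```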
195

An automated approach based on integrity constraints defined in UML and OCL for the verification of logical consistency in SOLAP systems: applications in the agri-environmental field

Boulil, Kamal 26 October 2012 (has links)
Spatial Data Warehouse (SDW) and Spatial OLAP (SOLAP) systems are Business Intelligence (BI) technologies allowing interactive multidimensional analysis of huge volumes of spatial data. In such systems the quality of analysis mainly depends on three components: the quality of the warehoused data, the quality of data aggregation, and the quality of data exploration. Warehoused data quality depends on elements such as accuracy, completeness and logical consistency. Data aggregation quality is affected by structural problems (e.g., non-strict dimension hierarchies that may cause double-counting of measure values) and semantic problems (e.g., summing temperature values does not make sense in many applications). Data exploration quality is mainly affected by inconsistent user queries (e.g., what were the temperature values in the USSR in 2010?), which can lead to meaningless interpretations of query results. This thesis addresses the problems of logical inconsistency that may affect data, aggregation and exploration quality in SOLAP systems.

Logical inconsistency is usually defined as the presence of incoherencies (contradictions) in data; it is typically controlled by means of Integrity Constraints (IC). In this thesis, we extend the notion of IC in the SOLAP domain to take aggregation and query incoherencies into account. To overcome the limitations of existing approaches to defining SOLAP IC, we propose a framework based on the standard languages UML and OCL. Our framework permits a platform-independent conceptual design and an automatic implementation of SOLAP IC. It consists of three parts: (1) a classification of SOLAP IC; (2) a UML profile, implemented in the CASE tool MagicDraw, allowing the conceptual design of SOLAP models and their IC; (3) an automatic implementation based on the code generators Spatial OCL2SQL and UML2MDX, which transforms the conceptual specifications into code. Finally, the contributions of this thesis have been experimented with and validated in the context of French national projects aiming at developing (S)OLAP applications for agriculture and the environment.
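One structural problem mentioned above, a non-strict hierarchy causing double counting, can be detected mechanically. The sketch below flags dimension members with more than one parent; the data is a hypothetical example, and the actual framework expresses such constraints in OCL rather than Python.

```python
# Hypothetical child -> parent(s) assignment in a dimension hierarchy.
# A strict hierarchy requires exactly one parent per member.
store_to_region = {
    "Store A": ["North"],
    "Store B": ["North", "East"],   # non-strict: two parents
    "Store C": ["South"],
}

def strictness_violations(hierarchy):
    """Return members violating the strict-hierarchy integrity constraint,
    i.e. members whose measures would be double-counted on roll-up."""
    return [m for m, parents in hierarchy.items() if len(parents) != 1]

print(strictness_violations(store_to_region))   # ['Store B']
```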
196

MDWA: A Model-Driven Approach for Web Software Development

Theodoro Júnior, Marcelo Brandão 13 November 2012 (has links)
Software development techniques evolve continually in order to improve development and maintenance processes while lowering costs and raising quality. The goal of MDD (Model-Driven Development) is to reduce the semantic distance between a problem and the specification of its solution. MDD therefore focuses on high-level abstraction modeling and successive model transformations until code is finally generated. Studies assert that model-driven development can be significantly more efficient than traditional source-code-driven software development and can reduce the likelihood of several problems occurring during the software life cycle. Likewise, Web engineering can benefit from MDD adoption, especially when supported by approaches that facilitate its use. Web development is usually agile, with frequent releases, so these approaches must be flexible enough to adapt to this context. However, the approaches proposed by the academic community generally have complex processes involving many different model definitions, programming languages, plug-ins and IDEs. These features contradict the practices adopted by Web developers. This dissertation presents the MDWA (Model-Driven Web Applications) approach, which provides a simple process to support model-driven Web development. The approach does not depend on tools, technologies or plug-ins and encourages combination with other forms of reuse and development processes. Furthermore, a tool named Ruby-MDWA was developed with the Ruby language and the Ruby on Rails framework in order to create Web applications with MDWA assistance. This tool provides a set of four textual models and defines M2M and M2C transformation tools, maintaining requirement traceability from specification through construction and subsequent maintenance. To demonstrate the approach and the tool, a real case study was carried out with a software company from São Carlos, SP, in which a project management system was developed. In parallel, two experiments were conducted with undergraduate students in Computer Science and Computer Engineering and Master's students in Computer Science at UFSCar, to evaluate the gains and limitations of the Ruby-MDWA tool.
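The model-to-code (M2C) step of such an approach can be illustrated with a toy transformation from a textual entity model to a class skeleton. The model syntax and generator below are hypothetical, written in Python for illustration; they are not the actual Ruby-MDWA notation or transformers.

```python
# Hypothetical textual entity model (not actual Ruby-MDWA syntax).
model = """
entity Project
  name: string
  deadline: date
"""

def model_to_code(text):
    """A toy M2C transformation: emit a class skeleton from the model."""
    lines = [l.strip() for l in text.strip().splitlines()]
    entity = lines[0].split()[1]                      # "entity Project"
    fields = [l.split(":")[0].strip() for l in lines[1:]]
    body = "\n".join(f"        self.{f} = {f}" for f in fields)
    args = ", ".join(fields)
    return f"class {entity}:\n    def __init__(self, {args}):\n{body}\n"

print(model_to_code(model))
# class Project:
#     def __init__(self, name, deadline):
#         self.name = name
#         self.deadline = deadline
```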
197

SIMD-aware word length optimization for floating-point to fixed-point conversion targeting embedded processors

El Moussawi, Ali Hassan 16 December 2016 (has links)
In order to cut down their cost and/or their power consumption, many embedded processors do not provide hardware support for floating-point arithmetic. However, applications in many domains, such as signal processing, are generally specified using floating-point arithmetic for the sake of simplicity. Porting these applications to such embedded processors requires a software emulation of floating-point arithmetic, which can greatly degrade performance. To avoid this, the application is converted to use fixed-point arithmetic instead. Floating-point to fixed-point conversion involves a subtle tradeoff between performance and precision: it enables the use of narrower data word lengths at the cost of degraded computation accuracy.

Besides, most embedded processors provide support for SIMD (Single Instruction Multiple Data) as a means to improve performance. SIMD allows one operation to be executed on multiple data in parallel, ultimately reducing execution time. However, the application usually has to be transformed in order to take advantage of the SIMD instruction set. This transformation, known as simdization, is affected by the data word lengths: narrower word lengths enable a higher SIMD parallelism rate. Hence the tradeoff between precision and simdization. Much existing work has aimed at providing or improving methodologies for automatic floating-point to fixed-point conversion on the one side, and simdization on the other. In the state of the art, the two transformations are considered separately even though they are strongly related. In this context, we study the interactions between these transformations in order to better exploit the performance/accuracy tradeoff. First, we propose an improved SLP (Superword Level Parallelism) extraction algorithm (a simdization technique). Then, we propose a new methodology to jointly perform floating-point to fixed-point conversion and SLP extraction. Finally, we implement this work as a fully automated source-to-source compiler flow. Experimental results, targeting four different embedded processors, show the validity of our approach in efficiently exploiting the performance/accuracy tradeoff compared to a typical approach that considers both transformations independently.
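The word-length/accuracy tradeoff at the heart of the thesis can be made concrete with a small quantization experiment: the sketch below converts values to fixed-point for several fractional word lengths and reports the resulting error. The format choices are illustrative, not the thesis's optimization algorithm.

```python
def to_fixed(x, n_frac):
    """Quantize x to a fixed-point integer with n_frac fractional bits."""
    return round(x * (1 << n_frac))

def from_fixed(q, n_frac):
    """Recover the real value represented by fixed-point integer q."""
    return q / (1 << n_frac)

samples = [0.1, 0.333333, 0.718281, 0.9]
for n_frac in (4, 8, 12):
    err = max(abs(x - from_fixed(to_fixed(x, n_frac), n_frac))
              for x in samples)
    print(f"{n_frac} fractional bits: max quantization error = {err:.6f}")
# Narrower words (fewer bits) fit more SIMD lanes per register but give
# larger error: the tradeoff the thesis jointly optimizes.
```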
199

From timed models to timed implementations

De Wulf, Martin 20 December 2006 (has links)
<p align="justify">Computer Science is currently facing a grand challenge :finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking systems or in a nuclear power plant for example. They present several design difficulties :first they are reactive systems, interacting indefinitely with their environment. Second,they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata for which model checking is pretty well studied.</p> <p><p align="justify">In a first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We have to define a new semantics for timed automata, the AASAP semantics, that preserves the decidability properties for model checking and at the same time is implementable. Our notion of implementability is completely novel, and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for the analysis and code generation and exemplify them on a case study about the well known Philips Audio Control Protocol.</p> <p><p align="justify">In a second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only an imperfect information about the state of the system. In the process, we defined a new algorithm, based on the monotonicity of the controllable predecessors operator, for efficiently finding a controller and we show some promising applications on a classical problem :the universality test for finite automata. / Doctorat en sciences, Spécialisation Informatique / info:eu-repo/semantics/nonPublished
200

Qualification of source code generators in the avionics domain: automated testing of model transformation chains

Richa, Elie 15 December 2015 (has links)
In the avionics industry, Automatic Code Generators (ACGs) are increasingly used to produce parts of the embedded software. Since the generated code is part of critical software, safety standards require a thorough verification of the ACG, called qualification. In this thesis, conducted in collaboration with AdaCore, we seek to reduce the cost of testing activities through automatic and effective methods.

The first part of the thesis addresses unit testing, which ensures exhaustiveness but is difficult to achieve for ACGs. We propose a method that guarantees the same level of exhaustiveness using only integration tests, which are easier to carry out. First, we formalize the ATL language, in which the ACG is defined, in the theory of Algebraic Graph Transformation. We then define a translation of postconditions expressing the exhaustiveness of unit testing into equivalent preconditions that ultimately support the production of integration tests providing the same level of exhaustiveness. Finally, we optimize the complex algorithm of our analysis using simplification strategies whose effectiveness we assess experimentally.

The second part of the work addresses the oracles of ACG tests, i.e. the means of validating the code generated by the ACG during a test. We propose a language for specifying textual constraints able to automatically check the validity of the generated code. This approach is experimentally deployed at AdaCore for QGen, a Simulink® to Ada/C code generator.
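A test oracle based on textual constraints can be as simple as a set of patterns that generated code must (or must not) match. The sketch below checks a generated snippet against two hypothetical constraints; the thesis's actual constraint language is richer than plain regular expressions, so this is only an illustration of the principle.

```python
import re

# Hypothetical constraints on generated code: each pattern must occur
# (require) or must not occur (forbid) for the test to pass.
constraints = [
    ("require", r"\bprocedure\s+Step\b"),   # entry point must be generated
    ("forbid",  r"\bgoto\b"),               # coding standard: no gotos
]

generated_code = """
procedure Step is
begin
   X := X + 1;
end Step;
"""

def oracle(code, rules):
    """Return the list of violated constraints (empty list = test passes)."""
    failures = []
    for kind, pattern in rules:
        found = re.search(pattern, code) is not None
        if (kind == "require" and not found) or (kind == "forbid" and found):
            failures.append((kind, pattern))
    return failures

print(oracle(generated_code, constraints))   # [] -> generated code is valid
```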
