261 |
Aplicação do método B ao projeto formal de software embarcado / Application of the B method to the formal design of embedded software. Medeiros Júnior, Valério Gutemberg de, 09 September 2009
This work presents a design method for specifying and building software components in a rigorous fashion, from the functional software model down to the assembly-code level. This
method is based on the B method, which was developed with the support and interest of British Petroleum (BP). One goal of this methodology is to contribute to solving an important
problem, known as The Verifying Compiler. In addition, this work describes a formal model of the Z80 microcontroller and a real system from the petroleum industry. To achieve this goal,
the formal model of the Z80 was developed and documented, as it is a key component for verification down to the assembly level. In order to validate and improve the methodology,
it was applied to a petroleum production test system, which is presented in this work. Part of the technique is currently performed manually; however, most of these activities can be
automated by a specific compiler. To build such a compiler, the formal modelling of the microcontroller and of the production test system should provide relevant knowledge and experience for the design of a new compiler. In summary, this work should improve the viability of one of the most stringent criteria for formal verification: speeding up the verification process, reducing design time, and increasing the quality and reliability of the final software product. All these qualities are very important for systems that involve
serious risks or require high confidence, which is very common in the petroleum industry. / This work presents a proposed design method for the formal verification of the functional model of software down to the assembly-language level. The method is founded on the B method, which was developed with the support and interest of the oil and gas multinational British Petroleum (BP). The evolution of this methodology aims to contribute to answering an important problem, one of the grand challenges of computing, known as The Verifying Compiler. In this context, the present work describes a formal model of the Z80 microcontroller and a real system from the petroleum industry. The formal model of the Z80 was developed and documented, since it is a prerequisite for verification down to the assembly level. In order to validate and develop the cited methodology, it was applied to a petroleum well production test system, which is presented in this work. Currently, some activities are performed manually; however, a significant part of these activities can be automated through a specific compiler. To that end, the formal modelling of the microcontroller and the modelling of the production test system provide important knowledge and experience for the design of a new compiler. In summary, this work should improve the viability of one of the most rigorous criteria of formal verification: speeding up the verification process, reducing design time, and increasing the quality and reliability of the final software product. All these qualities are highly relevant for systems that involve serious risks or demand high confidence, which are very common in the petroleum industry.
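The central check, that assembly-level behaviour agrees with the functional model, can be illustrated with a toy example. The Python sketch below is purely illustrative and is not the thesis's B development: the instruction set, the register file, and the specification are all invented for the example, with the Z80 serving only as inspiration.

```python
# Toy sketch: checking that a low-level implementation refines a
# functional specification, in the spirit of B refinement down to
# assembly. The instruction set below is hypothetical, not real Z80.

def spec_add(a: int, b: int) -> int:
    """Functional model: 8-bit addition with wrap-around."""
    return (a + b) & 0xFF

def run_asm(program, regs):
    """Minimal interpreter for a hypothetical accumulator machine."""
    for op, *args in program:
        if op == "LD":          # LD r, n : load immediate into register r
            regs[args[0]] = args[1]
        elif op == "ADD":       # ADD r : A := (A + r) mod 256
            regs["A"] = (regs["A"] + regs[args[0]]) & 0xFF
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

# Exhaustive conformance check over the whole 8-bit input space.
for a in range(256):
    for b in range(256):
        program = [("LD", "A", a), ("LD", "B", b), ("ADD", "B")]
        assert run_asm(program, {})["A"] == spec_add(a, b)
print("implementation conforms to the functional model")
```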
|
262 |
Evolução da filosofia do sistema de limitação de dose e a questão das substituições "superseded" / Philosophy evolution of the dose limitation system and the issue of replacements in the 'superseded' publications. CORREA, FELIPE R., 09 November 2017
In 1958 the International Commission on Radiological Protection (ICRP) proposed the first philosophy of the dose limitation system, introducing the Maximum Permissible Annual Limits (LAMP). The great advance of the nuclear era in recent decades has imposed new paradigms and the need to update this philosophy. The present work aims to present an analysis of the evolution of the philosophy of the dose limitation system, from the 1950s to the present day. The first paradigm shift came with the creation of the Maximum Admissible Annual Limits (LAMA), still in force. Through a careful study of the publications of the International Atomic Energy Agency (IAEA) and the recommendations of the ICRP, it was possible to highlight and detail the process of evolution of the LAMA over the last decades. The research addresses key moments that drove changes in the philosophy of the dose limitation system, such as the international oil crisis and its implications for the development of the nuclear field. The comparison between the various publications of the two bodies (IAEA and ICRP) allowed an in-depth study from the emergence of these philosophies to their latest publications. The results of this study point to important information contained in ICRP publications, today considered "superseded", that is not found in the current publications. The IAEA, which prepares its recommendations based on the ICRP philosophy, also does not address this information. Through the present research, it was possible to highlight and detail valuable information that was lost during the process of updating the publications and issuing recommendations by both bodies. This work presents this information, studied in depth, discussing its real value and proposing to the international community new reflections on the importance and the possibility of reintroducing the lost information in future publications. / Dissertation (Master's degree in Nuclear Technology) / IPEN/D / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
|
263 |
Compilation optimisée des modèles UML / An Optimized UML Compiler. Charfi Smaoui, Asma, 12 December 2011
This thesis falls within the scope of applying model-driven engineering (MDE) to the development of embedded systems. Since these systems generally have limited resources (memory and/or computation), the generated code must be as optimized as possible. The objective of this thesis is to produce, from a model specified in the UML language, assembly code that is more compact than the assembly code produced by code compilers. Despite the steady evolution of optimizing compilers, the most widespread compilers, such as GCC (the GNU Compiler Collection), are unable to perform certain kinds of optimizations that can be carried out at a higher level of abstraction, in a pre-code-generation phase. Indeed, some information (related to the execution semantics of the UML language) is lost during code generation. This information, useful for high-level optimizations, is invisible to the code compiler, since it takes all the information about the modelled system from the generated code. We therefore propose a new model-driven approach for the development of resource-constrained systems, one that eliminates the code-generation step by replacing it with direct compilation of models. We developed the first UML model compiler (GUML: the UML front end for GCC), which generates assembly directly from UML models, without going through a programming language. GUML compiles UML classes, activities, and state machines. For certain state machines it generates assembly code that is more compact than the assembly code produced by GCC. Two GCC optimizations are improved: dead code elimination and the elimination of redundant expressions. / Model-Based Development (MBD) provides an additional level of abstraction, the model, which allows dealing with the increasing complexity of systems. Models let engineers focus on the business aspect of the developed system and permit automatic treatment of these models with dedicated tools, for instance the synthesis of a system's application by automatic code generation. Embedded systems are often constrained by their environment and/or the resources they own, in terms of memory, energy consumption, or performance requirements. Hence, an important problem in embedded system development is the optimization of the software part of those systems according to the resources provided by their platform. Although automatic code generation and the use of optimizing compilers bring some answers to the application optimization issue, this thesis shows that optimization results may be enhanced by adding a new level of optimizations at the model level, before the code generation step. Indeed, during code generation, an important part of the modeling-language semantics that could be useful for optimization is lost, making some optimizations impossible to achieve. We propose in this thesis a novel MBD approach that compiles UML models directly. The biggest challenge for this approach, tackled in this thesis, is to propose a model compiler that is as efficient as a code compiler. Our model compiler (GUML: the UML front end for GCC) performs optimizations that GCC is unable to perform, resulting in more compact assembly code. Two GCC optimizations are enhanced: dead code elimination and block merging.
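One family of optimizations that is easier at the model level, eliminating behaviour that the UML execution semantics shows to be unreachable, can be sketched as a reachability pass over a state machine. The Python fragment below is a hypothetical illustration of the idea, not GUML code (GUML itself is a GCC front end that emits assembly):

```python
# Sketch of model-level dead-code elimination: states of a UML state
# machine that are unreachable from the initial state need no code.
from collections import deque

def reachable_states(initial, transitions):
    """transitions: dict mapping each state to its successor states."""
    seen, todo = {initial}, deque([initial])
    while todo:
        for nxt in transitions.get(todo.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# Toy state machine: 'Error' is never entered, so a model compiler
# can drop it before emitting any assembly for it.
transitions = {"Idle": ["Run"], "Run": ["Idle", "Done"], "Error": ["Idle"]}
live = reachable_states("Idle", transitions)
dead = set(transitions) - live
print(f"generate code for {sorted(live)}, eliminate {sorted(dead)}")
```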
|
264 |
Processus et outils qualifiables pour le développement de systèmes critiques certifiés en avionique basés sur la génération automatique de code / Processes and qualifiable tools for the development of safety-critical certified systems in avionics based on automated code generation. Bedin França, Ricardo, 10 April 2012
The development of the most critical avionics software, such as electrical flight controls, presents several constraints that can be nearly contradictory, for example performance and safety, and all these constraints must be met simultaneously. The objective of this thesis is to study and propose evolutions in the development cycle of flight control software at Airbus in order to improve its performance, while respecting the existing industrial constraints and keeping verification processes at least as safe as those currently in use. The main criterion for performance evaluation is the worst-case execution time (WCET), since it is used in the timing analyses of real flight software. First, the DO-178, which contains considerations for the approval of avionics software, is presented. Both the DO-178B and the DO-178C are studied: the DO-178B is the reference for several flight control programs developed at Airbus, and the DO-178C is the reference for the development of new software from 2012 onwards. The case study is then presented. To aid its understanding, historical context is provided through the study of other flight control software, as several activities of its life cycle reuse techniques that were used successfully in previous projects. Some activities that are potential causes of software performance loss are exposed, and the main axis of study chosen for the rest of the thesis is the compilation phase. This choice is justified in the context of flight control software because compilation is carried out with little or no optimization, so its impact on software performance is significant, and recent research makes it possible to envisage a change in the current paradigms of safe compilation. / The development of safety-critical avionics software, such as aircraft flight control programs, presents many different constraints that are nearly contradictory, such as performance and safety requirements, and all must be met simultaneously. The objective of this thesis is to propose modifications in the development cycle of Airbus flight control programs in order to improve their performance without weakening their verification processes or violating other industrial constraints. The main criterion for performance evaluation is the Worst-Case Execution Time (WCET), as it is used in the timing analysis performed in actual avionics software verification processes. First, the DO-178, which contains guidance for avionics software development approval, is presented. Both the DO-178B and the DO-178C are discussed, since the former was the reference for the development of many Airbus flight control programs and the latter shall be the reference for the development of new programs, starting from 2012. Then, the case study is presented. In order to better understand it, some historical context is provided through the study of other flight control programs, since many of its life-cycle activities reuse techniques that were successful in previous software projects. Each activity is evaluated in order to underline the performance bottlenecks in flight control software development. Some potentially underperforming activities are depicted, and the main axis of study developed subsequently is the compilation phase: not only is it a well-known unoptimized activity with important impacts on software performance, but it is also an activity that might undergo a paradigm change due to innovative compilers being developed by researchers. The CompCert compiler is presented and its use in the scope of this thesis is justified: at the time of this thesis, it was the compiler best prepared to perform meaningful experiments, such as compiling a large subset of the chosen case study. Its architecture is studied, together with its semantic preservation theorem, which is the backbone of its formally verified part. Additional features that were developed in CompCert during this thesis in order to meet Airbus's requirements, such as its annotation mechanism and its reference interpreter, are discussed in order to underline their usefulness in the development of flight control software. The evaluation of CompCert consists of a performance comparison with the current compilation strategy and an assessment of the impact its use might have on the verification strategy commonly employed for flight control software. The results of the performance comparison are promising, since CompCert-generated code has a WCET more than 10% lower than if it were compiled with a good-quality non-optimizing compiler. As expected, the use of CompCert has impacts on some important verification activities, but its formal development and increased verifiability help in the development of new compiler verification activities that can keep the whole development process at least as safe as the current one. Some development strategy propositions are then presented, according to the certification credit that might be required for using CompCert.
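CompCert's semantic preservation theorem, described above as the backbone of its formally verified part, is usually stated along the following simplified lines (a standard paraphrase under the assumption that the source program has no undefined behaviour, not a quotation from the thesis):

```latex
% Simplified semantic preservation: if compilation succeeds, every
% observable behaviour of the compiled code C is an admissible
% behaviour of the source program S (S assumed free of undefined
% behaviour).
\[
  \mathit{compile}(S) = \mathsf{OK}(C)
  \;\Longrightarrow\;
  \mathit{Behaviors}(C) \subseteq \mathit{Behaviors}(S)
\]
```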
|
265 |
Placement of tasks under uncertainty on massively multicore architectures / Placement de tâches sous incertitudes sur des architectures massivement multicoeurs. Stan, Oana, 15 November 2013
This doctoral thesis is devoted to the study of combinatorial optimization problems in the field of massively parallel architectures, taking into account uncertain data such as execution times. We are interested in chance-constrained programs, whose objective is to find the best solution that is feasible with a guaranteed minimum probability level. A quantitative analysis of the uncertain data to be handled (dependent, multimodal, multidimensional random variables, difficult to characterize with the usual distribution laws) led us to design a non-parametric method, called the "robust binomial approach", which is valid whatever the joint distribution and relies on robust optimization and statistical hypothesis testing. We then propose a methodology for adapting approximate solution algorithms to stochastic problems by integrating the robust binomial approach to check the feasibility of a solution. The practical relevance of our approach is finally validated through two problems arising in the compilation of dataflow applications for manycore architectures. The first problem deals with the stochastic partitioning of networks of processes onto a fixed set of nodes, taking into account the load of each node and the uncertainties affecting the weights of the processes. In order to find robust solutions, a multi-start progressive construction algorithm was proposed, which made it possible to evaluate the cost of the solutions and the gain in robustness compared with deterministic solutions of the same problem. The second problem consists in handling, in a global manner, the placement and routing of dataflow applications on a clustered architecture. The objective is to place the processes on the clusters while ensuring the feasibility of routing the communications between the tasks. A GRASP heuristic was designed for the deterministic case, then adapted to the clustered stochastic case. / This PhD thesis is devoted to the study of combinatorial optimization problems related to massively parallel embedded architectures when taking into account uncertain data (e.g. execution times). Our focus is on chance-constrained programs, whose objective is to find the best solution that is feasible with a preset probability guarantee. A qualitative analysis of the uncertain data we have to treat (dependent random variables, multimodal, multidimensional, difficult to characterize through classical distributions) has led us to design a non-parametric method, the so-called "robust binomial approach", valid whatever the joint distribution and based on robust optimization and statistical hypothesis testing. We also propose a methodology for adapting approximate algorithms for solving stochastic problems by integrating the robust binomial approach when verifying solution feasibility. The practical relevance of our approach is validated through two problems arising in the compilation of dataflow applications for manycore platforms. The first problem treats the stochastic partitioning of networks of processes on a fixed set of nodes, taking into account the load of each node and the uncertainty affecting the weights of the processes. For finding stochastic solutions, a semi-greedy iterative algorithm has been proposed, which allowed measuring the robustness and cost of the solutions with regard to those for the deterministic version of the problem. The second problem consists in studying the global placement and routing of dataflow applications on a clusterized architecture. The purpose being to place the processes on clusters such that a feasible routing exists, a GRASP heuristic was conceived first for the deterministic case and afterwards extended to the chance-constrained variant of the problem.
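The robust binomial approach can be read as a distribution-free feasibility test: draw N scenarios of the uncertain data, count how many the candidate solution satisfies, and accept only if that count makes a true feasibility probability below 1 − ε statistically implausible. The sketch below is one plausible rendering of that idea in Python, with invented numbers, not the thesis's exact procedure:

```python
# Distribution-free feasibility check in the spirit of the robust
# binomial approach: accept a solution only if the number of sampled
# scenarios it satisfies is statistically incompatible with a true
# feasibility probability below 1 - eps.
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def is_chance_feasible(successes: int, n: int, eps: float, delta: float) -> bool:
    # Under H0 "true feasibility probability = 1 - eps", observing at
    # least `successes` satisfied scenarios has probability
    # binom_tail(n, successes, 1 - eps). Reject H0 in favour of
    # "probability > 1 - eps" when that tail probability is below delta.
    return binom_tail(n, successes, 1 - eps) < delta

# Example: 1000 sampled execution-time scenarios, 995 satisfied,
# target probability level 0.95 at confidence 0.99.
print(is_chance_feasible(995, 1000, eps=0.05, delta=0.01))  # True here
```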
|
266 |
Aplikace pro obsluhu měřicích zařízení v energetice / Application for operation of measuring devices in power engineering. Ševčík, Radek, January 2019
This thesis is focused on application development for the ARM processor architecture and the Linux operating system. It uses cloud hosting services to build, test, and deploy the application with container-based virtualization. The application communicates with smart meters over serial buses, stores the measurements in a database, and provides access to the devices and measurements over the Internet.
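The pipeline the abstract describes (read a meter on a serial bus, persist the measurement, expose it to the network layer) can be sketched with the Python standard library alone. Everything below is hypothetical: the schema, the polling loop, and the stubbed serial read (a real implementation might use pyserial) are invented for illustration.

```python
# Hypothetical sketch of the measurement pipeline: poll a meter on a
# serial bus, persist readings, and let a network layer query them.
import sqlite3
import time

def read_meter_stub() -> float:
    """Stand-in for a serial-bus read; returns an energy value in kWh."""
    return 42.5  # a real implementation would parse the meter's frame

db = sqlite3.connect("measurements.db")
db.execute("""CREATE TABLE IF NOT EXISTS readings (
                  ts REAL NOT NULL,          -- unix timestamp
                  energy_kwh REAL NOT NULL   -- cumulative energy
              )""")

for _ in range(3):                     # a daemon would loop forever
    value = read_meter_stub()
    db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), value))
    db.commit()
    time.sleep(1)

print(db.execute("SELECT COUNT(*), MAX(energy_kwh) FROM readings").fetchone())
```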
|
267 |
Just-in-time kompilace závisle typovaného lambda kalkulu / Just-in-Time Compilation of Dependently-Typed Lambda Calculus. Zárybnický, Jakub, January 2021
Many programming languages have been able to increase their speed by exchanging custom-built runtime systems for general platforms that use just-in-time compilation for optimization, such as GraalVM or RPython. In this thesis I evaluate whether the use of such platforms is also appropriate for languages with dependent types or proof systems. The thesis introduces the concepts of λ-calculus and type theory needed as an introduction to dependent types, together with the relevant algorithms; specifies a small dependently typed language based on the λΠ-calculus; and presents two interpreters of this language. These interpreters are written in Kotlin: the first is simple and written in a functional style, while the second uses the GraalVM platform and Truffle. GraalVM is a platform built on the Java Virtual Machine (JVM) that adds a just-in-time compiler based on partial evaluation, and Truffle is a library for building programming languages that uses this compiler. The thesis concludes by evaluating the runtime characteristics of the two interpreters on various benchmarks. The conclusions, however, are strongly negative: the effect of JIT compilation is not noticeable even after efforts to optimize the common type-theory algorithms, which are evidently ill-suited to the JVM platform. The thesis ends with proposals for several follow-up projects that would make better use of Truffle's capabilities and would be better suited to implementing dependently typed languages.
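The "simple, functional-style interpreter" half of the comparison can be illustrated independently of the Kotlin sources. The following Python sketch is a minimal environment-passing evaluator for the untyped λ-calculus core that any λΠ interpreter contains; it is an invented illustration and omits the type checker entirely.

```python
# Minimal environment-passing evaluator for the untyped lambda-calculus
# core; a real lambda-Pi interpreter adds Pi types and a type checker.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def evaluate(term, env):
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):            # a closure captures the env
        return lambda v, t=term, e=env: evaluate(t.body, {**e, t.param: v})
    if isinstance(term, App):
        return evaluate(term.fn, env)(evaluate(term.arg, env))
    raise TypeError(f"unknown term {term!r}")

# (\f. \x. f (f x)) successor 0  ==>  2
twice = Lam("f", Lam("x", App(Var("f"), App(Var("f"), Var("x")))))
result = evaluate(App(App(twice, Lam("n", App(Var("succ"), Var("n")))),
                      Var("zero")),
                  {"succ": lambda n: n + 1, "zero": 0})
print(result)  # prints 2
```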
|
268 |
Kompilace KNF do backdoor decomposable monotone circuit / Compilation of a CNF into a backdoor decomposable monotone circuit. Illner, Petr, January 2021
An NNF circuit is a directed acyclic graph (DAG), where each leaf is labelled with either true/false or a literal, and each inner node represents either a conjunction (∧) or a disjunction (∨). A decomposable NNF (DNNF) is an NNF satisfying the decomposability property at each conjunction node. The C-BDMC language generalizes the DNNF language: in a C-BDMC, the leaves can contain CNF formulae from a given base class C. In this paper, we focus only on renamable Horn formulae. We experimentally compare the sizes of d-BDMC and d-DNNF representations. We describe a new compilation language, called cara DNNF (c-DNNF), that generalizes the DNNF language; a c-DNNF circuit can be considered a compressed representation of a DNNF circuit. We present a new experimental knowledge compiler, called CaraCompiler, for converting a CNF formula into a d-BDMC or a (c)d-DNNF circuit. CaraCompiler is based on the state-of-the-art compiler D4. We also mention some extensions to the compiler D4, such as caching hypergraph cuts, which can reduce compilation times.
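Decomposability, the property that makes DNNF circuits tractable, requires the children of every conjunction node to mention pairwise disjoint sets of variables. A minimal check looks as follows; this Python fragment is an illustration of the definition, not part of CaraCompiler or D4.

```python
# Sketch: checking decomposability of an NNF circuit. Nodes are
# ("lit", v), ("and", children), or ("or", children); the children of
# an AND node must mention pairwise disjoint variable sets.

def variables(node):
    if node[0] == "lit":
        return {abs(node[1])}            # literal +v or -v mentions var v
    return set().union(*(variables(c) for c in node[1]))

def decomposable(node):
    if node[0] == "lit":
        return True
    children = node[1]
    if not all(decomposable(c) for c in children):
        return False
    if node[0] == "and":                 # pairwise disjointness check
        seen = set()
        for c in children:
            vs = variables(c)
            if seen & vs:
                return False
            seen |= vs
    return True

# (x1 OR x2) AND (NOT x3): decomposable, the conjuncts share no variable.
circuit = ("and", [("or", [("lit", 1), ("lit", 2)]), ("lit", -3)])
print(decomposable(circuit))  # True
```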
|
269 |
Booleovské techniky v reprezentaci znalostí / Boolean techniques in Knowledge representation. Chromý, Miloš, January 2020
Title: Boolean techniques in Knowledge representation Author: Miloš Chromý Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: Doc. RNDr. Ondřej Čepek, Ph.D., Department of Theoretical Computer Science and Mathematical Logic Abstract: In this thesis we investigate switch-list representations of Boolean functions and explore biclique satisfiable formulas. Given a truth-table representation of a Boolean function f, the switch-list representation of f is a list of the Boolean vectors in the truth table whose function value differs from that of the preceding Boolean vector in the truth table. We include this type of representation in the Knowledge Compilation Map [Darwiche and Marquis, 2002] and argue that switch-lists may in certain situations constitute a reasonable choice for a target language in knowledge compilation. First, we compare switch-list representations with a number of standard representations (such as CNF, DNF, and OBDD) with respect to their relative succinctness. As a by-product of this analysis we also give a short proof of a long-standing open question from [Darwiche and Marquis, 2002], namely the incomparability of the MODS (models) and PI (prime implicates) representations. Next, using the succinctness result between...
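The switch-list construction itself is easy to state in code: walk the truth table in lexicographic order and record each vector whose value differs from its predecessor's. The Python fragment below is a toy illustration of that definition, not the thesis's implementation.

```python
# Sketch: build the switch-list of a Boolean function from its truth
# table, taken in the usual lexicographic order of input vectors.
from itertools import product

def switch_list(f, n):
    """Vectors at which f's value differs from the previous row's value."""
    switches, prev = [], None
    for vec in product([0, 1], repeat=n):
        val = f(*vec)
        if prev is not None and val != prev:
            switches.append(vec)
        prev = val
    return switches

# Example: f(x, y) = x XOR y has truth table 0,1,1,0, so the value
# switches at 01 (0 -> 1) and at 11 (1 -> 0).
print(switch_list(lambda x, y: x ^ y, 2))  # [(0, 1), (1, 1)]
```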
|
270 |
Optimizing the LLVM ELF linker for a distributed compilation environment: Concurrent Linking with LLVM LLD / Optimering av LLVMs ELF-länkare vid användning av distribuerad kompilering. Wilkens, Alexander, January 2020
Modern build systems that build large software projects often utilize a distributed compiler, allowing the compilation of object files to be parallelized over multiple machines. These build systems are often unable to fully utilize all the resources available on all machines. As linking is not part of this distributed process, the unused resources could be used to perform linking instead, reducing the total build time. However, linking is usually performed at the end of the build process, where it can no longer take advantage of the resources that sat idle earlier. In this thesis project, a linker that runs concurrently with the compilation process of the build system is designed, implemented, and evaluated. As the compilation process produces an object file, the linker performs a partial link using this file; the link is finalized at the end of the build, not unlike a traditional linker. The results show that the total build time is reduced when using the new linker in a build system utilizing a distributed compiler. In some cases, the time spent linking at the end of the build is reduced by over 50 percent compared to the reference linker.
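The scheme can be modelled as an incremental symbol-resolution loop: each object file is ingested as the distributed compiler emits it, and only the final fix-up has to wait for the end of the build. The Python sketch below is a toy model of the idea only; LLD itself is C++, and the object-file representation here is invented.

```python
# Toy model of concurrent linking: symbol resolution happens while
# object files are still being produced; only the final check waits.
import queue
import threading

class ConcurrentLinker:
    def __init__(self):
        self.defined, self.undefined = {}, set()

    def add_object(self, name, defines, references):
        """Partial link step, run as each object file arrives."""
        for sym in defines:
            self.defined[sym] = name
        self.undefined |= {s for s in references if s not in self.defined}
        self.undefined -= set(defines)

    def finalize(self):
        """End-of-build step, analogous to a traditional final link."""
        missing = self.undefined - self.defined.keys()
        return f"link failed, missing {missing}" if missing else "linked OK"

objs = queue.Queue()
linker = ConcurrentLinker()

def consume():  # runs concurrently with "compilation"
    while (item := objs.get()) is not None:
        linker.add_object(*item)

t = threading.Thread(target=consume)
t.start()
for obj in [("a.o", {"main"}, {"helper"}), ("b.o", {"helper"}, set())]:
    objs.put(obj)            # a distributed compiler would emit these
objs.put(None)
t.join()
print(linker.finalize())     # linked OK
```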
|