1

A General Framework for the Flow Analysis of Concurrent Programs

Lam, Patrick 08 1900
Standard techniques for analysing sequential programs are severely constrained when applied to a concurrent program because they cannot take full advantage of the concurrent structure of the program. In this work, we overcome this limitation using a novel approach which "lifts" a sequential dataflow analysis to a concurrent analysis. First, we introduce concurrency primitives which abstract away from the details of how concurrency features are implemented in real programming languages. Using these primitives, we describe how sequential analyses can be made applicable to concurrent programs. Under some circumstances, there is no penalty for concurrency: our method produces results which are as precise as the sequential analysis. Our lifting is straightforward, and we illustrate it on some standard analyses: available expressions, live variables and generalized constant propagation. Finally, we describe how concurrency features of real languages can be expressed using our abstract concurrency primitives, and present analyses for finding our concurrency primitives in real programs.
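To make concrete the kind of sequential analysis such a lifting starts from, here is a minimal Python sketch of live-variables analysis (one of the standard analyses named above) computed by a backward worklist iteration; the four-block CFG and its use/def sets are hypothetical, and the sketch does not attempt the thesis's concurrent lifting itself.

    # Minimal sketch of a sequential live-variables analysis (backward, may).
    # The tiny CFG below is hypothetical; each block lists the variables it
    # uses before any definition (USE) and the variables it defines (DEF).
    CFG = {
        "entry": {"use": set(),      "def": {"x", "y"}, "succ": ["loop"]},
        "loop":  {"use": {"x"},      "def": {"t"},      "succ": ["body", "exit"]},
        "body":  {"use": {"t", "y"}, "def": {"x"},      "succ": ["loop"]},
        "exit":  {"use": {"x"},      "def": set(),      "succ": []},
    }

    def live_variables(cfg):
        live_in = {b: set() for b in cfg}
        live_out = {b: set() for b in cfg}
        worklist = list(cfg)
        while worklist:
            b = worklist.pop()
            # OUT[b] = union of IN[s] over all successors s of b
            live_out[b] = set().union(*(live_in[s] for s in cfg[b]["succ"]))
            # IN[b] = USE[b] | (OUT[b] - DEF[b])
            new_in = cfg[b]["use"] | (live_out[b] - cfg[b]["def"])
            if new_in != live_in[b]:
                live_in[b] = new_in
                # re-process predecessors, found here by a simple scan
                worklist.extend(p for p in cfg if b in cfg[p]["succ"])
        return live_in, live_out

    if __name__ == "__main__":
        live_in, live_out = live_variables(CFG)
        for block in CFG:
            print(block, "IN:", sorted(live_in[block]), "OUT:", sorted(live_out[block]))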
2

Scaling scope bounded checking using incremental approaches

Gopinath, Divya 28 October 2010
Bounded verification is an effective technique for finding subtle bugs in object-oriented programs. Given a program, its correctness specification and bounds on the input domain size, scope-bounded checking translates bounded code segments into formulas in boolean logic and uses off-the-shelf satisfiability solvers to search for correctness violations. However, scalability is a key issue for the technique, since for non-trivial programs the formulas are often complex and can choke the solvers. This thesis describes approaches which aim to scale scope-bounded checking by using syntactic and semantic information from the code to split a program into sub-programs which can be checked incrementally. It presents a thorough evaluation of the approaches and compares their performance with existing bounded verification techniques. Novel ideas for future work, specifically a specification-slicing-driven splitting approach, are proposed to further improve the scalability of bounded verification.
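As an illustration of the general recipe (bound the data domain, translate a code fragment into a formula, hand the formula to an off-the-shelf solver), the following hedged sketch is not taken from the thesis; it assumes the Z3 SMT solver's Python bindings (the z3-solver package) are installed and checks a small arithmetic claim over 8-bit machine integers, letting the solver produce a violating input.

    # Hedged sketch of scope-bounded checking: encode a small code fragment over
    # a bounded domain (8-bit signed integers) as a formula and ask a solver for
    # a counterexample to the claimed property.  Requires the z3-solver package.
    from z3 import BitVec, Solver, Not, sat

    x = BitVec("x", 8)
    y = BitVec("y", 8)

    # Claimed property of the fragment "x - y > 0" used as a comparison:
    # it should agree with "x > y".  Over bounded machine integers the
    # subtraction can overflow, so the claim is falsifiable.
    claim = (x - y > 0) == (x > y)

    solver = Solver()
    solver.add(Not(claim))          # search for a violation of the property

    if solver.check() == sat:
        model = solver.model()
        print("counterexample: x =", model[x].as_signed_long(),
              "y =", model[y].as_signed_long())
    else:
        print("property holds for all 8-bit inputs")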
3

Prezentace transformací atributů v SQL pro potřeby data governance / The Presentation of Transforming Attributes in SQL for Data Governance Support

Bartoš, Michal January 2018
This thesis examines the extraction of attribute-transformation descriptions from Oracle SQL scripts and possibilities for presenting them to users. The thesis compares several variants of output format, then describes the details of construction for the chosen format and the choices that led to it. One of the main decisions was the choice of input data structures: the abstract syntax tree and the dataflow graph. These data structures provide the initial analysis of the input SQL scripts, but they also strongly influence the rest of the processing. The results of this thesis were verified by a prototype implementation in the Manta software. The prototype confirmed the suitability of the chosen approach for convenient presentation to users.
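To illustrate the kind of information being extracted (which target column is computed from which source columns, and by which expression), here is a deliberately naive sketch; it is not how Manta works, since it uses a regular expression instead of an Oracle SQL parser, an abstract syntax tree and a dataflow graph, and the sample statement and column names are invented.

    # Naive, illustrative extraction of attribute transformations from one SQL
    # statement.  A real tool parses the script into an abstract syntax tree and
    # builds a dataflow graph instead of using regular expressions.
    import re

    SQL = """
    CREATE TABLE customer_report AS
    SELECT c.id              AS customer_id,
           UPPER(c.name)     AS customer_name,
           o.total * 1.2     AS gross_total
    FROM customers c JOIN orders o ON c.id = o.cust_id
    """

    def extract_transformations(sql):
        select_list = re.search(r"SELECT(.*?)FROM", sql, re.S | re.I).group(1)
        transformations = {}
        for item in select_list.split(","):      # breaks on commas inside calls!
            match = re.match(r"\s*(?P<expr>.+?)\s+AS\s+(?P<alias>\w+)\s*$",
                             item, re.S | re.I)
            if not match:
                continue
            expr = " ".join(match.group("expr").split())
            sources = sorted(set(re.findall(r"\b[A-Za-z_]\w*\.[A-Za-z_]\w*\b", expr)))
            transformations[match.group("alias")] = (expr, sources)
        return transformations

    for target, (expr, sources) in extract_transformations(SQL).items():
        print(f"{target}  <-  {expr}   (sources: {', '.join(sources)})")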
4

Dynamická analýza pro hledání chyb endianity / Dynamic Analysis for Finding Endianity Bugs

Kápl, Roman January 2018
When two computer systems communicate, for example over a network, they must agree on the ordering of bytes within numbers. This ordering is called endianness. Often one of the systems has to swap the order of bytes to match the agreed standard. The results of this work help programmers find places in their program where they forgot to swap the bytes. We have developed a dynamic data-flow analysis built upon the popular Valgrind tool. Compared to the static analysis currently used by the Linux kernel developers, our approach does not require annotating variables with their endianness; typically only a few places in the program source code need to be annotated. The analysis can also detect potential bugs that would only manifest if the program were run on a computer with the opposite endianness. Our approach has been validated on an existing program known to contain as-yet-unfixed endianness problems (the RadeonSI OpenGL driver). It identified all endianness-related bugs and provided useful diagnostic messages together with their locations.
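For readers unfamiliar with this class of bug, the small sketch below (in Python, unlike the thesis's Valgrind-based tool, which targets compiled programs) shows the kind of mistake the analysis is meant to catch: a 32-bit length field arrives in network byte order, and a buggy reader forgets to convert it before use on a little-endian host.

    # Illustration of an endianness bug: a 4-byte length field is sent in
    # network byte order (big-endian), but the buggy receiver unpacks it with
    # the host's native order.  On a little-endian machine the readers disagree.
    import struct
    import sys

    message = struct.pack("!I", 16) + b"payload-follows!"   # length sent big-endian

    def read_length_correct(msg):
        # '!' means network byte order: the receiver swaps bytes when needed.
        return struct.unpack("!I", msg[:4])[0]

    def read_length_buggy(msg):
        # '=' means native byte order: no swap, wrong on little-endian hosts.
        return struct.unpack("=I", msg[:4])[0]

    print("host byte order:", sys.byteorder)
    print("correct reader :", read_length_correct(message))   # 16
    print("buggy reader   :", read_length_buggy(message))     # 268435456 on little-endian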
5

Comprehensive Path-sensitive Data-flow Analysis

Thakur, Aditya 07 1900
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss of data-flow precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward or backward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy heuristic. The framework has been implemented in the Scale research compiler and instantiated for the specific problems of constant propagation and liveness analysis. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach for constant propagation. For liveness analysis, we see an average speedup of 0.8% in running times over the baseline implementation.
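The precision loss being targeted can be seen in a few lines. The hedged, illustrative sketch below (not the thesis's restructuring algorithm) contrasts joining constant-propagation facts at a control-flow merge with keeping the two paths separate, for the classic correlated-branch example.

    # Illustration of why control-flow merges lose constant-propagation precision.
    # Lattice values: an int means "known constant", TOP means "not a constant".
    TOP = "not-a-constant"

    def join(a, b):
        # Merge two dataflow facts at a control-flow join.
        return a if a == b else TOP

    # Program sketch:   if p: x = 1  else: x = 2
    #                   ...
    #                   if p: use(x)        # on this path x is always 1
    fact_then, fact_else = 1, 2

    # Path-insensitive: join first, then reason about the later use of x.
    merged = join(fact_then, fact_else)
    print("at use(x), path-insensitive fact:", merged)        # not-a-constant

    # Path-sensitive (what restructuring recovers by duplicating the merge):
    # the path that reaches use(x) is exactly the one where p held, so x == 1.
    print("at use(x), path-sensitive fact  :", fact_then)      # 1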
6

Verifying Data-Oriented Gadgets in Binary Programs to Build Data-Only Exploits

Sisco, Zachary David 08 August 2018
No description available.
7

Application of local semantic analysis in fault prediction and detection

Shao, Danhua 06 October 2010
To improve the quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; and scope-bounded testing, where a program is tested against all (small) inputs up to a given bound on the input size. Although the accuracy of change-based fault prediction and scope-bounded checking has been evaluated experimentally, both have effectiveness and efficiency limitations. Previous change-based fault prediction approaches only consider the code modified by a change while ignoring the code impacted by the change. Scope-bounded testing only concerns the correctness specifications; the internal structure of the program is ignored. Although scope-bounded static checking considers the internal structure of programs, formulas translated from structurally complex programs might choke the backend analyzer and fail to give a result within a reasonable time. To improve the effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault-prediction metrics to the impacted code. Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large-size changes or short-interval changes. When generating inputs for scope-bounded testing, we use control flow to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. The problem of scope-bounded checking for the given program thus reduces to several sub-problems, where each sub-problem requires the constraint solver to check a less complex formula, thereby likely reducing the solver's overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
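As a concrete, hypothetical example of the scope-bounded testing form described above, the sketch below checks a small routine against all inputs up to a bound on the input size, so any fault that manifests within the scope is guaranteed to be exposed; the routine and its fault are invented for illustration.

    # Scope-bounded testing sketch: check a routine against *all* inputs up to a
    # small bound.  The routine and its fault are hypothetical.
    from itertools import product

    def dedupe_adjacent(xs):
        """Intended to remove *all* duplicates; actually only removes adjacent ones."""
        out = []
        for x in xs:
            if not out or out[-1] != x:
                out.append(x)
        return out

    def check_within_scope(max_len=4, values=range(3)):
        for length in range(max_len + 1):
            for xs in product(values, repeat=length):
                result = dedupe_adjacent(list(xs))
                if len(result) != len(set(result)):     # correctness property
                    return list(xs), result             # counterexample found
        return None

    violation = check_within_scope()
    print("counterexample within scope:", violation)    # e.g. ([0, 1, 0], [0, 1, 0])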
8

Deployment of mixed criticality and data driven systems on multi-cores architectures / Déploiement de systèmes à flots de données en criticité mixte pour architectures multi-coeurs

Medina, Roberto 30 January 2019
Nowadays, the design of modern safety-critical systems is pushing towards the integration of multiple system components onto a single shared computation platform. Mixed-criticality systems in particular allow critical components with a high degree of confidence (i.e. a low probability of failure) to share computation resources with less critical or non-critical components without requiring software isolation mechanisms (as opposed to partitioned systems). Traditionally, safety-critical systems have been conceived using models of computation like data-flow graphs and real-time scheduling to obtain logical and temporal correctness. Nonetheless, resources given to data-flow representations and real-time scheduling techniques are based on worst-case analysis, which often leads to an under-utilization of the computation capacity: the allocated resources are not always completely used. This under-utilization becomes more pronounced on multi-core architectures, where the difference between best- and worst-case performance is more significant. The mixed-criticality execution model proposes a solution to this problem. To allocate resources efficiently while ensuring safe execution of the most critical components, resources are allocated according to the operational mode the system is in. As long as sufficient processing capacity is available to respect deadlines, the system remains in a 'low-criticality' operational mode. If the system demand increases, however, critical components are prioritized to meet their deadlines, their computation resources are increased, and less critical or non-critical components are potentially penalized. The system is said to transition to a 'high-criticality' operational mode. Yet incorporating mixed-criticality aspects into the data-flow model of computation is a difficult problem, as it requires defining new scheduling methods capable of handling precedence constraints and variations in timing budgets. Although mixed-criticality scheduling has been well studied for single- and multi-core platforms, the problem of data dependencies on multi-core platforms has rarely been considered. Existing methods lead to poor resource usage, which contradicts the main purpose of mixed criticality. For this reason, our first objective is to design new, efficient scheduling methods for data-driven mixed-criticality systems. We define a meta-heuristic producing scheduling tables for all operational modes of the system. These tables are proven correct: when the system demand increases, critical components will never miss a deadline. Two implementations based on existing preemptive global algorithms were developed to gain in schedulability and resource usage.
In some cases these implementations schedule more than 60% of systems compared to existing approaches. While the mixed-criticality model claims that critical and non-critical components can share the same computation platform, the interruption of non-critical components degrades their availability significantly. This is a problem, since non-critical components need to deliver a minimum service guarantee; indeed, recent works in mixed criticality have recognized this limitation. For this reason, we define methods to evaluate the availability of non-critical components. To our knowledge, our evaluations are the first capable of quantifying availability. We also propose enhancements, compatible with our scheduling methods, that limit the impact critical components have on non-critical ones. These enhancements are evaluated with probabilistic automata and show a considerable improvement in availability, e.g. improvements of over 2% in a context where increases on the order of 10⁻⁹ are significant. Our contributions have been integrated into an open-source framework. This tool also provides an unbiased generator used to evaluate scheduling methods for data-driven mixed-criticality systems.
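To make the mode-based allocation idea concrete, here is a deliberately simplified, hypothetical sketch: a single core, independent periodic tasks and a plain utilization check per mode. The thesis itself targets data-flow graphs of dependent tasks on multi-core platforms and builds full scheduling tables with a meta-heuristic, none of which this toy attempts; the task set below is invented.

    # Simplified mixed-criticality sketch: each task has a low-criticality budget
    # C_LO; high-criticality tasks also have a larger budget C_HI.  In LO mode
    # all tasks run with C_LO; after a mode switch only HI tasks run, with C_HI.
    # (Single core, implicit deadlines, utilization check only -- a toy model.)
    tasks = [
        # (name, criticality, C_LO, C_HI, period)
        ("flight_control", "HI", 2.0, 4.0, 10.0),
        ("navigation",     "HI", 1.0, 2.0, 20.0),
        ("telemetry",      "LO", 3.0, None, 25.0),
        ("logging",        "LO", 2.0, None, 50.0),
    ]

    def utilization_lo(tasks):
        return sum(c_lo / period for _, _, c_lo, _, period in tasks)

    def utilization_hi(tasks):
        return sum(c_hi / period for _, crit, _, c_hi, period in tasks if crit == "HI")

    u_lo, u_hi = utilization_lo(tasks), utilization_hi(tasks)
    print(f"LO-mode utilization (all tasks, C_LO budgets): {u_lo:.2f}")
    print(f"HI-mode utilization (HI tasks, C_HI budgets) : {u_hi:.2f}")
    print("both modes fit on one core" if u_lo <= 1.0 and u_hi <= 1.0
          else "infeasible under this naive test")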
9

Cache Prediction and Execution Time Analysis on Real-Time MPSoC

Neikter, Carl-Fredrik January 2008
Real-time systems do not only require that the logical operations are correct; equally important is that the specified time constraints are always met. This has been studied successfully for mono-processor systems. However, as the hardware in such systems gets more complex, the previous approaches become invalid. For example, multi-processor systems-on-chip (MPSoC) are becoming more and more common, and with a shared memory the bus access time is unpredictable in nature. This has recently been resolved, but a safe and not too pessimistic cache-analysis approach for MPSoC has not been investigated before. This thesis has resulted in the design and implementation of algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. An additional advantage is that the algorithms include improvements over previous approaches for mono-processor systems. The verification of these algorithms has been performed with the help of data-flow analysis theory. Furthermore, it is not known how different types of cache-miss characteristics of a task influence the worst-case execution time on MPSoC. Therefore, a program that generates randomized tasks according to different parameters has been constructed. The parameters can, for example, influence the complexity of the control-flow graph and the average distance between cache misses.
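As a small, hypothetical companion to the cache-analysis theme, the sketch below simulates a direct-mapped cache over an access trace and classifies each access as a hit or a miss. A static analysis such as the one developed in the thesis must predict such classifications safely at analysis time rather than observe them, and must additionally account for the shared communication infrastructure of an MPSoC, which this toy does not model; the cache parameters and the trace are invented.

    # Toy direct-mapped cache simulation: classify each memory access as hit/miss.
    # Parameters and the access trace are made up for illustration.
    LINE_SIZE = 16      # bytes per cache line
    NUM_LINES = 4       # direct-mapped: one candidate line per address

    def simulate(trace):
        lines = [None] * NUM_LINES             # tag stored per line, None = empty
        results = []
        for addr in trace:
            tag, index = divmod(addr // LINE_SIZE, NUM_LINES)
            if lines[index] == tag:
                results.append((hex(addr), "hit"))
            else:
                results.append((hex(addr), "miss"))
                lines[index] = tag             # fill / evict on conflict
        return results

    # Accesses to two memory regions whose lines collide in the small cache.
    trace = [0x000, 0x004, 0x040, 0x000, 0x100, 0x000]
    for addr, outcome in simulate(trace):
        print(addr, outcome)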
