51

Context-aware automated refactoring for unified memory allocation in NVIDIA CUDA programs

Nejadfard, Kian 25 June 2021 (has links)
No description available.
52

Analýza práce s dynamickými datovými strukturami v C programech / Analysis of C Programs with Dynamic Linked Data Structures

Šoková, Veronika January 2016 (has links)
This master's thesis deals with the analysis of dynamic linked data structures using the shape analysis employed in the Predator tool. It describes the chosen abstract domain for heap representation, symbolic memory graphs (SMGs), and the design of a framework for developing static analyzers on top of Clang/LLVM. The main contribution is the implementation and testing of LLVM transformation passes that simplify the LLVM IR. The second contribution is the optimization of parameters for running several variants of the Predator tool in parallel; the parameters are tuned for the SV-COMP'16 benchmark, where our tool won the gold medal in the Heap Data Structures category. The last contribution is the design of a verification core focused on the SMG domain.
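
A minimal sketch, with invented names and unrelated to the Predator implementation, of how a symbolic-memory-graph-style heap abstraction can be represented: heap regions and abstract values are graph nodes, has-value edges map an object and field offset to a value, and points-to edges map a value back to its target object.

```cpp
// Hypothetical sketch of a symbolic-memory-graph-like heap abstraction.
// Not the Predator implementation; illustrative only.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct SMG {
    // Objects are heap regions identified by ids; values are abstract addresses.
    std::vector<std::string> objects;                  // id -> label (e.g. "list node")
    std::map<std::pair<int, int64_t>, int> hasValue;   // (object, offset) -> value
    std::map<int, int> pointsTo;                       // value -> target object

    int addObject(const std::string& label) {
        objects.push_back(label);
        return static_cast<int>(objects.size()) - 1;
    }
    // Write a pointer field: object.offset holds a value pointing to target.
    void writePtr(int obj, int64_t off, int value, int target) {
        hasValue[{obj, off}] = value;
        pointsTo[value] = target;
    }
    // Follow a pointer field, or return -1 if the field is undefined.
    int deref(int obj, int64_t off) const {
        auto hv = hasValue.find({obj, off});
        if (hv == hasValue.end()) return -1;
        auto pt = pointsTo.find(hv->second);
        return pt == pointsTo.end() ? -1 : pt->second;
    }
};

int main() {
    SMG g;
    int a = g.addObject("node A"), b = g.addObject("node B");
    g.writePtr(a, /*offset of next*/ 0, /*value id*/ 100, b);   // A->next = B
    std::cout << "A.next points to object " << g.deref(a, 0) << "\n";
}
```
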
53

Statická detekce malware nad LLVM IR / Static Behavioral Malware Detection over LLVM IR

Surovič, Marek January 2016 (has links)
This thesis deals with methods for behavioral malware detection that make use of formal analysis and verification techniques. The core of the approach is the inference of tree automata from system-call dependency graphs, which are obtained by static analysis of LLVM IR. As part of the thesis, a prototype detector built on the LLVM compiler infrastructure is implemented. For the experimental evaluation of the detector, a C/C++ compiler capable of generating malware mutations by means of obfuscating transformations is used. The results of preliminary experiments and possible future extensions of the detector are discussed at the end of the thesis.
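
As a rough illustration of the underlying idea only (not code from the thesis; the syscall names and the "pattern" are made up), a behavioral signature can be viewed as a data-dependency graph over system calls, where an edge means that the result of one call feeds an argument of another.

```cpp
// Illustrative sketch: a data-dependency graph over system calls and a naive
// check for a suspicious "open a file, then send its contents" pattern.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

struct SyscallGraph {
    std::vector<std::string> nodes;        // node id -> syscall name
    std::map<int, std::set<int>> deps;     // edge u -> v: output of u flows into v

    int add(const std::string& name) {
        nodes.push_back(name);
        return static_cast<int>(nodes.size()) - 1;
    }
    void flow(int from, int to) { deps[from].insert(to); }

    // Does any dependency path lead from a call named `src` to a call named `dst`?
    bool reaches(const std::string& src, const std::string& dst) const {
        for (int i = 0; i < static_cast<int>(nodes.size()); ++i) {
            if (nodes[i] != src) continue;
            std::vector<int> stack{i};
            std::set<int> seen;
            while (!stack.empty()) {
                int u = stack.back(); stack.pop_back();
                if (!seen.insert(u).second) continue;
                if (nodes[u] == dst && u != i) return true;
                auto it = deps.find(u);
                if (it != deps.end())
                    for (int v : it->second) stack.push_back(v);
            }
        }
        return false;
    }
};

int main() {
    SyscallGraph g;
    int open = g.add("open"), read = g.add("read"), send = g.add("send");
    g.flow(open, read);   // fd returned by open is read
    g.flow(read, send);   // buffer filled by read is sent
    std::cout << (g.reaches("open", "send") ? "pattern found" : "clean") << "\n";
}
```
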
54

Optimizing the LLVM ELF linker for a distributed compilation environment : Concurrent Linking with LLVM LLD / Optimering av LLVMs ELF-länkare vid användning av distribuerad kompilering

Wilkens, Alexander January 2020 (has links)
Modern build systems that build large software projects often utilize a distributed compiler, allowing the compilation of object files to be parallelized over multiple machines. These build systems are often not able to fully utilize all the resources available on all machines. As linking is not part of this distributed process, the unused resources could be used to perform linking instead, reducing the total build time. However, linking is usually performed at the end of the build process and therefore cannot make use of the resources that sit idle earlier in the build. In this thesis project, a linker that runs concurrently with the compilation process of the build system is designed, implemented, and evaluated. As the compilation process produces an object file, the linker performs a partial link using that file. The link is finalized at the end of the build, not unlike a traditional linker. The results show that the total build time is reduced when the new linker is used in a build system with a distributed compiler. In some cases, the time spent linking at the end of the build is reduced by over 50 percent compared to the reference linker.
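
A minimal sketch of the scheduling idea, with invented names and none of LLD's actual data structures: a linker thread consumes object files from a queue as compile jobs finish, does the per-object work as each file arrives (here just merging a symbol table), and only the final resolution step waits for the end of the build.

```cpp
// Illustrative producer/consumer sketch of linking concurrently with compilation.
// Names and data structures are invented; this is not LLD code.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <set>
#include <string>
#include <thread>
#include <vector>

struct ObjectFile { std::string name; std::vector<std::string> definedSymbols; };

int main() {
    std::queue<ObjectFile> ready;
    std::mutex m;
    std::condition_variable cv;
    bool compilationDone = false;

    std::set<std::string> symbolTable;   // the partially linked state

    // "Linker" thread: incrementally ingests object files as they are produced.
    std::thread linker([&] {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !ready.empty() || compilationDone; });
            if (ready.empty() && compilationDone) break;
            ObjectFile obj = std::move(ready.front());
            ready.pop();
            lock.unlock();
            for (const auto& s : obj.definedSymbols)   // partial link step
                symbolTable.insert(s);
        }
        // Final step: whatever a real linker defers (relocation, layout) runs here.
        std::cout << "final link over " << symbolTable.size() << " symbols\n";
    });

    // "Distributed compiler": object files arrive one by one.
    std::vector<ObjectFile> build = {{"a.o", {"main"}}, {"b.o", {"foo", "bar"}}};
    for (auto& obj : build) {
        { std::lock_guard<std::mutex> lock(m); ready.push(obj); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); compilationDone = true; }
    cv.notify_one();
    linker.join();
}
```
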
55

Quantitative Metrics and Measurement Methodologies for System Security Assurance

Ahmed, Md Salman 11 January 2022 (has links)
Proactive approaches for preventing attacks through security measurements are crucial for defending against sophisticated attacks. However, proactive measures must employ qualitative security metrics and systematic measurement methodologies to assess security guarantees, as some metrics (e.g., entropy) used for evaluating security guarantees may not capture the capabilities of advanced attackers. Also, many proactive measures (e.g., data pointer protection or data flow integrity) suffer from performance bottlenecks. This dissertation identifies and represents attack vectors as metrics using the knowledge from advanced exploits, and demonstrates the effectiveness of these metrics by quantifying the attack surface and by enabling the performance-versus-security trade-off of existing defenses to be tuned through the identification and prioritization of key attack vectors for protection. We measure the attack surface by quantifying the impact of fine-grained Address Space Layout Randomization (ASLR) on code reuse attacks under the Just-In-Time Return-Oriented Programming (JIT-ROP) threat model. We conduct a comprehensive measurement study with five fine-grained ASLR tools, 20 applications (including six browsers and one browser engine), and 25 dynamic libraries. Experiments show that attackers need only a few seconds (1.5-3.5) to find various code-reuse gadgets, such as a Turing-complete gadget set. Experiments also suggest that some code pointer leaks allow attackers to find gadgets more quickly than others. In addition, instruction-level single-round randomization can restrict Turing-complete operations by eliminating up to 90% of gadgets. This dissertation also identifies and prioritizes critical data pointers for protection to enable tuning between performance and security. We apply seven rule-based heuristics to prioritize externally manipulatable sensitive data objects/pointers. Our evaluation on 33 ground-truth vulnerable data objects/pointers shows successful detection of 32 of them with a 42% reduction in performance overhead compared to AddressSanitizer. Our results also suggest that sensitive data objects make up as little as 3% of all data objects and that, on average, 82% of data objects in real-world applications do not need protection. / Doctor of Philosophy / Proactive approaches for preventing attacks through security measurements are crucial for preventing advanced attacks, because reactive measures can become challenging, especially once attackers enter sophisticated attack phases. A key challenge for proactive measures is the identification of representative metrics and measurement methodologies to assess security guarantees, as some metrics used for evaluating security guarantees may not capture the capabilities of advanced attackers. Also, many proactive measures suffer from performance bottlenecks. This dissertation identifies and represents attack elements as metrics using the knowledge from advanced exploits, and demonstrates the effectiveness of these metrics by quantifying the attack surface and by enabling the performance-versus-security trade-off of existing defenses to be tuned through the identification and prioritization of key attack elements. We measure the attack surface of various software applications by quantifying the available attack elements of code reuse attacks in the presence of fine-grained Address Space Layout Randomization (ASLR), a defense in modern operating systems. ASLR makes code reuse attacks difficult by making the attack components unavailable.
We perform a comprehensive measurement study with five fine-grained ASLR tools, real-world applications, and libraries under an influential code reuse attack model. Experiments show that attackers need only a few seconds (1.5-3.5) to find various code reuse elements. Results also show the influence of one attack element over another and of one defense strategy over another. This dissertation also applies seven rule-based heuristics to prioritize externally manipulatable sensitive data objects/pointers, a type of attack element, to enable tuning between performance and security. Our evaluation on 33 ground-truth vulnerable data objects/pointers shows successful identification of 32 of them with a 42% reduction in performance overhead compared to AddressSanitizer, a memory error detector. Our results also suggest that sensitive data objects make up as little as 3% of all objects and that, on average, 82% of objects in real-world applications do not need protection.
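
A toy sketch of the kind of scan such a measurement relies on (invented, and far simpler than a real JIT-ROP toolchain): walk executable bytes and count short instruction windows ending in a `ret` (0xC3), which on x86 is the minimal shape of a code-reuse gadget.

```cpp
// Toy gadget scan: count byte windows ending in a RET (0xC3) opcode.
// Real gadget finders disassemble properly; this only sketches the idea.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

std::size_t countRetGadgets(const std::vector<std::uint8_t>& code, std::size_t window = 5) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < code.size(); ++i) {
        if (code[i] != 0xC3) continue;               // ret
        // Every start offset within `window` bytes before the ret is a candidate gadget.
        std::size_t start = i >= window ? i - window : 0;
        count += i - start;
    }
    return count;
}

int main() {
    // A few fake "executable" bytes: pop rdi; ret; nop; pop rsi; pop rdx; ret
    std::vector<std::uint8_t> code = {0x5F, 0xC3, 0x90, 0x5E, 0x5A, 0xC3};
    std::cout << countRetGadgets(code) << " candidate gadget start offsets\n";
}
```
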
56

Machine virtuelle universelle pour codage vidéo reconfigurable

Gorin, Jérôme 22 November 2011 (has links) (PDF)
This thesis proposes a new paradigm for representing applications in virtual machines, one capable of abstracting away the architecture of computer systems. Current virtual machines rely on a single application-representation model that abstracts machine instructions and on an execution model that maps the behavior of those instructions onto the target machines. While these two models make applications portable across a wide range of systems, they cannot express instruction-level concurrency, which is essential for optimizing the processing of an application according to the resources available on the target platform. We first developed a "universal" application representation for virtual machines based on dataflow-graph modeling: an application is modeled by a directed graph whose vertices are computation units (actors) and whose edges represent the flow of data passing through those vertices. Each computation unit can be processed independently of the others on separate resources, so instruction-level concurrency in the application becomes explicit. Exploiting this new application-description formalism requires changing the programming rules. To this end, we introduced and defined the concept of a "Canonical and Minimal Representation" of an actor, grounded both in the actor-oriented programming language CAL and in the instruction-abstraction models of existing virtual machines. Our major contribution, which integrates the two proposed representations, is the development of a "Universal Virtual Machine" (MVU) whose distinctive feature is to handle the adaptation, optimization, and scheduling mechanisms on top of the Low-Level Virtual Machine (LLVM) compilation infrastructure. The relevance of this MVU is demonstrated in the normative context of Reconfigurable Video Coding (RVC): MPEG RVC provides reference decoder applications conforming to the MPEG-4 Part 2 Simple Profile in the form of dataflow graphs. One application of this thesis is the dataflow-graph modeling of a decoder conforming to the MPEG-4 Part 10 Constrained Baseline Profile, which is twice as complex as the MPEG RVC reference applications. Experimental results show a twofold execution speedup on platforms with two cores compared to single-core execution. The optimizations developed yield a further 25% gain on this performance while halving compilation times. This work demonstrates the operational and universal character of the standard, whose scope of application extends beyond video to other signal-processing domains (3D, audio, photography, ...).
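
A small sketch of the dataflow execution model described above (hypothetical names, unrelated to the actual MVU or CAL toolchain): actors communicate only through FIFO token queues and fire whenever enough input tokens are available, so a scheduler can run them independently of one another.

```cpp
// Illustrative dataflow sketch: actors connected by FIFOs, fired by a simple
// round-robin scheduler. Hypothetical example, not MVU/RVC code.
#include <deque>
#include <functional>
#include <iostream>
#include <vector>

using Fifo = std::deque<int>;

struct Actor {
    std::vector<Fifo*> in, out;
    // fire() returns true if the actor consumed/produced tokens this step.
    std::function<bool()> fire;
};

int main() {
    Fifo src2mul, mul2print;

    // Source actor: emits the tokens 1..5.
    int next = 1;
    Actor source{{}, {&src2mul}, [&] {
        if (next > 5) return false;
        src2mul.push_back(next++);
        return true;
    }};
    // Multiplier actor: doubles each token.
    Actor mul{{&src2mul}, {&mul2print}, [&] {
        if (src2mul.empty()) return false;
        mul2print.push_back(src2mul.front() * 2);
        src2mul.pop_front();
        return true;
    }};
    // Sink actor: prints tokens.
    Actor sink{{&mul2print}, {}, [&] {
        if (mul2print.empty()) return false;
        std::cout << mul2print.front() << " ";
        mul2print.pop_front();
        return true;
    }};

    std::vector<Actor*> actors{&source, &mul, &sink};
    bool progress = true;                 // run until no actor can fire
    while (progress) {
        progress = false;
        for (Actor* a : actors) progress |= a->fire();
    }
    std::cout << "\n";
}
```
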
57

The transactional HW/SW stack for fault tolerant embedded computing / Pilha HW/SW transacional para computacao embarcada tolerante a falhas

Ferreira, Ronaldo Rodrigues January 2015 (has links)
Fault tolerance implementation in embedded systems is challenging because of the physical constraints of area occupation, power dissipation, and energy consumption of these systems. The need to optimize these three physical constraints while computing within the available performance goals and real-time deadlines creates a conundrum that is hard to solve. Classical fault tolerance solutions such as triple and dual modular redundancy are not feasible due to their high power overhead or lack of efficient and deterministic error recovery. Existing techniques, although some of them reduce the power and area overhead, incur heavy performance penalties and most of the time do not assume a feasible fault model. This dissertation introduces the Transactional HW/SW Stack, or simply Stack, to efficiently manage the area, power, fault coverage, and performance conundrum. The Stack introduces a new compilation strategy that assembles programs into Transactional Basic Blocks (TBBs), together with a novel microprocessor, the TransactiOnal Basic Block Architecture (ToBBA), which provides fine-grained error detection and deterministic error rollback and elimination by using the TBBs both as a container for errors and as a small unit of data checkpointing. Two solutions to sustain the TBB semantics in hardware are introduced: software-based and hardware-based. The Stack's area, power, performance, and coverage were evaluated using ToBBA's hardware implementation model. The Stack attains an error correction coverage of 99.35% with a power overhead of 2.05 and an area overhead of 2.65. The Stack also presents a performance overhead of 1.33 or 1.54, depending on the hardware model adopted to support the TBB.
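
As a rough, invented illustration of the rollback idea only (not ToBBA itself, which works on microarchitectural state in hardware): snapshot the architectural state at the entry of each block, detect an error at the block boundary, and either commit or restore the snapshot and re-execute.

```cpp
// Illustrative checkpoint/rollback at basic-block granularity. Invented example;
// names and the error-detection check are hypothetical.
#include <array>
#include <functional>
#include <iostream>

using RegFile = std::array<long, 4>;

// Runs `block` transactionally: on a detected error, restore the checkpoint
// and retry until the block commits.
void runTransactional(RegFile& regs,
                      const std::function<void(RegFile&)>& block,
                      const std::function<bool(const RegFile&)>& errorDetected) {
    for (;;) {
        RegFile checkpoint = regs;          // snapshot at block entry
        block(regs);                        // speculative execution of the block
        if (!errorDetected(regs)) return;   // commit
        regs = checkpoint;                  // rollback and re-execute
    }
}

int main() {
    RegFile regs{1, 2, 0, 0};
    int faultsLeft = 1;                     // inject one transient fault

    runTransactional(
        regs,
        [&](RegFile& r) {
            r[2] = r[0] + r[1];
            if (faultsLeft-- > 0) r[2] ^= 0x40;            // bit flip simulating a soft error
        },
        [&](const RegFile& r) { return r[2] != r[0] + r[1]; });  // duplication check

    std::cout << "r2 = " << regs[2] << "\n";               // prints 3 after one rollback
}
```
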
60

Generování modelů pro testy ze zdrojových kódů / Generating of Testing Models from Source Code

Kraut, Daniel January 2019 (has links)
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths through source code. First, model-based testing was studied and a design for the desired automatic generator, based on coverage criteria defined over a CFG model, was drafted. The core of the thesis is the design of the tool and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on a specific artefact of the system under test. Moreover, the tool accepts additional requirements on the size of the generated test suite, reflecting practical real-world usage. The generator was implemented in C++, with a web interface in Python that is also used to integrate the tool into the Testos platform.
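
A small sketch of the path-generation step under a simple edge-coverage criterion (invented example; the actual generator supports many more criteria and constraints on test-suite size): enumerate entry-to-exit paths over a CFG by bounded DFS and keep a path only if it covers at least one still-uncovered edge.

```cpp
// Illustrative bounded DFS over a CFG, keeping paths until every edge is covered.
// Invented example, not the thesis's generator.
#include <iostream>
#include <map>
#include <set>
#include <utility>
#include <vector>

using CFG = std::map<int, std::vector<int>>;   // node -> successors

void dfs(const CFG& cfg, int node, int exit, std::vector<int>& path,
         std::set<std::pair<int, int>>& uncovered,
         std::vector<std::vector<int>>& suite, std::size_t maxLen) {
    if (path.size() > maxLen || uncovered.empty()) return;
    if (node == exit) {
        // Keep the path only if it covers at least one still-uncovered edge.
        bool useful = false;
        for (std::size_t i = 0; i + 1 < path.size(); ++i)
            useful |= uncovered.erase({path[i], path[i + 1]}) > 0;
        if (useful) suite.push_back(path);
        return;
    }
    auto it = cfg.find(node);
    if (it == cfg.end()) return;
    for (int next : it->second) {
        path.push_back(next);
        dfs(cfg, next, exit, path, uncovered, suite, maxLen);
        path.pop_back();
    }
}

int main() {
    // 0: entry, 1: branch, 2/3: then/else, 4: exit
    CFG cfg = {{0, {1}}, {1, {2, 3}}, {2, {4}}, {3, {4}}};
    std::set<std::pair<int, int>> uncovered;
    for (const auto& [u, succs] : cfg)
        for (int v : succs) uncovered.insert({u, v});

    std::vector<std::vector<int>> suite;
    std::vector<int> path{0};
    dfs(cfg, 0, 4, path, uncovered, suite, /*maxLen=*/10);

    for (const auto& p : suite) {          // prints 0 1 2 4 and 0 1 3 4
        for (int n : p) std::cout << n << " ";
        std::cout << "\n";
    }
}
```
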
