  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Spatial Isolation against Logical Cache-based Side-Channel Attacks in Many-Core Architectures

Méndez Real, Maria 20 September 2017 (has links)
Technological evolution and ever-increasing application performance demands have made many-core architectures the new trend in processor design. These architectures are composed of a large number of processing resources (hundreds or more), providing massive parallelism and high performance. Many-core architectures allow a large number of applications, coming from different sources and with different levels of sensitivity and trust, to be executed in parallel while sharing physical resources such as computation, memory, and communication infrastructure. However, this resource sharing also introduces important security vulnerabilities. In particular, sensitive applications that share cache memory with potentially malicious applications are vulnerable to logical cache-based side-channel attacks. These attacks allow an unprivileged application to access sensitive information manipulated by other applications, despite partitioning methods such as memory protection and virtualization. While considerable effort has gone into countering these attacks on multi-core architectures, those countermeasures were not designed for recently emerged many-core architectures and must be evaluated and/or revisited to be practical for these new technologies.
In this thesis, we propose to enhance operating system services with security-aware application deployment and resource allocation mechanisms in order to protect sensitive applications against cache-based attacks. Different application deployment strategies enabling spatial isolation are proposed and compared in terms of several performance indicators. Our proposal is evaluated through virtual prototyping based on SystemC and Open Virtual Platforms (OVP) technology.
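The deployment strategies themselves are not spelled out in this record. As a rough illustration of the spatial isolation idea only (a hypothetical sketch, not the algorithm proposed in the thesis), the following Python code maps applications to clusters so that a sensitive application never shares a cluster, and hence its shared cache, with an untrusted one:

# Hypothetical sketch of spatial isolation at deployment time: sensitive
# applications are only mapped to clusters that contain no untrusted tasks,
# so they never share a cache bank with a potential attacker. This is an
# illustration of the general idea, not the strategies evaluated in the thesis.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    cluster_id: int
    free_cores: int
    tenants: list = field(default_factory=list)  # (app_name, sensitive) pairs

def place(app_name: str, sensitive: bool, cores: int, clusters: list) -> int:
    """Return the id of the cluster the application is mapped to, or raise."""
    for c in clusters:
        if c.free_cores < cores:
            continue
        has_sensitive = any(s for _, s in c.tenants)
        has_untrusted = any(not s for _, s in c.tenants)
        # Spatial isolation rule: a cluster (and its shared cache) is never
        # occupied by a sensitive application and an untrusted one at once.
        if sensitive and has_untrusted:
            continue
        if not sensitive and has_sensitive:
            continue
        c.free_cores -= cores
        c.tenants.append((app_name, sensitive))
        return c.cluster_id
    raise RuntimeError(f"no isolated cluster available for {app_name}")

if __name__ == "__main__":
    clusters = [Cluster(i, free_cores=4) for i in range(4)]
    print(place("crypto", True, 2, clusters))    # sensitive app gets its own cluster
    print(place("browser", False, 2, clusters))  # untrusted app goes elsewhere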
2

Early evaluation of multicore systems soft error reliability using virtual platforms

Rosa, Felipe Rocha da January 2018 (has links)
The increasing computing capacity of multicore components such as processors and graphics processing units (GPUs) offers new opportunities for the embedded and high-performance computing (HPC) domains. The growing computing capacity of multicore-based systems makes it possible to execute complex application workloads efficiently at lower power consumption than traditional single-core solutions. This efficiency, together with the ever-increasing complexity of application workloads, encourages industry to integrate more and more computing components into the same system. The number of computing components employed in large-scale HPC systems already exceeds a million cores, while 1000-core on-chip platforms are commercially available in the embedded domain.
Beyond the massive number of cores, the increasing computing capacity, as well as the number of internal memory cells (e.g., registers, internal memory) inherent to emerging processor architectures, is making large-scale systems more vulnerable to both hard and soft errors. Moreover, to meet emerging performance and power requirements, the underlying processors usually run at aggressive clock frequencies and with multiple voltage domains, increasing their susceptibility to soft errors, such as those caused by radiation effects. The occurrence of soft errors, or Single Event Effects (SEEs), may cause critical failures in system behavior, which can lead to financial or human losses. While a rate of 280 soft errors per day has been observed during the flight of a spacecraft, electronic computing systems working at ground level are expected to experience at least one soft error per day in the near future. The increased susceptibility of multicore systems to SEEs calls for novel, cost-effective tools to assess the soft error resilience of the underlying multicore components, together with complex software stacks (operating system, drivers), early in the design phase. The primary goal of this thesis is the proposal and development of a fault injection framework using state-of-the-art virtual platforms, a set of novel fault injection techniques that direct fault campaigns according to the characteristics of the software stack, and an extensive framework validation with over a million simulation hours. The second goal is to lay the foundations for a new discipline in soft error reliability management for emerging multi-/many-core systems using machine learning techniques, identifying and proposing techniques that can provide different levels of reliability depending on the application workload and its criticality.
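The record does not describe the injection mechanism itself. As a minimal, hypothetical sketch of the single-bit-upset model commonly used in such fault campaigns (not the framework developed in the thesis), the following Python code flips one random register bit at a random point of a toy simulation and classifies the outcome:

# Minimal sketch of single-bit-flip fault injection into a simulated register
# file, in the spirit of soft-error (SEU) campaigns on virtual platforms.
# Hypothetical illustration only; not the thesis's fault injection framework.
import random

WORD_BITS = 32

class ToyCPU:
    """A trivially small 'simulator': registers plus an instruction counter."""
    def __init__(self, num_regs: int = 16):
        self.regs = [0] * num_regs
        self.executed = 0

    def step(self):
        # Stand-in for one simulated instruction: r0 accumulates a counter.
        self.regs[0] = (self.regs[0] + 1) & (2**WORD_BITS - 1)
        self.executed += 1

def run_with_injection(total_instructions: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    inject_at = rng.randrange(1, total_instructions + 1)  # when the upset happens
    target_reg = rng.randrange(16)                        # which register
    target_bit = rng.randrange(WORD_BITS)                 # which bit flips
    cpu = ToyCPU()
    for _ in range(total_instructions):
        cpu.step()
        if cpu.executed == inject_at:
            cpu.regs[target_reg] ^= 1 << target_bit       # the single event upset
    golden = total_instructions & (2**WORD_BITS - 1)
    # Classify the outcome the way fault campaigns usually do.
    outcome = "masked" if cpu.regs[0] == golden else "silent data corruption"
    return {"instruction": inject_at, "reg": target_reg,
            "bit": target_bit, "outcome": outcome}

if __name__ == "__main__":
    print(run_with_injection(100_000, seed=42))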
3

Integration of virtual platform models into a system-level design framework

Salinas Bomfim, Pablo E. 24 November 2010 (has links)
The fields of System-on-Chip (SoC) and embedded systems design have received a lot of attention in recent years. As part of an effort to increase productivity and reduce the time-to-market of new products, different approaches to electronic system-level design frameworks have been proposed. These methods promise transparent co-design of hardware and software without having to fix the final hardware/software split up front. In our work, we focused on enhancing the component database, modeling, and synthesis capabilities of the System-On-Chip Environment (SCE). We investigated two different virtual platform emulators (QEMU and OVP) for integration into SCE. Based on a comparative analysis, we opted to integrate the Open Virtual Platforms (OVP) models and tested the enhanced SCE simulation, design, and synthesis capabilities with a JPEG encoder application that uses both custom hardware and software as part of the system. Our approach proved not only to provide fast functional verification support for designers (more than 10 times faster than cycle-accurate models), but also to offer a good speed/accuracy trade-off compared with the integration of cycle-accurate or behavioral (host-compiled) models.
4

I WONDER WHEN I WILL BE FREE AGAIN: CHILDREN'S NARRATIVE ABOUT THE PANDEMIC CONTEXT

FRANCISCA VALERIA MARTINS CUNHA 15 June 2021 (has links)
This dissertation seeks to find out what children aged 7 to 10 have to say about the context of the new coronavirus pandemic. COVID-19, which plagued Brazil and the world in 2020, demanded social distancing to contain the virus and forced people to reinvent ways of studying, playing, experiencing the world, and surviving. Given these challenges, what are children's impressions of social distancing? Would it be possible to conduct research with children using only virtual platforms? With a theoretical basis in the fields of virtual ethnography (Junior and Mercado) and childhood studies (Corsaro, Ferreira and Sarmento), a research methodology with children mediated by virtual platforms was built. The study involved eight children living in the city of Rio de Janeiro, who produced data in June and July 2020, a period in which social distancing was mandatory in the city. Seven activities were designed to get closer to the children, resulting in 30 photographs of life before and during the pandemic, 8 drawings, and 23 video recordings of experiences during social distancing. Based on the children's narratives, in dialogue with the theoretical framework, it was found that it is possible to conduct screen-mediated research with children. The points most highlighted by the children were their attempts to understand the virus, their concern about the pandemic context, the games that remained possible as well as the lack of them, and reports of longing for people, places, and freedom.
5

Evaluating Gem5 and QEMU Virtual Platforms for ARM Multicore Architectures

Fuentes Morales, Jose Luis Bismarck January 2016 (has links)
Accurate virtual platforms allow crucial, early, and inexpensive assessments of the viability and hardware constraints of hardware/software applications. The growth of multicore architectures, in both number of cores and industrial relevance, in turn demands faster and more efficient virtual platforms that make the benefits of single-core simulation and emulation available to their multicore successors while keeping accuracy, development cost, time, and efficiency at acceptable levels. The goal of this thesis is to find optimal virtual platforms for hardware design space exploration of multicore architectures running filtering functions, particularly a discrete signal filtering Matlab algorithm used in oil surveying applications running on an ARM Cortex-A53 quad-core CPU. In addition to the filtering algorithm, the PARSEC benchmark suite was used to test the platforms under workloads with diverse characteristics. Upon reviewing multiple virtual platforms, the gem5 simulator and the QEMU emulator were chosen for testing due to their ubiquity, prominence, and flexibility. A Raspberry Pi Model B was used as a reference to measure how closely these tools can model a commonly used embedded platform. The results show that each virtual platform is best suited to different scenarios. The QEMU emulator with KVM support yielded the best performance, albeit requiring access to a host with the same architecture as the target and not guaranteeing timing accuracy. The most accurate setup was the gem5 simulator using a simplified cache system and a detailed out-of-order ARM CPU model.
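As a small, hypothetical sketch of the kind of accuracy comparison described above (not the thesis's evaluation code; all numbers below are made-up placeholders, not results from the thesis), the following Python code computes the mean relative timing error of each platform against measurements from a reference board:

# Sketch of an accuracy comparison: simulated runtimes from each virtual
# platform are compared against measurements on a reference board by
# relative error per benchmark. Placeholder data only.

def relative_error(reference: float, simulated: float) -> float:
    """Relative timing error of a simulated run versus the reference board."""
    return abs(simulated - reference) / reference

def summarize(reference: dict, platforms: dict) -> dict:
    """Mean relative error per platform over the common set of benchmarks."""
    summary = {}
    for name, runs in platforms.items():
        errors = [relative_error(reference[b], runs[b]) for b in reference]
        summary[name] = sum(errors) / len(errors)
    return summary

if __name__ == "__main__":
    # Hypothetical seconds per benchmark, for illustration only.
    board_times = {"blackscholes": 12.0, "ferret": 30.0, "filter": 8.0}
    platform_times = {
        "gem5-O3": {"blackscholes": 12.9, "ferret": 31.5, "filter": 8.6},
        "qemu-kvm": {"blackscholes": 9.1, "ferret": 22.0, "filter": 5.9},
    }
    for platform, err in summarize(board_times, platform_times).items():
        print(f"{platform}: mean relative error {err:.1%}")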
6

Virtual Platforms for Instruction Set Simulation

Ministr, Martin January 2014 (has links)
This master's thesis deals with the creation of code generators for the existing virtual platforms QEMU and OVP. The work includes a study of the techniques that these virtual machines use internally. Its main part is the design of a process that transforms input instruction set descriptions into the code used by these virtual platforms. The result of the work is a set of functional programs that generate code for the two virtual platforms.
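As a rough, hypothetical sketch of the kind of transformation involved (not the generator built in the thesis, and not the real QEMU or OVP model APIs), the following Python code turns a tiny instruction-set description into C-like behavioral functions that a virtual platform could compile in:

# Hypothetical sketch of an instruction-set-to-simulator code generator: a
# tiny ISA description is turned into C-like behavioral functions. It is not
# the generator built in the thesis and does not use real QEMU/OVP APIs.
INSTRUCTIONS = [
    # (mnemonic, semantics as a C expression over rd/rs1/rs2)
    ("add", "regs[rd] = regs[rs1] + regs[rs2];"),
    ("sub", "regs[rd] = regs[rs1] - regs[rs2];"),
    ("and", "regs[rd] = regs[rs1] & regs[rs2];"),
]

def generate_behavior(mnemonic: str, semantics: str) -> str:
    """Emit one behavioral function for a register-register instruction."""
    return (
        f"static void exec_{mnemonic}(uint32_t *regs, int rd, int rs1, int rs2)\n"
        "{\n"
        f"    {semantics}\n"
        "}\n"
    )

def generate_all(instructions) -> str:
    body = "#include <stdint.h>\n\n"
    body += "\n".join(generate_behavior(m, s) for m, s in instructions)
    return body

if __name__ == "__main__":
    print(generate_all(INSTRUCTIONS))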
7

Data Race Detection for Parallel Programs Using a Virtual Platform

Haverås, Daniel January 2018 (has links)
Data races are highly destructive bugs found in concurrent programs. Because of unordered thread interleavings, data races can randomly appear and disappear during the debugging process, which makes them difficult to find and reproduce. A data race exists when multiple threads or processes concurrently access a shared memory address, with at least one of the accesses being a write. Such a scenario can cause data corruption, memory leaks, crashes, or incorrect execution. It is therefore important that data races are absent from production software. This thesis explores dynamic data race detection in programs running on Ericsson's System Virtualization Platform (SVP), a SystemC/TLM-2.0-based virtual platform used for running software on simulated hardware. SVP is a bit-accurate simulator of Ericsson Many-Core Architecture (EMCA) hardware, enabling software and hardware to be developed in parallel and providing unique insight into software execution. This latter property of SVP has been used to implement SVPracer, a proof-of-concept dynamic data race detector. SVPracer is based on a happens-before algorithm similar to Google's ThreadSanitizer v2, but differs significantly in implementation, as it relies entirely on instrumenting binary code at runtime without requiring code modification at build time. A set of test programs exhibiting various data races was written and compiled for EOS, the operating system (OS) running on EMCA Digital Signal Processors (DSPs). Similar programs were created for Linux using POSIX APIs in order to compare SVPracer against ThreadSanitizer v2. Both SVPracer and ThreadSanitizer v2 correctly detect the data races present in the respective test programs. Further work is needed in SVPracer to eliminate some false positives caused by missing support for OS functionality such as semaphores. Still, the present state of SVPracer is sufficient proof that dynamic data race detection is possible using a virtual platform. Future work could involve exploring other data race detection algorithms as well as implementing deadlock/livelock detection in virtual platforms.
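The record names a happens-before algorithm similar to ThreadSanitizer v2 but does not spell it out. As an illustration of the general technique only (a hypothetical sketch, not SVPracer's implementation), the following Python code reports a race when two unordered accesses to the same address include at least one write, using per-thread vector clocks:

# Minimal vector-clock happens-before race check; not SVPracer or
# ThreadSanitizer v2. A race is reported when two accesses to the same
# address are not ordered by happens-before and at least one is a write.
from collections import defaultdict

class Detector:
    def __init__(self, num_threads: int):
        self.n = num_threads
        # clock[t][u] = latest event of thread u known to thread t
        self.clock = [[0] * num_threads for _ in range(num_threads)]
        self.last_access = defaultdict(list)  # addr -> [(tid, snapshot, is_write)]

    def _happens_before(self, snapshot, tid_now) -> bool:
        """Did the recorded access happen-before the current point of tid_now?"""
        return all(snapshot[u] <= self.clock[tid_now][u] for u in range(self.n))

    def access(self, tid: int, addr: int, is_write: bool):
        self.clock[tid][tid] += 1
        for prev_tid, snapshot, prev_write in self.last_access[addr]:
            if prev_tid != tid and (is_write or prev_write):
                if not self._happens_before(snapshot, tid):
                    print(f"race on {addr:#x}: thread {prev_tid} vs thread {tid}")
        self.last_access[addr].append((tid, list(self.clock[tid]), is_write))

    def sync(self, from_tid: int, to_tid: int):
        """Model a synchronization edge (e.g. unlock->lock, signal->wait)."""
        self.clock[to_tid] = [max(a, b) for a, b in
                              zip(self.clock[to_tid], self.clock[from_tid])]

if __name__ == "__main__":
    d = Detector(num_threads=2)
    d.access(0, 0x1000, is_write=True)   # thread 0 writes
    d.access(1, 0x1000, is_write=False)  # unsynchronized read -> race reported
    d.sync(0, 1)                         # add a happens-before edge
    d.access(1, 0x2000, is_write=True)   # different address, no conflict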
