11 |
Controlling the Bootstrap Process : Firmware Alternatives for an x86 Embedded Platform. Ekholm Lindahl, Svante. January 2011 (has links)
The viability of firmware engineering at the level of a lower-tier computer manufacturer (OEM), where the OEM receives processor and chipset components second-hand, was investigated. It was believed that safer and more reliable operation of an embedded system would be achieved if system startup times were minimised. Theoretical knowledge of firmware engineering, methods and standards for the x86 platform was compiled and evaluated. The practical aspects of firmware engineering were investigated through the construction of an open source boot loader for a rugged, closed-box embedded x86 Intel system using coreboot and SeaBIOS. The boot loader was compared with the original firmware, and startup times were found to be reduced ninefold from entry vector to operating system handover. Firmware engineering was found to be a complex field stretching from computer science to electrical engineering. Firmware development at a lower-tier OEM level was found to be possible, provided that the proper documentation could be obtained; the boot loader prototype served as proof of concept. This allowed an alternative, open-source oriented model for firmware development to be proposed. Ultimately, each product use case needed to be individually evaluated in terms of requirements, cost and ideology.
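The ninefold improvement is measured from the firmware entry vector to operating-system handover. A common way to obtain such figures on x86 firmware (coreboot, for example, maintains a timestamp table) is to record the CPU time-stamp counter at each boot stage. The following sketch illustrates the idea as a stand-alone C program; the stage names and the assumed 2.0 GHz TSC frequency are illustrative, not values taken from the thesis.

/* Minimal sketch of TSC-based boot-stage timing on x86, in the spirit of
 * coreboot's timestamp table. Stage names and the 2.0 GHz TSC rate are
 * illustrative assumptions, not measurements from the thesis. */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

struct stamp { const char *stage; uint64_t tsc; };

int main(void)
{
    const double tsc_hz = 2.0e9;               /* assumed invariant TSC rate */
    struct stamp marks[3];

    marks[0] = (struct stamp){"entry vector", rdtsc()};
    /* ... romstage / ramstage work would happen here ... */
    marks[1] = (struct stamp){"payload (SeaBIOS)", rdtsc()};
    /* ... payload loads and starts the operating system ... */
    marks[2] = (struct stamp){"OS handover", rdtsc()};

    for (int i = 1; i < 3; i++)
        printf("%-20s +%.3f ms\n", marks[i].stage,
               (marks[i].tsc - marks[0].tsc) / tsc_hz * 1e3);
    return 0;
}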
|
12 |
Patterns in Dynamic Slices to Assist in Automated Debugging. Burbrink, Joshua W. 10 October 2014 (has links)
No description available.
|
13 |
Analysis of the execution paths of programs to perform automatic parallelization of binary codes on the Intel x86 platform. Eberle, André Mantini. 06 October 2015 (has links)
Traditionally, computer programs have been developed using the sequential programming paradigm. With the advent of parallel computing systems, such as multi-core processors and distributed environments, the sequential paradigm became a barrier to the utilization of the available resources, since the program is restricted to a single processing unit. To address this issue, this master's work introduces a methodology for parallelizing sequential programs automatically and transparently, directly on the binary code, using a binary rewriter. The steps involved are: disassembly of an Intel x86 application and its translation into an intermediate language; analysis of this intermediate code to obtain flow and dependency graphs; partitioning of the application into parallel units using the obtained graphs; and reassembly of the application, writing it back to the original Intel x86 architecture. By transforming the compiled application, the approach obtains a program that can exploit the parallel resources of multi-core computers with no extra effort required from developers or users.
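The partitioning step depends on knowing which instructions of the intermediate code depend on which. A minimal sketch of that analysis is shown below: each instruction carries read and write register sets, dependences (read-after-write, write-after-read, write-after-write) are derived from them, and independent instructions end up at the same level, where they could be dispatched to separate parallel units. The instruction list and register encoding are made-up examples, not output of the tool described here.

/* Illustrative sketch of the dependency analysis behind partitioning:
 * instructions carry read/write register sets (bitmasks); two instructions
 * depend on each other if one writes what the other reads or writes.
 * Independent instructions at the same level could go to separate
 * parallel units. The instruction list is a made-up example. */
#include <stdio.h>
#include <stdint.h>

struct insn {
    const char *text;
    uint32_t reads;   /* bitmask of registers read    */
    uint32_t writes;  /* bitmask of registers written */
};

static int depends(const struct insn *a, const struct insn *b)
{
    /* true (RAW), anti (WAR) and output (WAW) dependences from a to b */
    return (a->writes & b->reads) || (a->reads & b->writes) ||
           (a->writes & b->writes);
}

int main(void)
{
    enum { EAX = 1, EBX = 2, ECX = 4, EDX = 8 };
    struct insn code[] = {
        {"mov eax, [x]", 0,         EAX},
        {"add eax, 1",   EAX,       EAX},
        {"mov ebx, [y]", 0,         EBX},
        {"imul ebx, 3",  EBX,       EBX},
        {"add eax, ebx", EAX | EBX, EAX},
    };
    int n = sizeof code / sizeof code[0], level[8] = {0};

    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++)
            if (depends(&code[j], &code[i]) && level[j] + 1 > level[i])
                level[i] = level[j] + 1;   /* longest dependence chain */

    for (int i = 0; i < n; i++)
        printf("unit-level %d: %s\n", level[i], code[i].text);
    return 0;
}

In this toy example the two loads end up at level 0 and are independent of each other, so they are candidates for different parallel units, while the final add must wait for both chains.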
|
15 |
ISAMAP: instruction mapping driven dynamic binary translation. Souza, Maxwell Monteiro Andrade de. 03 October 2008 (links)
Advisor: Guido Costa Souza de Araujo / Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Dynamic binary translation allows programs originally compiled for one architecture to run on a different architecture without recompilation from source. The technique can be used to migrate applications between architectures or to let an application run transparently on several architectures, and it also enables optimizations at run time that are not possible at compile time, once information about the application's run-time behaviour is available. ISAMAP is a binary translation system driven by instruction-mapping specifications between a source Instruction Set Architecture (ISA) and a target ISA: instruction sequences of the source ISA are mapped to sequences of target-ISA instructions, allowing a fast, optimized mapping. ISAMAP currently translates PowerPC 32 binary code to x86 code. / Master's degree in Computer Science; area: Dynamic Code Generation
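The core of a mapping-driven translator is a table that associates each source instruction with a template of target instructions into which the operands are substituted. The sketch below illustrates that idea in a deliberately simplified form; the table entries, the %0/%1/%2 placeholder syntax and the register assignment are inventions for illustration and do not reproduce ISAMAP's actual specification format.

/* Illustrative sketch of a mapping-driven translator: each source (PowerPC)
 * mnemonic is associated with a template of target (x86) instructions, and
 * operands are substituted into the template. Table and register mapping
 * are made up for illustration only. */
#include <stdio.h>
#include <string.h>

struct mapping {
    const char *ppc_mnemonic;   /* source ISA instruction                    */
    const char *x86_template;   /* target ISA sequence, %0..%2 are operands  */
};

static const struct mapping table[] = {
    {"add",  "mov %0, %1\n  add %0, %2"},   /* add  rD,rA,rB : rD = rA + rB  */
    {"addi", "mov %0, %1\n  add %0, %2"},   /* addi rD,rA,imm                */
    {"mr",   "mov %0, %1"},                 /* mr   rD,rS                    */
};

static void translate(const char *mnemonic, const char *ops[3])
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].ppc_mnemonic, mnemonic) != 0)
            continue;
        printf("; %s %s,%s,%s\n  ", mnemonic, ops[0], ops[1],
               ops[2] ? ops[2] : "");
        for (const char *p = table[i].x86_template; *p; p++) {
            if (*p == '%')
                fputs(ops[*(++p) - '0'], stdout);   /* substitute operand */
            else
                putchar(*p);
        }
        putchar('\n');
        return;
    }
    printf("; unmapped instruction: %s\n", mnemonic);
}

int main(void)
{
    /* PowerPC "add r3, r4, r5" with r3->eax, r4->ebx, r5->ecx (assumed map) */
    const char *ops[3] = {"eax", "ebx", "ecx"};
    translate("add", ops);
    return 0;
}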
|
16 |
The Performance of Post-Quantum Key Encapsulation Mechanisms : A Study on Consumer, Cloud and Mainframe Hardware. Gustafsson, Alex; Stensson, Carl. January 2021 (has links)
Background. People use the Internet for communication, work, online banking and more. Public-key cryptography enables this use to be secure by providing confidentiality and trust online. Though these algorithms may be secure from attacks by classical computers, future quantum computers may break them using Shor's algorithm. Post-quantum algorithms are therefore being developed to mitigate this issue, and the National Institute of Standards and Technology (NIST) has started a standardization process for them. Objectives. In this work, we analyze which specialized features applicable to post-quantum algorithms are available in the IBM Z mainframe architecture. Furthermore, we study the performance of these algorithms on various hardware in order to understand which techniques may increase their performance. Methods. We apply a literature study to identify the performance characteristics of post-quantum algorithms as well as the features of IBM Z that may accommodate and accelerate them. We further apply an experimental study to analyze the practical performance of the two prominent finalists NTRU and Classic McEliece on consumer, cloud and mainframe hardware. Results. IBM Z was found to be able to accelerate several key symmetric primitives such as SHA-3 and AES via the Central Processor Assist for Cryptographic Functions (CPACF). Though the available Hardware Security Modules (HSMs) did not support any of the studied algorithms, they were found to be able to accelerate them via a Field-Programmable Gate Array (FPGA). Based on our experimental study, we found that computers with support for the Advanced Vector Extensions (AVX) were able to significantly accelerate the execution of post-quantum algorithms. Lastly, we identified that vector extensions, Application-Specific Integrated Circuits (ASICs) and FPGAs are key techniques for accelerating these algorithms. Conclusions. When considering the readiness of hardware for the transition to post-quantum algorithms, we find that the proposed algorithms do not perform nearly as well as classical algorithms. Though the algorithms are likely to improve before the post-quantum transition occurs, improved hardware support via faster vector instructions, increased cache sizes and the addition of polynomial instructions may significantly help reduce the impact of the transition.
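The experimental comparison comes down to timing key generation, encapsulation and decapsulation repeatedly on each machine. The sketch below shows such a harness under the assumption of a generic keypair/encaps/decaps interface; the kem_* functions and buffer sizes are placeholders rather than the NTRU or Classic McEliece implementations measured in the study.

/* Sketch of a timing harness of the kind used to compare key encapsulation
 * mechanisms across machines. The kem_* functions are empty placeholders
 * standing in for a real post-quantum implementation; buffer sizes are
 * arbitrary. Timing uses clock_gettime(CLOCK_MONOTONIC). */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define PK_LEN 1024   /* placeholder sizes, not those of any real scheme */
#define SK_LEN 2048
#define CT_LEN 1024
#define SS_LEN 32

static void kem_keypair(uint8_t *pk, uint8_t *sk) { memset(pk, 1, PK_LEN); memset(sk, 2, SK_LEN); }
static void kem_encaps(uint8_t *ct, uint8_t *ss, const uint8_t *pk) { memset(ct, pk[0], CT_LEN); memset(ss, 3, SS_LEN); }
static void kem_decaps(uint8_t *ss, const uint8_t *ct, const uint8_t *sk) { memset(ss, ct[0] ^ sk[0], SS_LEN); }

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
    uint8_t pk[PK_LEN], sk[SK_LEN], ct[CT_LEN], ss[SS_LEN];
    const int iters = 10000;

    double t0 = now_ms();
    for (int i = 0; i < iters; i++) kem_keypair(pk, sk);
    double t1 = now_ms();
    for (int i = 0; i < iters; i++) kem_encaps(ct, ss, pk);
    double t2 = now_ms();
    for (int i = 0; i < iters; i++) kem_decaps(ss, ct, sk);
    double t3 = now_ms();

    printf("keypair: %.4f ms/op\n", (t1 - t0) / iters);
    printf("encaps : %.4f ms/op\n", (t2 - t1) / iters);
    printf("decaps : %.4f ms/op\n", (t3 - t2) / iters);
    return 0;
}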
|
17 |
Software lock elision for x86 machine code. Roy, Amitabha. January 2011 (has links)
More than a decade after becoming a topic of intense research, there is no transactional memory hardware nor any examples of software transactional memory use outside the research community. Using software transactional memory in large pieces of software requires copious source code annotations and often means that standard compilers and debuggers can no longer be used. At the same time, overheads associated with software transactional memory fail to motivate programmers to expend the effort needed to use it. The only way around the overheads in the case of general unmanaged code is the anticipated availability of hardware support. On the other hand, architects are unwilling to devote power and area budgets in mainstream microprocessors to hardware transactional memory, pointing to transactional memory being a 'niche' programming construct. A deadlock has thus ensued that is blocking transactional memory use and experimentation in the mainstream. This dissertation covers the design and construction of a software transactional memory runtime system called SLE_x86 that can potentially break this deadlock by decoupling transactional memory from the programs that use it. Unlike most other STM designs, the core design principle is transparency rather than performance. SLE_x86 operates at the level of x86 machine code, thereby becoming immediately applicable to binaries for the popular x86 architecture. The only requirement is that the binary synchronise using known locking constructs or calls such as those in the Pthreads or OpenMP libraries. SLE_x86 provides speculative lock elision (SLE) entirely in software, executing critical sections in the binary using transactional memory. Optionally, the critical sections can also be executed without using transactions by acquiring the protecting lock. The dissertation makes a careful analysis of the impact on performance due to the demands of the x86 memory consistency model and the need to transparently instrument x86 machine code. It shows that both of these problems can be overcome to reach a reasonable level of performance, where transparent software transactional memory can perform better than a lock. SLE_x86 can ensure that programs are ready for transactional memory in any form, without being explicitly written for it.
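The core control flow of lock elision is: try to run the critical section as a transaction, and only acquire the real lock if speculation is unavailable or fails to commit. SLE_x86 does this transparently at the machine-code level; the sketch below only illustrates that control flow at the source level, and the stm_begin/stm_commit functions are hypothetical placeholders (a real STM instruments the reads and writes and rolls back aborted attempts), not the SLE_x86 interface.

/* Minimal sketch of the control flow behind lock elision: attempt the
 * critical section as a transaction, fall back to the real lock otherwise.
 * stm_begin/stm_commit are hypothetical placeholders; here speculation is
 * simply reported as unavailable so the fallback path runs. In a real STM,
 * an aborted attempt leaves no trace because its writes are rolled back. */
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

static bool stm_begin(void)  { return false; }  /* placeholder: speculation unavailable */
static bool stm_commit(void) { return true;  }  /* placeholder */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter;

static void critical_section(void) { shared_counter++; }

static void elided_lock_region(void)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        if (stm_begin()) {
            critical_section();        /* reads/writes would be instrumented */
            if (stm_commit())
                return;                /* speculative execution succeeded */
        }
        /* conflict, capacity problem or no STM: retry a few times */
    }
    pthread_mutex_lock(&lock);         /* fallback: take the real lock */
    critical_section();
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    elided_lock_region();
    printf("counter = %d\n", shared_counter);
    return 0;
}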
|
18 |
Analysis of progressive hardware for real-time media processing. Režný, Jan. January 2015 (links)
This diploma thesis focuses on the selection of a suitable hardware solution for parallel processing of multiple audio sources. It compares several platforms based on the ARM, x86 and Epiphany architectures, evaluating their performance in serial and parallel data processing, their energy consumption and their price.
|
19 |
Explicit-State Model Checking of Concurrent x86-64 Assembly. Bharadwaj, Abhijith Ananth. 10 July 2020 (links)
The thesis presents xavier, a novel tool-set for model checking concurrent x86-64 assembly programs via Partial Order Reduction (POR). xavier provides a realistic platform for systematically exploring and analyzing the state-space of concurrent x86 assembly programs, with the aim of detecting bugs via assertion failures in mainstream programs. Recently, a number of state-of-the-art model checking solutions have been introduced that efficiently explore the state-space of concurrent programs using POR algorithms. However, such solutions are inefficient when analyzing stateful, low-level languages such as x86 assembly. To this end, xavier makes two contributions: i) a novel order-sensitivity based POR algorithm that is applicable to concurrent x86 assembly, and ii) an x86 machine model that can accurately perform relaxed-consistency emulation of concurrent x86 assembly without the need for any translations. We demonstrate the applicability of xavier through an evaluation on several classical mutual-exclusion benchmarks and on mainstream benchmarks from the Userspace Read-Copy-Update (URCU) concurrency library, ranging from 250 to 3700 lines of x86 assembly. The framework is the first to support systematic model checking of concurrent x86 assembly programs, and the effectiveness of xavier is demonstrated by reproducing a concurrency issue in the URCU library, in which threads access intermediate states as a result of an assumption violation. / Master of Science / Sound verification of multi-threaded programs necessitates a systematic analysis of the program state-spaces that result from thread interactions. Consequently, model checking [Godefroid 1997; Clarke 2018] has been one of the prominent methods used to tackle the verification of multi-threaded programs. However, existing model-checking solutions are inefficient when analyzing stateful languages such as x86 assembly, because the solutions operate at a higher level of abstraction. Therefore, the thesis presents xavier, a novel tool-set and a realistic platform for systematically exploring and analyzing the state-space of mainstream concurrent x86 assembly programs, with the aim of detecting bugs via assertion failures. To this end, xavier makes two contributions: i) a novel order-sensitivity based Partial Order Reduction algorithm, which efficiently explores the state space of concurrent x86 assembly, and ii) an x86 machine model that can accurately emulate the execution of concurrent x86 assembly without the need for any translations. We demonstrate the applicability of xavier through an evaluation on several classical mutual-exclusion benchmarks and on mainstream benchmarks from the Userspace Read-Copy-Update (URCU) concurrency library, ranging from 250 to 3700 lines of x86 assembly. Moreover, we demonstrate the effectiveness of xavier by reproducing a concurrency issue in the URCU library that manifests as a result of an assumption violation.
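The state-space exploration underlying such a checker can be illustrated with a toy example: two threads each perform a non-atomic increment (load, then store), every interleaving of their steps is enumerated by depth-first search over copies of the shared state, and an assertion is checked in each terminal state. The sketch below is only that toy illustration; it has none of xavier's x86 semantics, relaxed-consistency modelling or partial order reduction, which would prune interleavings that merely reorder independent steps.

/* Tiny illustration of explicit-state exploration: enumerate every
 * interleaving of two threads' steps over a copy of the shared state and
 * check a "no lost update" assertion in each final state. A real checker
 * works on x86 instructions, a relaxed memory model and POR; none of that
 * is shown here. */
#include <stdio.h>

struct state { int x; int r0; int r1; };

typedef void (*step_fn)(struct state *);

/* thread 0: r0 = x; x = r0 + 1;   thread 1: r1 = x; x = r1 + 1; */
static void t0_load (struct state *s) { s->r0 = s->x; }
static void t0_store(struct state *s) { s->x = s->r0 + 1; }
static void t1_load (struct state *s) { s->r1 = s->x; }
static void t1_store(struct state *s) { s->x = s->r1 + 1; }

static step_fn prog[2][2] = {{t0_load, t0_store}, {t1_load, t1_store}};
static int violations, explored;

static void explore(struct state s, int pc0, int pc1)
{
    if (pc0 == 2 && pc1 == 2) {                /* both threads finished */
        explored++;
        if (s.x != 2) {                        /* lost-update assertion */
            violations++;
            printf("violation: x = %d\n", s.x);
        }
        return;
    }
    if (pc0 < 2) { struct state n = s; prog[0][pc0](&n); explore(n, pc0 + 1, pc1); }
    if (pc1 < 2) { struct state n = s; prog[1][pc1](&n); explore(n, pc0, pc1 + 1); }
}

int main(void)
{
    struct state init = {0, 0, 0};
    explore(init, 0, 0);
    printf("%d interleavings explored, %d assertion failures\n",
           explored, violations);
    return 0;
}

Of the six interleavings, the four in which the loads of both threads precede either store lose an update, which is exactly the kind of bug an assertion-based checker reports.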
|
20 |
Performance evaluation of network virtualization platforms. Leopoldo Alexandre Freitas Mauricio. 27 August 2013 (links)
The aim of this work is to evaluate the performance of virtual routing environments built on x86 machines and on the network devices that exist on the Internet today. Among the most widely used virtualization platforms, we want to identify which one best meets the requirements of a virtual routing environment, so as to allow programming of the core of production networks. The virtualization platforms Xen and KVM were installed on modern, large-capacity x86 servers and compared for efficiency, flexibility and isolation between networks, which are the requirements for good performance of a virtual network. The test results show that, despite being a full virtualization platform, KVM performs better than Xen in packet forwarding and routing when VIRTIO is used. Furthermore, only Xen showed isolation problems between virtual networks. We also evaluate the effect of the NUMA architecture, very common in modern x86 servers, on the performance of VMs when large amounts of memory and many processor cores are allocated to them. The results show that network Input/Output (I/O) performance can be compromised if the amount of virtual memory and the number of virtual CPUs allocated to a VM do not respect the size of the NUMA nodes present in the hardware. Finally, we study OpenFlow. It allows networks to be sliced across routers, switches and x86 machines, so that virtual routing environments with different forwarding logic can be created. We found that, when installed with Xen and KVM, OpenFlow enables the migration of virtual networks between different physical nodes without interrupting the data flows, and allows the packet-forwarding performance of the virtual networks to be increased. Thus, it was possible to program the core of the network to implement alternatives to the IP protocol.
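One practical consequence of the NUMA results is that a VM should be sized to fit inside a single node. The sketch below shows how the host topology can be queried with libnuma before deciding how much memory and how many vCPUs to give a guest; it assumes libnuma is installed (compile with -lnuma) and only reports the topology, since the actual pinning is done through the hypervisor (for example, a libvirt <numatune> element), which is not shown.

/* Sketch: query the host's NUMA topology before sizing a VM, so that its
 * vCPUs and memory can fit inside one node. Assumes libnuma is available;
 * compile with -lnuma. Reporting only; pinning is done by the hypervisor. */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int max_node = numa_max_node();
    printf("NUMA nodes: %d\n", max_node + 1);

    for (int node = 0; node <= max_node; node++) {
        long long free_bytes = 0;
        long long size = numa_node_size64(node, &free_bytes);
        printf("node %d: %lld MiB total, %lld MiB free\n",
               node, size >> 20, free_bytes >> 20);
    }
    /* Rule of thumb from the evaluation: keep a VM's memory below the size
     * of a single node and its vCPUs on that node's cores. */
    return 0;
}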
|