31

TLS Library for Isolated Enclaves : Optimizing the performance of TLS libraries for SGX

Li, Jiatong January 2019 (has links)
Nowadays cloud computing systems handle large amounts of data and process this data across different systems. It is essential to consider data security vulnerabilities and data protection. One means of decreasing security vulnerabilities is to partition the code into distinct modules and then isolate the execution of the code together with its data. Intel's Software Guard Extensions (SGX) provides security-critical code isolation in an enclave. By isolating the code's execution from an untrusted zone (an unprotected user platform), code integrity and confidentiality are ensured. Transport Layer Security (TLS) is responsible for providing integrity and confidentiality for communication between two entities. Several TLS libraries support cryptographic functions both in an untrusted zone and in an enclave. Different TLS libraries perform differently when used with Intel's SGX, so it is desirable to use the best-performing library for each specific cryptographic function. This thesis describes a performance evaluation of several popular TLS libraries on Intel SGX. Using the evaluation results and combining several TLS libraries, the thesis proposes a new solution to improve the performance of TLS libraries on Intel SGX. Performance is best when the call is routed to the best-performing TLS library for the given data size, as there is a crossover in performance between the two best libraries. The solution also maintains the versatility of the existing cryptographic functions.
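The core idea of this abstract, invoking whichever TLS library performs best for a given payload size with a crossover point between the two best libraries, can be sketched as a simple size-based dispatcher. The 8 KiB threshold and the two backend functions below are illustrative placeholders, not values or libraries named by the thesis:

```python
# Sketch of size-based dispatch between two crypto backends.
# The crossover point and both backends are hypothetical stand-ins,
# not measurements from the thesis.

CROSSOVER_BYTES = 8 * 1024  # assumed crossover; a real value comes from benchmarks

def encrypt_small(data: bytes) -> bytes:
    # stand-in for the library that wins on small payloads
    return bytes(b ^ 0x5A for b in data)

def encrypt_large(data: bytes) -> bytes:
    # stand-in for the library that wins on large payloads
    return bytes(b ^ 0x5A for b in data)

def encrypt(data: bytes) -> bytes:
    """Route each call to the backend that is fastest for this size."""
    if len(data) < CROSSOVER_BYTES:
        return encrypt_small(data)
    return encrypt_large(data)
```

Because both backends expose the same signature, callers keep the versatility of a single cryptographic API while the dispatcher picks the faster implementation per call.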
32

Selective Core Boosting: The Return of the Turbo Button

Wamhoff, Jons-Tobias, Diestelhorst, Stephan, Fetzer, Christof, Marlier, Patrick, Felber, Pascal, Dice, Dave 26 November 2013 (has links) (PDF)
Several modern multi-core architectures support dynamic control of the CPU's clock rate, allowing processor cores to temporarily operate at speeds exceeding the operational base frequency. Conversely, cores can operate at a lower speed or be disabled altogether to save power. Such facilities are notably provided by Intel's Turbo Boost and AMD's Turbo CORE technologies. Frequency control is typically driven by the operating system, which requests changes to the performance state of the processor based on the current load of the system. In this paper, we investigate the use of dynamic frequency scaling from user space to speed up multi-threaded applications that must occasionally execute time-critical tasks, or to solve problems that have heterogeneous computing requirements. We propose a general-purpose library that allows selective control of the frequency of the cores, subject to the limitations of the target architecture. We analyze the performance trade-offs and illustrate its benefits using several benchmarks and real-world workloads, temporarily boosting selected cores that execute time-critical operations. While our study primarily focuses on AMD's architecture, we also provide a comparative evaluation of the features, limitations, and runtime overheads of both Turbo Boost and Turbo CORE. Our results show that we can successfully exploit these new hardware facilities to accelerate the execution of key sections of code (critical paths), improving the overall performance of some multi-threaded applications. Unlike prior research, we focus on performance instead of power conservation. Our results can further provide guidelines for the design of hardware power-management facilities and the operating-system interfaces to those facilities.
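On Linux, the user-space frequency control this paper describes is commonly exercised through the cpufreq sysfs interface. A minimal sketch of boosting one core around a time-critical section might look like the following; the paths follow the standard cpufreq sysfs layout, but whether a written value actually boosts the core depends on the governor, privileges, and platform, and the paper's own library is more sophisticated than this:

```python
class CoreBoost:
    """Context manager that raises one core's maximum frequency for the
    duration of a critical section, then restores the previous value.
    Uses the standard Linux cpufreq sysfs layout; requires privileges
    to write these files on a real system."""

    def __init__(self, cpu: int, boost_khz: int,
                 sysfs: str = "/sys/devices/system/cpu"):
        self.path = f"{sysfs}/cpu{cpu}/cpufreq/scaling_max_freq"
        self.boost_khz = boost_khz

    def __enter__(self):
        # Save the current cap, then request the boosted frequency.
        with open(self.path) as f:
            self.saved = f.read().strip()
        with open(self.path, "w") as f:
            f.write(str(self.boost_khz))
        return self

    def __exit__(self, *exc):
        # Restore the original cap even if the critical section raised.
        with open(self.path, "w") as f:
            f.write(self.saved)
```

A caller would wrap only the time-critical operation, e.g. `with CoreBoost(cpu=2, boost_khz=3_600_000): run_critical_path()`, mirroring the paper's pattern of boosting selected cores for short sections rather than raising frequency globally.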
33

Sistema para aquisição de sinais de tensão e corrente utilizando a plataforma BEAGLEBONE BLACK / System for the acquisition of voltage and current signals using the BeagleBone Black platform

Padilha, Celso Machado Maia 27 November 2015 (has links)
In the current global scenario, the crisis in the power sector is evident, and to maintain economic development compatible with this growth, many countries are investing in energy production from many different sources, renewable and clean or not, to supply the growing demand. The integration of these new energy sources demands real-time coordination, so monitoring units must be reliable and offer good processing and data transmission. This work proposes a data acquisition system for current and voltage signals built on a low-cost embedded platform, focused on hardware programming and coupled with an acquisition and conditioning module, in order to apply the Smart Grid concept. The system is responsible for reading the voltage and current supplied by the electrical grid and converting them into information readable by the embedded platform, making it possible for supervisory systems to manipulate this information. With these data, supervisory systems can analyze and make decisions based on the information provided by the platform and apply it to different concepts present in Smart Grids, such as energy flow control, minimizing the effects of power outages, designing distributed generation from renewable sources, identifying energy theft, reducing technical losses, and power-quality monitoring. The system was developed on a BeagleBone Black development platform, associated with a voltage- and current-signal conditioning module developed by the Energy Systems Optimization Laboratory (LOSE) of the Electrical Engineering Department (DEE) at the Federal University of Paraíba (UFPB). This module conditions the voltage and current signals supplied by the power grid, adapting them to the specifications required for reading and conversion on the BeagleBone Black platform.
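Once the conditioned signals reach the platform's ADC, a supervisory system typically derives quantities such as RMS voltage from blocks of samples. A minimal, board-agnostic sketch of that step follows; the 60 Hz frequency, 311 V peak, and 64 samples per cycle are illustrative values, not parameters from the thesis:

```python
import math

def rms(samples):
    """Root-mean-square value of a block of conditioned ADC samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Example: one full cycle of a 60 Hz sine with 311 V peak (~220 V RMS),
# sampled at 64 points per cycle. Sampling a whole number of cycles
# makes the RMS estimate exact for a pure sinusoid.
cycle = [311.0 * math.sin(2 * math.pi * n / 64) for n in range(64)]
```

For a pure sinusoid sampled over an integer number of cycles, `rms(cycle)` equals peak divided by the square root of two, which is the sanity check a supervisory system can apply before trusting a measurement chain.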
34

Confidential Computing in Public Clouds : Confidential Data Translations in hardware-based TEEs: Intel SGX with Occlum support

Yulianti, Sri January 2021 (has links)
As enterprises migrate their data to cloud infrastructure, they increasingly need a flexible, scalable, and secure marketplace for collaborative data creation, analysis, and exchange among enterprises. Security is a prominent research challenge in this context, with a specific question of how two mutually distrusting data owners can share their data. Confidential Computing helps address this question by allowing data computation to be performed inside hardware-based Trusted Execution Environments (TEEs), which we refer to as enclaves: secured memory regions allocated by the CPU. Examples of hardware-based TEEs are Advanced Micro Devices (AMD) Secure Encrypted Virtualization (SEV), Intel Software Guard Extensions (SGX), and Intel Trust Domain Extensions (TDX). Intel SGX is considered the most popular hardware-based TEE since it is widely available in processors targeting desktop and server platforms. Intel SGX can be programmed using a Software Development Kit (SDK) as the development framework and Library Operating Systems (Library OSes) as runtimes. However, communication with software in the enclave, such as the Library OS, through system calls may result in performance overhead. In this project, we design confidential data transactions among multiple users, using Intel SGX as the TEE hardware and Occlum as the Library OS. We implement the design by allowing two clients, as data owners, to share their data with a server that owns an Intel SGX-capable platform. On the server side, we run machine learning model inference with inputs from both clients inside an enclave. We aim to evaluate Occlum as a memory-safe Library Operating System (OS) that enables secure and efficient multitasking on Intel SGX, measuring two aspects: performance overhead and security benefits. To evaluate the measurement results, we compare Occlum with other runtimes: baseline Linux and Graphene-SGX. The evaluation results show that our design with Occlum outperforms Graphene-SGX by 4x in terms of performance. To evaluate the security aspects, we propose 11 threat scenarios potentially launched by both internal and external attackers against the design on an SGX platform. The results show that Occlum's security features succeed in mitigating 10 of the 11 threat scenarios.
35

Systems Support for Trusted Execution Environments

Trach, Bohdan 09 February 2022 (has links)
Cloud computing has become a default choice for data processing by both large corporations and individuals due to its economy of scale and ease of system management. However, the question of trust and trustworthy computing inside cloud environments has long been neglected in practice, a problem further exacerbated by the proliferation of AI and its use for processing sensitive user data. Attempts to implement mechanisms for trustworthy computing in the cloud had previously remained theoretical due to the lack of hardware primitives in commodity CPUs, while the combination of Secure Boot, TPMs, and virtualization has seen only limited adoption. The situation changed in 2016, when Intel introduced the Software Guard Extensions (SGX) and its enclaves to x86 CPUs: for the first time, it became possible to build trustworthy applications relying on a commonly available technology. However, Intel SGX posed challenges to practitioners, who discovered the limitations of this technology, from the limited support for legacy applications and the integration of SGX enclaves into existing systems, to performance bottlenecks in communication, startup, and memory utilization. In this thesis, our goal is to enable trustworthy computing in the cloud by relying on these imperfect SGX primitives. To this end, we develop and evaluate solutions to issues stemming from the limited systems support of Intel SGX: we investigate mechanisms for runtime support of POSIX applications with SCONE, an efficient SGX runtime library developed with the performance limitations of SGX in mind. We further develop this topic with FFQ, a concurrent queue for SCONE's asynchronous system call interface. ShieldBox is our study of the interplay of kernel-bypass and trusted execution technologies for NFV, which also tackles the problem of low-latency clocks inside the enclave. The last two systems, Clemmys and T-Lease, are built on the more recent SGXv2 ISA extension. In Clemmys, SGXv2 allows us to significantly reduce the startup time of SGX-enabled functions inside a Function-as-a-Service platform. Finally, in T-Lease we solve the problem of trusted time by introducing a trusted lease primitive for distributed systems. We evaluate all of these systems and show that they can be practically utilized in existing systems with minimal overhead, and that they can be combined with both legacy systems and other SGX-based solutions. In the course of the thesis, we enable trusted computing for individual applications, high-performance network functions, and a distributed computing framework, making the vision of trusted cloud computing a reality.
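The asynchronous system call interface that SCONE and FFQ implement follows a general pattern: enclave threads enqueue syscall requests into a shared queue instead of paying for an enclave exit per call, while untrusted worker threads outside the enclave drain the queue and post results back. The sketch below illustrates only that pattern; plain Python threading and queues stand in for the real lock-free FFQ and the SGX boundary, and the "syscall table" is a made-up placeholder:

```python
import threading, queue

# Requests cross the (simulated) enclave boundary through a queue
# instead of a synchronous exit per system call.
requests = queue.Queue()

def untrusted_worker():
    """Outside the enclave: drain requests, execute them, post results."""
    while True:
        call, args, reply = requests.get()
        if call == "shutdown":
            break
        reply.put(call_table[call](*args))  # run the actual host-side call

# Hypothetical stand-in for the host's syscall dispatch table.
call_table = {"getpid_like": lambda: 4242}

def enclave_syscall(call, *args):
    """Inside the enclave: enqueue the request and wait for the reply,
    never performing an enclave exit itself."""
    reply = queue.Queue(maxsize=1)
    requests.put((call, args, reply))
    return reply.get()
```

The benefit in the real systems comes from the enclave thread staying inside the enclave (or batching exits) while workers amortize the expensive transitions, which this toy version can only gesture at.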
36

Evaluating hardware isolation for secure software development in Highly Regulated Environments / Utvärdering av hårdvaruisolering för säker programvaruutveckling i mycket reglerade miljöer

Brogärd, Andre January 2023 (has links)
Organizations in highly regulated industries have an increasing need to protect their intellectual assets, because Advanced Persistent Threat (APT) actors are capable of using supply chain attacks to bypass traditional defenses. This work investigates the feasibility of preventing supply chain attacks by isolating the software's build environment using hardware isolation. Specifically, it analyzes the extent to which Intel SGX can guarantee the integrity and authenticity of software produced in Highly Regulated Environments. A theoretical evaluation using assurance cases shows that a hardware isolation approach has the potential to guarantee the integrity and authenticity of the produced software, but security weaknesses in Intel SGX significantly limit confidence in its ability to secure the build environment. Directions for future work on securing a build environment with hardware isolation are suggested. Most importantly, the guarantees from hardware isolation should be strengthened, for instance by choosing a more secure hardware isolation solution, and a proof-of-concept of the approach should be implemented.
37

Reorganization on employee satisfaction: The gray area of corporations : A case study on Intel Corporation’s employees

Karayianni, Fotini January 2019 (has links)
The present thesis explores a concept that lies at the core of human capital, employee satisfaction, in the context of a proactive organizational change. Prior literature depicts organizational change as a strategy applied to increase the efficiency of the company and its relevance to the market involved. What makes proactive reorganizations distinctive is that they are the product of a structured practice initiated by an entity's human resources department. The department operates under a standardized model of change, which focuses on addressing the technical discrepancies that may occur in the human capital. Mainly analyzed from the company's perspective, its influence on the employees involved in the change has often been neglected. The thesis was conducted in an effort to assess the need for a change in the current model in order to better address employees' needs. To achieve that, a sample of 100 Intel employees was used to uncover the state of the employees' job satisfaction after an organizational change had taken place. The analysis exhibited above-average overall satisfaction scores. The areas with which employees seemed least satisfied were job security and company policies. Moreover, the elements of culture and the type of reorganization also seemed to influence the overall satisfaction scores. From these results, the authors concluded that a need does exist for a more interpersonal human resources approach to be incorporated within an entity's current reorganization model.
38

Věrnostní rabaty jako vylučující praktika v evropském soutěžním právu. / Loyalty Rebates as an Exclusionary Practice in the European Competition Law.

Šebo, Igor January 2019 (has links)
This master thesis treats loyalty rebates applied by dominant undertakings in the light of European competition law and analyses their consequences. It describes when such a practice might be considered by European Union authorities to be an abuse of a dominant position, as it has a negative impact on competitors by inducing customers' loyalty to the dominant undertaking. It situates the practice within the European competition law system and compares it to other practices that influence the market in a similar way. It also classifies different types of loyalty and other rebates, and explains how such rebates can force a customer to acquire increasing portions of his demand from the dominant undertaking and how they can damage its competitors. The thesis further offers a critical view of the very strict treatment of this practice by European institutions in the past, arguing from several positive effects that loyalty and other types of rebates may have. It also takes into consideration the most recent decision of the Court of Justice of the European Union in the Intel case from September 2017, which will hopefully affect the EU institutions' approach to this practice, as it broke the well-established per se interdiction of...
39

Nouveaux algorithmes numériques pour l’utilisation efficace des architectures multi-cœurs et hétérogènes / New numerical algorithms for efficient utilization of multicore and heterogeneous architectures

Ye, Fan 16 December 2015 (has links)
This study is driven by real computational needs coming from different fields of reactor physics, such as neutronics and thermal hydraulics, where the eigenvalue problem and the resolution of linear systems are the key challenges that consume substantial computing resources. In this context, our objective is to design and improve parallel computing techniques, including proposing efficient linear algebraic kernels and parallel numerical methods. In a shared-memory many-core environment such as the Intel Many Integrated Core (MIC) system, the parallelization of an algorithm is achieved in terms of fine-grained task parallelism and data parallelism. For scheduling the tasks, two main policies, work-sharing and work-stealing, were studied. For the purpose of generality and reusability, we use common parallel programming interfaces, such as OpenMP, Cilk/Cilk+, and TBB. For vectorizing tasks, the available tools include Cilk+ array notation, SIMD pragmas, and intrinsic functions. We evaluated these techniques and propose an efficient dense matrix-vector multiplication kernel. To tackle a more complicated situation, we propose using a hybrid MPI/OpenMP model for implementing sparse matrix-vector multiplication. We also designed a performance model for characterizing performance issues on MIC and guiding the optimization. As for solving linear systems, we derived a scalable parallel solver from the Monte Carlo method. Such a method exhibits inherently abundant parallelism, which is a good fit for many-core architectures. To address some of the fundamental bottlenecks of this solver, we propose a task-based execution model that resolves these problems.
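The Monte Carlo family of linear solvers the abstract refers to estimates the Neumann series of x = Hx + f by independent random walks: each walk accumulates f along the states it visits, weighted by the signs of the traversed matrix entries, and independence between walks is what yields the abundant parallelism. A seeded single-component sketch for a small contracting system follows; the matrix, walk count, and seed are illustrative, not from the thesis:

```python
import random

def mc_component(H, f, i, n_walks=20000, rng=None):
    """Monte Carlo estimate of x[i] for the system x = H x + f.
    Assumes every row of |H| sums to less than 1 (a contraction).
    The walk moves from state k to state j with probability |H[k][j]|,
    terminates with the leftover probability, and the weight carries
    the signs of the traversed entries."""
    rng = rng or random.Random(12345)
    total = 0.0
    for _ in range(n_walks):
        k, w = i, 1.0
        while True:
            total += w * f[k]          # score f at every visited state
            r, acc, nxt = rng.random(), 0.0, None
            for j, h in enumerate(H[k]):
                acc += abs(h)
                if r < acc:            # transition chosen with prob |H[k][j]|
                    nxt = j
                    w *= 1.0 if h >= 0 else -1.0
                    break
            if nxt is None:            # absorbed: walk ends
                break
            k = nxt
    return total / n_walks
```

Each component's estimate, and each walk within it, is independent, so distributing walks across cores needs no synchronization beyond a final reduction, which is exactly the property that makes the method attractive on many-core architectures.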
40

CSP problems as algorithmic benchmarks: measures, methods and models

Mateu Piñol, Carles 30 January 2009 (has links)
In computer science research, traditionally, most effort has been devoted to the worst-case hardness of problems (proving NP-completeness and comparing and reducing problems to one another being the two best-known examples). Artificial intelligence research has recently also focused on how some characteristics of concrete instances have dramatic effects on complexity and hardness even while worst-case complexity remains the same. This has led research efforts to focus on understanding which aspects and properties of problems or instances affect hardness, and why very similar problems can require very different times to solve. Research on search-based problems has been a substantial part of artificial intelligence research since its beginning. A large part of this research has focused on developing ever-faster algorithms, better heuristics, and new pruning techniques to solve ever-harder problems. One aspect of this effort to create better solvers consists of benchmarking solver performance on selected problem sets, and an obviously important part of that benchmarking is creating and defining new sets of hard problems. This effort is twofold: on one hand, to have at our disposal new problems, harder than previous ones, on which to test our solvers; and on the other hand, to obtain a deeper understanding of why such new problems are so hard, making it easier to understand why some solvers outperform others, knowledge that can contribute to designing and building better and faster algorithms and solvers. This work deals with designing better problems, that is, harder and easy to generate, for CSP solvers, also usable for SAT solvers. In the first half of the work, general concepts of hardness and CSP are introduced, including a complete description of the problems chosen for our study.
The chosen problems are Random Binary CSP Problems (BCSP), Quasi-group Completion Problems (QCP), Generalised Sudoku Problems (GSP), and a newly defined problem, Edge-Matching Puzzles (GEMP). Although BCSP and QCP are already well-studied problems, that is not the case for GSP and GEMP. For GSP we define new generation methods that ensure higher hardness than standard random methods. GEMP, on the other hand, is a newly formalised problem: we define it, provide algorithms to easily build problems of tunable hardness, and study its complexity and hardness. In the second part of the work we propose and study new methods to increase the hardness of such problems, providing both algorithms to build harder problems and an in-depth study of the effect of such methods on hardness, especially on resolution time.
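Edge-matching instances of the kind the thesis formalises can be generated "solution-first": fill the edges of an n x n grid with random colors, cut the grid into pieces, and scramble them, so the instance is satisfiable by construction while the number of colors tunes how constrained, and thus how hard, the puzzle is. The sketch below illustrates that generation scheme in a generic form; the encoding and defaults are illustrative choices, not the thesis's exact definition of GEMP:

```python
import random

def make_gemp(n, colors=4, seed=0):
    """Generate a solvable n x n edge-matching instance solution-first.
    Returns a shuffled list of pieces encoded as (top, right, bottom, left)
    edge colors. Fewer colors means more accidental matches and typically
    a harder search; the instance always has at least one solution."""
    rng = random.Random(seed)
    # Random colors for every horizontal and vertical edge, borders included:
    # hedge[r][c] is the edge above row r, vedge[r][c] the edge left of col c.
    hedge = [[rng.randrange(colors) for _ in range(n)] for _ in range(n + 1)]
    vedge = [[rng.randrange(colors) for _ in range(n + 1)] for _ in range(n)]
    pieces = []
    for r in range(n):
        for c in range(n):
            pieces.append((hedge[r][c],      # top
                           vedge[r][c + 1],  # right
                           hedge[r + 1][c],  # bottom
                           vedge[r][c]))     # left
    rng.shuffle(pieces)  # discard the placement, keep only the piece set
    return pieces
```

Because adjacent pieces are cut from shared edges, the original placement is always a valid solution; only the scrambled piece set is handed to the solver, which is what makes this style of generator convenient for producing guaranteed-satisfiable benchmark instances.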
