About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Desafios no desenvolvimento de aplicações seguras usando Intel SGX / Challenges in the development of secure applications using Intel SGX

SILVA, Rodolfo de Andrade Marinho. 06 September 2018 (has links)
During the last few decades, an increasing amount of user data has been sent to environments not controlled by the data owners. In some cases these data are sent with the intention of making them public, but in the vast majority of cases they need to be kept safe and private, or made accessible only in very specific use cases. When data must be kept private, entities must take specific measures to maintain their security and privacy during transmission, storage, and processing. Many efforts have been made toward this goal, including the specification of hardware components that provide trusted execution environments (TEEs), such as the Intel Software Guard Extensions (SGX). This technology can nevertheless be used in incorrect or inefficient ways when certain considerations are not taken into account during application development. In this work, we address the main challenges faced in the development of applications that use SGX, and propose good practices and a toolset (DynSGX) that help make better use of the technology's capabilities. Such challenges include, but are not limited to, application partitioning according to the SGX programming model, application placement in cloud computing environments, and, above all, memory management. The studies presented in this work show that misuse of the technology can result in considerable performance loss compared to implementations that follow the proposed good practices. The toolset proposed in this work was also shown to enable the protection of application code in cloud computing environments, with negligible overhead compared to the regular SGX programming model.
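The partitioning challenge above stems from SGX's split programming model: the application is divided into an untrusted host and trusted enclave code, crossed only through ECALLs. Below is a minimal sketch of the untrusted side, assuming the Intel SGX SDK; the enclave image name and the ecall_sum proxy (normally generated by sgx_edger8r from an EDL file, which is not shown) are hypothetical.

```c
/* Untrusted host side of a partitioned SGX application (illustrative sketch,
 * not DynSGX itself). "enclave.signed.so" and ecall_sum() are hypothetical
 * names; the proxy would be generated from an EDL file by sgx_edger8r. */
#include <stdio.h>
#include "sgx_urts.h"
#include "enclave_u.h"   /* untrusted proxies generated by sgx_edger8r */

int main(void)
{
    sgx_enclave_id_t eid;
    sgx_launch_token_t token = {0};
    int updated = 0;

    /* Load and initialize the enclave: code and data inside it are
     * shielded from the rest of the system, including the OS. */
    sgx_status_t rc = sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                                         &token, &updated, &eid, NULL);
    if (rc != SGX_SUCCESS) {
        fprintf(stderr, "enclave creation failed: %#x\n", rc);
        return 1;
    }

    /* ECALL: cross the untrusted/trusted boundary. Each crossing is
     * expensive, which is one reason partitioning decisions matter. */
    int sum = 0;
    rc = ecall_sum(eid, &sum, 2, 3);
    if (rc == SGX_SUCCESS)
        printf("sum computed inside the enclave: %d\n", sum);

    sgx_destroy_enclave(eid);
    return 0;
}
```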
12

The influence of individual differences on neural correlates of emotional and cognitive information processing

Mériau, Katja 13 December 2007 (has links)
Modern multi-level theories claim that emotion may be generated in different ways, by different processes. The dual memory model of emotion refers to these processes as schematic (automatic) and propositional (controlled) processing. The model further integrates emotion regulatory strategies, such as re-direction of attention and semantic elaboration of emotional information, as essential components of emotion processing. However, research on the neurobiological correlates of the different processing modes is scarce. Hence, the present work focuses on identifying behavioral and neural correlates of the hypothesized processing modes and how these are modulated by individual differences in affectivity and in the cognitive processing of emotions. Individual differences in state negative affect were associated with altered activity in the insula during schematic processing of negative emotional information. This may indicate increased processing of the hedonic dimension of aversive stimuli in individuals with high state negative affect. Individual differences in state anxiety and in the cognitive processing of emotions modulated behavioral and neural correlates of propositional processing of emotional information. Specifically, in individuals with high state anxiety and with difficulties cognitively processing emotions, re-direction of attention was associated with increased cognitive effort. Findings at the neural level indicate that re-direction of attention, as compared to elaboration of emotional information, may be a less effective emotion regulation strategy for individuals who have difficulty cognitively processing emotions.
13

Système distribué à adressage global et cohérence logicielle pour l’exécution d’un modèle de tâche à flot de données / Distributed runtime system with global address space and software cache coherence for a data-flow task model

Gindraud, François 11 January 2018 (has links)
Distributed systems are widely used in HPC (High Performance Computing). Owing to rising energy concerns, some chip manufacturers have moved from multi-core CPUs to MPSoCs (Multi-Processor Systems on Chip), which place a distributed system on a single chip. However, distributed systems, with their distributed memories, are hard to program compared to friendlier shared-memory systems. A family of solutions called DSM (Distributed Shared Memory) systems has been developed to simplify the programming of distributed systems; it includes NUMA architectures, PGAS languages, and distributed task runtimes. The common strategy of these systems is to create a global address space of some kind and to automate network transfers on accesses to global objects. DSM systems differ in their interfaces, capabilities, semantics on global objects, and implementation levels (hardware/software). This thesis presents a new software DSM system called Givy. The motivation for Givy is to execute programs modeled as dynamic task graphs with data-flow dependencies on MPSoC architectures (MPPA). Contrary to many software DSMs, the global address space of Givy is indexed by real pointers: raw C pointers are made valid across the whole distributed system. Givy's global objects are the memory blocks returned by malloc(). Data is replicated across nodes, and all copies are managed by a software cache coherence protocol called Owner Writable Memory. This protocol can relocate its own coherence metadata, and thus should help execute irregular applications efficiently. The programming model cuts the program into dynamically created tasks annotated with their memory accesses. These annotations are used to drive coherence requests, and provide useful information for scheduling and load balancing. The first contribution of this thesis is the overall design of the Givy runtime. The second is the formalization of the Owner Writable Memory coherence protocol. The third is its translation into a model checker language (Cubicle), together with correctness validation attempts. The last contribution is the detailed implementation of the memory allocation subsystem: the choice of real pointers as global indices requires tight integration between the memory allocator and the coherence protocol.
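To make the pointer-indexed global address space concrete, here is a toy C sketch, not Givy's actual design: if each node owns a fixed slice of the virtual address range, a raw pointer alone identifies its home node. All constants and names below are invented for illustration.

```c
/* Toy illustration of a pointer-indexed global address space (not Givy's
 * real API). Each node owns a distinct range of the virtual address space,
 * so any raw pointer identifies both the block and its home node. */
#include <stdint.h>
#include <stdio.h>

#define NODES     4
#define NODE_SPAN (1ULL << 40)          /* hypothetical per-node VA range  */
#define GAS_BASE  0x100000000000ULL     /* hypothetical base of the GAS    */

/* Home node of a global pointer: derived from the address itself, so no
 * lookup table is needed to route a coherence request. */
static int owner_of(const void *p)
{
    return (int)(((uintptr_t)p - GAS_BASE) / NODE_SPAN);
}

int main(void)
{
    /* pretend this came from a GAS-aware malloc() on node 2 */
    void *block = (void *)(GAS_BASE + 2 * NODE_SPAN + 0x1000);
    printf("block %p is homed on node %d of %d\n", block, owner_of(block), NODES);
    return 0;
}
```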
14

A direct comparison between mathematical operations in mental arithmetic with regard to working memory’s subsystems

Koch, Felix-Sebastian January 2004 (has links)
This study examined the idea that each mathematical operation (addition, subtraction, multiplication, and division) is mainly linked to one of the components of working memory as proposed by Baddeley. The phonological loop, visuospatial sketchpad, and central executive were studied using a dual-task methodology with 7 different secondary tasks. Response times for mental calculation were measured for 35 undergraduate and graduate students, and error rates were calculated. Results show clear differences between operations and between number pairs. The interaction between conditions and operations only approached significance. The results did not support the idea that operations can be linked to a particular working memory component. Several factors, such as language, problem size, lack of detail in the working memory model, difficulty of the secondary tasks, and internal validity problems, are discussed with regard to the results and mental arithmetic.
15

Vérification de programmes avec pointeurs à l'aide de régions et de permissions / Verification of Pointer Programs Using Regions and Permissions

Bardou, Romain 14 October 2011 (has links)
Deductive verification consists in annotating programs with a specification, i.e. logic formulas that describe the behavior of the program, and proving that programs satisfy their specification. Tools such as the Why platform take a program and its specification as input and compute logic formulas such that, if they are valid, the program satisfies its specification. These logic formulas can be proven automatically or using proof assistants. When a program is written in a language that supports pointer aliasing, i.e. if several variables may denote the same memory cell, reasoning about the program becomes particularly tricky. It is necessary to specify which pointers may or may not be equal. Invariants of data structures, in particular, are harder to verify. This thesis proposes a type system that structures the heap in a modular fashion in order to control pointer aliasing and data invariants. It is based on the notions of region and permission. Programs are then translated to Why in such a way that pointers are separated as much as possible, facilitating reasoning. This thesis also proposes an inference mechanism that alleviates the need to annotate the region operations introduced by the language. A model is introduced to describe the semantics of the language and prove its safety. In particular, it is proven that if the type of a pointer states that its invariant holds, then this invariant indeed holds in the model. This work has been implemented as a tool named Capucine. Several examples were written to illustrate the language and were verified using Capucine.
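To give a flavor of what such tools consume, here is a minimal sketch of a C function annotated with an ACSL-style contract, the input format of the Frama-C/Jessie front end to Why; the function and its contract are invented for illustration.

```c
/* A minimal flavor of deductive-verification input: a C function carrying
 * an ACSL-style contract. The tool generates logic formulas whose validity
 * implies the function meets this contract. */

/*@ requires \valid(a) && \valid(b);
  @ requires \separated(a, b);        // no aliasing: a and b are distinct cells
  @ assigns  *a, *b;
  @ ensures  *a == \old(*b) && *b == \old(*a);
  @*/
void swap(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}
```

Note how the aliasing question the abstract raises shows up directly in the contract: without the \separated clause, the postcondition would be unprovable for the case a == b.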
16

Parallellisering av Sliding Extensive Cancellation Algorithm (ECA-S) för passiv radar med OpenMP / Parallelization of Sliding Extensive Cancellation Algorithm (ECA-S) for Passive Radar with OpenMP

Johansson Hultberg, Andreas January 2021 (has links)
Software parallelization has gained increasing interest as the shrinking of transistors within integrated circuits has begun to stagnate. This has led to the development of new processing units with an increasing number of cores. Parallelization is an optimization technique that lets the user exploit parallel processes to streamline algorithm flows. This study examines the performance benefits that a passive bistatic radar system can obtain from parallelization and code refactoring. The study focuses mainly on the use of parallel instructions within a shared memory model on a Central Processing Unit (CPU), using the OpenMP application programming interface. Quantitative data is collected to compare the runtime of the most central algorithm in the passive radar system, the Extensive Cancellation Algorithm (ECA). ECA can be used to suppress unwanted clutter in the surveillance signal, with the purpose of producing clear target detections of airborne objects. The algorithm, however, is computationally demanding, which has led to the development of faster versions such as the Sliding ECA (ECA-S). Despite this development, the algorithm is still relatively demanding and can lead to long execution times within the radar system. In this study, a MATLAB implementation of ECA-S is ported to C in order to take advantage of the fast execution of that procedural programming language. Parallelism is introduced within the ported algorithm using Intel's threading methodology and is then applied within two different operating systems. The study shows that a speedup by a factor of 24 can be obtained in C while preserving the correctness of the results. The results also show that refactoring a MATLAB algorithm can yield 73% faster code, and that a C-MEX implementation is twice as slow as a plain C implementation. Finally, the study indicates that real-time performance can be achieved for a passive bistatic radar system using the C programming language and parallel instructions within a shared memory model on a CPU.
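As a rough sketch of the kind of loop-level OpenMP parallelization the study applies (not the actual ECA-S code; the array names and the simple subtraction model are invented):

```c
/* Sketch of loop-level OpenMP parallelization of a clutter-cancellation
 * style computation (illustrative only, not ECA-S itself).
 * Compile with e.g.: gcc -O2 -fopenmp cancel.c */
#include <omp.h>
#include <complex.h>
#include <stddef.h>

/* Subtract a modeled clutter contribution from the surveillance signal.
 * Each output sample is independent, so iterations can run in parallel
 * across the CPU cores within the shared memory model. */
void cancel_clutter(const double complex *surv,
                    const double complex *clutter_model,
                    double complex *cleaned, size_t n)
{
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        cleaned[i] = surv[i] - clutter_model[i];
}
```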
17

Access Path Based Dataflow Analysis For Sequential And Concurrent Programs

De, Arnab 12 1900 (has links) (PDF)
In this thesis, we have developed a flow-sensitive dataflow analysis framework for value set analyses of Java-like languages. Our analysis framework is based on access paths: a variable followed by zero or more field accesses. We express our abstract states as maps from bounded access paths to abstract value sets. Using access paths instead of allocation sites enables us to perform strong updates on assignments to dynamically allocated memory locations. We also describe several optimizations to reduce the number of access paths that need to be tracked by our analysis. We have instantiated this framework for flow-sensitive pointer and null-pointer analysis for Java, and implemented the analysis inside the Chord framework. A major part of our implementation is written declaratively using Datalog. We leverage the use of BDDs in Chord to keep our memory usage low. We show that our analysis is much more precise and faster than traditional flow-sensitive and flow-insensitive pointer and null-pointer analyses for Java. We further extend our access-path-based analysis framework to concurrent Java programs. We use the synchronization structure of the programs to transfer abstract states from one thread to another; therefore, we do not need to make conservative assumptions about reads from or writes to shared memory. We prove our analysis sound for the happens-before memory model, which is weaker than most common memory models, including sequential consistency and the Java Memory Model. We implement a null-pointer analysis for concurrent Java programs and show it to be more precise than the traditional analysis.
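A rough C sketch of the central data structure the abstract describes, a bounded access path mapped to an abstract value set; all names, the bound on path length, and the bit-set encoding are invented (the actual implementation is in Java/Datalog inside Chord):

```c
/* Sketch of the core abstraction: a bounded access path (a root variable
 * followed by at most MAX_FIELDS field accesses) bound to an abstract
 * value set. Encodings here are invented for illustration. */
#include <stdint.h>

#define MAX_FIELDS 3            /* bound on tracked path length          */

typedef struct {
    int var;                    /* interned root variable, e.g. "x"      */
    int fields[MAX_FIELDS];     /* interned field ids, e.g. .f then .g   */
    int nfields;
} access_path;

typedef struct {
    access_path path;
    uint64_t    values;         /* abstract value set as a bit set of
                                   abstract locations (bit 0 = null)     */
} abs_binding;

/* Strong update: an assignment like x.f = y REPLACES (rather than merges)
 * the value set bound to the path "x.f". This is sound here because an
 * access path names a single runtime location, unlike an allocation-site
 * summary, which may stand for many objects. */
void strong_update(abs_binding *b, uint64_t new_values)
{
    b->values = new_values;
}
```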
18

Logique de séparation et vérification déductive / Separation logic and deductive verification

Bobot, François 12 December 2011 (has links)
This thesis falls within the domain of proofs of programs by deductive verification. Deductive verification generates, from a program's source code and its specification, a mathematical formula whose validity implies that the program follows its specification. The source code describes what the program does, and the specification what it should do. The validity of the formula is mainly checked by automatic provers. During the last ten years, separation logic has been shown to be an elegant way to deal with programs that use pointer-based data structures. However, it requires a specific logical language, specific provers, and specific reasoning techniques. This thesis introduces a technique for expressing ideas from separation logic within the traditional framework of deductive verification. The mathematical formulas produced, however, are not in the same first-order logic as the one handled by the provers. To allow greater automation, this work therefore also defines new conversions between the polymorphic first-order logic and the many-sorted first-order logic used by most provers. The first part of this thesis led to an implementation in the Jessie tool. The second part resulted in a substantial contribution to the writing of the Why3 tool, in particular to the architecture and implementation of the transformations that carry out these simplifications and conversions.
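As a rough illustration of the kind of translation at stake (a textbook-style encoding, not the thesis's actual one): a points-to assertion joined by separating conjunction can be rendered in ordinary first-order logic with an explicit heap map, making the footprint disjointness explicit.

```latex
% Textbook-style illustration (not the thesis's exact translation): two
% points-to assertions under separating conjunction, and one standard way
% to render them in plain first-order logic with an explicit heap map $h$.
\[
  x \mapsto a \;*\; y \mapsto b
  \quad\rightsquigarrow\quad
  x \neq y \;\wedge\; h(x) = a \;\wedge\; h(y) = b
\]
% The separating conjunction's implicit footprint disjointness becomes the
% explicit disequality $x \neq y$, which ordinary first-order provers handle.
```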
19

Real Time Face Recognition on GPU using OPENCL

Naik, Narmada January 2017 (has links) (PDF)
Face recognition finds various applications in surveillance, law enforcement, etc. These applications require fast image processing in real time. Modern GPUs have evolved into fully programmable parallel stream processors, and real-time face recognition benefits from this parallelism. With the aim of fulfilling both speed and accuracy criteria, we present a GPU-accelerated face recognition system. OpenCL is a heterogeneous computing language that allows parallelism to be extracted on different platforms such as DSP processors, FPGAs, and GPUs. The proposed GPU kernel exploits coarse-grained parallelism for Local Binary Pattern (LBP) histogram computation and Enhanced Local Ternary Pattern (ELTP) feature extraction. The proposed optimizations of local memory, work-group size, and work-group dimension further speed up face recognition on the GPU. As a result, we have achieved speedups of 30 to 300 times over the CPU for LBP and ELTP feature extraction on image sizes from 124*124 to 2048*2048. We also present robust real-time face recognition and tracking on the GPU using a fusion of RGB and depth images taken from a Kinect sensor. The proposed segmentation-after-detection algorithm enhances recognition performance using LBP.
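For reference, a scalar C sketch of the basic LBP operator that such a GPU kernel parallelizes over pixels (illustrative; the thesis's OpenCL kernel and its ELTP variant are not reproduced here):

```c
/* Scalar reference of the basic LBP operator. A GPU implementation runs
 * one such computation per pixel in parallel and then builds a histogram
 * of the resulting codes as the texture feature. */
#include <stdint.h>

/* Assumes 1 <= x < width-1 and 1 <= y < height-1 (interior pixel). */
uint8_t lbp_code(const uint8_t *img, int width, int x, int y)
{
    /* clockwise neighbor offsets starting at the top-left */
    static const int dx[8] = {-1, 0, 1, 1, 1, 0, -1, -1};
    static const int dy[8] = {-1, -1, -1, 0, 1, 1, 1, 0};

    uint8_t center = img[y * width + x];
    uint8_t code = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t neighbor = img[(y + dy[i]) * width + (x + dx[i])];
        code |= (uint8_t)((neighbor >= center) << i); /* one bit per neighbor */
    }
    return code;
}
```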
