21

High performance trace replay event simulation of parallel programs behavior / Ferramenta de alto desempenho para análise de comportamento de programas paralelos baseada em rastos de execução

Korndorfer, Jonas Henrique Muller January 2016 (has links)
Modern high performance systems comprise thousands to millions of processing units. The development of a scalable parallel application for such systems depends on an accurate mapping of application processes onto the available resources. Identifying unused resources and potential processing bottlenecks requires good performance analysis. The trace-based observation of a parallel program's execution is one of the most helpful techniques for this purpose. Unfortunately, tracing often produces large trace files, easily reaching gigabytes of raw data. Trace-based performance analysis tools therefore have to process such data into a human-readable form and must be efficient enough to allow a fast and useful analysis. Most existing tools, such as Vampir, Scalasca, and TAU, focus on processing trace formats with a fixed, well-defined semantics; the corresponding file formats are usually designed to handle applications developed with popular libraries like OpenMP, MPI, and CUDA. However, not all parallel applications use such libraries, so these tools are sometimes of no help. Fortunately, other tools take a more dynamic approach by using an open trace file format without a specific semantics; among them are Paraver, Pajé, and PajeNG. Being generic, however, comes at a cost: these tools frequently show poor performance when processing large traces. The objective of this work is to present performance optimizations made to the PajeNG tool-set, comprising the development of a parallelization strategy and a performance analysis to demonstrate our gains. The original PajeNG works sequentially, processing a single trace file holding all data from the observed application, so the tool's scalability is severely limited by the reading of that file. Our strategy splits the file into pieces so that they can be processed in parallel, and the splitting method lets each piece run in a separate thread. The experiments were executed on non-uniform memory access (NUMA) machines. The performance analysis considers several aspects such as thread locality, the number of flows, disk type, and comparisons between NUMA nodes. The obtained results are very promising, scaling PajeNG by roughly eight to eleven times depending on the machine.
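The splitting strategy described above lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of the chunked parallel-reading idea — byte ranges snapped to line boundaries, one worker per piece. It is not PajeNG's actual C++ implementation, and all names are illustrative.

```python
# Sketch of the chunk-splitting idea from the abstract: divide a large trace
# file into byte ranges aligned to line boundaries, then let each worker
# parse its own piece. Illustrative only -- not PajeNG's real API.
import os
from concurrent.futures import ThreadPoolExecutor

def chunk_offsets(path, n_chunks):
    """Compute (start, end) byte ranges, snapped forward to newline boundaries."""
    size = os.path.getsize(path)
    bounds = [size * i // n_chunks for i in range(n_chunks + 1)]
    with open(path, "rb") as f:
        for i in range(1, n_chunks):       # leave offset 0 and EOF untouched
            f.seek(bounds[i])
            f.readline()                   # advance to the next full line
            bounds[i] = f.tell()
    return list(zip(bounds[:-1], bounds[1:]))

def process_chunk(path, start, end):
    """Parse one piece of the trace; here we just count trace lines
    (one event record per line). A real tool would simulate each event."""
    events = 0
    with open(path, "rb") as f:
        f.seek(start)
        while f.tell() < end:
            line = f.readline()
            if not line:
                break
            events += 1
    return events

def parallel_replay(path, n_workers=8):
    ranges = chunk_offsets(path, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(lambda r: process_chunk(path, *r), ranges)
        return sum(results)
```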
22

Dynamic Analysis of Multithreaded Embedded Software to Expose Atomicity Violations

January 2016 (has links)
abstract: Concurrency bugs are among the most notorious classes of software bugs and are very difficult to reproduce. Significant work has been done on detecting atomicity-violation bugs in high-performance systems, but little work addresses these bugs in embedded systems. Although the criteria for claiming the existence of a bug remain the same, the approach changes somewhat for embedded systems. The main focus of this research is to develop a systematic methodology that addresses the issue from an embedded-systems perspective. A framework is developed that uses memory references to shared variables to predict the access interleaving patterns that may violate atomicity, and that provides support for forcing these schedules and analyzing them for any output change, system fault, or change in execution path. / Dissertation/Thesis / Masters Thesis Computer Science 2016
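As a rough illustration of the kind of prediction such a framework performs, the sketch below flags the four classically unserializable local-remote-local interleavings (R-W-R, W-W-R, R-W-W, W-R-W) in a recorded trace of accesses to one shared variable. The trace format and function names are assumptions, not the thesis's actual framework.

```python
# Hedged sketch: scan a per-variable access trace for interleavings where a
# remote access falls between two accesses of the same thread and forms one
# of the four unserializable patterns. Trace format is hypothetical.
UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"),
                  ("R", "W", "W"), ("W", "R", "W")}

def find_atomicity_violations(trace):
    """trace: list of (thread_id, op) tuples with op in {'R', 'W'}.
    Returns (i, j, k) index triples of unserializable local-remote-local runs."""
    violations = []
    for i, (tid, op_i) in enumerate(trace):
        remotes = []                      # remote accesses seen so far
        for k in range(i + 1, len(trace)):
            t_k, op_k = trace[k]
            if t_k == tid:                # this thread's next access: check
                for j, op_j in remotes:
                    if (op_i, op_j, op_k) in UNSERIALIZABLE:
                        violations.append((i, j, k))
                break
            remotes.append((k, op_k))
    return violations

# Example: thread 0 reads, thread 1 writes, thread 0 reads again -> R-W-R
print(find_atomicity_violations([(0, "R"), (1, "W"), (0, "R")]))  # [(0, 1, 2)]
```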
24

Att automatisera funktionell testning av webbapplikationer (Automating functional testing of web applications)

Jonsson Forsblad, Olle, Basu, Henry January 2013 (has links)
Testing is an important part of the development of web applications, and it is possible to improve the efficiency of the testing process by automating parts of it. Despite this, some companies conduct testing entirely manually. The purpose of this paper is to show under what circumstances there may be value in automating the testing of web applications and to identify reasons why some companies are not already doing so. The paper is based on a descriptive case study in which interviews, questionnaires, and observations generated the data. It contributes knowledge about automated testing and describes the factors that may affect a company's decision to automate.
25

Light-Weight Authentication Schemes with Applications to RFID Systems

Malek, Behzad January 2011 (has links)
The first line of defence against wireless attacks in Radio Frequency Identification (RFID) systems is authentication of tags and readers. RFID tags are very constrained in terms of power, memory, and circuit size, so they are not capable of performing sophisticated cryptographic operations. In this dissertation, we have designed light-weight authentication schemes to securely identify RFID tags to readers and vice versa. The authentication schemes require only simple binary operations and can be readily implemented in resource-constrained RFID tags. We provide a formal proof of security based on the difficulty of solving the Syndrome Decoding (SD) problem. Authentication verifies the unique identity of an RFID tag, which makes it possible to track a tag across multiple readers. We further protect the identity of RFID tags with a light-weight privacy-preserving identification scheme based on the difficulty of the Learning Parity with Noise (LPN) problem. To protect RFID tag authentication against relay attacks, we have designed a resistance scheme in the analog realm that does not have the practicality issues of existing solutions. Our scheme is based on chaos-suppression theory and is robust to inconsistencies such as noise and parameter mismatch. Furthermore, our solutions are based on asymmetric-key algorithms, which better facilitate the distribution of cryptographic keys in large systems. We have provided a secure broadcast encryption protocol to efficiently distribute cryptographic keys throughout the system with minimal communication overhead. The security of the proposed protocol is formally proven in the adaptive adversary model, which simulates a real-world attacker.
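For flavor, the sketch below implements the well-known HB protocol, which rests on the same LPN assumption the dissertation builds on: the tag answers random challenges with noisy parities computed from cheap XOR/AND operations. It illustrates the class of light-weight schemes, not the author's actual protocols, and the parameters are illustrative.

```python
# Minimal HB-style authentication sketch (same LPN assumption as the
# dissertation, NOT the author's protocol). The tag needs only bitwise
# AND/XOR, which suits resource-constrained RFID hardware.
import secrets

N_BITS, ROUNDS, NOISE_NUM, NOISE_DEN = 64, 256, 1, 8  # noise rate eta = 1/8

def rand_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def dot(a, s):
    return sum(x & y for x, y in zip(a, s)) & 1       # inner product mod 2

def tag_response(a, s):
    noise = 1 if secrets.randbelow(NOISE_DEN) < NOISE_NUM else 0
    return dot(a, s) ^ noise                          # noisy parity reply

def authenticate(s):
    errors = 0
    for _ in range(ROUNDS):
        a = rand_bits(N_BITS)            # reader's random challenge
        z = tag_response(a, s)           # tag's noisy parity reply
        errors += z ^ dot(a, s)          # reader checks with its copy of s
    # accept if the error rate is near eta, far below 1/2 (random guessing)
    return errors <= ROUNDS * NOISE_NUM * 2 // NOISE_DEN

secret = rand_bits(N_BITS)               # key shared by tag and reader
print(authenticate(secret))              # True with overwhelming probability
```

Note that plain HB is only secure against passive eavesdroppers, which is one reason stronger variants and the dissertation's own constructions exist.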
26

Designing for Replayability : Designing a game with a simple gameplay loop for the purpose of being replayable

Hammar, Nicolas, Persson, Jonathan January 2022 (has links)
Replayability is important to many players because it increases the amount of play time they get out of a game for the price they paid, so it is interesting to know how replayability can be promoted in games with simple mechanics. Previous research has categorised what motivates players to play a game again as "aspects of replayability". These aspects, and the inherent subjectivity of replayability, were taken into account to define replayability. In this study, a game is designed to be replayable according to those definitions and then iterated on three times. Four different tools and principles for designing for replayability are used and evaluated in the design: the Periodic Dilemma Generator (Aghekyan, 2021), the Aspects of Replayability (Krall and Menzies, 2012) (Monedero March, 2019), game elements based on randomisation (Bycer, 2018), and a tool to add synergy (Rosewater, 2013). All four are considered in the initial design, after which one or a few are selected for each game iteration. For each version of the game, the reason for, and the theory behind, each design decision is documented. The game is then released, and player data and answers to two in-game questions are gathered to inform reflection before the next iteration is designed. Each of the four tools and principles proved useful in a different way. The Aspects of Replayability helped focus the design by serving as goals to work towards. The Periodic Dilemma Generator was used throughout the design as both a design tool and a guide for creating meaningful choices. Randomisation was added as part of the game's initial design and remained the main source of variance throughout all iterations. Designing synergy between game elements then enhanced both the Periodic Dilemma Generator and the randomised variance, making synergy the tool that provided the most replayability in the game. Used together, these tools and principles can guide a design that enhances the complexity of a simple game to promote replayability.
27

Automatically Identify and Create a Highlight Cinematic in a Virtual Reality Action Game

Almroth, Kristoffer January 2021 (has links)
Users sharing their experiences on social media and streaming sites is becoming increasingly important for marketing games. Virtual reality adds the challenge that head movement is sometimes too erratic to capture and present on a flat screen. This paper addresses the issue by automatically generating highlight cinematics for virtual reality action games, using new camera angles to create easily shareable media aimed at non-players. The problem is solved by first identifying interest over time and then splitting it into sequences of coherent action. The most interesting sequences are selected for the highlight reel, and each sequence is split into clips tied to a specific camera angle. The highlight cinematics were evaluated using a survey. The results suggest that dynamic cameras are more engaging and interesting than static cameras. The selection of camera angles gave more significant results than the length of the highlight or the intensity of the action, which points towards the presentation of the highlight cinematic being more important to non-players than the actual highlighted material.
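The pipeline — score interest over time, segment into coherent action, keep the best sequences — can be sketched briefly. The scoring, gap threshold, and names below are assumptions for illustration, not the thesis's actual values.

```python
# Hedged sketch of the highlight pipeline from the abstract: split a scored
# timeline into action sequences and keep the most interesting ones.
def split_sequences(events, max_gap=3.0):
    """events: time-sorted list of (timestamp, interest). A pause longer
    than max_gap seconds ends the current action sequence (assumed rule)."""
    sequences, current = [], []
    for t, score in events:
        if current and t - current[-1][0] > max_gap:
            sequences.append(current)
            current = []
        current.append((t, score))
    if current:
        sequences.append(current)
    return sequences

def top_sequences(events, k=3):
    """Rank sequences by total accumulated interest; keep the top k for the
    highlight reel, where each would then be filmed from its own camera."""
    seqs = split_sequences(events)
    return sorted(seqs, key=lambda s: sum(score for _, score in s),
                  reverse=True)[:k]

# Example: two bursts of action separated by a lull; the second scores higher.
timeline = [(0.0, 1.0), (1.0, 2.0), (10.0, 5.0), (11.0, 4.0)]
print(top_sequences(timeline, k=1))  # [[(10.0, 5.0), (11.0, 4.0)]]
```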
28

Hippocampal Representations of Targeted Memory Reactivation and Reactivated Temporal Sequences

Alm, Kylie H January 2017 (has links)
Why are some memories easy to retrieve, while others are more difficult to access? Here, we tested whether we could bias memory replay, a process whereby newly learned information is reinforced by reinstating the neuronal patterns of activation that were present during learning, towards particular memory traces. The goal of this biasing is to strengthen some memory traces, making them more easily retrieved. To test this, participants were scanned during interleaved periods of encoding and rest. Throughout the encoding runs, participants learned triplets of images that were paired with semantically related sound cues. During two of the three rest periods, novel, irrelevant sounds were played. During one critical rest period, however, the sound cues learned in the preceding encoding period were played in an effort to preferentially increase reactivation of the associated visual images, a manipulation known as targeted memory reactivation. Representational similarity analyses were used to compare multi-voxel patterns of hippocampal activation across encoding and rest periods. Our index of reactivation was selectively enhanced for memory traces that were targeted for preferential reactivation during offline rest, both compared to information that was not targeted for preferential reactivation and compared to a baseline rest period. Importantly, this neural effect of targeted reactivation was related to the difference in delayed order memory for information that was cued versus uncued, suggesting that preferential replay may be a mechanism by which specific memory traces can be selectively strengthened for enhanced subsequent memory retrieval. We also found partial evidence of discrimination of unique temporal sequences within the hippocampus. Over time, multi-voxel patterns associated with a given triplet sequence became more dissimilar to the patterns associated with the other sequences. Furthermore, this neural marker of sequence preservation was correlated with the difference in delayed order memory for cued versus uncued triplets, signifying that the ability to reactivate particular temporal sequences within the hippocampus may be related to enhanced temporal order memory for the cued information. Taken together, these findings support the claim that awake replay can be biased towards preferential reactivation of particular memory traces and also suggest that this preferential reactivation, as well as representations of reactivated temporal sequences, can be detected within patterns of hippocampal activation. / Psychology
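A minimal sketch of the representational-similarity idea used here: z-score the multi-voxel patterns and correlate an encoding pattern with each rest-period pattern, reading a higher mean correlation as evidence of reactivation. This is illustrative of the method only, not the study's actual pipeline.

```python
# Hedged sketch of pattern-similarity-based reactivation: mean Pearson
# correlation between an encoding voxel pattern and rest-period patterns.
import numpy as np

def reactivation_index(encoding_pattern, rest_patterns):
    """encoding_pattern: 1-D voxel vector; rest_patterns: (time, voxel) array.
    Returns the mean Pearson correlation across rest time points."""
    e = (encoding_pattern - encoding_pattern.mean()) / encoding_pattern.std()
    r = (rest_patterns - rest_patterns.mean(axis=1, keepdims=True)) \
        / rest_patterns.std(axis=1, keepdims=True)
    return float((r @ e).mean() / e.size)

rng = np.random.default_rng(0)
pattern = rng.normal(size=500)                           # hippocampal voxels
rest = pattern + rng.normal(scale=2.0, size=(40, 500))   # noisy reactivations
print(reactivation_index(pattern, rest))                 # well above chance (~0)
```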
29

Distributed Relay/Replay Attacks on GNSS Signals

Lenhart, Malte January 2022 (has links)
In modern society, Global Navigation Satellite Systems (GNSSs) are ubiquitously relied upon by many systems, among others in critical infrastructure, for navigation and time synchronization. To overcome the prevailing vulnerable state of civilian GNSSs, many detection schemes for different attack types (i.e., jamming and spoofing) have been proposed in the literature over the last decades. With the launch of Galileo Open Service Navigation Message Authentication (OS-NMA), certain, but not all, types of GNSS spoofing are prevented. We therefore analyze the remaining attack surface of relay/replay attacks in order to identify a suitable and effective combination of detection schemes against them. One shortcoming in the evaluation of countermeasures is the lack of available test platforms, which commonly limits evaluation to mathematical description, simulation, and/or tests against a well-defined set of recorded spoofing incidents. To allow researchers to test countermeasures against more diverse threats, this degree project investigates relay/replay attacks against GNSS signals in real-world setups. We consider colluding adversaries that relay/replay on the signal and message levels in real time, over consumer-grade Internet, and with commercial off-the-shelf (COTS) hardware. We thereby highlight how effective and simple relay/replay attacks can be against existing and, likely, upcoming authenticated signals. We investigate the requirements for such colluding attacks and present their limitations and impact, as well as possible detection points.
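One simple detection point of the kind the thesis highlights can be sketched as a staleness check: a message relayed over the Internet accrues delay beyond normal signal propagation, so a receiver with a trustworthy local clock can flag it. The threshold, message format, and availability of such a clock are assumptions for illustration, not the thesis's actual detection scheme.

```python
# Hedged sketch of a replay detection point: flag navigation messages whose
# transmit timestamp is older than live propagation plausibly allows.
PROPAGATION_S = 0.08   # GNSS signal travel time, on the order of 70-90 ms
TOLERANCE_S = 0.05     # allowed processing and clock slack (assumed value)

def looks_replayed(msg_transmit_time, receiver_time):
    """Return True if the message is staler than a live signal could be."""
    delay = receiver_time - msg_transmit_time
    return delay > PROPAGATION_S + TOLERANCE_S

# A message relayed over the Internet accrues extra tens of milliseconds:
print(looks_replayed(100.000, 100.085))  # False -- plausible live signal
print(looks_replayed(100.000, 100.250))  # True  -- suspicious extra delay
```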
30

Neurocomputational model for learning, memory consolidation and schemas

Dupuy, Nathalie January 2018 (has links)
This thesis investigates how, through experience, the brain acquires and stores memories, and uses these to extract and modify knowledge. This question is studied by both computational and experimental neuroscientists, as it is relevant not only to neuroscience but also to artificial systems that need to develop knowledge about the world from limited, sequential data. It is widely assumed that new memories are initially stored in the hippocampus and are later slowly reorganised into distributed cortical networks that represent knowledge. This memory reorganisation is called systems consolidation. In recent years, experimental studies have revealed complex hippocampal-neocortical interactions that have blurred the lines between the two memory systems, challenging the traditional understanding of memory processes. In particular, the prior existence of cortical knowledge frameworks (also known as schemas) was found to speed up learning and consolidation, which is seemingly at odds with previous models of systems consolidation; however, the underlying mechanisms of this effect are not known. In this work, we present a computational framework to explore potential interactions between the hippocampus, the prefrontal cortex, and associative cortical areas during learning as well as during sleep. To model the associative cortical areas, where the memories are gradually consolidated, we have implemented an artificial neural network (a Restricted Boltzmann Machine) to gain insight into potential neural mechanisms of memory acquisition, recall, and consolidation. We analyse the network's properties using two tasks inspired by neuroscience experiments. The network gradually built a semantic schema in the associative cortical areas through the consolidation of multiple related memories, a process promoted by hippocampal-driven replay during sleep. To explain the experimental data, we suggest that, as the neocortical schema develops, the prefrontal cortex extracts characteristics shared across multiple memories; we call this information a meta-schema. In our model, the semantic schema and meta-schema in the neocortex are used to compute consistency, conflict, and novelty signals. We propose that the prefrontal cortex uses these signals to modulate memory formation in the hippocampus during learning, which in turn influences consolidation during sleep replay. Together, these results provide a theoretical framework to explain experimental findings and produce predictions for hippocampal-neocortical interactions during learning and systems consolidation.
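Since the cortical module is a Restricted Boltzmann Machine, a minimal RBM with one step of contrastive divergence (CD-1) conveys the mechanism. The sizes, learning rate, and the fixed batch of patterns standing in for hippocampal replay are illustrative assumptions, not the thesis's configuration.

```python
# Minimal RBM with one-step contrastive divergence (CD-1), sketching the
# cortical module described in the abstract. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0):
        """One contrastive-divergence update on a batch of binary patterns."""
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# "Consolidate" a fixed batch of binary patterns, presented repeatedly as
# hippocampal replay would present them during sleep:
patterns = (rng.random((32, 20)) < 0.3).astype(float)
rbm = RBM(n_visible=20, n_hidden=10)
for _ in range(200):
    rbm.cd1(patterns)

# After training, the RBM reconstructs (recalls) the consolidated patterns:
_, h = rbm.sample_h(patterns)
recon, _ = rbm.sample_v(h)
print(np.mean((recon > 0.5) == (patterns > 0.5)))  # reconstruction accuracy
```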
