101

Utilização de técnicas de GPGPU em sistema de vídeo-avatar. / Use of GPGPU techniques in a video-avatar system.

Fernando Tsuda, 01 December 2011
This work presents the results of researching and applying GPGPU (General-Purpose computation on Graphics Processing Units) techniques to AVMix, a video-avatar system with augmented reality. With the growing demand for interactive, real-time three-dimensional graphics ever closer to reality, GPUs (Graphics Processing Units) have evolved into hardware with high computational power, capable of running parallel algorithms over large volumes of data. This capability can be used to speed up algorithms in several areas, such as image processing and computer vision. Based on a survey of related work, Nvidia's CUDA (Compute Unified Device Architecture) was adopted; it simplifies the implementation of programs that run on the GPU while keeping their use flexible, exposing to the programmer details of the hardware such as the number of processors allocated and the different types of memory. After the performance-critical routines of the AVMix system (depth map, segmentation and interaction) were reimplemented, the results show that using the GPU for parallel algorithms is viable in this application, and that it is important to evaluate each candidate algorithm with respect to its computational complexity and the volume of data transferred between the GPU and the computer's main memory.
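
The abstract's closing point, that per-pixel routines parallelize well but the speedup is bounded by the data moved between the GPU and main memory, can be made concrete with a short, generic CUDA sketch. The kernel below is not taken from AVMix: it is a hypothetical background-subtraction segmentation pass with invented image size and threshold, shown only to illustrate the transfer-versus-compute trade-off.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical per-pixel segmentation pass: mark a pixel as foreground when it
// differs enough from a reference background frame.
__global__ void segmentForeground(const unsigned char* frame,
                                  const unsigned char* background,
                                  unsigned char* mask,
                                  int numPixels, int threshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;
    int diff = (int)frame[i] - (int)background[i];
    if (diff < 0) diff = -diff;
    mask[i] = (diff > threshold) ? 255 : 0;
}

int main()
{
    const int w = 640, h = 480, n = w * h;
    std::vector<unsigned char> frame(n, 200), background(n, 30), mask(n, 0);

    unsigned char *dFrame, *dBackground, *dMask;
    cudaMalloc(&dFrame, n);
    cudaMalloc(&dBackground, n);
    cudaMalloc(&dMask, n);

    // Host-to-device copies: for per-frame video processing this traffic, not the
    // arithmetic, is often what limits the overall speedup.
    cudaMemcpy(dFrame, frame.data(), n, cudaMemcpyHostToDevice);
    cudaMemcpy(dBackground, background.data(), n, cudaMemcpyHostToDevice);

    int block = 256, grid = (n + block - 1) / block;
    segmentForeground<<<grid, block>>>(dFrame, dBackground, dMask, n, 40);
    cudaMemcpy(mask.data(), dMask, n, cudaMemcpyDeviceToHost);

    printf("first mask value: %d\n", mask[0]);
    cudaFree(dFrame); cudaFree(dBackground); cudaFree(dMask);
    return 0;
}
```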
102

[en] HYBRID FRUSTUM CULLING USING CPU AND GPU / [pt] FRUSTUM CULLING HÍBRIDO UTILIZANDO CPU E GPU

EDUARDO TELLES CARLOS, 15 September 2017
Visibility determination is a classical problem in computer graphics. Several algorithms have been developed to make ever larger and more detailed models viewable. Among them, frustum culling plays an important role: it removes objects that are not visible to the observer. Although very common in applications, the algorithm has been refined over the years to run ever faster. Even though it is treated as a well-solved problem in computer graphics, some aspects can still be improved and new forms of culling devised. Massive models in particular demand high-performance algorithms, since the amount of computation grows considerably. This work evaluates the frustum culling algorithm and its optimizations, aiming to obtain the best possible CPU implementation, and analyzes the influence of each of its parts on massive models. Based on this analysis, new GPU (Graphics Processing Unit) based frustum culling techniques are developed and compared with the CPU-only results. As a result, a hybrid form of frustum culling is proposed that tries to take advantage of the best of both the CPU and the GPU.
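
A minimal, hypothetical CUDA sketch of the GPU side of such a scheme (not the thesis' implementation): one thread per object tests a bounding sphere against six inward-facing frustum planes and writes a visibility flag that the CPU can then consume. The plane and sphere values are invented for the demo.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

struct Sphere { float x, y, z, r; };
struct Plane  { float a, b, c, d; };   // ax + by + cz + d >= 0 for points inside

// One thread per object: cull the sphere if it lies entirely behind any plane.
__global__ void frustumCull(const Sphere* spheres, const Plane* planes,
                            int* visible, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Sphere s = spheres[i];
    int inside = 1;
    for (int p = 0; p < 6; ++p) {
        float dist = planes[p].a * s.x + planes[p].b * s.y +
                     planes[p].c * s.z + planes[p].d;
        if (dist < -s.r) { inside = 0; break; }   // fully outside this plane
    }
    visible[i] = inside;
}

int main()
{
    const int n = 2;
    Sphere hs[n] = { {0.0f, 0.0f, -5.0f, 1.0f},      // inside the demo frustum
                     {100.0f, 0.0f, 0.0f, 1.0f} };   // far outside it
    Plane hp[6] = {                                  // crude frustum around the -z axis
        { 0,  0, -1,  -1},   // near
        { 0,  0,  1, 100},   // far
        { 1,  0, -1,   0},   // left
        {-1,  0, -1,   0},   // right
        { 0,  1, -1,   0},   // bottom
        { 0, -1, -1,   0},   // top
    };

    Sphere* ds; Plane* dp; int* dv; int hv[n];
    cudaMalloc(&ds, sizeof(hs)); cudaMalloc(&dp, sizeof(hp)); cudaMalloc(&dv, sizeof(hv));
    cudaMemcpy(ds, hs, sizeof(hs), cudaMemcpyHostToDevice);
    cudaMemcpy(dp, hp, sizeof(hp), cudaMemcpyHostToDevice);

    frustumCull<<<1, 32>>>(ds, dp, dv, n);
    cudaMemcpy(hv, dv, sizeof(hv), cudaMemcpyDeviceToHost);
    printf("sphere 0 visible: %d, sphere 1 visible: %d\n", hv[0], hv[1]);

    cudaFree(ds); cudaFree(dp); cudaFree(dv);
    return 0;
}
```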
103

Simulation et rendu de vagues déferlantes / Simulation and rendering of breaking waves

Brousset, Mathias, 07 December 2017
For several decades the computer graphics community has studied the physically based simulation and rendering of fluids. Both require numerically approximating complex systems of partial differential equations, which is computationally expensive. Applications range from video games, which require interactive frame rates, to realistic, complex flows for visual effects, which demand far more computation time and memory. Fluid dynamics models make it possible to simulate complex flows while letting the artist interact with the simulation, yet controlling the dynamics and appearance of waves remains difficult. This thesis addresses the control of ocean wave motion in a Navier-Stokes-based animation context, as well as the realistic visualization of such waves. Its two main contributions are: (i) an external-force model to control wave motion, including wave height, breaking point and speed, with an extension for the interaction between several waves and for rotating waves; and (ii) a methodology for visualizing the waves with a physically based renderer, relying on optical data of ocean constituents to control the appearance of the water treated as a participating medium. The simulation and control of wave dynamics are implemented in a solver based on the SPH (Smoothed Particle Hydrodynamics) method; to reach interactive performance, the SPH engine takes advantage of GPGPU technologies. For the physically realistic visualization, an existing renderer for participating media is used. Used together, the two contributions make it possible to simulate and control the dynamics of a sea front as well as its appearance, based on its physical parameters, and to let the user tune the amounts of ocean constituents that drive the look of the participating medium.
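
A minimal CUDA sketch of the kind of per-particle step on which such a controllable SPH simulator relies, assuming a hypothetical sinusoidal "wave paddle" force; the pressure, viscosity and neighbour-search stages of a real SPH solver, and the thesis' actual force model, are omitted.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

struct Particle { float x, y, vx, vy; };

// Apply gravity plus a time-varying horizontal push inside a "wave paddle" region,
// then integrate with symplectic Euler. A real SPH step would also add pressure and
// viscosity forces computed from neighbouring particles.
__global__ void applyWaveForceAndIntegrate(Particle* p, int n, float t, float dt,
                                           float paddleX, float amplitude, float omega)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float fx = 0.0f, fy = -9.81f;                  // gravity (per unit mass)
    if (p[i].x < paddleX)                          // particles near the wave generator
        fx += amplitude * sinf(omega * t);         // user-controlled driving force

    p[i].vx += fx * dt;
    p[i].vy += fy * dt;
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
}

int main()
{
    const int n = 1024;
    std::vector<Particle> h(n);
    for (int i = 0; i < n; ++i)
        h[i] = { (i % 32) * 0.05f, (i / 32) * 0.05f, 0.0f, 0.0f };

    Particle* d;
    cudaMalloc(&d, n * sizeof(Particle));
    cudaMemcpy(d, h.data(), n * sizeof(Particle), cudaMemcpyHostToDevice);

    float t = 0.0f, dt = 0.004f;
    for (int step = 0; step < 100; ++step, t += dt)
        applyWaveForceAndIntegrate<<<(n + 255) / 256, 256>>>(d, n, t, dt,
                                                             0.5f, 20.0f, 3.0f);

    cudaMemcpy(h.data(), d, n * sizeof(Particle), cudaMemcpyDeviceToHost);
    printf("particle 0 after 100 steps: x=%.3f y=%.3f\n", h[0].x, h[0].y);
    cudaFree(d);
    return 0;
}
```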
104

DistributedCL: middleware de processamento distribuído em GPU com interface da API OpenCL. / DistributedCL: distributed GPU processing middleware with an OpenCL API interface.

Andre Luiz Rocha Tupinamba, 10 July 2013
This work proposes a middleware, called DistributedCL, which makes parallel processing on distributed GPUs transparent. With DistributedCL support, an OpenCL-enabled application can run in a distributed manner, using remote GPUs, transparently and without code changes or recompilation. The proposed architecture for the DistributedCL middleware is modular, with well-defined layers, and a prototype was built according to it, incorporating several optimizations, including batched data transfer, asynchronous network communication and asynchronous OpenCL API invocation. The prototype was evaluated with available benchmarks, and a dedicated benchmark, CLBench, was developed to evaluate behaviour as the amount of processed data varies. The prototype showed good performance, superior to similar proposals, with some results close to the ideal; the size of the data transmitted over the network proved to be the main limiting factor.
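
DistributedCL itself exposes the OpenCL API; the sketch below uses CUDA (the single example language in this listing) rather than OpenCL, so it only illustrates the general principle behind the middleware's optimizations: issuing transfers and kernels asynchronously and in batches so that communication and computation overlap rather than serialize. Chunk size, stream count and the trivial kernel are invented.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 22, chunks = 4, chunkN = n / chunks;
    float* hostData;
    cudaMallocHost(&hostData, n * sizeof(float));       // pinned memory enables async copies
    for (int i = 0; i < n; ++i) hostData[i] = 1.0f;

    float* devData;
    cudaMalloc(&devData, n * sizeof(float));

    cudaStream_t streams[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&streams[c]);

    // Each chunk's copy-in, kernel and copy-out are queued asynchronously on its own
    // stream, so one chunk can compute while another is still being transferred.
    for (int c = 0; c < chunks; ++c) {
        size_t off = (size_t)c * chunkN;
        cudaMemcpyAsync(devData + off, hostData + off, chunkN * sizeof(float),
                        cudaMemcpyHostToDevice, streams[c]);
        scale<<<(chunkN + 255) / 256, 256, 0, streams[c]>>>(devData + off, chunkN, 2.0f);
        cudaMemcpyAsync(hostData + off, devData + off, chunkN * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    printf("hostData[0] = %.1f (expected 2.0)\n", hostData[0]);
    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(streams[c]);
    cudaFreeHost(hostData); cudaFree(devData);
    return 0;
}
```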
105

Adéquation Algorithme Architecture et modèle de programmation pour l'implémentation d'algorithmes de traitement du signal et de l'image sur cluster multi-GPU / Programming model for the implementation of 2D-3D image processing applications on a hybrid CPU-GPU cluster.

Boulos, Vincent, 18 December 2012
Originally designed to relieve the CPU of graphics rendering tasks, the GPU has become a massively parallel architecture suitable for processing large amounts of data. While it has won a significant market share in the High Performance Computing domain, an Algorithm-Architecture Matching approach is still necessary to implement an algorithm efficiently on GPU. The contribution of this thesis is twofold. Firstly, we present the significant gain provided by an optimized implementation of a granulometry algorithm (computation time decreases from several hours to less than a minute for a volume of 1024³ voxels). An analytical model establishing the performance variations of the granulometry application is also presented; we believe it can be extended to other regular algorithms. Secondly, deploying signal and image processing applications on a multi-GPU cluster can be a tedious task for the programmer. To help, we developed a library that reduces the programmer's contribution to decomposing the application into a Data Flow Graph and providing mapping annotations, from which the tool automatically dispatches tasks onto the processing elements (GPP or GPU). The throughput of a streaming visual-saliency application is then improved thanks to the efficient implementation produced by our tool on a multi-GPU cluster. To permit dynamic load balancing, a task migration method has also been incorporated into the tool.
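
A hypothetical CUDA sketch of one building block of such a granulometry: a naive binary 3D erosion with a cubic structuring element (a granulometry chains openings of increasing size and measures the surviving volume). Volume size and radius are invented, and the thesis' optimized kernels are not reproduced.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One thread per voxel: the voxel survives only if its whole cubic neighbourhood
// is foreground and inside the volume.
__global__ void erode3d(const unsigned char* in, unsigned char* out,
                        int dim, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= dim || y >= dim || z >= dim) return;

    unsigned char v = 1;
    for (int dz = -radius; dz <= radius && v; ++dz)
        for (int dy = -radius; dy <= radius && v; ++dy)
            for (int dx = -radius; dx <= radius && v; ++dx) {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 || nx >= dim || ny >= dim || nz >= dim ||
                    in[(nz * dim + ny) * dim + nx] == 0)
                    v = 0;                      // erode if any neighbour is background
            }
    out[(z * dim + y) * dim + x] = v;
}

int main()
{
    const int dim = 64, radius = 1;
    const size_t n = (size_t)dim * dim * dim;
    std::vector<unsigned char> host(n, 1);      // a solid block of foreground voxels

    unsigned char *dIn, *dOut;
    cudaMalloc(&dIn, n); cudaMalloc(&dOut, n);
    cudaMemcpy(dIn, host.data(), n, cudaMemcpyHostToDevice);

    dim3 block(8, 8, 8), grid(dim / 8, dim / 8, dim / 8);
    erode3d<<<grid, block>>>(dIn, dOut, dim, radius);
    cudaMemcpy(host.data(), dOut, n, cudaMemcpyDeviceToHost);

    printf("corner voxel after erosion: %d (border voxels erode away)\n", host[0]);
    cudaFree(dIn); cudaFree(dOut);
    return 0;
}
```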
106

UM COMPONENTE PARA EXPLORAÇÃO DA CAPACIDADE DE PROCESSAMENTO DE GPUS EM GRADES COMPUTACIONAIS / DEVELOPMENT OF A MODULE TO EXPLORE GPGPU CAPABLE COMPUTERS IN A GRID COMPUTING

Linck, Guilherme, 24 September 2010
Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / Computer grids emerged in the 1990s with the goal of using geographically dispersed computers for high-performance computing. Through grids, the computational power of a supercomputer can be reached in a simple, efficient and inexpensive way, and these benefits have made grid computing a prominent research topic. Recently, graphics adapter cards appeared on the market whose computational power surpasses, by a wide margin, even the most modern general-purpose processors. This gave rise to research that produced programming techniques which are relatively easy to learn and which simplify application programming for these processors, effectively bringing them into high-performance computing; their use became known as General-Purpose computing on Graphics Processing Units (GPGPU). Grid applications are generally written on top of a grid computing framework. TUXUR is one such framework, under development by students of the Master's program in Informatics at the Federal University of Santa Maria. This dissertation discusses the development of a feature foreseen in TUXUR that allows the grid it manages to benefit from GPGPU applications, particularly by making better use of the hardware of the nodes that compose it. The immediate impact of this synergy is a significant increase in the grid's computational capacity without adding new computers. The evaluation results highlight the importance of using GPGPU for tasks that benefit from this programming technique, even when they are executed in a grid.
108

Otimização de pathfinding em GPU / Pathfinding optimization on GPU

SILVA, 30 August 2013
In recent years, graphics processing units (GPUs) have made significantly more computational resources available to non-graphics applications. The ability to solve problems through parallel computing, where the same program runs over many different data elements at the same time, together with new architectures that support this paradigm, such as CUDA (Compute Unified Device Architecture), has encouraged the use of the GPU for general-purpose applications, especially in games. Some parallel tasks that were CPU-based are being ported to the GPU because of its superior performance. One of these tasks is the pathfinding of agents over a game map, which has already achieved better performance on the GPU but is still limited. This work investigates and develops optimizations to a CUDA-based GPU pathfinding implementation, scaling the number of agents and map nodes and comparing the results against a CPU implementation.
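
A hypothetical CUDA sketch of one common GPU pathfinding building block: level-synchronous breadth-first wavefront expansion over a grid map with uniform edge costs, one thread per cell, with the host looping until the frontier stops growing. Grid size and the simple scheme are illustrative, not the thesis' optimized algorithm.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Each unreached, unblocked cell joins the wavefront if any 4-neighbour is on the
// current frontier (distance == level). Each thread writes only its own cell.
__global__ void expandLevel(const unsigned char* blocked, int* dist,
                            int width, int height, int level, int* changed)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    if (blocked[idx] || dist[idx] != -1) return;   // obstacle or already reached

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    for (int k = 0; k < 4; ++k) {
        int nx = x + dx[k], ny = y + dy[k];
        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
        if (dist[ny * width + nx] == level) {      // neighbour is on the current frontier
            dist[idx] = level + 1;
            *changed = 1;
            break;
        }
    }
}

int main()
{
    const int w = 64, h = 64, n = w * h;
    std::vector<unsigned char> blocked(n, 0);
    std::vector<int> dist(n, -1);
    dist[0] = 0;                                   // source at cell (0, 0)

    unsigned char* dBlocked; int *dDist, *dChanged;
    cudaMalloc(&dBlocked, n);
    cudaMalloc(&dDist, n * sizeof(int));
    cudaMalloc(&dChanged, sizeof(int));
    cudaMemcpy(dBlocked, blocked.data(), n, cudaMemcpyHostToDevice);
    cudaMemcpy(dDist, dist.data(), n * sizeof(int), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    for (int level = 0; ; ++level) {
        int changed = 0;
        cudaMemcpy(dChanged, &changed, sizeof(int), cudaMemcpyHostToDevice);
        expandLevel<<<grid, block>>>(dBlocked, dDist, w, h, level, dChanged);
        cudaMemcpy(&changed, dChanged, sizeof(int), cudaMemcpyDeviceToHost);
        if (!changed) break;                       // wavefront stopped growing
    }

    cudaMemcpy(dist.data(), dDist, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("distance to opposite corner: %d (expected %d)\n", dist[n - 1], (w - 1) + (h - 1));
    cudaFree(dBlocked); cudaFree(dDist); cudaFree(dChanged);
    return 0;
}
```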
109

Combining Influence Maps and Potential Fields for AI Pathfinding

Pentikäinen, Filip and Sahlbom, Albin, January 2019
This thesis explores the combination of influence maps and potential fields in two novel pathfinding algorithms, IM+PF and IM/PF, that allow AI agents to navigate an environment intelligently. The novel algorithms are compared to two established pathfinding algorithms, A* and A*+PF, in the real-time strategy (RTS) game StarCraft 2. The main focus of the thesis is to evaluate the pathfinding capabilities and real-time performance of the novel algorithms in comparison to the established ones. Based on the results of the evaluation, general use cases of the novel algorithms are presented, together with an assessment of whether they can be used in modern games. The novel algorithms' pathfinding capabilities and performance scalability are compared to the established algorithms to evaluate the viability of the novel solutions. Several experiments are created, using StarCraft 2's base game as a benchmarking tool, in which various aspects of the algorithms are tested. The creation of influence maps and potential fields in real time is highly parallelizable and is therefore done in a GPGPU solution, to accurately assess all algorithms' real-time performance in a game environment. The experiments yield mixed results, showing better pathfinding and scalability performance for the novel algorithms in certain situations. Since the algorithms utilizing potential fields enable agents to inherently avoid and engage units in the environment, they have an advantage in experiments where such qualities are assessed. Similarly, influence maps enable agents to traverse the map more efficiently than plain A*, giving those agents inherent advantages. In certain use cases, where multiple agents require pathfinding to the same destination, creating a single influence map is more beneficial than generating separate A* paths for each agent. The main benefits of generating the influence map, compared to A*-based solutions, are the lower total compute time, more precise pathfinding and the possibility of pre-calculating the map.
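
A hypothetical CUDA sketch of the GPGPU step such algorithms rely on: filling a potential-field grid in parallel, one thread per cell, from an attractive goal term plus repulsive terms contributed by units. The field shapes and constants are invented and are not the thesis' formulas.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

struct Unit { float x, y, charge; };    // charge > 0 pushes agents away

// One thread per grid cell: attractive term toward the goal, repulsive terms from units.
// Agents then hill-climb the field, stepping to the neighbouring cell with the highest value.
__global__ void buildPotentialField(float* field, int width, int height,
                                    float goalX, float goalY,
                                    const Unit* units, int numUnits)
{
    int cx = blockIdx.x * blockDim.x + threadIdx.x;
    int cy = blockIdx.y * blockDim.y + threadIdx.y;
    if (cx >= width || cy >= height) return;

    float dxg = cx - goalX, dyg = cy - goalY;
    float potential = -sqrtf(dxg * dxg + dyg * dyg);        // highest (zero) at the goal

    for (int u = 0; u < numUnits; ++u) {
        float dx = cx - units[u].x, dy = cy - units[u].y;
        potential -= units[u].charge / (1.0f + dx * dx + dy * dy);   // dip around each unit
    }
    field[cy * width + cx] = potential;
}

int main()
{
    const int w = 128, h = 128;
    std::vector<Unit> units = { {40.0f, 40.0f, 50.0f}, {90.0f, 60.0f, 50.0f} };
    std::vector<float> field(w * h);

    Unit* dUnits; float* dField;
    cudaMalloc(&dUnits, units.size() * sizeof(Unit));
    cudaMalloc(&dField, field.size() * sizeof(float));
    cudaMemcpy(dUnits, units.data(), units.size() * sizeof(Unit), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    buildPotentialField<<<grid, block>>>(dField, w, h, 120.0f, 120.0f,
                                         dUnits, (int)units.size());
    cudaMemcpy(field.data(), dField, field.size() * sizeof(float), cudaMemcpyDeviceToHost);

    printf("potential at (0,0): %.2f, at the goal: %.2f\n",
           field[0], field[120 * w + 120]);
    cudaFree(dUnits); cudaFree(dField);
    return 0;
}
```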
110

AES-kryptering med CUDA: Skillnader i beräkningshastighet mellan AES-krypteringsmetoderna ECB och CTR vid implementering med CUDA-ramverket. / AES encryption with CUDA: differences in computation speed between the AES encryption modes ECB and CTR when implemented with the CUDA framework.

Vidén, Pontus and Henningsson, Viktor, January 2020
Purpose – The purpose of this study is partly to show how the AES encryption modes ECB and CTR affect computation speed when using the GPGPU framework CUDA, and partly to clarify the advantages and disadvantages of the different modes. Method – A preliminary study was conducted to obtain empirical data on the AES encryption modes ECB and CTR. Data from the study was analyzed and compared to determine the various aspects of the modes and to create a basis for assessing their advantages and disadvantages. The preliminary study was carried out systematically by searching databases for scientific work on the subject. An experiment was then used to measure execution times for the GPGPU framework CUDA when processing the AES encryption modes; an experiment was chosen to gain control over the variables included in the study and to see how they change when deliberately influenced. Findings – The preliminary study shows that CTR is more secure than ECB, but also considerably more complex, which can lead to integrity risks when it is implemented incorrectly. The experiment measures the time to transfer data from CPU memory to GPU memory, the encryption time on the GPU, and the time to transfer the result back from GPU memory to CPU memory, for both CTR and ECB, in encryption and decryption. The analysis shows that ECB is faster than CTR in both encryption and decryption. Implications – The experiment shows that CTR is slower than ECB, but for both modes most of the time is spent on transfers between CPU memory and GPU memory. Limitations – The files tested only go up to about 1 gigabyte in size, which gives small computation times.
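
A hypothetical CUDA sketch of why CTR mode maps naturally onto the GPU: every 16-byte block gets its own thread, which encrypts (nonce || block index) independently and XORs the keystream with the plaintext, so no thread depends on another's output and the same kernel also decrypts. The aesEncryptBlock below is a placeholder mixing function standing in for a real AES-128 block cipher so the sketch compiles and runs; it is not AES.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdint>
#include <vector>

__device__ void aesEncryptBlock(const uint8_t* in, uint8_t* out, const uint8_t* key)
{
    // Placeholder only: real code would run the 10 AES-128 rounds with the expanded key.
    for (int i = 0; i < 16; ++i)
        out[i] = in[i] ^ key[i] ^ (uint8_t)(i * 31);
}

__global__ void ctrEncrypt(const uint8_t* plain, uint8_t* cipher, const uint8_t* key,
                           const uint8_t* nonce, uint64_t numBlocks)
{
    uint64_t b = blockIdx.x * (uint64_t)blockDim.x + threadIdx.x;
    if (b >= numBlocks) return;

    // Build this block's counter: 8-byte nonce followed by the 8-byte block index.
    uint8_t counter[16], keystream[16];
    for (int i = 0; i < 8; ++i) counter[i] = nonce[i];
    for (int i = 0; i < 8; ++i) counter[8 + i] = (uint8_t)(b >> (56 - 8 * i));

    aesEncryptBlock(counter, keystream, key);
    for (int i = 0; i < 16; ++i)
        cipher[b * 16 + i] = plain[b * 16 + i] ^ keystream[i];   // XOR: same kernel decrypts
}

int main()
{
    const uint64_t numBlocks = 1 << 16;                // 1 MiB of data
    const size_t bytes = numBlocks * 16;
    std::vector<uint8_t> plain(bytes, 0x42), cipher(bytes);
    uint8_t key[16] = {0}, nonce[8] = {0};

    uint8_t *dPlain, *dCipher, *dKey, *dNonce;
    cudaMalloc(&dPlain, bytes); cudaMalloc(&dCipher, bytes);
    cudaMalloc(&dKey, 16); cudaMalloc(&dNonce, 8);
    cudaMemcpy(dPlain, plain.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dKey, key, 16, cudaMemcpyHostToDevice);
    cudaMemcpy(dNonce, nonce, 8, cudaMemcpyHostToDevice);

    ctrEncrypt<<<(unsigned)((numBlocks + 255) / 256), 256>>>(dPlain, dCipher, dKey,
                                                             dNonce, numBlocks);
    cudaMemcpy(cipher.data(), dCipher, bytes, cudaMemcpyDeviceToHost);

    printf("first ciphertext byte: 0x%02x\n", cipher[0]);
    cudaFree(dPlain); cudaFree(dCipher); cudaFree(dKey); cudaFree(dNonce);
    return 0;
}
```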
