11

Resurshantering i Dual-core kluster / Resource Management in Dual-core Clusters

Gustafsson, Johan, Lingbrand, Mikael January 2008 (has links)
With the new generation of processors, where several CPU cores are placed on a single chip, performance is increased through parallel execution. In this report we present a survey of general multiprocessor theory, covering different techniques for both hardware and software. We have also performed empirical tests on a compute cluster, testing the two programs Fluent and CFX, which perform CFD computations. For each program, three models were used for simulations with a varying number of compute nodes. We investigated what is most cost-effective: using one or both CPU cores in the different simulations. To test this, we ran simulations with one and with two CPU cores on the compute nodes. During the simulations we collected measurements such as network, memory and CPU load for all nodes, as well as execution times. These values were then compiled, and they show that the larger a model is, the more it pays off to run with a single CPU core. In only one of our tests did it prove beneficial to use both CPU cores. A formula was then developed to show the differences between different numbers of processes with one and two CPU cores per node. This formula can be applied to calculate the total cost per simulation using the annual cost of the nodes and licenses that are used.
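The cost formula itself is not reproduced in the abstract; the sketch below only illustrates the kind of calculation described, with entirely hypothetical inputs (annual node cost, annual license cost, node/license counts and wall-clock runtime are made-up example values, not figures from the thesis).

```python
# Hypothetical sketch of a cost-per-simulation calculation of the kind the
# abstract describes; the actual formula from the thesis is not shown here.

HOURS_PER_YEAR = 24 * 365

def cost_per_simulation(runtime_hours: float,
                        nodes: int,
                        annual_node_cost: float,
                        licenses: int,
                        annual_license_cost: float) -> float:
    """Total cost of one simulation, prorated from annual node and license costs."""
    hourly_cost = (nodes * annual_node_cost + licenses * annual_license_cost) / HOURS_PER_YEAR
    return runtime_hours * hourly_cost

# Example: compare one core per node (slower, fewer licensed processes) with two cores per node.
single_core = cost_per_simulation(runtime_hours=10.0, nodes=8, annual_node_cost=20_000,
                                  licenses=8, annual_license_cost=50_000)
dual_core = cost_per_simulation(runtime_hours=7.5, nodes=8, annual_node_cost=20_000,
                                licenses=16, annual_license_cost=50_000)
print(f"single-core: {single_core:.0f}, dual-core: {dual_core:.0f}")
```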
12

Cache-Aware Virtual Page Management

Szlavik, Alexander 19 February 2013 (has links)
With contemporary research focusing its attention primarily on benchmark-driven performance evaluation, studying fundamental memory characteristics has gone by the wayside. This thesis presents a systematic study of the expected performance characteristics for contemporary multi-core CPUs. These characteristics are the primary influence on benchmarking variability and need to be quantified if more accurate benchmark results are desired. With the aid of a new, highly customizable, micro-benchmark suite, these CPU-specific attributes are evaluated and contrasted. The benchmark tool provides the framework for accurately measuring instruction throughput and integrates hardware performance counters to gain insight into machine-level caching performance. Additionally, the Linux operating system's impact on cache utilization is evaluated. With careful virtual memory management, cache-misses may be reduced, significantly contributing to benchmark result stability. Finally, a popular cache performance model, the stack distance profile, is evaluated with respect to contemporary CPU architectures. While particularly popular in multi-core contention-aware scheduling projects, modern incarnations of the model fail to account for trends in CPU cache hardware, leading to measurable degrees of inaccuracy.
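As a rough illustration of the stack distance (reuse distance) model mentioned above — not the thesis's own tooling — the following sketch computes a stack distance profile from a trace of cache-block addresses using a plain LRU-stack simulation.

```python
# Minimal textbook-style sketch of a stack distance (reuse distance) profile;
# this is an illustration of the model, not the micro-benchmark suite from the thesis.
from collections import Counter

def stack_distance_profile(trace):
    """Return a histogram mapping stack distance -> number of accesses."""
    stack = []                    # most recently used address at the end
    profile = Counter()
    for addr in trace:
        if addr in stack:
            # Distance = number of distinct addresses touched since the last use.
            distance = len(stack) - 1 - stack.index(addr)
            profile[distance] += 1
            stack.remove(addr)
        else:
            profile["inf"] += 1   # cold miss: address never seen before
        stack.append(addr)
    return profile

print(stack_distance_profile([0x10, 0x20, 0x10, 0x30, 0x20, 0x10]))
# With a fully associative LRU cache of C blocks, accesses with distance < C would hit.
```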
14

Implementering av en mjuk CPU i FPGA / Implementation of a soft CPU in FPGA

Nordmark, Daniel January 2012 (has links)
The goal of this thesis is to implement a soft CPU in an FPGA available on an ALTERA DE2 Board. The soft processor is integrated into a project created in the Quartus II development environment. It communicates with programmed logic in the FPGA and processes a stereo audio signal so that an echo can be generated and so that volume and balance can be adjusted. This is controlled from a keyboard connected to the DE2 board, and the adjustments to the output signal are shown on an LCD.
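For readers unfamiliar with how such an echo effect works, a software equivalent of the delay line that the FPGA logic would implement might look like the sketch below; the delay length and gain are made-up example values, and the thesis's actual hardware design is not shown in the abstract.

```python
# Simplified software model of a feedback echo (delay line) of the kind described above;
# delay, feedback and volume are hypothetical example parameters.
def apply_echo(samples, delay=8000, feedback=0.5, volume=1.0):
    """Return samples with an echo mixed in: y[n] = x[n] + feedback * y[n - delay]."""
    out = list(samples)
    for n in range(delay, len(out)):
        out[n] = samples[n] + feedback * out[n - delay]
    return [volume * s for s in out]

# Example: an impulse followed by silence produces decaying repeats.
signal = [1.0] + [0.0] * 32000
echoed = apply_echo(signal, delay=8000, feedback=0.5)
print(echoed[0], echoed[8000], echoed[16000], echoed[24000])  # 1.0, 0.5, 0.25, 0.125
```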
15

Krypteringsalgoritmer i OpenCL : AES-256 och ECC ElGamal / Cryptography algorithms in OpenCL : AES-256 and ECC ElGamal

Sjölander, Erik January 2012 (has links)
In recent years, graphics cards have undergone a transformation from pure rendering units into devices capable of general-purpose computation, much like an ordinary processor. With languages such as OpenCL, graphics cards become powerful devices that can be used efficiently for large computations. The goal of this thesis was to show which cryptographic algorithms are well suited for acceleration with OpenCL on graphics cards. A further goal was to show that a program does not need extensive rewriting to run in OpenCL; C code can be translated into an OpenCL kernel with only small syntactic changes. Two cryptographic algorithms were ported to run on graphics cards. The first, AES-256, was tested in two implementations, an 8-bit and a 32-bit variant. The second was elliptic curve cryptography (ECC) with the ElGamal scheme. The two were chosen to show that both symmetric and public-key encryption can be accelerated. For AES-256 in ECB mode the GPU reached a throughput of 7 Gbit/s, an acceleration of 25 times compared to the CPU. For elliptic curve ElGamal, the acceleration was 55 times for encryption and 67 times for decryption, and a single scalar point multiplication on the NIST B-163 curve is computed on the GPU in 65 µs. Both implementations are based on data parallelism, where the data elements are distributed over the available hardware. The work was carried out at Syntronic Software Innovations AB in Linköping, Sweden.
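The scalar multiplication mentioned above is the core operation in ECC ElGamal. A generic double-and-add loop illustrates the idea; the sketch below uses a small prime-field textbook curve rather than the binary-field B-163 curve from the thesis, and it is purely illustrative, not the data-parallel OpenCL implementation.

```python
# Illustrative double-and-add scalar multiplication on the textbook toy curve
# y^2 = x^3 + 2x + 2 over GF(17); NOT the B-163 curve or the OpenCL kernels
# from the thesis, just the underlying algorithm.
p, a, b = 17, 2, 2
O = None  # point at infinity

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Left-to-right double-and-add: computes k*P bit by bit."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)                                 # double
        if bit == "1":
            R = point_add(R, P)                             # add
    return R

G = (5, 1)                                                  # a point on the toy curve
Q = scalar_mult(9, G)
print(Q, (Q[1] ** 2 - Q[0] ** 3 - a * Q[0] - b) % p == 0)  # result lies on the curve
```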
16

An Investigation of CPU utilization relationship between host and guests in a Cloud infrastructure

Ahmadi Mehri, Vida January 2015 (has links)
Cloud computing stands as a revolution in the IT world in recent years. This technology facilitates resource sharing by reducing hardware costs for business users and promises energy efficiency and better resource utilization to service providers. CPU utilization is a key metric considered in resource management across clouds. The main goal of this thesis is to investigate CPU utilization behavior with regard to host and guest, which would help us understand the relationship between them. It is expected that insight into these relationships would be helpful in resource management. The methodology adopted is experimental research, involving experimental modeling, measurements and observations of the results. The experimental setup covers several complex scenarios, including a cloud and a standalone virtualization system. The results are further analyzed for a visual correlation. Results show that CPU utilization in the cloud and virtualization scenarios coincides. More experimental scenarios were designed based on the first observations, and the obtained results show irregular behavior between PM and VM under variable workload. CPU utilization retrieved from both the cloud and the standalone system is similar. At 100% workload, CPU utilization was constant and no correlation coefficient could be obtained. Lower workloads showed more or less correlation in most of the cases in our correlation analysis. It is expected that a larger number of iterations could change the output. Further analysis of these relationships for proper resource management techniques will be considered.
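To illustrate the kind of correlation analysis described above (the thesis's actual tooling and data are not shown here), one could compute a Pearson correlation coefficient between host and guest CPU-utilization samples roughly as follows; the coefficient is undefined for constant series, which matches the 100%-workload observation.

```python
# Rough sketch of a host-vs-guest correlation check, assuming two equally
# sampled CPU-utilization series in percent; the sample values are made up.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equally long samples; None if either is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return None          # constant series, e.g. both pegged at 100%
    return cov / (sx * sy)

host_cpu = [12.0, 35.5, 40.2, 61.0, 58.3, 77.9]    # hypothetical PM samples (%)
guest_cpu = [10.5, 33.0, 42.1, 59.8, 55.0, 80.2]   # hypothetical VM samples (%)
print(round(pearson(host_cpu, guest_cpu), 3))
```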
17

A study on Android games : 3G energy consumption, CPU-utilization and system calls

Almquist, Mathias, Almquist, Viktor January 2015 (has links)
The popularity of mobile games has increased drastically in recent years and many people use them as their main source of entertainment. Mobile games communicate with other devices over the network, which consumes a lot of energy, especially when connected to cellular networks (e.g., 3G). This high energy expense can feel unjustified to the player since always-on network connectivity is not required in order to play most games. Furthermore, the number of malware-infected applications in official application stores has increased significantly in recent years. These malware-infected applications can gain unrestricted access to and control of users' phones, which can be a threat to security. Information about the behaviour characteristics of games can be used to develop or improve systems for detecting malware applications. In this thesis, 20 popular Android games are analysed with a focus on data communication, CPU utilization and system call behaviour. The main subject of the data communication study is the 3G communication energy consumed by games. The system call study aims at quantifying the number and type of calls used by games, which may be useful in a further study of harmful behaviour by apps. The profiling results presented in this report show that the communication energy varies drastically among games. Games with very similar gameplay can consume very different amounts of energy, which indicates that there is room for improvement in many of the games. Ad-free games consume significantly less energy than games that use in-app advertisements. The results show that improving the advertisement fetching policy could reduce the energy consumption of these games. The majority of the games can be played without network connectivity, and therefore the communication energy consumed could be avoided entirely. The thesis also shows that games use a wide variety of system calls and that many of the system calls are common among the games.
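As a hint of how system-call counts of the kind quantified above can be gathered on Linux-based systems, one hedged sketch — not the authors' actual instrumentation — is to parse a raw strace log attached to the game's process; the log path and regular expression below are illustrative assumptions.

```python
# Hedged sketch: count system calls per name from a raw strace log, e.g. one
# produced with `strace -f -p <pid> -o game.trace`; this illustrates the idea,
# not the tooling actually used in the thesis.
import re
import sys
from collections import Counter

# Syscall name appears before '(' after an optional PID prefix.
SYSCALL = re.compile(r"^(?:\[pid\s+\d+\]\s+|\d+\s+)?(\w+)\(")

def count_syscalls(path):
    counts = Counter()
    with open(path) as trace:
        for line in trace:
            match = SYSCALL.match(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for name, n in count_syscalls(sys.argv[1]).most_common(10):
        print(f"{name:15s} {n}")
```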
19

Monitoramento e Cumprimento de Acordos de Nível de Serviço em Ambientes Virtualizados usando um Controlador de CPU / Monitoring and Enforcement of Service Level Agreements in Virtualized Environments Using a CPU Controller

Silva, Cyrus Dias da 12 September 2013 (has links)
Server virtualization has brought changes to the world of website and web application hosting. In the traditional approach, for isolation reasons each physical machine is tied to a single application or service, which leads to underutilization of computational resources. Server virtualization overcomes this limitation and provides several benefits, such as lower operational costs (since fewer physical machines are needed), more efficient resource utilization and shorter server provisioning times. The use of service level agreements (SLAs) is common practice in the hosting of web applications, but meeting SLAs in virtualized environments poses challenges. One of them is that it is not trivial to convert service level objectives (SLOs) expressed in application-level metrics, such as response time or transactions per second, into low-level resource allocations such as CPU and memory. Furthermore, since the workload that servers are subjected to varies over time, static resource allocations can only guarantee the service level if resources are allocated for peak demand, which leads to underutilization the rest of the time. In this dissertation, a monitoring and control solution was developed that dynamically adjusts the CPU scheduling parameters of the virtualized environment in order to prevent service level agreements from being violated. The approach was derived from a reference controller in the literature. Experiments were carried out in an environment with the Xen hypervisor, using a representative application and workloads, and the solutions were evaluated. The results obtained were consistent for the scenarios investigated.
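The abstract does not give the controller's equations; purely as an illustration of the idea — dynamically adjusting a per-VM CPU cap to keep response time under an SLO — a simple integral-style control loop might look like the sketch below. The gain, limits and the two helper functions are hypothetical stand-ins, not the controller derived in the dissertation or an actual Xen API.

```python
# Hedged sketch of an SLO-driven CPU-cap control loop; all constants and both
# helpers are hypothetical placeholders for illustration only.
import random

SLO_RESPONSE_TIME = 0.200     # target response time in seconds (example value)
GAIN = 50.0                   # cap adjustment per second of SLO error (example value)
CAP_MIN, CAP_MAX = 10, 100    # CPU cap bounds, in percent of one core (example values)

def measure_response_time() -> float:
    """Hypothetical stand-in for probing the hosted application, e.g. an HTTP request."""
    return random.uniform(0.1, 0.4)

def set_cpu_cap(domain: str, cap: int) -> None:
    """Hypothetical stand-in for applying the cap via the hypervisor's scheduler tools."""
    print(f"[{domain}] cap set to {cap}%")

def control_loop(domain: str, steps: int = 10) -> None:
    cap = CAP_MAX
    for _ in range(steps):
        error = measure_response_time() - SLO_RESPONSE_TIME
        # Integral-style update: raise the cap when the SLO is violated,
        # lower it (freeing CPU for other VMs) when there is slack.
        cap = max(CAP_MIN, min(CAP_MAX, cap + GAIN * error))
        set_cpu_cap(domain, int(cap))

control_loop("web-vm")
```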
20

Comparison between OpenStack virtual machines and Docker containers in regards to performance

Bonnier, Victor January 2020 (has links)
Cloud computing is a fast-growing technology which more and more companies have started to use over the years. When deploying a cloud computing application it is important to know what kind of technology you should use. Two popular technologies are containers and virtual machines. The objective of this study was to find out how the performance differs between Docker containers and OpenStack virtual machines in regards to memory usage, CPU utilization, boot time and throughput from a scalability perspective, when scaling between two and four instances of containers and virtual machines. The comparison was done by having two different virtual machines running: one with Docker that ran the containers and another with OpenStack that ran a stack of virtual machines. To gather the data from the virtual machines I used the command "htop", and to get the data from the containers I used the command "Docker stats". The results from the experiment favored the Docker containers: the boot time of the virtual machines was between 280 and 320 seconds, while the containers booted in 5-8 seconds. Memory usage on the virtual machines was more than double that of the containers. CPU utilization and throughput also favored the containers, and the gap in performance increased when scaling the application out to four instances in all cases except for throughput when adding information to a database. The conclusion that can be drawn from this is that Docker containers are favored over OpenStack virtual machines from a performance perspective. There are still other aspects to consider when choosing which technology to use when deploying a cloud application, such as security.
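As an illustration of how such container samples might be collected programmatically (the study itself read "htop" and "Docker stats" interactively), one hedged sketch using the Docker CLI's formatted output could look like this; it assumes the Go-template fields used below are supported by the installed Docker version, and no container names are taken from the study.

```python
# Hedged sketch: take one snapshot of per-container CPU and memory via the
# Docker CLI's formatted output; assumes `docker stats` and the template
# fields below are available and the Docker daemon is reachable.
import subprocess

def sample_container_stats():
    """Return a list of (name, cpu_percent, mem_usage) tuples from one snapshot."""
    fmt = "{{.Name}};{{.CPUPerc}};{{.MemUsage}}"
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", fmt],
        check=True, capture_output=True, text=True,
    ).stdout
    rows = []
    for line in out.splitlines():
        name, cpu, mem = line.split(";")
        rows.append((name, float(cpu.rstrip("%")), mem))
    return rows

for name, cpu, mem in sample_container_stats():
    print(f"{name:20s} cpu={cpu:5.1f}% mem={mem}")
```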
