  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Runtime Systems for Load Balancing and Fault Tolerance on Distributed Systems

Arafat, Md Humayun January 2014 (has links)
No description available.
102

Sched-ITS: An Interactive Tutoring System to Teach CPU Scheduling Concepts in an Operating Systems Course

Koya, Bharath Kumar 31 May 2017 (has links)
No description available.
103

EFFICIENT DETECTION OF HANG BUGS IN MOBILE APPLICATIONS

Thiagarajan, Deepa January 2016 (has links)
No description available.
104

A Framework for Providing Automatic Resource and Accuracy Management in a Cloud Environment

Vijayakumar, Smita 30 July 2010 (has links)
No description available.
105

Efficient fMRI Analysis and Clustering on GPUs

Talasu, Dharneesh 16 December 2011 (has links)
No description available.
106

Runtime Systems and Scheduling Support for High-End CPU-GPU Architectures

Trichy Ravi, Vignesh 27 June 2012 (has links)
No description available.
107

Accelerating Radiowave Propagation Simulations: A GPU-based Approach to Parabolic Equation Modeling / Accelererad simulering av utbredning av radiovågor: En GPU-baserad lösning av en parabolisk ekvation

Nilsson, Andreas January 2024 (has links)
This study explores the application of GPU-based algorithms to radiowave propagation modeling, specifically for solving parabolic wave equations. Radiowave propagation models are crucial in wireless communications, where they help predict how radio waves travel through different environments, which is vital for planning and optimization. The research examines the implementation of two numerical methods: the Split Step Method and the Finite Difference Method. Both methods are adapted to exploit the parallel processing capabilities of modern GPUs using the CUDA parallel computing framework, achieving considerable speedups compared to traditional CPU-based methods. Our findings reveal that the Split Step Method generally achieves higher speedup factors, especially for large system sizes and high-frequency simulations, making it particularly effective for expansive and complex models. In contrast, the Finite Difference Method shows more consistent speedup across various domain sizes and frequencies, suggesting robustness across a diverse range of simulation conditions. Both methods maintained high accuracy, with differences in computed norms remaining low when comparing the GPU implementations against their CPU counterparts.
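For readers unfamiliar with the technique the abstract refers to, the split-step Fourier scheme marches the parabolic-equation field forward in range by alternating a spectral free-space step with a pointwise refraction step. The sketch below is a minimal NumPy illustration of that marching loop, not the thesis's CUDA code; the function and parameter names are invented, and a GPU version would replace the NumPy FFTs with GPU FFTs such as cuFFT.

```python
import numpy as np

def split_step_march(u, n_profile, k0, dx, dz, steps):
    """March a field u(z) forward in range using the narrow-angle
    split-step Fourier solution of the parabolic wave equation.
    u         : complex field samples on a vertical grid (1-D array)
    n_profile : refractive index at each grid point (1-D array)
    k0        : free-space wavenumber, 2*pi / wavelength
    dx, dz    : range step and vertical grid spacing
    steps     : number of range steps to take
    """
    nz = u.size
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)            # vertical wavenumbers
    diffraction = np.exp(-1j * kz**2 * dx / (2.0 * k0))    # free-space (spectral) half-step
    refraction = np.exp(1j * k0 * (n_profile - 1.0) * dx)  # environment (pointwise) half-step
    for _ in range(steps):
        u = refraction * np.fft.ifft(diffraction * np.fft.fft(u))
    return u
```

Because each range step is just two FFTs plus elementwise multiplies, the method maps naturally onto GPU hardware, which is consistent with the large speedups reported for big grids and high frequencies.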
108

Trådlösa Nätverk : säkerhet och GPU / Wireless Networks : Security and GPU

de Laval, Johnny January 2009 (has links)
Wireless networks are inherently vulnerable to eavesdropping because they communicate over radio waves, so they are protected by encryption. WEP was the first encryption standard to see wide use, but it proved to have several serious vulnerabilities and could be circumvented within a few minutes. WPA was developed in response to the weaknesses of WEP, and shortly thereafter WPA2 was released, which is the standard in use today. The weakness that can be demonstrated in WPA2 lies in WPA2-PSK when weak passwords are used: software can easily run through large dictionaries to test whether a password can be recovered. That process takes time, which gives wireless networks a limited degree of protection. However, graphics processors have begun to be used for password recovery; graphics cards are more efficient and recover weak passwords considerably faster than motherboard CPUs. This makes it feasible to test passwords against even larger dictionaries and more combinations of words. That is what this study aims to shed light on: how the efficiency of graphics cards has affected the security of wireless networks from an organisational perspective.
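For context on why GPU acceleration changes the picture, each WPA2-PSK guess requires deriving a Pairwise Master Key with PBKDF2-HMAC-SHA1 over 4096 iterations, and guesses are independent of one another, so they parallelise well. A minimal Python sketch of that per-guess cost follows; the function names and example values are illustrative, and a real attack verifies candidates against a captured four-way handshake rather than against a known PMK.

```python
import hashlib

def wpa2_psk_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-PSK Pairwise Master Key: PBKDF2-HMAC-SHA1 with
    4096 iterations, the SSID as salt, and a 256-bit (32-byte) output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    """Try candidate passphrases until one derives the target PMK.
    Each candidate costs 4096 HMAC-SHA1 rounds, which is exactly the
    work a GPU cracker spreads across thousands of candidates at once."""
    for word in wordlist:
        if wpa2_psk_pmk(word, ssid) == target_pmk:
            return word
    return None

# Illustrative run with made-up values.
pmk = wpa2_psk_pmk("correct horse", "HomeNet")
print(dictionary_attack(pmk, "HomeNet", ["password", "letmein", "correct horse"]))
```

A GPU runs thousands of such derivations in parallel, which is why weak passphrases fall far faster than when the same dictionary is processed on a motherboard CPU.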
109

Cooperative Execution of Opencl Programs on Multiple Heterogeneous Devices

Pandit, Prasanna Vasant January 2013 (has links) (PDF)
Computing systems have become heterogeneous with the increasing prevalence of multi-core CPUs, Graphics Processing Units (GPU) and other accelerators in them. OpenCL has emerged as an attractive programming framework for heterogeneous systems. However, utilizing multiple devices in OpenCL is a challenge as it requires the programmer to explicitly map data and computation to each device. Utilizing multiple devices simultaneously to speed up execution of a kernel is even more complex, as the relative execution time of the kernel on different devices can vary significantly. Also, after each kernel execution, a coherent version of the data needs to be established. This means that, in order to utilize all devices effectively, the programmer has to spend considerable time and effort to distribute work across all devices, keep track of modified data in these devices and correctly perform a merging step to put the data together. Further, the relative performance of a program may vary across different inputs, which means a statically determined work distribution may not work well.

In this work, we present FluidiCL, an OpenCL runtime that takes a program written for a single device and uses multiple heterogeneous devices to execute each kernel. The runtime performs dynamic work distribution and cooperatively executes each kernel on all available devices. Since we consider a setup with devices having discrete address spaces, our solution ensures that execution of OpenCL work-groups on devices is adjusted by taking into account the overheads for data management. The data transfers and data merging needed to ensure coherence are handled transparently without requiring any effort from the programmer. FluidiCL also does not require prior training or profiling and is completely portable across different machines. Because it is dynamic, the runtime is able to adapt to system load. We have developed several optimizations for improving the performance of FluidiCL.

We evaluate the runtime across different sets of devices. On a machine with an Intel quad-core processor and an NVidia Fermi GPU, FluidiCL shows a geomean speedup of nearly 64% over the GPU, 88% over the CPU and 14% over the best of the two devices in each benchmark. In all benchmarks, performance of our runtime comes to within 13% of the best of the two devices. FluidiCL shows similar results on a machine with a quad-core CPU and an NVidia Kepler GPU, with up to 26% speedup over the best of the two. We also present results considering an Intel Xeon Phi accelerator and a CPU and find that FluidiCL performs up to 45% faster than the best of the two devices. We extend FluidiCL from a CPU–GPU scenario to a three-device setup having a quad-core CPU, an NVidia Kepler GPU and an Intel Xeon Phi accelerator and find that FluidiCL obtains a geomean improvement of 6% in kernel execution time over the best of the three devices considered in each case.
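As a loose illustration of the dynamic work-distribution idea described above (this is not FluidiCL's code: the chunk size and per-group costs are invented, and real OpenCL kernel launches, data transfers and merging are replaced by sleeps), a small Python sketch in which each device thread pulls the next chunk of work-groups as soon as it becomes free:

```python
import threading
import time

def cooperative_launch(total_groups, devices, chunk=64):
    """Hand out chunks of work-groups dynamically: each device thread grabs
    the next chunk whenever it is idle, so a faster device does more work."""
    next_group = 0
    lock = threading.Lock()
    executed = {name: 0 for name, _ in devices}

    def device_worker(name, cost_per_group):
        nonlocal next_group
        while True:
            with lock:                       # claim the next chunk atomically
                if next_group >= total_groups:
                    return
                start = next_group
                end = min(start + chunk, total_groups)
                next_group = end
            time.sleep((end - start) * cost_per_group)  # stand-in for kernel launch + transfers
            executed[name] += end - start

    threads = [threading.Thread(target=device_worker, args=d) for d in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return executed

# Illustrative run: a "gpu" four times faster per work-group than the "cpu".
print(cooperative_launch(1024, [("gpu", 0.0005), ("cpu", 0.002)]))
```

With this kind of scheme the split adapts automatically to relative device speed and to system load, which is the property the abstract emphasises over a statically determined work distribution.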
110

Bearbetningstid och CPU-användning i Snort IPS : En jämförelse mellan ARM Cortex-A53 och Cortex-A7 / Processing time and CPU usage in Snort IPS : A comparison between ARM Cortex-A53 and Cortex-A7

Nadji, Al-Husein, Sarbast Hgi, Haval January 2020 (has links)
The purpose of this study is to examine how the processing time of the Snort intrusion prevention system varies between two different processors: the ARM Cortex-A53 and the Cortex-A7. CPU usage was also examined to check whether processing time depends on how much CPU Snort uses. The study provides insight into how important the processor is for Snort to perform well in terms of processing time and CPU usage, and it informs the choice between the Cortex-A53 and the Cortex-A7 when implementing Snort IPS. Based on a literature search, an experimental environment was designed to answer the study's research questions. Snort can be classified as CPU-bound, which means that the system depends on a fast processor. In this context, a fast processor means that Snort has enough time to process the amount of network traffic it receives; otherwise traffic can pass through without being inspected, which can harm the device that Snort protects. The results show that the processing time of Snort differs between the Cortex-A53 and the Cortex-A7, and a clear difference in CPU usage between the processors was observed. The study also shows the connection between Snort's processing time and its CPU usage. The conclusion is that the ARM Cortex-A53 performs better when running Snort IPS in terms of both processing time and CPU usage: the Cortex-A53 has a processing time 10 seconds shorter and uses 2.87 times less CPU.
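As a rough sketch of the kind of measurement such a comparison needs (not the thesis's test harness: the Snort arguments and file paths are placeholders, and the resource module is POSIX-only), wall-clock processing time and the CPU time consumed by a child process can be captured in Python as follows:

```python
import resource
import subprocess
import time

def measure(cmd):
    """Run a command and report its wall-clock time, the CPU time it consumed,
    and the resulting average CPU load (CPU time divided by wall time)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return wall, cpu, cpu / wall

# Illustrative invocation: replay a capture file through Snort offline.
wall, cpu, load = measure(["snort", "-q", "-r", "traffic.pcap", "-c", "/etc/snort/snort.conf"])
print(f"wall {wall:.1f} s, cpu {cpu:.1f} s, average CPU load {load:.2f}")
```

Dividing the child CPU time by the wall time gives an average CPU-load figure, which makes the processing-time and CPU-usage results comparable across the two processors.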
