1

Traffic Analysis Attacks in Anonymity Networks: Relationship Anonymity-Overhead Trade-off

Vuković, Ognjen; Dán, György; Karlsson, Gunnar. January 2013.
Mix networks and anonymity networks provide anonymous communication via relaying, which introduces overhead and increases the end-to-end message delivery delay. In practice, overhead and delay must often be kept low, hence it is important to understand how to optimize anonymity under limited overhead and delay. In this work we address this question under passive traffic analysis attacks, whose goal is to learn the traffic matrix. For our study, we use two anonymity networks: MCrowds, an extension of Crowds, which provides unbounded communication delay, and Minstrels, which provides bounded communication delay. We derive exact and approximate analytical expressions for the relationship anonymity of these systems. Using MCrowds and Minstrels we show that, contrary to intuition, increased overhead does not always improve anonymity. We investigate the impact of the systems' parameters on anonymity, and the sensitivity of anonymity to the misestimation of the number of attackers.
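As a rough, external illustration of the overhead that relaying introduces (not taken from the thesis; MCrowds and Minstrels differ in detail), the sketch below simulates Crowds-style probabilistic forwarding, where each relay forwards the message to another randomly chosen relay with probability p_forward and delivers it otherwise, and reports the average number of relay hops per message:

```python
import random

def simulate_relay_hops(p_forward, num_messages=100_000, rng=random.Random(0)):
    """Average number of relay hops per message under Crowds-style
    probabilistic forwarding: each relay forwards with probability
    p_forward and delivers to the destination otherwise."""
    total_hops = 0
    for _ in range(num_messages):
        hops = 1  # the initiator always hands the message to a first relay
        while rng.random() < p_forward:
            hops += 1
        total_hops += hops
    return total_hops / num_messages

# Higher forwarding probability means the message passes through more relays
# (better mixing) but also incurs more overhead and delay.
for p in (0.25, 0.5, 0.75, 0.9):
    print(f"p_forward={p}: ~{simulate_relay_hops(p):.2f} relay hops per message")
```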
2

Reducing Inter-Process Communication Overhead in Parallel Sparse Matrix-Matrix Multiplication

Ahmed, Salman; Houser, Jennifer; Hoque, Mohammad A.; Raju, Rezaul; Pfeiffer, Phil. 01 July 2017.
Parallel sparse matrix-matrix multiplication algorithms (PSpGEMM) spend most of their running time on inter-process communication. In the case of distributed matrix-matrix multiplications, much of this time is spent on interchanging the partial results that are needed to calculate the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern whose complexity is logarithmic in the number of processors (i.e., O(log p), where p is the number of processors). This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction in inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm whose accumulation step has O(p) run-time complexity.
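To make the contrast between O(p) and O(log p) accumulation concrete, the following sketch (a generic tree-reduction illustration, not the authors' exact PSpGEMM communication pattern; the function names are placeholders) counts the communication rounds each scheme needs:

```python
import math

def linear_accumulation_rounds(p):
    """One designated process receives and merges partial results from the
    other p-1 processes one at a time: O(p) communication steps."""
    return p - 1

def tree_accumulation_rounds(p):
    """Processes pair up and merge partial results in rounds; the number of
    processes still holding unmerged data roughly halves each round,
    giving ceil(log2(p)) rounds."""
    rounds = 0
    active = p
    while active > 1:
        active = math.ceil(active / 2)  # half send, half receive and merge
        rounds += 1
    return rounds

for p in (8, 64, 1024):
    print(f"p={p:5d}: linear={linear_accumulation_rounds(p):5d} rounds, "
          f"tree={tree_accumulation_rounds(p):3d} rounds")
```

MPI collective operations such as MPI_Reduce typically use tree-shaped communication patterns of this kind internally.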
3

Distributed parallel processing in networks of workstations

Wang, Yang. January 1994.
No description available.
4

Header Minimization of the IPv6 Communication Protocol Aiming at Improving Local Area Network Performance

Torrel, Robert. 12 November 2013.
One of the major advances taking place in communication technologies is the gradual replacement of IPv4 by IPv6. This change brings several enhancements, although transmission performance is relatively lower because of the new addressing standard, which grows from 32 bits in IPv4 to 128 bits in IPv6, increasing the size of the resulting header and the communication overhead. The main objective of this work is the development of a method for minimizing the IPv6 header, seeking to increase the performance of data transmission in a local area network. Practical tests have shown that the proposed solution enables an improvement in network performance, increasing data throughput, in addition to decreasing latency and bandwidth utilization in the transmission of packets between two devices.
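For a back-of-the-envelope sense of the header overhead being targeted (an illustration based on the 20-byte IPv4 and 40-byte IPv6 base headers, not the thesis's measurement method), compare the fraction of each packet consumed by the IP header for a few payload sizes:

```python
IPV4_HEADER_BYTES = 20   # minimum IPv4 header, no options
IPV6_HEADER_BYTES = 40   # fixed IPv6 base header, no extension headers

def header_overhead(payload_bytes, header_bytes):
    """Fraction of each packet consumed by the IP header."""
    return header_bytes / (header_bytes + payload_bytes)

for payload in (64, 512, 1460):
    v4 = header_overhead(payload, IPV4_HEADER_BYTES)
    v6 = header_overhead(payload, IPV6_HEADER_BYTES)
    print(f"payload={payload:4d} B: IPv4 overhead={v4:5.1%}, IPv6 overhead={v6:5.1%}")
```

The relative cost of the larger IPv6 header is most pronounced for small payloads, which is where header minimization pays off the most.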
5

IoT as Fog Nodes: An Evaluation on Performance and Scalability

Ezaz, Ishaq. January 2023.
With the exponential growth of the Internet of Things (IoT), managing the enormous amount of data generated has become a significant challenge. This study investigates the distributed paradigm of fog computing, using cost-effective IoT devices as fog nodes, as a potential solution to the challenges facing the centralized cloud. The scalability and performance of a fog computing system were evaluated under a range of workloads, using computationally intensive tasks reflective of real-world scenarios. The results indicated that increasing the number of fog nodes improved system scalability and decreased overall latency. However, at lower workloads, configurations with fewer fog nodes outperformed those with more, highlighting the importance of the balance between computation and communication overheads. Overall, this study emphasizes the viability of fog computing as an efficient and scalable solution for data processing in IoT systems. Although the study primarily focused on latency, the insights gained can guide the future design and implementation of fog computing systems and contribute to the ongoing discussions on IoT data-processing strategies.
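A toy latency model (my own illustration under assumed cost parameters, not the thesis's evaluation setup) shows why fewer nodes can win at low workloads: computation parallelises across fog nodes, but each additional node adds coordination and communication cost:

```python
def total_latency(workload_units, num_nodes, unit_compute_time=1.0,
                  per_node_comm_cost=0.5):
    """Assumed latency model: computation divides across fog nodes, while
    each extra node adds a fixed communication/coordination cost."""
    compute = workload_units * unit_compute_time / num_nodes
    communicate = per_node_comm_cost * num_nodes
    return compute + communicate

# The node count that minimises latency grows with the workload in this model.
for workload in (4, 40, 400):
    best = min(range(1, 17), key=lambda n: total_latency(workload, n))
    print(f"workload={workload:3d}: best node count (in this model) = {best}")
```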
6

Performance Optimisation of Discrete-Event Simulation Software on Multi-Core Computers

Kaeslin, Alain E. January 2016.
SIMLOX is a discrete-event simulation software developed by Systecon AB for analysing logistic support solution scenarios. To cope with ever larger problems, SIMLOX's simulation engine was recently enhanced with a parallel execution mechanism to take advantage of multi-core processors. However, this extension did not deliver the desired reduction in runtime for all simulation scenarios, even though the parallelisation strategy applied promised linear speedup. Therefore, an in-depth analysis of the limiting scalability bottlenecks became necessary and was carried out in this project. Through the use of a low-overhead profiler and microarchitecture analysis, the root causes were identified: atomic operations causing high communication overhead, poor locality leading to translation lookaside buffer (TLB) thrashing, and hot spots that consume significant amounts of CPU time. Subsequently, appropriate optimisations were implemented to overcome these limiting factors: eliminating the expensive operations, handling heap memory more efficiently through a scalable memory allocator, and using data structures that make better use of caches. Experimental evaluation on real-world test cases demonstrated a speedup of at least 6.75x on an eight-core processor; most cases even achieved a speedup of more than 7.2x. The optimisations also lowered run times for sequential execution by a factor of 1.5 or more. It can be concluded that achieving nearly linear speedup on a multi-core processor is possible in practice for discrete-event simulation.
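As a quick sanity check on what a 6.75x speedup on eight cores implies (my own arithmetic, not part of the thesis), Amdahl's law S(n) = 1 / ((1 - f) + f/n) can be inverted to estimate the parallel fraction f of the workload:

```python
def parallel_fraction(speedup, cores):
    """Invert Amdahl's law S = 1 / ((1 - f) + f / n) for the parallel fraction f."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / cores)

for s in (6.75, 7.2):
    f = parallel_fraction(s, 8)
    print(f"speedup {s}x on 8 cores -> parallel fraction of roughly {f:.1%}")
```

Both reported speedups correspond to a parallel fraction above 97%, which is consistent with the claim of nearly linear scaling.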
