1

Parallel computational techniques for explicit finite element analysis

Sziveri, Janos January 1997 (has links)
No description available.
2

Parallel solution of sparse linear systems

Nader, Babak 05 1900 (has links) (PDF)
M.S. / Computer Science / This paper deals with the problem of solving sparse nonsymmetric linear systems on a distributed-memory multiprocessor, the Intel iPSC (hypercube). The processors have substantial local memory but no global shared memory; they communicate among themselves and with a host processor through message passing. The primary interest is to design an algorithm that exploits parallelism and performs the elimination and solution of large sparse matrices. Elimination is performed by LU decomposition. The storage scheme is based on a linked-list data structure defined for a given generated matrix. The matrix is distributed by columns in a "wrapped" fashion so that elimination in the natural order will be balanced, provided the sparsity structure is distributed evenly across the columns. Numerical results from experiments running on the hypercube are included, along with a performance analysis.
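As a rough illustration of the "wrapped" column distribution described in this abstract, columns can be assigned cyclically so that column j is owned by processor j mod p. This is a sketch with hypothetical helper names, not code from the thesis:

```python
# Hypothetical sketch: "wrapped" (cyclic) column distribution of a matrix
# over p processors. Column j is owned by processor j mod p, so elimination
# in the natural order stays roughly balanced when sparsity is even.
from collections import defaultdict

def wrapped_column_distribution(n_cols: int, n_procs: int) -> dict[int, list[int]]:
    """Return {processor_id: [column indices]} under the wrapped assignment."""
    owner = defaultdict(list)
    for j in range(n_cols):
        owner[j % n_procs].append(j)
    return dict(owner)

if __name__ == "__main__":
    # 10 columns over 4 processors: proc 0 gets 0, 4, 8; proc 1 gets 1, 5, 9; ...
    print(wrapped_column_distribution(10, 4))
```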
3

Spark on Kubernetes using HopsFS as a backing store : Measuring performance of Spark with HopsFS for storing and retrieving shuffle files while running on Kubernetes

Saini, Shivam January 2020 (has links)
Data is a raw collection of facts and details, such as numbers, words, measurements or observations, that is not useful by itself. Data processing is the technique of processing data in order to extract useful information from it. Today, the world produces huge amounts of data that cannot be processed using traditional methods. Apache Spark (Spark) is an open-source, distributed, general-purpose cluster computing framework for large-scale data processing. To fulfil its task, Spark uses a cluster of machines to process the data in parallel. The external shuffle service is a distributed component of an Apache Spark cluster that provides resilience in case of a machine failure. A cluster manager helps Spark manage the cluster of machines and provides Spark with the resources required to run the application. Kubernetes is a newer cluster manager that enables Spark to run in a containerized environment. However, the external shuffle service cannot be run when Spark uses Kubernetes as the resource manager, which significantly degrades the performance of Spark applications because tasks lost to machine failures must be recomputed. As a solution to this problem, the open-source Spark community has developed a plugin that provides resiliency similar to that of the external shuffle service. When used with Spark applications, the plugin asynchronously backs up shuffle data onto external storage. To avoid compromising application performance, it is important that the external storage provides Spark with minimal latency. HopsFS is a next-generation distribution of the Hadoop Distributed File System (HDFS) that provides special support for small files (<64 KB) by storing them in a NewSQL database, enabling lower client latencies. The thesis shows that HopsFS delivers 16% higher performance to Spark applications for small files compared to larger ones. It also shows that using the plugin to back up Spark data on HopsFS can reduce the total execution time of Spark applications by 20-30% compared to recalculating tasks after a node failure.
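The asynchronous back-up behaviour described above can be sketched generically as follows. This is a hypothetical illustration (invented class and method names), not the Spark plugin's actual API or the thesis implementation:

```python
# Illustrative sketch only: shuffle files are copied to an external store
# asynchronously so the application's critical path is not blocked.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

class AsyncShuffleBackup:
    """Back up local shuffle files to an external path (e.g. a HopsFS/HDFS
    location reachable from the executor) without blocking the caller."""

    def __init__(self, backup_root: str, max_workers: int = 4):
        self.backup_root = Path(backup_root)
        self.pool = ThreadPoolExecutor(max_workers=max_workers)

    def backup(self, local_file: str):
        # Submit the copy and return immediately; the caller keeps computing.
        return self.pool.submit(self._copy, Path(local_file))

    def _copy(self, local_file: Path) -> Path:
        dest = self.backup_root / local_file.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(local_file, dest)
        return dest

    def close(self):
        # Wait for outstanding copies before shutting down.
        self.pool.shutdown(wait=True)
```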
4

Smart distributed processing technologies for hedge fund management

Thayalakumar, Sinnathurai January 2017 (has links)
Distributed processing cluster design using commodity hardware and software has proven to be a technological breakthrough in the field of parallel and distributed computing. The research presented herein is an original investigation into distributed processing using hybrid processing clusters to improve the calculation efficiency of compute-intensive applications. This has opened a new frontier in affordable supercomputing that can be utilised by businesses and industries at various levels. Distributed processing using commodity computer clusters has become extremely popular in recent years, particularly among university research groups and research organisations. The research work discussed herein addresses the bespoke design and implementation of highly specific and different types of distributed processing clusters, with applied load balancing techniques, that are well suited to particular business requirements. The research was performed in four cohesively interconnected phases to find a suitable solution using new types of distributed processing approaches. The first phase is the implementation of a bespoke distributed processing cluster that uses an existing network of workstations as a calculation cluster, based on a loosely coupled distributed process system design, and that improved the calculation efficiency of certain legacy applications. This approach demonstrates an innovative, cost-effective, and efficient way to utilise a workstation cluster for distributed processing. The second phase improves the calculation efficiency of the distributed processing system: a new type of load balancing system is designed to incorporate multiple processing devices. The load balancing system uses hardware, software and application related parameters to assign calculation tasks to each processing device accordingly. Three load balancing methods are tested: static, dynamic and hybrid. Each has its own advantages, and all three further improved the calculation efficiency of the distributed processing system. The third phase helps the company improve batch processing application calculation times: two separate dedicated calculation clusters are built using small form factor (SFF) computers and PCs as separate peer-to-peer (P2P) network-based calculation clusters. Multiple batch processing applications were tested on these clusters, and the results show consistent calculation time improvements across all the applications tested. In addition, the dedicated clusters built with SFF computers offer reduced power consumption, a small cluster footprint, and comparatively low cost to suit particular business needs. The fourth phase incorporates all the processing devices available in the company into a hybrid calculation cluster that utilises various types of servers, workstations, and SFF computers to form a high-throughput distributed processing system consolidating multiple calculation clusters. These clusters can be used as multiple mutually exclusive clusters or combined into a single cluster depending on the applications used. The test results show considerable calculation time improvements from using the consolidated calculation cluster in conjunction with rule-based load balancing techniques.
The main design concept of the system is based on the original design, which uses first-principles methods and utilises existing LAN and separate P2P network infrastructures, hardware, and software. Tests and investigations show promising results: the company's legacy applications can be modified and deployed on different types of distributed processing clusters to achieve calculation and processing efficiency for various applications within the company. The test results confirm the expected calculation time improvements in controlled environments and show that it is feasible to design and develop a bespoke, dedicated distributed processing cluster using existing hardware, software, and low-cost SFF computers. Furthermore, the combination of a bespoke distributed processing system with appropriate load balancing algorithms has shown considerable calculation time improvements for various legacy and bespoke applications. Hence, the bespoke design is well suited to providing the calculation time improvements needed for critical problems currently faced by the sponsoring company.
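A minimal sketch of the static, rule-based load balancing idea described in the second phase follows. The class, the capacity score, and all parameter names are hypothetical simplifications for illustration, not the thesis implementation:

```python
# Hypothetical illustration: a static, rule-based load balancer that splits a
# batch of calculation tasks across heterogeneous devices in proportion to a
# per-device capacity score derived from hardware/software parameters.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cores: int
    clock_ghz: float
    busy_fraction: float = 0.0  # current load; a dynamic policy would update this

    def capacity(self) -> float:
        # Simple capacity score; a real system would weigh more parameters.
        return self.cores * self.clock_ghz * (1.0 - self.busy_fraction)

def static_assign(tasks: list[str], devices: list[Device]) -> dict[str, list[str]]:
    """Distribute tasks proportionally to each device's capacity score."""
    total = sum(d.capacity() for d in devices)
    plan, start = {}, 0
    for i, d in enumerate(devices):
        if i < len(devices) - 1:
            share = round(len(tasks) * d.capacity() / total)
        else:
            share = len(tasks) - start  # last device takes the remainder
        plan[d.name] = tasks[start:start + share]
        start += share
    return plan

if __name__ == "__main__":
    devices = [Device("server", 16, 2.4), Device("workstation", 8, 3.2), Device("sff", 4, 2.0)]
    tasks = [f"task-{i}" for i in range(20)]
    print(static_assign(tasks, devices))
```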
