  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

An Application-Attuned Framework for Optimizing HPC Storage Systems

Paul, Arnab Kumar 19 August 2020 (has links)
High performance computing (HPC) is routinely employed in diverse domains such as life sciences and geology to simulate and understand the behavior of complex phenomena. Big-data-driven scientific simulations are resource intensive and require both computing and I/O capabilities at scale. There is a crucial need to revisit the HPC I/O subsystem to better optimize for, and manage, the increased pressure on the underlying storage systems from big data processing. Extant HPC storage systems are designed and tuned for a specific set of applications targeting a range of workload characteristics, but they lack the flexibility to adapt to ever-changing application behaviors. The complex nature of modern HPC storage systems, along with these ever-changing application behaviors, presents unique opportunities and engineering challenges. In this dissertation, we design and develop a framework for optimizing HPC storage systems by making them application-attuned. We select three different kinds of HPC storage systems: in-memory data analytics frameworks, parallel file systems, and object storage. We first analyze HPC application I/O behavior by studying real-world I/O traces. Next, we optimize parallelism for applications running in memory, then design data management techniques for HPC storage systems, and finally focus on low-level I/O load balance to improve the efficiency of modern HPC storage systems. / Doctor of Philosophy / Clusters of multiple computers connected through a network are often deployed in industry and laboratories for large-scale data processing or computation that cannot be handled by standalone computers. In such a cluster, resources such as CPUs, memory, and disks are integrated to work together. With the increasing popularity of applications that read and write tremendous amounts of data, we need a large number of disks that can interact effectively in such clusters. This forms part of a high performance computing (HPC) storage system.
Such HPC storage systems are used by a diverse set of applications coming from organizations in a vast range of domains, from earth sciences, financial services, and telecommunications to life sciences. Therefore, an HPC storage system should perform well for the different read and write (I/O) requirements of all these applications. But current HPC storage systems do not cater to such varied I/O requirements. To this end, this dissertation designs and develops a framework for HPC storage systems that is application-attuned and thus provides much better performance than state-of-the-art HPC storage systems without such optimizations.
202

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems : Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009 (has links)
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already at design time. Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers – e.g. integration of new software during runtime, software upgrade, or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness. This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware, and relevant design approaches (component-based, model-based, and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, from the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms. Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS. Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one that also covers real-time aspects. Using formal modeling would extend the possibilities for verification not only of system functionality, but also of resource usage, timing, and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes. Several challenges in providing methodology and tools that are usable in production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis. / DySCAS
203

Conception et validation d'algorithmes de remaillage parallèles à mémoire distribuée basés sur un remailleur séquentiel / Design and validation of distributed-memory, parallel remeshing algorithms based on a sequential remesher

Lachat, Cédric 13 December 2013 (has links)
The purpose of this thesis was to propose, and to validate experimentally, a set of algorithmic methods for the parallel remeshing of distributed meshes, based on a preexisting sequential remeshing method. This goal has been achieved through several steps: definition of data structures and communication schemes suitable for distributed meshes, allowing for cheap migration of subdomain interfaces across the processors of a distributed-memory architecture; use of dynamic load balancing algorithms suitable for parallel remeshing techniques; and design of parallel algorithms for splitting the global remeshing problem into several independent sequential tasks that can be executed concurrently across the processors of the parallel machine. These contributions have been implemented in the PaMPA parallel library, taking advantage of the MMG3D (sequential anisotropic tetrahedral remesher) and PT-Scotch (parallel graph repartitioning) software. The PaMPA library consequently provides the following features: transparent communication across neighboring processors of data borne by nodes, elements, etc.; remeshing, according to user-defined criteria, of portions of the distributed mesh, yielding constant quality irrespective of whether the elements to be remeshed are located on a single processor or distributed across several of them; and balancing and redistribution of the workload of the mesh, to preserve the efficiency of simulations after the remeshing phase.
204

"Índices de carga e desempenho em ambientes paralelos/distribuídos - modelagem e métricas" / Load and Performance Index for Parallel/Distributed System - Modelling and Metrics

Branco, Kalinka Regina Lucas Jaquie Castelo 15 December 2004 (has links)
This thesis approaches the problem of evaluating an adequate load index, or performance index, for use in process scheduling in heterogeneous parallel/distributed computing systems. A wide literature review with the corresponding critical analysis is presented. This review is the basis for a comparison of the existing metrics for evaluating the heterogeneity/homogeneity degree of computing systems. A new metric is proposed in this work, removing the restrictions identified during the comparative study. Results from the application of the new metric are presented and discussed. This thesis also proposes the concept of temporal heterogeneity/homogeneity, which can be used for future improvements in scheduling policies for parallel/distributed heterogeneous computing platforms. A new performance index (Vector for Index of Performance - VIP), generalizing the concept of a load index, is proposed based on a Euclidean metric. This new index is applied in the implementation of a scheduling policy and widely tested through modeling and simulation. The results obtained are presented and statistically analyzed. It is shown that the new index achieves good results in general, and a mapping is presented showing the advantages and disadvantages of its adoption when compared with traditional metrics.
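The core idea of a Euclidean-metric performance index can be sketched roughly as follows (a hypothetical illustration, not the thesis's actual VIP formulation; host names and the choice of four resource dimensions are invented for the example): combine several per-resource utilizations into one scalar via the Euclidean norm, then schedule onto the host with the smallest index.

```python
from math import sqrt

def vip_index(utilizations):
    """Composite load index: Euclidean norm of per-resource
    utilizations, each normalized to [0, 1]."""
    return sqrt(sum(u * u for u in utilizations))

def pick_least_loaded(hosts):
    """Choose the host whose composite index is smallest."""
    return min(hosts, key=lambda h: vip_index(hosts[h]))

hosts = {
    "node-a": [0.80, 0.40, 0.10, 0.20],  # cpu, mem, disk, net
    "node-b": [0.30, 0.35, 0.25, 0.10],
    "node-c": [0.90, 0.85, 0.60, 0.70],
}
print(pick_least_loaded(hosts))  # node-b has the smallest norm
```

A single norm makes otherwise incomparable hosts directly comparable, which is the advantage a vector-based index generalizes over a single load figure.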
205

A self-optimised cloud radio access network for emerging 5G architectures

Khan, Muhammad January 2018 (has links)
Network densification has become a dominant theme for capacity enhancement in cellular networks. However, it increases the operational complexity and expenditure for mobile network operators. Consequently, the essential features of Self-Organising Networks (SON) are considered to ensure the economic viability of emerging cellular networks. This thesis focuses on quantifying the benefits of self-organisation in the Cloud Radio Access Network (C-RAN) by proposing a flexible, energy-efficient, and capacity-optimised system. The Base Band Unit (BBU) to Remote Radio Head (RRH) mapping is formulated as an optimisation problem. A self-optimised C-RAN (SOCRAN) is proposed, which hosts a Genetic Algorithm (GA) and a Discrete Particle Swarm Optimisation (DPSO) algorithm developed for the optimisation. Computational results based on different network scenarios demonstrate that DPSO delivers excellent performance on the key performance indicators compared to GA: the percentage of blocked users is reduced from 10.523% to 0.409% in a medium-sized network scenario and from 5.394% to 0.56% in a large network scenario. Furthermore, an efficient resource utilisation scheme is proposed based on the concept of Cell Differentiation and Integration (CDI). The two-stage CDI scheme semi-statically scales the number of BBUs and RRHs to serve an offered load and dynamically defines the optimum BBU-RRH mapping to avoid unbalanced network scenarios. Computational results demonstrate significant throughput improvement in a CDI-enabled C-RAN compared to a fixed C-RAN: an average throughput increase of 45.53% and an average decrease in blocked users of 23.149% are observed. A power model is proposed to estimate the overall power consumption of the C-RAN; approximately 16% power reduction is calculated for a CDI-enabled C-RAN when compared to a fixed C-RAN, both serving the same geographical area. Moreover, a Divide-and-Sort load balancing scheme is proposed and compared to the SOCRAN scheme. Results show excellent performance by the Divide-and-Sort algorithm in small networks when compared to the SOCRAN and K-means clustering algorithms.
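The flavor of the BBU-RRH mapping problem can be sketched with a toy greedy baseline (entirely hypothetical; the thesis uses GA and DPSO metaheuristics, and the loads, capacities, and RRH names below are invented): each RRH carries an offered load, each BBU has a capacity, and load that fits on no BBU counts as blocked.

```python
def greedy_map(rrh_loads, bbu_capacity, n_bbus):
    """Greedily assign RRHs (heaviest first) to the BBU with the
    most remaining capacity; load that fits nowhere is blocked."""
    remaining = [bbu_capacity] * n_bbus
    mapping, blocked = {}, 0.0
    for rrh, load in sorted(rrh_loads.items(), key=lambda kv: -kv[1]):
        best = max(range(n_bbus), key=lambda b: remaining[b])
        if remaining[best] >= load:
            remaining[best] -= load
            mapping[rrh] = best
        else:
            blocked += load  # no BBU can host this RRH
    return mapping, blocked

mapping, blocked = greedy_map({"r1": 4, "r2": 3, "r3": 3, "r4": 2}, 6, 2)
print(blocked)  # 0.0: loads 4+2 and 3+3 fit into two BBUs of capacity 6
```

Metaheuristics such as DPSO search over many candidate mappings like this one, scoring each by blocked users and throughput rather than committing to a single greedy pass.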
206

Redes de sensores com nodos móveis: investigando efeitos da mobilidade na cobertura de sensoriamento e no balanceamento de carga / Sensor Networks with Mobile Nodes: investigating the Effects of Mobility on Sensing Coverage and Load Balancing

González, Andrea Veronica 23 November 2016 (has links)
Node mobility in wireless sensor networks has been employed to solve communication problems through mobile data-collecting nodes or mobile base stations, or to improve coverage using mobile sensor nodes, which move to sense uncovered areas. However, one of the main challenges in wireless sensor networks is energy consumption, since the network lifetime depends on the battery charge of its nodes. In order to increase the lifetime of event-oriented networks, dynamic load balancing strategies exploit redundancy in the nodes' sensing areas and avoid having more than one node process the same event. Mobility, as well as load balancing, is an important dynamic adaptation that can be employed to improve the efficiency of sensor networks, but the integrated use of these two adaptations needs to be investigated. This work evaluates the effects of sensor node mobility both on network coverage and on the efficiency of load balancing techniques used in event-oriented sensor networks. In the context of this work, a strategy was implemented which moves nodes based on the action of repulsion forces, aiming to spread nodes over the area of interest and improve network coverage. Its impact on coverage was assessed in different deployment scenarios and in networks with different densities. First, when nodes are deployed at random, mobility allows them to be redistributed and the network coverage to be maximized. Second, the strategy is applied when nodes begin to be deactivated by the discharge of their batteries, where mobility can minimize the effect of the deactivation of a network node. In addition, experiments were carried out to observe the impact of this mobility strategy on the performance of two load balancing techniques considered state-of-the-art in event-oriented wireless sensor networks. This work considers the energy that a node spends on sensing; the energy spent on movement is out of scope.
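A repulsion-force spreading strategy of the kind described can be sketched in a few lines (a hypothetical toy model, not the dissertation's implementation; the radius and step constants are invented): each node is pushed away from neighbours closer than a repulsion radius, one small step per iteration, so an initially clustered deployment spreads out.

```python
def repel_step(positions, radius=2.0, step=0.1):
    """One iteration: push each node away from close neighbours,
    with force growing as nodes get closer than `radius`."""
    moved = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = (dx * dx + dy * dy) ** 0.5
            if 0 < d < radius:              # only nearby nodes repel
                fx += dx / d * (radius - d)
                fy += dy / d * (radius - d)
        moved.append((xi + step * fx, yi + step * fy))
    return moved

pts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]  # a tight cluster
for _ in range(50):
    pts = repel_step(pts)
# after iterating, pairwise distances have grown, improving coverage
```

Iterating until forces vanish approximates an even spread over the area of interest, which is exactly the coverage effect the work evaluates.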
208

Exploring coordinated software and hardware support for hardware resource allocation

Figueiredo Boneti, Carlos Santieri de 04 September 2009 (has links)
Multithreaded processors are now common in industry, as they offer high performance at a low cost. Traditionally, in such processors, the allocation of hardware resources among the multiple threads is done implicitly, by hardware policies. However, a new class of multithreaded hardware allows the explicit allocation of resources to be controlled or biased by the software. Currently, there is little or no coordination between the allocation of resources done by the hardware and the prioritization of tasks done by the software. This thesis aims to narrow the gap between the software and the hardware with respect to hardware resource allocation, by proposing a new explicit resource allocation hardware mechanism and novel schedulers that use the currently available hardware resource allocation mechanisms. It approaches the problem in two different types of computing systems. In the high performance computing domain, we characterize the first processor to present a mechanism that allows the software to bias the allocation of hardware resources, the IBM POWER5. In addition, we propose the use of hardware resource allocation as a way to balance high performance computing applications. Finally, we propose two new scheduling mechanisms that are able to transparently and successfully balance applications in real systems using hardware resource allocation. In the soft real-time domain, we propose a hardware extension to the existing explicit resource allocation hardware and, in addition, two software schedulers that use the explicit allocation hardware to improve the schedulability of tasks in a soft real-time system. In this thesis, we demonstrate that system performance improves by making the software aware of the mechanisms that control the amount of resources given to each running thread. In particular, for the high performance computing domain, we show that it is possible to decrease the execution time of MPI applications by biasing the hardware resource assignment between threads. In addition, we show that it is possible to decrease the number of missed deadlines when scheduling tasks in a soft real-time SMT system.
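The idea of software-directed resource biasing can be illustrated with a toy sketch (entirely hypothetical; the thesis works with real hardware mechanisms on the POWER5, not this Python model, and the thread names and budget are invented): give the thread with the most remaining work a proportionally larger share of a fixed per-core resource budget, so lagging threads catch up.

```python
def bias_shares(remaining_work, budget=8):
    """Split a fixed resource budget (e.g. a per-core slot count)
    across threads in proportion to their remaining work,
    guaranteeing each thread at least one slot."""
    total = sum(remaining_work.values())
    return {t: max(1, round(budget * w / total))
            for t, w in remaining_work.items()}

print(bias_shares({"t0": 10, "t1": 30}))  # {'t0': 2, 't1': 6}
```

In an MPI application, equalizing per-rank progress this way shortens the wait at the next synchronization point, which is where the execution-time reduction comes from.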
209

Lastbalanseringskluster : En studie om operativsystemets påverkan på lastbalanseraren / Load Balancing Clusters: A Study of the Operating System's Impact on the Load Balancer

Liv, Jakob, Nygren, Fredrik January 2014 (has links)
This report contains a study of an operating system's impact on the load balancer HAProxy. The study was performed in an experimental environment with four virtual test clients, one load balancer, and three web server nodes connected to the load balancer. The operating system was the main point of the study: the load on the load balancer's hardware, the response time, the number of connections, and the maximum number of connections per second were examined. The operating systems tested were Ubuntu 10.04, CentOS 6.5, FreeBSD 9.1, and OpenBSD 5.5. The results show that the load on the hardware and the response time are almost identical on all operating systems, with the exception of OpenBSD, where the conditions needed to run the hardware tests could not be achieved. FreeBSD was the operating system that was able to handle the highest number of connections, along with CentOS. Ubuntu turned out to be more limited, and OpenBSD was very limited. FreeBSD also managed the highest number of connections per second, followed by Ubuntu, CentOS, and finally OpenBSD, which turned out to be the worst performer.
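Two balancing policies a proxy like HAProxy can apply to a three-node backend pool such as the one in this experiment can be sketched as follows (a simplified model with invented server names; in real HAProxy these correspond to the `balance roundrobin` and `balance leastconn` configuration options):

```python
from itertools import cycle

servers = ["web1", "web2", "web3"]

rr = cycle(servers)                  # round-robin: rotate strictly
active = {s: 0 for s in servers}     # least-conn: track open connections

def round_robin():
    """Next server in fixed rotation, regardless of load."""
    return next(rr)

def least_conn():
    """Server with the fewest currently open connections."""
    target = min(servers, key=lambda s: active[s])
    active[target] += 1              # a connection opens
    return target

print([round_robin() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
```

Round-robin is cheapest per request, while least-connections adapts when backends differ in speed, which is one reason the choice of OS-level connection handling measured in this study matters for throughput.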
210

Smart distributed processing technologies for hedge fund management

Thayalakumar, Sinnathurai January 2017 (has links)
Distributed processing cluster design using commodity hardware and software has proven to be a technological breakthrough in the field of parallel and distributed computing. The research presented herein is an original investigation of distributed processing using hybrid processing clusters to improve the calculation efficiency of compute-intensive applications. This has opened a new frontier in affordable supercomputing that can be utilised by businesses and industries at various levels. Distributed processing using commodity computer clusters has become extremely popular over recent years, particularly among university research groups and research organisations. The research work discussed herein addresses a bespoke design and implementation of highly specific and different types of distributed processing clusters, with applied load balancing techniques, well suited to particular business requirements. The research was performed in four cohesively interconnected phases to find a suitable solution using new types of distributed processing approaches. The first phase is the implementation of a bespoke distributed processing cluster that uses an existing network of workstations as a calculation cluster, based on a loosely coupled distributed-process design, which improved the calculation efficiency of certain legacy applications. This approach demonstrated an innovative, cost-effective, and efficient way to utilise a workstation cluster for distributed processing. The second phase improves the calculation efficiency of the distributed processing system: a new type of load balancing system is designed to incorporate multiple processing devices, using hardware-, software-, and application-related parameters to assign calculation tasks to each processing device accordingly. Three types of load balancing methods are tested: static, dynamic, and hybrid. Each has its own advantages, and all three further improved the calculation efficiency of the distributed processing system. The third phase helps the sponsoring company improve batch processing application calculation times: two separate dedicated calculation clusters are built using small form factor (SFF) computers and PCs as separate peer-to-peer (P2P) network-based calculation clusters. Multiple batch processing applications were tested on these clusters, and the results show consistent calculation time improvement across all the applications tested. In addition, the dedicated clusters built from SFF computers offer reduced power consumption, small cluster size, and comparatively low cost to suit particular business needs. The fourth phase incorporates all the processing devices available in the company into a hybrid calculation cluster that utilises various types of servers, workstations, and SFF computers to form a high-throughput distributed processing system consolidating multiple calculation clusters. These clusters can be used as multiple mutually exclusive clusters or combined into a single cluster, depending on the applications used. The test results show considerable calculation time improvements from using the consolidated calculation cluster in conjunction with rule-based load balancing techniques. The main design concept of the system is based on first-principles methods and utilises existing LAN and separate P2P network infrastructures, hardware, and software. Tests and investigations conducted show promising results: the company's legacy applications can be modified and implemented on different types of distributed processing clusters to achieve calculation and processing efficiency for various applications within the company. The test results confirmed the expected calculation time improvements in controlled environments and show that it is feasible to design and develop a bespoke dedicated distributed processing cluster using existing hardware, software, and low-cost SFF computers. Furthermore, a combination of a bespoke distributed processing system with appropriate load balancing algorithms has shown considerable calculation time improvements for various legacy and bespoke applications. Hence, the bespoke design is better suited to providing a solution for the calculation time improvements needed for critical problems currently faced by the sponsoring company.
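The dynamic policy among the three load balancing styles tested can be sketched minimally as follows (a hypothetical model with invented worker names and task costs, not the thesis's actual rule-based system): each incoming task goes to whichever processing device has the least accumulated work, which keeps heterogeneous devices roughly equally busy.

```python
def dynamic_assign(task_costs, workers):
    """Dynamic balancing: give each task to the currently
    least-busy worker (lowest accumulated cost so far)."""
    busy = {w: 0.0 for w in workers}
    for cost in task_costs:
        w = min(busy, key=busy.get)
        busy[w] += cost
    return busy

busy = dynamic_assign([5, 3, 3, 2, 1], ["fast-pc", "sff-1", "sff-2"])
print(max(busy.values()))  # 5.0, the makespan of the schedule
```

A static policy would instead fix the split up-front from hardware parameters alone; the hybrid approach tested in the thesis combines an up-front split with this kind of runtime correction.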
