1 |
Development of Apple Workgroup Cluster and Parallel Computing for Phase Field Model of Magnetic Materials. Huang, Yongxin, 16 January 2010.
Micromagnetic modeling numerically solves the magnetization evolution equation to perform magnetic domain analysis, which helps explain the macroscopic magnetic properties of ferromagnets. Applying this method to the simulation of magnetostrictive ferromagnets raises two main challenges: the complicated microelasticity arising from the magnetostrictive strain, and the very expensive computation caused mainly by the calculation of long-range magnetostatic and elastic interactions. A parallel implementation of the phase field model based on a computer cluster is therefore developed as a promising tool for domain analysis in magnetostrictive ferromagnetic materials.
We have successfully built an 8-node Apple workgroup cluster, deploying the hardware and configuring the software environment, as a platform for parallel computation of the phase field model of magnetic materials. Several test programs were implemented to evaluate the performance of the cluster system, especially for parallel computation using MPI. The results show that the cluster can simultaneously support up to 32 MPI processes with high interprocess communication performance.
Parallel computations of the phase field model of magnetic materials, implemented as an MPI program, were then performed on the cluster. The simulated results of single-domain rotation in Terfenol-D crystals agree well with the theoretical prediction. A further simulation including magnetic and elastic interactions among multiple domains shows that these interaction effects must be taken into account to accurately characterize the magnetization processes in Terfenol-D. These examples suggest that parallel computation of the phase field model of magnetic materials on a powerful cluster system is a promising technology that meets the needs of domain analysis.
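To make the parallelization pattern concrete, below is a minimal sketch (not the thesis's code) of how a magnetization field can be split across MPI ranks with ghost-cell exchange for the short-range part of such an update; the long-range magnetostatic and elastic terms, which dominate the cost discussed above, would additionally require global communication such as a parallel FFT. The grid size, update rule, and all numbers are illustrative assumptions.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_LOCAL = 64                          # grid cells owned by this rank (illustrative)
m = np.random.rand(N_LOCAL + 2, 3)    # magnetization vectors + 2 ghost rows
m /= np.linalg.norm(m, axis=1, keepdims=True)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # swap ghost rows with neighboring ranks (short-range coupling only)
    comm.Sendrecv(m[1], dest=left, recvbuf=m[-1], source=right)
    comm.Sendrecv(m[-2], dest=right, recvbuf=m[0], source=left)
    # toy relaxation step: pull each cell toward its neighbors' average,
    # then renormalize to keep |m| = 1 as micromagnetics requires
    m[1:-1] += 0.1 * (0.5 * (m[:-2] + m[2:]) - m[1:-1])
    m[1:-1] /= np.linalg.norm(m[1:-1], axis=1, keepdims=True)
```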
|
2 |
Improving the throughput of novel cluster computing systems. Wu, Jiadong, 21 September 2015.
Traditional cluster computing systems such as supercomputers are equipped with specially designed high-performance hardware, which escalates both the manufacturing cost and the energy cost of those systems. Due to these drawbacks and the diversified demand in computation, two new types of clusters have been developed: GPU clusters and Hadoop clusters.
The GPU cluster combines a traditional CPU-only computing cluster with general-purpose GPUs to accelerate applications. Thanks to the massively parallel architecture of the GPU, this type of system can deliver much higher performance-per-watt than traditional computing clusters. The Hadoop cluster is another popular type of cluster computing system. It uses inexpensive off-the-shelf components and standard Ethernet to minimize manufacturing cost, and Hadoop systems are widely used throughout the industry.
Alongside the lowered cost, these new systems bring their own challenges. According to our study, GPU clusters are prone to severe under-utilization due to the heterogeneous nature of their computation resources, and Hadoop clusters are vulnerable to network congestion due to their limited network resources. In this research, we aim to improve the throughput of these novel cluster computing systems by increasing workload parallelism and network I/O parallelism.
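As a toy illustration of what "increasing workload parallelism" can mean on such heterogeneous nodes, the following sketch drains a shared task queue with several workers standing in for CPU and GPU resources, so no device sits idle while independent work remains; the device labels and workload are invented, and the thesis's actual scheduling is more involved.

```python
import multiprocessing as mp

def worker(device, tasks, results):
    """Drain tasks until a poison pill arrives; 'device' is just a label."""
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put((device, task, task ** 2))   # stand-in for real compute

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    devices = ["gpu0", "cpu0", "cpu1"]            # invented heterogeneous pool
    procs = [mp.Process(target=worker, args=(d, tasks, results)) for d in devices]
    for p in procs:
        p.start()
    for t in range(20):                           # 20 independent small tasks
        tasks.put(t)
    for _ in devices:                             # one pill per worker
        tasks.put(None)
    for _ in range(20):
        print(results.get())
    for p in procs:
        p.join()
```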
|
3 |
Lygiagrečiųjų simbolinių skaičiavimų programinė įranga / Software for parallel symbolic computing. Užpalis, Evaldas, 15 July 2009.
There are two ways of solving mathematical problems: numerical and symbolic. Symbolic methods manipulate symbolic objects such as logical or algebraic formulas, rules, or programs. In contrast to the numerical approach, the main purpose of symbolic computation is the simplification of mathematical expressions. In most cases the final answer is a rational number or a formula, so symbolic computation can be used to find the exact solution of a mathematical problem, or to simplify a mathematical model. A single computer is enough to simplify small mathematical expressions, but there are expressions whose simplification exceeds the memory or processing power of one computer; in such cases the best solution is parallel computation on a computer cluster. The main problem in such parallel computation is the efficiency of the data distribution algorithm. This work presents experimental studies of one distribution algorithm and of several of its modifications.
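A minimal sketch of the distribution problem studied here, using a process pool as a stand-in for cluster nodes and SymPy as an assumed symbolic engine: because simplification costs vary strongly per expression, handing out work dynamically (here, chunksize=1) rather than in fixed blocks is one simple load-balancing strategy.

```python
from multiprocessing import Pool
import sympy as sp

x = sp.symbols("x")
# expressions of very uneven simplification cost
EXPRS = [sp.expand((x + 1) ** n) / (x + 1) for n in range(2, 40)]

def simplify_one(expr):
    return sp.simplify(expr)

if __name__ == "__main__":
    with Pool(4) as pool:
        # chunksize=1: a free worker immediately fetches the next task,
        # a simple dynamic alternative to static block distribution
        results = pool.map(simplify_one, EXPRS, chunksize=1)
    print(results[:3])   # (x + 1), (x + 1)**2, (x + 1)**3
```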
|
4 |
Parallel Computing of Particle Filtering Algorithms for Target Tracking Applications. Wu, Jiande, 18 December 2014.
Particle filtering has been a very popular method for solving nonlinear/non-Gaussian state estimation problems for more than twenty years. Particle filters (PFs) have found many applications in areas that involve nonlinear filtering of noisy signals and data, especially target tracking. However, real-time implementation of high-dimensional PFs for large-scale problems is a very challenging computational task.
Parallel and distributed (P&D) computing is a promising way to deal with the computational challenges of PF methods. The main goal of this dissertation is to develop, implement, and evaluate computationally efficient PF algorithms for target tracking, and thereby bring them closer to practical applications. To reach this goal, a number of parallel PF algorithms are designed and implemented on different parallel hardware architectures: computer clusters, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs). An improved PF implementation for computer clusters is proposed, the Particle Transfer Algorithm (PTA), which takes advantage of the cluster architecture and significantly outperforms existing algorithms. A novel GPU PF implementation is also designed that is highly efficient for GPU architectures. The proposed implementations on the different parallel computing environments are applied and tested on target tracking problems such as space object tracking, ground multitarget tracking using image sensors, and UAV multisensor tracking. A comprehensive performance evaluation and comparison of the algorithms, covering both tracking and computational capabilities, is performed. The obtained simulation results demonstrate that the proposed implementations greatly help overcome the computational issues of particle filtering for realistic practical problems.
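For readers unfamiliar with the method, the following toy bootstrap particle filter shows the predict-weight-resample loop whose cost grows with the number of particles; in a distributed implementation such as the PTA, particles would be partitioned across cluster nodes, with only the exchange point marked by a comment here. All model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                  # particles (split across nodes in a cluster PF)
particles = rng.normal(0.0, 1.0, N)
true_state = 0.0

for t in range(50):
    true_state += 1.0 + rng.normal(0, 0.1)            # target moves
    z = true_state + rng.normal(0, 0.5)               # noisy measurement
    particles += 1.0 + rng.normal(0, 0.1, N)          # predict
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)   # weight by likelihood
    w /= w.sum()
    # <-- a distributed PF (e.g. the PTA) would transfer particles between
    #     nodes around this point to rebalance the workload
    particles = rng.choice(particles, N, p=w)         # multinomial resampling

print(f"estimate={particles.mean():.2f}  truth={true_state:.2f}")
```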
|
5 |
Delayed Transfer Entropy applied to Big Data / Delayed Transfer Entropy aplicado a Big Data. Dourado, Jonas Rossi, 30 November 2018.
The recent popularization of technologies such as smartphones, wearables, the Internet of Things, social networks, and video streaming has increased the creation of data. Dealing with such extensive data sets led to the creation of the term Big Data, often defined as the situation in which data volume, acquisition rate, or representation demands nontraditional approaches to data analysis or requires horizontal scaling for data processing. Analysis is the most important Big Data phase, with the objective of extracting meaningful and often hidden information. One example of hidden information in Big Data is causality, which can be inferred with Delayed Transfer Entropy (DTE). Despite its wide applicability, DTE demands high processing power, which is aggravated by large datasets such as those found in Big Data. This research optimized DTE performance and modified the existing code to enable DTE execution on a computer cluster. With the Big Data trend in sight, these results may enable the analysis of bigger datasets or better statistical evidence.
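A small illustration of the quantity involved, assuming one common parametrization of delayed transfer entropy, TE(d) = Σ p(y_{t+1}, y_t, x_{t+1-d}) log2[ p(y_{t+1} | y_t, x_{t+1-d}) / p(y_{t+1} | y_t) ]: the naive histogram estimator below recovers the coupling delay of a toy pair of binary series, and sweeping d over long series is exactly the workload that motivates cluster execution.

```python
import random
from collections import Counter
from math import log2

def dte(x, y, d):
    """Plug-in estimate of TE from x to y at delay d (discrete symbols)."""
    triples = [(y[t + 1], y[t], x[t + 1 - d]) for t in range(d - 1, len(y) - 1)]
    n = len(triples)
    p3 = Counter(triples)                               # counts of (y', y, x)
    p2 = Counter((yn, yp) for yn, yp, _ in triples)     # counts of (y', y)
    pyx = Counter((yp, xd) for _, yp, xd in triples)    # counts of (y, x)
    p1 = Counter(yp for _, yp, _ in triples)            # counts of y
    te = 0.0
    for (yn, yp, xd), c in p3.items():
        num = c / pyx[(yp, xd)]          # p(y' | y, x)
        den = p2[(yn, yp)] / p1[yp]      # p(y' | y)
        te += (c / n) * log2(num / den)
    return te

random.seed(1)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0, 0, 0] + x[:-3]                   # y copies x with a delay of 3
print(max(range(1, 6), key=lambda d: dte(x, y, d)))   # prints 3
```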
|
6 |
RedBlue: cluster para pesquisa e ensino em Engenharia / RedBlue: a cluster for research and teaching in Engineering. Pedras, Marcelo Bráulio, 13 November 2017.
Computer programs are widely used to solve complex engineering problems. Today an engineer is expected to do more than just use them: the ability to develop such programs is highly valued in the job market, since it gives professionals a larger set of tools for solving problems. Computational simulations, for instance, can be used as a tool for knowledge acquisition, allowing a professional or a student to create, test, and validate hypotheses. Simulations are also used in scientific research as an alternative to experiments that are difficult to carry out, and in industry to reduce costs. However, a simulation may consume more resources than a single computer provides, making its execution time impractical. A cheap way to obtain more performance is to use a cluster of ordinary computers, which would make it possible to run such simulations on the computer labs already available. The major drawback of this approach is that it demands in-depth knowledge of parallel and/or distributed computing from users, making application development harder. To minimize the execution time of complex simulations on clusters while keeping them usable by people with little background in parallel or distributed programming, this work presents a solution named the RedBlue platform. The platform receives the user's application and runs it on the cluster nodes automatically and transparently. To test the platform, experiments were carried out with artificial neural networks and with a simple genetic algorithm, both searching for the best parameter configuration for a given problem, using 60 machines of a computer lab. The results show a reduction of up to 98% in execution time for the neural network experiment and of 99.3% for the genetic algorithm experiment, compared with sequential execution. These results indicate that the platform is viable for use in computer labs, enabling a considerable reduction in the execution time of complex simulations. The platform works with a flexible number of computers, adjusting to the capacity of each lab, and can also serve as a useful instrument for teaching and research; using computational simulations in these settings contributes not only to learning content but also to developing the skills the engineering job market demands. / Dissertation (Professional Master's), Programa de Pós-Graduação em Educação, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2017.
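The core idea can be pictured with the hypothetical sketch below, which farms a parameter sweep out to lab machines over SSH so the end user writes no parallel code; the host names, the train.py script, and the parameters are all invented, and the actual platform additionally handles staging, scheduling, and transparency for the user.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"lab-pc{i:02d}" for i in range(1, 61)]       # 60 lab machines (invented names)
PARAMS = [{"lr": lr, "hidden": h}
          for lr in (0.1, 0.01, 0.001) for h in (8, 16, 32, 64)]

def run_remote(job):
    host, p = job
    # 'train.py' stands for the user's unmodified application
    cmd = ["ssh", host, "python3", "train.py", str(p["lr"]), str(p["hidden"])]
    out = subprocess.run(cmd, capture_output=True, text=True)
    return host, p, out.stdout.strip()

if __name__ == "__main__":
    jobs = [(HOSTS[i % len(HOSTS)], p) for i, p in enumerate(PARAMS)]
    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, p, score in pool.map(run_remote, jobs):
            print(host, p, score)
```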
|
7 |
Metody extrakce informací / Methods of Information Extraction. Adamček, Adam, January 2015.
The goal of information extraction is to retrieve relational data from texts written in natural human language. The applications of such information are wide-ranging, from text summarization, through ontology creation, to question answering in QA systems. This work describes the design and implementation of a system, running on a computer cluster, that transforms a dump of Wikipedia articles into a set of extracted facts stored in a distributed RDF database, which can be queried through the user interface created for it.
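As a toy slice of such a pipeline, the sketch below pulls a naive "X is a Y" fact from each sentence and serializes it as N-Triples for an RDF store; real extraction uses proper NLP parsing and a distributed database, and the pattern and namespace here are invented.

```python
import re

SENTENCES = [
    "Brno is a city in the Czech Republic.",
    "Wikipedia is an encyclopedia.",
]
PATTERN = re.compile(r"^(\w[\w ]*?) is an? (\w[\w ]*?)[ .]")
NS = "http://example.org/resource/"   # made-up namespace

def to_ntriple(sentence):
    """Return one N-Triples line for a matching sentence, else None."""
    m = PATTERN.match(sentence)
    if not m:
        return None
    subj, obj = (s.strip().replace(" ", "_") for s in m.groups())
    return f"<{NS}{subj}> <{NS}isA> <{NS}{obj}> ."

for s in SENTENCES:
    t = to_ntriple(s)
    if t:
        print(t)
```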
|
8 |
Evoluční návrh kombinačních obvodů na počítačovém clusteru / Evolutionary Design of Combinational Circuits on Computer Cluster. Pánek, Richard, January 2015.
This master's thesis deals with evolutionary algorithms and how to use them to design combinational circuits. Genetic programming, especially Cartesian genetic programming (CGP), is the most suitable technique for this type of task. The thesis further deals with computation on computer clusters and how evolutionary algorithms can exploit them; island models combined with CGP are the best fit for such computation. A new way of performing recombination in CGP is then designed to improve the island model, and the design is implemented and tested on a computer cluster.
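A compact island-model skeleton of the kind mapped onto a cluster is sketched below, with processes standing in for nodes and a bitstring OneMax task standing in for the CGP circuit genotype; migration passes each island's best individual around a ring. Both substitutions are simplifications of the thesis's actual setup.

```python
import random
from multiprocessing import Process, Queue

L, POP, GENS, MIGRATE_EVERY = 64, 30, 200, 25

def island(idx, inbox, outbox, done):
    """One island: local (1+lambda)-style evolution plus ring migration."""
    random.seed(idx)
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for g in range(GENS):
        pop.sort(key=sum, reverse=True)        # fitness = number of ones
        if g and g % MIGRATE_EVERY == 0:
            outbox.put(pop[0])                 # emigrate a copy of the best
            while not inbox.empty():
                pop[-1] = inbox.get()          # immigrant replaces the worst
        parent = pop[0]
        pop[1:] = [[b ^ (random.random() < 1.0 / L) for b in parent]
                   for _ in range(POP - 1)]    # per-bit mutation of the elite
    pop.sort(key=sum, reverse=True)
    done.put((idx, sum(pop[0])))               # report best fitness found

if __name__ == "__main__":
    n = 4
    links = [Queue() for _ in range(n)]        # links[i] is island i's inbox
    done = Queue()
    procs = [Process(target=island, args=(i, links[i], links[(i + 1) % n], done))
             for i in range(n)]
    for p in procs:
        p.start()
    for _ in range(n):
        print(done.get())                      # (island id, best fitness)
    for p in procs:
        p.join()
```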
|
9 |
Investigation of Immersion Cooled ARM-Based Computer Clusters for Low-Cost, High-Performance Computing. Mohammed, Awaizulla Shareef, 08 1900.
This study aimed to investigate the performance of ARM-based computer clusters using a two-phase immersion cooling approach, and to demonstrate its potential benefits over air-based natural and forced convection approaches. ARM-based clusters were created using the Raspberry Pi models 2 and 3, commodity-level single-board computers. The immersion cooling mode utilized two types of dielectric liquid, HFE-7000 and HFE-7100. Experiments involved running the Sysbench and High-Performance Linpack (HPL) benchmarks, and the combination of both, in order to quantify the key parameters of device junction temperature, frequency, execution time, computing performance, and energy consumption. The results indicated that the device core temperature directly affects computing performance and energy consumption. In the reference natural convection cooling mode, as the temperature rose, the cluster started to decrease its operating frequency to protect the internal cores from damage. This resulted in a decline of computing performance and an increase of execution time, which in turn increased energy consumption. In the more extreme cases, the performance of the cluster dropped by 4X while the energy consumption increased by 220%. The study therefore demonstrated that the two-phase immersion cooling method, with its near-isothermal, high heat transfer capability, enables fast, energy-efficient, and reliable operation, particularly benefiting high-performance computing applications where conventional air-based cooling methods would fail.
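A minimal monitoring loop of the sort used to collect such measurements on a Raspberry Pi is sketched below, sampling core temperature and current CPU frequency from standard Linux sysfs paths while a benchmark runs; the sampling period and output format are arbitrary choices.

```python
import time

TEMP = "/sys/class/thermal/thermal_zone0/temp"                   # millidegrees C
FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"   # kHz

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

for _ in range(600):               # sample for ~10 minutes alongside a benchmark
    t_c = read_int(TEMP) / 1000.0
    f_mhz = read_int(FREQ) / 1000.0
    # frequency falling while temperature climbs indicates thermal throttling
    print(f"{time.strftime('%H:%M:%S')} temp={t_c:.1f}C freq={f_mhz:.0f}MHz")
    time.sleep(1)
```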
|