101

Augmenting MPI Programming Process with Cognitive Computing

Kazilas, Panagiotis January 2019 (has links)
Cognitive Computing is a new and quickly advancing technology. In the last decade it has been used to assist researchers in many different scientific fields, such as health and medicine, education, marketing, psychology and financial services. Parallel programming, on the other hand, is a more complex concept than sequential programming. The additional complexity is inherent to its nature: it requires more complex algorithms, and it introduces additional concepts to developers, namely communication between the processes that execute the parallel program (distributed-memory systems) and their synchronization (shared-memory systems). As a result of this additional complexity, many novice developers are reserved in their attempts to implement parallel programs. The objective of this research project was to investigate whether the parallel programming process can be assisted through cognitive computing solutions. To achieve this objective, the MPI Assistant, a Q&A system, was developed, and a case study was carried out to determine the application's efficiency in assisting parallel programming developers. The case study showed that the MPI Assistant indeed helped developers reduce the time they spend developing their solutions, but it did not improve the quality or efficiency of their programs, as such improvements require features that are out of this research project's scope. However, the case study had a limited number of participants, which may affect the reliability of the results. As a next step in determining whether cognitive computing technologies can assist developers in parallel programming, we investigated whether cognitive solutions can extract better and more complete responses than manually created ones. We experimented with two approaches: one in which we manually created responses for the MPI Assistant, and one in which cognitive solutions extracted responses automatically. We then compared the quality of the automatic responses with that of the manually created ones.
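
The communication and synchronization concepts the abstract identifies as the source of MPI's extra complexity are visible even in the smallest program. Below is a minimal sketch of a blocking point-to-point exchange; mpi4py is an assumption on our part (the thesis does not prescribe a language binding), and any MPI binding expresses the same pattern.

```python
# Minimal blocking point-to-point exchange -- the extra concepts
# (inter-process communication, implicit synchronization on recv)
# that the abstract identifies. mpi4py is an assumption; run with:
# mpiexec -n 2 python exchange.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send([1, 2, 3], dest=1, tag=11)   # blocking send to rank 1
elif rank == 1:
    data = comm.recv(source=0, tag=11)     # blocks until the message arrives
    print("rank 1 received", data)
```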
102

EVOLUTION OF THE COST EFFECTIVE, HIGH PERFORMANCE GROUND SYSTEMS: A QUANTITATIVE APPROACH

Hazra, Tushar K., Stephenson, Richard A., Troendly, Gregory M. 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / During recent years of small-satellite space access missions, the trend has been towards designing low-cost ground control centers to maintain the space/ground cost ratio. The use of personal computers (PCs) in combination with high-speed transputer modules as embedded parallel processors provides a relatively affordable, highly versatile and reliable desktop workstation upon which satellite telemetry systems can be built to meet the ever-growing challenge of today's and tomorrow's space missions. This paper presents the feasibility of cost-effective, high-performance ground systems; a quantitative analysis in terms of performance, speedup, efficiency, and the architecture's compatibility with commercial off-the-shelf (COTS) tools; and, finally, an operational high-performance, low-cost ground system that strengthens insight into the concept.
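
For reference, the speedup and efficiency figures used in such a quantitative analysis have the standard definitions below (a well-known formulation, not taken from the paper itself):

```latex
% T_1: runtime on one processor, T_p: runtime on p processors
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p} = \frac{T_1}{p \, T_p}
```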
103

Load balancing of irregular parallel applications on heterogeneous computing environments

Janjic, Vladimir January 2012 (has links)
Large-scale heterogeneous distributed computing environments (such as computational Grids and Clouds) offer the promise of access to a vast amount of computing resources at relatively low cost. To ease application development and deployment on such complex environments, high-level parallel programming languages exist that need to be supported by sophisticated runtime systems. One of the main problems these runtime systems must address is dynamic load balancing, ensuring that no resources in the environment are underutilised or overloaded with work. This thesis deals with the problem of obtaining good speedups for irregular applications on heterogeneous distributed computing environments. It focuses on work-stealing techniques that can be used for load balancing during the execution of irregular applications, and it specifically addresses two problems that arise during work-stealing: where thieves should look for work during the application execution, and how victims should respond to steal attempts. In particular, we describe and implement a new Feudal Stealing algorithm, as well as new granularity-driven task selection policies, in SCALES, a work-stealing simulator developed for this thesis. In addition, we present a comprehensive evaluation of the Feudal Stealing algorithm and the granularity-driven task selection policies, using simulations of a large class of regular and irregular parallel applications on a wide range of computing environments. We show how the Feudal Stealing algorithm and the granularity-driven task selection policies bring significant improvements in speedups of irregular applications compared to state-of-the-art work-stealing algorithms. Furthermore, we present an implementation of the task selection policies in the Grid-GUM runtime system [AZ06] for Glasgow Parallel Haskell (GpH) [THLPJ98], in addition to the SCALES implementation, and evaluate it on a large set of synthetic applications.
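
For readers unfamiliar with the mechanism, the sketch below shows only the generic work-stealing pattern the thesis builds on (owner pops from its own deque's tail, a thief steals from a victim's head); it is not the Feudal Stealing algorithm, whose victim-selection policy is specific to the thesis.

```python
# Generic work-stealing skeleton: each worker pops from the tail of its
# own deque; a thief steals from the head of a victim's deque, which
# limits contention with the owner.
import random
from collections import deque

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.tasks = deque()

    def pop_local(self):
        # Owner takes its most recently pushed task (LIFO tail).
        return self.tasks.pop() if self.tasks else None

    def steal_from(self, victim):
        # Thief takes the victim's oldest task (FIFO head) -- often the
        # coarsest-grained work, which matters for irregular applications.
        return victim.tasks.popleft() if victim.tasks else None

def get_task(worker, workers):
    """Return a task for `worker`, stealing from a random victim if idle."""
    task = worker.pop_local()
    if task is None:
        victim = random.choice([w for w in workers if w is not worker])
        task = worker.steal_from(victim)  # may still be None (failed steal)
    return task

workers = [Worker(i) for i in range(4)]
workers[0].tasks.extend(range(10))    # all work initially on worker 0
print(get_task(workers[2], workers))  # worker 2 must attempt a steal
```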
104

An adaptive software transactional memory support for multi-core programming

Chan, Kinson, 陳傑信. January 2009 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
105

Erstellung einer einheitlichen Taxonomie für die Programmiermodelle der parallelen Programmierung / Creating a unified taxonomy for the programming models of parallel programming

Nestmann, Markus 02 May 2017 (has links) (PDF)
Parallel programming makes it possible for programs to run concurrently on multiple CPU cores or CPUs. To ease parallel programming, various languages (e.g. Erlang) and libraries (e.g. OpenMP) have been developed on top of parallel programming models (e.g. the Parallel Random Access Machine). If, for example, a software architect has to choose a programming model for a project, several important criteria (e.g. dependencies on the hardware) must be considered. Overviews that distinguish and order the programming models by these criteria make this search easier. However, existing overviews differ in their classification, the terms they use and the programming models they list. This work addresses this deficit: first, the existing taxonomies are collected and analysed through a systematic literature review; building on this, a unified taxonomy is created. With this taxonomy, an overview of the parallel programming models can be produced, further extended with information on each programming model's dependencies on the hardware architecture. The software architect (or project leader, software developer, ...) can then make an informed decision and is not forced to analyse every programming model individually.
106

Study of Parallel Algorithms Related to Subsequence Problems on the Sequent Multiprocessor System

Pothuru, Surendra 08 1900 (has links)
The primary purpose of this work is to study, implement and analyze the performance of parallel algorithms related to subsequence problems. The problems include the string-to-string correction problem, the longest common subsequence problem, and the sum-range-product, 1-D pattern matching, longest non-decreasing (non-increasing) subsequence (LNS) and maximum positive subsequence (MPS) problems. The work also includes studying the techniques and issues involved in developing parallel applications. These algorithms are implemented on the Sequent multiprocessor system. The subsequence problems are defined, along with the performance metrics that are utilized, and the sequential and parallel algorithms are summarized. The implementation issues that arise in the process of developing parallel applications are identified and studied.
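
Of the listed problems, the longest common subsequence has the best-known dynamic-programming formulation. The sketch below is the textbook sequential O(mn) version only, not the thesis's parallel Sequent implementation.

```python
# Textbook O(m*n) dynamic program for the longest common subsequence --
# the sequential baseline that parallel versions are measured against.
def lcs_length(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"
```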
107

Object-oriented parallel paradigms

17 March 2015 (has links)
M.Sc. (Computer Science) / This report is primarily concerned with highlighting findings of research recently undertaken towards completing the requirements for the M.Sc. degree of 1994 at the Rand Afrikaans University (RAU). The research investigates what benefits (if any) exist in Object-Oriented Parallel Systems. The area of research revolves around the Object-Oriented Parallel Paradigm (OOPP), which is currently under development by the author. One primary aim of this research is to investigate numerous current trends in Object-Oriented Parallel Systems and language developments, with the objective of providing an indication as to whether the Object-Oriented methodology can be (or has been) successfully married with existing Parallel Processing mechanisms. New benefits may come about while attempting to combine these methodologies, and this expectation will also be reflected upon. The Object-Oriented methodology gives a system designer the ability to approach a problem with a good degree of problem-space understanding, while Parallel Processing gives the system designer the ability to create extremely fast algorithms for solving problems amenable to Parallel Processing techniques. The question we attempt to answer is whether the Object-Oriented methodology can be successfully married to the Parallel Processing field (whilst maintaining a high degree of the benefits encountered in both methodologies) so as to gain the best of both worlds. Certain papers have laid claim to their proposed systems encompassing both the Object-Oriented methodology and the Parallel Processing methodology. In view of this fact, we shall furthermore examine papers to see if any of these systems are candidates for successfully marrying Object-Oriented and Parallel Processing into one homogeneous body. Criticism will be given of the shortcomings of unsuccessful candidates. Based on the findings of the research, the report culminates in the proposal of the Object-Oriented Parallel Paradigm (OOPP). OOPP speculates on the most probable features that system designers can expect to see in an almost ideal Object-Oriented Parallel System. It is very important at this stage to mention that, at its current state of development, OOPP is only a paradigm; thus OOPP should be viewed merely as an abstract model intended to establish a solid foundation for building more formal Object-Oriented Parallel Methodologies. Furthermore, OOPP is intended to be suitable for present-day systems and amenable (possibly with a few minor adjustments) to future systems. The author trusts that OOPP will generate sufficient interest to warrant further research being commissioned, in which event OOPP should be expected to undergo modifications and enhancements...
108

Desenvolvimento de modelos para predição de desempenho de programas paralelos MPI. / Development of Performance Prediction Models for MPI Parallel Programs

Laine, Jean Marcos 27 January 2003 (has links)
Many factors can influence the performance of an MPI (Message Passing Interface) parallel program, among them the amount of data processed, the number of nodes involved in solving the problem, the characteristics of the interconnection network, and the type of switch used. Performance prediction for message-passing parallel programs is therefore not a trivial task. To model and predict the behavior of such programs, this work was developed following a methodology for the analysis and performance prediction of MPI parallel programs. First, we propose a graphical model, named DP*Graph+, to represent the code of applications. Next, we develop analytical models, applying curve-fitting techniques, to represent the behavior of repetition structures composed of communication primitives and/or local computation. In addition, we elaborate models to predict the behavior of master/slave applications. During the analysis and prediction work, some functions were implemented to automate tasks and facilitate the work. Finally, we model and predict the performance of two different versions of a matrix multiplication program in order to validate the proposed models. The prediction results for the matrix multiplication programs were satisfactory: in most of the predicted cases, the errors were below 6%, confirming the validity and accuracy of the elaborated models.
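
As a rough illustration of the curve-fitting step such a methodology relies on (with made-up timings, not the thesis's data or its DP*Graph+ models), a linear timing model can be fitted and used for prediction like this:

```python
# Illustrative curve-fitting step: fit T(n) = a + b*n to measured
# runtimes of a computation/communication loop, then predict an
# unmeasured problem size. The timings below are hypothetical.
import numpy as np

sizes = np.array([100, 200, 400, 800])          # problem sizes measured
times = np.array([0.021, 0.039, 0.078, 0.153])  # hypothetical timings (s)

b, a = np.polyfit(sizes, times, deg=1)  # least-squares linear fit
predict = lambda n: a + b * n

print(f"T(1600) ~ {predict(1600):.3f} s")        # extrapolated prediction
rel_err = abs(predict(400) - 0.078) / 0.078      # check against a sample
print(f"relative error at n=400: {rel_err:.1%}")
```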
109

Modelagem e predição de desempenho de primitivas de comunicação MPI. / Performance modeling and prediction of MPI communication primitives.

Oliveira, Hélio Marci de 28 January 2003 (has links)
The development of parallel and distributed programs finds in message-passing programming an effective approach to properly exploit the characteristics of distributed-memory machines. Using clusters and message-passing libraries such as the MPI (Message Passing Interface) standard, efficient and economically viable applications can be built. In such systems, the time spent on communication is an important performance factor that must be considered, and its correct characterization requires careful procedures. In this work, analytical models of MPI blocking communication primitives are developed according to an appropriate analysis and prediction methodology. Some of the main point-to-point and collective operations are treated and, using curve-fitting techniques and experimental timings, the behavior of the communication primitives is represented in equations, also enabling performance analyses and predictions as a function of the message size and the number of processes involved. Tests on a cluster of workstations confirm the accuracy of the elaborated models: with most percentage errors below 8%, the results confirm the validity of the modeling process. In addition, the work presents a set of functions built to support analysis and prediction activities, seeking to facilitate and automate their execution.
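
A common way to obtain the experimental timings that such models are fitted against is a ping-pong benchmark. The sketch below assumes the mpi4py binding and is not code from the thesis:

```python
# Ping-pong measurement of a blocking point-to-point primitive -- the
# kind of experimental timing analytical models are fitted against.
# Run with: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 100

for size in (1 << k for k in range(0, 21, 4)):  # 1 B up to 1 MiB
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()                  # synchronize before timing
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    if rank == 0:
        # Half the average round trip approximates the one-way time.
        print(size, (MPI.Wtime() - t0) / (2 * REPS))
```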
110

Ferramenta de programação e processamento para execução de aplicações com grandes quantidades de dados em ambientes distribuídos. / Programming and processing tool for execution of applications with large amounts of data in distributed environments.

Vasata, Darlon 03 September 2018 (has links)
The processing of large amounts of data is a widely discussed topic nowadays, in terms of both its challenges and its applicability. This work proposes a programming tool for development and an execution environment for applications with large amounts of data. The tool aims to achieve better application performance in this scenario by exploring the use of physical resources such as multiple threads on multi-core processors and distributed programming, in which multiple computers interconnected by a communication network operate jointly on the same application, dividing the processing load among the machines. The proposed tool is based on programming blocks, where each block is composed of tasks and the blocks are executed using the producer-consumer model, following a defined execution flow. The tool makes the division of tasks between the machines transparent to the user. Several functionalities are available, such as cycles in the execution flow and the advancing of tasks using a speculative processing strategy. The results were compared with two other frameworks for processing large amounts of data, Hadoop and Spark, and indicate that the tool increases application performance, especially when executed on homogeneous clusters.
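
A minimal sketch of the producer-consumer execution model the abstract describes is given below; the block names and payloads are hypothetical, and this is not the tool's actual API.

```python
# Producer-consumer skeleton of the block/task execution model: one
# block produces tasks into a shared queue, worker threads consume them.
import queue
import threading

tasks = queue.Queue()
SENTINEL = None

def producer(n_tasks):
    for i in range(n_tasks):
        tasks.put(("block-A", i))  # tasks emitted by one programming block
    tasks.put(SENTINEL)            # signal the end of the execution flow

def consumer():
    while True:
        item = tasks.get()
        if item is SENTINEL:
            tasks.put(SENTINEL)    # propagate so sibling consumers stop too
            break
        block, payload = item
        # ... process `payload` and hand results to the next block ...

threads = [threading.Thread(target=producer, args=(8,))]
threads += [threading.Thread(target=consumer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```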
