231

Performance Analysis of kNN Query Processing on large datasets using CUDA & Pthreads : comparing between CPU & GPU

Kalakuntla, Preetham January 2017
Telecom companies rely heavily on analytics to provide consumers with better service and to stay competitive. These companies accumulate large volumes of data that have the potential to provide valuable inputs for the business. Query processing is one of the major tools for running analytics on this data. Traditional query processing techniques that follow in-memory algorithms cannot cope with the large amounts of data held by telecom operators. The k-nearest-neighbour (kNN) technique is well suited to classification and regression on large datasets. Our research focuses on implementing kNN as a query processing algorithm and evaluating its performance on large datasets on a single core, on multiple cores, and on a GPU. This thesis presents an experimental implementation of kNN query processing on a single-core CPU, a multi-core CPU, and a GPU using Python, Pthreads, and CUDA, respectively. We considered different dataset sizes, dimensionalities, and values of k as inputs to evaluate the performance. The experiments show that the GPU outperforms the single-core CPU by a factor of about 1.4 to 3 and the multi-core CPU by a factor of about 5.8 to 16 across the different input levels.
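To make the kind of query being benchmarked concrete, the sketch below shows a minimal brute-force kNN query in Python/NumPy; it is the per-query kernel that a Pthreads or CUDA version would parallelise (for example, one thread or thread block per query point). The function name and the NumPy formulation are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def knn_query(data, query, k):
    """Brute-force kNN: indices of the k points in `data` closest
    (in Euclidean distance) to `query`."""
    diff = data - query                          # (n, d) differences
    dist2 = np.einsum('ij,ij->i', diff, diff)    # squared distances, shape (n,)
    idx = np.argpartition(dist2, k)[:k]          # k smallest, unordered
    return idx[np.argsort(dist2[idx])]           # order them by distance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random((100_000, 8))              # large dataset, 8 dimensions
    query = rng.random(8)
    print(knn_query(data, query, k=5))
```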
232

On the Effect of Replication of Input Files on the Efficiency and the Robustness of a Set of Computations / Étude de l’effet de la réplication de fichiers d’entrée sur l’efficacité et la robustesse d’un ensemble de calculs

Lambert, Thomas 08 September 2017
Avec l’émergence du calcul haute-performance (HPC) et des applications Big Data, de nouvelles problématiques cruciales sont apparues. Parmi elles on trouve le problème du transfert de données, c’est-à-dire des communications entre machines, qui peut génerer des délais lors de gros calculs en plus d’avoir un impact sur la consommation énergétique. La réplication, que ce soit de tâches ou de fichiers, est un facteur qui accroît ces communications, tout en étant un outil quasi-indispensable pour améliorer le parallélisme du calcul et la résistance aux pannes. Dans cette thèse nous nous intéressons à la réplication de fichiers et à son impact sur les communications au travers de deux problèmes. Dans le premier, la multiplication de matrices en parallèle, le but est de limiter autant que possible ces réplications pour diminuer la quantité de données déplacées. Dans le second, l’ordonnancement de la phase « Map » de MapReduce, il existe une réplication initiale qu’il faut utiliser au mieux afin d’obtenir l’ordonnancement le plus rapide ou entraînant le moins de création de nouvelles copies. En plus de la réplication, nous nous intéressons aussi à la comparaison entre stratégies d’ordonnancement statiques (allocations faites en amont du calcul) et dynamiques (allocations faites pendant le calcul) sur ces deux problèmes avec pour objectif de créer des stratégies hybrides mélangeant les deux aspects. Pour le premier problème, le produit de matrices en parallèle, nous nous ramenons à un problème de partition de carré où l’équilibrage de charge est donné en entrée. Cet équilibrage donné, le but est de minimiser la somme des semi-paramètres des rectangles couvrant des zones ainsi créés. Ce problème a déjà été étudié par le passé et nous démontrons de nouveaux résultats. Nous proposons ainsi deux nouveaux algorithmes d’approximation, l’un fondé sur une stratégie récursive et l’autre sur l’usage d’une courbe fractale. Nous présentons également une modélisation alternative, fondée sur un problème similaire de partition de cube, dont nous prouvons la NP-complétude tout en fournissant deux algorithmes d’approximation. Pour finir, nous réalisons également une implémentation pratique du produit de matrices en utilisant nos stratégies d’allocation grâce à la librairie StarPU. Les résultats expérimentaux montrent une amélioration du temps de calcul ainsi qu’une diminution significative des transferts de données lorsqu’on utilise une stratégie statique d’allocation couplée à une technique de vol de tâches. Pour le second problème, l’ordonnancement de la phase « Map » de MapReduce, plusieurs copies des fichiers d’entrée sont distribuées parmi les processeurs disponibles. Le but ici est de faire en sorte que chaque tâche soit attribuée à un processeur possédant son fichier d’entrée tout en ayant le meilleur temps de calcul total. Une autre option étudiée est d’autoriser les tâches nonlocales (attribués à des processeurs ne possédant pas leurs fichiers d’entrée) mais d’en limiter le nombre. Dans cette thèse nous montrons premièrement qu’un algorithme glouton pour ce problème peut être modélisé par un processus de « balls-in-bins » avec choix, impliquant une surcharge (nombre de tâches supplémentaires par rapport à la moyenne) en O(mlogm) où m est le nombre de processeurs. 
Secondement, dans le cas où les tâches non-locales sont interdites, nous relions le problème à celui de l’orientation de graphes, ce qui permet d’obtenir des algorithmes optimaux et polynomiaux et l’existence d’une assignation presque parfaite avec forte probabilité. Dans le cas où les tâches non locales sont autorisées, nous proposons également des algorithmes polynomiaux et optimaux. Finalement, nous proposons un ensemble de simulations pour montrer l’efficacité de nos méthodes dans le cas de tâches faiblement hétérogènes. / The increasing importance of High Performance Computing (HPC) and Big Data applications creates new issues in parallel computing. One of them is communication, i.e. the data transferred from one processor to another. Such data movements have an impact on computational time, inducing delays and increasing energy consumption. While replication, of either tasks or files, generates communication, it is also an important tool for improving resiliency and parallelism. In this thesis, we focus on the impact of the replication of input files on the overall amount of communication. For this purpose, we concentrate on two practical problems. The first one is parallel matrix multiplication. In this problem, the goal is to induce as few replications as possible in order to decrease the amount of communication. The second problem is the scheduling of the “Map” phase in the MapReduce framework. In this case, replication is an input of the problem and the goal is to use it in the best possible way. In addition to the replication issue, this thesis also considers the comparison between static and dynamic approaches to scheduling: static approaches compute schedules before the computation starts, while dynamic approaches compute them during the computation itself. In this thesis we design hybrid strategies that take advantage of the strengths of both. First, we relate communication-avoiding matrix multiplication to a square partitioning problem in which the load balancing is given as an input. In this problem, the goal is to split a square into zones (whose areas depend on the relative speed of the resources) while minimizing the sum of their half-perimeters. We improve the existing results in the literature for this problem with two additional approximation algorithms. In addition, we propose an alternative model based on a cube partitioning problem; we prove the NP-completeness of the associated decision problem and design two approximation algorithms. Finally, we implement the algorithms for both problems in order to compare the resulting schedules for matrix multiplication, relying on the StarPU library. Second, in the case of Map-phase scheduling in MapReduce, the input files are replicated and distributed among the processors. For this problem we propose two metrics. In the first one, we forbid non-local tasks (tasks processed on a processor that does not own their input files) and, under this constraint, aim at minimizing the makespan. In the second one, we allow non-local tasks and aim at minimizing their number while also minimizing the makespan. For the theoretical study, we focus on tasks with homogeneous computation times.
First, we relate a greedy algorithm for the makespan metric to a “balls-into-bins” process, proving that this algorithm produces solutions with an expected overhead (the difference between the number of tasks on the most loaded processor and the number of tasks in a perfect distribution) of O(m log m), where m denotes the number of processors. Second, we relate this scheduling problem (with non-local tasks forbidden) to a graph orientation problem and thereby prove, using results from the literature, that with high probability there exists a near-perfect assignment (whose overhead is at most 1), and that polynomial-time optimal algorithms exist. For the communication metric, we provide new algorithms based on a graph model close to matching problems in bipartite graphs, and we prove that these algorithms are optimal for both the communication and the makespan metrics. Finally, we provide simulations based on traces from a MapReduce cluster to test our strategies in realistic settings, and show that the algorithms we propose perform very well when the variance of the computation times of the different tasks of a job is low or moderate.
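As a rough illustration of the balls-into-bins view of the greedy Map-phase scheduler, the sketch below assigns each task to the least-loaded processor among those holding a replica of its input chunk and reports the resulting overhead. The replication factor and the uniform random replica placement are assumptions made for the illustration, not the exact model analysed in the thesis.

```python
import random

def greedy_map_schedule(n_tasks, n_procs, replication=3, seed=0):
    """Greedy locality-aware Map scheduling: each task goes to the least
    loaded of the `replication` processors that own a copy of its input.
    Returns the overhead, i.e. max load minus the ideal n_tasks / n_procs."""
    rng = random.Random(seed)
    load = [0] * n_procs
    for _ in range(n_tasks):
        owners = rng.sample(range(n_procs), replication)  # replica placement
        target = min(owners, key=lambda p: load[p])       # the "choice" step
        load[target] += 1
    return max(load) - n_tasks / n_procs

if __name__ == "__main__":
    for m in (64, 256, 1024):
        print(m, round(greedy_map_schedule(n_tasks=10 * m, n_procs=m), 2))
```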
233

A Novel 3-D Segmentation Algorithm for Anatomic Liver and Tumor Volume Calculations for Liver Cancer Treatment Planning

Goryawala, Mohammed 23 March 2012
Three-Dimensional (3-D) imaging is vital in computer-assisted surgical planning, including minimally invasive surgery, targeted drug delivery, and tumor resection. Selective Internal Radiation Therapy (SIRT) is a liver-directed radiation therapy for the treatment of liver cancer. Accurate calculation of the anatomical liver and tumor volumes is essential for determining the tumor-to-normal-liver ratio and for calculating the dose of Y-90 microspheres that will result in a high concentration of radiation in the tumor region compared with nearby healthy tissue. Present manual techniques for segmentation of the liver from Computed Tomography (CT) tend to be tedious and greatly dependent on the skill of the technician or doctor performing the task. This dissertation presents the development and implementation of a fully integrated algorithm for 3-D liver and tumor segmentation from tri-phase CT that yields highly accurate estimates of the respective volumes of the liver and tumor(s). The algorithm as designed requires minimal human intervention without compromising the accuracy of the segmentation results. Embedded within this algorithm is an effective method for extracting the blood vessels that feed the tumor(s), in order to plan the appropriate treatment effectively. Liver segmentation achieved an accuracy in excess of 95% in estimating liver volumes in 20 datasets in comparison with the manual gold-standard volumes. In a similar comparison, tumor segmentation exhibited an accuracy of 86% in estimating tumor volume(s). Qualitative results of the blood vessel segmentation algorithm demonstrated its effectiveness in extracting and rendering the vascular structure of the liver. The parallel computing process, using a single workstation, showed a 78% gain. In addition, a statistical analysis carried out to determine whether the manual initialization has any impact on accuracy showed that the results are independent of the user's initialization. The dissertation thus provides a complete 3-D solution for liver cancer treatment planning, with the opportunity to extract, visualize, and quantify the statistics needed for liver cancer treatment. Since SIRT requires highly accurate calculation of the liver and tumor volumes, this new method provides an effective and computationally efficient process that meets these challenging clinical requirements.
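Once the liver and tumour masks are available, the anatomical volumes follow directly from counting the segmented voxels and multiplying by the voxel volume taken from the CT header; the helper below is a generic sketch of that final step (the function name and default spacing are assumptions), not part of the dissertation's segmentation algorithm.

```python
import numpy as np

def mask_volume_ml(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary 3-D segmentation mask in millilitres.

    `mask` is a boolean/0-1 array; `voxel_spacing_mm` is the CT voxel
    spacing (slice thickness and in-plane resolution). 1 mL = 1000 mm^3.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return int(np.count_nonzero(mask)) * voxel_volume_mm3 / 1000.0

# Example usage (hypothetical variable names):
#   liver_ml = mask_volume_ml(liver_mask, spacing)
#   tumor_ml = mask_volume_ml(tumor_mask, spacing)
# These volumes then feed the tumor-to-normal-liver ratio and the Y-90
# microsphere dose calculation discussed in the abstract.
```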
234

Modélisation des problèmes de grandes déformations multi-domaines par une approche Eulérienne monolithique massivement parallèle / Modelling multi-domain large deformation problems using an Eulerian monolithic approach in a massively parallel environment

El Haddad, Fadi 29 May 2015
La modélisation des problèmes multi-domaines est abordée dans un cadre purement Eulérien. Un maillage unique, ne représentant plus la matière, est utilisé. Les différentes frontières et leur évolution sont décrites via des outils numériques tels que la méthode Level Set. Les caractéristiques locales de chaque sous-domaine sont déterminées par des lois de mélange. Ce travail est une des premières tentatives appliquant une approche Eulérienne pour modéliser des problèmes de grandes déformations. Dans un premier temps, la capacité de l'approche est testée afin de déterminer les développements nécessaires. Le frottement entre les différents objets est géré par un lubrifiant ajouté dans une couche limite. Combinée avec une technique d'identification, une nouvelle loi de mélange quadratique est introduite pour décrire la viscosité du lubrifiant. Des comparaisons ont été effectuées avec Forge® et les résultats sont trouvés satisfaisants. Pour traiter le contact entre les différents objets, un solveur directionnel a été développé. Malgré que les résultats soient intéressants, il reste le sujet de nouvelles améliorations. La scalabilité de l'approche dans un environnement massivement parallèle est testée aussi. Plusieurs recommandations ont été proposées pour s'assurer d'une performance optimale. La technique du maillage unique permet d'obtenir une très bonne scalabilité. L'efficacité du parallélisme ne dépend que de la partition d'un seul maillage (contrairement aux méthodes Lagrangiennes). La méthode proposée présente des capacités indéniables mais reste loin d'être complète. Des pistes d'amélioration sont proposées en conséquence. / Modeling of multi-domain problems is addressed in a purely Eulerian framework. A single mesh is used over the whole domain. The evolution of the different interacting bodies is described using numerical tools such as the Level Set method. The characteristics of the subdomains, considered as heterogeneities in the mesh, are determined using mixture laws. This work is one of the first attempts to apply a fully Eulerian approach to model large deformation problems. As a first step, the capacity of this approach is tested in order to determine the necessary developments. Friction between the different objects is handled by adding a boundary layer containing a lubricant. Combined with an identification technique, a new quadratic mixture law is introduced to determine the lubricant viscosity. Comparisons have been performed with Forge® and the results were found satisfactory. To treat the contact problem between the different objects, a directional solver was developed. Despite the interesting results, it remains the subject of further improvement. The scalability of the approach in a massively parallel environment is tested as well. Several recommendations are proposed to ensure optimal performance. The single-mesh technique guarantees very good scalability, since the efficiency of the parallelism depends only on the partitioning of a single mesh (unlike in Lagrangian methods). The proposed method shows undeniable capabilities but remains far from complete. Ideas for future improvements are proposed accordingly.
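To illustrate how a mixture law assigns local material properties on a single Eulerian mesh, the sketch below blends two viscosities through a smoothed Heaviside of the level-set field. The optional quadratic term is a generic illustration of a non-linear blend; it is an assumption made for this sketch and not the specific quadratic law identified in the thesis.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside of the signed distance phi over a band of half-width eps."""
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

def mixture_viscosity(phi, eta_body, eta_lub, eps, alpha=0.0):
    """Local viscosity on a single mesh covering two sub-domains.

    phi   : level-set field (positive inside the solid body)
    alpha : strength of a purely illustrative quadratic coupling term
            (NOT the quadratic law identified in the thesis).
    """
    h = smoothed_heaviside(phi, eps)
    linear = h * eta_body + (1.0 - h) * eta_lub
    quadratic = alpha * h * (1.0 - h) * (eta_body - eta_lub)
    return linear + quadratic
```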
235

Aurora : seamless optimization of openMP applications / Aurora: Otimização Transparente de Aplicações OpenMP

Lorenzon, Arthur Francisco January 2018
A exploração eficiente do paralelismo no nível de threads tem sido um desafio para os desenvolvedores de softwares. Como muitas aplicações não escalam com o número de núcleos, aumentar cegamente o número de threads pode não produzir os melhores resultados em desempenho ou energia. No entanto, a tarefa de escolher corretamente o número ideal de threads não é simples: muitas variáveis estão envolvidas (por exemplo, saturação do barramento off-chip e sobrecarga de sincronização de dados), que mudam de acordo com diferentes aspectos do sistema (por exemplo, conjunto de entrada, micro-arquitetura) e mesmo durante a execução da aplicação. Para abordar esse complexo cenário, esta tese apresenta Aurora. Ela é capaz de encontrar automaticamente, em tempo de execução e com o mínimo de sobrecarga, o número ideal de threads para cada região paralela da aplicação e se readaptar nos casos em que o comportamento de uma região muda durante a execução. Aurora trabalha com o OpenMP e é completamente transparente tanto para o programador quanto para o usuário final: dado um binário de uma aplicação OpenMP, Aurora o otimiza sem nenhuma transformação ou recompilação de código. Através da execução de quinze benchmarks conhecidos em quatro processadores multi-core, mostramos que Aurora melhora o trade-off entre desempenho e energia em até: 98% sobre a execução padrão do OpenMP; 86% sobre o recurso interno do OpenMP que ajusta dinamicamente o número de threads; e 91% quando comparado a uma emulação do feedback-driven threading. / Efficiently exploiting thread-level parallelism has been challenging for software developers. As many parallel applications do not scale with the number of cores, blindly increasing the number of threads may not produce the best results in performance or energy. However, the task of correctly choosing the ideal number of threads is not straightforward: many variables are involved (e.g., off-chip bus saturation and the overhead of data synchronization), and they change according to different aspects of the system at hand (e.g., input set, micro-architecture) and even during the application's execution. To address this complex scenario, this thesis presents Aurora. It is capable of automatically finding, at run-time and with minimum overhead, the optimal number of threads for each parallel region of the application, and of re-adapting when the behavior of a region changes during execution. Aurora works with OpenMP and is completely transparent to both the programmer and the end user: given an OpenMP application binary, Aurora optimizes it without any code transformation or recompilation. By executing fifteen well-known benchmarks on four multi-core processors, we show that Aurora improves the trade-off between performance and energy by up to 98% over the standard OpenMP execution, 86% over the built-in feature of OpenMP that dynamically adjusts the number of threads, and 91% over an emulation of feedback-driven threading.
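A crude way to picture what such a runtime does is a measured search over the degree of parallelism of each region, keeping the configuration that performs best. The sketch below hill-climbs over worker counts using a Python process pool as a stand-in for OpenMP threads; the doubling search, the timing-only cost function, and all function names are assumptions made for illustration and do not reproduce Aurora's actual algorithm (which also accounts for energy).

```python
import time
from multiprocessing import Pool

def cpu_kernel(n):
    """Dummy CPU-bound work item, standing in for one chunk of a parallel region."""
    return sum(i * i for i in range(n))

def run_region(n_workers, work_items):
    """Run one 'parallel region' with a given worker count and time it."""
    start = time.perf_counter()
    with Pool(n_workers) as pool:
        pool.map(cpu_kernel, work_items)
    return time.perf_counter() - start

def search_worker_count(max_workers, work_items):
    """Double the worker count until the measured time stops improving."""
    best_n, best_t = 1, run_region(1, work_items)
    n = 2
    while n <= max_workers:
        t = run_region(n, work_items)
        if t >= best_t:          # adding workers stopped paying off
            break
        best_n, best_t = n, t
        n *= 2
    return best_n, best_t

if __name__ == "__main__":
    items = [200_000] * 64
    print(search_worker_count(max_workers=16, work_items=items))
```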
236

Paralelização de um modelo global de previsão do tempo em malhas localmente refinadas / Parallelization of a numerical weather prediction global model with local refinement grids

Vidaurre Navarrete, Nelson Leonardo 31 October 2014
O objetivo principal deste trabalho é a paralelização de um modelo global de previsão do tempo em diferenças finitas com refinamento local. Este é baseado nas equações primitivas, e faz uso de uma discretização semi-Lagrangiana e semi-implícita em três níveis no tempo em uma malha de Lorenz na vertical e uma malha do tipo C de Arakawa na horizontal. A discretização horizontal é feita através de diferenças finitas de segunda ordem. A equação escalar elíptica tridimensional resultante é desacoplada em um sistema de equações bidimensionais do tipo Helmholtz, o qual é resolvido por meio de um método multigrid. O modelo de paralelização foi desenvolvido para máquinas com memória distribuída, fazendo uso de MPI para passagens de mensagens e baseado em técnicas de decomposição de domínio. O acoplamento apenas local dos operadores de diferenças finitas viabiliza a decomposição em duas direções horizontais. Evitamos a decomposição vertical, tendo em vista o forte acoplamento nesta direção das parametrizações de fenômenos físicos. A estratégia de paralelização foi elaborada visando o uso eficiente de centenas ou alguns milhares de processadores, dependendo da resolução do modelo. Para tal, a malha localmente refinada é separada em três regiões: uma grossa, uma de transição e uma fina, onde cada uma delas é dividida de forma independente entre um número de processadores proporcional ao número de pontos que cada uma armazena, garantindo assim um balanceamento de carga adequado. Não obstante, para resolver o sistema de equações bidimensionais do tipo Helmholtz foi necessário mudar a estratégia de paralelização, dividindo o domínio unicamente nas direções vertical e latitudinal. Ambas partes do modelo com paralelizações diferentes estão conectadas por meio da estratégia de transposição de dados. Testamos nosso modelo utilizando até 1024 processadores e os resultados ainda mostraram uma boa escalabilidade. / The main goal of this work is the parallelization of a weather prediction model employing finite differences on locally refined meshes. The model is based on the primitive equations and uses a three-time-level semi-implicit semi-Lagrangian temporal discretization on a Lorenz-type vertical grid combined with a horizontal Arakawa C-grid. The horizontal discretization is performed by means of second order finite differences. The resulting three-dimensional scalar elliptic equation is decoupled into a set of Helmholtz-type two-dimensional equations, solved by a multigrid method. The parallelization has been written for distributed-memory machines, employing the MPI message passing standard and was based on domain decomposition techniques. The local coupling of the finite difference operators was exploited in a two-dimensional horizontal decomposition. We avoid a vertical decomposition due to the strong coupling of physical parameterization routines. The parallelization strategy has been designed in order to allow the efficient use of hundreds to a few thousand processors, depending on the model resolution. In order to achieve this, the locally refined mesh is split into three regions: a coarse, a transition and a fine one, each decomposed independently. The number of allocated processors for each region is proportional to the number of the grid-points it contains, in order to guarantee a good load-balancing distribution. 
However, to solve the set of Helmholtz-type two-dimensional equations it was necessary to change the parallelization strategy, splitting the domain only in the vertical and latitudinal directions. The two parts of the model, with their different parallelizations, are connected by means of a data transposition strategy. We tested our model using up to 1024 processors and the results still showed good scalability.
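The load-balancing rule described above, allocating processors to the coarse, transition, and fine regions in proportion to their grid-point counts, can be sketched as a largest-remainder split; the helper below is an illustrative reconstruction with assumed names and numbers, not the author's code.

```python
def allocate_processors(points_per_region, total_procs):
    """Split total_procs among regions proportionally to their grid-point
    counts (largest-remainder rounding), giving each region at least one."""
    total_points = sum(points_per_region.values())
    shares = {r: total_procs * n / total_points for r, n in points_per_region.items()}
    alloc = {r: max(1, int(s)) for r, s in shares.items()}
    leftover = total_procs - sum(alloc.values())
    # Hand processors lost to rounding to the regions with the largest remainders.
    for r in sorted(shares, key=lambda r: shares[r] - int(shares[r]), reverse=True):
        if leftover <= 0:
            break
        alloc[r] += 1
        leftover -= 1
    return alloc

if __name__ == "__main__":
    grid = {"coarse": 1_200_000, "transition": 400_000, "fine": 2_400_000}
    print(allocate_processors(grid, total_procs=1024))
```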
237

Fast Optimization Methods for Model Predictive Control via Parallelization and Sparsity Exploitation / 並列化とスパース性の活用によるモデル予測制御の高速最適化手法

DENG, HAOYANG 23 September 2020
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Degree No. Kō 22808 / Informatics Doctorate No. 738 / Call number: 新制||情||126 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / Examining committee: Prof. Toshiyuki Ohtsuka (chair), Prof. Manabu Kano, Prof. Yoshito Ohta / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
238

Extended Hydrodynamics Using the Discontinuous-Galerkin Hancock Method

Kaufmann, Willem 15 September 2021
Moment methods derived from the kinetic theory of gases can be used for the prediction of continuum and non-equilibrium flows and offer numerical advantages over other methods, such as the Navier-Stokes model. Models developed in this fashion are described by first-order hyperbolic partial differential equations (PDEs) with stiff local relaxation source terms. The application of discontinuous-Galerkin (DG) methods for the solution of such models has many benefits. Of particular interest is the third-order accurate, coupled space-time discontinuous-Galerkin Hancock (DGH) method. This scheme is accurate, as well as highly efficient on large-scale distributed-memory computers. The current study outlines a general implementation of the DGH method used for the parallel solution of moment methods in one, two, and three dimensions on modern distributed clusters. An algorithm for adaptive mesh refinement (AMR) was developed alongside the implementation of the scheme, and is used to achieve even higher accuracy and efficiency. Many different first-order hyperbolic and hyperbolic-relaxation PDEs are solved to demonstrate the robustness of the scheme. First, a linear convection-relaxation equation is solved to verify the order of accuracy of the scheme in three dimensions. Next, some classical compressible Euler problems are solved in one, two, and three dimensions to demonstrate the scheme's ability to capture discontinuities and strong shocks, as well as the efficacy of the implemented AMR. A special case, Ringleb's flow, is also solved in two dimensions to verify the order of accuracy of the scheme for non-linear PDEs on curved meshes. Following this, the shallow water equations are solved in two dimensions. Afterwards, the ten-moment (Gaussian) closure is applied to two-dimensional Stokes flow past a cylinder, showing the ability of both the closure and the scheme to accurately compute classical viscous solutions. Finally, the one-dimensional fourteen-moment closure is solved.
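For readers unfamiliar with this class of models, a hyperbolic-relaxation equation in its simplest linear form is u_t + a u_x = -(u - u_eq)/tau, and the stiff source term is what makes naive explicit integration difficult. The sketch below solves this model problem with a first-order upwind transport step and an implicit relaxation step; it is a deliberately simple finite-volume illustration of the equation class named in the abstract, not the third-order DGH scheme developed in the thesis.

```python
import numpy as np

def advect_relax(u, u_eq, a, tau, dx, dt, steps):
    """Operator-split solver for u_t + a*u_x = -(u - u_eq)/tau.

    Explicit upwind transport (assumes a > 0, periodic boundaries, CFL < 1)
    followed by an implicit Euler relaxation step, which stays stable even
    when the relaxation time tau is much smaller than dt (stiff regime).
    """
    for _ in range(steps):
        u = u - a * dt / dx * (u - np.roll(u, 1))      # upwind transport
        u = (u + dt / tau * u_eq) / (1.0 + dt / tau)   # implicit relaxation
    return u

if __name__ == "__main__":
    n = 200
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u0 = np.exp(-200.0 * (x - 0.3) ** 2)               # initial pulse
    u = advect_relax(u0, np.zeros(n), a=1.0, tau=1e-3,
                     dx=1.0 / n, dt=0.4 / n, steps=500)
    print(float(u.max()))
```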
239

Simulace fyzikálních jevů s využitím celulárních automatů / Simulation of Physical Phenomena Using Cellular Automata

Martinek, Dominik January 2010
This master's thesis deals with the modelling and simulation of physical phenomena by cellular automata. The basic methods for modelling physical phenomena are enumerated and described. An important part of the thesis is a set of demonstration models, each focused on one selected area of physical phenomena. All models are described by transition rules, and the procedure for deriving these rules is also presented; these rules were then used in the implemented models. Another part of the thesis covers the design of a simulation application for these models. The actual application was implemented according to this design and was used to perform simulation experiments with the exemplary models; the results of these experiments are discussed in the conclusion of the thesis. One exemplary model was also adapted for parallel processing, and its performance on a computer with different numbers of working processors was measured and is likewise discussed in the conclusion.
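As a concrete example of the kind of transition rule such models use, the sketch below implements a simple cellular-automaton approximation of heat diffusion in which each cell's next state is a weighted average over its von Neumann neighbourhood; it is a generic illustration, not one of the thesis's demonstration models, and the bound on alpha is the usual stability condition for this explicit stencil.

```python
import numpy as np

def diffusion_step(grid, alpha=0.2):
    """One synchronous CA step: each interior cell moves toward the mean of
    its four von Neumann neighbours (a discrete heat-diffusion rule).
    0 < alpha <= 0.25 keeps the update stable; boundary cells stay fixed."""
    new = grid.copy()
    neighbours = (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                  grid[1:-1, :-2] + grid[1:-1, 2:])
    new[1:-1, 1:-1] = grid[1:-1, 1:-1] + alpha * (neighbours - 4.0 * grid[1:-1, 1:-1])
    return new

if __name__ == "__main__":
    grid = np.zeros((64, 64))
    grid[32, 32] = 100.0                 # a single hot cell in the middle
    for _ in range(200):
        grid = diffusion_step(grid)
    print(round(float(grid[32, 32]), 3), round(float(grid[30, 34]), 3))
```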
240

Paralelizace Goertzelova algoritmu / Parallel implementation of Goertzel algorithm

Skulínek, Zdeněk January 2017
Technical problems make it impossible to keep steadily increasing processors' clock frequencies. Their computing power currently grows instead thanks to an increasing number of cores. This brings a need for new approaches to programming such parallel systems. This thesis shows how to exploit parallelism in digital signal processing. As an example, an implementation of the Goertzel algorithm that uses the processing power of the graphics chip is presented.
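For reference, a single-bin Goertzel evaluation looks like the Python sketch below. Because each frequency bin (and each block of samples) is computed independently, the natural parallelisation on Pthreads or CUDA is one bin or one block per thread. This is the generic textbook formulation, shown only for illustration; it is not the thesis's GPU implementation.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at `target_freq` via the Goertzel recurrence.

    Equivalent to |X[k]|^2 of the DFT for the bin nearest target_freq,
    computed with a single second-order recurrence instead of a full FFT.
    """
    n = len(samples)
    k = round(n * target_freq / sample_rate)    # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2        # the Goertzel recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

if __name__ == "__main__":
    rate, n = 8000, 205                          # DTMF-style block size
    sig = [math.sin(2 * math.pi * 941.0 * i / rate) for i in range(n)]
    print(round(goertzel_power(sig, rate, 941.0), 1))    # strong response
    print(round(goertzel_power(sig, rate, 1477.0), 1))   # weak response
```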
