1

Classification of complex two-dimensional images in a parallel distributed processing architecture

Simpson, Robert Gilmour January 1992
Neural network analysis is proposed and evaluated as a method of analysis of marine biological data, specifically images of plankton specimens. The quantification of the various plankton species is of great scientific importance, from modelling global climatic change to predicting the economic effects of toxic red tides. A preliminary evaluation of the neural network technique is made by the development of a back-propagation system that successfully learns to distinguish between two co-occurring morphologically similar species from the North Atlantic Ocean, namely Ceratium arcticum and C. longipes. Various techniques are developed to handle the indeterminately labelled source data, pre-process the images and successfully train the networks. An analysis of the network solutions is made, and some consideration given to how the system might be extended.
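The abstract does not include the classifier itself. As a rough illustration only, the sketch below shows a one-hidden-layer back-propagation network of the general kind described; the feature count, layer sizes, learning rate, and training data are all invented for the example.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class BackpropNet:
    """One hidden layer, one sigmoid output: binary classification."""
    def __init__(self, n_in, n_hidden, lr=0.1):
        r = lambda: random.uniform(-0.5, 0.5)
        self.w1 = [[r() for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [r() for _ in range(n_hidden)]
        self.lr = lr

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        return sigmoid(sum(w * hj for w, hj in zip(self.w2, self.h)))

    def train_step(self, x, target):
        o = self.forward(x)
        delta_o = (o - target) * o * (1 - o)        # output error term
        for j, hj in enumerate(self.h):
            delta_h = delta_o * self.w2[j] * hj * (1 - hj)
            self.w2[j] -= self.lr * delta_o * hj    # hidden-to-output update
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * delta_h * xi

# Toy usage: 8 preprocessed image features, label 1.0 for one species.
net = BackpropNet(n_in=8, n_hidden=4)
x = [random.random() for _ in range(8)]
for _ in range(200):
    net.train_step(x, 1.0)
print(round(net.forward(x), 2))                     # approaches 1.0
```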
2

Effective and Efficient Methodologies for Social Network Analysis

Pan, Long 16 January 2008
Performing social network analysis (SNA) requires a set of powerful techniques to analyze the structural information contained in interactions between social entities. Many SNA technologies and methodologies have been developed and have successfully provided significant insights for small-scale interactions. However, these techniques are not suitable for analyzing large social networks, which are common and important in various fields and have structural properties that cannot be obtained from small networks or their analyses. The key open issues in the design of current SNA techniques can be embodied in three fundamental and critical challenges: long processing time, large computational resource requirements, and network dynamism. In order to address these challenges, we discuss an anytime-anywhere methodology based on a parallel/distributed computational framework to effectively and efficiently analyze large and dynamic social networks. In our methodology, large social networks are decomposed into smaller, internally related parts. A coarse level of network analysis is built by comprehensively analyzing each part, and the partial analysis results are incrementally refined over time. During the analysis process, the methodology also adapts effectively and efficiently to dynamic changes in the network, building on the results already obtained. In order to evaluate and validate our methodology, we implement it for a set of SNA metrics which are significant for SNA applications and cover a wide range of difficulties. Through rigorous theoretical and experimental analyses, we demonstrate that our anytime-anywhere methodology is effective and efficient. / Ph. D.
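As an illustration of the anytime-anywhere pattern described above (a sketch under my own assumptions, using degree centrality as a stand-in metric, and not the dissertation's implementation), the code below analyses each part of a decomposed graph for a coarse partial result, then refines it with the edges that cross part boundaries, so a usable answer exists at every step.

```python
def degree_within(adj, nodes):
    """Coarse metric: degree counting only edges inside `nodes`."""
    nodes = set(nodes)
    return {v: sum(1 for u in adj[v] if u in nodes) for v in nodes}

def anytime_degree(adj, parts):
    est = {}
    for part in parts:                       # coarse pass, part by part
        est.update(degree_within(adj, part))
        yield dict(est)                      # a usable partial result
    part_of = {v: i for i, p in enumerate(parts) for v in p}
    for v in adj:                            # refinement: cross-part edges
        est[v] += sum(1 for u in adj[v] if part_of[u] != part_of[v])
        yield dict(est)

# A 4-node path decomposed into two parts; estimates improve per step.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
for snapshot in anytime_degree(adj, [[0, 1], [2, 3]]):
    print(snapshot)
```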
3

Performance Modeling of Large-Scale Parallel-Distributed Processing for Cloud Environment / クラウド環境における大規模並列分散処理の性能モデル

Hirai, Tsuguhito 23 May 2018
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / 甲第21280号 / 情博第674号 / 新制||情||116 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshiyuki Tanaka, Professor Nobuo Yamashita, Associate Professor Hiroyuki Masuyama, Professor Shoji Kasahara / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
4

Routing and Efficient Evaluation Techniques for Multi-hop Mobile Wireless Networks

Lee, Young-Jun 03 August 2005
In this dissertation, routing protocols, load-balancing protocols, and efficient evaluation techniques for multi-hop mobile wireless networks are explored. With advances in wireless communication and computer technologies, a new type of mobile wireless network, known as a mobile ad hoc network (MANET), has drawn sustained attention. Several routing protocols for MANETs have been proposed in recent years, but mechanisms are still needed for better scalability with respect to network size, traffic volume, and mobility. To address this issue, a new method for multi-hop routing in MANETs called Dynamic NIx-Vector Routing (DNVR) is proposed. DNVR has several features that distinguish it from existing on-demand routing protocols and that lead to more stable routes and better scalability. Current ad hoc routing protocols also lack load-balancing capabilities, so they often fail to provide good service quality, especially in the presence of a large volume of network traffic, since the load concentrates on a few nodes and creates heavy congestion. To address this issue, a novel load-balancing technique for ad hoc on-demand routing protocols is proposed. The new method is simple yet very effective at balancing load and alleviating congestion, and it operates in a completely distributed fashion. Finally, evaluating and verifying wireless network protocols, and especially testing their scalability properties, requires scalable and efficient network simulation methods. Simulating such large-scale wireless networks usually takes a long time and consumes substantial computing resources, such as powerful CPUs and large memory, and parallel network simulation techniques have traditionally been used to cope with this problem. This dissertation explores a different, efficient method that can be achieved with sequential simulation, as well as a parallel and distributed technique for large-scale mobile wireless networks.
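The abstract does not specify the load-balancing rule. The sketch below illustrates one common distributed approach, in which each node's load is collected during route discovery and the route whose most-loaded node is lightest is selected; it should not be read as DNVR's actual mechanism.

```python
def route_cost(route, load):
    """Bottleneck cost: the load of the busiest node on the route."""
    return max(load[n] for n in route)

def select_route(candidate_routes, load):
    """Pick the discovered route with the lightest bottleneck."""
    return min(candidate_routes, key=lambda r: route_cost(r, load))

# Per-node queue lengths as collected during route discovery (invented).
load = {"A": 1, "B": 7, "C": 2, "D": 1, "E": 2}
routes = [["A", "B", "E"], ["A", "C", "D", "E"]]
print(select_route(routes, load))   # ['A', 'C', 'D', 'E'] avoids hotspot B
```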
5

Résilience dans les Systèmes de Workflow Distribués pour les Applications d’Optimisation Numérique : Conception et Expériences / Collaborative platform for multidiscipline optimization

Trifan, Laurentiu 21 October 2013
This thesis aims at designing an environment for high-performance computing in a numerical optimization context. The design and optimization tools are distributed across several remote teams, both academic and industrial, that collaborate on the same projects. The tools must be federated within a common environment to give researchers and engineers easy access to them. The environment we propose to meet these requirements consists of a workflow system and a distributed computing system. The former eases the task of designing the application, while the latter handles execution on distributed computing resources; communication services between the two systems must, of course, be developed. The computations must be performed efficiently, taking into account the internal parallelism of some codes, synchronous or asynchronous task execution, data transfers, and the available hardware and software resources (load balancing, for example). In addition, the environment must provide a good level of tolerance to hardware faults and software failures, to minimize their influence on the final result or on the computation time. One important condition in particular is to implement error-recovery mechanisms such that the extra time spent handling errors remains far below the total re-execution time. For this work, our choice fell on the Yawl workflow engine, which has good characteristics in terms of (i) hardware and software independence (a client-server system that can run on heterogeneous hardware) and (ii) its error-recovery mechanism. For the distributed computing part, our experiments were performed on the Grid5000 platform, using up to 64 different machines spread over five geographic sites. This document details the design choices for this environment as well as the additions and changes we had to make to Yawl to enable it to run on a distributed platform.
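As a sketch of the recovery condition described above (error-handling time far below full re-execution time), the toy workflow runner below checkpoints each completed task so that a rerun after a failure skips finished work. The file name and task structure are invented, and this is not Yawl's mechanism.

```python
import json, os

STATE_FILE = "workflow_state.json"           # invented name for this sketch

def run_workflow(tasks, state_file=STATE_FILE):
    done = {}
    if os.path.exists(state_file):           # resume after a failure
        with open(state_file) as f:
            done = json.load(f)
    for name, task in tasks:
        if name in done:
            continue                         # skip already-completed work
        done[name] = task()                  # may raise; a rerun resumes here
        with open(state_file, "w") as f:
            json.dump(done, f)               # checkpoint after each task
    return done

# Three illustrative tasks; if "solve" fails, a rerun skips "mesh".
tasks = [("mesh",  lambda: "mesh.ok"),
         ("solve", lambda: "solution.ok"),
         ("post",  lambda: "report.ok")]
print(run_workflow(tasks))
```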
6

The functionality of spatial and time domain artificial neural models

Capanni, Niccolo Francesco January 2006
This thesis investigates the functionality of the units used in connectionist Artificial Intelligence systems. Artificial Neural Networks form the foundation of the research, and their units, Artificial Neurons, are first compared with alternative models. This initial work is mainly in the spatial domain and introduces a new neural model, termed a Taylor Series neuron, designed to be flexible enough to assume most mathematical functions. The unit is based on Power Series theory, and a specifically implemented Taylor Series neuron is demonstrated. These neurons are particularly useful in evolutionary networks, as they allow the complexity of a network to increase without adding units. Training is achieved via various traditional and derived methods based on the Delta Rule, Backpropagation, Genetic Algorithms and associated evolutionary techniques. This new neural unit is presented as a controllable and more highly functional alternative to previous models. The work on the Taylor Series neuron moved into time-domain behaviour and, through the investigation of neural oscillators, led to an examination of single-celled intelligence, from which the later work developed. Connectionist approaches to Artificial Intelligence are almost always based on Artificial Neural Networks; however, another route towards Parallel Distributed Processing was introduced here, inspired by the intelligence displayed by single-celled creatures called Protoctists (Protists). A new system based on networks of interacting proteins was introduced. These networks were tested in pattern-recognition and control tasks in the time domain and proved more flexible than most neuron models. They were trained using a Genetic Algorithm and a derived Backpropagation Algorithm. Termed "Artificial BioChemical Networks" (ABN), they are presented as an alternative approach to connectionist systems.
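The abstract does not give the unit's exact formulation. On a minimal reading, a Taylor Series neuron computes a truncated power series of its input, y = w0 + w1*x + w2*x^2 + ..., so adding terms raises functional complexity without adding units. The sketch below implements that reading with a simple per-term delta rule; the order, learning rate, and target function are illustrative assumptions.

```python
import math, random

class TaylorNeuron:
    """Unit whose output is a truncated power series of the input."""
    def __init__(self, order, lr=0.05):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(order + 1)]
        self.lr = lr

    def forward(self, x):
        return sum(wk * x**k for k, wk in enumerate(self.w))

    def train_step(self, x, target):
        err = target - self.forward(x)
        for k in range(len(self.w)):
            self.w[k] += self.lr * err * x**k   # delta rule per series term

# Fit sin(x) on [-1, 1] with a single 5th-order unit (no extra units).
unit = TaylorNeuron(order=5)
for _ in range(5000):
    x = random.uniform(-1, 1)
    unit.train_step(x, math.sin(x))
print(round(unit.forward(0.5), 2))              # close to sin(0.5) ~ 0.48
```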
7

Étude de réseaux complexes et de leurs propriétés pour l’optimisation de modèles de routage / Study of complex networks properties for the optimization of routing models

Lancin, Aurélien 09 December 2014
This thesis considers routing issues in networks, particularly in the graph of the autonomous systems (AS) of the Internet. We aim, first, to better understand the properties of the Internet graph that are useful in the design of new routing paradigms and, second, to evaluate the performance of these paradigms by simulation. The first part of my work concerns the Gromov hyperbolicity, a metric property useful in the design of new routing paradigms. I first present a new approach to computing the hyperbolicity of a graph, using a decomposition of the graph by clique separators together with the notion of far-apart pairs. I then propose a new algorithm for computing the hyperbolicity which, combined with the clique-separator decomposition, allows us to compute this property on graphs of up to 58,000 nodes in a few hours. The second part of my work concerns the development of DRMSim, a new Dynamic Routing Model Simulator. It enables the evaluation of the performance of routing schemes and their comparison to the reference routing protocol of the Internet, the Border Gateway Protocol (BGP). Using DRMSim, we simulated several compact routing schemes on topologies of up to O(10k) nodes. I describe its architecture and give some examples of its use. Finally, I present a feasibility study for a parallel/distributed version of DRMSim, carried out with a view to simulating BGP on larger topologies.
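For context, the hyperbolicity in question can be checked on small graphs with the classical four-point condition: for each quadruple of vertices, sort the three pairwise distance sums and take half the difference of the two largest. The brute-force sketch below is my own illustration, not the thesis's algorithm; its O(n^4) enumeration is exactly why the clique-separator decomposition and far-apart pairs matter at 58,000 nodes.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def hyperbolicity(adj):
    d = {v: bfs_dist(adj, v) for v in adj}      # all pairs, via BFS
    delta = 0.0
    for x, y, z, w in combinations(adj, 4):     # O(n^4) quadruples
        s = sorted([d[x][y] + d[z][w],
                    d[x][z] + d[y][w],
                    d[x][w] + d[y][z]])
        delta = max(delta, (s[2] - s[1]) / 2)   # four-point condition
    return delta

# A 4-cycle is 1-hyperbolic; trees are 0-hyperbolic.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(hyperbolicity(cycle4))                    # 1.0
```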
8

Escalabilidade Paralela de um Algoritmo de Migração Reversa no Tempo (RTM) Pré-empilhamento / Parallel Scalability of a Prestack Reverse Time Migration (RTM) Algorithm

Rosário, Desnes Augusto Nunes do 21 December 2012
The seismic method is of extreme importance in geophysics. Associated mainly with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages of a seismic study. Seismic processing, in particular, aims to produce an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and digital processing capacity, which in turn enabled more sophisticated processing algorithms, such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be an extremely long process, owing to the heuristics of the mathematical algorithm and the extensive volume of data input and output involved; it can take days, weeks, or even months of uninterrupted execution on supercomputers, representing a large computational and financial cost that may make these methods impractical. Aiming at better performance, this work parallelized the core of a prestack Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the heavy computational effort demanded by this migration technique. In addition, performance analyses such as speedup and efficiency were carried out, leading, finally, to an identification of the algorithm's degree of scalability with respect to the technological advances expected in future processors.
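The speedup and efficiency measures named in the abstract have standard definitions, sketched below with invented timing numbers; the Amdahl's law bound is an added assumption for context, not a result from the thesis.

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

def amdahl_limit(serial_fraction, n_threads):
    """Upper bound on speedup given the code's serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

# Invented timings: 100 s on 1 thread, 18 s on 8 threads.
print(round(speedup(100, 18), 2))        # 5.56x
print(round(efficiency(100, 18, 8), 2))  # 0.69
print(round(amdahl_limit(0.05, 8), 2))   # 5.93x cap if 5% stays serial
```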
