About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; archive managers who want to be added can find details on the NDLTD website.

Parallelizing support vector machines for scalable image annotation

Alham, Nasullah Khalid January 2011
Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, Support Vector Machines (SVMs) are used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large. In this thesis, distributed computing paradigms are investigated to speed up SVM training by partitioning a large training dataset into small data chunks and processing each chunk in parallel, utilizing the resources of a cluster of computers. A resource-aware parallel SVM algorithm is introduced for large-scale image annotation on a cluster of computers. A load-balancing scheme based on a genetic algorithm is designed to optimize the performance of the algorithm in heterogeneous computing environments. SVM was initially designed for binary classification; however, most classification problems arising in domains such as image annotation usually involve more than two classes. A resource-aware parallel multiclass SVM algorithm for large-scale image annotation on a cluster of computers is therefore introduced. Combining classifiers leads to a substantial reduction of classification error in a wide range of applications, and among such combinations, SVM ensembles with bagging are shown to outperform a single SVM in terms of classification accuracy. However, training SVM ensembles is also computationally intensive, especially when the number of replicated samples generated by bootstrapping is large. A distributed SVM ensemble algorithm for image annotation is introduced which re-samples the training data using bootstrapping and trains an SVM on each sample in parallel on a cluster of computers.
The above algorithms are evaluated in both experimental and simulation environments, showing that the distributed SVM algorithm, the distributed multiclass SVM algorithm, and the distributed SVM ensemble algorithm reduce the training time significantly while maintaining a high level of classification accuracy.
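The bagging scheme this abstract describes — bootstrap re-sampling, training one classifier per sample in parallel, then voting — can be sketched in miniature. This is not the thesis's implementation: a thread pool stands in for the cluster of computers, a trivial mean-threshold classifier stands in for the SVM, and the helper names (`train_member`, `predict_ensemble`) are made up for illustration.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def train_member(data, seed):
    """Train one ensemble member on a bootstrap re-sample.
    A real system would train an SVM on a cluster node here; a
    mean-threshold classifier keeps the sketch self-contained."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]  # bootstrap: draw with replacement
    return sum(x for x, _ in sample) / len(sample)

def predict_ensemble(thresholds, x):
    """Majority vote over the ensemble members, as in bagging."""
    votes = sum(1 for t in thresholds if x > t)
    return 1 if 2 * votes > len(thresholds) else 0

# Toy labelled data: (feature, label), with label = 1 when feature > 0.5.
data = [(i / 20.0, 1 if i > 10 else 0) for i in range(21)]
with ThreadPoolExecutor() as pool:  # stands in for a cluster of computers
    thresholds = list(pool.map(lambda s: train_member(data, s), range(8)))
```

Each member sees a different bootstrap sample, so the thresholds differ slightly; the vote averages out their individual errors.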

An investigation of smartphone applications: exploring usability aspects related to wireless personal area networks, context-awareness, and remote information access

Hansen, Jarle January 2012
In this thesis we look into usability in the context of smartphone applications. We selected three research areas to investigate: Wireless Personal Area Networks, Context-awareness, and Remote Information Access. These areas are investigated through a series of experiments that focus on important aspects of usability within software applications, mainly using smartphone devices. In experiment 1, Multi-Platform Bluetooth Remote Control, we investigated Wireless Personal Area Networks. Specifically, we implemented a system consisting of two clients, created for Java ME and Windows Mobile, integrated with a server application installed on a Bluetooth-enabled laptop. For experiments 2 and 3, Context-aware Meeting Room and PainDroid: an Android Application for Pain Management, we looked closely at the research area of Context-awareness. The Context-aware Meeting Room was created to automatically send meeting participants useful meeting notes during presentations. In experiment 3, we investigated the use of on-device sensors on the Android platform, where the accelerometer and magnetometer provided an additional input mechanism for a pain management application. Finally, the last research area we investigated was Remote Information Access, where we conducted experiment 4, Customised Android Home Screen. We created a system that integrated a cloud-based server application with a mobile client running on the Android platform. We used the cloud-computing platform to provide context management features, such as the ability to store the user configuration that was automatically pushed to the mobile devices.

A Software Architecture for Client-Server Telemetry Data Analysis

Brockett, Douglas M., Aramaki, Nancy J. October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / An increasing need among telemetry data analysts for new mechanisms for efficient access to high-speed data in distributed environments has led BBN to develop a new architecture for data analysis. The data sets of concern can be from either real-time or post-test sources. This architecture consists of an expandable suite of tools based upon a data distribution software "backbone" which allows the interchange of high volume data streams among server processes and client workstations. One benefit of this architecture is that it allows one to assemble software systems from a set of off-the-shelf, interoperable software modules. This modularity and interoperability allows these systems to be configurable and customizable, while requiring little applications programming by the system integrator.

Distributed cost-optimal planning

Jezequel, Loïg 13 November 2012
Automated planning is a field of artificial intelligence that aims at proposing methods to choose and order sets of actions with the objective of reaching a given goal. A sequence of actions solving a planning problem is usually called a plan. In many cases, one does not only have to find a plan but an optimal one. This notion of optimality can be defined by assigning costs to actions; an optimal plan is then a plan minimizing the sum of the costs of its actions. Planning problems are standardly solved using algorithms such as A* that search for minimum-cost paths in graphs. In this thesis we focus on a particular approach to planning called factored planning, or modular planning. The idea is to decompose a planning problem into almost independent sub-problems (or components), search for plans in each component, and assemble these local plans into a global plan for the original planning problem. The main interest of this approach is that, for some classes of planning problems, the components considered can be much simpler to solve than the original problem. First, we present a study of the use of message-passing algorithms for factored planning. In this case the components of a problem are represented by weighted automata, which makes it possible to handle all the plans of a sub-problem and thus to perform cost-optimal factored planning; achieving cost-optimality of plans was not possible with previous factored planning methods. This approach is then extended by using approximate resolution techniques ("turbo" algorithms) and by proposing another representation of components that accounts for actions which only read from some components. Then we describe another approach to factored planning: a distributed version of the well-known A* algorithm. Each component is managed by an agent responsible for finding a local plan in it, using information about its own component as well as information about the rest of the problem transmitted by the other agents. The main difference between this approach and the previous one is that it is not only modular but also distributed.

Distributed computer simulation: application to heteropolymer folding problems

Silva, Pablo Andrei 17 January 2017
We present the development of computational tools dedicated to streamlining processes that involve Monte Carlo simulations and the analysis of their applications; relevant concepts from the main areas involved (computing, information technology, mathematics and physics) are evoked. We introduce and discuss techniques of distributed computer simulation and the Monte Carlo method, with emphasis on heteropolymers. Illustrative examples of the application of the tool are also provided, through simulations and analysis of results for three types of heteropolymer chains on a regular lattice: a polar chain (all monomers polar); a hydrophobic chain (all monomers nonpolar); and a chain mixing polar and nonpolar monomers (the HP model). The motivating purpose of this work is the study of the folding problem of heteropolymers, which includes proteins. However, the tool can be generalized and applied to virtually all types of linear lattice polymers, since users can define and implement the chain models they want.
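The two ingredients the abstract mentions — the lattice HP model and the Monte Carlo method — can be sketched briefly. This is a generic textbook illustration, not the dissertation's tool: the energy function is the standard HP contact energy, and `metropolis_accept` is the usual Metropolis criterion.

```python
import math
import random

def hp_energy(conformation, sequence):
    """Standard HP-model energy: -1 for each pair of hydrophobic (H)
    monomers that are lattice neighbours but not chain neighbours."""
    pos = {p: i for i, p in enumerate(conformation)}
    e = 0
    for (x, y), i in pos.items():
        for q in ((x + 1, y), (x, y + 1)):  # visit each lattice bond once
            j = pos.get(q)
            if j is not None and abs(i - j) > 1 \
               and sequence[i] == "H" and sequence[j] == "H":
                e -= 1
    return e

def metropolis_accept(e_old, e_new, temperature, rng):
    """Metropolis criterion: always accept downhill moves; accept
    uphill moves with probability exp(-dE / T)."""
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-(e_new - e_old) / temperature)

# A 4-monomer HP chain folded into a unit square: the two H ends touch.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(square, "HPPH"))  # → -1
```

A full simulation would propose conformational moves (e.g. corner flips), score each with `hp_energy`, and filter them through `metropolis_accept`; distributing many such independent chains across machines is the kind of workload the dissertation's infrastructure targets.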

An infrastructure for distributed applications based on Scala actors

Coraini, Thiago Henrique 28 November 2011
Writing concurrent applications is generally seen as a difficult and error-prone task. This is particularly true for applications written in the most widely used languages, such as C++ and Java, which offer a concurrent programming model based upon shared memory and locks. Many claim that the way concurrent programming is done in these languages is inappropriate and makes it harder to build systems free from problems such as race conditions and deadlocks. For that reason, and also due to the popularization of multi-core processors, the pursuit of tools better suited to the development of concurrent applications has intensified in recent years. An alternative that is gaining attention is the actor model, originally proposed in the 1970s and focused specifically on concurrent computing. In this model, each actor is an isolated entity which does not share memory with other actors and communicates with them only by asynchronous message passing. The most successful implementation of the actor model is likely the one provided by Erlang, a language that supports actors in a very efficient way. The Scala language, which appeared in 2003, runs on the JVM and has many similarities with Java. Its creators, however, sought to provide a better solution for concurrent programming, so the language offers a library that implements the actor model and is heavily inspired by Erlang's actors. The goal of this work is to explore the usage of the actor model in Scala, specifically for distributed applications. Taking advantage of the encapsulation imposed by actors and of the concurrency inherent to their model, we propose a platform that manages actor location in a way that is fully transparent to the developer and has the potential to promote the development of efficient and scalable applications. Our infrastructure offers two major services, both aimed at managing actor location: automatic distribution and migration. The first allows programmers to write their application thinking only about the actors that must be instantiated and about the communication among these actors, without being concerned with where each actor will be located; the infrastructure is responsible for defining where each actor will run, using configurable algorithms. The migration mechanism allows the execution of an actor to be suspended and resumed on another computer, letting applications adapt to changes in the execution environment. Our system has been built with extension possibilities in mind, in particular extension by algorithms that use the migration mechanism to improve application performance.

Peer-to-peer support for Matlab-style computing

Agrawal, Rajeev 30 September 2004
Peer-to-peer technologies have shown a lot of promise for sharing remote resources effectively. The resources shared by peers may be information, bandwidth, storage space or computing power. When used properly, peer-to-peer systems can prove very advantageous: they scale well; are dynamic, autonomous and fully distributed; and can exploit the heterogeneity of peers effectively. They provide an efficient infrastructure for an application seeking to distribute numerical computation. In this thesis, we investigate the feasibility of using a peer-to-peer infrastructure to distribute the computational load of Matlab and similar applications in order to achieve performance benefits and scalability. We also develop a proof-of-concept application that distributes the computation of a Matlab-style application.
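The kind of workload distribution investigated here can be illustrated with a Matlab-style matrix-vector product split into row chunks, each evaluated independently. This is only a sketch of the partitioning idea, not the thesis's peer-to-peer implementation: a thread pool stands in for the peers, and the helper names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def matvec_chunk(rows, vector):
    """Compute one chunk of a matrix-vector product; each peer
    could evaluate one such chunk independently."""
    return [sum(a * x for a, x in zip(row, vector)) for row in rows]

def distributed_matvec(matrix, vector, n_workers=3):
    """Split the matrix into row chunks, farm them out, and
    reassemble the results in order. A thread pool stands in for
    remote peers in this sketch."""
    chunk = (len(matrix) + n_workers - 1) // n_workers
    parts = [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(matvec_chunk, p, vector) for p in parts]
        out = []
        for f in futures:
            out.extend(f.result())
    return out

matrix = [[1, 0], [0, 1], [2, 3]]
print(distributed_matvec(matrix, [4, 5]))  # → [4, 5, 23]
```

The interesting engineering problems in the peer-to-peer setting — peer discovery, chunk assignment to heterogeneous nodes, and handling peers that leave mid-computation — sit around this simple partition-and-merge core.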

Fault tolerant pulse synchronization

Deconda, Keerthi 15 May 2009
Pulse synchronization is the emergence of spontaneous, synchronized firing across a network of sensor nodes. In the pulse synchronization model, all nodes in a network produce a pulse, or "fire", at regular intervals even without access to a shared global time. Previous researchers have proposed the Reachback Firefly algorithm for pulse synchronization, in which nodes react to the firings of other nodes by changing their period. We propose an extension to this algorithm that tolerates arbitrary, or Byzantine, faults of nodes. Our algorithm queues up all the firings heard in the current cycle and discards outliers at the end of the cycle; an adjustment is computed from the remaining values and used as the starting point of the next cycle. Through simulation we validate the performance of our algorithm and study its overhead in terms of convergence time and periodicity. The simulation considers two specific kinds of Byzantine faults: the No Jump model, in which faulty nodes follow their own firing cycle without reacting to firings heard from other nodes, and the Random Jump model, in which faulty nodes fire at a random time in their cycle.
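The queue-then-discard-outliers step can be sketched as follows. This is a simplified interpretation of the idea, not the authors' algorithm: the median-based cutoff and the averaging of the surviving values are assumptions made for the sketch.

```python
import statistics

def cycle_adjustment(firings, cutoff=3.0):
    """Discard firing offsets lying too far from the median (suspected
    Byzantine values), then average the survivors to obtain the
    adjustment applied at the start of the next cycle."""
    if not firings:
        return 0.0
    med = statistics.median(firings)
    # Median absolute deviation as a robust estimate of spread.
    mad = statistics.median(abs(f - med) for f in firings) or 1e-9
    kept = [f for f in firings if abs(f - med) <= cutoff * mad]
    return sum(kept) / len(kept)

# Nine honest nodes firing near offset 0.1; one Byzantine node at 0.9.
offsets = [0.10, 0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11, 0.09, 0.90]
adj = cycle_adjustment(offsets)
```

With a simple mean the Byzantine value would drag the adjustment to about 0.18; discarding outliers first keeps it near the honest nodes' 0.1, which is why the extension tolerates faulty firings.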

Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and Write Operations

Golab, Wojciech 15 February 2011
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the distributed-shared-memory models of such multiprocessors. Our implementations are blocking, rather than wait-free: they ensure progress provided all processes that invoke the implemented primitive are live. Our results imply that any algorithm using read and write operations, comparison primitives and load-linked/store-conditional, can be simulated by an algorithm that uses read and write operations only, with at most a constant-factor increase in RMR complexity.
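For reference, the sequential specification of compare-and-swap (CAS) — the primitive being implemented — can be stated as a short sketch. This shows only the semantics of CAS, not the paper's read/write construction: a lock models the primitive's atomicity here, whereas the paper's contribution is achieving this behaviour from reads and writes alone with constant RMR complexity.

```python
import threading

class CAS:
    """Sequential specification of a compare-and-swap register.
    The lock is a stand-in for atomicity, not part of the paper's
    read/write-only construction."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        """Atomically: if the current value equals `expected`,
        replace it with `new`; in either case return the value seen."""
        with self._lock:
            seen = self._value
            if seen == expected:
                self._value = new
            return seen

r = CAS(0)
assert r.compare_and_swap(0, 7) == 0   # succeeds: the value was 0
assert r.compare_and_swap(0, 9) == 7   # fails: the value is now 7
```

The blocking guarantee described in the abstract means such an implemented `compare_and_swap` makes progress as long as every process that invoked it remains live, in contrast to a wait-free implementation.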

Local-spin Algorithms for Variants of Mutual Exclusion Using Read and Write Operations

Danek, Robert 30 August 2011
Mutual exclusion (ME) is used to coordinate access to shared resources by concurrent processes. We investigate several new N-process shared-memory algorithms for variants of ME, each of which uses only reads and writes and is local-spin, i.e., has bounded remote memory reference (RMR) complexity. We study these algorithms under two different shared-memory models: the distributed shared-memory (DSM) model and the cache-coherent (CC) model. In particular, we present the first known algorithm for first-come-first-served (FCFS) ME that has O(log N) RMR complexity in both the DSM and CC models and uses only atomic reads and writes. Our algorithm is also adaptive to point contention, i.e., the number of processes that are simultaneously active during a passage by some process. More precisely, the number of RMRs a process makes per passage in our algorithm is Θ(min(c, log N)), where c is the point contention. We also present the first known FCFS abortable ME algorithm that is local-spin and uses only atomic reads and writes. This algorithm has O(N) RMR complexity in both the DSM and CC models, and takes the form of a transformation from abortable ME to FCFS abortable ME. In conjunction with other results, this transformation also yields the first known local-spin group mutual exclusion algorithm that uses only atomic reads and writes. Additionally, we present the first known local-spin k-exclusion algorithms that use only atomic reads and writes and tolerate up to k − 1 crash failures. These algorithms have O(N) RMR complexity in both the DSM and CC models. The simplest of these algorithms satisfies a new fairness property, called k-FCFS, that generalizes the FCFS fairness property of ME algorithms. A modification of this algorithm satisfies the stronger first-in-first-enabled (FIFE) fairness property. Finally, we present a modification of the FIFE k-exclusion algorithm that works with non-atomic reads and writes.
The high-level structure of all our k-exclusion algorithms is inspired by Lamport’s famous Bakery algorithm.
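Since the thesis's algorithms take their high-level structure from Lamport's Bakery algorithm, a minimal sketch of that classic algorithm — which, like the algorithms above, uses only reads and writes — may be useful context. This is the textbook version, not any of the thesis's algorithms, and it is not local-spin: the waiting loops below generate unbounded remote memory references, which is precisely the cost the thesis's algorithms bound.

```python
import threading

N = 3
choosing = [False] * N  # flag: process i is currently picking a ticket
number = [0] * N        # bakery tickets; 0 means "not competing"

def lock(i):
    """Lamport's Bakery: take a ticket larger than any seen, then
    wait behind every process with a smaller (ticket, id) pair."""
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N):
        while choosing[j]:                    # wait: j is still picking
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                              # wait: j goes first

def unlock(i):
    number[i] = 0

counter = 0
def worker(i):
    global counter
    for _ in range(50):
        lock(i)
        tmp = counter       # critical section: a racy read-modify-write,
        counter = tmp + 1   # made safe only by the surrounding lock
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # → 150
```

Without the lock, the interleaved read-modify-write would lose updates; with it, all 3 × 50 increments survive, demonstrating mutual exclusion from reads and writes alone.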
