61

A distributed analysis and monitoring framework for the Compact Muon Solenoid experiment and a pedestrian simulation

Karavakis, Edward January 2010 (has links)
The design of a parallel and distributed computing system is a very complicated task. It requires a detailed understanding of the design issues and of the theoretical and practical aspects of their solutions. Firstly, this thesis discusses in detail the major concepts and components required to make parallel and distributed computing a reality. A multithreaded and distributed framework capable of analysing the simulation data produced by pedestrian simulation software was developed. Secondly, this thesis discusses the origins and fundamentals of Grid computing and the motivations for its use in High Energy Physics. Access to the data produced by the Large Hadron Collider (LHC) has to be provided for more than five thousand scientists all over the world. Users who run analysis jobs on the Grid do not necessarily have expertise in Grid computing. Simple, user-friendly and reliable monitoring of analysis jobs is one of the key components of distributed analysis operations; reliable monitoring is also one of the crucial components of the Worldwide LHC Computing Grid for providing the functionality and performance required by the LHC experiments. The CMS Dashboard Task Monitoring and the CMS Dashboard Job Summary monitoring applications were developed to serve the needs of the CMS community.
62

On using Desktop Grid Computing in software industry

Cederström, Andreas January 2010 (has links)
Context. When dealing with large data sets and heavy calculations, the common solution is clusters, supercomputers or Grids of these two. However, large computational power can also be gained by utilizing the unused cycles of regular home or office computers; such systems are referred to as Desktop Grids. Objectives. In this study we review the current field of open source Desktop Grid computing solutions capable of dealing with a heterogeneous set of clients and a dynamically sized Desktop Grid. We investigate current use, interest in use and the priority of key attributes of Desktop Grids. Finally, we show how time-effective Desktop Grids are compared to execution on a single machine and, in the process, show the effort needed to set up a Desktop Grid and start computing. The overall purpose of this study is to provide a path for industry organizations taking their first step into Desktop Grid computing. Methods. We use a systematic review to collect information on existing open source Desktop Grid solutions. Studies are selected based on inclusion criteria and a quality assessment. A survey questionnaire is used to assess industry usage, interest and prioritization of Desktop Grid attributes. We also conduct an experiment to show execution speedup as well as setup effort. Results. We found ten open source Desktop Grids fulfilling our requirements. The survey shows that Desktop Grids are used to a very small extent within industry, while a majority of the participants state that there is an interest in Desktop Grids. As a result of the experiment, we achieved very high speedup, and the effort needed to set up a Desktop Grid is about 40 hours for one person with no prior experience with the selected Desktop Grid system. Conclusions. We conclude that industry organizations have a possible need for Desktop Grids, but in order to be more successful, Desktop Grid developers must put more effort into areas such as automated testing and code compilation.
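For orientation (the figures here are hypothetical, not the thesis's measurements), the speedup and efficiency reported in such experiments are conventionally computed as

\[
S(n) = \frac{T_1}{T_n}, \qquad E(n) = \frac{S(n)}{n},
\]

so a workload taking \(T_1 = 20\) hours on a single machine and \(T_{16} = 1.6\) hours on a 16-node Desktop Grid yields \(S(16) = 12.5\) and \(E(16) \approx 0.78\).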
63

Scalable applications in a distributed environment

Andersson, Filip, Norberg, Simon January 2011 (has links)
As the number of simultaneous users of distributed systems increases, scalability is becoming an important factor to consider during software development. Without sufficient scalability, systems may struggle to manage high loads and may not be able to support a large number of users. We have determined how scalability can best be implemented, and what extra costs this leads to. Our research is based both on a literature review, in which we examined what others in the field of computer engineering think about scalability, and on implementing a highly scalable system of our own. In the end we arrived at a couple of general pointers that can help developers determine whether they should focus on scalable development, and what they should consider if they choose to do so.
64

Uma infraestrutura para aplicações distribuídas baseadas em atores Scala / An infrastructure for distributed applications based on Scala actors

Thiago Henrique Coraini 28 November 2011 (has links)
Writing concurrent applications is generally seen as a difficult and error-prone task. This is particularly true for applications written in the most widely used languages, such as C++ and Java, which offer a concurrent programming model based upon shared memory and locks. Many claim that the way concurrent programming is done in these languages is inappropriate and makes it harder to build systems free from problems such as race conditions and deadlocks. For that reason, and also due to the popularization of multi-core processors, the pursuit of tools better suited to the development of concurrent applications has intensified in recent years.
An alternative that is gaining attention is the actor model, originally proposed in the 1970s and focused specifically on concurrent computing. In this model, each actor is an isolated entity, which does not share memory with other actors and communicates with them only by asynchronous message passing. The most successful implementation of the actor model is likely the one provided by Erlang, a language that supports actors in a very efficient way. The Scala language, which appeared in 2003, runs on the JVM and has many similarities with Java. Its creators, however, sought to provide a better solution for concurrent programming. The language therefore has a library that implements the actor model and is heavily inspired by Erlang actors. The goal of this work is to explore the usage of the actor model in Scala, specifically for distributed applications. Taking advantage of the encapsulation imposed by actors and of the concurrency inherent to their model, we propose a platform that manages actor location in a way that is fully transparent to the developer and has the potential to promote the development of efficient and scalable applications. Our infrastructure offers two major services, both aimed at managing actor location: automatic distribution and migration. The first one allows the programmer to write the application thinking only about the actors that must be instantiated and about the communication among these actors, without being concerned with where each actor will be located. The infrastructure has the responsibility of defining where each actor will run, using configurable algorithms. The migration mechanism allows the execution of an actor to be suspended and resumed on another computer. Actor migration allows applications to adapt to changes in the execution environment. Our system has been built with extension possibilities in mind, particularly extension by algorithms that use the migration mechanism to improve application performance.
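As a minimal sketch of the programming style this work builds on (isolated actors exchanging asynchronous messages), the snippet below uses the Akka actor library for Scala; the actor and message names are illustrative assumptions and this is not the API of the proposed infrastructure.

```scala
import akka.actor.{Actor, ActorRef, ActorSystem, Props}

// Messages are immutable values delivered asynchronously to an actor's mailbox.
case class Work(payload: String)
case class Result(summary: String)

// An actor is an isolated entity: its state is private and all interaction
// happens through message passing, never through shared memory.
class Worker extends Actor {
  def receive: Receive = {
    case Work(payload) =>
      sender() ! Result(s"processed: $payload") // reply asynchronously
  }
}

class Coordinator(worker: ActorRef) extends Actor {
  def receive: Receive = {
    case payload: String => worker ! Work(payload) // fire-and-forget send
    case Result(summary) => println(summary)
  }
}

object Main extends App {
  val system      = ActorSystem("sketch")
  val worker      = system.actorOf(Props[Worker](), "worker")
  val coordinator = system.actorOf(Props(new Coordinator(worker)), "coordinator")
  coordinator ! "simulation-step-1"
}
```

An infrastructure such as the one proposed in the thesis would sit beneath code like this, choosing the node on which each actor is instantiated and migrating actors between machines without changes to the application logic.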
65

Parallel implementation of Davidson-type methods for large-scale eigenvalue problems

Romero Alcalde, Eloy 17 April 2012 (has links)
The eigenvalue problem arises in many scientific tasks through the solution of differential equations, the analysis of models and the computation of matrix functions, among many other applications. If the problems are of moderate size (below 10^6), they can be tackled with so-called direct methods, such as the iterative QR algorithm or the divide-and-conquer method. However, if the problem is very large and only a few solutions are required (compared with the size of the problem), and only to a certain degree of approximation, iterative methods can be more efficient. Moreover, iterative methods can offer better performance on high-performance architectures, such as distributed-memory machines, in which a number of computational nodes, each with its own memory space, can share information and synchronize only through message passing. This thesis addresses the implementation of Davidson-type methods, notably Generalized Davidson and Jacobi-Davidson, a class of iterative methods that can be competitive in especially difficult cases, such as computing eigenvalues in the interior of the spectrum or when matrix factorization is prohibitive or inefficient and only an approximate factorization is possible. The implementation is developed in SLEPc (Scalable Library for Eigenvalue Problem Computations), a leading free library for the solution of large-scale eigenvalue problems, quadratic eigenvalue problems and singular value problems, among others. SLEPc, in turn, is developed within the framework of PETSc (Portable, Extensible Toolkit for Scientific Computation), which offers efficient implementations of basic linear algebra operations, such as matrix and vector operations, approximate solution of linear systems, and exact and approximate matrix factorizations. / Romero Alcalde, E. (2012). Parallel implementation of Davidson-type methods for large-scale eigenvalue problems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15188 / Palancia
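For context (standard background, not part of the original abstract): Davidson-type methods build a search subspace \(V\) and, from the current Ritz pair \((\theta, u)\) with residual \(r = Au - \theta u\), expand \(V\) with a preconditioned correction. Generalized Davidson uses \(t = K^{-1} r\) for some preconditioner \(K \approx A - \theta I\), while Jacobi-Davidson solves the projected correction equation

\[
(I - uu^{*})\,(A - \theta I)\,(I - uu^{*})\, t = -r, \qquad t \perp u,
\]

typically only approximately, which is why these methods remain attractive when an exact factorization of \(A\) is prohibitive.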
66

Real-time distributed system architecture using local area networks

Young, Richard January 1992 (has links)
Bibliography: pages 61-66. / This dissertation addresses system architecture concepts for the implementation of real-time distributed systems. In particular, it addresses the requirements of a specific mission- and real-time-critical distributed system application, as this exemplifies most of the issues of concern. Of specific significance is the integration of real-time distributed data services into a platform-wide Information Management Infrastructure. The dissertation commences with an overview of the system-level allocated requirements. Derived requirements for an Information Management Infrastructure (IMI) are then determined. A generic system architecture is then presented in terms of the allocated and derived requirements. A specific topology, based on this architecture as well as available technology, is described. The scalability of the architecture to different platforms, including non-surface platforms, is discussed. As financial considerations are an important design driver and constraint, some anticipated order-of-magnitude system acquisition costs for a range of system complexities and configurations are briefly reviewed. Finally, some conclusions and recommendations within the context of the allocated and derived requirements, as well as the RSA's politico-economic environment, are offered.
67

An Empirical Study of the Distributed Ellipsoidal Trust Region Method for Large Batch Training

Alnasser, Ali 10 February 2021 (has links)
Neural network optimizers are dominated by first-order methods, due to their inexpensive computational cost per iteration. However, it has been shown that first-order optimization is prone to reaching sharp minima when trained with large batch sizes. As the batch size increases, the statistical stability of the problem increases, a regime that is well suited for second-order optimization methods. In this thesis, we study a distributed ellipsoidal trust region method for neural networks. We use a block-diagonal approximation of the Hessian, assigning consecutive layers of the network to each process. We solve in parallel for the update direction of each subset of the parameters. We show that our optimizer is fit for large batch training as well as an increasing number of processes.
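For context (generic notation, not taken from the thesis): an ellipsoidal trust-region method computes each update by solving, at iterate \(w_k\),

\[
\min_{p}\; f(w_k) + \nabla f(w_k)^{\top} p + \tfrac{1}{2}\, p^{\top} B_k\, p
\quad \text{subject to} \quad \| D_k\, p \|_2 \le \Delta_k,
\]

where \(B_k\) is the Hessian approximation (block-diagonal here, with consecutive layers assigned to each process), \(D_k\) is a scaling matrix that shapes the ellipsoid, and \(\Delta_k\) is the trust-region radius; the block structure is what lets each process solve for the update direction of its own subset of the parameters.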
68

Distribuované výpočty s využitím technologie ActionScript / Distributed Computing Using ActionScript Technology

Minář, Petr January 2009 (has links)
This Master's thesis deals with the design and realization of optimization software based on ActionScript 3.0. The designed application performs the computation in both a standalone and a distributed variant, using a chosen heuristic method called HC12. A set of test examples was selected to verify functionality and performance. The main purpose of the work was to create public clients on the Internet and have them carry out partial computations. The completed clients are selected according to their capability and carry out tasks. This research has been supported by the Czech Ministry of Education in the frame of MSM 0021630529 Research Intention Intelligent Systems in Automation.
69

Distribuované zpracování zachycené síťové komunikace / Captured Communication Processing on a Distributed System

Hvězda, Matěj January 2016 (has links)
When you need to assess or troubleshoot a network by analysing a capture file, you want it done as fast as possible, and you do not always have a high-performance computer. This is where a distributed system comes in, offering high computing power and a large amount of available memory. I introduce a distributed application that is scalable, extensible and capable of processing captured network communication, developed for the Windows platform using technologies such as Microsoft HPC Pack and Windows Communication Foundation. The application supports multiple capture formats. The parallel system (cluster) contains a database that stores the statistics and data of the captured communication in order to spare the user's computer memory, so the client application can be used on low-performance computers and the data remain available to the client after distributed processing.
70

Programming Support for Scalable, Serializable and Elastic Cloud Applications

Bo Sang (5930225) 30 July 2020 (has links)
Elasticity is an essential feature for cloud applications to handle varying and unpredictable workloads in a cost-effective way on cloud platforms. However, implementing a stateful elastic application is hard, as programmers have to: (1) reason about concurrent execution in the application (serializability); (2) guarantee the application can process more requests at larger scale (scalability); and (3) provide elasticity management to improve performance and resource efficiency for the application (efficient elasticity management). Unfortunately, addressing all those concerns requires deep understanding and rich experience in distributed systems and cloud computing.

In this dissertation, we provide programming support to help programmers implement their stateful elastic cloud applications in a simpler manner. Specifically, we present AEON, an actor-based programming language, and PLASMA, an elastic programming framework. On the one hand, AEON provides programmers with scalability and serializability, executing actor-based programs in a serialized manner while still retaining a high degree of parallelism. Meanwhile, AEON can adjust a program's scale via fine-grained live actor migration. On the other hand, PLASMA includes (1) an elastic programming language as a second "level" of programming (complementing the main application programming language) for describing elasticity behaviors, and (2) a novel semantics-aware elasticity management runtime that tracks program execution and acts upon application features as suggested by elasticity behaviors. With these, PLASMA can provide efficient elasticity management to cloud applications.
