About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

KTHFS – A HIGHLY AVAILABLE AND SCALABLE FILE SYSTEM

D'Souza, Jude Clement January 2013 (has links)
KTHFS is a highly available and scalable file system built from version 0.24 of the Hadoop Distributed File System (HDFS). It provides a platform to overcome the limitations of existing distributed file systems, including the metadata server's scalability in terms of memory usage and throughput, and its availability. This document describes the KTHFS architecture and how it addresses these problems with a well-coordinated, distributed, stateless metadata-server (in our case, Namenode) architecture, backed by a persistence layer such as NDB Cluster. Its primary focus is high availability of the Namenode. KTHFS achieves scalability and recovery by persisting the metadata to an NDB cluster. All namenodes are connected to this NDB cluster and are therefore aware of the state of the file system at any point in time. For high availability, KTHFS provides a multi-Namenode architecture. Since these namenodes are stateless and have a consistent view of the metadata, clients can issue requests to any of them; if one server goes down, a client can retry its operation on the next available namenode. We then evaluate KTHFS in terms of its metadata capacity for medium- and large-size clusters, the throughput and high availability of the Namenode, and an analysis of the underlying NDB cluster. Finally, we conclude this document with a few words on ongoing and future work in KTHFS.
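The abstract includes no code; as a minimal sketch of the stateless multi-namenode failover it describes, the following Python fragment retries a client operation across a list of equivalent namenodes. All names here (NamenodeUnavailable, with_failover, the addresses, rpc_call) are illustrative assumptions, not KTHFS APIs.

```python
# Hypothetical sketch of client-side failover across stateless namenodes.
# None of these names come from KTHFS; they only illustrate the retry idea.

class NamenodeUnavailable(Exception):
    """Raised when a namenode cannot be reached."""

def with_failover(namenodes, operation):
    """Try `operation` on each namenode in turn until one succeeds.

    Because every namenode reads the same metadata from the NDB cluster,
    any of them can serve the request; failover needs no state transfer.
    """
    last_error = None
    for nn in namenodes:
        try:
            return operation(nn)
        except NamenodeUnavailable as err:
            last_error = err  # this namenode is down; try the next one
    raise RuntimeError("all namenodes unavailable") from last_error

# Hypothetical usage (addresses and rpc_call are made up):
# result = with_failover(["nn1:8020", "nn2:8020"],
#                        lambda nn: rpc_call(nn, "mkdir", "/data"))
```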
172

Orchestration of HPC Workflows: Scalability Testing and Cross-System Execution

Tronge, Jacob 14 April 2022 (has links)
No description available.
173

Software Licensing in Cloud Computing : A CASE STUDY ABOUT RELATIONSHIPS FROM A CLOUD SERVICE PROVIDER’S PERSPECTIVE

KABIR, SANZIDA January 2015 (has links)
One of the most important attributes a cloud service provider (CSP) offers its customers through its cloud services is scalability. Scalability gives customers the ability to vary the amount of capacity they use as required. A cloud service can be divided into three service layers: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The scalability of a given service depends on the software licenses at these layers. When a customer wants to increase capacity, what is possible is determined by the licenses the CSP has bought from its suppliers in advance. If a CSP scales up more than what was agreed on, it risks paying a penalty fee to the supplier; if the CSP invests in more licenses than get utilized, the surplus is an investment loss. A second challenge with software licensing arises when a customer outsources its applications to the CSP’s platform. As each application comes with its own set of licenses, there is a level of scalability that cannot be exceeded. If a customer wants the CSP to scale an application up more than usual, the customer needs to inform the vendors. However, a common misunderstanding is that the customer expects the CSP to notify the vendor; the vendor then never gets notified, and the customer is in danger of paying a penalty fee. This in turn hurts the CSP’s relationship with the customer. The recommendation to the CSP under study is to establish successful customer relationship management (CRM) and supplier relationship management (SRM). CRM with the customer will minimize such misunderstandings and clarify the responsibilities when a customer outsources an application to the CSP. SRM with the supplier will help the CSP maintain the flexible payment method it has with a certain supplier, and will set an example for the remaining suppliers to change their inflexible payment methods. Achieving a flexible payment method with the suppliers will make it easier for the CSP to find an equilibrium between scalability and licenses.
174

Reducing Inter-Process Communication Overhead in Parallel Sparse Matrix-Matrix Multiplication

Ahmed, Salman, Houser, Jennifer, Hoque, Mohammad A., Raju, Rezaul, Pfeiffer, Phil 01 July 2017 (has links)
Parallel sparse matrix-matrix multiplication algorithms (PSpGEMM) spend most of their running time on inter-process communication. In distributed matrix-matrix multiplication, much of this time is spent interchanging the partial results that are needed to calculate the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern whose cost is logarithmic in the number of processors (i.e., O(log p), where p is the number of processors). This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction in inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm that takes O(p) time to accumulate the results.
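As a hedged illustration of an O(log p) accumulation pattern (this is not the paper's PSpGEMM implementation; partial results are modeled as plain dicts, and p is assumed to be a power of two), a pairwise merge in log2(p) rounds with mpi4py might look like:

```python
# Minimal sketch of an O(log p) accumulation pattern with mpi4py.
# NOT the paper's code: partial results are modeled as Python dicts
# mapping (row, col) -> value, and p is assumed to be a power of 2.
from mpi4py import MPI

def accumulate_log_p(comm, partial):
    """Merge partial results pairwise in log2(p) rounds."""
    rank, size = comm.Get_rank(), comm.Get_size()
    step = 1
    while step < size:
        if rank % (2 * step) == 0:
            other = comm.recv(source=rank + step)   # receive neighbor's partials
            for key, val in other.items():          # merge: sum colliding entries
                partial[key] = partial.get(key, 0) + val
        else:
            comm.send(partial, dest=rank - step)    # hand off and drop out
            return None
        step *= 2
    return partial  # rank 0 holds the fully accumulated result

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    mine = {(comm.Get_rank(), 0): 1.0}              # toy partial result
    total = accumulate_log_p(comm, mine)
    if total is not None:
        print(total)
```

Each round halves the number of active senders, so every partial result crosses the network O(log p) times instead of being forwarded through O(p) sequential hand-offs.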
175

Consistency in distributed storage systems : theoretical foundations with applications to cloud storage

Viotti, Paolo 06 April 2017 (has links)
Engineering distributed systems is an onerous task: the design goals of performance, correctness and reliability are intertwined in complex tradeoffs, which have been outlined by multiple theoretical results. These tradeoffs have become increasingly important as computing and storage have shifted towards distributed architectures. Additionally, the general lack of systematic approaches to tackle distribution in modern programming tools has worsened these issues, especially as nowadays most programmers have to take on the challenges of distribution. As a result, there exists an evident divide between programming abstractions, application requirements and storage semantics, which hinders the work of designers and developers. This thesis presents a set of contributions towards the overarching goal of designing reliable distributed storage systems, by examining these issues through the prism of consistency. We begin by providing a uniform, declarative framework to formally define consistency semantics. We use this framework to describe and compare over fifty non-transactional consistency semantics proposed in previous literature. The declarative and composable nature of this framework allows us to build a partial order of consistency models according to their semantic strength. We show the practical benefits of composability by designing and implementing Hybris, a storage system that leverages different models and semantics to improve over the weak consistency generally offered by public cloud storage platforms. We demonstrate Hybris' efficiency and show that it can tolerate arbitrary faults of cloud stores at the cost of tolerating outages. Finally, we propose a novel technique to verify the consistency guarantees offered by real-world storage systems. This technique leverages our declarative approach to consistency: we consider consistency semantics as invariants over graph representations of storage-system executions. A preliminary implementation proves this approach practical and useful in improving over the state of the art on consistency verification.
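To make the "consistency semantics as invariants over executions" idea concrete, here is a deliberately simplified sketch. The Op model and the read-your-writes check below are assumptions for illustration only, not the framework defined in the thesis, and the check ignores writes from other sessions.

```python
# Illustrative sketch: checking a simple consistency invariant over an
# execution history. The Op model is an assumption for illustration;
# it is not the declarative framework defined in the thesis.
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    session: str   # client session the operation belongs to
    kind: str      # "write" or "read"
    key: str
    value: object
    time: int      # position in the session's program order

def violates_read_your_writes(history):
    """Return True if some read misses every earlier write from its own session.

    Simplified: a real checker would also admit newer values written by
    other sessions; here we only test the session-local invariant.
    """
    for read in (o for o in history if o.kind == "read"):
        own_writes = [o for o in history
                      if o.kind == "write" and o.session == read.session
                      and o.key == read.key and o.time < read.time]
        if own_writes and read.value not in {w.value for w in own_writes}:
            return True  # read returned a value older than the session's writes
    return False

history = [
    Op("s1", "write", "x", 1, time=0),
    Op("s1", "read",  "x", 0, time=1),  # stale read: violates read-your-writes
]
print(violates_read_your_writes(history))  # True
```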
176

Mechatronics development of a scalable exoskeleton for the lower part of a handicapped person.

Kardofaki, Mohamad 11 June 2019 (has links)
This thesis introduces the importance of scalable lower-limb exoskeletons for disabled teenagers suffering from neuromuscular disorders and other pathological conditions. The new term "scalable" describes the ability of the exoskeleton to physically grow with the user and to be adapted to his or her morphology. A distinct analysis of the physical manifestations the patients experience has been carried out, concerning the pubertal growth spurt and its possible secondary effects. The study of the literature shows that no rehabilitation device is customized well enough to the needs of a growing teenager, owing to the fast growth of their bodies and the progressive nature of their diseases. As this is the first time the term "scalability" has been brought up for exoskeletons, its functional requirements are defined in order to determine the constraints imposed on the design of the new exoskeleton. The mechatronic development of a scalable exoskeleton is then presented, including the development of its joint actuator, its mechanical structure and its attachments. Finally, preliminary results on the joint actuator's performance when simulating functional movements related to growth show a high capability of trajectory following and of executing torque-based motions, while the findings associated with the scalable structure show that the system can be adapted to users of different sizes and ages.
177

The Link Between Image Segmentation and Image Recognition

Sharma, Karan 01 January 2012 (has links)
A long-standing debate in the computer vision community concerns the link between segmentation and recognition. The question I am trying to answer here is: does image segmentation as a preprocessing step help image recognition? Despite a plethora of literature to the contrary, some authors have suggested that recognition driven by high-quality segmentation is the most promising approach in image recognition, because the recognition system will see only the relevant features on the object and not redundant features outside it (Malisiewicz and Efros 2007; Rabinovich, Vedaldi, and Belongie 2007). This thesis explores the following question: if segmentation precedes recognition, and segments are fed directly to the recognition engine, will it help the recognition machinery? Another question I address in this thesis is the scalability of recognition systems. Any computer vision system, concept or algorithm, without exception, if it is to stand the test of time, will have to address the issue of scalability.
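A minimal sketch of the "segment, then recognize" pipeline the thesis questions might look like the following, assuming scikit-image is available. The classify stub stands in for any recognition engine; nothing here reproduces the thesis's actual system.

```python
# Minimal sketch of a "segment, then recognize" pipeline, assuming
# scikit-image is installed. The classifier is a stub; the thesis's
# actual recognition engine is not reproduced here.
import numpy as np
from skimage import data
from skimage.segmentation import slic

def classify(region_pixels):
    """Stub recognizer: stands in for any classifier applied per segment."""
    return "object" if region_pixels.mean() > 100 else "background"

image = data.astronaut()                      # sample RGB image
segments = slic(image, n_segments=50)         # superpixel segmentation

labels = {}
for seg_id in np.unique(segments):
    mask = segments == seg_id
    # The recognizer sees only the pixels inside the segment, which is the
    # hypothesized benefit: no redundant features from outside the object.
    labels[seg_id] = classify(image[mask])

print(labels)
```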
178

Popcorn Linux: enabling efficient inter-core communication in a Linux-based multikernel operating system

Shelton, Benjamin H. 31 May 2013 (has links)
As manufacturers introduce new machines with more cores, more NUMA-like architectures, and more tightly integrated heterogeneous processors, the traditional abstraction of a monolithic OS running on an SMP system is encountering new challenges. One proposed path forward is the multikernel operating system. Previous efforts have shown promising results both in scalability and in support for heterogeneity. However, one effort's source code is not freely available (FOS), and the other effort is not self-hosting and does not support a majority of existing applications (Barrelfish). In this thesis, we present Popcorn, a Linux-based multikernel operating system. While Popcorn was a group effort, the boot-layer code and the memory-partitioning code are the author's work, and we present them in detail here. To our knowledge, we are the first to support multiple instances of the Linux kernel on a 64-bit x86 machine and to support more than 4 kernels running simultaneously. We demonstrate that existing subsystems within Linux can be leveraged to meet the design goals of a multikernel OS. Taking this approach, we developed a fast inter-kernel network driver and messaging layer. We demonstrate that the network driver can share a 1 Gbit/s link without degraded performance and that, in combination with guest kernels, it meets or exceeds the performance of SMP Linux with an event-based web server. We evaluate the messaging layer with microbenchmarks and conclude that it performs well given the limitations of current x86-64 hardware. Finally, we use the messaging layer to provide live process migration between cores. / Master of Science
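Popcorn's messaging layer is kernel code in C; purely as a conceptual analogy for passing messages through a memory window shared between kernel instances, the following Python sketch implements a single-producer/single-consumer ring buffer over shared memory. Every name and layout choice here is an assumption for illustration, and there is no synchronization beyond the head/tail indices, so it is not safe for concurrent use.

```python
# Conceptual sketch only: a single-producer/single-consumer ring buffer
# over shared memory, loosely analogous to a shared window between
# kernels. Illustrative layout; no locking, not concurrency-safe.
from multiprocessing import shared_memory

SLOT = 64          # fixed-size message slots (messages must fit)
NSLOTS = 16        # ring capacity
HDR = 2            # byte 0: head index, byte 1: tail index

def create_ring(name):
    return shared_memory.SharedMemory(name=name, create=True,
                                      size=HDR + SLOT * NSLOTS)

def send(shm, msg: bytes):
    head, tail = shm.buf[0], shm.buf[1]
    if (head + 1) % NSLOTS == tail:
        raise BufferError("ring full")          # receiver has not caught up
    off = HDR + head * SLOT
    shm.buf[off:off + len(msg)] = msg           # copy message into its slot
    shm.buf[0] = (head + 1) % NSLOTS            # publish by advancing head

def recv(shm):
    head, tail = shm.buf[0], shm.buf[1]
    if head == tail:
        return None                             # ring empty
    off = HDR + tail * SLOT
    msg = bytes(shm.buf[off:off + SLOT])        # full slot, zero-padded
    shm.buf[1] = (tail + 1) % NSLOTS            # consume by advancing tail
    return msg

shm = create_ring("popcorn_demo")
send(shm, b"hello from kernel 0")
print(recv(shm))
shm.close(); shm.unlink()
```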
179

Efficient data and metadata processing in large-scale distributed systems

Shi, Rong January 2018 (has links)
No description available.
180

Cloud-Based Collaborative Local-First Software

Vallin, Tor January 2023 (has links)
Local-first software has the potential to offer users a great experience by combining the best aspects of traditional applications with those of cloud-based applications. However, little has been documented about developing backends for local-first software, particularly backends that are scalable while still supporting end-to-end encryption. This thesis presents a backend architecture that was then implemented and evaluated. The implementation was shown to be scalable, maintaining an estimated end-to-end latency of around 30-50 ms as the number of simulated clients increased. The architecture supports end-to-end encryption to protect user privacy and to ensure that neither cloud nor service providers can access user data. Furthermore, by occasionally performing snapshots, the encryption overhead relative to the raw data was shown to be manageable: around 18.2% in the best case and 118.9% when using data from automerge-perf, a standard benchmark. Lastly, processing times were shown to be upwards of 50 times faster when using snapshots compared to handling individual changes.
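A hedged sketch of why snapshotting reduces encryption overhead (this is not the thesis's implementation, and the change format below is a made-up stand-in for a compacted document state): each individually encrypted change carries a fixed ciphertext overhead, so compacting many changes into one encrypted snapshot amortizes that cost. Requires the `cryptography` package.

```python
# Sketch: per-change encryption vs. occasional snapshots.
# Not the thesis's code; changes are toy byte strings and the snapshot
# is a simple concatenation standing in for a compacted document state.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

changes = [f"set field{i} = {i}".encode() for i in range(1000)]
raw = sum(len(c) for c in changes)

# Strategy 1: encrypt every change individually (fixed overhead per change).
per_change = sum(len(f.encrypt(c)) for c in changes)

# Strategy 2: occasionally snapshot, i.e. encrypt the compacted state once.
snapshot = b"\n".join(changes)
snapshotted = len(f.encrypt(snapshot))

print(f"per-change overhead: {(per_change - raw) / raw:.1%}")
print(f"snapshot overhead:   {(snapshotted - raw) / raw:.1%}")
```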
