171 |
Activities of champions implementing e-Learning processes in higher education
Beukes-Amiss, Catherine Margaret 14 February 2012 (has links)
The increasing rate at which e-Learning is implemented in institutions of higher education has been widely reported. The literature suggests that institutions of higher education across the globe use the efforts of champions to initiate and establish e-Learning activities. The paucity of research about the activities of e-Learning champions in an African context is noticeable, even as the implementation of e-Learning spreads rapidly in Africa. To provide information within this African context, this study sought to identify and explain the activities and characteristics (through strategies) as well as the qualities (through motivations) of e-Learning champions as they engage in innovative practices in institutions of higher education in Africa. Two research questions guided the study, examining how (activities and characteristics through strategies) and why (qualities through motivations) champions engaged in their activities within their institutions. To address these questions, the study followed a qualitative research design, using semi-structured interviews with champions and policy-level staff in institutions of higher education in Namibia, South Africa and Kenya, as well as documents, as its data sources. The intention was not to compare champions and their activities across these countries, but rather to establish an understanding of these champions and their contexts as a group. The contextual relevance was based solely on the availability of champions and policy-level staff, owing to the purposive and convenience sampling techniques applied. The study's findings show that the activities of champions in Africa are not significantly different from those described in recent literature on non-African countries. Rather, particular strategies and motivational factors were found that relate to the activities, characteristics and qualities of champions. The support factors identified by policy-level staff and in institutional policy documents differed from those the champions themselves found motivating. Champions expressed the need for an approved budget, sufficient infrastructure, an e-Learning unit with specialised staff, and dedicated time for e-Learning activities. Policy-level staff pointed to support already in place in the form of some financial considerations for e-Learning and incentives. No explicit reference could be found in policy documents to the role of champions or what motivates them. This disjuncture between the environment of the champions and that of the established institution is explained by a maturity model of the institutionalisation of innovations. The study contributes to the scholarly domain at several levels. Firstly, the proposed conceptual framework contributes to academic discourse by supplying variables of analysis (strategies and motivations) for champions who engage in innovation within established institutions, institutional procedures, directives (through guidelines) and policies (through intentions), as well as goals that lead to a common objective of achieving scalability and sustainability. Secondly, the study finds that institutions wishing to have innovations institutionalised must be aware of the disturbances such innovations can bring, and must therefore create policies that recognise the role of champions and are able to accommodate, tolerate and support them.
Thirdly, the synthesis of champions' characteristics and qualities with the support they need, together with the scalability and sustainability issues that may motivate institutions of higher education to support champions (or not), contributes guidelines that may be used to identify, acknowledge or recruit potential champions where they are needed. / Thesis (PhD)--University of Pretoria, 2011. / Science, Mathematics and Technology Education / unrestricted
|
172 |
KTHFS – A HIGHLY AVAILABLE AND SCALABLE FILE SYSTEM
D'Souza, Jude Clement January 2013 (has links)
KTHFS is a highly available and scalable file system built from version 0.24 of the Hadoop Distributed File System. It provides a platform to overcome the limitations of existing distributed file systems. These limitations include the scalability of the metadata server in terms of memory usage and throughput, and its availability. This document describes the KTHFS architecture and how it addresses these problems by providing a well-coordinated, distributed, stateless metadata server (in our case, Namenode) architecture, backed by a persistence layer such as an NDB cluster. Its primary focus is the high availability of the Namenode. KTHFS achieves scalability and recovery by persisting the metadata to an NDB cluster. All namenodes are connected to this NDB cluster and hence are aware of the state of the file system at any point in time. For high availability, KTHFS provides a multi-Namenode architecture. Since these namenodes are stateless and have a consistent view of the metadata, clients can issue requests to any of them; if one of these servers goes down, clients can retry the operation on the next available namenode. We then evaluate KTHFS in terms of its metadata capacity for medium and large clusters, the throughput and high availability of the Namenode, and an analysis of the underlying NDB cluster. Finally, we conclude this document with a few words on ongoing and future work in KTHFS.
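The client-side failover described above can be made concrete with a short sketch. This is a minimal illustration of the retry-on-next-namenode idea only; the FileSystemClient class and its methods are hypothetical stand-ins, not the actual KTHFS API.

```python
# Minimal sketch of retrying a metadata operation across stateless namenodes.
# FileSystemClient and execute() are hypothetical, not the real KTHFS API.
class FileSystemClient:
    def __init__(self, namenodes):
        # All namenodes share one NDB-backed view of the metadata,
        # so any of them can serve any request.
        self.namenodes = list(namenodes)

    def request(self, op, *args):
        last_err = None
        for nn in self.namenodes:           # try each namenode in turn
            try:
                return nn.execute(op, *args)
            except ConnectionError as err:  # namenode down: fail over
                last_err = err
        raise RuntimeError("no namenode available") from last_err
```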
|
173 |
Orchestration of HPC Workflows: Scalability Testing and Cross-System Execution
Tronge, Jacob 14 April 2022 (has links)
No description available.
|
174 |
Software Licensing in Cloud Computing: A CASE STUDY ABOUT RELATIONSHIPS FROM A CLOUD SERVICE PROVIDER'S PERSPECTIVE
KABIR, SANZIDA January 2015 (has links)
One of the most important attributes a cloud service provider (CSP) offers its customers through its cloud services is scalability. Scalability gives customers the ability to vary the amount of capacity they use as required. A cloud service can be divided into three service layers: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The scalability of a given service depends on the software licenses at these layers. When a customer wants to increase capacity, what is possible is determined by the licenses the CSP has bought from its suppliers in advance. If a CSP scales up more than what was agreed on, there is a risk that the CSP must pay a penalty fee to the supplier; if the CSP invests in too many licenses that do not get utilized, the surplus is an investment loss. A second challenge with software licensing arises when a customer outsources their applications to the CSP's platform. As each application comes with a set of licenses, there is a certain level of scalability that cannot be exceeded. If a customer wants the CSP to scale an application up more than usual, the customer needs to inform the vendors. A common misunderstanding, however, is that the customer expects the CSP to notify the vendor; the vendor may then never be notified, and the customer is in danger of paying a penalty fee. This in turn hurts the CSP's relationship with the customer. The recommendation to the CSP under study is to create successful customer relationship management (CRM) and supplier relationship management (SRM). CRM with the customer will minimize such misunderstandings and make responsibilities clear when a customer outsources an application to the CSP. SRM with the supplier will help the CSP maintain the flexible payment method it has with a certain supplier, and will set an example for the remaining suppliers to change their inflexible payment methods. Achieving a flexible payment method with the suppliers will make it easier for the CSP to find an equilibrium between scalability and licenses.
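The over- versus under-provisioning tradeoff at the heart of this equilibrium can be expressed as simple arithmetic. The following is an illustrative sketch only; the function name and all cost figures are made-up assumptions, not numbers from the case study.

```python
# Illustrative cost model for license provisioning; all figures are
# hypothetical assumptions, not values from the case study.
def licensing_cost(licenses_bought, peak_usage, unit_price, penalty_per_unit):
    overage = max(0, peak_usage - licenses_bought)   # scaled past the agreement
    idle = max(0, licenses_bought - peak_usage)      # paid for but never used
    total = licenses_bought * unit_price + overage * penalty_per_unit
    return total, idle

# Under-provisioning triggers penalty fees; over-provisioning wastes licenses.
print(licensing_cost(80, 100, unit_price=10, penalty_per_unit=30))   # (1400, 0)
print(licensing_cost(120, 100, unit_price=10, penalty_per_unit=30))  # (1200, 20)
```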
|
175 |
Reducing Inter-Process Communication Overhead in Parallel Sparse Matrix-Matrix Multiplication
Ahmed, Salman, Houser, Jennifer, Hoque, Mohammad A., Raju, Rezaul, Pfeiffer, Phil 01 July 2017 (has links)
Parallel sparse matrix-matrix multiplication algorithms (PSpGEMM) spend most of their running time on inter-process communication. In the case of distributed matrix-matrix multiplications, much of this time is spent interchanging the partial results needed to calculate the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern with logarithmic complexity in the number of processors (i.e., O(log p), where p is the number of processors). This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction in inter-process communication overhead for matrices with larger dimensions, compared to another one-dimensional parallel algorithm that takes O(p) time to accumulate the results.
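A tree-structured reduction is the standard way to obtain such an O(log p) accumulation pattern. The sketch below is a generic illustration of that idea using mpi4py, with partial products held as Python dicts; it is an assumption-laden stand-in, not the paper's actual implementation.

```python
# Generic O(log p) tree accumulation of sparse partial products with mpi4py.
# This is an illustrative sketch, not the algorithm from the paper.
from mpi4py import MPI

def log_p_accumulate(comm, partial):
    """Merge per-rank partial results in O(log p) rounds; rank 0 gets the sum.

    `partial` maps (row, col) -> value for this rank's partial product.
    """
    rank, p = comm.Get_rank(), comm.Get_size()
    step = 1
    while step < p:
        if rank % (2 * step) == step:
            comm.send(partial, dest=rank - step)   # hand off and retire
            return None
        if rank % (2 * step) == 0 and rank + step < p:
            incoming = comm.recv(source=rank + step)
            for key, val in incoming.items():      # sparse accumulation
                partial[key] = partial.get(key, 0.0) + val
        step *= 2
    return partial                                 # only rank 0 reaches here

comm = MPI.COMM_WORLD
result = log_p_accumulate(comm, {(comm.Get_rank(), 0): 1.0})
```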
|
176 |
Consistency in distributed storage systems: theoretical foundations with applications to cloud storage
Viotti, Paolo 06 April 2017 (has links)
Engineering distributed systems is an onerous task: the design goals of performance, correctness and reliability are intertwined in complex tradeoffs, which have been outlined by multiple theoretical results. These tradeoffs have become increasingly important as computing and storage have shifted towards distributed architectures. Additionally, the general lack of systematic approaches to tackle distribution in modern programming tools has worsened these issues, especially as nowadays most programmers have to take on the challenges of distribution. As a result, there exists an evident divide between programming abstractions, application requirements and storage semantics, which hinders the work of designers and developers. This thesis presents a set of contributions towards the overarching goal of designing reliable distributed storage systems, by examining these issues through the prism of consistency. We begin by providing a uniform, declarative framework to formally define consistency semantics. We use this framework to describe and compare over fifty non-transactional consistency semantics proposed in previous literature. The declarative and composable nature of this framework allows us to build a partial order of consistency models according to their semantic strength. We show the practical benefits of composability by designing and implementing Hybris, a storage system that leverages different models and semantics to improve over the weak consistency generally offered by public cloud storage platforms. We demonstrate Hybris' efficiency and show that it can tolerate arbitrary faults of cloud stores at the cost of tolerating outages. Finally, we propose a novel technique to verify the consistency guarantees offered by real-world storage systems. This technique leverages our declarative approach to consistency: we consider consistency semantics as invariants over graph representations of storage system executions. A preliminary implementation proves this approach practical and useful in improving over the state of the art on consistency verification.
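To make the "invariants over executions" idea concrete, here is a toy sketch that checks one well-known model, monotonic reads, over a recorded trace. The event encoding and the check are deliberate simplifications for illustration; they are not the thesis' actual formalism.

```python
# Toy illustration of a consistency model as an invariant over an execution
# trace; the event encoding is a simplification, not the thesis' formalism.
from collections import defaultdict

def monotonic_reads(execution):
    """A client's successive reads must never observe an older version."""
    newest_read = defaultdict(int)            # client -> newest version read
    for ev in execution:                      # events in per-client order
        if ev["op"] == "read":
            if ev["version"] < newest_read[ev["client"]]:
                return False                  # invariant violated
            newest_read[ev["client"]] = ev["version"]
    return True

trace = [
    {"op": "read", "client": "c1", "version": 2},
    {"op": "read", "client": "c1", "version": 1},  # goes backwards: stale read
]
assert monotonic_reads(trace) is False
```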
|
177 |
Mechatronics development of a scalable exoskeleton for the lower part of a handicapped person
Kardofaki, Mohamad 11 June 2019 (has links)
This thesis introduces the importance of scalable lower-limb exoskeletons for disabled teenagers suffering from neuromuscular disorders and other pathological conditions. The new term "scalable" describes the ability of the exoskeleton to physically grow with the user and to be adapted to his or her morphology. A distinctive analysis has been made of the physical manifestations that these patients experience, with respect to the pubertal growth spurt and possible secondary effects. The study of the literature shows that no rehabilitation device is customized enough to the needs of a growing teenager, owing to the fast growth of their limbs and the progressive nature of their diseases. As this is the first time the term "scalability" has been applied to exoskeletons, its functional requirements are defined in order to determine the constraints imposed on the design of the new exoskeleton. The mechatronics development of a scalable exoskeleton is presented, including the development of its joint actuator, its mechanical structure and its attachments. Finally, preliminary results on the joint actuator's performance when simulating functional movements related to growth show a high capability for trajectory following and for executing torque-based motions, while the findings associated with the scalable structure show that the system can be adapted to users of different sizes and ages.
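For readers unfamiliar with torque-based trajectory following, a standard way to realise it at a single joint is a PD control law around a reference trajectory. The sketch below is a generic textbook illustration under assumed gains and a point-mass joint model; it does not describe the thesis' actual actuator or controller.

```python
# Generic PD trajectory-following sketch for one exoskeleton joint.
# Gains and the toy joint model are assumptions, not the thesis' design.
def pd_torque(q, qd, q_ref, qd_ref, kp=80.0, kd=6.0):
    """Torque driving joint angle q toward the reference q_ref (N*m)."""
    return kp * (q_ref - q) + kd * (qd_ref - qd)

# Tiny simulation: a joint with inertia I tracking a fixed reference angle.
I, dt = 0.5, 0.001                 # kg*m^2, s
q, qd = 0.0, 0.0                   # start at rest
for _ in range(2000):              # two simulated seconds
    tau = pd_torque(q, qd, q_ref=0.6, qd_ref=0.0)
    qd += (tau / I) * dt           # integrate the joint dynamics
    q += qd * dt
print(round(q, 3))                 # settles near the 0.6 rad reference
```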
|
178 |
The Link Between Image Segmentation and Image Recognition
Sharma, Karan 01 January 2012 (has links)
A long-standing debate in the computer vision community concerns the link between segmentation and recognition. The question I am trying to answer here is: does image segmentation as a preprocessing step help image recognition? In spite of a plethora of literature to the contrary, some authors have suggested that recognition driven by high-quality segmentation is the most promising approach in image recognition, because the recognition system will see only the relevant features on the object and no redundant features outside it (Malisiewicz and Efros 2007; Rabinovich, Vedaldi, and Belongie 2007). This thesis explores the following question: if segmentation precedes recognition, and segments are fed directly to the recognition engine, will it help the recognition machinery? Another question I am trying to address in this thesis is the scalability of recognition systems. Any computer vision system, concept or algorithm, without exception, will have to address the issue of scalability if it is to stand the test of time.
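The "segments fed directly to the recognition engine" pipeline can be sketched in a few lines. The following illustration uses scikit-image's Felzenszwalb segmenter and a placeholder classifier; the parameter values and the masking scheme are assumptions for illustration, not the thesis' method.

```python
# Sketch of "segment first, then recognize": each segment is masked out and
# passed to a classifier. The classifier and parameters are placeholders.
import numpy as np
from skimage.segmentation import felzenszwalb

def recognize_segments(image, classify):
    """image: HxWx3 float array; classify: any callable region -> label."""
    labels = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)
    predictions = {}
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        region = image * mask[..., None]         # zero out off-segment pixels,
        predictions[seg_id] = classify(region)   # so only its features are seen
    return predictions
```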
|
179 |
Popcorn Linux: enabling efficient inter-core communication in a Linux-based multikernel operating system
Shelton, Benjamin H. 31 May 2013 (has links)
As manufacturers introduce new machines with more cores, more NUMA-like architectures, and more tightly integrated heterogeneous processors, the traditional abstraction of a monolithic OS running on an SMP system is encountering new challenges. One proposed path forward is the multikernel operating system. Previous efforts have shown promising results both in scalability and in support for heterogeneity. However, one effort's source code is not freely available (FOS), and the other effort is not self-hosting and does not support a majority of existing applications (Barrelfish).
In this thesis, we present Popcorn, a Linux-based multikernel operating system. While Popcorn was a group effort, the boot layer code and the memory partitioning code are the author's work, and we present them in detail here. To our knowledge, we are the first to support multiple instances of the Linux kernel on a 64-bit x86 machine and to support more than 4 kernels running simultaneously.
We demonstrate that existing subsystems within Linux can be leveraged to meet the design goals of a multikernel OS. Taking this approach, we developed a fast inter-kernel network driver and messaging layer. We demonstrate that the network driver can share a 1 Gbit/s link without degraded performance and that in combination with guest kernels, it meets or exceeds the performance of SMP Linux with an event-based web server. We evaluate the messaging layer with microbenchmarks and conclude that it performs well given the limitations of current x86-64 hardware. Finally, we use the messaging layer to provide live process migration between cores. / Master of Science
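Popcorn's messaging layer moves messages between kernel instances over shared memory. As a loose, user-space analogue of that idea, here is a single-slot shared-memory channel in Python; the one-slot layout and spin-wait protocol are illustrative simplifications and bear no relation to Popcorn's actual in-kernel design.

```python
# User-space analogue of an inter-instance shared-memory message slot.
# Single slot, spin-wait handshake; a simplification, not Popcorn's design.
from multiprocessing import shared_memory

SLOT = 4096  # byte 0: "full" flag; bytes 1-2: length; rest: payload

def send(shm, payload: bytes):
    assert len(payload) <= SLOT - 3
    while shm.buf[0]:                         # spin until receiver drains slot
        pass
    shm.buf[1:3] = len(payload).to_bytes(2, "little")
    shm.buf[3:3 + len(payload)] = payload
    shm.buf[0] = 1                            # publish the message

def recv(shm) -> bytes:
    while not shm.buf[0]:                     # spin until a message arrives
        pass
    n = int.from_bytes(shm.buf[1:3], "little")
    payload = bytes(shm.buf[3:3 + n])
    shm.buf[0] = 0                            # mark the slot free again
    return payload

# One side: shm = shared_memory.SharedMemory(create=True, size=SLOT, name="ch0")
# Other side: shm = shared_memory.SharedMemory(name="ch0"); print(recv(shm))
```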
|
180 |
Efficient data and metadata processing in large-scale distributed systems
Shi, Rong January 2018 (has links)
No description available.
|