91

A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scaleable cloud based applications

Mlawanda, Joyce, 2012
Thesis (MScEng)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: Modern-day online applications are required to deal with an ever-increasing number of users without a decrease in performance. This implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications and assesses their scalability compared to traditionally hosted servers. After comparing the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be the platform best suited for the development of such a framework.
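The abstract does not reproduce the framework's code, but as a loose illustration of the kind of provisioning it automates, the sketch below uses the boto3 library (which postdates the 2012 thesis) to create an auto-scaling group on Amazon Web Services. The group name, launch template, region, and scaling target are hypothetical placeholders; this is a minimal sketch of the general idea, not the thesis framework itself.

```python
# Illustrative sketch (not the thesis framework): provisioning an
# auto-scaling group on AWS with boto3. All resource names are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Keep between 2 and 10 instances alive, launched from an assumed,
# pre-existing launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",                    # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-app-template"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)

# Scale automatically by tracking average CPU utilisation at 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```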
92

Localised routing algorithms in communication networks with Quality of Service constraints : performance evaluation and enhancement of new localised routing approaches to provide Quality of Service for computer and communication networks

Mohammad, Abdulbaset H. T., January 2010
Quality of Service (QoS) is a concept that is gaining increasing attention in the Internet industry. Best-effort delivery is no longer acceptable in situations that need high bandwidth provisioning, low loss, and streaming of multimedia applications; emerging multimedia applications require levels of quality of service beyond those supported by best-effort networks. QoS routing is an essential part of any QoS architecture in communication networks: it aims to select, among the many possible choices, a path that has sufficient resources to accommodate the QoS requirements. QoS routing can significantly improve network performance because it is aware of the network's QoS state. However, most QoS routing algorithms require the maintenance of global network state information to make routing decisions, and this state must be exchanged periodically among routers, since the efficiency of a routing algorithm depends on the accuracy of link-state information. As a result, such algorithms scale poorly: maintaining accurate link-state information and distributing it to every node in the network incurs high communication overhead and high computation effort. The ultimate goal of this thesis is to contribute towards enhancing the scalability of QoS routing algorithms. Towards this goal, the thesis focuses on localised QoS routing algorithms, which were proposed to overcome the problems of using global network state information. In this approach, each source node maintains a predetermined set of candidate paths for each destination and makes routing decisions based on locally collected flow statistics and flow blocking probabilities, thereby avoiding the high communication and processing overheads associated with maintaining a global network state.
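As a minimal sketch of the general localised-routing idea (not the specific algorithms evaluated in the thesis), the Python fragment below keeps a fixed set of candidate paths per destination and steers new flows toward the path with the lowest locally observed blocking probability; all class, path, and parameter names are invented for illustration.

```python
# Minimal sketch of localised QoS routing: the source keeps a fixed set
# of candidate paths per destination and prefers paths with the lowest
# locally observed blocking rate. Illustrative only.
import random
from collections import defaultdict

class LocalisedRouter:
    def __init__(self, candidate_paths):
        # candidate_paths: {destination: [path_id, ...]}
        self.paths = candidate_paths
        self.attempts = defaultdict(lambda: 1)   # per-path flow attempts
        self.blocked = defaultdict(int)          # per-path blocked flows

    def blocking_probability(self, path):
        return self.blocked[path] / self.attempts[path]

    def select_path(self, destination, explore=0.05):
        # Occasionally probe a random candidate so stale statistics
        # do not pin all traffic to one path forever.
        candidates = self.paths[destination]
        if random.random() < explore:
            return random.choice(candidates)
        return min(candidates, key=self.blocking_probability)

    def record(self, path, was_blocked):
        self.attempts[path] += 1
        if was_blocked:
            self.blocked[path] += 1

# Usage: two candidate paths to destination "D".
router = LocalisedRouter({"D": ["p1", "p2"]})
path = router.select_path("D")
router.record(path, was_blocked=False)
```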
93

Understanding Scalability and Sustainability in Mobile Learning : A Systems Development Framework

Wingkvist, Anna, January 2009
The rapid development of mobile technologies, combined with access to content almost anywhere and at any time, allows people to experience learning in a wide variety of new situations. Mobile learning brings the promise of learning "on the move" by allowing learners to take control over time and space, thus making learning "more natural". The field of mobile learning has evolved rapidly in the last ten years and many initiatives have been conducted worldwide. However, research results indicate that few of these efforts have produced any lasting outcomes. It is evident that these initiatives face inherently complex settings and that their outcomes might not live up to their promises; they will not be adopted and, hence, will not become sustainable. Many of the complex issues faced by mobile learning initiatives are similar to those faced in the development of information systems. This similarity suggests that an improved development practice might hold one piece of the key to sustainable mobile learning. The aim of the research presented in this thesis is to investigate the relation between information systems development practice and mobile learning development, and whether methods and models originating within information systems development can be used to strengthen mobile learning initiatives. In order to investigate this relation, this thesis studies several mobile learning initiatives with a particular focus on how and why development and research were initiated and conducted. Concepts found in mobile learning practice are strengthened by providing a theoretical perspective with roots in information systems development. The outcomes of the studies presented in this thesis indicate that the development practice of mobile learning initiatives can be redefined in order to achieve more sustainable results. The core of this thesis consists of eight peer-reviewed scientific publications that have been presented at international conferences. Five of the papers explore the field of mobile learning and its practice, while the other three present the central ideas that serve as the basis for the proposed framework, how it was developed, and the motivations behind its creation. The main contribution of this thesis is a novel development framework aimed at researchers and practitioners in the field of mobile learning. The framework defines the life-cycle of a mobile learning initiative and identifies the importance of emphasizing the concepts of scalability and sustainability during the development process. This may be a way to reduce the complexity inherent in mobile learning and its settings, and a means to improve the outcomes of coming mobile learning initiatives in terms of long-lasting, usable results.
94

A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression

Wei, Ming
In this dissertation we first propose and implement a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system based on a human perceptual model. It builds on state-of-the-art research on embedded wavelet image compression and the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very promising compression performance and visual quality compared to the new wavelet-based international still image compression standard, JPEG 2000. It also shows significantly better speed and comparable visual quality in comparison to the best available rate-scalable color image codec, CSPIHT, which is based on Set Partitioning In Hierarchical Trees (SPIHT) and the Karhunen-Loève Transform (KLT). Secondly, a novel wavelet-based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed; FBWT-based interframe compression is efficient in both compression and speed. The compression performance of our video codec is compared with H.263+: at the same bit rate, our encoder is comparable to the H.263+ scheme and, with a slightly lower Peak Signal-to-Noise Ratio (PSNR) value, produces a more visually pleasing result. This implementation also preserves the scalability of the wavelet embedded coding technique. Thirdly, the scheme for optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatio-temporal sensitivity of the HVS model. The bit allocation among color bands based on Kelly's spatio-temporal CSF model is designed to achieve the perceptual optimum for human eyes. A perceptually tuned, wavelet-based, rate-scalable video encoding/decoding system has been designed and implemented based on this new bit allocation scheme. Finally, to present the potential applications of our rate-scalable video codec, a prototype system for rate-scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
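For readers unfamiliar with the building blocks named above, here is a small, generic sketch (not the dissertation's codec) that runs a multi-level 2-D wavelet decomposition with the PyWavelets library, coarsely quantises the coefficients, and measures the resulting Peak Signal-to-Noise Ratio; the filter choice and quantisation step are arbitrary assumptions.

```python
# Generic illustration of two concepts from the abstract: a multi-level
# 2-D wavelet decomposition (via PyWavelets) and the PSNR quality metric.
import numpy as np
import pywt

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy 64x64 "image": decompose, coarsely quantise, reconstruct.
image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
coeffs = pywt.wavedec2(image, "bior4.4", level=3)   # biorthogonal filter, assumed choice
step = 16.0                                         # arbitrary quantisation step
quantised = [np.round(coeffs[0] / step) * step] + [
    tuple(np.round(band / step) * step for band in detail) for detail in coeffs[1:]
]
reconstructed = pywt.waverec2(quantised, "bior4.4")
print(f"PSNR after quantisation: {psnr(image, reconstructed):.2f} dB")
```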
95

Improve the Performance and Scalability of RAID-6 Systems Using Erasure Codes

Wu, Chentao, 15 November 2012
RAID-6 is widely used to tolerate the concurrent failure of any two disks, providing a higher level of reliability with the support of erasure codes. Among the many implementations, one class of codes, called Maximum Distance Separable (MDS) codes, aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes are divided into horizontal and vertical codes. However, because of the limitations of the horizontal parity or diagonal/anti-diagonal parities used in MDS codes, existing RAID-6 systems suffer from several important performance and scalability problems, such as low write performance, unbalanced I/O, and high migration cost during scaling. To address these problems, this dissertation designs techniques for high-performance and scalable RAID-6 systems. These include high-performance, load-balancing erasure codes (H-Code and HDP Code) and a Stripe-based Data Migration (SDM) scheme. We also propose a flexible MDS Scaling Framework (MDS-Frame), which integrates the H-Code, HDP Code, and SDM scheme. Detailed evaluation results are also given in this dissertation.
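To make the parity terminology concrete, here is a toy sketch of classic RAID-6 style P/Q parity: P is plain horizontal XOR, and Q is a Reed-Solomon style parity over GF(2^8). This illustrates the conventional layout the abstract contrasts against, not the H-Code or HDP Code constructions themselves.

```python
# Toy RAID-6 style parity: P = XOR across data blocks, Q = Reed-Solomon
# style parity over GF(2^8) with generator g = 2 and polynomial 0x11d.
# Not the H-Code/HDP Code constructions from the dissertation.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with reducing polynomial 0x11d."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return result

def pq_parity(data_blocks):
    """Compute (P, Q) parity byte-wise over equal-length data blocks."""
    length = len(data_blocks[0])
    p, q = bytearray(length), bytearray(length)
    for i, block in enumerate(data_blocks):
        g_i = 1
        for _ in range(i):              # g_i = 2 ** i in GF(2^8)
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

# Losing one data block is repaired by XOR-ing P with the survivors:
d0, d1, d2 = b"abcd", b"efgh", b"ijkl"
p, q = pq_parity([d0, d1, d2])
recovered_d1 = bytes(x ^ y ^ z for x, y, z in zip(p, d0, d2))
assert recovered_d1 == d1
```

With both P and Q intact, any two simultaneous block losses are recoverable; the snippet demonstrates only the simpler single-loss repair via P.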
96

Virtualization and Distribution of the BGP Control Plane

Oprescu, Mihaela Iuniana, 18 October 2012
The Internet is organized as a collection of networks called Autonomous Systems (ASes). The Border Gateway Protocol (BGP) is the glue that connects these administrative domains, making communication possible between users worldwide; each network is responsible for sharing reachability information with its peers through BGP. Protocol extensions are periodically added because the intended use and design of BGP no longer fit current demands. Scalability concerns make the required internal BGP (iBGP) full mesh difficult to achieve in today's large networks, so network operators resort to confederations or Route Reflectors (RRs) to achieve full connectivity. These two options come with flaws of their own, such as route diversity loss, persistent routing oscillations, deflections, and forwarding loops. In this dissertation we present oBGP, a new architecture for the redistribution of external routes inside an AS. Instead of relying on the usual statically configured set of iBGP sessions, we propose to use an overlay of routing instances that are collectively responsible for (I) the exchange of routes with other ASes, (II) the storage of internal and external routes, (III) the storage of the entire routing policy configuration of the AS, and (IV) the computation and redistribution of the best routes towards Internet destinations to each client router in the AS.
97

High-Performance Platform for Malware Research

Plaskoň, Pavol, January 2019
Anti-malware companies analyze a large number of files every day. To speed up this analysis, many automated tools have been implemented, and the detection definitions that identify malicious software are often generated automatically. Information about currently spreading malware is scattered across several tools and is sometimes too generic. This work proposes a new tool that aggregates, prioritizes, and evaluates all the available information. Due to the large amount of incoming data, high performance and scalability of the system are necessary. Files, detection definitions, and other objects are tagged using the given information, either directly or by inference. The collected information is accessible via an interface for further analysis and statistics. Everything was implemented, tested, and put into production.
98

Peer-to-peer network architecture for massive online gaming

Shongwe, Bongani, 1 September 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / Virtual worlds and massive multiplayer online games are amongst the most popular applications on the Internet. Hosting these applications requires a reliable architecture that can handle high user loads, maintain a complex game state, respond promptly to game interactions, and prevent cheating, amongst other properties. Many of today's Massive Multiplayer Online Games (MMOGs) use client-server architectures to provide multiplayer service: clients (players) send their actions to a server, which calculates the game state and publishes the information to the clients. Although the client-server architecture has been widely adopted for MMOGs, it suffers from many limitations. First, applications based on a client-server architecture are difficult to support and maintain given the dynamic user base of online games; such architectures do not easily scale or handle heavy loads. Also, the server constitutes a single point of failure. We argue that peer-to-peer architectures can provide better support for MMOGs: they enable the user base to scale to large numbers and limit the disruptions players experience when other nodes fail. This research designs and implements a peer-to-peer architecture for MMOGs that aims to reduce message latency both over the network and in the application layer. We refine the communication between nodes in the architecture to reduce network latency by using SPDY, a protocol designed to reduce web page load time. At the application layer, an event-driven paradigm is used to process messages. Through user load simulation, we show that our peer-to-peer design is able to process and reliably deliver messages in a timely manner. Furthermore, by distributing the work performed by a game server, our research shows that a peer-to-peer architecture responds to requests more quickly than client-server models.
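As a small sketch of the event-driven message processing described above (the SPDY transport layer is not reproduced here), the following asyncio fragment registers a handler per event type and dispatches queued game events to it; all event names and handlers are hypothetical.

```python
# Sketch of event-driven message handling: each peer reacts to typed
# game events pulled from a queue instead of polling. Illustrative only.
import asyncio

HANDLERS = {}

def on(event_type):
    """Register a coroutine as the handler for one event type."""
    def register(handler):
        HANDLERS[event_type] = handler
        return handler
    return register

@on("player_move")
async def handle_move(payload):
    print(f"apply move: {payload}")

@on("chat")
async def handle_chat(payload):
    print(f"broadcast chat: {payload}")

async def peer_loop(queue):
    # Dispatch loop: drain the event queue and invoke the matching
    # handler; unknown event types are simply dropped.
    while True:
        event_type, payload = await queue.get()
        if event_type == "stop":
            break
        handler = HANDLERS.get(event_type)
        if handler is not None:
            await handler(payload)

async def main():
    queue = asyncio.Queue()
    await queue.put(("player_move", {"player": 7, "dx": 1, "dy": 0}))
    await queue.put(("chat", {"player": 7, "text": "hello"}))
    await queue.put(("stop", None))
    await peer_loop(queue)

asyncio.run(main())
```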
99

Improving performance on NUMA systems

Lepers, Baptiste, 24 January 2014
Modern multicore systems are based on a Non-Uniform Memory Access (NUMA) design. In a NUMA system, cores are grouped into a set of nodes. Each node has a memory controller and is interconnected with other nodes using high-speed interconnect links. Efficiently exploiting such architectures is notoriously complex for programmers. Two key objectives on NUMA multicore machines are to limit as much as possible the number of remote memory accesses (i.e., accesses from one node to another) and to avoid contention on memory controllers and interconnect links. These objectives can be achieved by implementing application-level optimizations or application-agnostic heuristics. However, in many cases, existing profilers do not provide enough information to help programmers implement application-level optimizations, and existing application-agnostic heuristics fail to address contention issues. The contributions of this thesis are twofold. First, we present MemProf, a profiler that allows programmers to choose and implement efficient application-level optimizations for NUMA systems. MemProf builds temporal flows of interactions between threads and objects, which help programmers understand why and which memory objects are accessed remotely. We evaluate MemProf on Linux on three different machines and show how MemProf helps us choose and implement efficient optimizations, unlike existing profilers. These optimizations provide significant performance gains (up to 2.6x) while requiring very lightweight modifications (10 lines of code or less). Second, we present Carrefour, an application-agnostic memory management algorithm. Unlike existing heuristics, Carrefour focuses on traffic contention on memory controllers and interconnect links. Carrefour provides significant performance gains (up to 3.3x) and always performs better than existing heuristics.
100

Evaluation of Couchbase As a Tool to Solve a Scalability Problem with Shared Geographical Objects

Yildiz, George; Wallström, Fredrik, January 2019
Sharing a large amount of data between many mobile devices can lead to scalability problems: the data becomes too large to store on mobile devices, and many updates are sent to each device. In this thesis, Couchbase is evaluated as a tool to solve this problem when the data has a geographical position. The scalability problem is addressed by partitioning the data with the help of Couchbase channels and Google's tile-based mapping system, with a focus on synchronising and storing only the data of interest to each user. The results showed that a Couchbase solution together with Google's tile-based mapping system was effective at reducing the amount of data each user had to store. For the data set used in this study, it proved more effective to store objects encoded as base64 data than as their binary representation, because Couchbase stores Binary Large Objects (BLOBs) as separate files and the BLOBs in the data set were much smaller than the disk sector size. A test was also conducted to determine how the synchronisation time was affected by the number of channels: the synchronisation time increased linearly with the number of channels when the objects were stored in separate files, whereas when the objects were encoded as base64 data, the number of channels had only a minor effect on the synchronisation time. The conclusion is that the approach presented in this study has been effective. However, the results are data dependent, and it is therefore recommended to rerun similar tests in order to decide the number of channels to use when partitioning the data.
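The tile-to-channel mapping can be sketched in a few lines: the standard Web-Mercator tile formula converts an object's latitude/longitude into tile coordinates at a fixed zoom level, and the tile key doubles as a channel name. The zoom level and channel naming scheme below are assumptions for illustration, not the study's exact configuration.

```python
# Sketch of the partitioning idea: map an object's latitude/longitude to
# a Google-style map tile, then use the tile key as a Couchbase Sync
# Gateway channel name. Zoom level and naming are assumed, not the
# study's exact setup.
import math

def latlon_to_tile(lat, lon, zoom):
    """Standard Web-Mercator tile coordinates for (lat, lon) at zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def channel_for(lat, lon, zoom=12):
    x, y = latlon_to_tile(lat, lon, zoom)
    return f"tile-{zoom}-{x}-{y}"

# A client subscribes only to the channels covering its visible map area,
# so each device syncs a bounded subset of the shared objects.
print(channel_for(59.3293, 18.0686))   # Stockholm -> "tile-12-2253-1204"
```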
