91

Localised routing algorithms in communication networks with Quality of Service constraints : performance evaluation and enhancement of new localised routing approaches to provide Quality of Service for computer and communication networks

Mohammad, Abdulbaset H. T. January 2010 (has links)
Quality of Service (QoS) is a concept that is gaining increasing attention in the Internet industry. Best-effort service is no longer acceptable in situations that require high bandwidth provisioning, low loss, and streaming of multimedia applications, and emerging multimedia applications demand levels of service quality beyond those supported by best-effort networks. QoS routing is an essential part of any QoS architecture in communication networks: it aims to select, among the many possible choices, a path that has sufficient resources to accommodate the QoS requirements. QoS routing can significantly improve network performance because it is aware of the network's QoS state. However, most QoS routing algorithms require global network state information to make routing decisions, and this state must be exchanged periodically among routers, since the efficiency of a routing algorithm depends on the accuracy of link-state information. As a result, most QoS routing algorithms scale poorly, owing to the high communication overhead and computation effort associated with maintaining accurate link-state information and distributing global state information to every node in the network. The ultimate goal of this thesis is to contribute towards enhancing the scalability of QoS routing algorithms. Towards this goal, the thesis focuses on localised QoS routing algorithms, which were proposed to overcome the problems of using global network state information. Under this approach, the source node makes routing decisions based on locally collected state information. Localised QoS routing algorithms thereby avoid the problems associated with maintaining global network state, such as high communication and processing overheads. In localised QoS routing algorithms, each source node maintains a predetermined set of candidate paths for each destination and chooses among them using locally collected flow statistics and flow blocking probabilities.
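As a rough illustration of the localised approach (not the specific algorithms evaluated in the thesis), the Python sketch below shows a source node choosing among a fixed set of candidate paths using only locally observed blocking statistics; the class name, path identifiers, and exploration rate are hypothetical.

```python
import random
from collections import defaultdict

class LocalisedRouter:
    """Chooses among pre-computed candidate paths using only locally
    observed flow statistics (no global link-state flooding)."""

    def __init__(self, candidate_paths):
        # candidate_paths: dict mapping destination -> list of path identifiers
        self.candidate_paths = candidate_paths
        self.attempts = defaultdict(lambda: defaultdict(int))
        self.blocks = defaultdict(lambda: defaultdict(int))

    def blocking_probability(self, dest, path):
        tried = self.attempts[dest][path]
        return self.blocks[dest][path] / tried if tried else 0.0

    def select_path(self, dest, explore=0.05):
        paths = self.candidate_paths[dest]
        # Occasionally probe a random path so statistics for rarely used
        # paths do not go stale.
        if random.random() < explore:
            return random.choice(paths)
        return min(paths, key=lambda p: self.blocking_probability(dest, p))

    def record_outcome(self, dest, path, blocked):
        self.attempts[dest][path] += 1
        if blocked:
            self.blocks[dest][path] += 1

router = LocalisedRouter({"D": ["path-A", "path-B", "path-C"]})
router.record_outcome("D", "path-A", blocked=True)
print(router.select_path("D"))
```

The occasional random probe is one simple way to keep locally collected statistics responsive to changing network conditions.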
92

Understanding Scalability and Sustainability in Mobile Learning : A Systems Development Framework

Wingkvist, Anna January 2009 (has links)
The rapid development of mobile technologies, combined with access to content almost anywhere and at any time, allows people to experience learning in a wide variety of new situations. Mobile learning brings the promise of learning "on the move" by allowing learners to take control over time and space, thus making learning "more natural". The field of mobile learning has evolved rapidly in the last ten years and many initiatives have been conducted worldwide. However, research results indicate that few of these efforts have produced any lasting outcomes. It is evident that these initiatives face inherently complex settings and that the outcomes might not live up to their promises; they will not be adopted and, hence, will not become sustainable. Many of the complex issues faced by mobile learning initiatives are similar to those faced in the development of information systems. This suggests that an improved development practice might hold one piece of the key to sustainable mobile learning. The aim of the research presented in this thesis is to investigate the relation between information systems development practice and mobile learning development, and whether methods and models originating within information systems development can be used to strengthen mobile learning initiatives. In order to investigate this relation, this thesis studies several mobile learning initiatives with a particular focus on how and why development and research was initiated and conducted. Concepts found in mobile learning practices are strengthened by providing a theoretical perspective with roots in information systems development. The outcomes of the studies presented in this thesis indicate that the development practice of mobile learning initiatives can be redefined in order to achieve more sustainable results. The core of this thesis consists of eight peer-reviewed scientific publications that have been presented at different international conferences. Five of the papers explore the field of mobile learning and its practice, while the other three present the central ideas that serve as the basis for the proposed framework, how it has been developed, and the motivations behind its creation. The main contribution of this thesis is a novel development framework aimed at researchers and practitioners in the field of mobile learning. The framework defines the life-cycle of a mobile learning initiative and identifies the importance of emphasizing the concepts of scalability and sustainability during the development process. This may be a way to reduce the complexity inherent to mobile learning and its settings, and a means to improve the outcomes of coming mobile learning initiatives in terms of long-lasting, usable results.
93

A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression

Wei, Ming 05 1900 (has links)
In this dissertation, we first propose and implement a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system based on a human perceptual model. It builds on state-of-the-art research on embedded wavelet image compression and the Contrast Sensitivity Function (CSF) for the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very promising results in compression performance and visual quality compared to the wavelet-based international still image compression standard, JPEG 2000. Our codec also shows significantly better speed performance and comparable visual quality in comparison to the best available codec for rate-scalable color image compression, CSPIHT, which is based on Set Partitioning In Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT). Secondly, a novel wavelet-based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed. FBWT-based interframe compression is efficient in both compression and speed performance. The compression performance of our video codec is compared with H.263+. At the same bit rate, our encoder is comparable to the H.263+ scheme and, with a slightly lower Peak Signal-to-Noise Ratio (PSNR) value, produces a more visually pleasing result. This implementation also preserves the scalability of wavelet embedded coding. Thirdly, the scheme for optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatio-temporal sensitivity of the HVS model. The bit allocation among color bands, based on Kelly's spatio-temporal CSF model, is designed to achieve the perceptual optimum for human eyes. A perceptually tuned, wavelet-based, rate-scalable video encoding/decoding system has been designed and implemented based on this new bit allocation scheme. Finally, to demonstrate the potential applications of our rate-scalable video codec, a prototype system for rate-scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
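The sketch below illustrates the general idea of perceptually weighted bit allocation among the Y, Cb, and Cr bands: a fixed bit budget is split in proportion to per-band perceptual weights. The weights and budget are illustrative placeholders, not values from the CSF model used in the dissertation.

```python
def allocate_bits(total_bits, band_weights):
    """Split a bit budget among color bands in proportion to
    perceptual weights (illustrative stand-ins for CSF-derived values)."""
    total_weight = sum(band_weights.values())
    allocation = {band: int(total_bits * w / total_weight)
                  for band, w in band_weights.items()}
    # Give any rounding leftovers to the perceptually dominant band.
    leftover = total_bits - sum(allocation.values())
    allocation[max(band_weights, key=band_weights.get)] += leftover
    return allocation

# Example: luminance is weighted more heavily than chrominance.
print(allocate_bits(100_000, {"Y": 0.7, "Cb": 0.15, "Cr": 0.15}))
```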
94

Improve the Performance and Scalability of RAID-6 Systems Using Erasure Codes

Wu, Chentao 15 November 2012 (has links)
RAID-6 is widely used to tolerate concurrent failures of any two disks, providing a higher level of reliability with the support of erasure codes. Among the many implementations, one class of codes, called Maximum Distance Separable (MDS) codes, aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes comprise horizontal and vertical codes. However, because of the limitations of the horizontal parity or diagonal/anti-diagonal parities used in MDS codes, existing RAID-6 systems suffer from several important performance and scalability problems, such as low write performance, unbalanced I/O, and high migration cost during scaling. To address these problems, this dissertation designs techniques for high-performance, scalable RAID-6 systems. These include high-performance, load-balancing erasure codes (H-Code and HDP Code) and a Stripe-based Data Migration (SDM) scheme. We also propose a flexible MDS Scaling Framework (MDS-Frame), which can integrate H-Code, HDP Code, and the SDM scheme. Detailed evaluation results are also given in this dissertation.
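To make the notions of horizontal and diagonal parity concrete, here is a minimal Python sketch over a toy stripe. It is a simplified illustration of the two parity directions that RAID-6 style codes combine, not the actual parity layout of H-Code or HDP Code.

```python
from functools import reduce
from operator import xor

def horizontal_parity(stripe):
    """XOR of all data chunks in each row (classic P-parity direction)."""
    return [reduce(xor, row) for row in stripe]

def diagonal_parity(stripe):
    """XOR of chunks grouped along (row + column) mod width diagonals,
    the second parity direction used by RAID-6 style codes."""
    rows, cols = len(stripe), len(stripe[0])
    parity = [0] * cols
    for r in range(rows):
        for c in range(cols):
            parity[(r + c) % cols] ^= stripe[r][c]
    return parity

# A toy 3x4 stripe of byte-sized chunks (values are arbitrary).
stripe = [
    [0x11, 0x22, 0x33, 0x44],
    [0x55, 0x66, 0x77, 0x88],
    [0x99, 0xAA, 0xBB, 0xCC],
]
print(horizontal_parity(stripe))
print(diagonal_parity(stripe))
```

How parity elements are placed across disks is exactly what codes such as H-Code and HDP Code optimise to balance I/O and improve write performance.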
95

Virtualization and Distribution of the BGP Control Plane

Oprescu, Mihaela Iuniana 18 October 2012 (has links)
The Internet is organized as a collection of networks called Autonomous Systems (ASes). The Border Gateway Protocol (BGP) is the glue that connects these administrative domains: it makes communication possible between users worldwide, and each network is responsible for sharing reachability information with its peers through BGP. Protocol extensions are periodically added because the intended use and original design of BGP no longer fit current demands. Scalability concerns make the required internal BGP (iBGP) full mesh difficult to achieve in today's large networks, so network operators resort to confederations or Route Reflectors (RRs) to achieve full connectivity. These two options come with flaws of their own, such as loss of route diversity, persistent routing oscillations, deflections, and forwarding loops. In this dissertation we present oBGP, a new architecture for the redistribution of external routes inside an AS. Instead of relying on the usual statically configured set of iBGP sessions, we propose to use an overlay of routing instances that are collectively responsible for (I) the exchange of routes with other ASes, (II) the storage of internal and external routes, (III) the storage of the entire routing policy configuration of the AS, and (IV) the computation and redistribution of the best routes towards Internet destinations to each client router in the AS.
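A quick calculation shows why the iBGP full mesh becomes unmanageable as a network grows: a full mesh needs n(n-1)/2 sessions, while a route-reflector design needs far fewer. The sketch below compares the two; the router counts and the choice of three reflectors are illustrative.

```python
def full_mesh_sessions(routers):
    # Every router peers with every other router.
    return routers * (routers - 1) // 2

def route_reflector_sessions(routers, reflectors):
    # Each client peers with every reflector; reflectors form a full mesh.
    clients = routers - reflectors
    return clients * reflectors + full_mesh_sessions(reflectors)

for n in (10, 100, 1000):
    print(n, full_mesh_sessions(n), route_reflector_sessions(n, reflectors=3))
```

At 1000 routers the full mesh requires 499,500 sessions, which is the scalability pressure that pushes operators towards route reflectors or confederations, at the cost of the route-diversity and correctness issues described above.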
96

High-Performance Platform for Malware Research

Plaskoň, Pavol January 2019 (has links)
Anti-malware companies analyze a large number of files every day. In order to speed up this analysis, many automated tools have been implemented. Detection definitions that identify malicious software are often generated automatically. Information about currently spreading malware is scattered across several tools and is sometimes too generic. This work proposes a new tool that aggregates, prioritizes, and evaluates all the available information. Because of the large amount of incoming data, high performance and scalability of the system are necessary. Files, detection definitions, and other objects are tagged using the given information, either directly or by inference. The collected information is accessible via an interface for further analysis and statistics. The system was implemented, tested, and put into production.
97

Peer-to-peer network architecture for massive online gaming

Shongwe, Bongani 01 September 2014 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / Virtual worlds and massive multiplayer online games are amongst the most popular applications on the Internet. In order to host these applications a reliable architecture is required. It is essential for the architecture to handle high user loads, maintain a complex game state, promptly respond to game interactions, and prevent cheating, amongst other properties. Many of today’s Massive Multiplayer Online Games (MMOG) use client-server architectures to provide multiplayer service. Clients (players) send their actions to a server. The latter calculates the game state and publishes the information to the clients. Although the client-server architecture has been widely adopted in the past for MMOG, it suffers from many limitations. First, applications based on a client-server architecture are difficult to support and maintain given the dynamic user base of online games. Such architectures do not easily scale (or handle heavy loads). Also, the server constitutes a single point of failure. We argue that peer-to-peer architectures can provide better support for MMOG. Peer-to-peer architectures can enable the user base to scale to a large number. They also limit disruptions experienced by players due to other nodes failing. This research designs and implements a peer-to-peer architecture for MMOG. The peer-to-peer architecture aims at reducing message latency over the network and on the application layer. We refine the communication between nodes in the architecture to reduce network latency by using SPDY, a protocol designed to reduce web page load time. For the application layer, an event-driven paradigm was used to process messages. Through user load simulation, we show that our peer-to-peer design is able to process and reliably deliver messages in a timely manner. Furthermore, by distributing the work conducted by a game server, our research shows that a peer-to-peer architecture responds quicker to requests compared to client-server models.
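The sketch below shows the event-driven style mentioned above in miniature: incoming game messages are queued and dispatched to registered handlers, rather than being processed by blocking per-connection logic. The event names and handler layout are assumptions for illustration, not taken from the dissertation.

```python
import queue

class EventLoop:
    """Dispatches queued game messages to registered handlers,
    one event at a time, without per-connection blocking threads."""

    def __init__(self):
        self.handlers = {}
        self.events = queue.Queue()

    def on(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload):
        self.events.put((event_type, payload))

    def run(self):
        while True:
            event_type, payload = self.events.get()
            if event_type == "shutdown":
                break
            for handler in self.handlers.get(event_type, []):
                handler(payload)

loop = EventLoop()
loop.on("player_move", lambda p: print("move", p))
loop.emit("player_move", {"player": 7, "x": 10, "y": 4})
loop.emit("shutdown", None)
loop.run()
```

In a peer-to-peer deployment each node would run such a loop, feeding it messages received from neighbouring peers over the network layer.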
98

Improving performance on NUMA systems

Lepers, Baptiste 24 January 2014 (has links)
Modern multicore systems are based on a Non-Uniform Memory Access (NUMA) design. In a NUMA system, cores are grouped into a set of nodes. Each node has a memory controller and is interconnected with other nodes through high-speed interconnect links. Efficiently exploiting such architectures is notoriously complex for programmers. Two key objectives on NUMA multicore machines are to limit as much as possible the number of remote memory accesses (i.e., accesses from one node to another node) and to avoid contention on memory controllers and interconnect links. These objectives can be achieved by implementing application-level optimizations or application-agnostic heuristics. However, in many cases, existing profilers do not provide enough information to help programmers implement application-level optimizations, and existing application-agnostic heuristics fail to address contention issues. The contributions of this thesis are twofold. First, we present MemProf, a profiler that allows programmers to choose and implement efficient application-level optimizations for NUMA systems. MemProf builds temporal flows of interactions between threads and objects, which help programmers understand why and which memory objects are accessed remotely. We evaluate MemProf on Linux on three different machines and show how MemProf helps us choose and implement efficient optimizations, unlike existing profilers. These optimizations provide significant performance gains (up to 2.6x) while requiring very lightweight modifications (10 lines of code or less). Then we present Carrefour, an application-agnostic memory management algorithm. Unlike existing heuristics, Carrefour focuses on traffic contention on memory controllers and interconnect links. Carrefour provides significant performance gains (up to 3.3x) and always performs better than existing heuristics.
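As a rough idea of the kind of analysis such a profiler enables (not MemProf's actual implementation), the sketch below aggregates hypothetical thread/object access events and reports, per object, the fraction of accesses that cross NUMA nodes; objects with a high remote fraction would be candidates for migration, replication, or interleaving.

```python
from collections import Counter

def remote_access_report(accesses):
    """Summarise which objects are mostly accessed from remote NUMA nodes.

    `accesses` is an iterable of (thread_node, object_node, object_name)
    tuples, a stand-in for the thread/object interaction flows a NUMA
    profiler could collect.
    """
    total = Counter()
    remote = Counter()
    for thread_node, object_node, obj in accesses:
        total[obj] += 1
        if thread_node != object_node:
            remote[obj] += 1
    return {obj: remote[obj] / total[obj] for obj in total}

events = [
    (0, 0, "matrix"), (0, 1, "queue"), (1, 1, "queue"),
    (2, 0, "matrix"), (3, 0, "matrix"), (1, 0, "matrix"),
]
print(remote_access_report(events))
```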
99

Evaluation of Couchbase As a Tool to Solve a Scalability Problem with Shared Geographical Objects

Yildiz, George, Wallström, Fredrik January 2019 (has links)
Sharing a large amount of data between many mobile devices can lead to scalability problems. One of these problems is that the data becomes too large to store on mobile devices and that many updates are sent to each device. In this thesis, Couchbase is evaluated as a tool to solve this problem when the data has a geographical position. The scalability problem is solved by partitioning the data with the help of Couchbase channels and Google's tile-based mapping system, with the focus on synchronising and storing only the data of interest for each user. The results showed that it was effective to use a Couchbase solution together with Google's tile-based mapping system to reduce the amount of data that needed to be stored for each user. For the data set used in this study, it was more effective to store objects encoded as base64 data than as their binary representation. This is because Couchbase stores Binary Large Objects (BLOBs) as separate files, and the BLOBs in the data set had a much smaller file size than the disk sector size. A test was conducted to determine how synchronisation time was affected by the number of channels. It showed that synchronisation time increased linearly with an increasing number of channels when objects were stored in separate files; when objects were encoded as base64 data, the number of channels had only a minor effect on synchronisation time. The conclusion is that the approach presented in this study has been effective. However, the results are data-dependent, and it is therefore recommended to rerun similar tests in order to decide the number of channels to use when partitioning the data.
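The sketch below shows one way a geographical object could be mapped to a tile key that doubles as a synchronisation channel name. The tile arithmetic is the standard Web Mercator (slippy map) formula; the channel-naming scheme, zoom level, and document fields are assumptions, not necessarily those used in the thesis.

```python
import math

def tile_for(lat, lon, zoom):
    """Standard Web Mercator (slippy map) tile indices for a coordinate."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def channel_for(lat, lon, zoom=14):
    # Hypothetical convention: one channel per tile, so a device only
    # synchronises documents for the tiles it is interested in.
    x, y = tile_for(lat, lon, zoom)
    return f"tile-{zoom}-{x}-{y}"

doc = {"name": "object-42", "lat": 59.3293, "lon": 18.0686}
doc["channels"] = [channel_for(doc["lat"], doc["lon"])]
print(doc["channels"])
```

Choosing the zoom level is the trade-off the thesis measures: more channels (higher zoom) give finer partitioning but, when objects are stored as separate files, increase synchronisation time.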
100

A scalable microservice-based open source platform for smart cities

Esposte, Arthur de Moura Del 18 June 2018 (has links)
Smart city technologies are emerging as a potential solution to common problems in large urban centers, using city resources efficiently and providing quality services for citizens. Despite various advances in middleware technologies to support future smart cities, there is not yet a widely accepted platform. Most existing solutions do not provide the flexibility required to be shared across cities. Moreover, the extensive use and development of non-open-source software leads to interoperability issues and limits collaboration among R&D groups. Our research explores the use of a microservices architecture to address key practical challenges in smart city platforms. More specifically, we are concerned with the impact of microservices on the key non-functional requirements for smart city development, such as supporting different scalability demands and providing a flexible architecture that can easily evolve over time. To this end, we are developing InterSCity, a microservice-based open source smart city platform that aims to support the development of sophisticated, cross-domain applications and services. Our early experience shows that microservices can be used as building blocks to achieve a loosely coupled, flexible architecture. Experimental results point towards the applicability of our approach in the context of smart cities, since the platform can support multiple scalability demands. We expect to enable collaborative, novel smart city research, development, and deployment initiatives through the InterSCity platform. The full validation of the platform will be conducted using different smart city scenarios and workloads. Future work comprises the ongoing design and development of data processing services, as well as a more comprehensive evaluation of the proposed platform through scalability experiments.
