71

Stratégies de Cache basées sur la popularité pour Content Centric Networking / Popularity-Based Caching Strategies for Content Centric Networking

Bernardini, César 05 May 2015 (has links)
Content Centric Networking (CCN) is a new architecture for the future Internet. CCN includes in-network caching capabilities at every node, and its efficiency depends drastically on the performance of its caching strategies. Many studies have proposed new caching strategies to improve the performance of CCN networks. However, among all these strategies, it is still unclear which one performs best, as there is no common environment in which to compare them. In this thesis, we address the challenge of selecting the best caching strategies for CCN. The contributions of this thesis are the following. We build a common evaluation scenario and compare via simulation the state-of-the-art caching strategies: Leave Copy Everywhere (LCE), Leave Copy Down (LCD), ProbCache, Cache "Less for More" and MAGIC. We analyze the performance of all these strategies in terms of cache hit, stretch, diversity and complexity, and determine the caching strategy that fits best in every scenario. Later on, we propose two novel popularity-based caching strategies for CCN. First, we study the popularity of content and present the Most Popular Caching (MPC) strategy. MPC privileges the distribution of popular content into the caches and thus outperforms other caching strategies. Second, we present an alternative caching strategy based on social networks: the Socially-Aware Caching Strategy (SACS). SACS privileges the distribution of content published by influential users into the network. Both caching strategies outperform state-of-the-art mechanisms and, to the best of our knowledge, we are the first to use social information to build caching strategies.
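The abstract describes MPC only at a high level; the following is a minimal Python sketch of what a popularity-gated cache decision can look like: content is cached only once it has been requested often enough, in contrast to LCE, which caches every passing object at every node. The capacity, threshold, and LRU eviction below are illustrative assumptions, not the thesis's actual parameters.

```python
# A minimal sketch of a popularity-driven cache decision in the spirit of MPC.
# The threshold and eviction policy are illustrative assumptions.
from collections import Counter, OrderedDict

class PopularityCache:
    def __init__(self, capacity=100, threshold=3):
        self.capacity = capacity
        self.threshold = threshold      # minimum request count before caching
        self.requests = Counter()       # per-content request counts seen locally
        self.store = OrderedDict()      # cached objects, LRU order for eviction

    def on_request(self, name, fetch):
        """Serve `name`; `fetch` retrieves the object on a cache miss."""
        self.requests[name] += 1
        if name in self.store:
            self.store.move_to_end(name)        # refresh LRU position
            return self.store[name]
        data = fetch(name)
        # Only cache content that has proven popular, unlike LCE, which
        # caches every passing object.
        if self.requests[name] >= self.threshold:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[name] = data
        return data

cache = PopularityCache(capacity=2, threshold=2)
for name in ["a", "b", "a", "a", "c"]:
    cache.on_request(name, fetch=lambda n: f"payload-{n}")
print(list(cache.store))  # ['a'] -- only 'a' crossed the popularity threshold
```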
72

A Peer To Peer Web Proxy Cache For Enterprise Networks

Ravindranath, C K 06 1900 (has links)
In this thesis, we propose a decentralized peer-to-peer (P2P) Web proxy cache for enterprise networks (ENs). Currently, enterprises use a centralized proxy-based Web cache, where a dedicated proxy server does the caching. A dedicated proxy Web cache has to be over-provisioned to handle peak loads. It is expensive, a single point of failure, and a bottleneck. In a P2P Web cache, the clients themselves cooperate in caching the Web objects without any dedicated proxy cache. The resources from the client machines are pooled together to form a Web cache. This eliminates the need for extra hardware and the single point of failure, and improves the average response time, since all the machines serve the request queue. The most important attraction of the P2P scheme is its inherent scalability. Squirrel was the earliest P2P Web cache. Squirrel is built upon a structured P2P protocol called Pastry. Pastry is based on consistent hashing, a special form of hashing that performs well in the presence of client membership changes. Consistent-hashing-based protocols are designed for Internet-wide environments to handle very large membership sizes and high rates of membership change. To minimize the protocol bandwidth, the membership state maintained at each peer is very small. This state consists of information about the peer's immediate neighbours, and about a few other P2P members, to achieve faster lookup. This scheme has the following disadvantages: (i) since peers do not maintain information about all the other peers in the system, any peer needing an object has to find the peer responsible for the object through a multi-hop lookup, thereby increasing the latency, and (ii) the number of objIds assigned to a peer depends on the hashing used, and this can be skewed, which affects the load distribution. The popular applications of the P2P paradigm have been file-sharing systems. These systems are deployed across the Internet. Hence, the existing P2P protocols were designed to operate within the constraints of Internet environments. The P2P proxy Web cache is a more recent application of the P2P paradigm. P2P Web proxy caches operate across the entire network of an enterprise. An enterprise network (EN) comprises all the computing and communications capabilities of an institution. Institutions typically consist of many departments, with each department having and managing its own local area network (LAN). The available bandwidth in LANs is very high. LANs have low latency and low error rates. EN environments have smaller membership size, less frequent membership changes and more available bandwidth. Hence, in such environments, the P2P protocol can afford to store more membership information. This thesis explores the significant differences between EN and Internet environments. It proposes a new P2P protocol designed to exploit these differences, and a P2P Web proxy caching scheme based on this new protocol. Specifically, it shows that it is possible to maintain complete and consistent membership information in ENs. The thesis then presents a load distribution policy for a P2P system with complete and consistent membership information, to achieve (i) load balance and (ii) minimum object migrations subsequent to each node join or node leave event. The proposed system incurs extra storage and bandwidth costs. We have seen that the necessary storage is available in general workstations and the required bandwidth is feasible in modern networks.
We then evaluated the improvement in performance achieved by the system over existing consistent-hashing-based systems. We have shown that, without investing in any special hardware, the P2P system can match the performance of dedicated proxy caches. We have further shown that the buddy-based P2P scheme has a better load distribution, especially under heavy loads when load balancing becomes critical. We have also shown that for large P2P systems, the buddy-based scheme has a lower latency than the consistent-hashing-based schemes. Further, we have compared the costs of the proposed scheme and the existing consistent-hashing-based scheme for different loads (i.e., rates of Web object requests), and identified the situations in which the proposed scheme is likely to perform best. In summary, the thesis shows that (i) the membership dynamics of P2P systems on ENs are different from those of Internet file-sharing systems and (ii) it is feasible in ENs to maintain a complete and consistent view of the P2P membership at all the peers. We have designed a structured P2P protocol for LANs that maintains a complete and consistent view of membership information at all peers. Using this scheme, P2P Web caches achieve single-hop routing and a better-balanced load distribution, along with flexible load assignment.
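To make the contrast concrete, here is a minimal sketch of consistent-hashing object placement in the style of Pastry/Squirrel. The virtual-node count and hash function are illustrative assumptions; the point is that a client with complete and consistent membership information (the thesis's approach) can compute the owner directly and reach it in a single hop, instead of routing the lookup over multiple overlay hops.

```python
# A minimal consistent-hashing sketch: object IDs map to the first peer
# point clockwise on a hash ring. Skew in these positions is what causes
# the uneven load distribution noted in the abstract.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, peers, vnodes=4):
        # Each peer is placed at several points on the ring.
        self.points = sorted((h(f"{p}#{i}"), p)
                             for p in peers for i in range(vnodes))
        self.keys = [k for k, _ in self.points]

    def owner(self, obj_id: str) -> str:
        i = bisect.bisect(self.keys, h(obj_id)) % len(self.points)
        return self.points[i][1]

ring = HashRing(["peer-a", "peer-b", "peer-c"])
# With a complete membership view, this one computation replaces the
# multi-hop lookup of Internet-scale overlays.
print(ring.owner("http://example.com/index.html"))
```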
73

Analyse de Performance des Services de Vidéo Streaming Adaptatif dans les Réseaux Mobiles / Performance Analysis of HTTP Adaptive Video Streaming Services in Mobile Networks

Ye, Zakaria 02 May 2017 (has links)
Due to the growth of video traffic over the Internet in recent years, the HTTP Adaptive Streaming (HAS) solution has become the most popular streaming technology, as it has been successfully adopted by the different actors in the Internet video ecosystem. It allows service providers to use traditional stateless web servers and mobile edge caches for streaming videos. Further, it allows users to access media content from behind firewalls and NATs. In this thesis we focus on the design of a novel video streaming delivery solution called Backward-Shifted Coding (BSC), a complementary solution to Dynamic Adaptive Streaming over HTTP (DASH), the standard version of HAS. We first describe the Backward-Shifted Coding scheme architecture, based on the multi-layer Scalable Video Coding (SVC) codec. We also discuss the implementation of the BSC protocol in a DASH environment. Then, we perform an analytical evaluation of Backward-Shifted Coding using results from queueing theory. The analytical results show that BSC considerably decreases video playback interruption, which is the worst event that users can experience during a video session.
Therefore, we design bitrate adaptation algorithms in order to enhance the Quality of Experience (QoE) of the users in a DASH/BSC system. The results of the proposed adaptation algorithms show that the flexibility of BSC allows us to improve both the video quality and the variations of the quality during the streaming session. Finally, we propose new caching policies to be used with video contents encoded using SVC. Indeed, in a DASH/BSC system, cache servers are deployed to bring contents close to the users in order to reduce network latency and improve user-perceived experience. We use Linear Programming to obtain the optimal static cache composition and compare it with the results of our proposed algorithms. We show that these algorithms increase the system's overall hit ratio and offload the backhaul links by decreasing the content fetched from the origin web servers.
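The static cache-composition problem mentioned above can be pictured with a toy model: choose which SVC-encoded items (or layers) to store so that expected hits are maximized under a capacity limit. The thesis solves this with Linear Programming; for small integer sizes the same optimum can be found with a 0/1 knapsack dynamic program, sketched below. All item names, sizes, and popularities are made-up illustrative values.

```python
# Toy optimal cache composition: maximize expected hits under a capacity
# limit, a stand-in for the LP formulation used in the thesis.
def best_cache_composition(items, capacity):
    """items: list of (name, size, expected_requests); returns (hits, names)."""
    # dp[c] = (best expected hits, chosen set) using capacity c
    dp = [(0.0, frozenset()) for _ in range(capacity + 1)]
    for name, size, pop in items:
        for c in range(capacity, size - 1, -1):   # classic 0/1 knapsack sweep
            cand = (dp[c - size][0] + pop, dp[c - size][1] | {name})
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[capacity]

# Hypothetical SVC base/enhancement layers with (size, expected requests).
items = [("v1_base", 2, 50), ("v1_enh", 3, 20),
         ("v2_base", 2, 40), ("v3_base", 4, 35)]
hits, chosen = best_cache_composition(items, capacity=6)
print(sorted(chosen), hits)  # the composition maximizing expected cache hits
```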
74

Machine Learning for Network Resource Management / Apprentissage Automatique pour la Gestion des Ressources Réseau

Ben Hassine, Nesrine 06 December 2017 (has links)
An intelligent exploitation of the data carried on telecom networks could lead to a very significant improvement in the quality of experience (QoE) for users. Machine Learning techniques offer multiple functionalities, which can help optimize the utilization of network resources. In this thesis, two contexts of application of the learning techniques are studied: Wireless Sensor Networks (WSNs) and Content Delivery Networks (CDNs). In WSNs, the question is how to predict the quality of the wireless links in order to improve the quality of the routes and thus increase the packet delivery rate, which enhances the quality of service offered to the user. In CDNs, it is a matter of predicting the popularity of videos in order to cache the most popular ones as close as possible to the users who request them, thereby reducing the latency to fulfill user requests. In this work, we have drawn upon learning techniques from two different domains, namely statistics and Machine Learning. Each learning technique is represented by an expert whose parameters are tuned after an off-line analysis. Each expert is responsible for predicting the next metric value (i.e., popularity for videos in CDNs, quality of the wireless link in WSNs). The accuracy of the prediction is evaluated by a loss function, which must be minimized. Given the variety of experts selected, and since none of them always takes precedence over all the others, a second level of expertise is needed to provide the best prediction (the one that is closest to the real value and thus minimizes the loss function). This second level is represented by a special expert, called a forecaster. The forecaster provides predictions based on the values predicted by a subset of the best experts. Several methods are studied to identify this subset of best experts. They are based on the loss functions used to evaluate the experts' predictions and on the value k, representing the k best experts. The learning and prediction tasks are performed on-line on real data sets, from a real WSN deployed at Stanford and from YouTube for the CDN. The methodology adopted in this thesis applies to predicting the next value of a time series. More precisely, we show how the quality of the links can be evaluated by the Link Quality Indicator (LQI) in the WSN context and how the Single Exponential Smoothing (SES) and Average Moving Window (AMW) experts can predict the next LQI value. These experts react quickly to changes in LQI values, whether it be a sudden drop in the quality of the link or a sharp increase in quality. We propose two forecasters, Exponential Weighted Average (EWA) and Best Expert (BE), as well as the Expert-Forecaster combination that provides the best predictions. In the context of CDNs, we evaluate the popularity of each video by the number of requests for this video per day. We use both statistical experts (ARMA) and experts from the Machine Learning domain (e.g., DES, polynomial regression). These experts are evaluated according to different loss functions. We also introduce forecasters that differ in terms of the observation horizon used for prediction, the loss function and the number of experts selected for predictions. These predictions help decide which videos will be placed in the caches close to the users. The efficiency of the caching technique based on popularity prediction is evaluated in terms of hit ratio and update ratio. We highlight the contributions of this caching technique compared to a classical caching algorithm, Least Frequently Used (LFU). This thesis ends with recommendations for the use of online and offline learning techniques for networks (WSNs, CDNs). As perspectives, we propose different applications where the use of these techniques would improve the quality of experience for mobile users (cellular networks) or users of IoT (Internet of Things) networks, based, for instance, on Time Slotted Channel Hopping (TSCH).
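To illustrate the expert/forecaster architecture, here is a minimal sketch of the classical Exponentially Weighted Average forecaster combining expert predictions. The learning rate and square loss are common textbook choices and are assumptions here, not the thesis's exact settings; the two toy experts below (a "last value" expert and a smoothed average) merely stand in for the SES and AMW experts.

```python
# A minimal EWA forecaster: experts with larger loss lose weight, and the
# forecast is the weight-averaged expert prediction.
import math

class EWAForecaster:
    def __init__(self, n_experts, eta=0.01):
        self.eta = eta
        self.weights = [1.0 / n_experts] * n_experts

    def predict(self, expert_predictions):
        total = sum(self.weights)
        return sum(w * p for w, p in zip(self.weights, expert_predictions)) / total

    def update(self, expert_predictions, outcome):
        # Multiplicative update under square loss, then renormalize.
        self.weights = [w * math.exp(-self.eta * (p - outcome) ** 2)
                        for w, p in zip(self.weights, expert_predictions)]
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

lqi = [90, 92, 91, 70, 72, 71, 90]       # a toy LQI trace
f = EWAForecaster(n_experts=2)
last, avg = lqi[0], float(lqi[0])
for x in lqi[1:]:
    preds = [last, avg]
    print(f"forecast={f.predict(preds):.1f} actual={x}")
    f.update(preds, x)
    last, avg = x, 0.7 * avg + 0.3 * x   # crude smoothing, illustrative only
```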
75

Communication centrée sur les utilisateurs et les contenus dans les réseaux sans fil / User-centric content-aware communication in wireless networks

Chen, Zheng 16 December 2016 (has links)
This thesis focuses on several emerging technologies for future wireless networks, with envisaged improvements in area spectral efficiency and energy efficiency. The related research involves two major directions: device-to-device (D2D) communication underlaid in cellular networks, and proactive caching at the network edge. The first part of this thesis starts by introducing the D2D-underlaid cellular network model and distributed access control methods for D2D users that reuse licensed cellular uplink spectrum. We aim to optimize the throughput of the D2D network in the following two scenarios: 1) assuming always-backlogged cellular users with a coverage probability constraint; 2) assuming bursty packet arrivals at the cellular user, whose average delay must be kept below a certain threshold. The second part of this thesis focuses on proactive caching methods at the network edge, including at small base stations (SBSs) and user devices. First, we study and compare the performance of probabilistic content placement in different types of wireless caching networks and with different optimization objectives. Second, we propose a cooperative caching and transmission strategy in cluster-centric small cell networks (SCNs), which exploits the combined gain of cache-level cooperation and the coordinated multipoint (CoMP) technique. Using spatial models from stochastic geometry, we build the connection between PHY transmission diversity and the content diversity in local caches.
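Probabilistic content placement, mentioned above, has a simple operational core: every cache stores item i independently with probability p_i, with the p_i chosen to respect the cache budget. The sketch below uses a Zipf popularity model and a one-pass capped-popularity heuristic, both of which are illustrative assumptions; the thesis optimizes the p_i against hit-probability objectives rather than using this shortcut.

```python
# A minimal sketch of probabilistic placement: sample each cache's contents
# independently from placement probabilities p_i (unit-size items).
import random

def zipf_popularity(n, s=0.8):
    w = [1 / (k ** s) for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def placement_probabilities(pop, capacity):
    # Scale popularity to the budget and cap at 1; a real design would
    # solve for the hit-probability optimum instead of this heuristic.
    scale = capacity / sum(pop)
    return [min(1.0, scale * q) for q in pop]

def sample_cache(p):
    return {i for i, pi in enumerate(p) if random.random() < pi}

pop = zipf_popularity(n=20)
p = placement_probabilities(pop, capacity=5)
caches = [sample_cache(p) for _ in range(3)]  # three neighbouring SBS caches
print(caches)  # nearby caches differ, giving users content diversity
```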
76

A framework for evolutionary optimization applications in water distribution systems

Morley, Mark S. January 2008 (has links)
The application of optimization to Water Distribution Systems encompasses the use of computer-based techniques for problems in many different areas of system design, maintenance and operational management. As well as laying out the configuration of new WDS networks, optimization is commonly needed to assist in the rehabilitation or reinforcement of existing network infrastructure, in which alternative scenarios driven by investment constraints and hydraulic performance are used to demonstrate a cost-benefit relationship between different network intervention strategies. Moreover, the ongoing operation of a WDS is also subject to optimization, particularly with respect to the minimization of energy costs associated with pumping and storage and the calibration of hydraulic network models to match observed field data. Increasingly, Evolutionary Optimization techniques, of which Genetic Algorithms are the best-known examples, are applied to aid practitioners in these facets of design, management and operation of water distribution networks as part of Decision Support Systems (DSS). Evolutionary Optimization employs processes akin to those of natural selection and "survival of the fittest" to manipulate a population of individual solutions, which, over time, "evolve" towards optimal solutions. Such algorithms are characterized, however, by large numbers of function evaluations. This, coupled with the computational complexity associated with the hydraulic simulation of water networks, incurs significant computational overheads and can limit the applicability and scalability of this technology in this domain. Accordingly, this thesis presents a methodology for applying Genetic Algorithms to Water Distribution Systems. A number of new procedures are presented for improving the performance of such algorithms when applied to complex engineering problems. These techniques approach the problem of minimising the impact of the inherent computational complexity of these problems from a number of angles. A novel genetic representation is presented which combines the algorithmic simplicity of the classical binary string of the Genetic Algorithm with the performance advantages inherent in an integer-based representation. Further algorithmic improvements are demonstrated with an intelligent mutation operator that "learns" which genes have the greatest impact on the quality of a solution and concentrates the mutation operations on those genes. A technique for implementing caching of solutions (recalling the results for solutions that have already been calculated) is demonstrated to reduce runtimes for Genetic Algorithms applied to problems with significant computational complexity in their evaluation functions. A novel reformulation of the Genetic Algorithm for implementing robust stochastic optimizations is presented, which employs the caching technology developed to produce a multiple-objective optimization methodology that demonstrates dramatically improved quality of solutions for a given runtime of the algorithm. These extensions to the Genetic Algorithm techniques are coupled with a supporting software library that represents a standardized modelling architecture for the representation of connected networks. This library gives rise to a system for distributing the computational load of hydraulic simulations across a network of computers. This methodology is established to provide a viable, scalable technique for accelerating evolutionary optimization applications.
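The solution-caching idea is easy to picture: memoize the fitness function so that genomes already evaluated are recalled from a table instead of re-running the expensive simulation. Below is a minimal sketch under stated assumptions; the toy fitness stands in for a full hydraulic solver (such as EPANET), which is what dominates runtime in WDS problems, and the GA loop itself is deliberately simplistic.

```python
# A minimal sketch of GA solution caching: fitness results are memoized,
# so duplicate genomes skip the costly evaluation entirely.
import random

evaluations = 0

def expensive_fitness(genome):
    global evaluations
    evaluations += 1                 # stands in for a hydraulic simulation
    return sum(genome)

cache = {}

def cached_fitness(genome):
    key = tuple(genome)              # genomes are hashable integer tuples
    if key not in cache:
        cache[key] = expensive_fitness(genome)
    return cache[key]

random.seed(1)
population = [[random.randint(0, 3) for _ in range(6)] for _ in range(20)]
for generation in range(30):
    scored = sorted(population, key=cached_fitness, reverse=True)
    parents = scored[:10]
    # Uniform crossover between random parent pairs refills the population.
    population = parents + [
        [random.choice(pair) for pair in zip(*random.sample(parents, 2))]
        for _ in range(10)
    ]
print(f"simulator calls: {evaluations}, distinct genomes seen: {len(cache)}")
# As the population converges, duplicates dominate and cached lookups
# replace most simulator calls.
```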
77

A Power Conservation Methodology for Hard Drives by Combining Prefetching Algorithms and Flash Memory

Halper, Raymond 01 January 2013 (has links)
Computing system power consumption is a concern as it has financial and environmental implications. These concerns will increase in the future due to current trends in data growth, information availability requirements, and increases in the cost of energy. Data growth is compounded daily because of the accessibility of portable devices, increased connectivity to the Internet, and a trend toward storing information electronically. These three factors also result in an increased demand for the data to be available for access at all times, which results in more electronic devices requiring power. As more electricity is required, the overall cost of energy increases due to demand and limited resource availability. The environment also suffers, as most electricity is generated from fossil fuels, which increases the emission of carbon dioxide into the atmosphere. In order to reduce the amount of energy required while maintaining data availability, researchers have focused on changing how data is accessed from hard drives. Hard drives have been found to consume 10 to 86 percent of a system's energy. By changing the way data is accessed, through implementing multi-speed hard drives, algorithms that prefetch, cache, and batch data requests, or flash drive caches, researchers have been able to reduce the energy required for hard drive operation. However, these approaches often result in reduced I/O performance or reduced data availability. This dissertation provides a new method of reducing hard drive energy consumption by implementing a prefetching technique that predicts a chain of future requests based upon previous request observations. The files to be prefetched are given to a caching system which uses a flash memory device for caching. This caching system implements energy-sensitive algorithms to optimize the value of the files stored in the flash memory device. By prefetching files, the hard drive on a system can be placed in a low-power sleep state. This results in reduced power consumption while providing high I/O performance and data availability. Analysis of simulator results confirmed that this new method increased I/O performance and data availability over previous studies while also providing greater energy savings. Out of 30 scenarios, the new method displayed better energy savings in 26 scenarios and better performance in all 30 scenarios compared with previous studies. The new method also showed it could complete a workload in 50.9 percent less time and with 34.6 percent less energy than previous methodologies.
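The chain-prediction idea can be sketched with a first-order successor model: the system learns which file tends to follow which, and on each access it hands the predicted chain to the flash cache so the disk can drop into its sleep state. The chain depth and the model below are illustrative assumptions, not the dissertation's actual predictor.

```python
# A minimal sketch of chain prefetching from observed request successions.
from collections import defaultdict, Counter

class ChainPrefetcher:
    def __init__(self, depth=3):
        self.depth = depth
        self.successors = defaultdict(Counter)  # file -> counts of next file
        self.prev = None

    def on_access(self, name):
        if self.prev is not None:
            self.successors[self.prev][name] += 1
        self.prev = name
        # Walk the most likely successors to build the prefetch chain.
        chain, cur = [], name
        for _ in range(self.depth):
            if not self.successors[cur]:
                break
            cur = self.successors[cur].most_common(1)[0][0]
            chain.append(cur)
        return chain  # files to stage in the flash cache now

p = ChainPrefetcher()
trace = ["boot.cfg", "app.bin", "lib.so", "data.db",
         "boot.cfg", "app.bin", "lib.so", "data.db", "boot.cfg"]
for f in trace:
    print(f, "-> prefetch", p.on_access(f))
```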
78

Zvyšování rychlosti moderních webových aplikací / Acceleration of Modern Web Applications

Čepl, Radek Unknown Date (has links)
The thesis deals with the function and structure of web applications and describes the individual technologies used in these applications. It also explains how to build such applications for high efficiency and easy development. The main part presents technologies for speeding up these applications and explains their settings and properties. Finally, the technologies are thoroughly tested, the benefits of their use are evaluated, and recommendations are given for the application's future development.
79

Melhoria do tempo de resposta para execução de jogos em um sistema em Cloud Gaming com implementação de camadas e predição de movimento. / Improvement of the response time to execute games in a cloud games system with layers caching and movement prediction.

Sadaike, Marcelo Tetsuhiro 11 July 2017 (has links)
With the growth of the video game industry, new markets and technologies are emerging. Electronic games of the latest generation require ever more processing power and ever more powerful video cards. A solution that has been gaining prominence is Cloud Gaming, in which the player issues a command, the information is sent and processed remotely in a cloud located on the Internet, and the resulting images are returned to the player as a video stream. To improve the Quality of Experience (QoE), a model is proposed that reduces the response time between the player's command and the stream of the resulting game scenes, through a framework called Cloud Manager that uses layer-caching techniques for the background layer and future-state prediction, based on a prediction matrix, for the character layer. To validate the results, an action game with an omnipresent (god-view) point of view is used in the Uniquitous Cloud Gaming system.
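The abstract does not detail the prediction matrix, so the following is only a minimal sketch of one plausible reading: counts of "command A followed by command B" are normalized into a transition table, and the server pre-renders the most likely next move. The command set and update rule are illustrative assumptions.

```python
# A toy movement-prediction matrix for character commands.
MOVES = ["up", "down", "left", "right"]
IDX = {m: i for i, m in enumerate(MOVES)}

counts = [[0] * len(MOVES) for _ in MOVES]   # counts[a][b]: b observed after a

def observe(prev, cur):
    counts[IDX[prev]][IDX[cur]] += 1

def predict(cur):
    row = counts[IDX[cur]]
    if sum(row) == 0:
        return cur                            # no history: assume repetition
    return MOVES[max(range(len(row)), key=row.__getitem__)]

history = ["up", "up", "right", "right", "up", "up", "right"]
for a, b in zip(history, history[1:]):
    observe(a, b)
print(predict("up"))     # likely next command after "up"
print(predict("right"))  # likely next command after "right"
```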
80

A Study of Replicated and Distributed Web Content

John, Nitin Abraham 10 August 2002 (has links)
" With the increase in traffic on the web, popular web sites get a large number of requests. Servers at these sites are sometimes unable to handle the large number of requests and clients to such sites experience long delays. One approach to overcome this problem is the distribution or replication of content over multiple servers. This approach allows for client requests to be distributed to multiple servers. Several techniques have been suggested to direct client requests to multiple servers. We discuss these techniques. With this work we hope to study the extent and method of content replication and distribution at web sites. To understand the distribution and replication of content we ran client programs to retrieve headers and bodies of web pages and observed the changes in them over multiple requests. We also hope to understand possible problems that could face clients to such sites due to caching and standardization of newer protocols like HTTP/1.1. The main contribution of this work is to understand the actual implementation of replicated and distributed content on multiple servers and its implication for clients. Our investigations showed issues with replicated and distributed content and its effects on caching due to incorrect identifers being send by different servers serving the same content. We were able to identify web sites doing application layer switching mechanisms like DNS and HTTP redirection. Lower layers of switching needed investigation of the HTTP responses from servers, which were hampered by insuffcient tags send by servers. We find web sites employ a large amount of distribution of embedded content and its ramifcations on HTTP/1.1 need further investigation. "
