81

Improvement of the response time for game execution in a Cloud Gaming system with layer caching and movement prediction

Marcelo Tetsuhiro Sadaike 11 July 2017 (has links)
With the growth of the video game industry, new markets and technologies are emerging. Games of the latest generation demand ever more processing power and ever more powerful video cards. One solution gaining prominence is Cloud Gaming, in which the player issues a command, the information is sent and processed remotely in a cloud, and the resulting images are returned to the player over the Internet as a video stream. To improve the Quality of Experience (QoE), we propose a model that reduces the response time between the player's command and the stream of the resulting game scenes, through a framework called Cloud Manager that applies layer caching to the background layer and future-state prediction, using a prediction matrix, to the character layer. To validate the results, an action game with an omnipresent (god-view) point of view is used within a Cloud Gaming system called Uniquitous.
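The abstract does not specify how the prediction matrix is built, but the idea of predicting the character's next movement from input history can be sketched as a simple first-order transition table. The state names and update scheme below are illustrative assumptions, not the thesis's actual design:

```python
from collections import defaultdict

# Hypothetical movement inputs for a top-down action game.
STATES = ["up", "down", "left", "right", "idle"]

# Prediction matrix: counts[prev][curr] = how often input `curr` followed `prev`.
counts = defaultdict(lambda: defaultdict(int))

def observe(prev: str, curr: str) -> None:
    """Update the transition counts from the player's input history."""
    counts[prev][curr] += 1

def predict(curr: str) -> str:
    """Most likely next movement given the current one, so the cloud side
    can speculatively render the character layer; 'idle' if nothing seen."""
    row = counts[curr]
    if not row:
        return "idle"
    return max(row, key=row.get)

# After repeatedly seeing "right" followed by "up", the predictor
# speculates "up" as the next character movement.
for _ in range(10):
    observe("right", "up")
observe("right", "down")
print(predict("right"))  # -> up
```

A real system would predict a few frames ahead and fall back to the authoritative server state when the speculation is wrong.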
82

Parallelizing hierarchical cache units for ICN routers

Mansilha, Rodrigo Brandão January 2017 (has links)
A key challenge in Information-Centric Networking (ICN) is to develop cache units (also called Content Stores, CS) that meet three requirements: large storage space, fast operation, and affordable cost. The so-called Hierarchical Content Store (HCS) is a promising approach to satisfying these requirements jointly. It exploits the temporal correlation between content requests to predict future demands; for example, a user who requests the first minute of a movie is assumed to also request the second minute. In theory, this assumption enables proactive transfers of content from a relatively large but slow cache area (Layer 2, L2) to a faster but smaller cache area (Layer 1, L1). The hierarchical structure can thereby increase both the throughput and the size of the CS by an order of magnitude while keeping the cost constant. However, the development of an HCS introduces several practical challenges. The L2 and L1 memory levels must be carefully coupled with respect to their transfer rates and sizes, which depend on both hardware aspects (e.g., L2 read rate, use of multiple physical SSDs in parallel, bus speed) and software aspects (e.g., the SSD controller, memory management). In this context, this thesis presents two main contributions. First, we propose an architecture that overcomes the system's inherent bottlenecks by parallelizing multiple HCS instances. In summary, the proposed scheme avoids concurrency problems (specifically, race conditions) through deterministic partitioning of content requests among multiple threads. Second, we propose a methodology for investigating HCS designs that combines emulation techniques with analytical modeling. The proposed methodology offers advantages over prototyping- and simulation-based methods: we emulate the L2 to enable the investigation of a wider variety of boundary scenarios (in terms of both hardware and software) than would be possible through prototyping with current technologies, while the emulation employs real prototype code for the other HCS components (e.g., the L1, layer management, and the API) to provide more realistic results than simulation would.
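The deterministic-partitioning idea can be sketched with a hash over content names: because the mapping is a pure function of the name, every request for a given content always reaches the same thread, so threads never share cache state and need no locks. The partition count and name scheme below are illustrative, not taken from the thesis:

```python
import hashlib

NUM_THREADS = 4  # illustrative number of parallel HCS instances

def partition(content_name: str, n: int = NUM_THREADS) -> int:
    """Deterministically map a content name to one HCS instance.

    A stable hash (not Python's randomized hash()) guarantees the same
    thread is chosen for a name across runs and across machines, which
    is what makes lock-free per-thread caches possible.
    """
    digest = hashlib.sha256(content_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n

# All requests for the same object are served by the same thread:
assert partition("/videos/movie1") == partition("/videos/movie1")
```

A dispatcher thread would read incoming Interests, compute `partition(name)`, and push each request onto that thread's private queue.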
83

Certificate Revocation Table: Leveraging Locality of Reference in Web Requests to Improve TLS Certificate Revocation

Dickinson, Luke Austin 01 October 2018 (has links)
X.509 certificate revocation defends against man-in-the-middle attacks involving a compromised certificate. Certificate revocation strategies face scalability, effectiveness, and deployment challenges as HTTPS adoption rates have soared. We propose Certificate Revocation Table (CRT), a new revocation strategy that is competitive with or exceeds alternative state-of-the-art solutions in effectiveness, efficiency, certificate growth scalability, mass revocation event scalability, revocation timeliness, privacy, and deployment requirements. The CRT periodically checks the revocation status of X.509 certificates recently used by an organization, such as clients on a university's private network. By prechecking the revocation status of each certificate the client is likely to use, the client can avoid the security problems of on-demand certificate revocation checking. To validate both the effectiveness and efficiency of using a CRT, we used 60 days of TLS traffic logs from Brigham Young University to measure the effects of actively refreshing certificates for various certificate working set window lengths. Using a certificate working set window size of 45 days, an average of 99.86% of the TLS handshakes from BYU would have revocation information cached in advance using our approach. Revocation status information can be initially downloaded by clients with a 6.7 MB file and then subsequently updated using only 205.1 KB of bandwidth daily. Updates to this CRT that only include revoked certificates require just 215 bytes of bandwidth per day.
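The core mechanism, prechecking the revocation status of every certificate in a recent working set, can be sketched as follows. The class layout, fingerprint keys, and refresh interface are assumptions for illustration; the thesis distributes this data as a compact downloadable file rather than a Python object:

```python
import datetime as dt

WINDOW = dt.timedelta(days=45)  # working-set window length from the BYU measurements

class CertificateRevocationTable:
    """Sketch of a CRT: track certificates recently used by an organization's
    clients and precheck their revocation status ahead of any TLS handshake."""

    def __init__(self):
        self.last_seen = {}   # cert fingerprint -> last time a client used it
        self.revoked = set()  # fingerprints found to be revoked

    def record_use(self, fingerprint: str, now: dt.datetime) -> None:
        """Called when a TLS handshake on the network uses this certificate."""
        self.last_seen[fingerprint] = now

    def working_set(self, now: dt.datetime) -> set:
        """Certificates used within the window: only these are prechecked."""
        return {f for f, t in self.last_seen.items() if now - t <= WINDOW}

    def refresh(self, now: dt.datetime, check_revocation) -> None:
        """Periodic job: query revocation status (e.g. via OCSP/CRLs) for the
        working set, so clients never need an on-demand check."""
        for f in self.working_set(now):
            if check_revocation(f):
                self.revoked.add(f)

    def is_revoked(self, fingerprint: str) -> bool:
        return fingerprint in self.revoked
```

Because the working set is small relative to the full certificate population, daily updates stay tiny (the abstract reports ~205 KB per day after a 6.7 MB initial download).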
84

Filecules: A New Granularity for Resource Management in Grids

Doraimani, Shyamala 26 March 2007 (has links)
Grids provide an infrastructure for seamless, secure access to a globally distributed set of shared computing resources. Grid computing has reached the stage where deployments are run in production mode. In the most active Grid community, the scientific community, jobs are data and compute intensive. Scientific Grid deployments offer the opportunity for revisiting, and perhaps updating, traditional beliefs related to workload models, and hence for reevaluating traditional resource management techniques. In this thesis, we study usage patterns from a large-scale scientific Grid collaboration in high-energy physics. We focus mainly on data usage, since data is the major resource for this class of applications. We perform a detailed workload characterization, which led us to propose a new data abstraction, the filecule, that groups correlated files. We characterize filecules and show that they are an appropriate data granularity for resource management. In scientific applications, job scheduling and data staging are tightly coupled. The only algorithm previously proposed for this class of applications, Greedy Request Value (GRV), uses a function that assigns a relative value to a job. We wrote a cache simulator that uses the same technique of combining cache replacement with job reordering to evaluate and compare quantitatively a set of alternative solutions. These solutions are combinations of Least Recently Used (LRU) and GRV from the cache replacement space with First-Come First-Served (FCFS) and the GRV-specific job reordering from the scheduling space. Using a real workload from the DZero Experiment at Fermi National Accelerator Laboratory, we measure and compare performance based on byte hit rate, cache change, job waiting time, job waiting queue length, and scheduling overhead. Based on our experimental investigations, we propose a new technique that combines LRU for cache replacement with job scheduling based on the relative request value.
This technique incurs less data transfer costs than the GRV algorithm and shorter job processing delays than FCFS. We also propose using filecules for data management to further improve the results obtained from the above LRU and GRV combination. We show that filecules can be identified in practical situations and demonstrate how the accuracy of filecule identification influences caching performance.
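The byte hit rate metric used in the comparison above can be illustrated with a minimal byte-aware LRU cache, the kind of component such a cache simulator is built from. File names, sizes, and capacity below are made up:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal byte-aware LRU cache for measuring byte hit rate
    (a sketch of one simulator component, not the thesis's simulator)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.files = OrderedDict()  # name -> size, oldest access first
        self.hit_bytes = 0
        self.total_bytes = 0

    def request(self, name: str, size: int) -> bool:
        """Serve one file request; returns True on a cache hit."""
        self.total_bytes += size
        if name in self.files:
            self.files.move_to_end(name)  # refresh recency
            self.hit_bytes += size
            return True
        # Evict least recently used files until the new one fits.
        while self.files and sum(self.files.values()) + size > self.capacity:
            self.files.popitem(last=False)
        if size <= self.capacity:
            self.files[name] = size
        return False

    def byte_hit_rate(self) -> float:
        return self.hit_bytes / self.total_bytes if self.total_bytes else 0.0
```

Replacing per-file entries with filecules (groups of correlated files fetched and evicted together) is the thesis's refinement of exactly this kind of loop.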
85

Optimization of video delivery in Telco-CDN

LI, Zhe 25 January 2013 (has links) (PDF)
The exploding HD video streaming traffic calls for deploying content servers deeper inside network operators' infrastructures. Telco-CDNs are content distribution services managed by Internet Service Providers (ISPs). Since the network operator controls both the infrastructure and the content delivery overlay, it is in a position to engineer the Telco-CDN so that networking resources are optimally utilized. In this thesis, we focus on optimal resource placement in Telco-CDNs. We first investigated the placement of application components. Popular services like Facebook or Twitter, with sizes on the order of hundreds of terabytes, cannot be fully replicated on a single data center; instead, the idea is to partition the service into smaller components and to locate the components on distinct sites, and the same method applies to Telco-CDN operators. We addressed this k-Component Multi-Site Placement Problem from an optimization standpoint, developing linear programming models and designing approximation and heuristic algorithms to minimize the overall service delivery cost. Thereafter, we extended our work to the problem of optimal video placement for Telco-CDNs. We modeled this problem as a k-Product Capacitated Facility Location Problem, which takes into account network conditions and users' preferences. We designed a genetic algorithm to obtain near-optimal performance for such a "push" approach, and implemented it on the MapReduce framework to handle very large data sets. The evaluation shows that our optimal placement keeps pace with cooperative LRU caching in terms of storage efficiency, while its impact on the network infrastructure is less severe. We then explored the caching decision problem in the context of Information-Centric Networking (ICN), which could be a revolutionary design for Telco-CDNs. In ICN, routers are endowed with caching capabilities; so far, only a basic Least Recently Used (LRU) policy implemented on every router has been proposed. Our first contribution here is a cooperative caching protocol designed for the treatment of large video streams with on-demand access. We integrated our new protocol into the main router software (CCNx) and developed a platform that automatically deploys our augmented CCNx implementation on real machines. Experiments show that our cooperative caching significantly reduces inter-domain traffic for an ISP with acceptable overhead. Finally, we aimed at better understanding the behavior of caching policies other than LRU. We built an analytical model that approximates the performance of a set of policies ranging from LRU to Least Frequently Used (LFU) in any type of network topology. We also designed multi-policy in-network caching, where every router implements its own caching policy according to its location in the network. Compared to a single LRU policy, the multi-policy strategy considerably increases the hit ratio of the in-network caching system in the context of Video-on-Demand applications. All in all, this thesis explores different aspects of resource placement in Telco-CDNs, pursuing optimal and near-optimal performance with various approaches.
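The two endpoints of the policy range studied above, LRU and LFU, can be sketched with a common interface; in a multi-policy network, each router would instantiate one of them according to its position. The capacity and the LFU tie-breaking below are illustrative simplifications:

```python
from collections import Counter, OrderedDict

CAPACITY = 3  # illustrative cache size, in objects

class LRU:
    """Evicts the least recently requested object."""
    def __init__(self):
        self.items = OrderedDict()
    def request(self, name: str) -> bool:
        hit = name in self.items
        if hit:
            self.items.move_to_end(name)
        else:
            if len(self.items) >= CAPACITY:
                self.items.popitem(last=False)
            self.items[name] = True
        return hit

class LFU:
    """Evicts the least frequently requested object; frequency is counted
    over all requests seen (a common simplification of LFU)."""
    def __init__(self):
        self.items = set()
        self.freq = Counter()
    def request(self, name: str) -> bool:
        self.freq[name] += 1
        hit = name in self.items
        if not hit:
            if len(self.items) >= CAPACITY:
                self.items.remove(min(self.items, key=self.freq.__getitem__))
            self.items.add(name)
        return hit

# A plausible multi-policy assignment: edge routers run LRU (recency-driven
# user traffic), core routers run LFU (stable global popularity).
```

An analytical model like the one in the thesis would then predict each router's hit ratio from its policy, its capacity, and the request stream it sees.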
86

Node Caching Enhancement of Reactive Ad Hoc Routing Protocol

Jung, Sunsook 12 January 2006 (has links)
Enhancing route request broadcasting protocols constitutes a substantial part of research in mobile ad hoc network routing. In this thesis, enhancements of ad hoc routing protocols, energy efficiency metrics, and clustered topology generators are discussed. The contributions include the following. First, a node caching enhancement of the Ad-hoc On-demand Distance Vector (AODV) routing protocol is introduced. Extensive simulation studies of the enhanced AODV in NS2 show up to a 9-fold reduction in routing overhead, up to 20% improvement in packet delivery ratio, and up to 60% reduction in end-to-end delay, with the largest improvements occurring in highly stressed situations. Secondly, new metrics for evaluating the energy efficiency of routing protocols are suggested, and new node-cached AODV protocols employing non-adaptive and adaptive load balancing techniques are proposed for extending network lifetime and increasing network throughput. Finally, the impact of node-clustered topology on ad hoc networks is explored. A novel method for generating clustered layouts in NS2 is introduced, and experiments indicate performance degradation of AODV protocols in the case of two clusters.
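The abstract does not detail the node-caching mechanism; a plausible reading is that a node which has recently forwarded data packets is likely on an active route, so only such "cached" nodes rebroadcast route requests, shrinking the flood. The sketch below is built on that assumption, with a made-up timeout:

```python
CACHE_TIMEOUT = 1.0  # seconds a node stays "active" after forwarding data (illustrative)

class Node:
    """Sketch of a node-caching filter for AODV route-request flooding:
    only nodes that forwarded data traffic recently rebroadcast RREQs.
    The mechanics and timeout here are assumptions, not the thesis's spec."""

    def __init__(self, node_id: int):
        self.node_id = node_id
        self.last_data_time = None  # when this node last forwarded a data packet

    def on_data_packet(self, now: float) -> None:
        self.last_data_time = now

    def should_rebroadcast_rreq(self, now: float) -> bool:
        """Suppress the rebroadcast unless this node was recently on an active route."""
        return (self.last_data_time is not None
                and now - self.last_data_time <= CACHE_TIMEOUT)
```

Suppressing rebroadcasts at idle nodes is what would produce the reported reduction in routing overhead, at the risk of missing routes through rarely used regions.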
87

Foraging behaviours and population dynamics of arctic foxes

Samelius, Gustaf 22 August 2006
Northern environments are often characterised by large seasonal and annual fluctuations in food abundance. In this thesis, I examined how arctic foxes (Alopex lagopus) used seasonally superabundant foods (geese and their eggs) and how access to these foods influenced population dynamics of arctic foxes. I addressed this against a backdrop of variation in lemming and vole abundance (hereafter, small mammals), the main foods of arctic foxes throughout most of their range. Field work was done at the large goose colony at Karrak Lake and surrounding areas in the Queen Maud Gulf Bird Sanctuary in Nunavut, Canada, in the springs and summers of 2000 to 2004.

Behavioural observations of individually marked arctic foxes showed that they took and cached 2,000-3,000 eggs per fox each year, and that the rate at which they took eggs was largely unrelated to individual attributes of foxes (e.g. sex, size, and breeding status) and to the nesting distribution of geese. Further, the rate at which foxes took eggs varied considerably within individuals, in that foxes were efficient at taking eggs at times and inefficient at others. This may have resulted from foxes switching between foraging actively and taking eggs opportunistically while performing other demands such as territorial behaviours.

Comparison of stable isotope ratios (13C and 15N) of fox tissues and those of their foods showed that the contribution of cached eggs to arctic fox diets was inversely related to collared lemming (Dicrostonyx torquatus) abundance. In fact, the contribution of cached eggs to overall fox diets increased from <28% in years when collared lemmings were abundant to 30-74% in years when collared lemmings were scarce. Furthermore, arctic foxes used cached eggs well into the following spring (almost one year after the eggs were acquired), a pattern which differs from that of carnivores generally, which store foods for only a few days before consumption.

A field study of experimental caches showed that the survival rate of these caches was related to the age of cache sites in the first year of the study (e.g. 0.80 and 0.56 per 18-day period for caches from new and one-month-old cache sites, respectively) and to departure by geese after hatch in the second year of the study (e.g. 0.98 and 0.74 per 18-day period during and after goose nesting, respectively). Food abundance and deterioration of cache sites (e.g. loss of soil cover and partial exposure of caches) were thus important factors affecting cache loss at Karrak Lake. Further, annual variation in the importance of these factors suggests that strategies to prevent cache loss are not fixed in time but vary with existing conditions. Evolution of caching behaviours by arctic foxes may thus have been shaped by multiple selective pressures.

Comparisons of reproductive output and abundance of arctic foxes inside and outside the goose colony at Karrak Lake showed that (i) breeding density and fox abundance were 2-3 times higher inside the colony than outside, and (ii) litter size, breeding density, and annual variation in fox abundance followed small mammal abundance. Small mammal abundance was thus the main governor of the population dynamics of arctic foxes, whereas geese and their eggs elevated fox abundance and breeding density above what small mammals alone could support. These results highlight both the influence of seasonal and annual variation on the population dynamics of consumers and the linkage between arctic environments and the wintering areas of geese thousands of kilometres to the south.
88

The application of the in-tree knapsack problem to routing prefix caches

Nicholson, Patrick 24 April 2009 (has links)
Modern routers use specialized hardware, such as Ternary Content Addressable Memory (TCAM), to solve the Longest Prefix Matching Problem (LPMP) quickly. Because TCAM is a non-standard, inherently parallel type of memory, there are concerns about its cost and power consumption. This problem is exacerbated by the growth of routing tables, which demands ever larger TCAMs. To reduce the size of the TCAMs in a distributed forwarding environment, a batch caching model is proposed and analyzed. In this model, the problem of determining which routing prefixes to store in the TCAMs reduces to the In-tree Knapsack Problem (ITKP) for unit-weight vertices. Several algorithms are analyzed for solving the ITKP, both in the general case and when the problem is restricted to unit-weight vertices. Additionally, a variant problem is proposed and analyzed, which exploits the caching model to provide better solutions. The thesis concludes with a discussion of open problems and future experimental work.
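One common formalization of a tree knapsack, and a plausible reading of the ITKP here, is: pick at most k vertices maximizing total profit, where picking a vertex requires picking its parent (so the chosen set is connected toward the root, as prefix-cache consistency demands). The O(n·k²) dynamic program below is a sketch of that variant with unit weights; the thesis analyzes its own, possibly faster, algorithms:

```python
def max_profit(children: dict, profit: dict, root, k: int) -> float:
    """Tree DP for a unit-weight in-tree knapsack (assumed formalization):
    choose at most k vertices, parent required before child, max profit."""
    NEG = float("-inf")

    def solve(v):
        # dp[j] = best profit choosing exactly j vertices in v's subtree,
        # with v itself chosen (so j >= 1).
        dp = [NEG] * (k + 1)
        if k >= 1:
            dp[1] = profit[v]
        for c in children.get(v, []):
            cdp = solve(c)
            new = dp[:]
            for j in range(1, k + 1):
                if dp[j] == NEG:
                    continue
                for jc in range(1, k - j + 1):  # take jc vertices from child c
                    if cdp[jc] != NEG:
                        new[j + jc] = max(new[j + jc], dp[j] + cdp[jc])
            dp = new
        return dp

    dp = solve(root)
    # Choosing nothing (profit 0) is always allowed.
    return max([0] + [x for x in dp if x != NEG])
```

For prefix caching, `profit` would be the hit count each prefix attracts and k the number of TCAM slots.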