541

Politicas de gerenciamento de web caches [Web cache management policies]

Oliveira, Rodrigo Machado 25 July 2018 (has links)
Advisor: Nelson Luis Saldanha da Fonseca / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 1999 / Abstract (translated from the Portuguese resumo): The Web has become one of the main performance bottlenecks of the Internet. Recent surveys show that the Quality of Service (QoS) parameter most important to Web users is the response time when retrieving objects. One alternative for reducing retrieval latency is to replicate popular documents in repositories physically close to the users, a technique known as Web caching. Web cache management policies strongly influence Quality of Service: removal policies define which documents should be evicted from the cache to free space for an incoming document, while admission policies determine which documents may be stored in the cache at all. This dissertation investigates document retrieval time as the key for removal and admission control policies, proposing several policies aimed at reducing the response time of Web object retrieval. In addition, the times between cache misses are investigated in an attempt to characterize their distribution. / Abstract: Recent surveys indicate that Web users consider the retrieval time the most important Quality of Service parameter. Web caches have been massively adopted in the Internet in order to reduce the retrieval time of documents as well as to alleviate the ever-increasing Internet traffic due to the Web. Web cache management policies have a great impact on the perceived Quality of Service. Removal policies define which documents should be removed from the cache to make room for an incoming document, while admission control policies try to determine which documents can be stored in the cache. This dissertation investigates the retrieval time of Web documents as the key for removal and admission control policies. Several policies are proposed. Moreover, the distribution of intermiss times is investigated. / Master's in Computer Science
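As a concrete illustration of retrieval time driving both policy types, here is a minimal Python sketch (not taken from the dissertation; the class, parameter names, and threshold are invented): documents that were cheap to fetch are denied admission, and eviction removes the resident documents whose re-fetch would cost the least.

```python
class LatencyAwareCache:
    """Toy web cache in the spirit of the policies described above:
    evict the document that is cheapest to re-fetch, and admit only
    documents whose retrieval time exceeds a threshold."""

    def __init__(self, capacity_bytes, admit_threshold_s=0.1):
        self.capacity = capacity_bytes
        self.used = 0
        self.admit_threshold = admit_threshold_s
        self.docs = {}  # url -> (size_bytes, retrieval_time_s)

    def admit(self, url, size, retrieval_time):
        # Admission control: a document that was cheap to fetch is
        # not worth the cache space it would occupy.
        if retrieval_time < self.admit_threshold or size > self.capacity:
            return False
        # Removal: evict the documents whose re-fetch would cost the
        # least, until the new document fits.
        while self.used + size > self.capacity:
            victim = min(self.docs, key=lambda u: self.docs[u][1])
            self.used -= self.docs[victim][0]
            del self.docs[victim]
        self.docs[url] = (size, retrieval_time)
        self.used += size
        return True

cache = LatencyAwareCache(capacity_bytes=100)
print(cache.admit("/slow.html", size=60, retrieval_time=0.8))   # True
print(cache.admit("/fast.html", size=30, retrieval_time=0.05))  # False: too cheap to cache
```

The design choice mirrors the dissertation's premise: if the goal is to minimize user-perceived response time, the cache should preferentially hold objects that are expensive to retrieve again, not merely those that are recent or frequent.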
542

Performance evaluation of multithreading in a Diameter Credit Control Application

Åkesson, Gustav, Rantzow, Pontus January 2010 (has links)
Moore's law states that the amount of computational power available at a given cost doubles every 18 months, and indeed, for the past 20 years there has been tremendous development in microprocessors. For the last few years, however, Moore's law has been the subject of debate, since, to manage heat issues, processor manufacturers have begun favoring multicore processors, which means parallel computation has become necessary to fully utilize the hardware. This also means that software has to be written with multiprocessing in mind to take full advantage of the hardware, and writing parallel software introduces a whole new set of problems. For the last couple of years, the demands on telecommunication systems have increased, and to manage the increasing demands, multiprocessor servers have become a necessity. Applications must fully utilize the hardware, and one such application is the Diameter Credit Control Application (DCCA). The DCCA uses the Diameter networking protocol, and its purpose is to provide a framework for real-time charging. This could, for instance, be to grant or deny a user's request for a specific network activity and to account for the actual use of that network resource. This thesis investigates whether it is possible to develop a Diameter Credit Control Application that achieves linear scaling, and the potential pitfalls in developing a scalable DCCA server. The assumption is based on the observation that the DCCA server's connections have little to nothing in common (i.e., little or no synchronization), so introducing more processors should give linear scaling. To investigate whether a DCCA server's performance scales linearly, a prototype has been developed. Along with the development of the prototype, continuous performance analysis was conducted to see what affected performance and server scalability in a multiprocessor DCCA environment. As the results show, quite a few factors besides synchronization and independent connections affected the scalability of the DCCA prototype. The results show that the DCCA prototype did not always achieve linear scaling; however, even where scaling was not linear, certain design decisions gave a considerable performance increase when more processors were introduced.
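A minimal Python sketch of the share-nothing session handling the thesis assumes (illustrative only; this is not the DCCA prototype, and the session fields are invented). Because credit-control sessions share no state, they can be distributed across processes without locks, which is exactly the property that should permit linear scaling:

```python
from concurrent.futures import ProcessPoolExecutor

def handle_session(session):
    """Credit-control logic for one session: grant units against the
    session's balance and account for what is actually consumed."""
    balance = session["granted_units"]
    for request in session["requests"]:
        used = min(request["requested_units"], balance)
        balance -= used  # charge for the network resource used
    return session["id"], balance

if __name__ == "__main__":
    sessions = [{"id": i, "granted_units": 1000,
                 "requests": [{"requested_units": 50}] * 10}
                for i in range(8)]
    # No shared state between sessions, so no synchronization is needed
    # and the work distributes freely across cores.
    with ProcessPoolExecutor() as pool:
        for sid, remaining in pool.map(handle_session, sessions):
            print(f"session {sid}: {remaining} units left")
```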
543

Generationsskräpsamling med explicit kontroll av hårdvarucache [Generational garbage collection with explicit control of the hardware cache]

Karlsson, Karl-Johan January 2006 (has links)
This report evaluates whether an interpreted, high-level, garbage-collected language has enough information about its memory behaviour to make better cache decisions than modern general-purpose CPU hardware. With a generational garbage collector, depending on the promotion algorithm and generation size, around 90% of all objects never leave the first generation. This report is based on the hypothesis that, because of the low promotion rate, accesses to higher generations are sufficiently rare not to benefit from caching. To test this hypothesis, we built an operating system with a Scheme interpreter in kernel mode, where the interpreter controls the cache. Generic x86 PC hardware was used, since it allows fine-grained control of cache decisions. Measurements of execution time in this interpreter show that disabling the cache for generations higher than the first does not give any performance gain, but rather a performance loss of up to 50%. We conclude that this interpreter design is not an improvement, but we cannot conclude that the hypothesis is false in general. We suggest building a better CPU simulator to gather more data from which to make better caching decisions; moving internal interpreter data structures into the garbage-collected heap; and modifying the hardware to allow control in the currently rigid dimension of where data is cached, for example separate control of instruction and data caches, and separate data caches for different areas of memory.
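A toy Python model (hypothetical numbers; not the thesis's collector) of the promotion behaviour behind the 90% figure cited above: if each object survives a nursery collection with low probability, the overwhelming majority of objects never reach the higher generations whose caching the report questions.

```python
import random

def simulate_nursery(n_objects=100_000, survival_rate=0.1, seed=1):
    """Two-generation toy model: an object is promoted iff it survives
    its first nursery collection, with probability `survival_rate`."""
    rng = random.Random(seed)
    promoted = sum(rng.random() < survival_rate for _ in range(n_objects))
    print(f"promoted to old generation: {promoted / n_objects:.1%}")
    print(f"never left the nursery:     {1 - promoted / n_objects:.1%}")

simulate_nursery()  # ~10% promoted, ~90% stay in the first generation
```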
544

Improving an FPGA Optimized Processor

Davari, Mahdad January 2011 (has links)
This work aims at improving an existing soft microprocessor core optimized for the Xilinx Virtex®-4 FPGA. Instruction and data caches are designed and implemented, and interrupt support is added, preparing the microprocessor core to host operating systems. Thorough verification of the added modules is also emphasized in this work. Maintaining the core clock frequency at its maximum has been the main concern throughout all design and implementation steps.
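The abstract does not specify the cache organization; as a generic illustration of the lookup path that instruction and data caches implement, here is a toy direct-mapped model in Python (sizes and field widths are invented, not Virtex-4 specifics):

```python
class DirectMappedCache:
    """Toy direct-mapped cache: an address splits into offset, index,
    and tag; an access hits iff the indexed line holds a matching tag."""

    def __init__(self, n_lines=256, line_bytes=32):
        self.n_lines = n_lines
        self.line_bytes = line_bytes
        self.tags = [None] * n_lines  # one tag per cache line

    def access(self, addr):
        index = (addr // self.line_bytes) % self.n_lines
        tag = addr // (self.line_bytes * self.n_lines)
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag  # fill the line on a miss
        return "miss"

cache = DirectMappedCache()
print(cache.access(0x1000))  # miss (cold)
print(cache.access(0x1004))  # hit (same 32-byte line)
```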
545

HTML5-utveckling av en kommunikationsaggregator för Android : Analys av problem och lösningssatser [HTML5 development of a communication aggregator for Android: an analysis of problems and solution approaches]

Wänglöf, Johan January 2015 (has links)
When a company develops a new mobile application, it must produce several different versions for the application to be usable on the major mobile operating systems. A hybrid application solves this by developing natively only those parts that must use the device's hardware functions. This thesis describes the problems that arose, and how they were solved, regarding notifications, caching, and offline mode in the development of a communication aggregator in HTML5 for Android. It turned out that implementing notifications and caching caused no major problems, whereas offline support was harder to implement.
546

A geochemical and geothermometric study of the Nahlin ophiolite, northwestern British Columbia

McGoldrick, Siobhan S.G. 22 August 2017 (has links)
The Nahlin ophiolite represents one of the largest (~80 km long) and best-preserved ophiolites in the Cordillera of British Columbia and Yukon, Canada, yet it has been understudied compared to other ophiolites worldwide. Bedrock mapping at 1:20,000 scale in the Menatatuline Range area shows that the ophiolite is structurally disrupted with mantle bodies divisible into two massifs: Hardluck and Menatatuline. Studies of 30 samples show that both massifs consist of spinel harzburgites and minor lherzolites that have been strongly depleted by melt extraction (<2 wt % Al2O3 and ~45 wt % MgO). Clinopyroxene REE abundances determined by LA-ICP-MS illustrate different extents of depletion between the two massifs, with YbN varying from 2.3 – 5.0 and 1.7 – 2.2 in the Hardluck and Menatatuline massifs, respectively. Inversion modelling of the clinopyroxene REE abundances yields ~10 – 16% melting in the Hardluck massif and ~16 – 20% melting in the Menatatuline massif, with melt compositions that are compositionally similar to the gabbros and basalts proximal to the mantle rocks. All these extrusive and intrusive rocks in the ophiolite have an arc-signature, implying that the Nahlin ophiolite formed in a supra-subduction zone (SSZ) environment. The Nahlin peridotites document a two-stage evolution: depletion of a locally heterogeneous mantle source by hydrous fractional melting, followed by refertilization of the refractory harzburgite in the mantle wedge evidenced by LREE enrichment in clinopyroxene and whole-rock chemistry. This two-stage evolution is also recorded by the thermal history of the harzburgites. The REE-in-two-pyroxene thermometry has been reset following cryptic and modal metasomatism and relatively slow cooling, whereas major element two pyroxene geothermometry records temperatures varying from near solidus (~1290 °C) to ~800 °C, with the highest temperatures recorded in samples from the Menatatuline massif. The refractory nature of the Menatatuline harzburgites in combination with the arc-influenced volcanic geochemistry provides overwhelming evidence for a SSZ origin. Peridotite from the Hardluck massif displays characteristics of both abyssal and SSZ peridotites. These geochemical and geothermometric constraints can be reconciled by evolution of the Hardluck and Menatatuline massifs as two separate segments along a backarc ridge system, later juxtaposed by dextral strike-slip faulting. Alternatively, the Nahlin ophiolite may represent proto-forearc seafloor spreading associated with subduction initiation akin to the proposed origins of the Izu-Bonin-Mariana arc (Stern et al. 2012; Maffione et al. 2015). In any case, the geochemical data for peridotites and magmatic rocks herein require that the SSZ-type Nahlin ophiolite reside in the upper plate at an intraoceanic convergent margin. This interpretation has strong implications for models of northern Cordilleran tectonics, where the Cache Creek terrane is typically shown as a subducting ocean basin during Cordilleran orogenesis. / Graduate
547

Proximity coherence for chip-multiprocessors

Barrow-Williams, Nick January 2011 (has links)
Many-core architectures provide an efficient way of harnessing the growing numbers of transistors available in modern fabrication processes; however, the parallel programs run on these platforms are increasingly limited by the energy and latency costs of communication. Existing designs provide a functional communication layer but do not necessarily implement the most efficient solution for chip-multiprocessors, placing limits on the performance of these complex systems. In an era of increasingly power limited silicon design, efficiency is now a primary concern that motivates designers to look again at the challenge of cache coherence. The first step in the design process is to analyse the communication behaviour of parallel benchmark suites such as Parsec and SPLASH-2. This thesis presents work detailing the sharing patterns observed when running the full benchmarks on a simulated 32-core x86 machine. The results reveal considerable locality of shared data accesses between threads with consecutive operating system assigned thread IDs. This pattern, although of little consequence in a multi-node system, corresponds to strong physical locality of shared data between adjacent cores on a chip-multiprocessor platform. Traditional cache coherence protocols, although often used in chip-multiprocessor designs, have been developed in the context of older multi-node systems. By redesigning coherence protocols to exploit new patterns such as the physical locality of shared data, improving the efficiency of communication, specifically in chip-multiprocessors, is possible. This thesis explores such a design - Proximity Coherence - a novel scheme in which L1 load misses are optimistically forwarded to nearby caches via new dedicated links rather than always being indirected via a directory structure.
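A toy Python model of the core idea (illustrative; not the thesis's actual protocol or its dedicated-link hardware): on an L1 load miss, nearby cores' caches are probed first, and the directory is consulted only if no neighbour holds the line.

```python
def proximity_lookup(core, addr, l1, neighbours, directory):
    """On an L1 miss, optimistically probe physically adjacent cores
    before falling back to the directory."""
    if addr in l1[core]:
        return "L1 hit"
    for n in neighbours[core]:            # cheap links to adjacent cores
        if addr in l1[n]:
            l1[core].add(addr)            # line forwarded by the neighbour
            return f"forwarded from core {n}"
    directory.setdefault(addr, set()).add(core)  # fall back to directory
    l1[core].add(addr)
    return "directory fill"

# A 4-core row; threads with consecutive IDs sit on adjacent cores.
l1 = {c: set() for c in range(4)}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
directory = {}
l1[1].add(0x40)
print(proximity_lookup(0, 0x40, l1, neighbours, directory))  # forwarded from core 1
```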
548

Caching Strategies And Design Issues In CD-ROM Based Multimedia Storage

Shastri, Vijnan 04 1900 (has links) (PDF)
No description available.
549

Structural relations between the Shuswap and "Cache Creek" complexes near Kalamalka Lake, southern British Columbia

Solberg, Peter Harvey January 1976 (has links)
Five phases of deformation are recognized in Shuswap metamorphics south of Vernon, British Columbia. Phase 1 and 2 deformations are isoclinal, gently dipping folds which trend N and ESE respectively. Some thermal activity may have occurred prior to phase 2 deformation, but metamorphism culminated in the amphibolite facies during and following phase 2. Metamorphism waned prior to the development of NE trending phase 3, folds of which are angular and moderately tight with one steep and one shallowly dipping limb. Phase 4 and 5 deformations trend NE and N respectively, and comprise open upright buckle folds and fractures which are contemporaneous with abundant hydrothermal alteration. The 42 ± 10 m.y. B.P. Sr/Rb whole-rock age date secured from a phase 2 sill probably represents thermal upgrading. Low metamorphic grade "Cache Creek" metasediments west of Vernon have undergone four recognized deformational phases. Phase 1 folds are tight, steeply dipping, and trend WNW. Phase 2 comprises E trending, angular mesoscopic folds. Phase 3 and 4 comprise NE and N trending fracture sets. A large amphibolite sill defines the "Cache Creek" albite-epidote-amphibolite facies metamorphic culmination. Metamorphic hornblendes from the amphibolite yield a 178 ± 6 m.y. B.P. age date, using the K/Ar method. Hydrothermal activity occurred in association with phase 3 and 4 deformations. The final four phases of Shuswap deformation appear to correlate with respective "Cache Creek" phases, based on structural similarities. This suggests that the two complexes may be, at least in part, structural equivalents. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
550

On exploiting location flexibility in data-intensive distributed systems

Yu, Boyang 12 October 2016 (has links)
With the fast growth of data-intensive distributed systems today, more novel and principled approaches are needed to improve system efficiency, ensure the service quality required to satisfy user requirements, and lower the system running cost. This dissertation studies design issues in data-intensive distributed systems, which are differentiated from other systems by their heavy workload of data movement and by the fact that the destination of each data flow is limited to a subset of the available locations, such as the servers holding the requested data; moreover, even among that feasible subset, different locations may yield different performance. The studies in this dissertation improve data-intensive systems by exploiting the flexibility of data storage locations. The dissertation addresses how to determine data placement from measured request patterns using the proposed hypergraph models, improving a series of performance metrics such as data access latency, system throughput, and various costs. To implement the proposal with lower overhead, a sketch-based data placement scheme is presented, which constructs a sparsified hypergraph under a distributed, streaming-based system model and achieves a good approximation of the performance improvement. Since the network can become the bottleneck of distributed data-intensive systems due to frequent data movement among storage nodes, online data placement by reinforcement learning is proposed, which intelligently determines the storage location of each data item at the moment the item is written or updated, with joint awareness of network conditions and request patterns. Meanwhile, noticing that distributed memory caches are an effective means of lowering the workload on backend storage systems, the auto-scaling of memory cache clusters is studied, seeking to balance the energy cost of the service against the performance it must ensure. As the outcome of this dissertation, the designed schemes and methods help to improve the running efficiency of data-intensive distributed systems: they can improve user-perceived service quality at the same level of system resource investment, or lower the monetary expense and energy consumption of maintaining the system at the same performance standard. From these two perspectives, both end users and system providers can benefit from the results of these studies. / Graduate
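A small Python sketch of one way request patterns can drive placement (illustrative only; this is a greedy heuristic, not the dissertation's hypergraph partitioning or its reinforcement-learning scheme): each request is treated as a hyperedge over the items it touches, and items frequently requested together are placed on the same node to reduce cross-node traffic.

```python
from collections import Counter
from itertools import combinations

def place_by_coaccess(requests, n_nodes, capacity):
    """Greedily co-locate items that appear together in requests."""
    co = Counter()                      # item pair -> co-access count
    for items in requests:              # each request is a hyperedge
        for a, b in combinations(sorted(set(items)), 2):
            co[(a, b)] += 1
    placement, load = {}, Counter()
    for item in sorted({i for req in requests for i in req}):
        def affinity(node):             # co-access weight to items on node
            return sum(co[tuple(sorted((item, other)))]
                       for other, n in placement.items() if n == node)
        candidates = [n for n in range(n_nodes) if load[n] < capacity]
        best = max(candidates, key=affinity)
        placement[item] = best
        load[best] += 1
    return placement

requests = [["a", "b"], ["a", "b"], ["c", "d"], ["b", "c"]]
print(place_by_coaccess(requests, n_nodes=2, capacity=2))
# {'a': 0, 'b': 0, 'c': 1, 'd': 1} -- co-accessed pairs share a node
```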
