  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

School District Reorganization and Consolidation in Cache County, Utah

Bagley, Grant Richard 01 May 1964 (has links)
A historical study of school organization and school district consolidation enables both educators and lay citizens to better understand and appreciate schools as they are today. By studying the past development of a given institution, one can better evaluate current requirements and effect future changes as needs arise. The Cache County School System as presently constituted has evolved over the years from a cluster of small independent village schools with separate boards of education into a highly centralized system with one board of education and consolidated schools. The purpose of this study is to trace and analyze the development of this system.
82

An Interpolative Analytical Cache Model with Application to Performance-Power Design Space Exploration

Peng, Bing, Wong, Weng Fai, Tay, Yong Chiang 01 1900 (has links)
Caches are known to consume up to half of all system power in embedded processors. Co-optimizing the performance and power of the cache subsystem is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. In this project, we propose an analytical cache model that succinctly captures the miss performance of an application over the entire cache parameter space. Unlike exhaustive trace-driven simulation, our model requires that the program be simulated only once so that a few key characteristics can be obtained. Using these application-dependent characteristics, the model can span the entire cache parameter space of cache sizes, associativities, and cache block sizes. Our unified model caters for direct-mapped, set-associative, and fully associative instruction, data, and unified caches. Validation against full trace-driven simulations shows that our model has a high degree of fidelity. Finally, we show how the model can be coupled with a power model for caches so that one can very quickly identify Pareto-optimal performance-power design points for rapid design space exploration. / Singapore-MIT Alliance (SMA)
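The thesis's analytical model is not reproduced in the abstract, but the exhaustive parameter-space sweep it aims to replace can be illustrated with a small trace-driven LRU cache simulator (a hypothetical sketch; the trace and parameter values are illustrative, not from the thesis):

```python
# Illustrative sweep of cache size / associativity / block size over one
# synthetic address trace, using a simple set-associative LRU simulator.
from collections import OrderedDict
from itertools import product

def miss_rate(trace, cache_bytes, assoc, block_bytes):
    n_sets = max(1, cache_bytes // (assoc * block_bytes))
    sets = [OrderedDict() for _ in range(n_sets)]
    misses = 0
    for addr in trace:
        block = addr // block_bytes
        s = sets[block % n_sets]
        if block in s:
            s.move_to_end(block)        # hit: refresh LRU position
        else:
            misses += 1
            if len(s) >= assoc:
                s.popitem(last=False)   # evict least recently used block
            s[block] = True
    return misses / len(trace)

# Synthetic trace: a loop striding 4 bytes over a 4 KiB array.
trace = [i * 4 % 4096 for i in range(10_000)]

for size, assoc, block in product([1024, 4096], [1, 2], [16, 64]):
    print(size, assoc, block, miss_rate(trace, size, assoc, block))
```

The point of the thesis's analytical model is precisely to avoid re-running such a simulation at every design point: the program is profiled once, and the model predicts the whole sweep.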
83

Integration of Memory Subsystem with Microprocessor Supporting On-Chip Real Time Trace Compression

Lai, Chun-hung 06 September 2007 (has links)
In this thesis, we integrate the memory subsystem, including the cache and MMU (Memory Management Unit), with the embedded 32-bit microprocessor SYS32TM-II to support the operating system's virtual memory mechanism and to manage memory effectively among the multiple processes in the system: the MMU provides virtual-to-physical address translation, and the cache improves system performance. We reuse the memory subsystem of the LEON2 SoC platform, design a communication interface to coordinate the SYS32TM-II processor core with the LEON2 memory subsystem, and modify the LEON2 memory subsystem to make it compatible with SYS32TM-II. After integrating the memory subsystem, we propose reusing the cache for real-time program address trace compression. The advantage is that reusing the cache, with only minor hardware modification, not only saves the overhead of a dedicated hardware compressor but also achieves a high compression ratio. Experimental results show that the proposed approach incurs little additional hardware area yet achieves approximately a 90% compression ratio in real time. This thesis thus delivers a memory subsystem with a parameterized design and support for system debugging: it improves system performance, provides the hardware support required by the operating system, and, with minor modification, can capture the dynamic program execution trace in parallel with the microprocessor. The address trace compression mechanism does not affect program execution and is capable of compressing in real time.
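The core idea of cache-reuse trace compression can be sketched in software: on a cache hit the recorder emits only a short index token, and only misses carry a full address. This is a hypothetical illustration of the principle, not the thesis's hardware design; the index function and token format are assumptions:

```python
# Hypothetical sketch: a small direct-mapped "trace cache" shared by the
# compressor and decompressor. Hits emit a short index token; misses emit
# the full address, so repetitive instruction streams compress well.
N_ENTRIES = 16

def compress(trace):
    cache = [None] * N_ENTRIES
    out = []
    for addr in trace:
        idx = (addr // 4) % N_ENTRIES   # word-indexed, like an I-cache
        if cache[idx] == addr:
            out.append(("H", idx))       # hit: log2(N)-bit index only
        else:
            cache[idx] = addr
            out.append(("M", addr))      # miss: full 32-bit address
    return out

def decompress(stream):
    cache = [None] * N_ENTRIES
    trace = []
    for tok in stream:
        if tok[0] == "H":
            trace.append(cache[tok[1]])  # replay the cached address
        else:
            addr = tok[1]
            cache[(addr // 4) % N_ENTRIES] = addr
            trace.append(addr)
    return trace
```

A tight loop compresses to a handful of miss records plus hit tokens, which is the effect the thesis exploits to approach a 90% compression ratio in hardware.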
84

Low Latency Stochastic Filtering Software Firewall Architecture

Ghoshal, Pritha 14 March 2013 (has links)
Firewalls are an integral part of network security. They are pervasive throughout networks and can be found in mobile phones, workstations, servers, switches, routers, and standalone network devices. Their primary responsibility is to track and discard unauthorized network traffic, and they may be implemented in anything from costly special-purpose hardware to flexible, inexpensive software running on commodity hardware. The most basic action of a firewall is to match packets against a set of rules in an Access Control List (ACL) to determine whether they should be allowed or denied access to a network or resource. By design, traditional firewalls must sequentially search through the ACL table, leading to increasing latencies as the number of entries in the table increases. This is particularly true for software firewalls implemented on commodity server hardware. Reducing latency in software firewalls may enable them to replace hardware firewalls in certain applications. In this thesis, we propose a software firewall architecture that removes the sequential ACL lookup from the critical path and thus decreases the latency per packet in the common case. To accomplish this we implement a Bloom filter-based, stochastic pre-classification stage, enabling the bifurcation of the predicted-good and predicted-bad packet code paths and greatly improving performance. Our proposed architecture improves firewall performance by 67% to 92% under anonymized trace-based workloads from CAIDA servers. While our approach can incorrectly classify a small subset of bad packets as good, we show that these holes are neither predictable nor permanent, leading to a vanishingly small probability of firewall penetration.
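The pre-classification idea rests on a standard Bloom filter property: a negative answer is always correct, so packets that miss the filter can skip the sequential ACL scan entirely. A minimal sketch, assuming the filter is trained on deny-rule source addresses (the rule set and sizing here are illustrative, not from the thesis):

```python
# Hypothetical sketch of Bloom-filter pre-classification for a firewall.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k = m_bits, k
        self.bits = 0                    # bit array packed into one int

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def maybe_contains(self, item):
        # False positives possible, false negatives never.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

deny_acl = {"10.0.0.1", "10.0.0.7", "192.168.1.5"}
bf = BloomFilter()
for src in deny_acl:
    bf.add(src)

def classify(src_ip):
    if not bf.maybe_contains(src_ip):
        return "fast-path allow"         # definite miss: skip ACL scan
    # Filter hit: fall back to the (slow) exact ACL check.
    return "deny" if src_ip in deny_acl else "slow-path allow"
```

The common case (a good packet missing the filter) never touches the ACL, which is where the latency win comes from; the rare false positive merely takes the slow path, never a wrong decision.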
85

Managing Cache Consistency to Scale Dynamic Web Systems

Wasik, Chris January 2007 (has links)
Data caching is a technique that web servers can use to speed up the response time of client requests. Dynamic websites are becoming more popular, but they pose a problem: it is difficult to cache dynamic content, as each user may receive a different version of a webpage. Caching fragments of content in a distributed way solves this problem, but poses a maintainability challenge: cached fragments may depend on other cached fragments, or on underlying information in a database. When the underlying information is updated, care must be taken to ensure the cached information is also invalidated. If new code is added that updates the database, the cache can very easily become inconsistent with the underlying data. The deploy-time dependency analysis method solves this maintainability problem by analyzing web application source code at deploy time and statically writing cache dependency information into the deployed application. This enables the significant performance gains of distributed object caching without any of the maintainability problems that such caching creates.
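The runtime side of deploy-time dependency analysis can be sketched as a static table-to-fragment map consulted on every write. This is a hypothetical illustration of the mechanism, with made-up table and fragment names; the thesis derives the map automatically from source code:

```python
# Hypothetical sketch: a deploy-time analysis emits a static map from
# database tables to the cached fragments that read them; the runtime
# consults the map on every write so stale fragments are invalidated.
DEPENDENCIES = {                        # produced at deploy time
    "users":  ["fragment:profile", "fragment:header"],
    "orders": ["fragment:order_list", "fragment:header"],
}

cache = {
    "fragment:profile":    "<div>profile</div>",
    "fragment:header":     "<nav>header</nav>",
    "fragment:order_list": "<ul>orders</ul>",
}

def write_through(table, apply_db_write):
    apply_db_write()                    # perform the database update
    for key in DEPENDENCIES.get(table, []):
        cache.pop(key, None)            # invalidate dependent fragments

write_through("users", lambda: None)    # any write to "users"...
# ...evicts the profile and header fragments but leaves order_list cached.
```

Because the map is generated at deploy time rather than maintained by hand, newly added database-writing code cannot silently leave the cache inconsistent.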
86

Reducing the load on transaction-intensive systems through distributed caching

Andersson, Joachim, Lindbom Byggnings, Johan January 2012 (has links)
Scania is an international manufacturer of trucks, buses, and engines with a sales and service organization in more than 100 countries all over the globe (Scania, 2011). In 2011 alone, Scania delivered over 80,000 vehicles, an increase of 26% from the previous year. The company continues to deliver more trucks each year while expanding to other areas of the world, which means that data traffic will increase remarkably in the transaction-intensive fleet management system (FMS). This increases the need for a scalable system that adds more sources to handle these requests in parallel. Distributed caching is one technique that can solve this issue: it makes applications and systems more scalable, and it can be used to reduce the load on the underlying data sources. The purpose of this thesis is to evaluate whether or not distributed caching is a suitable technical solution for Scania FMS. The aim of the study is to identify scenarios in FMS where a distributed cache solution could be of use, and to test the performance of two distributed cache products while simulating these scenarios. The results from the tests are then used to evaluate the distributed cache products and to compare distributed caching performance to that of a single database. The products evaluated in this thesis are Alachisoft NCache and Microsoft AppFabric. The results from the performance tests show that NCache outperforms AppFabric in all aspects. In conclusion, distributed caching has been demonstrated to be a viable option when scaling out the system.
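The load-reduction effect both products rely on is the cache-aside pattern: reads consult the cache first and only fall through to the database on a miss. A minimal sketch, with invented keys and data (the real systems use NCache/AppFabric client APIs against a distributed cluster, not a local dict):

```python
# Hypothetical sketch of the cache-aside read path: repeated reads of a
# hot key generate exactly one database access, however many clients ask.
db_reads = 0
database = {"vehicle:42": {"model": "example-truck", "mileage": 120_000}}
cache = {}

def get(key):
    global db_reads
    if key in cache:
        return cache[key]               # cache hit: no database load
    db_reads += 1
    value = database[key]               # cache miss: one database read
    cache[key] = value                  # populate the cache for next time
    return value

for _ in range(1000):                   # a burst of identical requests
    get("vehicle:42")
```

A thousand reads cost one database round trip; in the distributed case the same logic runs against a shared cache cluster, so the saving is shared across all application servers.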
87

Banking and Finance in Cache Valley, 1856-1956

Hurren, Patricia Kaye 01 January 1956 (has links)
Having a special interest in banking through her close association with bankers and practical banking activity, the writer quite naturally gravitated to the field of finance in her search for a thesis problem. While some aspects of the economic history of Cache Valley had been studied, nothing had been done with its financial history. On the eve of Cache Valley's centennial year, BANKING AND FINANCE IN CACHE VALLEY was thought to be an especially timely subject, since source material, much of which time is daily erasing, was available for the study.
88

A Management Study of the Cache Elk Herd

Hancock, Norman V. 01 January 1955 (has links)
The present study was undertaken to acquire additional management information for both the North and South Cache units. It was recognized that effectiveness of elk management could be increased if such information were available as population data, age composition figures, effectiveness of the winter feeding program, herd productivity and mortality, summer and winter distribution, and the inter-specific role of deer and domestic livestock with the elk. The present study was commenced during late fall of 1951. Formal field work continued through the spring of 1953, though limited field work extended through the early 1954 winter. The study has been dedicated to the procurement of elk management information on both the North and South Cache units.
89

Cache Design for a Hardware Accelerated Sparse Texture Storage System

Yee, Wai Min January 2004 (has links)
Hardware texture mapping is essential for real-time rendering. Unfortunately, memory bandwidth and latency often bound performance in current graphics architectures. Bandwidth consumption can be reduced by compressing the texture map or by using a cache. However, the way a texture map occupies memory and how it is accessed affect the pattern of memory accesses, which in turn affects cache performance. Thus texture compression schemes and cache architectures must be designed in conjunction with each other. We define a sparse texture to be a texture where a substantial percentage of the texture is constant. Sparse textures are of interest as they occur often, and they are used as parts of more general texture compression schemes. We present a hardware-compatible implementation of sparse textures based on B-tree indexing and explore cache designs for it. We demonstrate that it is possible to have the bandwidth consumption and miss rate due to the texture data alone scale with the area of the region of interest. We also show that the additional bandwidth consumption and hideable latency due to the B-tree indices are low. Furthermore, the caches necessary for these textures can be quite small.
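The storage-scaling claim follows from the sparse representation itself: only blocks containing non-constant texels are stored, and an index resolves everything else to the constant. A hypothetical sketch (a dict stands in for the thesis's hardware B-tree index; block size and values are illustrative):

```python
# Hypothetical sketch of a sparse texture: only non-constant 4x4 blocks
# are stored; an index (a dict standing in for the thesis's B-tree) maps
# block coordinates to texel data, and everything else is the constant.
BLOCK = 4                                # 4x4 texel blocks

class SparseTexture:
    def __init__(self, constant=0):
        self.constant = constant
        self.blocks = {}                 # (bx, by) -> {(x, y): texel}

    def write(self, x, y, texel):
        key = (x // BLOCK, y // BLOCK)
        self.blocks.setdefault(key, {})[(x, y)] = texel

    def read(self, x, y):
        block = self.blocks.get((x // BLOCK, y // BLOCK))
        if block is None:
            return self.constant         # constant region: zero storage
        return block.get((x, y), self.constant)

tex = SparseTexture(constant=0)
tex.write(5, 5, 255)                     # one texel touched -> one block
```

Storage and, by extension, the texture-data bandwidth grow with the number of touched blocks, i.e. with the area of the region of interest, not with the full texture dimensions.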
