About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
101

Cache structures based on the execution stack for high level languages

Borgwardt, Peter Arthur. January 1981
Thesis (Ph. D.)--University of Washington, 1981. / Vita. Bibliography: leaves [173]-177.
102

Switch-based Fast Fourier Transform processor

Mohd, Bassam Jamil, January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
103

Protection unit for radiation induced errors in flash memory systems

Bryer, Bevan 12 1900
Thesis (MScEng)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: Flash memory and the errors induced in it by radiation were studied. A test board and a radiation test program were then designed and developed, and the system was irradiated. This gave successful results, which confirmed aspects of the study and gave valuable insight into flash memory behaviour. To date, the board is still being used to test various flash devices for radiation-harsh environments. A memory protection unit (MPU) was conceptually designed and developed to monitor flash devices, increasing their reliability in radiation-harsh environments. The unit was designed for use onboard a micro-satellite, and the flash device chosen for this study was the K9F1208XOA model from SAMSUNG. The MPU was designed to detect, maintain, mitigate and report radiation-induced errors in this flash device. Most of the design was implemented in field programmable gate arrays and was realised using VHDL. Simulations were performed to verify the functionality of the design subsystems; they showed that the various emulated errors were handled successfully by the MPU. A modular design methodology was followed, allowing the chosen flash device to be replaced by any other flash device after a small reconfiguration, and allowing parts of the system to be duplicated to protect more than one device.
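The abstract leaves the MPU's detection-and-correction scheme unspecified (the thesis implements its protection logic in VHDL on FPGAs). As a purely illustrative sketch of the kind of machinery involved, here is a minimal single-error-correcting Hamming code in Python; the function names and the byte-sized example are hypothetical, not from the thesis:

```python
def hamming_encode(data):
    """Encode a list of 0/1 data bits with parity bits at power-of-two
    positions, giving single-error correction over the codeword."""
    m = len(data)
    r = 0
    while (1 << r) < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)              # 1-indexed for clarity
    j = 0
    for i in range(1, n + 1):
        if i & (i - 1):               # not a power of two: data position
            code[i] = data[j]
            j += 1
    for p in range(r):                # set each parity bit for even parity
        pos = 1 << p
        code[pos] = sum(code[i] for i in range(1, n + 1) if i & pos) % 2
    return code[1:]

def hamming_decode(code):
    """Return (data bits, syndrome); a nonzero syndrome is the 1-based
    position of a single flipped bit, which is corrected before readout."""
    n = len(code)
    syndrome = 0
    for i in range(1, n + 1):
        if code[i - 1]:
            syndrome ^= i
    if syndrome:                      # single-bit error: flip it back
        code = code[:]
        code[syndrome - 1] ^= 1
    data = [code[i - 1] for i in range(1, n + 1) if i & (i - 1)]
    return data, syndrome

page_byte = [1, 0, 1, 1, 0, 1, 0, 1]  # one data byte from a flash page
word = hamming_encode(page_byte)
word[6] ^= 1                          # simulate a radiation-induced bit flip
decoded, syndrome = hamming_decode(word)
assert decoded == page_byte and syndrome == 7
```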
104

ESetStore: an erasure-coding based distributed storage system with fast data recovery

Liu, Chengjian 31 August 2018
The past decade has witnessed the rapid growth of data in large-scale distributed storage systems. Triplication, the reliability mechanism with 3x storage overhead adopted by many large-scale distributed storage systems, incurs heavy storage costs as the amount of stored data keeps growing. Consequently, erasure codes have been introduced in many storage systems because they provide higher storage efficiency and fault tolerance than data replication. However, erasure coding suffers from performance-degrading factors in both I/O and computation, which can severely limit large-scale erasure-coded storage systems. In this thesis, we investigate how to eliminate some key performance issues in I/O and computation when applying erasure coding in large-scale storage systems, and we propose a prototype named ESetStore to improve the recovery performance of erasure-coded storage systems. Our studies are as follows. First, we study the encoding and decoding performance of erasure coding, which can be a key bottleneck given state-of-the-art disk I/O throughput and network bandwidth. We propose a graphics processing unit (GPU)-based implementation of erasure coding named G-CRS, which employs the Cauchy Reed-Solomon (CRS) code, to improve encoding and decoding performance. To maximize the coding performance of G-CRS by fully utilizing the GPU's computational power, we design and implement a set of optimization strategies. Our evaluation demonstrates that G-CRS is 10 times faster than most other coding libraries. Second, we investigate the performance degradation introduced by intensive I/O operations during recovery in large-scale erasure-coded storage systems. To improve recovery performance, we propose a data placement algorithm named ESet. We define a configurable parameter, the overlapping factor, that lets system administrators easily achieve the desired degree of recovery I/O parallelism. Our simulation results show that ESet can significantly improve data recovery performance without violating the reliability requirement, by distributing data and code blocks across different failure domains. Third, we examine the performance of applying coding techniques to in-memory storage. We design a reliable in-memory cache for key-value stores named R-Memcached, which serves as a prelude to applying erasure coding to in-memory metadata storage. R-Memcached exploits coding techniques to achieve reliability and can tolerate up to two node failures. Our experimental results show that R-Memcached maintains very good latency and throughput even during node failures. Finally, we design and implement the ESetStore prototype for erasure-coded storage systems, which integrates our data placement algorithm ESet to bring fast data recovery to storage systems.
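For readers unfamiliar with erasure coding, here is a minimal, hypothetical sketch of the idea that CRS generalizes: a single XOR parity block per stripe (the simplest member of the Reed-Solomon family), which lets any one lost block be rebuilt from the survivors. CRS extends this to m parity blocks derived from a Cauchy matrix over GF(2^w), tolerating any m losses; G-CRS moves that matrix arithmetic onto the GPU.

```python
import os

def xor_blocks(blocks):
    """Bitwise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data_blocks):
    """k data blocks -> stripe of k data blocks plus 1 parity block."""
    return data_blocks + [xor_blocks(data_blocks)]

def recover(stripe, lost):
    """Rebuild the single missing block: the XOR of all survivors equals
    it, because the XOR of every block in a stripe is zero."""
    return xor_blocks([b for i, b in enumerate(stripe) if i != lost])

k = 4
stripe = encode([os.urandom(16) for _ in range(k)])
assert recover(stripe, 2) == stripe[2]   # rebuild a lost data block
assert recover(stripe, k) == stripe[k]   # or the parity block itself
```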
105

An adaptive discrete cosine transform coding scheme for digital x-ray images

Mclean, Ivan Hugh January 1989
The ongoing development of storage devices and technologies for medical image management has led to growth in the digital archiving of these images. The characteristics of medical x-rays are examined, and a number of digital coding methods are considered. An investigation of several fast cosine transform algorithms is carried out, and an adaptive cosine transform coding technique is implemented which produces good-quality images at bit rates below 0.38 bits per picture element.
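As a hedged illustration of transform coding (not the thesis's adaptive scheme, whose quantiser and bit-allocation details the abstract does not give), the following sketch performs a blockwise 2-D DCT and keeps only the largest coefficients in each block:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(image, block=8, keep=0.1):
    """Blockwise 2-D DCT: zero all but the largest `keep` fraction of
    coefficients per block, then invert; returns the reconstruction."""
    h, w = image.shape
    out = np.empty((h, w))
    for r in range(0, h, block):
        for c in range(0, w, block):
            coef = dctn(image[r:r+block, c:c+block].astype(float),
                        norm='ortho')
            cutoff = np.quantile(np.abs(coef), 1 - keep)
            coef[np.abs(coef) < cutoff] = 0.0
            out[r:r+block, c:c+block] = idctn(coef, norm='ortho')
    return out

img = np.random.rand(64, 64)              # stand-in for an x-ray image
rec = block_dct_compress(img, keep=0.1)
print('RMSE:', np.sqrt(np.mean((img - rec) ** 2)))
```

Keeping roughly 10% of coefficients at a few bits each lands in the sub-0.5 bits-per-pixel regime the abstract targets, though the thesis's actual rate control is adaptive rather than a fixed threshold.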
106

Statistical analysis of large scale data with perturbation subsampling

Yao, Yujing January 2022
The past two decades have witnessed rapid growth in the amount of data available to us. Many fields, including physics, biology, and medical research, generate enormous datasets with a large sample size, a high number of dimensions, or both. For example, some datasets in physics contain millions of records; a Statista survey forecast that in 2022 there would be over 86 million users of health apps in the United States, generating massive mHealth data; and more and more large studies, such as the UK Biobank study, have been carried out. This gives us unprecedented access to data and allows us to extract and infer vital information, but it also poses new challenges for statistical methodologies and computational algorithms. For increasingly large datasets, computation can be a major hurdle for valid analysis: conventional statistical methods lack the scalability to handle such large sample sizes, and data storage and processing may exceed the capacity of a typical computer. The UK Biobank genotype and phenotype dataset contains about 500,000 individuals with more than 800,000 genotyped single nucleotide polymorphism (SNP) measurements per person, a size that may well exceed a computer's physical memory. Further, high dimensionality combined with a large sample size can lead to heavy computational cost and algorithmic instability. The aim of this dissertation is to provide statistical approaches that address these issues. Chapter 1 reviews the existing literature. In Chapter 2, a novel perturbation subsampling approach based on independent and identically distributed stochastic weights is developed for the analysis of large-scale data. The method is justified for optimizing convex criterion functions by establishing asymptotic consistency and normality of the resulting estimators; it provides consistent point and variance estimators simultaneously and is also feasible in a distributed framework. The finite sample performance of the proposed method is examined through simulation studies and real data analysis. In Chapter 3, a repeated block perturbation subsampling method is developed for the analysis of large-scale longitudinal data using the generalized estimating equation (GEE) approach, a general method for analyzing longitudinal data by fitting marginal models. The proposed method again provides consistent point and variance estimators simultaneously, and the asymptotic properties of the resulting subsample estimators are studied. The finite sample performance of the proposed methods is evaluated through simulation studies and an mHealth data analysis. With the development of technology, large-scale high-dimensional data are also increasingly prevalent, and conventional statistical methods for high-dimensional data, such as the adaptive lasso (AL), lack the scalability to process such large sample sizes. Chapter 4 introduces the repeated perturbation subsampling adaptive lasso (RPAL), a new procedure that incorporates features of both perturbation and subsampling to yield a robust, computationally efficient estimator for variable selection, statistical inference, and finite-sample false discovery control in the analysis of big data. RPAL is well suited to modern parallel and distributed computing architectures while retaining generic applicability and statistical efficiency. The theoretical properties of RPAL are studied, and simulation studies compare the proposed estimator to the full-data estimator and traditional subsampling estimators. The proposed method is also illustrated with the analysis of omics datasets.
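The dissertation's estimator is defined for general convex criterion functions; as a loudly hypothetical toy reading of the Chapter 2 idea, specialized to ordinary least squares only, one can refit on random subsamples with i.i.d. mean-one stochastic weights and read point and spread estimates off the replicates:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_subsample_ols(X, y, n_sub, B=200):
    """Fit B weighted least-squares problems on random subsamples with
    i.i.d. Exp(1) weights; return the average estimate and the empirical
    variance of the replicates (a proxy for sampling variability)."""
    n = len(y)
    reps = []
    for _ in range(B):
        idx = rng.choice(n, size=n_sub, replace=False)
        w = rng.exponential(1.0, size=n_sub)   # i.i.d. mean-one weights
        sw = np.sqrt(w)                        # row scaling for weighted LS
        beta, *_ = np.linalg.lstsq(X[idx] * sw[:, None], y[idx] * sw,
                                   rcond=None)
        reps.append(beta)
    reps = np.asarray(reps)
    return reps.mean(axis=0), reps.var(axis=0, ddof=1)

n, p = 100_000, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)
point, spread = perturbation_subsample_ols(X, y, n_sub=2_000)
print(point, np.sqrt(spread))
```

The actual method's weighting scheme, the scaling of its variance estimator, and its theoretical guarantees are as developed in Chapters 2 and 3, not in this sketch.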
107

A layered virtual memory manager.

Mason, Andrew Halstead. January 1977
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering, 1977. / Bibliography: leaves 127-132.
108

A Quantitative Analysis of Memory Controller Page Policies

Blackmore, Matthew 28 February 2013
Two common goals in computing system design are increasing performance and decreasing power consumption, and DRAM-based memory subsystems are a major component of both. Memory controllers employ strategies to schedule DRAM operations efficiently, reducing latency and exploiting DRAM low-power modes when possible. One of the most important of these is the page policy, which determines when to close pages in DRAM; an effective memory controller page policy is important to minimizing power consumption and increasing system performance. This thesis explores the impact of the memory controller page policy on performance, as measured by the number of page-hits minus page-misses and by estimated average memory access latency. I captured real-time DDR3 command and address memory traces for the SPEC CPU2006 benchmarks under three memory controller page policies: closed-page, fixed open-page, and Intel's adaptive open-page [1]. Traces were captured using a programmable memory traffic analyzer (PMTA), a device interposed between the DIMM slot and the DDR3 DIMM on the motherboard. The memory traces for each benchmark were analyzed to determine the absolute numbers of page-hits and page-misses that occurred. In software post-processing I simulated a theoretically perfect "oracle" page policy for each captured trace to gauge the efficiency of existing policies: under the oracle policy, the SPEC CPU2006 benchmarks exhibited an average increase in page-hits minus page-misses of 280.3% and an average decrease in average memory latency of 11.1%. Two new adaptive open-page policies are proposed and simulated using the captured traces; they yield average increases of 74.8% and 62.4% in page-hits minus page-misses over Intel's adaptive open-page policy, and average decreases in average memory latency of 3.8% and 3.4%.
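To make the hit/miss accounting concrete, here is a toy trace-driven sketch of the fixed open-page policy's score; the trace format and the treatment of first-touch accesses are simplifications, not the thesis's PMTA pipeline:

```python
def open_page_score(trace):
    """Fixed open-page policy: keep the last-accessed row open in each
    bank. Same row -> page-hit; different row -> page-miss (precharge +
    activate). First touch of a bank is counted as a miss for simplicity.
    Returns the thesis's metric, page-hits minus page-misses."""
    open_row, hits, misses = {}, 0, 0
    for bank, row in trace:
        if open_row.get(bank) == row:
            hits += 1
        else:
            misses += 1
            open_row[bank] = row
    return hits - misses

# Toy (bank, row) trace; real input would be captured DDR3 commands.
trace = [(0, 5), (0, 5), (0, 5), (1, 2), (0, 7), (0, 7), (1, 2)]
print(open_page_score(trace))   # 4 hits - 3 misses = 1
```

Under a closed-page policy every access goes to a precharged bank, so both counts are zero; an oracle policy, knowing the future, closes a page exactly when the next access to that bank targets a different row, converting avoidable page-misses into page-empty accesses.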
109

Hardware related optimizations in a Java virtual machine

Gu, Dayong. January 2007
No description available.
110

A design methodology for robust, energy-efficient, application-aware memory systems

Chatterjee, Subho 28 August 2012
Memory design is a crucial component of VLSI system design from the area, power, and performance perspectives. To meet increasingly challenging system specifications, architecture-, circuit-, and device-level innovations are required for existing memory technologies, and emerging memory solutions are widely explored to meet strict budgets. This thesis presents design methodologies for custom memory design with the objective of power-performance benefits for specific applications. Taking STTRAM (spin transfer torque random access memory) as an example of an emerging memory candidate, the design space is explored to find the optimal-energy design solution. A thorough thermal reliability study is performed to assess detection reliability challenges, and circuit solutions are proposed to ensure reliable operation. Adopting the application-specific optimal-energy solution is shown to yield considerable energy benefits in a read-heavy application called memory-based computing (MBC). Circuit-level customizations are then studied for volatile SRAM (static random access memory), providing an improved energy-delay product (EDP) for the same MBC application. Memory design must also anticipate challenges arising not only from the nature of the application but also from packaging: taking 3D die-folding as an example, the shift in SRAM performance under die-folding is illustrated. Overall, the thesis demonstrates how knowledge of the system and the packaging can help achieve power-efficient, high-performance memory design.
