  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

IMPROVING THE PERFORMANCE AND ENERGY EFFICIENCY OF EMERGING MEMORY SYSTEMS

Guo, Yuhua 01 January 2018 (has links)
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits scalability and degrades performance. Second, although memory capacity has increased dramatically in the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory wall problem. Third, DRAM dissipates considerable power, reportedly accounting for as much as 40% of total system energy, and this problem worsens as DRAM scales up. To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose SELF, a high-performance and bandwidth-efficient approach to breaking the memory bandwidth wall by exploiting die-stacked DRAM as part of memory; 3) we propose Dual Role HBM (DR-HBM), a cost-effective and energy-efficient architecture for hybrid memory systems composed of high bandwidth memory (HBM) and phase change memory (PCM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to HBM to improve performance, while cold pages are kept in PCM to save energy.
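The hot/cold page split described in the abstract can be illustrated with a minimal sketch. This is not the dissertation's mechanism: the counter-based threshold, the page identifiers, and the class layout are all invented for illustration, and real hot-page tracking must be far cheaper than a per-page dictionary.

```python
# Hypothetical sketch of hot-page tracking for a hybrid HBM/PCM memory,
# in the spirit of DR-HBM as described above. The threshold value and
# data structures are illustrative assumptions, not the thesis design.

HOT_THRESHOLD = 4  # accesses before a page is considered "hot" (assumed)

class HybridMemory:
    def __init__(self):
        self.access_counts = {}   # page -> access counter
        self.in_hbm = set()       # pages currently resident in HBM

    def access(self, page):
        """Count an access; migrate the page to HBM once it turns hot."""
        self.access_counts[page] = self.access_counts.get(page, 0) + 1
        if self.access_counts[page] >= HOT_THRESHOLD and page not in self.in_hbm:
            self.in_hbm.add(page)  # hot page migrated to HBM
        return "HBM" if page in self.in_hbm else "PCM"

mem = HybridMemory()
results = [mem.access(0x1000) for _ in range(5)]
# the first three accesses are served from PCM, later ones from HBM
```

Cold pages simply never cross the threshold and remain in PCM, which is where the energy saving in the abstract comes from.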
62

Memory Architecture Template for Fast Block Matching Algorithms on Field Programmable Gate Arrays

Chandrakar, Shant 01 December 2009 (has links)
Fast Block Matching (FBM) algorithms for video compression are well suited to acceleration using parallel data-path architectures on Field Programmable Gate Arrays (FPGAs). However, designing an efficient on-chip memory subsystem that provides the required throughput to this parallel data-path architecture is a complex problem. This thesis presents a memory architecture template that can be parameterized for a given FBM algorithm, number of parallel Processing Elements (PEs), and block size. The template can be parameterized with well-known exploration techniques to design efficient on-chip memory subsystems. Memory subsystems are derived for two existing FBM algorithms and implemented on the Xilinx Virtex-4 FPGA family. Results show that, in the best case, the derived memory subsystem supports up to 27 more parallel PEs than the three existing subsystems and processes integer pixels in a 1080p video sequence at a rate of up to 73 frames per second. Speculative execution of an FBM algorithm with the same number of PEs increases the number of frames processed per second by 49%.
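For readers unfamiliar with block matching, the kernel that FBM algorithms accelerate is a search for the best-matching block in a reference frame, typically scored by the sum of absolute differences (SAD). The sketch below shows the exhaustive form of that search; frame contents, block size, and search range are invented for the example, and FBM algorithms exist precisely to avoid visiting every candidate like this.

```python
# Minimal full-search block matching using the sum of absolute
# differences (SAD). Illustrative only: FBM algorithms prune this search,
# and the memory subsystem in the thesis feeds many such PEs in parallel.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(ref, cur_block, top, left, search=1):
    """Exhaustively search a small window of `ref` around (top, left) for
    the candidate block closest to `cur_block`; returns ((dy, dx), sad)."""
    n = len(cur_block)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                cand = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(cur_block, cand)
                if best is None or cost < best[1]:
                    best = ((dy, dx), cost)
    return best
```

Each candidate position re-reads an overlapping block of reference pixels, which is why the on-chip memory organization, not the arithmetic, tends to be the bottleneck the thesis addresses.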
63

Oncologists' perceptions of the ethical, legal and social implications of genetic testing and microfluidic lab-on-chip technology

Wallin, Crystal 14 June 2006
The objectives of this study are twofold: first, to give an account of current methods of knowledge production, and second, to contribute a consultation piece on oncologists' perceptions of non-technical issues regarding the ethical, legal and social implications of microfluidic lab-on-chip technology (MF LOC). Two connected thesis statements are put forth. First, understanding the transformations of knowledge production will allow a more socially and ethically informed mode of governance to emerge. Second, it is important to consider who might use the technology and how it might affect institutions and individuals. Interviews were conducted with 31 Canadian oncologists between August 2004 and February 2005, and qualitative analysis was used to examine the oncologists' responses. Of the types of knowledge production reviewed (Mode-1, Mode-2, Triple Helix, and Post-normal science), the Triple Helix thesis was most supported; however, an integration of characteristics of Mode-2 with the Triple Helix thesis best accounts for the current description of knowledge production. The principles inherent in Post-normal science provide a starting point for building capacity for an independent institution that examines the ethical, legal and social concerns raised by transformative technologies. In relation to the second thesis, the results indicate that MF LOC devices have great potential to transform institutional practices and affect individual lives. The oncologists studied constructed their understanding of MF LOC technology within a scientific and biomedical repertoire; consequently, future research should assess the perceptions and concerns of other groups outside that repertoire.
64

Dynamic Data Extraction and Data Visualization with Application to the Kentucky Mesonet

Paidipally, Anoop Rao 01 May 2012 (has links)
There is a need to integrate large-scale databases, high-performance computing engines, and geographical information system technologies into a user-friendly web interface as a platform for data visualization and customized statistical analysis. We present concepts and design ideas for dynamic data storage and extraction using open-source computing and mapping technologies, and we apply our methods to the Kentucky Mesonet automated weather mapping workflow. The main components of the workflow include a web-based interface and a robust database and computing infrastructure designed for both general users and power users such as modelers and researchers.
66

Magnetic Head Flyability on Patterned Media

Horton, Brian David 13 July 2004 (has links)
The goal of this thesis is to experimentally characterize the flyability of current-generation read/write heads over media patterned to densities above the superparamagnetic limit. The superparamagnetic limit is the physical limit to magnetic storage density: in magnetic storage, superparamagnetism is the uncontrollable switching of stored bits during the lifespan of a hard disk. Theoretical analysis has predicted that densities of ~50 Gbit/in² are not possible using traditional continuous media. One strategy for achieving storage densities above the superparamagnetic limit is patterned media, in which the physical separation of magnetic domains increases their stability. One of the major challenges in developing patterned media is achieving acceptable flyability of the read/write head. To that end, a test stand is built to measure head liftoff speed, head-to-disk intermittent contact, and head fly height. Tangential friction, an indicator of head liftoff, is measured by a Wheatstone bridge strain circuit attached to a cantilever beam. Intermittent contact is quantified by the amount of noise emanating from the interface, measured by a high-frequency acoustic emission sensor. Head fly height is measured indirectly with a capacitance circuit built around the head-to-disk interface. Experimental samples of current-generation read/write heads and media are obtained from industry. Current-generation media is patterned using focused ion beam milling to a density of 10 Gbit/in²; other, extremely dense samples, above 700 Gbit/in², are created via thin-film self-assembly on silicon substrates. Conclusions on slider head flyability over patterned media are based on comparison with flyability over non-patterned media. It is demonstrated that loss of hydrodynamic lubrication is small for small pattern regions with a high conserved surface area ratio. The conserved surface area ratio is defined as the total surface area minus the etched surface area, all divided by the total surface area of the storage media. For wafer-scale patterned media with a low conserved surface area ratio, head liftoff cannot be achieved at the designed normal load; however, a 50% reduction of load allows slider head liftoff.
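The conserved surface area ratio defined in the abstract is a simple quotient; the example values below are invented.

```python
# Conserved surface area ratio, per the definition in the abstract:
# (total area - etched area) / total area.

def conserved_ratio(total_area, etched_area):
    """Fraction of the media surface left un-etched after patterning."""
    return (total_area - etched_area) / total_area

# e.g. etching away a quarter of the surface leaves a ratio of 0.75
```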
67

Policy architecture for distributed storage systems

Belaramani, Nalini Moti 15 October 2009 (has links)
Distributed data storage is a building block for many distributed systems, such as mobile file systems, web service replication systems, and enterprise file systems. New distributed data storage systems are frequently built as new environments, requirements, or workloads emerge. The goal of this dissertation is to develop the science of distributed storage systems by making it easier to build new ones. To achieve this goal, it proposes a new policy architecture, PADS, based on two key ideas: first, by providing a set of common mechanisms in an underlying layer, new systems can be implemented by defining policies that orchestrate these mechanisms; second, policy can be separated into routing policy and blocking policy, each addressing a different part of the system design. Routing policy specifies how data flow among nodes in order to meet performance, availability, and resource usage goals, whereas blocking policy specifies when it is safe to access data in order to meet consistency and durability goals. This dissertation presents a PADS prototype that defines a set of distributed storage mechanisms flexible and general enough to support a large range of systems, a small policy API that is easy to use and captures the right abstractions for distributed storage, and a declarative language for specifying policy that enables quick, concise implementations of complex systems. We demonstrate that PADS significantly reduces development effort by constructing a dozen significant distributed storage systems, spanning a large portion of the design space, over the prototype. We find that each system required only a couple of weeks of implementation effort and a few dozen lines of policy code.
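The routing/blocking split can be made concrete with a toy sketch. To be clear, this is not PADS's API or its declarative language: the function names, the node and update representations, and the specific rules are all invented to illustrate the separation of concerns the abstract describes.

```python
# Toy illustration (invented, not the PADS API) of separating routing
# policy (where data flows) from blocking policy (when access is safe).

def routing_policy(update, nodes):
    """Routing: decide which nodes an update propagates to.
    Here, a naive rule: flood to every node except the origin."""
    return [n for n in nodes if n != update["origin"]]

def blocking_policy(replica, update):
    """Blocking: decide whether a read of `replica` may proceed.
    Here, a naive rule: block until the replica has applied at least
    the version carried by the update."""
    return replica["version"] >= update["version"]
```

The point of the split, per the abstract, is that performance and availability concerns live entirely in the first function while consistency and durability concerns live entirely in the second, so each can be changed without touching the other.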
68

Coding and Signal Processing Techniques for High Efficiency Data Storage and Transmission Systems

Pan, Lu January 2013 (has links)
Generally speaking, a communication channel refers to a medium through which an information-bearing signal is corrupted by noise and distortion. A communication channel may result from data storage over time or data transmission through space. A primary task for communication engineers is to mathematically characterize the channel to facilitate the design of appropriate detection and coding systems. In this dissertation, two channel modeling challenges for ultra-high-density magnetic storage are investigated: two-dimensional magnetic recording (TDMR) and bit-patterned magnetic recording (BPMR). In the case of TDMR, we characterize the error mechanisms during the write/read process of data on a TDMR medium by a finite-state machine, and then design a state-based detector that provides soft decisions for use by an outer decoder. In the case of BPMR, we employ an insertion/deletion (I/D) model. We propose an LDPC-CRC product coding scheme that enables error detection without Marker codes specifically designed for an I/D channel. We also propose a generalized Gilbert-Elliott (GE) channel to approximate the I/D channel in the sense of an equivalent I/D event rate. A lower bound on the channel capacity of the BPMR channel is derived, which supports our claim that commonly used error-correction codes are effective on the I/D channel under the assumption that I/D events are limited to a finite length. Another channel model we investigate is the perpendicular magnetic recording model, where our focus is advanced signal processing for pattern-dependent noise-predictive channel detectors. Specifically, we propose an adaptive scheme for a hardware design that reduces the complexity of the detector and the truncation/saturation error caused by a fixed-point representation of values in the detector. Lastly, we design a sequence detector for compressively sampled Bluetooth signals, allowing data recovery via sub-Nyquist sampling. This detector skips the conventional step of reconstructing the original signal from compressive samples prior to detection. We also propose an adaptive design of the sampling matrix, which nearly achieves Nyquist sampling performance at a relatively high compression ratio. Additionally, this adaptive scheme can automatically choose an appropriate compression ratio as a function of E(b)/N₀ without explicit knowledge of it.
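The classical two-state Gilbert-Elliott channel that the abstract generalizes is easy to simulate. The transition and error probabilities below are arbitrary illustrative values, not parameters from the dissertation, and the classical model flips bits rather than inserting or deleting them.

```python
# Classical two-state Gilbert-Elliott burst-error channel: a Markov chain
# alternates between a "good" and a "bad" state, each with its own bit
# error probability. Parameter values here are illustrative assumptions.
import random

def gilbert_elliott(bits, p_gb=0.1, p_bg=0.3, e_good=0.0, e_bad=0.5, seed=1):
    """Pass `bits` (0/1 ints) through a GE channel.
    p_gb: P(good -> bad), p_bg: P(bad -> good),
    e_good/e_bad: bit error probability in each state."""
    rng = random.Random(seed)
    state_bad = False
    out = []
    for b in bits:
        # Markov transition between the good and bad states
        if state_bad:
            state_bad = rng.random() >= p_bg   # stay bad unless we recover
        else:
            state_bad = rng.random() < p_gb    # stay good unless we degrade
        err = rng.random() < (e_bad if state_bad else e_good)
        out.append(b ^ int(err))
    return out
```

The bad state produces clustered errors, which is why the GE family is a natural starting point for approximating bursty insertion/deletion behavior by an equivalent event rate, as the abstract proposes.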
69

Continuation of the Arizona Water Information System (AWIS)

Foster, Kennith E., DeCook, Kenneth J. January 1975 (has links)
Research Project Technical Completion Report / Office of Water Research and Technology Project A-031-ARIZ / Annual Allotment Agreement No. 14-31-0001-5003 / FCST Research Category VII-C; OWRT Problem Area: 10 / Project Duration June 1971 to June 1975 / No publication date on item; publication date from catalog. / The Arizona Water Information System (AWIS) was developed for storage and retrieval of water resources data and for dissemination of water resources information pertaining to the State of Arizona. Collectively, the AWIS system contains a number of distinct elements. The Activity File is a listing of water resource activities and projects dating from 1961, which can be accessed by keyword or by agency to retrieve abstracts and information on approximately 1,000 projects; the file was recently updated, and additional projects were covered in a regional program pertaining to the Lower Colorado River Basin portions of Arizona, California, and Nevada. A bimonthly Arizona Water Resources News Bulletin and a companion Project Information Bulletin were initiated under this project and will be continued as a cooperative effort of the Arizona Water Commission and the University of Arizona Water Resources Research Center and Office of Arid Lands Studies. A cassette-tape pilot series on Arizona water trends was also produced and evaluated for use potential, which appears favorable. A western-states conference on water information dissemination, sponsored by this project and OWRT, was held in Phoenix in 1973 to discuss the above kinds of activities in the several states and the possibilities for cooperative regional activities. The capability for interactive hydrologic data processing, utilizing the DEC-10 computer system at the University of Arizona, was developed in 1974 with the support of the Arizona Water Commission (AWC). Ground-water and quality-of-water data furnished by AWC have been stored progressively in the system and are retrievable by remote terminal through telephone hookup, by quarter-township grid location or by drainage basin. Routine inquiries can be answered rapidly, or more complex retrievals can be made as desired.
70

Korporatyvinės įmonės duomenų saugyklos modelio sudarymas ir tyrimas / Corporative enterprise data storage model development and analysis

Buškauskaitė, Laima 16 August 2007 (has links)
Modern businesses handle enormous amounts of data, yet within an enterprise that data remains mere ballast unless it can be analyzed and properly interpreted. Only data analysis with dedicated software tools can sift the useful grains from raw information and refine them into valuable knowledge that becomes the basis for sound business decisions. Using OLAP (On-line Analytical Processing) tools, a data warehouse is created that enables fast and convenient analysis of data, including data obtained from the different business management systems used in geographically distant branches of an enterprise. What is OLAP? The term describes software products that allow comprehensive analysis of business information in real time: interaction with such systems is fully interactive, and answers even to computation-heavy queries arrive within seconds. OLAP systems are one of the many products in the Business Intelligence family, which ranges from simple MS SQL OLAP cubes to systems such as Business Objects, Cognos, Corporate Planner, and Microstrategy. Because OLAP systems are becoming more affordable, companies face the issue of how to choose the best product and then design and implement OLAP systems according to their business requirements. This research compares different OLAP systems at the functional and data-structure levels in order to design and implement an OLAP system. The purpose of this work is to create and analyze OLAP data warehouse models for large corporations.
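The cube-style aggregation the abstract describes can be sketched with a toy roll-up over fact rows. The sales figures, dimension names, and table layout below are invented for the example; a real OLAP system precomputes and indexes these aggregates rather than scanning rows.

```python
# Toy OLAP-style roll-up over (year, city, amount) fact rows, using only
# the standard library. All data values here are invented for illustration.
from collections import defaultdict

facts = [
    ("2006", "Vilnius", 100),
    ("2006", "Kaunas",   80),
    ("2007", "Vilnius", 120),
    ("2007", "Kaunas",   90),
]

def rollup(rows, dim):
    """Aggregate the measure (column 2) along one dimension:
    dim=0 groups by year, dim=1 groups by city."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dim]] += row[2]
    return dict(totals)
```

Rolling up by year or by city corresponds to collapsing one axis of the cube, the basic operation behind the "fast and convenient analysis" the abstract attributes to OLAP warehouses.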
