  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

Tseng, Hsin-Wu, Fan, Jiahua, Kupinski, Matthew A. 28 July 2016 (has links)
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. Our goal is to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean values and standard deviations of the areas under the ROC/EROC curves were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine task-based QA/QC CT system assessment. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
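As a hedged illustration of the channelized-observer machinery this abstract relies on (not code from the thesis), the sketch below builds a channelized Hotelling observer template from sample images and estimates the area under the ROC curve nonparametrically. The random channel matrix, image size, and signal strength are stand-in assumptions; practical CHO studies typically use Gabor or Laguerre-Gauss channels.

```python
import numpy as np

def cho_auc(signal_imgs, background_imgs, U):
    """Channelized Hotelling observer: train a linear template in channel
    space and estimate the area under the ROC curve (AUC) nonparametrically."""
    v_s = signal_imgs @ U                     # signal-present channel outputs (N x C)
    v_b = background_imgs @ U                 # signal-absent channel outputs (N x C)
    dv = v_s.mean(axis=0) - v_b.mean(axis=0)  # mean channel-output difference
    S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_b, rowvar=False))
    w = np.linalg.solve(S, dv)                # Hotelling template in channel space
    t_s, t_b = v_s @ w, v_b @ w               # scalar test statistics per image
    # Nonparametric AUC: fraction of (signal, background) pairs ranked correctly.
    return np.mean(t_s[:, None] > t_b[None, :])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P, C, N = 64 * 64, 10, 200                # pixels, channels, images per class
    U = rng.standard_normal((P, C))           # stand-in channels (Gabor/Laguerre-Gauss in practice)
    bg = rng.standard_normal((N, P))          # signal-absent noise images
    sig = rng.standard_normal((N, P)) + 0.5   # flat signal added to every pixel
    print("AUC ~", cho_auc(sig, bg, U))
```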
22

Live deduplication storage of virtual machine images in an open-source cloud.

January 2012 (has links)
Deduplication is a technique that eliminates the storage of redundant data blocks. In particular, it has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, there remain challenging deployment issues in enabling deduplication in a cloud platform, where VM images are regularly inserted and retrieved. We propose a kernel-space deduplication file system called LiveDFS, which can serve as a VM image storage backend in an open-source cloud platform built on low-cost commodity hardware configurations. LiveDFS is built on several novel design features. Specifically, the main feature of LiveDFS is to exploit spatial locality by placing deduplication metadata on disk with respect to the underlying file system layout. LiveDFS is POSIX-compliant and is implemented as a Linux kernel-space file system. We conduct testbed experiments of the read/write performance of LiveDFS using a dataset of 42 VM images of different Linux distributions. Our work justifies the feasibility of deploying LiveDFS in a cloud platform under commodity settings.

Ng, Chun Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 39-42). / Abstracts also in Chinese; detailed summary in vernacular field only.

Contents:
1 Introduction (p.1)
2 LiveDFS Design (p.5)
2.1 File System Layout (p.5)
2.2 Deduplication Primitives (p.6)
2.3 Deduplication Process (p.8)
2.3.1 Fingerprint Store (p.9)
2.3.2 Fingerprint Filter (p.11)
2.4 Prefetching of Fingerprint Stores (p.14)
2.5 Journaling (p.15)
2.6 Ext4 File System (p.17)
3 Implementation Details (p.18)
3.1 Choice of Hash Function (p.18)
3.2 OpenStack Deployment (p.19)
4 Experiments (p.21)
4.1 I/O Throughput (p.21)
4.2 OpenStack Deployment (p.26)
5 Related Work (p.34)
6 Conclusions and Future Work (p.37)
Bibliography (p.39)
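As a rough sketch of the block-level deduplication primitive this record describes (not the LiveDFS on-disk design), the toy store below fingerprints fixed-size blocks and keeps only one copy per unique fingerprint. The 4 KB block size, the SHA-1 choice, and the in-memory fingerprint table are illustrative assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size, for illustration only

class DedupStore:
    """Toy content-addressed block store: keeps one copy per unique fingerprint."""
    def __init__(self):
        self.blocks = {}       # fingerprint -> block data
        self.refcount = {}     # fingerprint -> number of references

    def write(self, data: bytes):
        """Split data into blocks and return the list of fingerprints (the "recipe")."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha1(block).hexdigest()
            if fp not in self.blocks:          # new content: store it once
                self.blocks[fp] = block
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            recipe.append(fp)
        return recipe

    def read(self, recipe):
        """Reassemble a file from its block fingerprints."""
        return b"".join(self.blocks[fp] for fp in recipe)

if __name__ == "__main__":
    store = DedupStore()
    image_a = b"A" * 8192 + b"B" * 4096
    image_b = b"A" * 8192 + b"C" * 4096       # shares two blocks with image_a
    ra, rb = store.write(image_a), store.write(image_b)
    print("unique blocks stored:", len(store.blocks))   # 3, not 6
    assert store.read(ra) == image_a and store.read(rb) == image_b
```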
23

Kernel-space inline deduplication file systems for virtual machine image storage.

January 2013 (has links)
We explore the use of deduplication for eliminating the storage of redundant data in RAID from a file-system design perspective. We propose ScaleDFS, a deduplication file system that seeks to achieve scalable read/write throughput in RAID. ScaleDFS is built on three novel design features. First, we improve the write throughput by exploiting multiple CPU cores to parallelize the processing of the cryptographic fingerprints that are used to identify redundant data. Second, we improve the read throughput by specifically caching in memory the recently read blocks that have been deduplicated. Third, we reduce the memory usage by enhancing the data structures that are used for fingerprint lookups. ScaleDFS is implemented as a POSIX-compliant, kernel-space driver module that can be deployed in commodity hardware configurations. We conduct microbenchmark experiments using synthetic workloads, and macrobenchmark experiments using a dataset of 42 VM images of different Linux distributions. We show that ScaleDFS achieves higher read/write throughput than existing open-source deduplication file systems in RAID.

Ma, Mingcao. / "October 2012." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 39-42). / Abstracts also in Chinese; detailed summary in vernacular field only.

Contents:
1 Introduction (p.2)
2 Literature Review (p.5)
2.1 Backup systems (p.5)
2.2 Use of special hardware (p.6)
2.3 Scalable storage (p.6)
2.4 Inline DFSs (p.6)
2.5 VM image storage with deduplication (p.7)
3 ScaleDFS Background (p.8)
3.1 Spatial Locality of Fingerprint Placement (p.9)
3.2 Prefetching of Fingerprint Stores (p.12)
3.3 Journaling (p.13)
4 ScaleDFS Design (p.15)
4.1 Parallelizing Deduplication (p.15)
4.2 Caching Read Blocks (p.17)
4.3 Reducing Memory Usage (p.17)
5 Implementation (p.20)
5.1 Choice of Hash Function (p.20)
5.2 OpenStack Deployment (p.21)
6 Experiments (p.23)
6.1 Microbenchmarks (p.23)
6.2 OpenStack Deployment (p.28)
6.3 VM Image Operations in a RAID Setup (p.33)
7 Conclusions and Future Work (p.38)
Bibliography (p.39)
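The first ScaleDFS design feature, parallelizing fingerprint computation across CPU cores, can be sketched in user space as below. The multiprocessing pool, SHA-256 choice, and 4 KB block size are assumptions for illustration; the actual file system does this inside the kernel with shared memory rather than separate processes.

```python
import hashlib
import os
from multiprocessing import Pool

BLOCK_SIZE = 4096  # assumed fixed block size, for illustration only

def fingerprint(block: bytes) -> str:
    """Cryptographic fingerprint used to identify redundant blocks."""
    return hashlib.sha256(block).hexdigest()

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def fingerprint_blocks_parallel(data: bytes, workers: int = 4):
    """Fan the fingerprint computation out over several CPU cores.

    This user-space sketch only illustrates the structure of the idea; a
    kernel implementation avoids the inter-process data copies seen here.
    """
    with Pool(processes=workers) as pool:
        return pool.map(fingerprint, split_blocks(data), chunksize=128)

if __name__ == "__main__":
    # A synthetic "VM image": many repeated zero blocks plus some unique data.
    image = (b"\x00" * BLOCK_SIZE) * 1000 + os.urandom(BLOCK_SIZE * 100)
    fps = fingerprint_blocks_parallel(image)
    print(f"{len(fps)} blocks, {len(set(fps))} unique -> "
          f"{100 * (1 - len(set(fps)) / len(fps)):.1f}% redundant")
```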
24

Design, implementation, and evaluation of node placement and data reduction algorithms for large scale wireless networks

Mehta, Hardik, January 2003 (has links) (PDF)
Thesis (Ph. D.)--School of Electrical and Computer Engineering, Georgia Institute of Technology, 2004. Directed by Douglas M. Blough. / Includes bibliographical references (leaves 62-63).
25

Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling

Jeong, Jae Hoon 10 February 2014 (has links)
This dissertation presents several cost-effective production test solutions based on fault grading, together with mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest System-on-Chip (SoC) designs are getting denser, faster, and more complex, manufacturing is dominated by subtle defects introduced by small-scale technology, so SoCs require more mature testing strategies. Performing more types of testing yields better-quality SoCs, but test resources are too limited to accommodate all of those tests. To create the most efficient production test flow, any redundant or ineffective tests need to be removed or minimized.

Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of a feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR in an actual hardware implementation, the test set produced by nonlinear expansion is used as the one-column test set and yields a large reduction ratio for the test data volume. The experimental results show that the combined method reduces the total test data volume and increases the fault coverage, although the larger number of test patterns increases total test time.

Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can bring significant value for cost saving and yield analysis, but it is very hard to perform on a complex, large-scale SoC. A commercial tool called Z01X is used as the fault-grading platform to coordinate the whole process and carry out each detailed execution. Simulation-based functional fault grading identifies the quality of the given functional tests against static faults and transition delay faults. Given both structural and functional tests, functional fault grading indicates how to achieve the same test coverage with minimal test time. Relative to the time and resources that fault grading consumes, its contribution to test time savings may not look very promising, but the fault-grading data can be reused for yield analysis and test flow optimization. For final production testing, confident decisions on functional test selection can be made based on the fault-grading results.

Chapter 5 addresses the challenges of Package-on-Package (POP) testing. Because POP devices have pins on both the top and the bottom of the package, the increased number of test pins requires more test channels to detect packaging defects. Boundary scan chain testing is used to detect these continuity defects by relying on leakage current from the power supply, and the proposed test scheme does not require direct test channels on the top pins. Based on a counting algorithm, a minimal number of test cycles is generated, and the test achieves full coverage for any combination of pin-to-pin short defects on the top pins of the POP package. The experimental results show roughly a tenfold increase in leakage current from a short defect, and the scheme can be extended to multi-site testing with fewer test channels for high-volume production.

Fault grading is applied across different structural test categories in Chapter 6. Stuck-at faults can be considered TDFs with infinite delay; hence, the TDF Automatic Test Pattern Generation (ATPG) tests can detect both TDFs and stuck-at faults. By removing the stuck-at faults already detected by the given TDF ATPG tests, the stuck-at target fault set is reduced, which results in fewer stuck-at ATPG patterns. The structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests.

To expedite mixed-signal design verification of a complex SoC, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with actual mixed-signal designs are addressed in Chapter 8. Analog modeling can enhance verification quality for a mixed-signal design with shorter turnaround time, and it enables compatible integration of mixed-signal design cores into the SoC. The modeling process may also reveal potential design errors or incorrect testbench setups, minimizing unnecessary debugging time. Two mixed-signal design cases were verified using the analog models. A fully hierarchical digital-to-analog converter (DAC) model is implemented; silicon mismatches caused by process variation are modeled and inserted into it, and the DAC calibration algorithm is successfully verified by model-based simulation at the full DAC level. When the mismatch is increased beyond the calibration capability of the DAC, the simulation results show increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the yield of the devices under process variation. Phase-locked loop (PLL) designs were also verified using analog models; both open-loop and closed-loop PLL model cases are presented. Quick bring-up of an open-loop PLL model provides low simulation overhead for the widely used PLLs in the SoC and enables early verification of the upper-level design that uses the PLL-generated clocks. An accurate closed-loop PLL model is implemented for a DCO-based PLL design, and mixed simulation with analog models and schematic designs enables flexible analog verification. Only the analog block under focus is kept as a schematic design, and the rest of the analog design is replaced by the analog model. This scaled-down SPICE simulation runs roughly 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the focused block is compared against the scaled-down SPICE results, and the quality of the model is iteratively improved. Hence, the analog model enables both compatible integration and flexible analog design verification.

This dissertation contributes to reducing test time and enhancing test quality, and it helps set up efficient production test flows. Depending on the size and performance of the circuit under test (CUT), proper testing schemes can maximize the efficiency of production testing. The topics covered in this dissertation can be used to optimize the test flow and select the final production tests that achieve maximum test capability. In addition, the strategies and benefits of the analog behavioral modeling techniques that I implemented are presented, and actual verification cases show the effectiveness of analog modeling for better-quality SoC products.
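The stuck-at/TDF grading step in Chapter 6 boils down to set bookkeeping: any stuck-at fault already detected by the graded TDF patterns can be dropped from the stuck-at ATPG target list. A minimal sketch of that bookkeeping follows; the fault-ID format and report structure are hypothetical, not the ATPG tool's actual output.

```python
def reduce_stuck_at_targets(stuck_at_faults, tdf_detection_report):
    """Drop stuck-at faults already covered by the given TDF test set.

    stuck_at_faults: set of fault IDs the stuck-at ATPG would otherwise target.
    tdf_detection_report: mapping of TDF pattern name -> set of stuck-at fault
        IDs that grading showed the pattern also detects (a stuck-at fault
        behaves like a TDF of infinite delay, so TDF patterns detect it too).
    """
    covered = set().union(*tdf_detection_report.values()) if tdf_detection_report else set()
    remaining = stuck_at_faults - covered
    coverage = 1 - len(remaining) / len(stuck_at_faults)
    return remaining, coverage

if __name__ == "__main__":
    stuck_at = {f"U{i}/sa0" for i in range(100)} | {f"U{i}/sa1" for i in range(100)}
    tdf_report = {
        "tdf_pat_001": {f"U{i}/sa0" for i in range(0, 60)},
        "tdf_pat_002": {f"U{i}/sa1" for i in range(0, 40)},
    }
    remaining, cov = reduce_stuck_at_targets(stuck_at, tdf_report)
    print(f"{len(remaining)} stuck-at faults still need dedicated patterns "
          f"({cov:.0%} already covered by TDF tests)")
```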
26

Moving Data Analysis into the Acquisition Hardware

Buckley, Dave 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / Data acquisition for flight test is typically handled by dedicated hardware which performs specific functions and targets specific interfaces and buses. Through the use of an FPGA state-machine-based design approach, performance and robustness can be guaranteed. Up to now, sufficient flexibility has been provided by allowing the user to configure the hardware depending on the particular application. However, by allowing custom algorithms to be run on the data acquisition hardware, far greater control and flexibility can be offered to the flight test engineer. As the volume of acquired data increases, this extra control can be used to vastly reduce the amount of data to be recorded or telemetered. Also, real-time analysis of test points can now be done where post-processing would previously have been required. This paper examines examples of data acquisition, recording, and processing, and investigates where data reduction and time savings can be achieved by enabling the flight test engineer to run his own algorithms on the hardware.
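As one hedged example of the kind of user-defined algorithm the paper argues for running on the acquisition hardware, the sketch below applies a simple deadband filter that forwards a parameter sample only when it moves beyond a threshold, plus a periodic heartbeat so the stream never goes silent. The threshold, heartbeat, and record format are illustrative assumptions, not an interface from the paper.

```python
def deadband_reduce(samples, threshold, heartbeat=100):
    """Emit a sample only when it moves more than `threshold` from the last
    emitted value, or every `heartbeat` samples regardless.

    samples: iterable of (timestamp, value) pairs from an acquisition channel.
    Returns the reduced list of (timestamp, value) pairs.
    """
    reduced, last_value, since_last = [], None, 0
    for ts, value in samples:
        since_last += 1
        if last_value is None or abs(value - last_value) > threshold or since_last >= heartbeat:
            reduced.append((ts, value))
            last_value, since_last = value, 0
    return reduced

if __name__ == "__main__":
    import math
    raw = [(t, 20.0 + 0.01 * math.sin(t / 50.0)) for t in range(10_000)]  # slowly varying sensor
    kept = deadband_reduce(raw, threshold=0.005)
    print(f"kept {len(kept)} of {len(raw)} samples "
          f"({100 * (1 - len(kept) / len(raw)):.1f}% reduction)")
```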
27

Generalization of boosting algorithms and applications of Bayesian inference for massive datasets /

Ridgeway, Gregory Kirk, January 1999 (has links)
Thesis (Ph. D.)--University of Washington, 1999. / Vita. Includes bibliographical references (p. 159-169).
28

Data reduction techniques for Very Long Baseline Interferometric spectropolarimetry

Kemball, Athol James January 1993 (has links)
This thesis reports the results of an investigation into techniques for the calibration and imaging of spectral line polarization observations in Very Long Baseline Interferometry (VLBI). A review is given of the instrumental and propagation effects which need to be removed in the course of calibrating such observations, with particular reference to their polarization dependence. The removal of amplitude and phase errors and the determination of the instrumental feed response are described. The polarization imaging of such data is discussed with particular reference to the case of poorly sampled cross-polarization data. The software implementation of the algorithms within the Astronomical Image Processing System (AIPS) is discussed, and the specific case of spectral line polarization reduction for data observed using the MK3 VLBI system is considered in detail. VLBI observations at two separate epochs of the 1612 MHz OH masers towards the source IRC+10420 are reduced as part of this work. Spectral line polarization maps of the source structure are presented, including a discussion of source morphology and variability. The source is significantly circularly polarized at VLBI resolution, but does not display appreciable linear polarization. A proper motion study of the circumstellar envelope is presented, which supports an ellipsoidal kinematic model with anisotropic radial outflow. Kinematic modelling of the measured proper motions suggests a distance to the source of ~3 kpc. The circumstellar magnetic field strength in the masing regions is determined to be 1-3 mG, assuming Zeeman splitting as the polarization mechanism.
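The kinematic distance quoted above (~3 kpc) rests on the standard conversion between a measured angular proper motion and a model transverse velocity; a hedged statement of that relation, in generic symbols rather than the thesis's notation, is:

```latex
% Transverse (expansion) velocity implied by a proper motion mu at distance d.
% Since 1 AU/yr = 4.74 km/s:
%   v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
% so a kinematic model that fixes the outflow speed v_t turns a measured
% angular expansion rate mu into a distance estimate:
\[
  v_t = 4.74\,\mu\,d
  \qquad\Longrightarrow\qquad
  d\,[\mathrm{pc}] = \frac{v_t\,[\mathrm{km\,s^{-1}}]}{4.74\;\mu\,[\mathrm{arcsec\,yr^{-1}}]}
\]
```

Purely as an arithmetic check with made-up numbers (not the thesis's values): an outflow speed of ~50 km/s with a proper motion of ~3.5 mas/yr gives d ≈ 50 / (4.74 × 0.0035) ≈ 3.0 kpc.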
29

Mitigating Inconsistencies by Coupling Data Cleaning, Filtering, and Contextual Data Validation in Wireless Sensor Networks

Bakhtiar, Qutub A 26 March 2009 (has links)
With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating the data adaptation and prediction processes augments the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-convergent adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
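The data-reduction idea of separating adaptation from prediction belongs to the family of dual-prediction schemes, where a node transmits a reading only when it drifts outside the tolerance of a model the sink can reproduce from transmitted samples alone. A minimal sketch under assumed parameter names (not the dissertation's scheme) follows.

```python
class DualPredictionNode:
    """Toy dual-prediction reducer: sensor and sink maintain the same simple
    linear model built only from transmitted samples, so the sensor can
    suppress any reading the sink would predict within `epsilon` anyway."""

    def __init__(self, epsilon: float):
        self.epsilon = epsilon
        self.history = []          # last two transmitted (t, value) pairs

    def predict(self, t: float):
        if not self.history:
            return None
        if len(self.history) == 1:
            return self.history[0][1]
        (t0, v0), (t1, v1) = self.history
        return v1 + (v1 - v0) / (t1 - t0) * (t - t1)   # linear extrapolation

    def process(self, t: float, reading: float) -> bool:
        """Return True if this reading must be transmitted to the sink."""
        pred = self.predict(t)
        if pred is None or abs(reading - pred) > self.epsilon:
            self.history = (self.history + [(t, reading)])[-2:]
            return True            # sink receives the sample and updates its model
        return False               # suppressed: sink substitutes its own prediction

if __name__ == "__main__":
    import random
    random.seed(1)
    node, sent = DualPredictionNode(epsilon=0.5), 0
    for t in range(1000):
        reading = 20.0 + 0.01 * t + random.gauss(0, 0.1)   # slow drift plus noise
        sent += node.process(t, reading)
    print(f"transmitted {sent} of 1000 readings "
          f"({100 * (1 - sent / 1000):.1f}% suppressed)")
```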
30

The Use of Short-Interval GPS Data for Construction Operations Analysis

Hildreth, John C. 05 March 2003 (has links)
The Global Positioning System (GPS) makes use of extremely accurate measurements of time to determine position. The times required for electronic signals to travel at the speed of light from at least four orbiting satellites to a receiver on Earth are measured precisely and used to calculate the distances from the satellites to the receiver. The calculated distances are used to determine the position of the receiver through triangulation. This research takes an approach opposite to that of the original GPS work, focusing on the use of position to determine the times at which events occur. Specifically, this work addresses the question: Can the information pertaining to position and speed contained in a GPS record be used to autonomously identify the times at which critical events occur within a production cycle? The research question was answered by determining the hardware needs for collecting the desired data in a usable format and developing a unique data collection tool to meet those needs. The tool was field-evaluated, and the collected data were used to determine the software needs for automated reduction of the data to the times at which key events occurred. The software tools were developed in the form of Time Identification Modules (TIMs). The TIMs were used to reduce data collected from a load-and-haul earthmoving operation to duration measures for the load, haul, dump, and return activities. The value of the developed system was demonstrated by investigating correlations between performance times in construction operations and by using field data to verify the results obtained from productivity estimating tools. Use of the system was shown to improve knowledge and provide additional insight into operations analysis studies. / Ph. D.
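The core idea of inverting GPS, using position and speed to recover the times of cycle events, can be illustrated with a simple speed-threshold segmentation of a track. The threshold, record format, and state names below are assumptions for illustration, not the dissertation's Time Identification Modules.

```python
def stop_go_transitions(track, speed_threshold=0.5):
    """Identify times at which a vehicle changes between 'stopped' and 'moving'.

    track: iterable of (timestamp_s, speed_m_per_s) pairs taken from a GPS record.
    Returns a list of (timestamp_s, new_state) transition events, from which
    activity durations (e.g. load, haul, dump, return) could be measured.
    """
    events, state = [], None
    for ts, speed in track:
        new_state = "moving" if speed > speed_threshold else "stopped"
        if new_state != state:
            events.append((ts, new_state))
            state = new_state
    return events

if __name__ == "__main__":
    # Synthetic one-cycle record: stopped (loading), moving (haul), stopped (dump), moving (return).
    track = ([(t, 0.1) for t in range(0, 90)] +          # loading
             [(t, 8.0) for t in range(90, 400)] +        # haul
             [(t, 0.2) for t in range(400, 430)] +       # dump
             [(t, 9.0) for t in range(430, 700)])        # return
    for ts, state in stop_go_transitions(track):
        print(f"t={ts:4d}s -> {state}")
```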
