21

A 0.18 um CMOS half-rate clock and data recovery circuit with reference-less dual loop

Huang, Wenjie. January 1900
Thesis (M.App.Sc.) - Carleton University, 2006. / Includes bibliographical references (p. 70-71). Also available in electronic format on the Internet.
22

Analyzing the performance of new TCP extensions over satellite links

Hayes, Christopher. January 1997
Thesis (M.S.)--Ohio University, August, 1997. / Title from PDF t.p.
23

Design and analysis of high-performance and recoverable data storages

Xiao, Weijun, January 2009
Thesis (Ph.D.) -- University of Rhode Island, 2009. / Typescript. Includes bibliographical references (leaves 128-137).
24

ESetStore: an erasure-coding based distributed storage system with fast data recovery

Liu, Chengjian 31 August 2018
The past decade has witnessed rapid growth of data in large-scale distributed storage systems. Triplication, the reliability mechanism adopted by many such systems, carries a 3x storage overhead and thus introduces heavy storage cost as the amount of data keeps growing. Consequently, erasure codes have been introduced in many storage systems because they provide higher storage efficiency and fault tolerance than data replication. However, erasure coding suffers from performance-degrading factors in both I/O and computation, which can severely limit large-scale erasure-coded storage systems. In this thesis, we investigate how to eliminate key performance bottlenecks in the I/O and computation operations needed to apply erasure coding to large-scale storage systems, and we propose a prototype named ESetStore to improve the recovery performance of erasure-coded storage systems. Our studies are as follows.

First, we study the encoding and decoding performance of erasure coding, which can be a key bottleneck given state-of-the-art disk I/O throughput and network bandwidth. We propose a graphics processing unit (GPU)-based implementation of erasure coding named G-CRS, which employs the Cauchy Reed-Solomon (CRS) code, to improve encoding and decoding performance. To maximize the coding performance of G-CRS by fully utilizing the GPU's computational power, we designed and implemented a set of optimization strategies. Our evaluation results demonstrate that G-CRS is 10 times faster than most other coding libraries.

Second, we investigate the performance degradation introduced by intensive I/O operations during recovery in large-scale erasure-coded storage systems. To improve recovery performance, we propose a data placement algorithm named ESet. We define a configurable parameter named the overlapping factor, which lets system administrators easily achieve the desired degree of recovery I/O parallelism. Our simulation results show that ESet can significantly improve data recovery performance without violating reliability requirements, by distributing data and code blocks across different failure domains.

Third, we examine the performance of applying coding techniques to in-memory storage. We design and propose R-Memcached, a reliable in-memory cache for key-value stores; this work serves as a prelude to applying erasure coding to in-memory metadata storage. R-Memcached exploits coding techniques to achieve reliability and can tolerate up to two node failures. Our experimental results show that R-Memcached maintains good latency and throughput even during node failures.

Finally, we design and implement a prototype named ESetStore for erasure-coded storage systems. ESetStore integrates our data placement algorithm ESet to bring fast data recovery to storage systems.
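A minimal sketch of the overlapping-factor idea may help here: nodes are grouped into recovery sets, and the overlapping factor controls how many sets each node joins, which in turn bounds how many peers share the rebuild load when a node fails. Everything below (the names build_esets and recovery_peers, and the greedy grouping) is an illustrative assumption, not ESet's actual algorithm.

```python
import random
from collections import defaultdict

def build_esets(nodes, stripe_width, overlapping_factor, seed=0):
    """Toy grouped placement: each node joins ~overlapping_factor recovery
    sets of size stripe_width; stripes are stored within one set."""
    assert len(nodes) >= stripe_width
    rng = random.Random(seed)
    esets, membership = [], defaultdict(int)
    while any(membership[n] < overlapping_factor for n in nodes):
        candidates = [n for n in nodes if membership[n] < overlapping_factor]
        if len(candidates) < stripe_width:
            # Pad with already-full nodes; a toy may slightly exceed the target.
            others = [n for n in nodes if n not in candidates]
            candidates += rng.sample(others, stripe_width - len(candidates))
        group = rng.sample(candidates, stripe_width)
        for n in group:
            membership[n] += 1
        esets.append(group)
    return esets

def recovery_peers(esets, failed):
    """All nodes holding blocks needed to rebuild `failed`: a larger
    overlapping factor yields more peers, hence more parallel reads."""
    peers = set()
    for group in esets:
        if failed in group:
            peers.update(n for n in group if n != failed)
    return peers

if __name__ == "__main__":
    nodes = [f"node{i}" for i in range(12)]
    esets = build_esets(nodes, stripe_width=6, overlapping_factor=3)
    print(sorted(recovery_peers(esets, "node0")))
```

Raising the overlapping factor spreads recovery reads over more peers (faster rebuild) at the cost of wider correlated-failure exposure, which is the trade-off the abstract says ESet balances against failure domains.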
25

Biometric system security and privacy: data reconstruction and template protection

Mai, Guangcan 31 August 2018
Biometric systems are being increasingly used, from daily entertainment to critical applications such as security access and identity management. It is known that biometric systems should meet the stringent requirement of a low error rate. In addition, for critical applications, the security and privacy issues of biometric systems must be addressed; otherwise, severe consequences such as unauthorized access (a security breach) or exposure of identity-related information (a privacy breach) can result. It is therefore imperative to study the vulnerability of these systems to potential attacks and to identify the corresponding risks. Furthermore, countermeasures should be devised and deployed on the systems. In this thesis, we study the security and privacy issues in biometric systems. We first make an attempt to reconstruct raw biometric data from biometric templates and demonstrate the security and privacy issues caused by such data reconstruction. Then, we make two attempts to protect biometric templates from being reconstructed, improving on state-of-the-art biometric template protection techniques.
26

Toward A Secure Account Recovery: Machine Learning Based User Modeling for protection of Account Recovery in a Managed Environment

Alubala, Amos Imbati January 2023
As a result of our heavy reliance on the internet and online transactions, authentication has become a routine part of our daily lives. So, what happens when we lose or cannot use our digital credentials? Can we securely recover our accounts? How do we ensure that it is the genuine user attempting a recovery, while at the same time not introducing too much friction for the user? In this dissertation, we present research results demonstrating that account recovery is a growing need for users as they increase their online activity and use different authentication factors. We highlight that the account recovery process is the weakest link in the authentication domain, since it is vulnerable to account takeover attacks owing to the less secure fallback authentication mechanisms typically used. To close this gap, we study user behavior-based machine learning (ML) modeling as a critical part of the account recovery process. The primary threats to ML deployed in the context of authentication are poisoning and evasion attacks. Towards that end, we research randomized modeling techniques and present the most effective randomization strategy in the context of user behavioral biometrics modeling for account recovery authentication. We found that a randomization strategy that relied exclusively on the user's own data, such as stochastically varying the features used to generate an ensemble of models, outperformed a design that incorporated external data, such as adding Gaussian noise to outputs. This dissertation asserts that the security posture of the account recovery process can be vastly improved by incorporating user behavior modeling, which adds resilience against account takeover attacks and nudges users towards voluntary adoption of more robust authentication factors.
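A minimal sketch of the randomization strategy the abstract describes: an ensemble whose only source of randomness is the user's own data, each member trained on a stochastically chosen feature subset. The class name, the decision-tree base learner, and the majority vote are illustrative assumptions; the dissertation's actual models and features are not specified here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class FeatureSubspaceEnsemble:
    """Random-subspace ensemble for binary (0/1) decisions: every member
    sees a different random subset of behavioral features, so an attacker
    crafting poisoning or evasion inputs cannot target one fixed model."""

    def __init__(self, n_members=25, subspace_frac=0.6, seed=0):
        self.n_members = n_members
        self.subspace_frac = subspace_frac
        self.rng = np.random.default_rng(seed)
        self.members = []  # (feature_indices, fitted_model) pairs

    def fit(self, X, y):
        n_features = X.shape[1]
        k = max(1, int(self.subspace_frac * n_features))
        for _ in range(self.n_members):
            idx = self.rng.choice(n_features, size=k, replace=False)
            model = DecisionTreeClassifier(max_depth=5).fit(X[:, idx], y)
            self.members.append((idx, model))
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X[:, idx]) for idx, m in self.members])
        return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote
```

Because the randomness comes from feature selection over the user's own behavioral data rather than from injected external noise, accuracy on the genuine user is largely preserved, consistent with the abstract's finding that data-internal randomization outperformed adding Gaussian noise to outputs.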
27

Researches On Reverse Lookup Problem In Distributed File System

Zhang, Junyao 01 January 2010
Recent years have witnessed an increasing demand for super data clusters. Such clusters have reached the petabyte scale and can consist of thousands, or tens of thousands, of storage nodes at a single site. For this architecture, reliability is becoming a great concern. In order to achieve high reliability, data recovery and node reconstruction are a must. Although extensive research has investigated how to sustain high performance and high reliability in the face of node failures at large scale, the reverse lookup problem, namely finding the list of objects stored on a failed node, remains open. This is especially true for storage systems with high requirements for data integrity and availability, such as scientific research data clusters. Existing solutions are either time consuming or expensive. Replication-based block placement can be used to realize fast reverse lookup, but such schemes are designed for centralized, small-scale storage architectures. In this thesis, we propose a fast and efficient reverse lookup scheme named Group-based Shifted Declustering (G-SD) layout that is able to locate the whole content of a failed node. G-SD extends our previous shifted declustering layout and applies to large-scale file systems. Our mathematical proofs and real-life experiments show that G-SD is a scalable reverse lookup scheme that is up to one order of magnitude faster than existing schemes.
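A minimal sketch of why a deterministic declustered layout admits fast reverse lookup: if placement is a pure function of object and replica IDs, the objects on a failed node can be re-enumerated by recomputation instead of scanning global metadata. The modular shift placement below is an illustrative stand-in, not the actual G-SD scheme.

```python
def place(obj_id, replica, num_nodes, shift=3):
    """Toy deterministic layout: replica r of object o lands on a node
    computed from (o, r) alone, so no central placement table is needed."""
    return (obj_id + replica * shift) % num_nodes

def reverse_lookup(failed_node, num_objects, num_replicas, num_nodes):
    """Enumerate every (object, replica) the failed node held by
    re-running the placement function."""
    return [(o, r)
            for o in range(num_objects)
            for r in range(num_replicas)
            if place(o, r, num_nodes) == failed_node]

if __name__ == "__main__":
    lost = reverse_lookup(failed_node=4, num_objects=20,
                          num_replicas=3, num_nodes=10)
    print(lost)  # the block list to re-replicate after node 4 fails
```

A real scheme must also keep this enumeration cheap at scale rather than iterating over every object; per the abstract, G-SD's group-based extension of shifted declustering is aimed at exactly that.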
28

Data security and reliability in cloud backup systems with deduplication.

January 2012
Cloud storage is an emerging service model that enables individuals and enterprises to outsource the storage of data backups to remote cloud providers at low cost. This thesis presents methods to ensure the data security and reliability of cloud backup systems.

In the first part of this thesis, we present FadeVersion, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. FadeVersion follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this design, FadeVersion applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion: cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions or files that share common data with the deleted ones remain unaffected (a toy sketch of the underlying key-wrapping idea follows this record's table of contents). We implement a proof-of-concept prototype of FadeVersion and conduct empirical evaluation atop Amazon S3. We show that FadeVersion adds only minimal performance overhead over a traditional cloud backup service that does not support assured deletion.

In the second part of this thesis, we present CFTDedup, a distributed proxy system designed to provide storage efficiency via deduplication in cloud storage while ensuring crash fault tolerance among proxies. It synchronizes deduplication metadata among proxies to provide strong consistency, and it batches metadata updates to mitigate synchronization overhead. We implement a preliminary prototype of CFTDedup and evaluate its runtime performance via testbed experiments in deduplication storage for virtual machine images. We also discuss several open issues on how to provide reliable, high-performance deduplication storage; our CFTDedup prototype provides a platform to explore such issues.

Rahumed, Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 47-51). / Abstracts also in Chinese.
Table of contents:
Chapter 1: Introduction --- p.1
  1.1 Cloud Based Backup and Assured Deletion --- p.1
  1.2 Crash Fault Tolerance for Backup Systems with Deduplication --- p.4
  1.3 Outline of Thesis --- p.6
Chapter 2: Background and Related Work --- p.7
  2.1 Deduplication --- p.7
  2.2 Assured Deletion --- p.7
  2.3 Policy Based Assured Deletion --- p.8
  2.4 Convergent Encryption --- p.9
  2.5 Cloud Based Backup Systems --- p.10
  2.6 Fault Tolerant Deduplication Systems --- p.10
Chapter 3: Design of FadeVersion --- p.12
  3.1 Threat Model and Assumptions for FadeVersion --- p.12
  3.2 Motivation --- p.13
  3.3 Main Idea --- p.14
  3.4 Version Control --- p.14
  3.5 Assured Deletion --- p.16
  3.6 Assured Deletion for Multiple Policies --- p.18
  3.7 Key Management --- p.19
Chapter 4: Implementation of FadeVersion --- p.20
  4.1 System Entities --- p.20
  4.2 Metadata Format in FadeVersion --- p.22
Chapter 5: Evaluation of FadeVersion --- p.24
  5.1 Setup --- p.24
  5.2 Backup/Restore Time --- p.26
  5.3 Storage Space --- p.28
  5.4 Monetary Cost --- p.29
  5.5 Conclusions --- p.30
Chapter 6: CFTDedup Design --- p.31
  6.1 Failure Model --- p.31
  6.2 System Overview --- p.32
  6.3 Distributed Deduplication --- p.33
  6.4 Crash Fault Tolerance --- p.35
  6.5 Implementation --- p.36
Chapter 7: Evaluation of CFTDedup --- p.37
  7.1 Setup --- p.37
  7.2 Experiment 1 (Archival) --- p.38
  7.3 Experiment 2 (Restore) --- p.39
  7.4 Experiment 3 (Recovery) --- p.40
  7.5 Summary --- p.41
Chapter 8: Future Work and Conclusions of CFTDedup --- p.43
  8.1 Future Work --- p.43
  8.2 Conclusions --- p.44
Chapter 9: Conclusion --- p.45
Bibliography --- p.47
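To make the assured-deletion idea concrete (as referenced in the abstract above), here is a toy sketch of fine-grained assured deletion via key wrapping: each file is encrypted with its own data key, the data key is sealed under a policy key, and discarding the policy key renders every ciphertext under that policy permanently unreadable. The ToyAssuredStore class and the use of the cryptography package's Fernet recipe are illustrative assumptions; FadeVersion's actual design (policies, version control, cloud layout) is richer than this.

```python
from cryptography.fernet import Fernet

class ToyAssuredStore:
    def __init__(self):
        self.policy_keys = {}  # policy name -> key; deleting a key = assured deletion
        self.blobs = {}        # blob name -> (policy, wrapped data key, ciphertext)

    def put(self, name, policy, data: bytes):
        pk = self.policy_keys.setdefault(policy, Fernet.generate_key())
        data_key = Fernet.generate_key()
        ciphertext = Fernet(data_key).encrypt(data)
        wrapped = Fernet(pk).encrypt(data_key)  # data key sealed under the policy key
        self.blobs[name] = (policy, wrapped, ciphertext)

    def get(self, name) -> bytes:
        policy, wrapped, ciphertext = self.blobs[name]
        data_key = Fernet(self.policy_keys[policy]).decrypt(wrapped)
        return Fernet(data_key).decrypt(ciphertext)

    def assured_delete(self, policy):
        # Discard the policy key: every blob under this policy becomes
        # permanently undecryptable, even if its ciphertext survives on the cloud.
        del self.policy_keys[policy]

store = ToyAssuredStore()
store.put("backup-v1/file.txt", policy="2012-backups", data=b"secret")
print(store.get("backup-v1/file.txt"))  # b'secret'
store.assured_delete("2012-backups")    # remaining ciphertext is now unreadable
```

In FadeVersion's version-controlled setting, blocks shared across versions would be keyed so that deleting one version's policy key leaves the shared blocks readable by surviving versions, which is the fine-grained behavior the abstract describes.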
29

IMPRESS: improving multicore performance and reliability via efficient software support for monitoring

Nagarajan, Vijayanand. January 2009
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Title from first page of PDF file (viewed March 12, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 151-158). Also issued in print.
30

Design considerations for high speed clock and data recovery circuits

Beshara, Michel, January 1900
Thesis (M.App.Sc.) - Carleton University, 2002. / Includes bibliographical references (p. 93-95). Also available in electronic format on the Internet.
