1

Enhanced coding, clock recovery and detection for a magnetic credit card

Smith, Daniel Felix January 1998 (has links)
This thesis describes the background, investigation and construction of a system for storing data on the magnetic stripe of a standard three-inch plastic credit card. Investigation shows that the information storage limit within a 3.375 in by 0.11 in rectangle of the stripe is bounded to about 20 kBytes. Practical issues limit the data storage to around 300 Bytes with a low raw error rate: a four-fold density increase over the standard. Removal of the timing jitter (that is probably caused by the magnetic medium particle size) would increase the limit to 1500 Bytes with no other system changes. This is enough capacity for either a small digital passport photograph or a digitized signature, making it possible to remove printed versions from the surface of the card. To achieve even these modest gains has required the development of a new variable-rate code that is more resilient to timing errors than other codes in its efficiency class. The tabulation of the effects of timing errors required the construction of a new code metric and self-recovering decoders. In addition, a new method of timing recovery, based on signal 'snatches', has been invented to increase the rapidity with which a Bayesian decoder can track the changing velocity of a hand-swiped card. The timing recovery and Bayesian detector have been integrated into one computational (software) unit that is self-contained and can decode a general class of (d, k) constrained codes. Additionally, the unit has a signal-truncation mechanism to alleviate some of the effects of non-linear distortion that are present when a magnetic card is read with a magneto-resistive sensor that has been driven beyond its bias magnetization. While the storage density is low and the total storage capacity is meagre in comparison with contemporary storage devices, the high-density card may still have a niche role to play in society. Nevertheless, in the face of the smart card its long-term outlook is uncertain. However, several areas of coding and detection under short-duration extreme conditions have brought new decoding methods to light. The scope of these methods is not limited to the credit card.
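The (d, k) constrained codes mentioned above bound the number of 0s between consecutive 1s so that a decoder can recover timing from the spacing of transitions. A minimal sketch of the constraint check, in Python, assuming the usual RLL convention (the thesis's variable-rate code and Bayesian detector are far richer than this):

def satisfies_dk(bits, d, k):
    """Return True if every 0-run between 1s in `bits` has length in [d, k]."""
    run = None  # length of the current 0-run; None until the first 1 is seen
    for b in bits:
        if b == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
            if run > k:  # too many 0s even before the next 1 arrives
                return False
    return True

# Example: a (1, 3) constraint allows 1 to 3 zeros between ones.
assert satisfies_dk([1, 0, 0, 1, 0, 1], 1, 3)
assert not satisfies_dk([1, 1, 0, 1], 1, 3)  # adjacent 1s violate d = 1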
2

A Display System for Bliss Symbolics

Callway, E.G. 04 1900 (has links)
A microprocessor-driven display was built and programmed for the storage and reproduction of Bliss symbols. An explanation is offered for the success of the symbol language in teaching the handicapped.

The hardware was designed to be inexpensive enough for classroom use, while still delivering adequate flexibility and resolution. Due to the complexity and variety of the symbols, a method of data compaction was developed to reduce the required storage space.

Initial tests are presented and suggestions are made for continuing the work. / Thesis / Master of Engineering (MEngr)
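The abstract does not state which compaction method was used; run-length encoding is one plausible, hypothetical illustration for the sparse line-art bitmaps that Bliss symbols produce. A minimal sketch in Python:

# Hypothetical illustration only: the thesis's actual compaction
# scheme is not specified in the abstract.
def rle_encode(row):
    """Encode a row of 0/1 pixels as (value, run_length) pairs."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1] = (pixel, runs[-1][1] + 1)
        else:
            runs.append((pixel, 1))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into a pixel row."""
    return [value for value, length in runs for _ in range(length)]

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
assert rle_decode(rle_encode(row)) == row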
3

Design and analysis of high-performance and recoverable data storages

Xiao, Weijun, January 2009 (has links)
Thesis (Ph.D.) -- University of Rhode Island, 2009. / Typescript. Includes bibliographical references (leaves 128-137).
4

ESetStore: an erasure-coding based distributed storage system with fast data recovery

Liu, Chengjian 31 August 2018 (has links)
The past decade has witnessed the rapid growth of data in large-scale distributed storage systems. Triplication, a reliability mechanism with 3x storage overhead adopted by large-scale distributed storage systems, incurs heavy storage cost as the amount of data keeps growing. Consequently, erasure codes have been introduced in many storage systems because they provide higher storage efficiency and fault tolerance than data replication. However, erasure coding suffers from performance bottlenecks in both I/O and computation, which can severely degrade the performance of large-scale erasure-coded storage systems. In this thesis, we investigate how to eliminate key performance issues in I/O and computation when applying erasure coding to large-scale storage systems, and we propose a prototype named ESetStore to improve the recovery performance of erasure-coded storage systems. Our studies are as follows.

First, we study the encoding and decoding performance of erasure coding, which can be a key bottleneck given state-of-the-art disk I/O throughput and network bandwidth. We propose a graphics processing unit (GPU)-based implementation of erasure coding named G-CRS, which employs the Cauchy Reed-Solomon (CRS) code, to improve encoding and decoding performance. To maximize the coding performance of G-CRS by fully utilizing the GPU's computational power, we designed and implemented a set of optimization strategies. Our evaluation demonstrated that G-CRS is 10 times faster than most other coding libraries.

Second, we investigate the performance degradation introduced by intensive I/O operations during recovery in large-scale erasure-coded storage systems. To improve recovery performance, we propose a data placement algorithm named ESet. We define a configurable parameter, the overlapping factor, that lets system administrators easily achieve the desired degree of recovery I/O parallelism. Our simulation results show that ESet can significantly improve data recovery performance, without violating reliability requirements, by distributing data and code blocks across different failure domains.

Third, we examine the performance of applying coding techniques to in-memory storage. We design a reliable in-memory cache for key-value stores named R-Memcached; this work serves as a prelude to applying erasure coding to in-memory metadata storage. R-Memcached exploits coding techniques to achieve reliability and can tolerate up to two node failures. Our experimental results show that R-Memcached maintains good latency and throughput even during node failures.

Finally, we design and implement the ESetStore prototype for erasure-coded storage systems, which integrates our data placement algorithm ESet to provide fast data recovery.
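To illustrate the core idea behind erasure-coded recovery, here is a minimal sketch in Python of single-parity (RAID-5 style) XOR coding. This is an illustrative simplification, not G-CRS itself: the thesis's G-CRS performs Cauchy Reed-Solomon arithmetic over GF(2^8) and tolerates multiple simultaneous failures.

def encode_parity(data_blocks):
    """XOR k equal-sized data blocks into one parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover_block(surviving_blocks, parity):
    """Rebuild the single missing data block from the survivors plus parity."""
    missing = bytearray(parity)
    for block in surviving_blocks:
        for i, b in enumerate(block):
            missing[i] ^= b
    return bytes(missing)

data = [b"AAAA", b"BBBB", b"CCCC"]  # k = 3 data blocks
p = encode_parity(data)
# Suppose the second block is lost; recover it from the other two and parity:
rebuilt = recover_block([data[0], data[2]], p)
assert rebuilt == data[1]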
5

The users' approach towards the storing of personal information by IPAs: A qualitative study of the users

Eriksson, Martin January 2020 (has links)
Communication between computers and users has become a normal part of everyday life. Intelligent personal assistants (IPAs) allow communication between computer and user, so that tasks no longer need to be carried out manually by the user. Intelligent personal assistants have grown over the past 10 years, have become more advanced with the help of artificial intelligence, and are adapted to users' needs. At the same time, more data about users is being stored in order to extract information and become more personal. The number of products has increased in recent years and continues to grow together with the number of users. Intelligent personal assistants have recently begun to move into users' homes and have become a clear part of their everyday lives. Today's laws on the storage of information are not applied in the same way to intelligent personal assistants, and as the storage of information increases, security flaws also appear.

The study used a qualitative research method: users of intelligent personal assistants were interviewed. The empirical material collected through the interviews was analysed to examine user behaviour and the correlation between the storage of data and the users.

The study shows that users' behaviour towards storage is complex. The results present a divided picture among users: some users display contradictory behaviour, not living as they teach, while other users attach less importance to the storage that takes place.
6

iSCSI-based storage area network for disaster

Murphy, Matthew R. Harvey, Bruce A. January 2005 (has links)
Thesis (M.S.)--Florida State University, 2005. / Advisor: Dr. Bruce A. Harvey, Florida State University, College of Engineering, Dept. of Electrical and Computer Engineering. Title and description from dissertation home page (viewed June 10, 2005). Document formatted into pages; contains vii, 73 pages. Includes bibliographical references.
7

The Data Center under your Desk: How Disruptive is Modern Hardware for DB System Design?

Lehner, Wolfgang 10 January 2023 (has links)
While we are already used to seeing more than 1,000 cores within a single machine, the next processing platforms for database engines will be heterogeneous, with built-in GPU-style processors as well as specialized FPGAs or chips with domain-specific instruction sets. Moreover, traditional volatile RAM, as well as upcoming non-volatile RAM with capacities in the hundreds of TBytes per machine, will provide great opportunities for storage engines but also call for radical changes in the architecture of such systems. Finally, the emergence of economically affordable, high-speed/low-latency interconnects as a basis for rack-scale computing is questioning long-standing folklore algorithmic assumptions, and will certainly play an important role in the big picture of building modern data management platforms. In this talk, we will try to classify and review existing approaches from performance, robustness and energy-efficiency perspectives, and pinpoint interesting starting points for further research activities.
8

Architecture with a high processing rate and reduced memory bandwidth for motion estimation in digital videos

Lopes, Alba Sandyra Bezerra 30 March 2011 (has links)
Nowadays many electronic devices support digital video: cell phones, digital cameras, video cameras and digital televisions are some examples. However, raw video, as captured, represents a huge amount of data, millions of bits. Storing it in this primary form would require an enormous amount of disk space, and transmitting it would require enormous bandwidth. Video compression therefore becomes essential to make the storage and transmission of this information possible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data necessary to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution videos according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was designed to provide a high degree of data reuse; the data-reuse scheme adopted reduces the bandwidth required to execute motion estimation. Motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the performance of the final video coder. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
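For illustration, here is a minimal full-search block-matching sketch in Python using a sum-of-absolute-differences (SAD) criterion. A real H.264/AVC estimator adds sub-pixel refinement, variable block sizes and the hardware data-reuse scheme the thesis targets; this only conveys the basic search.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur, ref, bx, by, n=8, radius=4):
    """Find the motion vector for the n x n block at (bx, by) in frame `cur`
    by exhaustively searching a (2*radius+1)^2 window in reference frame `ref`."""
    block = [row[bx:bx + n] for row in cur[by:by + n]]
    best = (0, 0)
    best_cost = float('inf')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cand = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost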
9

A Dredging Knowledge-Base Expert System for Pipeline Dredges with Comparison to Field Data

Wilson, Derek Alan December 2010 (has links)
A Pipeline Analytical Program and Dredging Knowledge-Base Expert-System (DKBES) determines a pipeline dredge's production and resulting cost and schedule. Pipeline dredge engineering presents a complex and dynamic process necessary to maintain navigable waterways. Dredge engineers use pipeline engineering and slurry transport principles to determine the production rate of a pipeline dredge system. Engineers then use cost engineering factors to determine the expense of the dredge project. Previous work in engineering incorporated an object-oriented expert-system to determine cost and scheduling of mid-rise building construction, where data objects represent the fundamental elements of the construction process within the program execution. A previously developed dredge cost estimating spreadsheet program, which uses hydraulic engineering and slurry transport principles, determines the performance metrics of a dredge pump and pipeline system. This study focuses on combining hydraulic analysis with the functionality of an expert-system to determine the performance metrics of a dredge pump and pipeline system and its resulting schedule. Field data from the U.S. Army Corps of Engineers pipeline dredge, Goetz, and several contract daily dredge reports show how accurately the DKBES can predict pipeline dredge production. Real-time dredge instrumentation data from the Goetz compares the accuracy of the Pipeline Analytical Program to actual dredge operation. Comparison of the Pipeline Analytical Program to pipeline daily dredge reports shows how accurately the Pipeline Analytical Program can predict a dredge project's schedule over several months. Both of these comparisons determine the accuracy and validity of the Pipeline Analytical Program and DKBES as they calculate the performance metrics of the pipeline dredge project. The results of the study determined that the Pipeline Analytical Program compared closely to the Goetz field data where only pump and pipeline hydraulics affected the dredge production. Results from the dredge projects determined that the Pipeline Analytical Program underestimated actual long-term dredge production. Study results identified key similarities and differences between the DKBES and the spreadsheet program in terms of cost and scheduling. The study then draws conclusions based on these findings and offers recommendations for further use.
