31

Implementação de um sistema de arquivos para uma plataforma de computação reconfigurável / A file system implementation for a reconfigurable computing platform

Sanches, Adriano Kaminski 20 September 2006
In a computer system, data are stored on a storage unit, according to some organizing logic, in structures called files. The file system is responsible for structuring, identifying, accessing, protecting, and managing these files; it also acts as the link between the user and the device, translating high-level commands from the user into low-level commands the storage unit understands. The present work implements a file system for mobile devices based on reconfigurable computing. The system supports applications that need to store and/or retrieve large volumes of data, such as the acquisition of digitized images from CMOS cameras, and also serves as an initial tool for developing a storage module on a reconfigurable-computing board intended for didactic use. The file system implemented is FAT16, and the mass-storage devices used are SD (Secure Digital) and MMC (MultiMediaCard) memory cards.
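The FAT16 on-disk format the thesis targets is well documented, so its core structures can be sketched independently of the thesis code. Below is a minimal C illustration (a hedged sketch, not the author's implementation) of the 32-byte FAT16 directory entry and the cluster-to-offset arithmetic a driver for SD/MMC cards has to perform; the geometry values in `main` are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Classic 32-byte FAT16 directory entry (standard on-disk layout). */
#pragma pack(push, 1)
typedef struct {
    uint8_t  name[11];     /* 8.3 name, space padded                  */
    uint8_t  attr;         /* 0x10 = directory, 0x20 = archive        */
    uint8_t  reserved;
    uint8_t  ctime_tenth;  /* creation time, tenths of a second       */
    uint16_t ctime;        /* creation time, packed hh:mm:ss          */
    uint16_t cdate;        /* creation date                           */
    uint16_t adate;        /* last access date                        */
    uint16_t cluster_hi;   /* high cluster word; zero on FAT16        */
    uint16_t mtime;        /* last modification time                  */
    uint16_t mdate;        /* last modification date                  */
    uint16_t cluster_lo;   /* first cluster of the file's FAT chain   */
    uint32_t size;         /* file size in bytes                      */
} fat16_dirent;
#pragma pack(pop)

/* Byte offset of a cluster's first sector. Data clusters start at 2,
   after the reserved sectors, the FATs, and the root directory. */
static uint32_t cluster_to_offset(uint32_t cluster,
                                  uint32_t data_start,
                                  uint32_t bytes_per_cluster)
{
    return data_start + (cluster - 2) * bytes_per_cluster;
}

int main(void)
{
    /* Geometry below (data region at 0x20000, 16 KB clusters) is invented. */
    printf("dirent size: %zu (must be 32)\n", sizeof(fat16_dirent));
    printf("cluster 3 at byte %u\n", cluster_to_offset(3, 0x20000, 16384));
    return 0;
}
```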
33

Design and Implementation of a QoS file transfer protocol over Hadoop distributed file system

Chen, Chih-yi 26 July 2010
Cloud computing is pervasive in our daily life. For instance, I usually use Google's GMail to receive e-mail, Google Documents to edit documents online, and Google Calendar to make my daily schedule. Google thus provides a "Platform as a Service (PaaS)": it delivers a computing platform as a service, and that platform sustains many cloud applications such as those mentioned above. However, Google's cloud computing platform is private: we cannot trace its source code or build cloud applications on it. Fortunately, there is an open-source Apache project named "Hadoop", which includes a distributed file system very similar to the Google File System (GFS), called the "Hadoop Distributed File System (HDFS)". In order to observe the properties of HDFS, we design and implement an HDFS-based FTP server system called FTP-ON-HDFS, that is, an FTP server whose storage is HDFS. The system comprises a web console for the FTP administrator, a FreeRADIUS server and a MySQL database for user authentication, a NameNode daemon on its own machine, a SecondaryNameNode on its own machine, and five DataNode daemons on five different machines. FTP-ON-HDFS can tune two QoS parameters: "data block size" and "data replication". We tuned both parameters in our system and compared its performance with the Hadoop File System (FS) shell commands and with a normal vsftpd server. In addition, FUSE can mount HDFS from a remote cluster onto a local machine and use the local machine's permissions to manage HDFS, so we also compared the performance of FUSE over HDFS (FUSE-DFS) with our FTP-ON-HDFS system.
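Both QoS parameters tuned above are exposed per file by standard HDFS client APIs. As a rough illustration (using libhdfs, the C client bundled with Hadoop; the host, port, and path are placeholders, and this is not the thesis's code), a writer can request a replication factor and block size when opening a file:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include "hdfs.h"   /* libhdfs header shipped with Hadoop */

int main(void)
{
    /* NameNode host and port are placeholders for a real cluster. */
    hdfsFS fs = hdfsConnect("namenode.example.com", 9000);
    if (!fs) { fprintf(stderr, "connect failed\n"); return 1; }

    /* Per-file QoS knobs: 2 replicas, 128 MB blocks.
       Passing 0 for either falls back to the cluster defaults. */
    hdfsFile f = hdfsOpenFile(fs, "/tmp/qos-demo.dat", O_WRONLY, 0,
                              /* replication */ 2,
                              /* blocksize   */ 128 * 1024 * 1024);
    if (!f) { fprintf(stderr, "open failed\n"); return 1; }

    const char msg[] = "hello hdfs\n";
    hdfsWrite(fs, f, msg, (tSize)strlen(msg));
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```

Build against libhdfs with something like `gcc qos.c -lhdfs` plus the Hadoop include and library paths.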
34

The Umbrella File System: Storage Management Across Heterogeneous Devices

Garrison, John Allen May 2010
With the advent of Flash-based solid state devices (SSDs), the differences in the physical devices used to store data in computers are becoming more and more pronounced. Effectively mapping the differences in storage devices to the files and applications using those devices is the problem addressed in this dissertation. This dissertation presents the Umbrella File System (UmbrellaFS), a layered file system designed to effectively map file- and device-level differences while maintaining a single coherent directory structure for users. Particular files are directed to appropriate underlying file systems by intercepting the system calls connecting the Virtual File System (VFS) to the underlying file systems. Files are evaluated by a policy module that can examine both filenames and file metadata to make decisions about final placement, and are transparently directed to, and moved between, appropriate file systems based on their characteristics. A prototype of UmbrellaFS is implemented as a loadable kernel module in the 2.4 and 2.6 Linux kernels. In addition to directing files to file systems, UmbrellaFS enables different decisions at other layers of the storage stack. In particular, alternate page-cache writeback methods are presented through the use of UmbrellaFS: a multiple-queue strategy based on file sequentiality and a sorting strategy are presented as alternatives to the standard Linux cache writeback protocols. These strategies are implemented in a 2.6 Linux kernel and show improvements in a variety of benchmarks and tests.
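To make the policy module concrete, here is a toy user-space sketch of the kind of decision it performs. The thresholds, extensions, and target names are invented for illustration; the real UmbrellaFS makes this decision inside the kernel, between the VFS and the underlying file systems.

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Toy placement policy in the spirit of UmbrellaFS: route a file to one
   of several backing file systems based on its name and metadata. */
enum target { FS_SSD, FS_DISK };

static enum target place(const char *name, const struct stat *st)
{
    const char *dot = strrchr(name, '.');

    /* Hypothetical rule: small files benefit from the SSD-backed FS. */
    if (st->st_size < 64 * 1024)
        return FS_SSD;

    /* Hypothetical rule: large sequential media go to the rotating disk. */
    if (dot && (!strcmp(dot, ".mp4") || !strcmp(dot, ".iso")))
        return FS_DISK;

    return FS_DISK;  /* default backing store */
}

int main(int argc, char **argv)
{
    struct stat st;
    if (argc < 2 || stat(argv[1], &st) != 0) return 1;
    printf("%s -> %s\n", argv[1],
           place(argv[1], &st) == FS_SSD ? "ssd-fs" : "disk-fs");
    return 0;
}
```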
35

Forensic framework for honeypot analysis

Fairbanks, Kevin D. 05 April 2010
The objective of this research is to evaluate and develop new forensic techniques for use in honeynet environments, in an effort to address areas where anti-forensic techniques defeat current forensic methods. The fields of computer and network security have expanded over time to include many complex ideas and algorithms, and a student of these fields can easily fall into the thought pattern that preventive measures are the only major thrust of the topics. It is equally important to be able to determine the cause of a security breach; thus the field of computer forensics has grown. In this field there exist toolkits and methods used to forensically analyze production and honeypot systems, and anti-forensic techniques have been developed to counter those toolkits. Honeypots and production systems have several intrinsic differences, which can be exploited to produce honeypot data sources that are not currently available from production systems. This research examines possible honeypot data sources and cultivates novel methods to combat anti-forensic techniques. In this document, three parts of a forensic framework developed specifically for honeypot and honeynet environments are presented. The first, TimeKeeper, is an inode preservation methodology which utilizes the Ext3 journal. This is followed by an examination of dentry logging, which is primarily used to map inode numbers to filenames in Ext3. The final component presented is the initial research behind a toolkit for the examination of the recently deployed Ext4 file system. Each respective chapter includes the necessary background information and an examination of related work, as well as the architecture, design, conceptual prototyping, and results from testing each major framework component.
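A brief sketch may help picture what TimeKeeper works with: the Ext3 journal retains stale copies of inode-table blocks, and each on-disk inode begins with its MAC and deletion timestamps. The fragment below is a hedged illustration of the standard Ext2/Ext3 inode layout, not the TimeKeeper toolkit; it decodes those timestamp fields from a raw inode image supplied as a file.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Leading fields of an on-disk Ext2/Ext3 inode (little-endian). Older
   copies of these structures survive in journaled inode-table blocks. */
#pragma pack(push, 1)
typedef struct {
    uint16_t i_mode;
    uint16_t i_uid;
    uint32_t i_size;
    uint32_t i_atime;   /* last access                 */
    uint32_t i_ctime;   /* inode change                */
    uint32_t i_mtime;   /* last modification           */
    uint32_t i_dtime;   /* deletion time (0 if live)   */
} ext3_inode_head;
#pragma pack(pop)

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s inode-image\n", argv[0]); return 1; }
    FILE *fp = fopen(argv[1], "rb");
    ext3_inode_head ino;
    if (!fp || fread(&ino, sizeof ino, 1, fp) != 1) return 1;

    time_t a = ino.i_atime, m = ino.i_mtime, c = ino.i_ctime;
    printf("atime %s", ctime(&a));
    printf("mtime %s", ctime(&m));
    printf("ctime %s", ctime(&c));
    if (ino.i_dtime)
        printf("deleted at epoch %u\n", ino.i_dtime);
    fclose(fp);
    return 0;
}
```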
36

Performance evaluation of high performance parallel I/O

Dhandapani, Mangayarkarasi January 2003
Thesis (M.S.), Mississippi State University, Department of Computer Science and Engineering. Title from title screen. Includes bibliographical references.
37

A TRUSTED STORAGE SYSTEM FOR THE CLOUD

Karumanchi, Sushama 01 January 2010
Data stored in third-party storage systems like the cloud might not be secure, since the confidentiality and integrity of the data are not guaranteed. Though cloud computing provides cost-effective storage services, it is a third-party service, so a client cannot trust the cloud service provider to store its data securely within the cloud. Hence, many organizations and users may be unwilling to use cloud services to store their data until certain security guarantees are made. In this thesis, a solution to the problem of securely storing a client's data in the cloud, maintaining both its confidentiality and its integrity, is developed. Five protocols are developed which ensure that the client's data is stored and replicated only on trusted storage servers, and that the data owners and other privileged users of that data access it securely. The system is based on trusted computing platform technology [11] and uses a Trusted Platform Module, as specified by the Trusted Computing Group [11]. An encrypted file system is used to encrypt the user's data. The system provides data security even against a system administrator in the cloud.
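The thesis's contribution is the protocols themselves, but the bulk-encryption step an encrypted file system performs can be illustrated generically. The sketch below uses OpenSSL's EVP interface to encrypt one data block with AES-256-GCM; it is a stand-in under stated assumptions, not the thesis's code (in a TPM-backed design the key would be unsealed by the TPM rather than generated locally).

```c
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Encrypt one data block with AES-256-GCM; the tag authenticates it. */
static int encrypt_block(const unsigned char *key,            /* 32 bytes */
                         const unsigned char *iv,             /* 12 bytes */
                         const unsigned char *in, int inlen,
                         unsigned char *out, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = 0, ok = 0;

    if (ctx
        && EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) == 1
        && EVP_EncryptUpdate(ctx, out, &len, in, inlen) == 1) {
        total = len;
        if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1
            && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1) {
            total += len;
            ok = 1;
        }
    }
    EVP_CIPHER_CTX_free(ctx);
    return ok ? total : -1;
}

int main(void)
{
    unsigned char key[32], iv[12], tag[16], out[64];
    const unsigned char block[] = "file-system data block";

    RAND_bytes(key, sizeof key);   /* stand-in for a TPM-unsealed key */
    RAND_bytes(iv, sizeof iv);
    int n = encrypt_block(key, iv, block, sizeof block, out, tag);
    printf("ciphertext bytes: %d\n", n);
    return n > 0 ? 0 : 1;
}
```

Compile with `-lcrypto`; per-block unique IVs are essential with GCM.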
38

Comparison and End-to-End Performance Analysis of Parallel Filesystems

Kluge, Michael 20 September 2011
This thesis presents a contribution to the field of performance analysis for Input/Output (I/O) related problems, focusing on the area of High Performance Computing (HPC). Besides the compute nodes, HPC systems need a large number of supporting components that add their individual behavior to the overall performance characteristic of the whole system. File systems in such environments in particular have their own infrastructure: file operations are typically initiated at the compute nodes and proceed through a deep software stack until the file content arrives at the physical medium. A handful of shortcomings characterize the current state of the art for performance analysis in this area, including system-wide data collection, a comprehensive analysis approach for all collected data, trace event analysis adjusted for I/O-related problems, and methods to compare current with archived performance data. This thesis proposes to instrument all software and hardware layers to enhance the performance analysis of file operations; the additional information can be used to investigate the performance characteristics of parallel file systems. To perform I/O analyses on HPC systems, a comprehensive approach is needed to gather the related performance events, examine the collected data and, if necessary, replay relevant parts on different systems. One larger part of this thesis is dedicated to algorithms that reduce the information found in trace files to the level needed for an I/O analysis. This reduction is based on the assumption that, for this type of analysis, all I/O events but only a subset of the synchronization events of a parallel program trace have to be considered: to extract an I/O pattern from an event trace, only those synchronization points that describe dependencies among different I/O requests are needed. Two algorithms are developed to remove negligible events from the event trace. Considering the related work on the analysis of parallel file systems, the inclusion of counter data from external sources, e.g. the infrastructure of a parallel file system, has been identified as a major milestone towards a holistic analysis approach. This infrastructure contains a large amount of valuable information that is essential for describing performance effects observed in applications. This thesis presents an approach to collect, process, and store these data, and discusses how to correctly merge the collected values with application traces. Here, a revised definition of the term "performance counter" is the first step, followed by a tree-based approach to combine raw values into secondary values. A visualization approach for I/O patterns closes another gap in the analysis process. Replaying I/O-related performance events or event patterns can be done by a flexible I/O benchmark; the constraints for the development of such a benchmark are identified, as well as the overall architecture of a prototype implementation. Finally, different examples demonstrate the usage of the developed methods and show their potential. All examples are real use cases situated on the HRSK research complex and the 100GBit Testbed at TU Dresden. The I/O-related parts of a bioinformatics application and a CFD application have been analyzed in depth, and enhancements for both are proposed. An instance of a Lustre file system was deployed and tuned on the 100GBit Testbed through extensive use of external performance counters.
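The reduction idea can be pictured as a filter over an event stream: every I/O event survives, and a synchronization event survives only if it can order two I/O requests. The C sketch below is one plausible simplification of that rule with an invented event model; the thesis's actual algorithms are more elaborate.

```c
#include <stdio.h>

/* Minimal event model for a single process trace. */
typedef enum { EV_IO, EV_SYNC, EV_OTHER } ev_kind;
typedef struct { ev_kind kind; int id; } event;

/* Keep every I/O event. Keep a synchronization event only if I/O occurs
   both before and after it, i.e. it can express a dependency between two
   I/O requests. Everything else is negligible for pattern extraction. */
static int reduce(const event *in, int n, event *out)
{
    int kept = 0, seen_before = 0, seen_after = 0;
    int io_after[n];                 /* io_after[i]: any I/O at index > i? */

    for (int i = n - 1; i >= 0; i--) {   /* backward sweep */
        io_after[i] = seen_after;
        if (in[i].kind == EV_IO) seen_after = 1;
    }
    for (int i = 0; i < n; i++) {        /* forward sweep */
        if (in[i].kind == EV_IO) {
            out[kept++] = in[i];
            seen_before = 1;
        } else if (in[i].kind == EV_SYNC && seen_before && io_after[i]) {
            out[kept++] = in[i];
        }
    }
    return kept;
}

int main(void)
{
    event trace[] = { {EV_SYNC,1}, {EV_IO,2}, {EV_OTHER,3},
                      {EV_SYNC,4}, {EV_IO,5}, {EV_SYNC,6} };
    event out[6];
    int kept = reduce(trace, 6, out);    /* keeps events 2, 4, 5 */
    for (int i = 0; i < kept; i++)
        printf("kept event %d\n", out[i].id);
    return 0;
}
```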
39

SCMFS Performance Enhancement and Implementation on Mobile Platform

Cao, Qian August 2012
This thesis presents a method for enhancing the performance of the Storage Class Memory File System (SCMFS) and an implementation of SCMFS on the Android platform. It focuses on analyzing the factors that influence the performance of memory file systems and the differences between implementing SCMFS on the Android and Linux kernels. SCMFS allocates memory pages as file blocks and employs virtual memory addresses as file block addresses, so it exercises the processor's memory management unit and TLB (Translation Lookaside Buffer) during file accesses. The TLB is an expensive resource with a limited number of entries for caching virtual-to-physical address translations, and a TLB miss results in an expensive page walk through the memory page table; TLB misses therefore play an important role in determining SCMFS performance. In this thesis, SCMFS is designed to support both 4KB and 2MB page sizes in order to reduce TLB misses while avoiding significant internal fragmentation. Comparing SCMFS with YAFFS2 and EXT4 using popular benchmarks reveals both the advantages and the disadvantages of the huge-page and small-page versions of SCMFS. The second part of this thesis presents an implementation of SCMFS on the Android platform. At the time of this research project, the Android kernel had not yet been merged into the Linux kernel, and two main changes to the SCMFS kernel code, memory zoning and the inode functions, were made for compatibility with the Android kernel. AndroSH, a file system benchmark for SCMFS on Android, was developed based on shell scripts. Evaluations compare SCMFS with YAFFS2 and EXT4 from three perspectives: I/O throughput, user data access latency, and application execution latency. SCMFS shows a performance advantage because of its small instruction footprint and its pre-allocation mechanism. However, the singly linked list used by SCMFS to store subdirectories is less efficient than the HTree index used by EXT4; future work could improve the lookup efficiency of SCMFS.
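The 2MB-page variant relies on the processor's large-page support, and the same mechanism can be requested from user space on Linux. The sketch below is generic Linux code, not SCMFS itself (which works inside the kernel); it maps one 2 MB huge page and assumes huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Ask the kernel for one 2 MB huge page. Fewer, larger pages mean fewer
   TLB entries per mapped byte -- the effect SCMFS exploits for file blocks.
   Reserve pages first, e.g.:  echo 16 > /proc/sys/vm/nr_hugepages */
int main(void)
{
    const size_t len = 2 * 1024 * 1024;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

    memset(p, 0xab, len);   /* touch it: one TLB entry covers all 2 MB */
    printf("2 MB huge page mapped at %p\n", p);
    munmap(p, len);
    return 0;
}
```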
40

Implementation of the Hadoop MapReduce algorithm on virtualized shared storage systems

Nethula, Shravya January 2016
Context: Hadoop is an open-source software framework developed for distributed storage and distributed processing of large data sets. Implementing the Hadoop MapReduce algorithm on virtualized shared storage, eliminating the Hadoop Distributed File System (HDFS), is a challenging task. In this study, the Hadoop MapReduce algorithm is implemented on the Compuverde software, which provides virtualized shared storage.

Objectives: This study identifies the effect of using virtualized shared storage with the Hadoop framework. The main objective is to design a method for implementing the Hadoop MapReduce algorithm on the Compuverde software for virtualized shared storage of big data. Finally, the performance of the MapReduce algorithm on Compuverde shared storage (Compuverde File System, CVFS) is evaluated and compared to its performance on HDFS.

Methods: First, a literature study is conducted to identify the effect of implementing Hadoop on virtualized shared storage; the Compuverde software is analyzed in detail, and the MapReduce algorithms and the functioning of HDFS are scrutinized. The main research method is the implementation of an approach in which the Hadoop MapReduce algorithm is applied to the Compuverde software, eliminating HDFS. The final step is experimentation comparing the performance of the MapReduce algorithm on Compuverde shared storage (CVFS) with its performance on HDFS.

Results: The experiment is conducted in two scenarios, CPU-bound and I/O-bound. In the CPU-bound scenario, the average execution time of the WordCount program grows linearly with the size of the data set; this linear growth is observed for both file systems, HDFS and CVFS, and the same holds in the I/O-bound scenario. When the average execution times are plotted, both file systems perform similarly in the CPU-bound scenario (multi-node environment). In the I/O-bound scenario (multi-node environment), HDFS slightly outperforms CVFS for a 1.0 GB data set, and the two file systems perform without much difference for 0.5 GB and 1.5 GB data sets.

Conclusions: The MapReduce algorithm can be applied to live data in virtualized shared storage systems without copying the data into HDFS. In a single-node environment, distributed storage systems perform better than shared storage systems. In a multi-node environment, HDFS and CVFS perform similarly in the CPU-bound scenario, while HDFS performs slightly better than CVFS for the 1.0 GB data set in the I/O-bound scenario. Hence, distributed storage systems perform similarly to shared storage systems in both CPU-bound and I/O-bound scenarios in a multi-node environment.
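WordCount, the benchmark used in these experiments, is conventionally written in Java, but Hadoop Streaming lets any executable act as mapper or reducer over stdin/stdout. The C mapper below sketches the map step of that benchmark (an illustration, not the study's code); a matching reducer would sum the counts per word after the framework sorts by key.

```c
#include <ctype.h>
#include <stdio.h>

/* Hadoop Streaming map step for WordCount: read text on stdin and emit
   one "word<TAB>1" line per word. */
int main(void)
{
    int c;
    char word[256];
    size_t n = 0;

    while ((c = getchar()) != EOF) {
        if (isalnum(c)) {
            if (n + 1 < sizeof word)
                word[n++] = (char)tolower(c);
        } else if (n > 0) {
            word[n] = '\0';
            printf("%s\t1\n", word);
            n = 0;
        }
    }
    if (n > 0) { word[n] = '\0'; printf("%s\t1\n", word); }
    return 0;
}
```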
