11

Using AFS as a distributed file system for computational and data grids in high energy physics

Jones, Michael Angus Scott January 2005 (has links)
The use of the distributed file system AFS as a solution to the “input/output sandbox” problem in grid computing is studied. A computational grid middleware, built primarily to accommodate the environment of the BaBar Computing Model, has been designed, written, and is presented, and the existing grid middleware and resources are summarised. A number of benchmarks (one written for this thesis) are used to test the performance of AFS over the wide-area network and grid environment. AFS performance is also tested using straightforward BaBar analysis code on real data. Secure web-based and command-line interfaces created to monitor job submission and the grid fabric are presented.
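As an illustration of the kind of wide-area file-system benchmark the thesis describes, the sketch below times sequential writes and reads through a mounted file-system path. The AFS mount point is an assumption for illustration; this is not the benchmark written for the thesis.

```python
import os
import time

MOUNT = "/afs/example.org/user/scratch"  # hypothetical AFS mount point
BLOCK = 1024 * 1024                      # 1 MiB per operation
COUNT = 256                              # 256 MiB total

def sequential_write(path: str) -> float:
    """Write COUNT blocks and return throughput in MiB/s."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())             # force data out to the file server
    return COUNT / (time.perf_counter() - start)

def sequential_read(path: str) -> float:
    """Read the file back and return throughput in MiB/s (may hit the local cache)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    return COUNT / (time.perf_counter() - start)

if __name__ == "__main__":
    target = os.path.join(MOUNT, "bench.dat")
    print(f"write: {sequential_write(target):.1f} MiB/s")
    print(f"read:  {sequential_read(target):.1f} MiB/s")
```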
12

Research About the Efficient Recording Structure of Installed Data Recording Devices

Lee, Hyun-Kyu, Lee, Hyun-So, Song, Jae-Hoon October 2011 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Although wireless data-transmission technologies have evolved significantly, data recording devices are still used in avionics, military, and space applications because of the data-rate limits and reliability issues of wireless environments. The payload is limited in weight and the storage is limited in capacity, so a recording structure that works within a limited amount of memory must be studied. In this paper, we propose a new data recording structure derived from the conditions necessary for efficient use of memory. The proposed structure is functionally equivalent to other recording systems but uses less memory than equivalent recording structures.
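The abstract does not spell out the proposed structure, but a common way to record within a fixed memory budget is a ring buffer that overwrites the oldest records. The sketch below is a generic illustration of that idea, not the structure proposed in the paper:

```python
class RingRecorder:
    """Fixed-capacity recorder: once full, the oldest record is overwritten."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = [None] * capacity   # pre-allocated, so memory use is bounded
        self.head = 0                    # next slot to write
        self.count = 0                   # number of valid records

    def record(self, sample: bytes) -> None:
        self.slots[self.head] = sample
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def dump(self) -> list:
        """Return the stored records, oldest first."""
        if self.count < self.capacity:
            return self.slots[: self.count]
        return self.slots[self.head:] + self.slots[: self.head]

rec = RingRecorder(capacity=4)
for i in range(6):
    rec.record(f"sample-{i}".encode())
print(rec.dump())  # the two oldest samples have been overwritten
```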
13

Enhancing the Accuracy of Synthetic File System Benchmarks

Farhat, Salam 01 January 2017 (has links)
File system benchmarking plays an essential part in assessing a file system's performance. File system performance is especially difficult to measure and study because it involves several layers of hardware and software. Furthermore, different systems have different workload characteristics, so a file system optimized for one workload might not perform optimally under others. It is therefore imperative that the file system under study be examined with a workload equivalent to its production workload, to ensure that it is optimized according to its usage.

The most widely used benchmarking method is synthetic benchmarking, owing to its ease of use and flexibility. That flexibility allows system designers to produce a variety of workloads that provide insight into how the file system will perform under slightly different conditions. The downside of synthetic workloads is that they are generic and do not share the characteristics of production workloads. For instance, synthetic benchmarks do not take into consideration the effects of the cache, which can greatly impact the performance of the underlying file system, nor do they model the variation in a given workload. This can lead to file systems that are not optimally designed for their usage.

This work enhanced synthetic workload generation methods by taking into consideration how file system operations are satisfied by lower-level function calls, and by modeling variations in the workload's footprint where present. The first step in the methodology was to run a given workload and trace it with a tool called tracefs. The collected traces contained data on the file system operations and the lower-level function calls that satisfied them. The trace was then divided into chunks small enough that the workload characteristics of each chunk could be considered uniform. A configuration file modeling each chunk was generated and supplied to FileRunner, a synthetic workload generator tool created by this work. The workload definition for each chunk allowed FileRunner to generate a synthetic workload producing the same workload footprint as the corresponding segment of the original workload; in other words, the synthetic workload exercised the lower-level function calls in the same way as the original. Furthermore, FileRunner generated a synthetic workload for each segment in the order in which the segments appeared in the trace, resulting in a final workload that mimicked the variation present in the original workload.

The results indicated that the methodology can create a workload with throughput within 10% of the original, and with operation latencies, with the exception of create latencies, within the allowable 10% difference and in some cases within the 15% maximum allowable difference. The work accurately modeled the I/O footprint: in some cases the difference was negligible, and in the worst case it was 2.49%.
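The tracefs output format and FileRunner's configuration format are not given in the abstract, so the sketch below only illustrates the chunking idea under assumed field names: a trace is split into fixed-length segments, and each segment is summarized into an operation mix that a generator could replay in order:

```python
from collections import Counter

# Hypothetical trace records: (timestamp_seconds, operation_name)
trace = [
    (0.01, "read"), (0.02, "read"), (0.05, "write"),
    (1.10, "create"), (1.30, "write"), (2.40, "read"),
]

CHUNK_SECONDS = 1.0  # chunks small enough to treat each one as uniform

def chunk_profiles(records, chunk_len):
    """Summarize each fixed-length chunk as an operation-count profile."""
    chunks = {}
    for ts, op in records:
        chunks.setdefault(int(ts // chunk_len), Counter())[op] += 1
    # One profile per chunk, in trace order, so replaying them in sequence
    # preserves the variation present in the original workload.
    return [chunks[i] for i in sorted(chunks)]

for i, profile in enumerate(chunk_profiles(trace, CHUNK_SECONDS)):
    print(f"chunk {i}: {dict(profile)}")
```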
14

Podpora historie a verzování v zlomekFS / History and Backup Support for zlomekFS

Wartiak, Rastislav January 2010 (has links)
zlomekFS is a distributed file system that supports disconnected operation using a local cache. During synchronization of local changes it offers an easy-to-use conflict resolution mechanism. Further improvements made it a file system with no specific kernel code, so it has good potential for future public use. As the content of this file system can be updated by many users, keeping a history of changes can be a useful feature. This thesis implements file versioning in zlomekFS, answering questions such as how to store and access the history. On top of the versioning, the possibility of consistent backup is introduced into the file system. The new functionality is derived from an analysis of other file systems with similar features and the selection of the most suitable approach for zlomekFS.
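The thesis's on-disk format is not described in the abstract. As a generic illustration of one way to keep file history, the sketch below snapshots a file's current contents into a per-directory version store before each overwrite; all paths and naming conventions here are assumptions, not zlomekFS's scheme:

```python
import os
import shutil
import time

def versioned_write(path: str, data: bytes, version_dir: str = ".versions") -> None:
    """Save the file's current contents as a timestamped version, then overwrite it."""
    if os.path.exists(path):
        vdir = os.path.join(os.path.dirname(path) or ".", version_dir)
        os.makedirs(vdir, exist_ok=True)
        # Nanosecond stamp keeps version names unique across rapid writes.
        stamp = time.time_ns()
        shutil.copy2(path, os.path.join(vdir, f"{os.path.basename(path)}.{stamp}"))
    with open(path, "wb") as f:
        f.write(data)

versioned_write("notes.txt", b"first draft\n")
versioned_write("notes.txt", b"second draft\n")  # first draft is kept in .versions/
```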
16

The Performance of a Linux NFS Implementation

Boumenot, Christopher M 20 May 2002 (has links)
NFS is the dominant network file system used to share files between hosts running UNIX-derived operating systems. At the onset of this research, it was found that the tested NFS implementations did not achieve data-writing throughput across a Gigabit Ethernet LAN commensurate with the throughput achieved by the same hosts and network for packet streams generated without NFS. A series of tests was conducted, varying many system parameters, to identify the bottleneck responsible for the large throughput ratio between non-NFS and NFS data transfers on high-speed networks. Ultimately it was found that processor, disk, and network performance are not the source of low NFS throughput; rather, it is caused by an avoidable NFS behavior whose effects worsen with increasing network latency.
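The comparison the thesis draws, raw network streaming versus writes through an NFS mount, can be approximated as below. The host name, port, and mount point are assumptions, and this does not reproduce the thesis's test rig:

```python
import os
import socket
import time

BLOCK = 64 * 1024
TOTAL = 256 * 1024 * 1024  # 256 MiB per test

def stream_throughput(host: str, port: int) -> float:
    """MiB/s for a raw TCP stream (needs a sink, e.g. `nc -lk <port> > /dev/null`)."""
    buf = b"\0" * BLOCK
    sent, start = 0, time.perf_counter()
    with socket.create_connection((host, port)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += BLOCK
    return sent / (1024 * 1024) / (time.perf_counter() - start)

def nfs_write_throughput(mount_path: str) -> float:
    """MiB/s for sequential writes to a file on an NFS mount."""
    buf = b"\0" * BLOCK
    written, start = 0, time.perf_counter()
    with open(os.path.join(mount_path, "bench.dat"), "wb") as f:
        while written < TOTAL:
            f.write(buf)
            written += BLOCK
        os.fsync(f.fileno())  # include the flush to the server in the timing
    return written / (1024 * 1024) / (time.perf_counter() - start)

print(f"raw TCP: {stream_throughput('fileserver', 5001):.1f} MiB/s")  # hypothetical host
print(f"NFS:     {nfs_write_throughput('/mnt/nfs'):.1f} MiB/s")       # hypothetical mount
```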
17

Storage management for large scale systems

Wang, Wenguang 15 December 2004 (has links)
Because of the slow access time of disk storage, storage management is crucial to the performance of many large-scale computer systems. This thesis studies performance issues in buffer cache management and disk layout management, two important components of storage management.

The buffer cache stores popular disk pages in memory to speed up access to them. Buffer cache management algorithms used in real systems often have many parameters that require careful hand-tuning to get good performance. A self-tuning algorithm is proposed to automatically tune the page-cleaning activity of the buffer cache management algorithm by monitoring the I/O activities of the buffer cache. This algorithm achieves performance comparable to the best manually tuned system.

The global data structure used by the buffer cache management algorithm is protected by a lock. Access to this lock can cause contention, which can significantly reduce system throughput in multi-processor systems. Current solutions that eliminate lock contention decrease the hit ratio of the buffer cache, which causes poor performance when the system is I/O-bound. A new approach, called the multi-region cache, is proposed. This approach eliminates lock contention, maintains the hit ratio of the buffer cache, and incurs little overhead. Moreover, it can be applied to most buffer cache management algorithms.

Disk layout management arranges the layout of pages on disks to improve disk I/O efficiency. The typical disk layout approach, called Overwrite, is optimized for sequential I/Os from a single file. Interleaved writes from multiple users can significantly decrease system throughput in large-scale systems using Overwrite. Although the Log-structured File System (LFS) is optimized for such workloads, its garbage-collection overhead can be expensive. In modern and future disks, because disk transfer bandwidth improves much faster than disk positioning time, LFS performs much better than Overwrite on most workloads, unless the disk is close to full. A new disk layout approach, called HyLog, is proposed. HyLog achieves performance comparable to the best of the existing disk layout approaches in most cases.
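A minimal sketch of the multi-region idea, under the assumption (not detailed in the abstract) that pages are hashed across independently locked LRU regions so that concurrent threads rarely contend on the same lock:

```python
import threading
from collections import OrderedDict

class MultiRegionCache:
    """LRU page cache split into regions, each protected by its own lock."""

    def __init__(self, capacity: int, regions: int = 8):
        self.regions = [
            {"lock": threading.Lock(), "lru": OrderedDict(), "cap": capacity // regions}
            for _ in range(regions)
        ]

    def _region(self, page_id: int) -> dict:
        # Hashing spreads pages over regions, so concurrent threads usually
        # take different locks instead of serializing on one global lock.
        return self.regions[hash(page_id) % len(self.regions)]

    def get(self, page_id: int):
        r = self._region(page_id)
        with r["lock"]:
            if page_id in r["lru"]:
                r["lru"].move_to_end(page_id)   # mark as most recently used
                return r["lru"][page_id]
            return None

    def put(self, page_id: int, page: bytes) -> None:
        r = self._region(page_id)
        with r["lock"]:
            r["lru"][page_id] = page
            r["lru"].move_to_end(page_id)
            if len(r["lru"]) > r["cap"]:
                r["lru"].popitem(last=False)    # evict the least recently used page

cache = MultiRegionCache(capacity=1024)
cache.put(42, b"page data")
print(cache.get(42))
```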
19

Performance Analysis of Relational Database over Distributed File Systems

Tsai, Ching-Tang 08 July 2011 (has links)
With the growth of the Internet, people use the network frequently, and many PC applications have moved to network-based environments: text processing, calendars, photo management, and even application development itself. Google is a company providing web services; its popular search engine and Gmail attract people with short response times and large amounts of data storage, and it charges businesses to place their advertisements. Facebook, another popular social network website, processes huge numbers of instant messages and social relationships between users. The power behind these services is the new technique of cloud computing, which sustains high-performance processing and short response times; its kernel components are distributed data storage and distributed data processing. Hadoop is a famous open-source framework for building a cloud distributed file system and distributed data analysis. Hadoop is suitable for batch and write-once-read-many applications, so currently only a few applications, such as pattern searching and log file analysis, have been implemented over it. However, almost all database applications still use relational databases. To port them to a cloud platform, it becomes necessary to run a relational database over HDFS. We therefore test FUSE-DFS, an interface that mounts HDFS into a system so that it can be used like a local file system. If FUSE-DFS performance can satisfy users' applications, it becomes easier to persuade people to port their applications to a cloud platform with minimal overhead.
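Once HDFS is mounted through FUSE-DFS it appears as an ordinary directory, so a relational database file can simply be placed there. The sketch below uses SQLite on an assumed mount point to time a simple query; the thesis's actual database and mount path are not given in the abstract, and whether the write path works at all depends on the mount supporting the random writes a database needs, which is exactly the kind of question the thesis's evaluation probes:

```python
import sqlite3
import time

MOUNT = "/mnt/hdfs"  # hypothetical FUSE-DFS mount point of an HDFS volume

# The database file lives on HDFS but is accessed through ordinary
# file-system calls, which FUSE-DFS translates into HDFS operations.
conn = sqlite3.connect(f"{MOUNT}/bench.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO items (v) VALUES (?)",
                 [(f"row-{i}",) for i in range(1000)])
conn.commit()

start = time.perf_counter()
count, = conn.execute("SELECT COUNT(*) FROM items").fetchone()
print(f"{count} rows scanned in {time.perf_counter() - start:.4f}s")
conn.close()
```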
20

Design and Implementation of Cloud Data Backup System with Load Balance Strategy

Tsai, Chia-ping 15 August 2012 (has links)
Fast-growing bandwidth has driven the development of cloud storage, and more and more resources are being put into it. In this thesis we propose a new cloud storage system consisting of a single main server and multiple data servers. The main server controls system-wide activities such as data server management; it also periodically communicates with each data server and collects its state. Data servers store data on local disks as Windows files. To respond to large numbers of data accesses, server selection must offer equalized performance, so we propose a server selection algorithm that uses different parameters to obtain performance metrics, enabling us to balance multiple resources from the server side. We design the new cloud storage system and implement the algorithm. In the upload experiment, the difference between the maximum and minimum free space when using our algorithm is less than 5 GB, whereas in random mode the free-space difference grows over time, reaching a maximum of 30 GB. In the mixed experiment, which adds downloads, our algorithm keeps the difference under 10 GB, while random mode behaves much as in the first experiment. Finally, our algorithm obtains 10% and 3% speedups in upload throughput in the upload and mixed experiments respectively, and a 10% speedup in download throughput in the mixed experiment.
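The exact metrics the algorithm weighs are not listed in the abstract. The sketch below illustrates the general idea with assumed weights over free space and current load, picking the data server with the best combined score:

```python
from dataclasses import dataclass

@dataclass
class DataServer:
    name: str
    free_gb: float     # remaining disk space reported to the main server
    load: float        # current utilization in [0, 1]

# Assumed weighting; the thesis's actual parameters are not given.
W_SPACE, W_LOAD = 0.6, 0.4

def select_server(servers: list[DataServer]) -> DataServer:
    """Pick the server with the most free space and the least load."""
    max_free = max(s.free_gb for s in servers) or 1.0  # avoid division by zero

    def score(s: DataServer) -> float:
        return W_SPACE * (s.free_gb / max_free) + W_LOAD * (1.0 - s.load)

    return max(servers, key=score)

servers = [
    DataServer("ds1", free_gb=120, load=0.7),
    DataServer("ds2", free_gb=80, load=0.2),
    DataServer("ds3", free_gb=100, load=0.4),
]
print(select_server(servers).name)  # balances free space against load
```

Selecting by a combined score, rather than free space alone, is what keeps both the free-space spread and the per-server throughput balanced as uploads and downloads mix.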
