About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Freiheitsgrade beim Einsatz Verteilter Disks / Degrees of freedom in the use of distributed disks

Lemke, Bastian. January 2008
Bachelor's thesis, University of Konstanz, 2008.
2

Verfahren zur redundanten und distributiven Datenverarbeitung in drahtlosen Sensornetzen / Methods for redundant and distributive data processing in wireless sensor networks

Coers, Alexander. January 2005
Also published as a doctoral dissertation, University of Duisburg-Essen, 2005.
3

ClusterRAID: architecture and prototype of a distributed fault-tolerant mass storage system for clusters

Wiebalck, Arne. Unknown Date
Doctoral dissertation, University of Heidelberg, 2005.
4

Proposta de um ambiente de simulação e aprendizado inteligente para RAID / A proposal for an intelligent simulation and learning environment for RAID

Lobato, Daniel Corrêa. 25 May 2000
The component with the worst performance usually limits the overall performance of a computing system, and processors and main memory have improved in performance much faster than secondary storage such as magnetic disks. In 1984, Johnson introduced the concept of fragmentation (striping), in which data is written across a disk array so that its fragments can be retrieved in parallel and therefore faster. The main problem with fragmentation is the reduced reliability of the array: if one disk fails, the data becomes inaccessible. In 1988, Patterson, Gibson and Katz proposed five ways of storing redundant information in the array to increase its reliability, which became known as RAID (Redundant Arrays of Independent Disks). Over the years, other redundancy schemes were created, making the taxonomy of the area more complex; moreover, changes to the array parameters lead to performance variations that are not always easy to understand at first. To make the taxonomy easier to grasp and to allow experiments on the array in search of better performance, this dissertation proposes an intelligent simulation and learning environment for RAID, in which the user can interact with several RAID models, or even create his or her own, to evaluate their performance in various situations; the environment also gives the user access to the knowledge of the area, acting as a tutor. The dissertation also presents a prototype of a magnetic disk simulator that can serve as the basis for developing a RAID simulator to be used by the environment.
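
A minimal sketch of the striping-plus-parity idea the abstract describes, reduced to a single dedicated parity disk (RAID 4; RAID 5 rotates the parity across the disks). The function names, stripe-unit size, and layout are illustrative assumptions, not code from the dissertation or its simulator:

```python
# Illustrative sketch of striping with XOR parity (RAID 4 style: a dedicated
# parity disk). Names, the stripe-unit size and the layout are assumptions,
# not code from the dissertation or its simulator.

def stripe_with_parity(data: bytes, n_data_disks: int, stripe_unit: int = 4):
    """Split `data` into stripe units across n_data_disks and compute a parity disk."""
    stripe_size = n_data_disks * stripe_unit
    padded = data + b"\x00" * (-len(data) % stripe_size)   # pad to full stripes
    disks = [bytearray() for _ in range(n_data_disks)]
    parity = bytearray()
    for offset in range(0, len(padded), stripe_size):
        units = [padded[offset + i * stripe_unit:offset + (i + 1) * stripe_unit]
                 for i in range(n_data_disks)]
        for disk, unit in zip(disks, units):
            disk.extend(unit)
        p = bytearray(stripe_unit)                          # byte-wise XOR parity
        for unit in units:
            for i, b in enumerate(unit):
                p[i] ^= b
        parity.extend(p)
    return [bytes(d) for d in disks], bytes(parity)


def rebuild_disk(disks, parity, failed_index):
    """Reconstruct the failed data disk by XOR-ing the parity with the survivors
    (the contents of disks[failed_index] are treated as lost and ignored)."""
    rebuilt = bytearray(parity)
    for idx, disk in enumerate(disks):
        if idx == failed_index:
            continue
        for i, b in enumerate(disk):
            rebuilt[i] ^= b
    return bytes(rebuilt)


if __name__ == "__main__":
    disks, parity = stripe_with_parity(b"dado gravado em uma matriz de discos", 4)
    assert rebuild_disk(disks, parity, failed_index=2) == disks[2]
```

Losing any one data disk can then be tolerated, because XOR-ing the parity with the surviving disks reconstructs the missing fragments.
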
5

Generating and Analyzing Synthetic Workloads using Iterative Distillation

Kurmas, Zachary Alan. 14 May 2004
The exponential growth in computing capability and use has produced a high demand for large, high-performance storage systems. Unfortunately, advances in storage system research have been limited by (1) a lack of evaluation workloads, and (2) a limited understanding of the interactions between workloads and storage systems. We have developed a tool, the Distiller, that helps address both limitations. Our thesis is as follows: Given a storage system and a workload for that system, one can automatically identify a set of workload characteristics that describes a set of synthetic workloads with the same performance as the workload they model. These representative synthetic workloads increase the number of available workloads with which storage systems can be evaluated. More importantly, the characteristics also identify those workload properties that affect disk array performance, thereby highlighting the interactions between workloads and storage systems. This dissertation presents the design and evaluation of the Distiller. Specifically, our contributions are as follows. (1) We demonstrate that the Distiller finds synthetic workloads with at most 10% error for six out of the eight workloads we tested. (2) We also find that all of the potential error metrics we use to compare workload performance have limitations. Additionally, although the internal threshold that determines which attributes the Distiller chooses has a small effect on the accuracy of the final synthetic workloads, it has a large effect on the Distiller's running time. Similarly, (3) we find that we can reduce the precision with which we measure attributes while only moderately reducing the resulting synthetic workload's accuracy. Finally, (4) we show how to use the information contained in the chosen attributes to predict the performance effects of modifying the storage system's prefetch length and stripe unit size.
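
A hedged sketch of the iterative-distillation loop the abstract describes: attributes are added greedily until a synthetic workload generated from them reproduces the measured performance within a threshold. All names and the greedy strategy are illustrative placeholders, not the Distiller's actual interface:

```python
# Illustrative outline of iterative workload distillation. The attribute set,
# the greedy selection strategy, and all function parameters are assumptions,
# not the Distiller's real design or API.

def distill(trace, candidate_attributes, generate_synthetic, replay, error_metric,
            threshold=0.10):
    """Greedily add workload attributes until the synthetic workload's measured
    performance is within `threshold` (e.g. 10%) of the original trace's."""
    target = replay(trace)                 # performance of the real workload
    chosen, remaining = [], list(candidate_attributes)
    while remaining:
        best_attr, best_err = None, float("inf")
        for attr in remaining:             # try each remaining attribute
            synthetic = generate_synthetic(trace, chosen + [attr])
            err = error_metric(replay(synthetic), target)
            if err < best_err:
                best_attr, best_err = attr, err
        chosen.append(best_attr)
        remaining.remove(best_attr)
        if best_err <= threshold:          # "close enough" -- stop distilling
            break
    return chosen
```

Here `replay` stands for running a workload against the storage system (or a simulator) and returning a performance measure such as mean response time, and `generate_synthetic` for sampling a trace that preserves only the chosen characteristics.
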
6

Disková pole RAID a jejich budoucnost v éře SSD / RAID disk arrays and their future in the SSD era

Sládek, Petr. January 2012
The thesis examines the use of emerging solid-state drives in disk arrays. The advent of SSDs caused a small revolution in data storage, because the performance of hard drives had been growing slowly compared to other PC components; however, their entirely different principle of operation could mean compatibility problems between SSDs and related technologies such as RAID. The thesis therefore analyzes the relevant technologies, mainly HDD, SSD and RAID, drawing on literature, articles and other appropriate sources. A further objective is to determine how suitable SSDs are for use in disk arrays, since low-performance RAID controllers or the drives' different principles of operation could limit their efficiency; this question is answered by subjecting selected types of arrays to synthetic and practical performance tests. The final goal is a financial analysis of the tested solutions when used as shared file storage: remote access to data is now part of a wide range of job roles, and slow storage can mean inefficient use of working time and therefore unnecessary costs. Tests of more complex disk array configurations built on solid-state drives are currently hard to find, so the thesis can also be useful to companies that use file servers to share user data; based on the cost analysis, a company can decide which type of storage best suits its purpose.
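
A minimal sketch of the kind of synthetic test the thesis applies to the arrays: sequential versus random read throughput against a large test file on the array. The path and sizes are assumptions, and a real evaluation would bypass the page cache (O_DIRECT) or use a dedicated tool such as fio; this only shows the shape of the measurement:

```python
# Rough synthetic read benchmark in the spirit of the tests described above.
# Paths and sizes are illustrative; results are indicative only because the OS
# page cache is not bypassed (a real test would use O_DIRECT or a tool like fio).
import os
import random
import time

TEST_FILE = "/mnt/array/testfile.bin"    # assumed path on the array under test
BLOCK_SIZE = 1 << 20                     # 1 MiB requests
NUM_RANDOM_READS = 256

def sequential_read_mbps(path: str, block_size: int = BLOCK_SIZE) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.perf_counter() - start) / (1 << 20)

def random_read_mbps(path: str, block_size: int = BLOCK_SIZE,
                     n_reads: int = NUM_RANDOM_READS) -> float:
    size = os.path.getsize(path)
    offsets = [random.randrange(0, max(1, size - block_size)) for _ in range(n_reads)]
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(block_size))
    return total / (time.perf_counter() - start) / (1 << 20)

if __name__ == "__main__":
    print(f"sequential: {sequential_read_mbps(TEST_FILE):.1f} MiB/s")
    print(f"random:     {random_read_mbps(TEST_FILE):.1f} MiB/s")
```

On an HDD-based array the random figure typically collapses relative to the sequential one, while SSD-based arrays keep the two much closer together, which is exactly the contrast such tests are meant to expose.
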
7

RANGE UPGRADE FOR DATA RECORDING AND REPRODUCTION

Nystrom, Ingemar; Gatton, Tim. October 2000
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / Flexible data multiplexing that supports output devices from low speed (4 Mbps) to very high speed (networks and recording systems up to 480 Mbps), along with data network formatting, can greatly enhance the results of range upgrading.
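
As a toy illustration only of the multiplexing idea mentioned above: interleaving several lower-rate channels into one tagged output stream and splitting it back apart. The frame layout (1-byte channel ID, 2-byte length) is invented for this sketch and is not the format used in the paper:

```python
# Toy illustration of channel multiplexing into tagged frames -- the general
# idea behind combining low-rate sources into one high-rate recording or
# network stream. The frame layout here is an assumption for this sketch only.
import struct

def multiplex(channels):
    """Round-robin interleave byte chunks from several channels into one stream.
    `channels` maps a channel ID (0-255) to an iterable of bytes chunks."""
    frames = bytearray()
    iters = {cid: iter(chunks) for cid, chunks in channels.items()}
    while iters:
        for cid in list(iters):
            try:
                payload = next(iters[cid])
            except StopIteration:
                del iters[cid]
                continue
            frames += struct.pack(">BH", cid, len(payload)) + payload
    return bytes(frames)

def demultiplex(stream):
    """Split a multiplexed stream back into per-channel byte strings."""
    out, pos = {}, 0
    while pos < len(stream):
        cid, length = struct.unpack_from(">BH", stream, pos)
        pos += 3
        out.setdefault(cid, bytearray()).extend(stream[pos:pos + length])
        pos += length
    return {cid: bytes(data) for cid, data in out.items()}

if __name__ == "__main__":
    muxed = multiplex({1: [b"telemetry-a"], 2: [b"telemetry-b", b"-more"]})
    assert demultiplex(muxed) == {1: b"telemetry-a", 2: b"telemetry-b-more"}
```
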
8

Odhad výkonnosti diskových polí s využitím prediktivní analytiky / Estimating performance of disk arrays using predictive analytics

Vlha, Matej. January 2017
The thesis focuses on disk arrays. The goal is to design test scenarios for measuring disk array performance and to use predictive analytics tools to train a model that predicts a selected performance parameter from the measured data set. The implemented web application demonstrates the functionality of the trained model and shows an estimate of the disk array's performance.
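
A hedged sketch of the kind of model training the abstract describes: measured configuration and workload features predicting one performance parameter (IOPS is used here). The feature set, CSV layout, and choice of a random forest are assumptions for illustration, not the thesis's actual setup:

```python
# Illustrative training of a performance-prediction model (assumed feature set
# and CSV layout; the thesis's actual features and model may differ).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical columns: block_size_kb, queue_depth, read_ratio, n_disks, raid_level, iops
data = np.loadtxt("measurements.csv", delimiter=",", skiprows=1)   # assumed file
X, y = data[:, :-1], data[:, -1]          # last column = measured IOPS (target)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAPE on held-out measurements:", mean_absolute_percentage_error(y_test, pred))

# Predict IOPS for a new, unmeasured configuration (values are made up).
new_config = np.array([[64, 32, 0.7, 8, 5]])   # 64 KiB, QD32, 70% reads, 8 disks, RAID 5
print("predicted IOPS:", model.predict(new_config)[0])
```

The trained model can then back a web front end that returns a performance estimate for configurations that were never measured directly.
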
