11

Kvantitativní analýza schémat zálohování dat / Quantitative analysis of data backup schemes

Süss, Ivo January 2019 (has links)
The aim of this master's thesis was to create a program for the quantitative analysis of data backup schemes and, with its help, to identify and analyze the properties of commonly used schemes under different loads, and then, based on the obtained results, to compile a set of principles for choosing the optimal data backup scheme. The program was created in Matlab. It can be used to determine the parameters of individual backup schemes: the parameters C (total backup volume) and E (mean volume of backups read during recovery), the backup sizes for individual days, the workload of individual storage devices, the cost of each storage device and of the overall backup scheme, and the total amount of data written per storage device per time slot. The thesis concludes by defining a procedure for choosing the optimal backup scheme.
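The parameters C and E described above can be illustrated with a small sketch. This is not the thesis's Matlab program; it is a hypothetical Python model of one simple scheme (weekly full backup plus daily incrementals with a constant daily change rate), where C is the total volume written over one cycle and E is the mean volume that must be read back to recover, averaged over the day of failure:

```python
# Hypothetical sketch (not from the thesis): estimating C (total backup
# volume) and E (mean volume read during recovery) for a simple
# weekly-full / daily-incremental scheme with a constant daily change rate.

def scheme_volumes(full_size, daily_change, cycle_days=7):
    """Return (C, E) for one backup cycle.

    C: total volume written over the cycle (one full + incrementals).
    E: mean volume that must be read to recover, averaged over which
       day within the cycle the failure occurs on.
    """
    backups = [full_size] + [daily_change] * (cycle_days - 1)
    C = sum(backups)
    # Recovering on day d requires the full backup plus d incrementals.
    recovery_volumes = [full_size + d * daily_change for d in range(cycle_days)]
    E = sum(recovery_volumes) / cycle_days
    return C, E

C, E = scheme_volumes(full_size=100.0, daily_change=5.0)
print(C, E)  # C=130.0, E=115.0
```

Comparing (C, E) pairs across candidate schemes is one way to frame the trade-off the thesis analyzes: schemes with more frequent full backups lower E (faster recovery) at the cost of a higher C (more storage written).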
12

Metodika návrhu optimálního způsobu zálohování velkých objemů dat / A guide to designing optimal method of backup for big volumes of data

Bartoňová, Veronika January 2012 (has links)
This diploma thesis deals with backing up large volumes of data. Data backup is an often overlooked area of information technology, yet data can be lost through a trivial user error or the failure of any component. The thesis first covers backup theory: the archive bit and its behavior under the various backup types (full, incremental, differential, or combinations thereof), the duration and frequency of backups, and the point of ultimate recovery. It then describes the Round-Robin, GFS, and Tower of Hanoi rotation schemes, explaining their principles with graphical rotation diagrams. The chapter on backup strategy describes how to choose a suitable strategy with respect to technical and economic parameters; the impact analysis explained in the same chapter identifies the critical moments in data recovery. Selecting the optimal strategy requires considering not only the total volume of the backed-up data but also the size of the backup window and the storage location. The chapter on storage media surveys the backup media available on the market and their technical parameters. The guide to backing up large volumes of data then presents a backup plan with the inputs needed for its actual implementation; the methodology emphasizes regular backups and verification of their location. A practical demonstration shows that the Tower of Hanoi rotation scheme requires among the fewest backup media. The thesis also includes a methodology for backing up small volumes of data.
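The Tower of Hanoi rotation mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis: in this scheme tape 0 is reused every 2nd day, tape 1 every 4th, tape 2 every 8th, and so on (the "ruler sequence"), which is why it needs so few media while retaining exponentially older restore points:

```python
# Hypothetical sketch (not from the thesis): generating the media rotation
# order for the Tower of Hanoi backup scheme. Tape 0 is reused every 2nd
# day, tape 1 every 4th, tape 2 every 8th, etc.; the last tape absorbs
# the remaining slots.

def hanoi_rotation(n_tapes, n_days):
    order = []
    for day in range(1, n_days + 1):
        # Index of the lowest set bit of the day number, capped at the
        # last available tape.
        tape = min((day & -day).bit_length() - 1, n_tapes - 1)
        order.append(tape)
    return order

print(hanoi_rotation(4, 16))
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 3]
```

With 4 tapes the cycle covers 16 days, so each added tape roughly doubles the retention window — consistent with the thesis's observation that Tower of Hanoi needs among the fewest media.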
13

Applicability of satellite and NWP precipitation for flood modeling and forecasting in transboundary Chenab River Basin, Pakistan

Ahmed, Ehtesham 11 April 2024 (has links)
This research aimed to evaluate the use of satellite precipitation products (SPPs) and Numerical Weather Prediction (NWP) precipitation for improved hydrologic simulation and flood forecasting in the transboundary Chenab River Basin (CRB) in Pakistan. The research was divided into three parts. In the first part, two well-known SPPs, i.e., Global Precipitation Measurement (GPM) IMERG-F v6 and Tropical Rainfall Measuring Mission (TRMM) 3B42 v7, were incorporated into a semi-distributed hydrological model, the Soil and Water Assessment Tool (SWAT), to assess the daily and monthly runoff pattern of the Chenab River at the Marala Barrage gauging site in Pakistan. The results exhibit a higher correlation between observed and simulated discharges for monthly simulations than for daily simulations. Moreover, IMERG-F proved superior to 3B42, showing higher R2, higher Nash-Sutcliffe efficiency (NSE), and lower percent bias (PBIAS) at both monthly and daily timescales. In the second part, the three latest half-hourly (HH) and daily (D) SPPs, i.e., IMERG-E, IMERG-L, and IMERG-F, were evaluated for daily and monthly flow simulations in the SWAT model. The study revealed that monthly flow simulation outperforms daily flow simulation for all sub-daily and daily SPP-based models. IMERG-HHF and IMERG-DF yield the best performance among the latency levels, and the IMERG-HHF-based model has a markedly higher daily correlation coefficient (R) and lower daily root mean square error (RMSE) than IMERG-DF. IMERG-HHF also displays the lowest PBIAS for daily and monthly flow validation and relatively higher R2 and NSE than any other model. Moreover, the sub-daily IMERG-based models outperformed the daily IMERG-based models in all calibration and validation scenarios.
The IMERG-DL-based model performs worst among the SPPs in daily and monthly flow validation, with low R2, low NSE, and high PBIAS; additionally, IMERG-HHE outperformed IMERG-HHL. In the third and last part of this research, coupled hydro-meteorological precipitation information was used to forecast the 2016 flood event in the Chenab River Basin. The gauge-calibrated SPP Global Satellite Mapping of Precipitation (GSMaP_Gauge) was selected to calibrate the Integrated Flood Analysis System (IFAS) model for the 2016 flood event. Precipitation from the Global Forecast System (GFS) NWP, at nine different lead times of up to 4 days, was then used to drive the calibrated IFAS model. The study revealed that hydrologic simulations in IFAS driven by global GFS forecasts were unable to predict the flood peak at any lead time. The Weather Research and Forecasting (WRF) model was therefore used to downscale the precipitation forecasts with one-way and two-way nesting approaches. The simulated hydrographs produced by the IFAS model from two-way WRF nesting exhibited superior performance across lead times, with the highest R2 and NSE and the lowest PBIAS, compared with one-way nesting. It was concluded that combining the GFS forecast with two-way WRF nesting can provide precipitation predictions of sufficient quality to simulate flood hydrographs with a remarkable lead time of 96 h in coupled hydro-meteorological flow simulation.
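The model comparisons above all rest on three goodness-of-fit statistics: R (correlation), NSE, and PBIAS. A minimal sketch of how these are computed for an observed vs. simulated discharge series follows; this is a generic illustration, not the thesis's code, and it assumes the sim-minus-obs sign convention for PBIAS (some references use obs-minus-sim, which flips the sign):

```python
# Hypothetical sketch: the goodness-of-fit metrics used throughout this
# evaluation -- Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS,
# sim-minus-obs convention assumed), and correlation coefficient (R).

def nse(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is perfect."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias; 0 is perfect, positive means over-simulation here."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)

def corr(obs, sim):
    """Pearson correlation coefficient R."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs) ** 0.5
    vs = sum((s - ms) ** 2 for s in sim) ** 0.5
    return cov / (vo * vs)

obs = [10.0, 20.0, 30.0, 40.0]  # toy discharge series
sim = [12.0, 18.0, 33.0, 39.0]
print(round(nse(obs, sim), 3), round(pbias(obs, sim), 1), round(corr(obs, sim), 3))
# 0.964 2.0 0.983
```

Ranking products by "higher R2 and NSE, lower PBIAS", as done above, amounts to preferring simulations that track both the variance and the total volume of the observed hydrograph.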
14

Sistemas de arquivos paralelos: alternativas para a redução do gargalo no acesso ao sistema de arquivos / Parallel File Systems: alternatives to reduce the bottleneck in accessing the file system

Carvalho, Roberto Pires de 23 September 2005 (has links)
In recent years, the processing power and network speed of low-cost computers have improved far faster than the performance of data storage disks. As a result, many applications struggle to make full use of their processors, which must wait for data to arrive before proceeding. A popular way to solve this problem is to adopt a parallel file system, which uses the speed of the local network, together with the resources of each machine, to compensate for the performance limits of any single disk. In this study, we analyze several parallel and distributed file systems, detailing the most interesting and important ones. Finally, we show that using a parallel file system can be more efficient and advantageous than using a conventional file system, even for a single client.
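The core mechanism this thesis evaluates — spreading a file across machines so no single disk is the bottleneck — can be sketched abstractly. This is a hypothetical toy model, not code from any of the file systems surveyed: it stripes a byte stream round-robin across server block lists and reassembles it, the way a parallel file system distributes stripe units across I/O servers:

```python
# Hypothetical sketch (not from the thesis): round-robin striping of a
# file's blocks across several storage servers, the basic layout that
# lets a parallel file system serve one client from many disks at once.

STRIPE_SIZE = 4  # bytes per stripe unit (tiny, for illustration)

def stripe(data, n_servers):
    """Distribute data round-robin across n_servers block lists."""
    servers = [[] for _ in range(n_servers)]
    for i in range(0, len(data), STRIPE_SIZE):
        servers[(i // STRIPE_SIZE) % n_servers].append(data[i:i + STRIPE_SIZE])
    return servers

def gather(servers):
    """Reassemble the original byte stream from the striped blocks."""
    out = []
    blocks = [list(s) for s in servers]
    i = 0
    while any(blocks):
        if blocks[i % len(blocks)]:
            out.append(blocks[i % len(blocks)].pop(0))
        i += 1
    return b"".join(out)

data = b"parallel file systems stripe data"
servers = stripe(data, 3)
assert gather(servers) == data
```

In a real system each server list is a disk on a different node, so a sequential read of the file becomes n_servers concurrent disk reads bounded by the network, which is the thesis's argument for why even a single client can benefit.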
16

Optimizations In Storage Area Networks And Direct Attached Storage

Dharmadeep, M C 02 1900 (has links)
The thesis consists of three parts. In the first part, we introduce the notion of device-cache-aware schedulers. Modern disk subsystems have many megabytes of memory for purposes such as prefetching and caching, yet current disk scheduling algorithms make decisions oblivious of the underlying device cache algorithms. In this thesis, we propose a scheduler architecture that is aware of the underlying device cache, and we describe how the device cache parameters can be automatically deduced and incorporated into the scheduling algorithm. We consider only adaptive caching algorithms, as modern high-end disk subsystems are configured to use such algorithms by default. We implemented a prototype based on the Linux anticipatory scheduler and observed, compared with the unmodified anticipatory scheduler, up to 3 times improvement in query execution times with the Benchw benchmark and up to 10 percent improvement with the Postmark benchmark. The second part deals with implementing cooperative caching for the Red Hat Global File System (GFS), a clustered shared-disk file system. Coordination between multiple accesses goes through a lock manager: on a read, a lock on the inode is acquired in shared mode and the data is read from the disk; for a write, an exclusive lock on the inode is acquired and data is written to the disk, which requires all nodes holding the lock to write their dirty buffers/pages to disk and invalidate all the related buffers/pages. A DLM (Distributed Lock Manager) is a module that implements the functions of a lock manager. GFS's DLM has some support for range locks, although GFS does not use it. While data sourced from a memory copy is likely to have lower latency, GFS currently reads from the shared disk after acquiring a lock (as in other designs such as IBM's GPFS) rather than from remote memory that recently held the correct contents.
The difficulties are mainly due to the circular relationships that can arise between GFS and the generic DLM architecture when integrating the DLM locking framework with cooperative caching. For example, the page/buffer cache should be accessible from the DLM, and yet the DLM's generality has to be preserved. The symmetric nature of the DLM (including the SMP concurrency model) makes it even more difficult to understand and integrate cooperative caching into it (note that GPFS has an asymmetric design). In this thesis, we describe the design of a cooperative caching scheme in GFS. To make it more effective, we also introduce changes to the locking protocol and the DLM to handle range locks more efficiently. Experiments with micro-benchmarks on our prototype implementation reveal that reading from a remote node over gigabit Ethernet can be up to 8 times faster than reading from an enterprise-class SCSI disk for random disk reads. Our contributions are an integrated design for cooperative caching and the lock manager in GFS, a novel method for interval searches, and a determination of when sequential reads from remote memory perform better than sequential reads from disk. The third part deals with selecting a primary network partition in a clustered shared-disk system when node or network failures occur. Clustered shared-disk file systems like GFS and GPFS use methods that can fail in the case of multiple network partitions and also in the case of a 2-node cluster. In this thesis, we give an algorithm for fault-tolerant proactive leader election in asynchronous shared-memory systems, together with its formal verification. Roughly speaking, a leader election algorithm is proactive if it can tolerate failure of nodes even after a leader is elected, and (stable) leader election happens periodically.
This is needed in systems where a leader is required after every failure to ensure the availability of the system, and where there may be no explicit events, such as messages, in the (shared-memory) system. Previous algorithms like Disk Paxos are not proactive. In our model, individual nodes can fail and reincarnate at any point in time. Each node has a counter that is incremented every period, and the period is the same across all nodes (modulo a maximum drift); different nodes can be in different epochs at the same time. Our algorithm ensures that there can be at most one leader per epoch, so if the counter values of some set of nodes match, there can be at most one leader among them. If the nodes satisfy certain timeliness constraints, then the leader for the epoch with the highest counter also becomes the leader for the next epoch (the stability property). Our algorithm uses shared memory proportional to the number of processes, which is the best possible. We also show how our protocol can be used in clustered shared-disk systems to select a primary network partition. We used the state-machine approach to represent our protocol in the Isabelle/HOL logic system and proved the safety property of the protocol.
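The key safety property described above — at most one leader per epoch — can be illustrated with a toy model. This is not the thesis's protocol (which works over asynchronous shared memory and is verified in Isabelle/HOL); it is a hypothetical sketch in which a per-epoch claim slot, guarded by a lock standing in for an atomic compare-and-swap, guarantees that concurrent claimants in the same epoch all observe the same single winner:

```python
# Hypothetical sketch (not the thesis protocol): enforcing "at most one
# leader per epoch" with a per-epoch atomic claim slot. A threading.Lock
# stands in for the atomic operations a shared-memory protocol would use.

import threading

class EpochLeaderElection:
    def __init__(self):
        self._claims = {}              # epoch -> node id of the unique leader
        self._lock = threading.Lock()  # stand-in for an atomic CAS

    def try_claim(self, epoch, node_id):
        """Attempt to become leader for `epoch`; return the winner's id."""
        with self._lock:
            # First claimant wins; later claimants in the same epoch lose
            # but learn who the leader is.
            self._claims.setdefault(epoch, node_id)
            return self._claims[epoch]

election = EpochLeaderElection()
winners = set()
threads = [threading.Thread(
    target=lambda n: winners.add(election.try_claim(epoch=7, node_id=n)),
    args=(n,)) for n in range(5)]
for t in threads: t.start()
for t in threads: t.join()
assert len(winners) == 1  # all five nodes agree on one leader for epoch 7
```

A fresh epoch opens a fresh slot, so a node that fails and reincarnates can contend again in a later epoch — mirroring the thesis's model of periodic re-election after failures, though without its timeliness and stability machinery.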
