11 |
Distributed storage modeling in Soap Creek for flood control and agricultural practices. Wunsch, Matthew John, 01 May 2013.
In 1988, the counties of Appanoose, Davis, Monroe, and Wapello created the Soap Creek Watershed Board. This group put in place a plan to fund and construct 154 farm ponds to provide water for agricultural practices as well as flood protection for residents of the Soap Creek watershed. Through collaborative efforts and funding from federal, state, and local sources, 132 ponds have been constructed to date.
Currently there is no stream monitoring in place in the watershed to observe stream conditions, so there are no recorded data on the benefits of the constructed projects or on the reduction in flood impacts in the basin. With funding from the Iowa Watershed Projects (IWP) through the IIHR - Hydroscience & Engineering lab, a lumped-parameter surface water model was created to quantify the benefits of the constructed projects. Using detailed LiDAR data, a Hydrologic Engineering Center-Hydrologic Modeling System (HEC-HMS) model was created with the Arc Hydro and Arc-GeoHMS tools in ArcGIS. Detailed LiDAR, SSURGO soil data, and land cover data were used to derive the model parameters. Several design and historical storms were modeled to quantify the benefits in terms of peak-flow reduction and the volume of water stored behind the projects.
|
12 |
Implementing Distributed Storage System by Network Coding in Presence of Link Failure. Chareonvisal, Tanakorn, January 2012.
The growth of multimedia applications such as video and voice over IP, social networks, and email places ever higher demands on server storage and network bandwidth, and there is concern that existing resources may not be able to meet these demands reliably. Network coding has been introduced as a way to improve distributed storage systems. This thesis proposes ways to improve a distributed storage system, in particular by increasing the chance of recovering data when a storage node or a network link fails. We study the concept of network coding in distributed storage systems, starting from the simplest scheme, replication, and moving on to more complex schemes such as erasure coding. We implement these concepts in a test bed and measure performance in terms of the probability of successful download and repair. We also compare the probability of reconstructing the original data under the minimum storage regenerating (MSR) and minimum bandwidth regenerating (MBR) approaches, and we increase the field size to raise the probability of success. Finally, link failures are added to the test bed to measure network reliability. The results show that using maximum distance separable (MDS) codes and increasing the field size improve network performance, and also improve reliability when a link fails during the repair process.
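The trade-off between replication and erasure coding can be made concrete with a small calculation. The sketch below is not taken from the thesis; it simply computes, under an assumed independent per-node failure probability, the chance that an object stored with an (n, k) maximum distance separable code remains recoverable (any k of the n nodes suffice), with 3-way replication as the special case (3, 1).

```python
from math import comb

def mds_survival(n, k, p_fail):
    """Probability that an (n, k) MDS-coded object remains recoverable:
    at least k of the n storage nodes must survive."""
    p_live = 1.0 - p_fail
    return sum(comb(n, m) * p_live**m * p_fail**(n - m) for m in range(k, n + 1))

# 3-way replication is the (3, 1) case: any single surviving copy suffices.
p = 0.05  # assumed independent per-node failure probability (illustrative)
print("3-way replication:", mds_survival(3, 1, p))   # ~0.999875, 3x storage overhead
print("(6, 3) MDS code:  ", mds_survival(6, 3, p))   # slightly higher, only 2x overhead
```

The (6, 3) code matches or beats triple replication in durability while storing only twice the original data, which is the kind of gain the thesis measures experimentally.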
|
13 |
Exploitation du contenu pour l'optimisation du stockage distribué / Leveraging content properties to optimize distributed storage systems. Kloudas, Konstantinos, 06 March 2013.
Cloud service providers, social networks, and data-management companies are witnessing a tremendous increase in the amount of data they receive every day. All this data creates new opportunities to expand human knowledge in fields like healthcare, urban planning, and human behavior, and to improve offered services such as search and recommendation. It is not by accident that many academics, as well as the public media, refer to our era as the "Big Data" era. But these huge opportunities come with the requirement for better data-management systems that, on one hand, can safely accommodate this huge and constantly increasing volume of data and, on the other, serve it in a timely and useful manner so that applications can benefit from processing it. This document focuses on these two challenges that come with "Big Data". In more detail, we study (i) backup storage systems as a means to safeguard data against a number of factors that may render them unavailable, and (ii) data placement strategies on geographically distributed storage systems, with the goal of reducing user-perceived latencies while using network and storage resources efficiently. Throughout our study, data are placed at the centre of our design choices as we try to leverage content properties for both placement and efficient storage.
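As a rough illustration of latency-aware placement (not the thesis' strategy), the sketch below greedily assigns each object to the data centre that minimizes request-weighted latency for the users who access it, subject to a simple capacity limit. The data centres, latencies, and request counts are invented for illustration.

```python
def place_objects(objects, datacenters, latency, capacity):
    """Greedy latency-aware placement.

    objects:      {obj_id: {region: request_count}} access pattern per object
    datacenters:  list of data-centre names
    latency:      {(region, dc): round-trip latency in ms}
    capacity:     {dc: max number of objects}  (a stand-in for real capacity)
    Places one object at a time, most-requested first; no global optimization.
    """
    load = {dc: 0 for dc in datacenters}
    placement = {}
    for obj, accesses in sorted(objects.items(),
                                key=lambda kv: -sum(kv[1].values())):
        candidates = [dc for dc in datacenters if load[dc] < capacity[dc]]
        best = min(candidates,
                   key=lambda dc: sum(cnt * latency[(region, dc)]
                                      for region, cnt in accesses.items()))
        placement[obj] = best
        load[best] += 1
    return placement

# Toy example with two data centres and two user regions (all values invented).
dcs = ["eu-west", "us-east"]
lat = {("EU", "eu-west"): 20, ("EU", "us-east"): 95,
       ("US", "eu-west"): 95, ("US", "us-east"): 25}
objs = {"video-a": {"EU": 900, "US": 100}, "video-b": {"US": 800, "EU": 50}}
print(place_objects(objs, dcs, lat, {"eu-west": 1, "us-east": 1}))
```

A real system, as the thesis argues, would also exploit content properties (popularity, similarity, compressibility) rather than request counts alone.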
|
14 |
The effects of increased infiltration and distributed storage on reducing peak discharges in an agricultural Iowa watershed: the Middle Raccoon River. Klingner, William, 01 May 2014.
The devastating floods throughout Iowa in 2008 caused homes to be lost, people to be displaced, and billions of dollars in economic damage. This left state officials pondering how to limit the damage of large-magnitude floods in the future. From the legislative sessions following this tragedy came the Iowa Flood Center and funding through the Department of Housing and Urban Development (among others) to begin the Iowa Watersheds Project. The project was tasked with the planning, implementation, and evaluation of watershed projects to lessen the severity and frequency of flooding in Iowa. One test watershed studied was the Middle Raccoon River watershed in west-central Iowa.
To study the impacts of basin-wide flood mitigation strategies on the Middle Raccoon River watershed, the hydrologic modeling software HEC-HMS was used in conjunction with the geographic analysis software ArcGIS. A model was developed and calibrated to best represent the observed hydrologic response at the USGS stream gages located at Bayard, IA and Panora, IA. Once complete, a series of flood mitigation techniques was applied to the watershed model and run with the 10-, 25-, 50-, and 100-year SCS design storms. These techniques include increasing infiltration by modifying land use and applying a distributed storage system (ponds). Both practices are shown to reduce peak discharge by 4 to 56 percent, depending on the location in the watershed, the severity of the design storm, and the extent of the flood mitigation technique.
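To make the infiltration mechanism concrete, the sketch below applies the standard SCS curve number runoff equation, the loss method commonly paired with SCS design storms in HEC-HMS. The curve numbers and rainfall depth are illustrative assumptions, not the values calibrated in this study; lowering the curve number (for example, by converting row crops to land uses with better infiltration) reduces the runoff depth that drives the peak discharge.

```python
def scs_runoff_depth(precip_in, curve_number):
    """SCS curve number method: runoff depth Q (inches) for storm depth P (inches).

    S  = 1000 / CN - 10            potential maximum retention (inches)
    Ia = 0.2 * S                   initial abstraction (standard assumption)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 1000.0 / curve_number - 10.0
    ia = 0.2 * s
    if precip_in <= ia:
        return 0.0
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

# Illustrative 24-hour design storm of 5 inches with two assumed land-use scenarios.
p = 5.0
scenarios = [("baseline row crop (assumed CN=83)", 83),
             ("improved infiltration (assumed CN=74)", 74)]
for label, cn in scenarios:
    print(f"{label}: runoff = {scs_runoff_depth(p, cn):.2f} in")
```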
Although research describing the effects of distributed storage and increased infiltration already exists, this study details the process by which these effects can be modeled in a heavily agricultural Iowa watershed using a simplified lumped-parameter model (HEC-HMS). With recent major flooding events in Iowa, the methods and tools in this report will be valuable for predicting the effectiveness of flood projects prior to construction.
|
15 |
Engineering and legal aspects of a distributed storage flood mitigation system in Iowa. Baxter, Travis, 01 December 2011.
This document presents a sketch of the engineering and legal considerations necessary to implement a distributed storage flood mitigation system in Iowa. It first presents the results of a simulation done to assess the advantages of active storage reservoirs over passive reservoirs for flood mitigation. Next, it considers how forecasts improve the operation of a single reservoir in preventing floods. After demonstrating the effectiveness of accurate forecasts on a single active storage reservoir, this thesis moves on to a discussion of distributed storage, with the idea that the advantages of active reservoirs with accurate forecasting could be applied to a distributed storage system. The analysis of distributed storage begins with a determination of suitable locations for reservoirs in the Clear Creek Watershed, near Coralville, Iowa, using two separate algorithms. The first algorithm selected reservoirs based on the highest average reservoir depth, while the second located reservoirs to maximize the storage in two specific travel bands within the watershed. This paper also discusses the results of a land cover analysis on the reservoirs, determining that, based on the land cover inundated, several reservoirs would cause too much damage to be practical. The ultimate goal of a distributed storage system is to use the reservoirs to protect an urban area from significant flood damage. For this thesis, the Clear Creek data were extrapolated to the Cedar River basin to evaluate the feasibility, and gain a rough approximation, of the requirements for a distributed storage system to protect Cedar Rapids. Discussion then centers on an approximation of the distributed storage system that could have prevented the catastrophic Flood of 2008 in Cedar Rapids. There is significant potential for a distributed storage system to be a cost-effective way of protecting Cedar Rapids from future flooding on the scale of the Flood of 2008. However, more analysis is needed to determine the costs and benefits of a distributed storage system in the Cedar River basin more accurately. This paper also recommends that a large-scale distributed storage system be controlled by a new entity created within the Iowa Department of Natural Resources, while a smaller distributed storage system could be managed by a soil and water conservation subdistrict. Iowa allows for condemnation of the land needed for the gate structures and the flowage easements necessary to build and operate a distributed storage system. Finally, this paper discusses the environmental law concerns with a distributed storage system, particularly the Clean Water Act requirement for a National Pollutant Discharge Elimination System permit.
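The first siting algorithm (ranking by average reservoir depth) can be illustrated with a minimal sketch. This is not the thesis' implementation: it simply ranks invented candidate sites by average pool depth (storage volume divided by inundated area) and drops sites whose inundated area contains too much sensitive land cover, echoing the land cover screening described above.

```python
def select_reservoirs(candidates, max_sites, max_sensitive_fraction=0.1):
    """Illustrative depth-based siting rule: rank candidate reservoir sites by
    average pool depth (storage volume / inundated area), skip sites whose
    inundated area contains too much sensitive land cover (homes, roads,
    timber), and keep the top max_sites.

    candidates: list of dicts with keys
        'name', 'storage_acre_ft', 'area_acres', 'sensitive_fraction'
    """
    viable = [c for c in candidates
              if c["sensitive_fraction"] <= max_sensitive_fraction]
    ranked = sorted(viable,
                    key=lambda c: c["storage_acre_ft"] / c["area_acres"],
                    reverse=True)
    return ranked[:max_sites]

# Invented candidate sites for illustration only.
sites = [
    {"name": "site-01", "storage_acre_ft": 480, "area_acres": 60, "sensitive_fraction": 0.02},
    {"name": "site-02", "storage_acre_ft": 900, "area_acres": 150, "sensitive_fraction": 0.25},
    {"name": "site-03", "storage_acre_ft": 300, "area_acres": 30, "sensitive_fraction": 0.05},
]
for s in select_reservoirs(sites, max_sites=2):
    print(s["name"], round(s["storage_acre_ft"] / s["area_acres"], 1), "ft average depth")
```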
|
16 |
Managing Applications and Data in Distributed Computing Infrastructures. Toor, Salman Zubair, January 2012.
During the last decades the demand for large-scale computational and storage resources in science has increased dramatically. New computational infrastructures enable scientists to enter a new mode of science, e-science, which complements traditional theory and experiments. E-science is inherently interdisciplinary, involving researchers from several disciplines, and also opens up large-scale collaborative efforts where physically distributed groups of scientists share software tools and data to make scientific progress. Within the field of e-science, new challenges are emerging in managing large-scale distributed computing efforts and distributed data sets. Different models, e.g. grids and clouds, have been introduced over the years, but new solutions built on these models are needed to enable easy and flexible use of distributed computing infrastructures by application scientists. In the first part of the thesis, application execution environments are studied. The goal is to hide technical details of the underlying distributed computing infrastructure and expose secure and user-friendly environments to the end users. First, a general-purpose solution using portal technology is described, enabling transparent and easy usage of a variety of grid systems. Then a problem-solving environment for genetic analysis is presented, in which the statistical software R is used as a workflow engine, enhanced with grid-enabled routines for performing the computationally demanding parts of the analysis. Finally, the issue of resource allocation in grid systems is briefly studied and certain modifications to the distributed resource-brokering model of the ARC middleware are proposed. The second part of the thesis presents solutions for managing and analyzing scientific data using distributed storage resources. First, a new reliable and secure file-oriented distributed storage system, Chelonia, is presented. The architectural design of the system is described and implementation issues are considered; the stability and scalable performance of Chelonia are verified using several test scenarios. Then, tools for providing an efficient and easy-to-use platform for data analysis built on Chelonia are presented. Here, a database-driven approach is explored: an extended architecture where Chelonia is combined with the Web-Service MEDiator (WSMED) system is implemented, providing web service tools to query data without any further programming. This approach is then developed further and Chelonia is combined with SciSPARQL, a query language that extends SPARQL to queries over numeric scientific data. The result is a system capable of interactive analysis of distributed data sets; advanced application-specific analysis requirements can be fulfilled by writing customized modules in Java, Python, or C. The viability of the approach is demonstrated by applying the system to data produced by URDME, a computational environment for systems biology, and results for sample queries expressed in SciSPARQL are presented. Finally, the use of an open-source storage cloud, OpenStack Swift, for the analysis of data from CERN experiments is considered. Here, a pilot implementation for the ROOT data analysis framework is presented together with a performance evaluation. / eSSENCE
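The storage-cloud access pattern in the last part can be illustrated with the standard python-swiftclient bindings for OpenStack Swift. This sketch is not the thesis' ROOT plugin or its actual configuration; the endpoint, credentials, container, and file names are placeholders, and it only shows a minimal upload/download round trip.

```python
from swiftclient import client  # python-swiftclient, the standard Swift bindings

# Placeholder v1-auth endpoint and credentials, not the thesis' actual setup.
conn = client.Connection(authurl="http://swift.example.org/auth/v1.0",
                         user="analysis:reader", key="secret")

conn.put_container("root-data")
with open("events.root", "rb") as f:               # an assumed local ROOT file
    conn.put_object("root-data", "run001/events.root", contents=f)

# Later, a worker node pulls the object back for analysis.
headers, body = conn.get_object("root-data", "run001/events.root")
print("downloaded", len(body), "bytes, etag", headers.get("etag"))
```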
|
17 |
Répartition des moyens complémentaires de production et de stockage dans les réseaux faiblement interconnectés ou isolés / Distribution of supplementary means of storage and production in isolated or weakly interconnected networks. Vu, Thang, 14 February 2011.
This thesis concerns isolated or weakly interconnected networks (with limited power exchange), powered mainly by renewable sources. To balance production and consumption at all times, generators and storage systems are added. The work focuses on two main objectives. The first is to determine an operating strategy for the generators and storage systems that minimizes cost as a function of the weather (forecast of renewable generation), pricing, and consumption; a second optimization method that also accounts for network constraints is developed. The second objective is to find the best locations for these resources on the network: a good location reduces line losses and improves voltage quality, which limits the need to reinforce the network at critical points. The concept of distributed (or decentralized) storage is introduced, and the distribution of the overall storage capacity and the choice of inverter operating parameters (to share the demanded power) are proposed. Simulation of an application case (the Corsican network) validates the developed tools.
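A minimal sketch (not the thesis' optimization) of dispatching a single battery and a backup generator over a forecast horizon: store surplus renewable energy, discharge to cover deficits, and run the generator only for what remains. The thesis formulates this as a cost-minimization problem; the greedy rule below merely illustrates the power balance that must hold at every time step, and all numbers are invented.

```python
def greedy_dispatch(load_kw, renewable_kw, soc_kwh, cap_kwh, p_max_kw, dt_h=1.0):
    """Rule-based dispatch over a forecast horizon (one battery, one genset).

    At each step the balance  renewable + discharge + genset = load + charge
    must hold; surplus renewable power charges the battery, deficits are covered
    first by the battery and then by the generator.
    Returns the genset schedule (kW) and the final state of charge (kWh).
    """
    genset = []
    for load, ren in zip(load_kw, renewable_kw):
        net = ren - load                       # >0: surplus, <0: deficit
        if net >= 0:                           # charge with the surplus
            charge = min(net, p_max_kw, (cap_kwh - soc_kwh) / dt_h)
            soc_kwh += charge * dt_h
            genset.append(0.0)
        else:                                  # discharge, then fall back to genset
            discharge = min(-net, p_max_kw, soc_kwh / dt_h)
            soc_kwh -= discharge * dt_h
            genset.append(-net - discharge)
    return genset, soc_kwh

# Invented 6-hour forecast (kW): evening peak with falling solar output.
load = [40, 55, 70, 80, 75, 60]
solar = [50, 45, 30, 10, 0, 0]
schedule, soc = greedy_dispatch(load, solar, soc_kwh=20, cap_kwh=100, p_max_kw=50)
print("genset kW per hour:", [round(g, 1) for g in schedule], "final SoC:", soc, "kWh")
```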
|
18 |
Towards Malleable Distributed Storage Systems: From Models to Practice / Malléabilité des Systèmes de Stockage Distribués : Des Modèles à la Pratique. Cheriere, Nathanaël, 05 November 2019.
The Cloud, with its pay-as-you-go model, gives the possibility of elastic resource management: users can claim and release resources as needed. This elasticity leads to financial and energy cost reductions, and helps applications cope with varying workloads. Distributed cloud and HPC applications processing large amounts of data are often co-located with a distributed storage system in order to ensure fast data accesses. Although many works have been proposed to dynamically rescale the processing part of such systems to match their workload, the storage is never considered as malleable (able to be dynamically rescaled), since moving massive amounts of data around is assumed to be too slow in practice. However, in recent years hardware and storage techniques have evolved, and this assumption needs to be revisited. In this thesis, we present a study of rescaling operations in distributed storage systems, approached from different angles. We start by modeling the minimal duration of rescaling operations to estimate their potential speed. Then, we develop a benchmark to measure the viability of distributed storage system malleability on a given platform. Last, we implement a rescaling manager for distributed storage systems that decides and organizes the data transfers required during a rescaling operation.
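The first contribution, modeling the minimal duration of a rescaling operation, can be illustrated with a simple bound (a sketch under simplified assumptions, not the thesis' model): when decommissioning nodes, every byte they host must leave through their network interfaces and arrive at the remaining nodes, so the operation cannot finish faster than the larger of the two transfer times.

```python
def decommission_lower_bound(data_per_node_gb, nodes_removed, nodes_remaining,
                             node_bandwidth_gbps):
    """Simple lower bound (seconds) on the duration of a decommission.

    The data on the leaving nodes must be sent out by those nodes and received
    by the remaining ones, each side limited by its aggregate network bandwidth.
    Assumes uniform data distribution and full-duplex links; real systems add
    replication and placement constraints on top of this bound.
    """
    total_gb = data_per_node_gb * nodes_removed
    send_time = total_gb * 8 / (nodes_removed * node_bandwidth_gbps)
    receive_time = total_gb * 8 / (nodes_remaining * node_bandwidth_gbps)
    return max(send_time, receive_time)

# Example: shrink a 20-node cluster by 5 nodes, 500 GB per node, 10 Gbps NICs.
t = decommission_lower_bound(data_per_node_gb=500, nodes_removed=5,
                             nodes_remaining=15, node_bandwidth_gbps=10)
print(f"decommission cannot take less than ~{t / 60:.1f} minutes")
```

Bounds of this kind are what make it possible to judge whether storage malleability is fast enough to be worthwhile on a given platform.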
|
19 |
Efficient Resource Allocation Schemes for Wireless Networks with Diverse Quality-of-Service Requirements. Kumar, Akshay, 16 August 2016.
Quality-of-Service (QoS) for users is a critical requirement of resource allocation in wireless networks and has long drawn significant research attention. However, QoS requirements differ vastly across wireless network paradigms. At one extreme, we have a millimeter-wave small-cell network for streaming data, which requires very high throughput and low latency. At the other end, we have Machine-to-Machine (M2M) uplink traffic, which requires low throughput and low latency. In this dissertation, we investigate and solve QoS-aware resource allocation problems for diverse wireless paradigms.
We first study cross-layer dynamic spectrum allocation in an LTE macro-cellular network with fractional frequency reuse to improve the spectral efficiency for cell-edge users. We show that the resulting optimization problem is NP-hard and propose a low-complexity layered spectrum allocation heuristic that strikes a balance between rate maximization and fairness of allocation. Next, we develop an energy-efficient downlink power control scheme for an energy-harvesting small-cell base station equipped with a local cache and wireless backhaul, and study the tradeoff between the cache size and the energy harvesting capabilities. We then analyze the file read latency in Distributed Storage Systems (DSS). We propose a heterogeneous DSS model wherein the stored data is categorized into multiple classes based on the arrival rate of read requests, the fault tolerance of the storage, and so on. Using a queuing-theoretic approach, we establish bounds on the average read latency for different scheduling policies. We also show that erasure coding in DSS serves the dual purpose of reducing read latency and increasing energy efficiency.
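The claim that erasure coding reduces read latency can be illustrated with a small simulation (not the dissertation's queuing analysis): with an (n, k) MDS code, a read completes once the fastest k of n parallel chunk fetches return, which trims the latency tail compared with reading a single replica. Exponential service times and the specific parameters are assumptions made purely for illustration.

```python
import random

def mean_read_latency(n, k, mean_chunk_time, trials=100_000, seed=1):
    """Monte Carlo estimate of read latency when a file is fetched as n coded
    chunks in parallel and any k of them suffice to decode (MDS property).
    Chunk fetch times are drawn i.i.d. exponential, for illustration only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        times = sorted(rng.expovariate(1.0 / mean_chunk_time) for _ in range(n))
        total += times[k - 1]          # done once the k-th fastest chunk arrives
    return total / trials

# A single full replica vs. a (6, 3) code whose chunks are 1/3 the size,
# so each chunk fetch is assumed to take 1/3 of the mean replica fetch time.
replica = mean_read_latency(1, 1, mean_chunk_time=3.0)
coded = mean_read_latency(6, 3, mean_chunk_time=1.0)
print(f"single replica ~{replica:.2f}, (6,3)-coded read ~{coded:.2f} time units")
```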
Lastly, we investigate the problem of delay-efficient packet scheduling in the M2M uplink with heterogeneous traffic characteristics. We classify the uplink traffic into multiple classes and propose a proportionally fair, delay-efficient heuristic packet scheduler. Using a queuing-theoretic approach, we next develop a delay-optimal multiclass packet scheduler and later extend it to joint medium access control and packet scheduling for the M2M uplink. Using extensive simulations, we show that the proposed schedulers outperform state-of-the-art schedulers in terms of average delay and packet delay jitter. / PhD
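As a generic illustration of multiclass delay-aware scheduling (not the dissertation's scheduler), the sketch below serves, at each transmission opportunity, the class whose head-of-line packet has waited the largest fraction of its delay target, a largest-weighted-delay-first style rule. Class names, delay targets, and timestamps are invented.

```python
from collections import deque

def schedule_next(queues, delay_targets, now):
    """Pick the class to serve next with a largest-weighted-delay-first rule:
    serve the class whose head-of-line packet has waited the largest fraction
    of its delay target. queues maps class -> deque of packet arrival times."""
    best_class, best_score = None, -1.0
    for cls, q in queues.items():
        if not q:
            continue
        score = (now - q[0]) / delay_targets[cls]
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

# Two traffic classes with invented delay targets (ms): alarms are delay-sensitive.
queues = {"alarm": deque([100.0]), "metering": deque([40.0, 60.0])}
targets = {"alarm": 20.0, "metering": 200.0}
print(schedule_next(queues, targets, now=110.0))   # -> 'alarm' (10/20 vs 70/200)
```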
|
20 |
Repairing Cartesian Codes with Linear Exact Repair Schemes. Valvo, Daniel William, 10 June 2020.
In this paper, we develop a scheme to recover a single erasure when using a Cartesian code, in the context of a distributed storage system. In particular, we develop a scheme with considerations to minimize the associated bandwidth and maximize the associated dimension. The problem of recovering a missing node's data exactly in a distributed storage system is known as the exact repair problem. Previous research has studied the exact repair problem for Reed-Solomon codes. We focus on Cartesian codes, and show we can enact the recovery using a linear exact repair scheme framework, similar to the one outlined by Guruswami and Wootters in 2017. / Master of Science / Distributed storage systems are systems which store a single data file over multiple storage nodes. Each storage node has a certain storage efficiency, the "space" required to store the information on that node. The value of these systems is their ability to safely store data for extended periods of time. We want to design distributed storage systems such that if one storage node fails, we can recover it from the data in the remaining nodes. Recovering a node from the data stored in the other nodes requires the nodes to communicate data with each other. Ideally, these systems are designed to minimize the bandwidth, the inter-nodal communication required to recover a lost node, as well as to maximize the storage efficiency of each node. A great mathematical framework on which to build these distributed storage systems is erasure codes. In this paper, we will specifically develop distributed storage systems that use Cartesian codes. We will show that in the right setting, these systems can have a very similar bandwidth to systems built from Reed-Solomon codes, without much loss in storage efficiency.
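For context, the sketch below shows the naive way to repair a single erased share of a polynomial (Reed-Solomon-style) code over a small prime field: any k surviving evaluations determine the degree < k message polynomial by Lagrange interpolation, so the lost evaluation can be recomputed. Linear exact repair schemes such as Guruswami-Wootters, and the Cartesian-code scheme in this thesis, improve on this baseline by downloading only small sub-symbols from each helper; the code here only illustrates the baseline, with toy parameters.

```python
P = 97  # a small prime field GF(97), chosen only for illustration

def interpolate_at(points, x0):
    """Lagrange interpolation over GF(P): given k points (x_i, y_i) of a
    polynomial of degree < k, return its value at x0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x0 - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

# An (n=6, k=3) polynomial code: message (5, 12, 7) stored as f(1), ..., f(6)
# with f(x) = 5 + 12x + 7x^2 over GF(97); one evaluation per storage node.
def f(x):
    return (5 + 12 * x + 7 * x * x) % P

shares = {x: f(x) for x in range(1, 7)}
lost = 4
helpers = [(x, y) for x, y in shares.items() if x != lost][:3]  # any 3 survivors
print("recovered share", lost, "=", interpolate_at(helpers, lost), "expected", f(lost))
```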
|