11 |
Hydrologic Information Systems: Advancing Cyberinfrastructure for Environmental Observatories
Horsburgh, Jeffery S. 01 May 2009 (has links)
Recently, community initiatives have emerged for the establishment of large-scale environmental observatories. Cyberinfrastructure is the backbone upon which these observatories will be built, and scientists' ability to access and use the data collected within observatories to address research questions will depend on the successful implementation of cyberinfrastructure. The research described in this dissertation advances the cyberinfrastructure available for supporting environmental observatories. This has been accomplished both through the development of new cyberinfrastructure components and through the demonstration and application of existing tools, with a specific focus on point observations data. The cyberinfrastructure that was developed and deployed to support collection, management, analysis, and publication of data generated by an environmental sensor network in the Little Bear River environmental observatory test bed is described, as is the sensor network design and deployment. Results of several analyses are presented that demonstrate how high-frequency data enable identification of trends and analysis of physical, chemical, and biological behavior that would be impossible using traditional, low-frequency monitoring data. This dissertation also illustrates how the cyberinfrastructure components demonstrated in the Little Bear River test bed have been integrated into a data publication system that now supports a nationwide network of 11 environmental observatory test bed sites, as well as other research sites within and outside the United States. Enhancements to the infrastructure for research and education enabled by this research are impacting a diverse community, including the national community of researchers involved with prospective Water and Environmental Research Systems (WATERS) Network environmental observatories, as well as other observatory efforts, research watersheds, and test beds. The results of this research provide insight into, and potential solutions for, some of the bottlenecks associated with the design and implementation of cyberinfrastructure for observatory support.
|
12 |
High-contrast imaging in the cloud with klipReduce and Findr
Haug-Baltzell, Asher, Males, Jared R., Morzinski, Katie M., Wu, Ya-Lin, Merchant, Nirav, Lyons, Eric, Close, Laird M. 08 August 2016 (has links)
Astronomical data sets are growing ever larger, and the area of high-contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible-wavelength high-contrast data set of a hydrogen-accreting brown dwarf companion.
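As a rough sketch of the projection-and-subtraction step at the heart of KLIP-style reduction (this is a generic NumPy illustration, not the klipReduce implementation; the frame arrays, mode count, and synthetic data are assumptions for the example):

```python
import numpy as np

def klip_subtract(science_frame, reference_frames, n_modes=10):
    """Subtract a PSF model built from the leading Karhunen-Loeve modes
    of a reference image set. A minimal sketch, not klipReduce."""
    # Flatten images to vectors and remove the mean reference frame.
    refs = reference_frames.reshape(len(reference_frames), -1).astype(float)
    mean_ref = refs.mean(axis=0)
    refs -= mean_ref
    sci = science_frame.ravel().astype(float) - mean_ref

    # Build the Karhunen-Loeve basis from the small reference covariance.
    cov = refs @ refs.T                        # (n_refs x n_refs)
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
    top = np.argsort(evals)[::-1][:n_modes]    # strongest modes first
    basis = evecs[:, top].T @ refs             # modes in image space
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)

    # Project the science frame onto the basis and subtract the PSF model.
    psf_model = (basis @ sci) @ basis
    return (sci - psf_model).reshape(science_frame.shape)

# Toy usage: 50 synthetic 64x64 reference frames and one science frame.
rng = np.random.default_rng(0)
references = rng.normal(size=(50, 64, 64))
science = rng.normal(size=(64, 64))
residual = klip_subtract(science, references, n_modes=5)
```

Building the eigenimages from the small n_refs-by-n_refs covariance matrix is the standard trick that keeps the basis computation cheap even when the individual frames are large.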
|
13 |
Tracing the Evolution of Collaborative Virtual Research Environments: A Critical Events-Based Perspective
Trudeau, Ashley B 08 1900 (has links)
A significant number of scientific projects pursuing large-scale, complex investigations involve dispersed research teams, which conduct a large part of their work virtually. Virtual Research Environments (VREs), cyberinfrastructure that facilitates coordinated activities among dispersed scientists, thus provide a rich context for studying organizational evolution. Because technologies constantly evolve, it is important to understand how teams of scientists, system developers, and managers respond to critical incidents. Critical events are organizational situations that trigger strategic decision making to adjust structure or redirect processes in order to maintain balance or improve an already functioning system. This study examines two prominent VREs, the United States Virtual Astronomical Observatory (US-VAO) and the HathiTrust Research Center (HTRC), in order to understand how these environments evolve through critical events and strategic choices. Communication perspectives lend themselves well to a study of VRE development and evolution because of the central role communication technologies occupy in both the functionality and the management of VREs. Using a grounded theory approach, this study traces, through organizational reports, how critical events and their resulting strategic choices shape these organizations over time. The study also explores how disciplinary demands influence critical events.
|
14 |
Hazus-MH flood loss estimation on a web-based system
Yildirim, Enes 01 August 2017 (has links)
In recent decades, the importance of flood damage and loss estimation systems has increased significantly because of the social and economic consequences of flooding. Such systems help emergency decision makers understand the possible impacts of flooding and prepare better resilience plans for managing and allocating resources. Recent web-based technologies can be used to build a system that analyzes flood impacts in both urban and rural areas. By taking advantage of web-based systems, decision makers can examine the effects of flooding under many different scenarios with less effort. Most emergency management plans have been created using paper-based maps or GIS (Geographic Information System) software. Paper-based materials generally present floodplain maps, give basic instructions about what to do during a flooding event, and show the main roads for evacuating people from their neighborhoods. Since the development of GIS software, these plans have included more detailed information about demographics, buildings, critical infrastructure, and so on.
Several GIS-based software packages have been developed to help understand disaster impacts on communities. One of the most widely used, Hazus-MH (Multi-Hazard), created by FEMA (Federal Emergency Management Agency), can analyze disaster effects in both urban and rural areas. It allows users to run disaster simulations (earthquake, hurricane, and flood) to observe their effects. However, its capabilities are not as broad as those of web-based technologies. Hazus-MH has limitations, including specific software requirements, the ability to show only a limited number of flood scenarios, and the lack of real-time representation. For instance, the software is compatible only with Windows computers and a specific version of ArcMap rather than other GIS software, and users must have GIS expertise to operate it. In contrast, a web-based system removes many of these limitations: users can operate it through an internet browser and do not need GIS knowledge. Thus, hundreds of people can connect to the system, observe flood impacts in real time, and explore their neighborhoods to prepare for flooding.
In this study, the Iowa Flood Damage Estimation Platform (IFDEP) is introduced. The platform is built from various data sources, including floodplain maps and rasters created by the IFC (Iowa Flood Center), default Hazus-MH data, census data, the National Structure Inventory, real-time USGS (United States Geological Survey) stream gage data, real-time IFC bridge sensor data, and the IFC flood forecast model. To estimate damage and loss, damage curves developed by the Army Corps of Engineers are applied. All of these data are stored in PostgreSQL, so hundreds of different flood analyses can be run by querying the floodplain data against the census data. Of the three analysis levels defined by FEMA, Level 3 analysis can be performed on the fly with web-based technology, and better, more accurate results are presented to users. Using real-time stream gauge data and flood forecast data makes it possible to estimate current and upcoming flood damage and loss, which current GIS-based desktop software cannot provide. Furthermore, analyses are visualized with JavaScript and HTML5 for clearer illustration and communication than the limited visualization options of GIS software.
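As a simplified, hypothetical illustration of the core loss calculation, intersecting flood depths at structures with a damage curve, the Python sketch below uses made-up curve values and building records; the real platform queries these data from PostgreSQL and applies Army Corps of Engineers curves that vary by occupancy type:

```python
import numpy as np

# Hypothetical depth-damage curve: flood depth (ft) vs. percent of the
# structure value damaged. Values are illustrative only.
curve_depth_ft = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
curve_damage_pct = np.array([0.0, 10.0, 25.0, 45.0, 60.0, 70.0])

# Hypothetical building records: (flood depth at the structure, structure value).
buildings = [
    (0.5, 150_000),
    (3.2, 220_000),
    (7.1, 180_000),
]

total_loss = 0.0
for depth, value in buildings:
    damage_pct = np.interp(depth, curve_depth_ft, curve_damage_pct)
    total_loss += value * damage_pct / 100.0

print(f"Estimated structural loss: ${total_loss:,.0f}")
```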
To give the vision of this study, IFDEP can be widened using other data sources such as National Resources Inventory, National Agricultural Statistics Service, U.S. census data, Tax Assessor building data, land use data and more. This can be easily done on the database side. Need to address that augmented reality (AR) and virtual reality (VR) technologies can enhance to broad capabilities of this platform. For this purpose, Microsoft HoloLens can be utilized to connect IFDEP, real-time information can be visualized through the device. Therefore, IFDEP can be recruited both on headquarters for emergency managers and on the field for emergency management crew.
|
15 |
Advancing Streamflow Forecasts Through the Application of a Physically Based Energy Balance Snowmelt Model With Data Assimilation and Cyberinfrastructure Resources
Gichamo, Tseganeh Zekiewos 01 May 2019 (has links)
The Colorado Basin River Forecast Center (CBRFC) provides forecasts of streamflow for purposes such as flood warning and water supply. Much of the water in these basins comes from spring snowmelt, and the forecasters at CBRFC currently employ a suite of models that include a temperature-index snowmelt model. While the temperature-index snowmelt model works well for weather and land cover conditions that do not deviate from those historically observed, the changing climate and alterations in land use necessitate the use of models that do not depend on calibrations based on past data. This dissertation reports work done to overcome these limitations through using a snowmelt model based on physically invariant principles that depends less on calibration and can directly accommodate weather and land use changes. The first part of the work developed an ability to update the conditions represented in the model based on observations, a process referred to as data assimilation, and evaluated resulting improvements to the snowmelt driven streamflow forecasts. The second part of the research was the development of web services that enable automated and efficient access to and processing of input data to the hydrological models as well as parallel processing methods that speed up model executions. These tasks enable the more detailed models and data assimilation methods to be more efficiently used for streamflow forecasts.
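As a hedged sketch of what an observation update in data assimilation can look like (a generic ensemble Kalman filter step on a single snow water equivalent value; the dissertation's actual assimilation scheme, state variables, and error settings are not specified here, and the numbers below are invented):

```python
import numpy as np

def enkf_update(ensemble, observation, obs_error_var, seed=42):
    """Generic ensemble Kalman filter update of a scalar model state
    (e.g., snow water equivalent) toward one observation. A sketch only."""
    prior_var = ensemble.var(ddof=1)
    # The Kalman gain weights the observation by the relative uncertainties.
    gain = prior_var / (prior_var + obs_error_var)
    # Perturb the observation per member to preserve ensemble spread.
    rng = np.random.default_rng(seed)
    perturbed = observation + rng.normal(0.0, np.sqrt(obs_error_var),
                                         size=ensemble.shape)
    return ensemble + gain * (perturbed - ensemble)

# Example: a modeled SWE ensemble (mm) updated with a 120 mm observation.
prior = np.array([90.0, 100.0, 110.0, 95.0, 105.0])
posterior = enkf_update(prior, observation=120.0, obs_error_var=25.0)
print(prior.mean(), "->", posterior.mean())  # pulled toward the observation
```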
|
16 |
Monitoring-as-a-service in the cloud
Meng, Shicong 03 April 2012 (has links)
State monitoring is a fundamental building block for Cloud services. The demand for providing state monitoring as a service (MaaS) continues to grow, as evidenced by CloudWatch from Amazon EC2, which allows cloud consumers to pay for monitoring a selection of performance metrics with coarse-grained periodic sampling of runtime states. One of the key challenges for wide deployment of MaaS is to strike a better balance among a set of critical quality and performance parameters, such as accuracy, cost, scalability, and customizability.
This dissertation research is dedicated to innovative research and development of an elastic framework for providing state monitoring as a service (MaaS). We analyze the limitations of existing techniques, systematically identify the needs and challenges at different layers of a Cloud monitoring service platform, and develop a suite of distributed monitoring techniques to support flexible monitoring infrastructure, cost-effective state monitoring, and monitoring-enhanced Cloud management. At the monitoring infrastructure layer, we develop techniques that support multi-tenancy of monitoring services by exploring cost sharing between monitoring tasks and safeguarding monitoring resource usage. To provide elasticity in monitoring, we propose techniques that allow the monitoring infrastructure to self-scale with monitoring demand. At the cost-effective state monitoring layer, we devise several new state monitoring functionalities to meet unique functional requirements in Cloud monitoring. Violation-likelihood state monitoring explores the benefits of consolidating monitoring workloads by allowing utility-driven monitoring intensity tuning on individual monitoring tasks and identifying correlations between monitoring tasks. Window-based state monitoring leverages distributed windows for the best monitoring accuracy and communication efficiency. Reliable state monitoring is robust to both transient and long-lasting communication issues caused by component failures or cross-VM performance interference. At the monitoring-enhanced Cloud management layer, we devise a novel technique that learns the performance characteristics of both Cloud infrastructure and Cloud applications from cumulative performance monitoring data to increase cloud deployment efficiency.
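As an illustration of the window-based idea (a single-node simplification of my own, not the distributed algorithm developed in the dissertation), a monitor can report a violation only when a metric stays above its threshold for an entire sliding window, which filters out transient spikes:

```python
from collections import deque

class WindowedMonitor:
    """Report a violation only when every sample in the last `window`
    observations exceeds the threshold. A simplified, single-node sketch."""
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value):
        self.samples.append(value)
        full = len(self.samples) == self.samples.maxlen
        return full and all(v > self.threshold for v in self.samples)

# Example: alert only on sustained CPU overload, not a single spike.
monitor = WindowedMonitor(threshold=90.0, window=3)
for i, sample in enumerate([95, 97, 60, 92, 94, 96]):
    if monitor.observe(sample):
        print(f"sustained violation at sample {i} (value {sample})")
```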
|
17 |
An empirical approach to automated performance management for elastic n-tier applications in computing clouds
Malkowski, Simon J. 03 April 2012 (has links)
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
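One simple way to picture empirical bottleneck detection (a sketch under my own assumptions, not the dissertation's multi-bottleneck definition or statistical machinery) is to flag the tier whose resource utilization saturates while end-to-end throughput stops improving as load grows:

```python
# Hypothetical measurements at increasing load levels; all values invented.
loads = [100, 200, 300, 400]                # offered load (requests/s)
throughput = [95, 185, 240, 242]            # achieved throughput flattens
utilization = {
    "web": [20, 35, 50, 55],
    "app": [30, 55, 80, 82],
    "db":  [45, 75, 97, 99],                # saturates as throughput flattens
}

# Relative throughput gain from the last load increase.
gain = (throughput[-1] - throughput[-2]) / (throughput[-2] - throughput[-3])

if gain < 0.2:  # scaling has essentially stopped
    bottleneck = max(utilization, key=lambda tier: utilization[tier][-1])
    print("likely bottleneck tier:", bottleneck)
```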
|
18 |
Hypoxia modeling in Corpus Christi Bay using a hydrologic information system
To, Sin Chit 05 May 2015 (has links)
Hypoxia is frequently detected during summer in Corpus Christi Bay, Texas, and causes significant harm to benthic organism populations and diversity. Hypoxia is associated with density stratification in the Bay, but the cause of the stratification is uncertain. To support the study of hypoxia and stratification, a cyberinfrastructure based on the CUAHSI (Consortium of Universities for the Advancement of Hydrologic Science, Inc.) Hydrologic Information System (HIS) is implemented. HIS unites the sensor networks in the Bay by providing a standard data language and protocol for transferring data, so that hypoxia-related data from multiple sources can be compiled into a structured database. In Corpus Christi Bay, salinity data collected at many locations and times are synthesized into a three-dimensional space-time continuum using geostatistical methods, the three dimensions being depth, distance along a transect line, and time. The kriged salinity field in space and time illuminates the pattern of movement of a saline gravity current along the bottom of the Bay. The travel time of a gravity current across the Bay is estimated to be on the order of one week, and its speed is on the order of 1 km per day. Statistical study of high-resolution wind data shows that the stratification pattern in the Bay is related to the occurrence of strong, southeasterly winds in the 5 days prior to the observation. This relationship supports the hypothesis that stratification is caused by wind initiating hypersaline gravity currents that flow from Laguna Madre into Corpus Christi Bay. An empirical physical hypoxia model is created that tracks the fate and transport of the gravity currents. The model uses wind and water quality data from real-time sensors published by HIS to predict the extent and duration of hypoxic regions in the Bay. Comparison of model results with historical data from 2005 to 2008 shows that wind-driven gravity currents can explain the spatially heterogeneous patterns of hypoxic zones in Corpus Christi Bay.
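As a hedged sketch of the kind of check behind the wind-stratification relationship (the thresholds, wind sector, and records below are invented for illustration; they are not the dissertation's statistical analysis):

```python
import pandas as pd

# Hypothetical hourly wind records (speed in m/s, direction in degrees).
wind = pd.DataFrame({
    "time": pd.date_range("2007-07-01", periods=24 * 10, freq="h"),
    "speed": 6.0,
    "direction": 140.0,          # roughly southeasterly
}).set_index("time")

observation_time = pd.Timestamp("2007-07-08 12:00")
window = wind.loc[observation_time - pd.Timedelta(days=5):observation_time]

southeasterly = window["direction"].between(100, 170)   # assumed SE sector
strong = window["speed"] >= 5.0                         # assumed threshold
fraction = (southeasterly & strong).mean()

# Were conditions favorable to a hypersaline gravity current from Laguna Madre?
print("favorable" if fraction > 0.5 else "not favorable", f"({fraction:.0%})")
```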
|
19 |
Positioning the Reserve Headquarters Support (RHS) system for multi-layered enterprise use
Koch, Douglas J. January 2009 (has links) (PDF)
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 2009. / Thesis Advisor(s): Cook, Glenn. "September 2009." Description based on title screen as viewed on 6 November 2009. Author(s) subject terms: Enterprise architecture, project management, business process transformation, operating model, IT governance, IT systems, data quality, data migration, business operating model, personnel IT systems, HRM, ERP. Includes bibliographical references (p. 89-92). Also available in print.
|
20 |
A Taxonomy of Parallel Vector Spatial Analysis Algorithms
January 2015 (has links)
Nearly 25 years ago, parallel computing techniques were first applied to vector spatial analysis methods. This initial research was driven by the desire to reduce computing times in order to support scaling to larger problem sets. Since this initial work, rapid technological advancement has driven the availability of High Performance Computing (HPC) resources in the form of multi-core desktop computers, distributed geographic information processing systems (e.g., computational grids), and single-site HPC clusters. In step with increases in computational resources, significant advances in the capability to capture and store large quantities of spatially enabled data have been realized. A key component to utilizing vast data quantities in HPC environments, scalable algorithms, has failed to keep pace. The National Science Foundation has identified the lack of scalable algorithms in codified frameworks as an essential research product. Fulfillment of this goal is challenging given the lack of a codified theoretical framework mapping atomic numeric operations from the spatial analysis stack to parallel programming paradigms, the diversity in vernacular used by research groups, the propensity for implementations to couple tightly to underlying hardware, and the general difficulty of realizing scalable parallel algorithms. This dissertation develops a taxonomy of parallel vector spatial analysis algorithms in which classification is defined by the root mathematical operation and communication pattern, a computational dwarf. Six computational dwarfs are identified, three drawn directly from an existing parallel computing taxonomy and three created to capture characteristics unique to spatial analysis algorithms. The taxonomy provides a high-level classification decoupled from low-level implementation details such as hardware, communication protocols, implementation language, decomposition method, or file input and output. By taking a high-level approach, implementation specifics are broadly proposed, breadth of coverage is achieved, and extensibility is ensured. The taxonomy both informs and is informed by five case studies implemented across multiple, divergent hardware environments. A major contribution of this dissertation is a theoretical framework to support the future development of concrete parallel vector spatial analysis frameworks through the identification of computational dwarfs and, by extension, successful implementation strategies. / Dissertation/Thesis / Doctoral Dissertation Geography 2015
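As a minimal, hypothetical example of mapping a vector spatial analysis operation onto one of the simplest parallel patterns (independent per-point work distributed across processes), the sketch below runs a ray-casting point-in-polygon test over a batch of points with Python's multiprocessing; it is not one of the dissertation's case studies:

```python
from multiprocessing import Pool
import numpy as np

# A unit-square polygon and a batch of points to classify (all synthetic).
POLYGON = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def point_in_polygon(pt, poly=POLYGON):
    """Ray-casting test: toggle on each polygon edge crossing to the right."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    points = rng.uniform(-0.5, 1.5, size=(100_000, 2)).tolist()
    # Each test is independent, so the work maps cleanly onto worker processes.
    with Pool(processes=4) as pool:
        flags = pool.map(point_in_polygon, points, chunksize=1000)
    print(sum(flags), "of", len(flags), "points fall inside the polygon")
```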
|