1 |
Cultural Heritage Cyberinfrastructure: A Geographic Case Study of China. Jablonski, Jon R.
xii, 158 p. : ill., maps. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.

The Internet affects many aspects of daily life and economic activity in globalized economies. The network city thesis posits that the Internet enables dispersed methods of production and new forms of economic activity. The existing economic geography literature concentrates on revenue-generating firms. The concept of Cultural Heritage Cyberinfrastructure (CHCi) is developed in order to account for the economic activities of non-governing and non-revenue-generating firms, and is tested against the online activities of libraries. China, with its administratively homogeneous provincial library system and rapidly changing economy, is examined. The central government and provincial libraries are cooperatively building the National Digital Culture Network of China to provide information services to urban migrants and subsidize rural development efforts through CHCi. These projects are found to be more active in the less economically transitioned western provinces. CHCi is found to be a useful construct for studying non-governing, non-market segments of an economy.

Committee in Charge:
Dr. Alexander B. Murphy, Chair;
Dr. Xiaobo Su
|
2 |
Collective creativity in scientific communities. Zou, Guangyu; Yilmaz, Levent (January 2009)
Thesis--Auburn University, 2009. Abstract. Vita. Includes bibliographical references (p. 103-106).
|
3 |
Lowering the Technological Barrier in Developing, Sharing and Installing Web GIS Applications. Khattar, Rohit Kumar (22 June 2022)
Portability of web applications between the web servers of different organizations can be challenging and can complicate sharing and collaborative use of such tools. Given the distributed nature of the web, this lack of portability is usually not a concern, because a user in one organization can link to and use a web application hosted by another organization. However, an organization may need customization or differentiation in terms of area of interest, input data, analytical techniques, access control, presentation, branding, and language. This is true for many government organizations and their associated websites and servers. In such cases, there are compelling political, branding, security, and privacy motivations that require each organization or agency to host and manage web applications on their own servers rather than using third-party websites over which they have little or no control. Web applications are also classically developed by setting up a local software development and testing environment, which can be challenging for new developers, is restricted by available software and hardware, incurs significant costs for development licenses and compatible hardware, and is prone to code and data loss from hardware damage or software corruption.

To simplify the discovery and deployment of web-based applications, I present the design, development, and testing of a system for discovering, installing, and configuring environmental analysis web applications on localized web servers. The system works with applications developed using Tethys Platform, an open-source software stack for creating geospatially enabled web-based applications. The developed Tethys App Store includes a Tethys application user interface that allows a server manager to retrieve applications from the central repository and install them on a local server with relative simplicity, similar to installing a mobile application from a mobile app store. A sketch of this workflow appears below.

Next, I present the design concept of a cloud-based web application development platform, Tethys App Nursery, that attempts to overcome the hurdles associated with localized development environments. A prototype of this system, tightly integrated with Tethys Platform and various cloud technologies provided by Amazon Web Services, is developed and presented. The app nursery allows users to register for new Tethys portal instances in the cloud and to develop new applications and test existing ones without installing any local dependencies or development tools. The cloud components used in this service's development, as well as their associated costs, are described. These systems were developed to support the development of water and environmental analysis web apps for the international Group on Earth Observations (GEO) Global Water Sustainability (GEOGloWS) initiative of the National Aeronautics and Space Administration (NASA) and several partner organizations.
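The app-store workflow described above (query a central repository, then install the selected application into a local portal) can be sketched in a few lines of Python. The store URL, metadata fields, and install mechanism below are hypothetical placeholders for illustration, not the actual Tethys App Store API:

```python
# Hypothetical sketch of the app-store install flow described above.
# The store URL, metadata fields, and install mechanism are placeholders,
# not the actual Tethys App Store API.
import subprocess
import requests

STORE_URL = "https://example.org/tethys-app-store/api"  # placeholder

def list_available_apps() -> list[dict]:
    """Fetch the catalog of installable apps from the central repository."""
    resp = requests.get(f"{STORE_URL}/apps", timeout=30)
    resp.raise_for_status()
    return resp.json()

def install_app(app_name: str) -> None:
    """Install one app into the local portal, mobile-app-store style."""
    meta = requests.get(f"{STORE_URL}/apps/{app_name}", timeout=30).json()
    # Each catalog entry is assumed to point at a pip-installable package.
    subprocess.run(["pip", "install", meta["package"]], check=True)

if __name__ == "__main__":
    for app in list_available_apps():
        print(app["name"])
    install_app("hydro-viewer")  # hypothetical app name
```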
|
4 |
How Environmental Sciences Build Interdisciplinary Knowledge Claims: Cyberinfrastructure Affordances under Conflicting Institutional Logics. McElroy, Charles Patrick (5 June 2017)
No description available.
|
5 |
The rationalities behind the adoption of cyberinfrastructure for e-science in the early 21st century U.S.A. Kee, Kerk Fong (2 November 2010)
Based on grounded theory and thematic analysis of 70 in-depth interviews conducted over 32 months (from November 2007 to June 2010) with domain scientists, computational technologists, supercomputer center administrators, program officers at the National Science Foundation, social scientists, policy analysts, and industry experts, this dissertation explores the rationalities behind the initial adoption of cyberinfrastructure for e-science in the early 21st century U.S.

The dissertation begins with Research Question 1 (how does cyberinfrastructure's nature influence its adoption process in the early 21st century U.S.?) and identifies four areas of challenging conditions: a lack of trialability/observability (due to the participatory/bespoke nature), a lack of simplicity (due to the meta/complex characteristic), a lack of perceived compatibility (due to the disruptive/revolutionary quality), and a lack of full control (due to the community/network property). Analysis for Research Question 2 (what are the rationalities that drive cyberinfrastructure adoption in the early 21st century U.S.?) suggests three primary driving rationalities behind adoption. First, the adoption of cyberinfrastructure as a meta-platform of interrelated technologies is driven by the perceived need for computational power, massive storage, multi-scale integration, and distributed collaboration. Second, the adoption of cyberinfrastructure as an organizational/behavioral practice is driven by its relative advantages in producing quantitative and/or qualitative benefits that increase the possibility of major publications and scientific reputations. Third, the adoption of cyberinfrastructure as a new approach to science is driven and maintained by shared visions held by scientists, technologists, professional networks, and scientific communities.

Findings suggest that initial adoption by pioneering users was driven by the logic of quantitative and qualitative benefits derived from optimizing cyberinfrastructure resources to enable breakthrough science, and by the vision of what is possible for the entire scientific community. This logic was sufficient to drive initial adoption despite the challenging conditions, which reveal socio-technical barriers and a risky time investment. Findings also suggest that rationalization is a structuration process, sustained by micro-level individual actions and governed by macro-level community norms simultaneously. Based on Browning's (1992) framework of organizational communication, I argue that cyberinfrastructure adoption in the early 21st century lies at the intersection of technical rationalities (i.e., perceived needs, relative advantages, and shared visions) and narrative rationalities (i.e., trialability, observability/communicability, simplicity, perceived compatibility, and full control).
|
6 |
Advancement of Computing on Large Datasets via Parallel Computing and Cyberinfrastructure. Yildirim, Ahmet Artu (1 May 2015)
Large datasets require efficient processing, storage, and management in order to extract useful information for innovation and decision-making. This dissertation demonstrates novel approaches and algorithms using a virtual-memory approach, parallel computing, and cyberinfrastructure. First, we introduce a tailored user-level virtual memory system for parallel algorithms that can process large raster data files in a desktop computer environment with limited memory. The application area for this portion of the study is parallel terrain analysis algorithms that use multi-threading to take advantage of common multi-core processors for greater efficiency. Second, we present two novel parallel WaveCluster algorithms that perform cluster analysis by using the discrete wavelet transform to reduce large data to coarser representations, so that the data are smaller and more easily managed than the original in size and complexity. Finally, this dissertation demonstrates an HPC gateway service that abstracts away many of the details and complexities involved in the use of HPC systems, including authentication, authorization, and data and job management.
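The WaveCluster idea mentioned above (reduce a density grid with a discrete wavelet transform, then cluster the coarser representation) can be illustrated with a minimal serial sketch. The grid size, wavelet, and density threshold below are illustrative assumptions, not the parallel implementation from the dissertation:

```python
# A minimal single-level sketch of the WaveCluster idea: quantize points to a
# grid, reduce the grid with a 2-D discrete wavelet transform, then treat
# connected dense regions of the coarse approximation as clusters. Parameters
# (grid size, density threshold, wavelet) are illustrative choices.
import numpy as np
import pywt
from scipy import ndimage

def wavecluster(points: np.ndarray, bins: int = 64, density: float = 1.0):
    """points: (n, 2) array; returns a labeled coarse grid of clusters."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    # Haar DWT: the approximation band is a half-resolution density summary.
    approx, _ = pywt.dwt2(counts, "haar")
    dense = approx > density          # keep only sufficiently dense cells
    labels, n_clusters = ndimage.label(dense)
    return labels, n_clusters

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (500, 2))])
labels, n = wavecluster(pts)
print(f"found {n} clusters")
```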
|
7 |
Contaminant Hydrogeology Knowledge Base (CHKb) of Georgia, USA. Sarajlic, Semir (18 December 2013)
Hydrogeologists collect data through studies that originate from a diverse and growing set of instruments that measure, for example, geochemical constituents of surface water and groundwater. Databases store and publish the collected data on the Web, and the volume of data is quickly increasing, which makes accessing the data problematic and time consuming for individuals. One way to overcome this problem is to develop an ontology to formally and explicitly represent the domain knowledge (e.g., contaminant hydrogeology). Using OWL and RDF, a contaminant hydrogeology ontology (CHO) is developed to manage hydrological spatial data for Georgia, USA. CHO is a conceptual computer model for the contaminant hydrogeology domain in which concepts (e.g., contaminant, aquifer) and their relationships (e.g., pollutes) are formally and explicitly defined. Cyberinfrastructure for exposing CHO and its datasets (i.e., CHKb) as Linked Data on the Web is developed; it consists of storing, managing, querying, and visualizing CHKb, and can be accessed at cho.gsu.edu.
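To illustrate the kind of Linked Data described above, the following sketch builds a few CHO-style triples with rdflib. The namespace URI, class names, instances, and the pollutes property are illustrative guesses based on the abstract's examples, not the published CHO vocabulary:

```python
# A hedged sketch of how CHO-style concepts and relations could be expressed
# as RDF with rdflib. The namespace URI, class names, and property names are
# illustrative guesses, not the actual CHO vocabulary.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CHO = Namespace("http://cho.gsu.edu/ontology#")  # placeholder namespace

g = Graph()
g.bind("cho", CHO)

# Classes: a contaminant and an aquifer, as in the abstract's examples.
g.add((CHO.Contaminant, RDF.type, RDFS.Class))
g.add((CHO.Aquifer, RDF.type, RDFS.Class))

# Instances and the 'pollutes' relationship.
g.add((CHO.trichloroethylene, RDF.type, CHO.Contaminant))
g.add((CHO.floridanAquifer, RDF.type, CHO.Aquifer))
g.add((CHO.floridanAquifer, RDFS.label, Literal("Floridan Aquifer")))
g.add((CHO.trichloroethylene, CHO.pollutes, CHO.floridanAquifer))

print(g.serialize(format="turtle"))
```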
|
8 |
Modeling Cyberinfrastructure Services through Collaborative Research. Howard, John B. (2 May 2008)
Breakout session from the Living the Future 7 Conference, April 30-May 3, 2008, University of Arizona Libraries, Tucson, AZ.

The work of science is being transformed by the dynamics of several circumstances: change in many social, technological, and environmental domains is so rapid that science has difficulty keeping up; science is becoming more data-intensive, driven by the need to observe and articulate theories about ever more complex phenomena, while data collection grows exponentially as new technologies facilitate data acquisition on a massive scale; ever more work occurs at the points where traditional scientific disciplines intersect; and there is a growing social expectation that science should help solve emergent, practical problems and project solutions into the future. In sum, the processes of science need to accelerate, to become increasingly inter- (and trans-) disciplinary, and to become more "solution-driven." What is the role of research libraries in addressing these challenges? In the absence of clear, successful organizational models, the ASU Libraries has been modeling cyberinfrastructure services in collaboration with multi-disciplinary, data-intensive sponsored research projects. This presentation offers a broad case study of the experience of the past three years, identifying the challenges encountered and describing how strategic direction has been charted in response to the needs of the scientific community. Topics to be discussed include: library identity and the culture of science; challenges of data classification and organization to enable integrative, multi-domain research; the role of data scientists; integrating scientific and data curation workflows; implementation of digital repository services; and how emergent synergies with research centers and institutes, informatics/computer science, and high-performance computing begin to blur administrative boundaries.
|
9 |
Extracting trust network information from scientific Web portals. Castañeda Chávez, Alejandro (2008)
Thesis (M.S.)--University of Texas at El Paso, 2008. Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
|
10 |
A Flexible Service-Oriented Approach to Address Hydroinformatic Challenges in Large-Scale Hydrologic Predictions. Souffront Alcantara, Michael Antonio (1 December 2018)
Water security is defined as a combination of sufficient water for achieving our goals as a society and an acceptable level of water-related risk. Hydrologic modeling can be used to predict streamflow and aid in the decision-making process with the goal of attaining water security. Developed countries usually have their own hydrologic models; however, developing countries often lack them due to factors such as the maintenance, computational costs, and technical capacity needed to run models. A global streamflow prediction system (GSPS) would help decrease vulnerabilities in developing countries and fill gaps in areas where no local models exist by providing extensive results that can be filtered for specific locations. The development of a GSPS has been deemed a grand challenge of the hydrologic community. To this end, many scientists and engineers have started to develop large-scale systems to an acceptable degree of success. Renowned models like the Global Flood Awareness System (GloFAS), the US National Water Model (NWM), and NASA's Land Data Assimilation System (LDAS) are proof that our ability to model large areas has improved remarkably. Even so, during this evolution the hydrologic community has come to realize that having a large-scale forecasting system does not make it immediately useful. New hydroinformatic challenges have surfaced that prevent these models from reaching their full potential. I have divided these challenges into four main categories: big data, data communication, adoption, and validation.

I present a description of the background leading to the development of a GSPS, including existing models and the components needed to create an operational system. A case study with the NWM is also presented, in which I address the big data and data communication challenges by developing cyberinfrastructure and accessibility tools such as web applications and services. Finally, I used the GloFAS-RAPID model to create a forecasting system covering Africa, North America, South America, and South Asia using a service-oriented approach that includes the development of web applications and services that improve data accessibility and help address the adoption and validation challenges. I have developed customized services in collaboration with countries that include Argentina, Bangladesh, Colombia, Peru, Nepal, and the Dominican Republic, and I conducted validation tests to ensure that results are acceptable. Overall, a model-agnostic approach to operationalizing a GSPS and providing meaningful results at the local level is presented, with the potential to allow decision makers to focus on solving some of the most pressing water-related issues we face as a society.
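The service-oriented access pattern described above can be illustrated by retrieving a forecast for a single river reach from a streamflow web service. The endpoint and parameter names below follow the public GEOGloWS streamflow API as I understand it and should be treated as assumptions; consult the service documentation before relying on them:

```python
# A hedged sketch of the service-oriented access pattern described above:
# retrieving a streamflow forecast for one river reach from a web service.
# The endpoint and parameter names are assumptions based on the public
# GEOGloWS streamflow API; consult the service docs before use.
import requests

API_BASE = "https://geoglows.ecmwf.int/api"  # assumed base URL

def forecast_stats(reach_id: int) -> dict:
    """Fetch forecast statistics (e.g., ensemble mean flow) for a reach."""
    resp = requests.get(
        f"{API_BASE}/ForecastStats/",
        params={"reach_id": reach_id, "return_format": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    stats = forecast_stats(3000150)  # illustrative reach id
    print(list(stats)[:5])           # inspect top-level keys
```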
|