1

The Development and Use of Scientific Software

Sanders, Rebecca 29 April 2008
Scientific software, by which we mean application software with a large computational component, models physical phenomena and provides data for decision support. This can be software that calculates loads on bridges, provides predictions for weather systems, images bone structures for surgical procedures, models subsystems at nuclear generating stations, or processes images from ground-based telescopes. There is no consensus on best practices for the development of scientific software. We carried out a study at two Canadian universities in which we interviewed scientists and engineers who develop or use scientific software, to identify characteristics of current development and usage. Through qualitative analysis, I identified key characteristics of scientific software development and usage and observed correlations between these characteristics. The results are a collection of observations about how scientific software is documented and designed; the nature of the scientific software lifecycle; the selection of development languages; approaches to testing, especially validation testing; and sources of risk. I also examine the concerns scientists have with the commercial software they use, to determine which quality factors are of interest to them and which seem to require special trade-offs. I find that scientific software development and use differ fundamentally from development in most other domains. / Thesis (Master, Computing) -- Queen's University, 2008.
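As a hedged illustration of the validation-testing theme in this abstract: when a program has no testing oracle, scientific developers often check a property the underlying physics guarantees rather than an exact output. The toy integrator and tolerance below are illustrative, not drawn from the study.

```python
import math

def simulate_orbit(steps, dt=1e-3):
    # Toy semi-implicit Euler integrator for a body in a unit circular orbit.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= dt * x / r3
        vy -= dt * y / r3
        x += dt * vx
        y += dt * vy
    return x, y, vx, vy

def energy(x, y, vx, vy):
    # Total mechanical energy; the exact dynamics conserve it.
    return 0.5 * (vx ** 2 + vy ** 2) - 1.0 / math.hypot(x, y)

# No oracle gives the "right" final position, but a large energy drift
# would still expose a defective integrator.
e0 = energy(1.0, 0.0, 0.0, 1.0)
assert abs(energy(*simulate_orbit(10_000)) - e0) < 1e-2, "energy drift too large"
```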
2

General Resource Management for Computationally Demanding Scientific Software

Xinchen Guo (13965024) 17 October 2022
Many scientific problems involve nonlinear systems of equations that require multiple iterations to reach converged results. Such software follows the bulk synchronous parallel model: an iteration is a superstep, which includes computation on local data, global communication to update data for the next iteration, and synchronization between iterations. In modern HPC environments, MPI is used to distribute data and OpenMP is used to accelerate the computation on each piece of data. More MPI processes increase the cost of communication and synchronization, whereas more OpenMP threads increase the overhead of multithreading. A proper combination of MPI and OpenMP is therefore critical to accelerating each superstep, and proper orchestration of MPI processes and OpenMP threads is needed to use the underlying hardware resources efficiently.

Purdue's multi-purpose nanodevice simulation tool NEMO5 distributes the computation of independent spectral points via MPI and accelerates the computation of each spectral point with OpenMP threads. A few examples of resource utilization optimizations are presented. One type of simulation applies the non-equilibrium Green's function method to accurately model drug molecules. Our profiling results suggest the optimum combination has more MPI processes and fewer OpenMP threads. However, NEMO5's memory usage spikes sharply for each spectral point, which limits the concurrency of spectral point calculations because HPC nodes lack the swap space needed to prevent out-of-memory failures.

A distributed resource management framework is proposed and developed to automatically and dynamically manage memory and CPU usage. The concurrent calculation of spectral points is pipelined to avoid simultaneous peak memory usage, allowing more MPI processes and fewer OpenMP threads for higher parallel efficiency. Automatic CPU usage adjustment also reduces the time needed to fill and drain the calculation pipeline. The framework requires minimal code intrusion, successfully speeds up the calculation, and can be generalized to other simulation software.
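As a rough, hedged sketch of the pipelining idea described in this abstract (not NEMO5's actual framework), the fragment below staggers the peak-memory phase of concurrent workers with a semaphore so that memory spikes do not coincide; all sizes and limits are made-up placeholders.

```python
import multiprocessing as mp
import time

_gate = None  # set in each worker by the pool initializer

def _init(gate):
    global _gate
    _gate = gate

def compute_spectral_point(i):
    time.sleep(0.05)                        # low-memory setup phase
    with _gate:                             # only a few peak phases may overlap
        buf = bytearray(50 * 1024 * 1024)   # stand-in for the memory spike
        time.sleep(0.1)                     # stand-in for the heavy computation
        checksum = sum(buf[:16])            # placeholder "result"
    return i, checksum

if __name__ == "__main__":
    gate = mp.Semaphore(2)                  # hypothetical peak-memory budget
    with mp.Pool(processes=8, initializer=_init, initargs=(gate,)) as pool:
        for i, _ in pool.map(compute_spectral_point, range(16)):
            print(f"spectral point {i} done")
```

Because the guarded section covers only the memory spike, the cheap setup phases still run fully in parallel, which is what keeps the pipeline filled.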
3

Creating Scientific Software, with Application to Phylogenetics and Oligonucleotide Probe Design

Nordberg, Eric Kinsley 09 December 2015
The demands placed on scientific software are different from those placed on general purpose software, and as a result, creating software for science and for scientists requires a specialized approach. Many software engineering practices were developed for situations in which a tool is desired to perform some definable task, with measurable and verifiable outcomes: the users and the developers know what the tool "should" do. Scientific software often uses unproven or experimental techniques to address unsolved problems, and it is often run on experimental High Performance Computing hardware, adding another layer of complexity. It may not be possible to say what the software should do, or what the results should be, as these may be connected to the very scientific questions for which the software is being developed. Software development in this realm requires a deep understanding of the relevant scientific domain area. The present work describes applications resulting from a scientific software development process that builds upon a detailed understanding of the scientific domain area. YODA is an application primarily for selecting microarray probe sequences for measuring gene expression. At the time of its development, none of the existing programs for this task satisfied the best-known requirements for microarray probe selection. The question of what makes a good microarray probe was an active research area at the time, and YODA was developed to incorporate the latest understanding of these requirements, drawn from the research literature, into a tool usable by a research biologist. An appendix examines the response and use in the years since YODA was released. PEPR is a software system for inferring highly resolved whole-genome phylogenies for hundreds of genomes. It encodes a process, developed through years of research and collaboration, that produces some of the highest quality phylogenies available for large sets of bacterial genomes with no manual intervention required. This process is described in detail, and results are compared with high quality results from the literature to show that the process is at least as successful as more labor-intensive manual efforts. An appendix presents additional results, including high quality phylogenies for many bacterial Orders. / Ph. D.
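For a flavour of the domain knowledge such a tool encodes, the hedged sketch below applies two classic screening criteria for candidate oligonucleotide probes: GC fraction and the Wallace-rule melting-temperature estimate for short oligos. The thresholds are illustrative; YODA's actual criteria, drawn from the research literature, were considerably more extensive.

```python
def gc_fraction(seq):
    # Fraction of bases that are G or C.
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Wallace rule: Tm ~ 2(A+T) + 4(G+C) degrees C, a rough estimate for short oligos.
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def passes_screen(seq, gc_lo=0.40, gc_hi=0.60, tm_lo=50, tm_hi=65):
    # Illustrative thresholds only; real probe selection weighs many more factors
    # (cross-hybridization, secondary structure, position in the transcript, ...).
    return gc_lo <= gc_fraction(seq) <= gc_hi and tm_lo <= wallace_tm(seq) <= tm_hi

print(passes_screen("ATGCGTACGTTAGCCTAGGC"))  # example 20-mer -> True
```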
4

UVLabel: A Tool for the Future of Interferometry Analysis

January 2019
UVLabel was created to enable radio astronomers to view and annotate their own data so that they can expand their future research paths. It simplifies the data rendering process by providing a simple user interface for accessing sections of their data, and it provides a labelling feature for tracking trends in the data. The tool was developed following the incremental development process in order to quickly create a functional and testable tool; the incremental process also allowed feedback from radio astronomers to guide the project's development. UVLabel provides both a functional product and a modifiable, scalable code base for radio astronomer developers, giving astronomers who study interferometric data the ability to label that data. The tool can then be used to improve filtering methods, pursue machine learning solutions, and discover new trends. Finally, UVLabel will be open source, putting customization, scalability, and adaptability in the hands of these researchers. / Dissertation/Thesis / Masters Thesis, Software Engineering, 2019
5

A Model for Run-time Measurement of Input and Round-off Error

Meng, Nicholas Jie 25 September 2012
For scientists, the accuracy of their results is a constant concern. As the programs they write to support their research grow in complexity, there is a greater need to understand what causes the inaccuracies in their outputs, and how these can be mitigated. The problem is difficult because the inaccuracies come from a variety of sources in both the scientific and computing domains, and because most programs lack a testing oracle, leaving no simple way to validate the results. We define a model for the analysis of error propagation in software. Its novel combination of interval arithmetic and automatic differentiation allows the error accumulated in an output to be measured at runtime, attributed to individual inputs and functions, and identified as input error, round-off error, or error from a different source. This allows for the identification of the subset of inputs and functions that are most responsible for the error seen in an output, and of how that error can best be mitigated. We demonstrate the effectiveness of our model on a small case study from the field of nuclear engineering, where we are able to attribute over 99% of the error to 3 functions out of 15 and to identify the causes of the observed error. / Thesis (Master, Computing) -- Queen's University, 2012.
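A minimal sketch of the model's core idea, as this abstract describes it: a value carries both an error enclosure (interval arithmetic) and per-input sensitivities (forward-mode automatic differentiation). This is an illustration of the technique, not the thesis's implementation, and it ignores outward rounding of the interval bounds for brevity.

```python
class Tracked:
    """A value carrying an enclosing interval and partial derivatives."""
    def __init__(self, value, lo=None, hi=None, deriv=None):
        self.value = value
        self.lo = value if lo is None else lo
        self.hi = value if hi is None else hi
        self.deriv = deriv or {}          # input name -> partial derivative

    @classmethod
    def input(cls, name, value, err):
        # An input with measurement error +/- err contributes an interval
        # and a seed derivative of 1.0 with respect to itself.
        return cls(value, value - err, value + err, {name: 1.0})

    def __add__(self, other):
        d = {k: self.deriv.get(k, 0.0) + other.deriv.get(k, 0.0)
             for k in {*self.deriv, *other.deriv}}
        return Tracked(self.value + other.value,
                       self.lo + other.lo, self.hi + other.hi, d)

    def __mul__(self, other):
        ends = [self.lo * other.lo, self.lo * other.hi,
                self.hi * other.lo, self.hi * other.hi]
        d = {k: self.deriv.get(k, 0.0) * other.value
                + other.deriv.get(k, 0.0) * self.value
             for k in {*self.deriv, *other.deriv}}
        return Tracked(self.value * other.value, min(ends), max(ends), d)

x = Tracked.input("x", 2.0, 0.01)
y = Tracked.input("y", 3.0, 0.02)
z = x * y + x
# The interval width bounds the accumulated error; the derivatives
# attribute it to individual inputs (here dz/dx = 4.0, dz/dy = 2.0).
print(z.value, (z.lo, z.hi), z.deriv)
```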
6

Software Development Productivity Metrics, Measurements and Implications

Gupta, Shweta 06 September 2018
The rapidly increasing capabilities and complexity of numerical software present a growing challenge to software development productivity. While many open source projects enable the community to share experiences, learn, and collaborate, estimating individual developer productivity becomes more difficult as projects expand. In this work, we analyze several HPC software Git repositories with issue trackers and compute productivity metrics that can be used to better understand and potentially improve development processes. Evaluating productivity in these communities presents additional challenges because bug reports and feature requests are often made on mailing lists instead of in issue trackers, resulting in unstructured data that is difficult to analyze. For such data, we investigate automatic tag generation using natural language processing techniques. We aim to produce metrics that help quantify productivity improvement or degradation over the projects' lifetimes. We also provide an objective measurement of productivity based on effort estimation for the developers' work.
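One of the simplest metrics of this kind can be mined directly from a repository's history. The hedged sketch below counts commits per author per month; the repository path and the metric's definition are illustrative, not the thesis's exact formulation.

```python
import subprocess
from collections import Counter

def commits_per_author_month(repo_path):
    # One "%ae|%ad" line per commit, with the date collapsed to YYYY-MM.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ae|%ad",
         "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True).stdout
    counts = Counter()
    for line in log.splitlines():
        email, month = line.split("|", 1)
        counts[email, month] += 1
    return counts

# Example (hypothetical path "."): top author-months by commit count.
for (email, month), n in commits_per_author_month(".").most_common(5):
    print(f"{month}  {email}: {n} commits")
```

Raw commit counts are of course a crude proxy; metrics like those studied in the thesis would weigh issue-tracker activity and effort estimates as well.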
7

Simulating Atmosphere and the TolTEC Detector Array for Data Reduction Pipeline Evaluation

January 2019
TolTEC is a three-color millimeter wavelength camera currently being developed for the Large Millimeter Telescope (LMT) in Mexico. Synthesizing data from previous astronomy cameras as well as knowledge of atmospheric physics, I have developed a simulation of TolTEC's data collection on the LMT. The simulation was built on smaller sub-projects that informed its development with an understanding of the detector array, the time streams for astronomical mapping, and the science behind Lumped Element Kinetic Inductance Detectors (LEKIDs). Additionally, key aspects of software development processes were integrated into the scientific development process to streamline collaboration across multiple universities and to plan for integration on the servers at the LMT. This work benefits the data reduction pipeline team by enabling them to develop their software efficiently and test it on simulated data. / Dissertation/Thesis / Masters Thesis, Software Engineering, 2019
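As a hedged sketch of the kind of output such a simulation produces (not the actual TolTEC simulator), the fragment below generates a detector time stream as a scanned Gaussian source plus 1/f-shaped atmospheric noise and white detector noise; every parameter is an illustrative placeholder.

```python
import numpy as np

def synthetic_timestream(n=10_000, fs=488.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    # Beam crossing a Gaussian source mid-scan (amplitudes are arbitrary).
    signal = 5.0 * np.exp(-0.5 * ((t - t.mean()) / 0.5) ** 2)
    # Atmosphere: shape white noise to a roughly 1/f spectrum.
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    f[0] = f[1]                       # avoid division by zero at DC
    atmosphere = np.fft.irfft(spec / f, n)
    atmosphere /= atmosphere.std()    # normalize before scaling
    noise = 0.3 * rng.standard_normal(n)
    return t, signal + 2.0 * atmosphere + noise

t, data = synthetic_timestream()  # feed to a pipeline under test
```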
8

Improving the Usability of Complex Life Science Software: Designing for a significant difference in skill among users in a scientific work domain

Rabe, Erik January 2023
The usability of complex scientific software is often lacking, as it tends not to receive high priority in development and its developers are usually engineers with little knowledge of usability. This study examines such software in an environment with significant differences in user skill, which complicates efforts to improve usability: novice users need a higher degree of learnability to understand how to operate the system, but this cannot come at the cost of reduced complexity, since experienced users require that complexity to perform more advanced tasks. To find out how usability could be increased under these conditions, qualitative interviews were conducted with users of the software. The gathered data was subjected to a thematic analysis, which served as the foundation for a functional prototype of a new design that was iteratively tested and evaluated with users. The design integrates a somewhat novel feature, a zoom-in function that acts as an adaptable view in which the user can visualize a more complex layer of the software. The study also highlights the importance of correctly identifying central user activities in an environment where tasks differ greatly in complexity, in order to make more informed design decisions about visual priority.
9

Scientific Software Integration: A Case Study of SWMM and PEST++

Kamble, Suraj January 2017 (has links)
No description available.
10

A Document-Driven Approach to Certifying Scientific Computing Software

Koothoor, Nirmitha 10 1900
With the general engineering practices currently followed for the development of scientific software, scientists are seemingly able to simulate real world problems successfully and generate accurate numerical results. However, scientific software is rarely presented in such a way that an external reviewer would feel comfortable certifying that the software is fit for its intended use. The documentation of software development - requirements, design and implementation - is not given the importance it deserves. Often, the requirements are improperly and insufficiently recorded, which makes design decisions difficult. Similarly, incomplete documentation of design decisions and numerical algorithms makes the implementation difficult. Lack of traceability between the requirements, the design and the code leads to problems with building confidence in the results.

To study the problems faced during certification, a case study was performed on legacy software used by a nuclear power generating company in the 1980s for safety analysis in a nuclear reactor. Unlike many other scientific codes of that time, the company's code included a full theory manual. Although the theory manual was very helpful, the documentation and development approach still needed significant updating: the case study found 27 issues with the documentation in the theory manual, 2 opportunities to update the design, and 6 programming style issues in the original FORTRAN code. This shows room for improvement in the documentation techniques used in developing scientific software based on a physical model.

This thesis addresses the certification problem by introducing software engineering methodologies into the documentation of scientific software. It proposes a new template for the Software Requirements Specification (SRS) to clearly and sufficiently state the functional and non-functional requirements while satisfying the desired qualities of a good SRS; the template also acts as a checklist that helps in systematically and adequately developing the requirements document. For design and implementation, the thesis introduces Literate Programming (LP) as an alternative to traditional structured programming. Literate Programming documents the numerical algorithms, the logic behind the development, and the code together in a single document, the Literate Programmer's Manual (LPM), which is developed in connection with the SRS. The explicit traceability between theory, numerical algorithms and implementation (code) simplifies verification and the associated certification. / Master of Applied Science (MASc)
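As a hedged illustration of the traceability this abstract advocates (the requirement ID, chunk name, and formula below are placeholders, not taken from the nuclear code studied), a code fragment written in a literate style points back to its SRS requirement and LPM chunk:

```python
def fuel_centerline_temp(t_surface, q_vol, radius, k_fuel):
    """Steady-state centerline temperature of a cylindrical fuel pellet.

    Implements LPM chunk <<centerline-temperature>>, which refines
    hypothetical requirement SRS-R12 ("compute peak fuel temperature").
    Model: T_c = T_s + q''' * r^2 / (4 k), the standard result for a solid
    cylinder with uniform volumetric heat generation.
    """
    return t_surface + q_vol * radius ** 2 / (4.0 * k_fuel)
```

The point is not the physics but the chain of references: an external reviewer can follow the chunk name into the LPM for the derivation and the requirement ID into the SRS for the obligation the code discharges.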
