1

A COMPARATIVE STUDY OF NEIGHBOR JOINING BASED APPROACHES FOR PHYLOGENETIC INFERENCE

Correa, Maria Fernanda, 01 December 2010
One of the most relevant problems in biology is unveiling the evolutionary history of different species and organisms. The evolutionary relationships of these species and organisms are explained by constructing phylogenetic trees whose leaves represent species and whose internal nodes represent hypothesized ancestors. The tree reconstruction process is known as phylogenetic inference. Phylogenies can be used not only for explaining the evolutionary history of organisms but also for many other purposes, such as the design of new drugs by tracking the evolution of diseases. In the last few years, the amount of genetic data collected from organisms and species has increased greatly. Consequently, biologists have sought methods that can compute phylogenies of small, medium, and even large datasets accurately and in a reasonable time. The neighbor-joining method is one of the most widely used methods for phylogenetic inference because of its computational efficiency. As datasets have grown, novel neighbor-joining-based approaches have been developed with the goal of computing accurate phylogenies of thousands of sequences efficiently. This study therefore compared the canonical neighbor-joining method, represented by the MEGA software, with two novel neighbor-joining-based approaches, the NINJA method and the FastTree method, to identify the most efficient and effective method in terms of computational performance, topological accuracy, and topological similarity as the number of sequences scales. The study was accomplished by executing experiments on small, medium, and large protein and nucleotide sequence datasets. In this study, the FastTree method was the most successful at balancing the trade-off among computational performance, topological accuracy, and topological similarity when scaling up the number of sequences.
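For context, the core operation that all three tools build on is the neighbor-joining join step. Below is a minimal, illustrative NumPy sketch of one iteration (selecting the pair to join from a distance matrix and computing its branch lengths); it is not taken from MEGA, NINJA, or FastTree, which rely on additional heuristics and data structures to scale to thousands of sequences.

    import numpy as np

    def neighbor_joining_step(d):
        """One iteration of neighbor joining on a symmetric (n, n) float
        distance matrix d with n > 3 taxa.

        Returns the pair of indices (i, j) to join and the branch lengths
        from each member of the pair to the new internal node.
        """
        n = d.shape[0]
        row_sums = d.sum(axis=1)

        # Q-matrix: Q[i, j] = (n - 2) * d[i, j] - sum_k d[i, k] - sum_k d[j, k]
        q = (n - 2) * d - row_sums[:, None] - row_sums[None, :]
        np.fill_diagonal(q, np.inf)  # never join a taxon with itself

        i, j = np.unravel_index(np.argmin(q), q.shape)

        # Branch lengths from taxa i and j to the new internal node u
        li = 0.5 * d[i, j] + (row_sums[i] - row_sums[j]) / (2 * (n - 2))
        lj = d[i, j] - li
        return (i, j), (li, lj)

    # Distances from the new node u to each remaining taxon k would then be
    # d[u, k] = 0.5 * (d[i, k] + d[j, k] - d[i, j]), and the procedure repeats
    # until only two nodes remain.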
2

Shortening time-series power flow simulations for cost-benefit analysis of LV network operation with PV feed-in

López, Claudio David, January 2015
Time-series power flow simulations are consecutive power flow calculations on each time step of a set of load and generation profiles that represent the time horizon over which a network needs to be analyzed. These simulations are one of the fundamental tools for carrying out cost-benefit analyses of grid planning and operation strategies in the presence of distributed energy resources; unfortunately, their execution time is quite substantial. In the specific case of cost-benefit analyses, the execution time of time-series power flow simulations can easily become excessive, as typical time horizons are on the order of a year and different scenarios need to be compared, which results in time-series simulations that require a rather large number of individual power flow calculations. Often only a set of aggregated simulation outputs is required for assessing grid operation costs, examples of which are total network losses, power exchange through MV/LV substation transformers, and total power provision from PV generators. It can therefore be beneficial to explore alternatives to running time-series power flow simulations with complete input data: methods that approximate the required results with an accuracy suitable for cost-benefit analyses but require less time to compute. This thesis explores and compares different methods for shortening time-series power flow simulations by reducing the amount of input data and thus the required number of individual power flow calculations, and focuses on two of them: one reduces the time resolution of the input profiles through downsampling, while the other finds similar time steps in the input profiles through vector quantization and simulates them only once. The results show that considerable execution time reductions and sufficiently accurate results can be obtained with both methods, but vector quantization requires much less data to produce the same level of accuracy as downsampling. Vector quantization delivers a far superior trade-off between data reduction, time savings, and accuracy when the simulations consider voltage control or when more than one simulation with the same input data is required, as in such cases the data reduction process can be carried out only once. One disadvantage of these methods is that they do not reproduce peak values in the result profiles accurately, because downsampling disregards certain time steps in the input profiles and vector quantization averages over them. This disadvantage makes the shortened simulations less precise for tasks such as detecting voltage violations.
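To illustrate the two reduction strategies compared here, the sketch below shortens a set of input profiles before the power flows are run. The function names and the run_power_flow placeholder are hypothetical, and plain k-means stands in for the vector quantization scheme, so this is an assumption-laden sketch rather than the thesis implementation.

    import numpy as np

    def downsample(profiles, factor):
        """Reduce time resolution by averaging blocks of `factor` consecutive steps.

        profiles: float array of shape (T, n_profiles), one column per load/PV profile.
        Returns the shortened profiles and the weight (= factor) of each kept step.
        """
        T = profiles.shape[0] - profiles.shape[0] % factor  # drop any trailing remainder
        blocks = profiles[:T].reshape(-1, factor, profiles.shape[1])
        return blocks.mean(axis=1), np.full(T // factor, factor)

    def quantize(profiles, n_clusters, iters=50, seed=0):
        """Group similar time steps with plain k-means so each group is simulated once.

        Returns representative time steps (cluster centroids) and the number of
        original steps each one stands for, so aggregated outputs such as yearly
        losses can be recovered as a weighted sum.
        """
        rng = np.random.default_rng(seed)
        centroids = profiles[rng.choice(len(profiles), n_clusters, replace=False)].astype(float)
        for _ in range(iters):
            dist = ((profiles[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = dist.argmin(axis=1)
            for k in range(n_clusters):
                if np.any(labels == k):
                    centroids[k] = profiles[labels == k].mean(axis=0)
        weights = np.bincount(labels, minlength=n_clusters)
        return centroids, weights

    # Hypothetical usage: run one power flow per reduced step and weight the outputs,
    # e.g. yearly_losses = sum(w * run_power_flow(step).losses
    #                          for step, w in zip(reduced_steps, weights))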
3

Advancing the Cyberinfrastructure for Integrated Water Resources Modeling

Buahin, Caleb A., 01 December 2017
Like other scientists, hydrologists encode mathematical formulations that simulate various hydrologic processes as computer programs so that problems with water resource management that would otherwise be manually intractable can be solved efficiently. These computer models are typically developed to answer specific questions within a specific study domain. For example, one computer model may be developed to solve for magnitudes of water flow and water levels in an aquifer while another may be developed to solve for magnitudes of water flow through a water distribution network of pipes and reservoirs. Interactions between different processes are often ignored or are approximated using overly simplistic assumptions. The increasing complexity of the water resources challenges society faces, including stresses from variable climate and land use change, means that some of these models need to be stitched together so that these challenges are not evaluated myopically from the perspective of a single research discipline or study domain. The research in this dissertation presents an investigation of the various approaches and technologies that can be used to support model integration. The research delves into some of the computational challenges associated with model integration and suggests approaches for dealing with these challenges. Finally, it advances new software that provides data structures that water resources modelers are more accustomed to and allows them to take advantage of advanced computing resources for efficient simulations.
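The loose, component-based coupling pattern discussed here can be illustrated with a minimal interface and a time-stepping exchange loop. The class, method, and exchange-item names below (ModelComponent, seepage, leakage, and so on) are hypothetical and only sketch the general pattern, not the dissertation's actual software.

    from abc import ABC, abstractmethod

    class ModelComponent(ABC):
        """Illustrative component interface: each model advances itself one time
        step and exposes named exchange items (fluxes, levels) at its boundaries."""

        @abstractmethod
        def prepare(self): ...

        @abstractmethod
        def update(self, dt): ...

        @abstractmethod
        def get_value(self, item): ...

        @abstractmethod
        def set_value(self, item, value): ...

    def run_coupled(aquifer, pipe_network, t_end, dt):
        """Loosely coupled, explicit time-stepping loop: at each shared step the
        aquifer model passes seepage to the pipe network and receives leakage back."""
        aquifer.prepare()
        pipe_network.prepare()
        t = 0.0
        while t < t_end:
            # Exchange boundary values computed at the previous step (explicit coupling).
            pipe_network.set_value("infiltration", aquifer.get_value("seepage"))
            aquifer.set_value("leakage", pipe_network.get_value("leakage"))
            aquifer.update(dt)
            pipe_network.update(dt)
            t += dt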
