About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Numerical Algorithms for Mapping of Multiple Quantitative Trait Loci in Experimental Populations

Ljungberg, Kajsa. January 2005
Most traits of medical or economic importance are quantitative, i.e. they can be measured on a continuous scale. Strong biological evidence indicates that quantitative traits are governed by a complex interplay between the environment and multiple quantitative trait loci (QTL) in the genome. Nonlinear interactions make it necessary to search for several QTL simultaneously. This thesis concerns numerical methods for QTL search in experimental populations. The core computational problem of a statistical analysis of such a population is a multidimensional global optimization problem with many local optima. Simultaneous search for d QTL involves solving a d-dimensional problem, where each evaluation of the objective function requires solving one or several least squares problems with special structure. Using standard software, even a two-dimensional search is costly, and searches in higher dimensions are prohibitively slow. Three efficient algorithms for evaluating the most common forms of the objective function are presented. First, the computing time for the linear regression method is reduced by up to one order of magnitude on real data examples by a new scheme based on updated QR factorizations. Second, the objective function for the interval mapping method is evaluated using an updating technique and an efficient iterative method, resulting in a 50 percent reduction in computing time. Finally, a third algorithm, applicable to the imputation and weighted linear mixture model methods, reduces the computing time by between one and two orders of magnitude. The global search problem is also investigated. Standard software techniques for finding the global optimum of the objective function are compared with a new approach based on the DIRECT algorithm. The new method is more accurate than the previously fastest scheme and locates the optimum in 1-2 orders of magnitude less time. The method is further developed by coupling DIRECT to a local optimization algorithm for accelerated convergence, leading to additional time savings of up to eight times. A parallel grid computing implementation of exhaustive search is also presented; it is suitable, e.g., for verifying global optima when developing efficient optimization algorithms tailored to the QTL mapping problem. Using the algorithms presented in this thesis, simultaneous search for at least six QTL can be performed routinely. The decrease in overall computing time is several orders of magnitude. The results imply that computations which were earlier considered impossible are no longer difficult, and that genetic researchers are thus free to focus on model selection and other central genetic issues.
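To make the computational kernel concrete, here is a minimal brute-force sketch (not Ljungberg's updated-QR or DIRECT-based algorithms) of the objective function an exhaustive two-QTL search must evaluate at every pair of candidate loci; the genotype coding and simulated data below are invented for illustration.

```python
# Minimal sketch of the core kernel in multiple-QTL mapping: an exhaustive
# two-dimensional search that solves one small least squares problem per
# pair of candidate loci. The thesis replaces the naive lstsq call below
# with updated QR factorizations and the grid search with DIRECT; this
# brute-force version only illustrates the objective function.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_positions = 200, 50

# Hypothetical genotype codes (-1/0/1) at each candidate locus, plus a
# simulated phenotype influenced by two true QTL at positions 10 and 37.
genotypes = rng.integers(-1, 2, size=(n_individuals, n_positions)).astype(float)
phenotype = 1.5 * genotypes[:, 10] - 2.0 * genotypes[:, 37] + rng.normal(size=n_individuals)

def rss(i, j):
    """Residual sum of squares for a two-QTL regression model at loci (i, j)."""
    X = np.column_stack([np.ones(n_individuals), genotypes[:, i], genotypes[:, j]])
    residuals = phenotype - X @ np.linalg.lstsq(X, phenotype, rcond=None)[0]
    return residuals @ residuals

# d = 2 search: O(n_positions^2) least squares solves; the cost growth in
# higher d is why fast kernel evaluation and smarter global search matter.
best = min((rss(i, j), i, j) for i in range(n_positions)
           for j in range(i + 1, n_positions))
print("best RSS %.1f at loci (%d, %d)" % best)
```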
2

Large scale numerical software development using functional languages

Angus, Christopher Michael. January 1998
Functional programming languages such as Haskell allow numerical algorithms to be expressed in a concise, machine-independent manner that closely reflects the underlying mathematical notation in which the algorithm is described. Unfortunately, the price paid for this level of abstraction is usually a considerable increase in execution time and space usage. This thesis presents a three-part study of the use of modern purely functional languages to develop numerical software. In Part I the appropriateness and usefulness of language features such as polymorphism, pattern matching, type-class overloading and non-strict semantics are discussed, together with the limitations they impose. Quantitative statistics concerning the manner in which these features are used in practice are also presented. In Part II the information gathered from Part I is used to design and implement FSC, an experimental functional language tailored to numerical computing, motivated as much by pragmatic as by theoretical issues. This language is then used to develop numerical software, and its suitability is assessed by benchmarking it against C/C++ and Haskell under various metrics. In Part III the work is summarised and assessed.
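As a language-neutral taste of the style the abstract describes — numerical code written with higher-order functions that stays close to the mathematics — here is a small sketch in Python (the language used for all examples in this listing); the thesis's own examples are in Haskell and FSC, and nothing below is taken from it.

```python
# Newton's method written as a lazy stream of iterates, mirroring the
# mathematical definition x_{k+1} = x_k - f(x_k)/f'(x_k). In Haskell this
# is an `iterate`-style one-liner; generators give Python a comparable
# non-strict flavor. The thesis studies what such abstraction costs in
# execution time and space.
from itertools import islice

def newton(f, df, x0):
    """Yield the Newton iterates x0, x0 - f(x0)/f'(x0), ... lazily."""
    x = x0
    while True:
        yield x
        x = x - f(x) / df(x)

# sqrt(2) as the root of f(x) = x^2 - 2
iterates = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(list(islice(iterates, 6)))  # converges toward 1.41421356...
```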
3

Voronoi site modeling: a computer model to predict the binding affinity of small flexible molecules

Richardson, Wendy Westenberg. January 1993
Thesis (Ph. D.)--University of Michigan.
4

FASCS: A Family Approach for Developing Scientific Computing Software

Yu, Wen
Scientific Computing (SC) software has had considerable success in achieving improvements in the quality factors of accuracy, precision and efficiency. However, other software quality factors, such as reusability, maintainability, reliability and usability, are often neglected. This thesis proposes a new methodology, Family Approach for developing Scientific Computing Software (FASCS), to improve the overall quality of SC software. In particular, the aim is to benefit the development of professional end-user-developed SC programs. FASCS is the first methodology to apply a family approach to developing SC software in which all stages of both the domain engineering phase and the application engineering phase are included. In addition, the challenges specific to SC software and the characteristics of professional end-user developers are considered. A proof-of-concept program family, FFEMP, which solves elasticity problems in solid mechanics using the Finite Element Method (FEM), is developed to illustrate how the proposed methodology can be used.

Part of FASCS is a new methodology for systematically eliciting, analyzing and documenting common and variable requirements for a program family, termed Goal Oriented Commonality Analysis (GOCA). GOCA proposes two layers of modeling, a theoretical model and a computational model, to resolve the conflict between the continuous mathematical models that represent the underlying theories of SC problems and the discrete nature of a computer. The theoretical and computational models are kept abstract and documented separately to improve reusability. GOCA also includes explicitly defined and documented terminology for models and requirements, which helps avoid ambiguity, a potential source of reduced reliability. Traceability of current and future changes is used to further improve reusability and maintainability.

FASCS includes a Family Member Development Environment (FMDE) for the automatic generation of family members. FMDE is apparently the first complete environment that facilitates automatically generating variable code and test cases for SC program families. The variable code for a specific member of the program family can be automatically generated from a list of variabilities written in a Domain Specific Language (DSL), which is considerably easier than manually writing code for the family member. Some benchmark test cases for the program family can also be automatically generated. Since both family members and test cases can be generated automatically, the program family can be tested on the same computational domain with different computational variabilities. This provides partially independent implementations whose test results can be compared to detect potential flaws, which partly addresses the unknown-solution challenge for SC software.

Documentation is also an important part of FASCS. Five new templates for documenting requirements and design are proposed. Traceability matrices, which relate artifacts (and documents) in the different stages of the process, facilitate understanding of the programs and can improve reusability and maintainability by helping trace changes.

Nonfunctional requirements, especially nonfunctional variable requirements, are rarely considered in the development of program families; to the knowledge of the author, nonfunctional variable requirements have never been considered in the development of SC program families. Since some nonfunctional requirements are important for SC software, FASCS proposes using decision-making techniques, such as the Analytic Hierarchy Process, to rank nonfunctional variable requirements and select appropriate components to fulfill them. / Thesis / Doctor of Philosophy (PhD)
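The abstract does not reproduce FASCS's DSL, so the following is a purely hypothetical Python sketch of the generative idea behind FMDE: a family member is instantiated from a checked list of variabilities instead of being coded by hand. All variability names, supported values and the template are invented for illustration.

```python
# Hypothetical sketch of generating a program-family member from a
# variability specification. The commonality analysis fixes the template
# (the common part); the spec supplies the variable part; nothing here is
# FASCS's actual notation.
FAMILY_TEMPLATE = """\
# auto-generated FEM family member
ELEMENT = "{element}"          # variable: element type
QUADRATURE_ORDER = {quad}      # variable: quadrature order
SOLVER = "{solver}"            # variable: linear solver
"""

SUPPORTED = {"element": {"tri3", "quad4"},
             "quad": {1, 2, 3},
             "solver": {"cg", "lu"}}

def generate_member(variabilities):
    """Check a variability spec against the commonality analysis, then
    instantiate the common template -- the 'application engineering' step."""
    for name, value in variabilities.items():
        if value not in SUPPORTED[name]:
            raise ValueError(f"unsupported variability {name}={value!r}")
    return FAMILY_TEMPLATE.format(**variabilities)

print(generate_member({"element": "quad4", "quad": 2, "solver": "cg"}))
```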
5

Fault diagnosis and yield enhancement in defect-tolerant VLSI/WSI parallel architectures

Wang, Kuochen. January 1991
This dissertation presents an integrated high-level computer-aided design (CAD) environment, the VAR (VHDL-based Array Reconfiguration) system, for the tasks of design, diagnosis, reconfiguration, simulation, and evaluation in a defect-tolerant VLSI/WSI (Wafer Scale Integration) parallel architecture modeled in VHDL. Four issues in the VAR system are studied: (1) the development of a CAD framework for reconfigurable architectures, (2) the development of an array model and its VHDL description and simulation, (3) the development of efficient fault diagnosis techniques, and (4) the development of a systematic method for evaluating architectures and yield. The first issue concerns the modules in the CAD framework and their functionalities. The second addresses the hierarchical VHDL description and simulation of the array model and the detailed designs of its components. The third proposes two fault diagnosis algorithms, based on the parallel partition approach and the self-comparison approach respectively, together with an optimal group diagnosis procedure; all of these techniques significantly reduce testing time under different application scenarios. The fourth defines a complete set of figures of merit for quantitative architecture and yield evaluation. Although an easily diagnosable and reconfigurable two-dimensional defect-tolerant array is used as an example to illustrate the methodology, the VAR environment can equally be applied to other parallel architectures. VAR allows designers to study and evaluate fault diagnosis and reconfiguration algorithms by inserting faults, generated according to actual manufacturing yield data, into the array, then locating the faulty elements and simulating the reconfiguration process. Thus VAR can assist designers in evaluating different combinations of fault patterns, fault diagnosis and reconfiguration techniques, and reconfigurable architectures through the figures of merit, with an eye toward architectural improvements. Extensive simulation and evaluation have been performed to demonstrate the effectiveness of VAR. The results of this research can bring applications of large-area VLSI or WSI closer to reality and help produce low-cost, high-yield parallel architectures.
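To illustrate the fault-insertion-then-reconfigure workflow the abstract describes, here is a small Python sketch; the spare-row bypass scheme and the defect probability are invented stand-ins, not VAR's VHDL model or its reconfiguration algorithms.

```python
# Illustrative workflow: insert faults into a 2-D processor array
# according to a per-element defect probability (in VAR this would come
# from actual manufacturing yield data), then attempt reconfiguration.
# A trivial spare-row scheme stands in for the thesis's algorithms.
import random

random.seed(1)
ROWS, COLS, SPARE_ROWS = 8, 8, 2
fault_prob = 0.03  # hypothetical per-element defect probability

# Fault insertion: True marks a defective processing element.
faults = [[random.random() < fault_prob for _ in range(COLS)]
          for _ in range(ROWS + SPARE_ROWS)]

def reconfigure(faults):
    """Bypass every row containing a fault; succeed if enough clean rows
    remain to provide the full ROWS x COLS logical array."""
    clean_rows = [r for r, row in enumerate(faults) if not any(row)]
    if len(clean_rows) < ROWS:
        return None
    return clean_rows[:ROWS]  # physical rows mapped to logical rows 0..ROWS-1

mapping = reconfigure(faults)
print("reconfiguration", "succeeded:" if mapping else "failed:",
      mapping or "too few clean rows")
```

Running many such trials against different fault patterns is exactly the kind of quantitative yield evaluation the figures of merit are meant to support.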
6

A pattern matching system for biosequences

Mehldau, Gerhard. January 1991
String pattern matching is an extensively studied area of computer science. Over the past few decades, many important theoretical results have been discovered, and a large number of practical algorithms have been developed for efficiently matching various classes of patterns. A variety of general pattern matching tools and specialized programming languages have been implemented for applications in areas such as lexical analysis, text editing, and database searching. Most recently, the field of molecular biology has been added to the growing list of applications that make use of pattern matching technology. The requirements of biological pattern matching differ from traditional applications in several ways. First, the amount of data to be processed is very large, so highly efficient pattern matching tools are required. Second, the data to be searched are obtained from biological experiments, where error rates of up to 5% are not uncommon; in addition, patterns are often averaged from several biologically similar sequences. To be useful, therefore, pattern matching tools must accommodate some notion of approximate matching. Third, formal language notations such as regular expressions, which are commonly used in traditional applications, are insufficient for describing many of the patterns that interest biologists, so any conventional notation must be significantly enhanced to accommodate such patterns. Taken together, these differences render most existing pattern matching tools inadequate and have created a need for specialized pattern matching systems. This dissertation presents a pattern matching system that specifically addresses the three issues outlined above. A notation for defining patterns is developed by extending the regular expression syntax in a consistent way. Using this notation, virtually any pattern of interest to biologists can be expressed in an intuitive and concise manner. The system further incorporates a very flexible notion of approximate pattern matching that unifies most previously developed concepts. Last but not least, the system employs a novel, optimized backtracking algorithm, which enables it to efficiently search even very large databases.
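The "approximate matching" notion is the crux of the biological requirements above. As a reference point, here is the classic dynamic-programming approach (Sellers' algorithm) in Python, reporting every position where a pattern occurs with at most k edits; the dissertation's optimized backtracking matcher is a different, faster algorithm not shown here.

```python
# Approximate substring search: report every end position in `text` where
# `pattern` matches with at most k edits (insertions, deletions,
# substitutions). One DP column per text character; row 0 is free so a
# match may start anywhere.
def approx_find(pattern, text, k):
    """Yield (end_position, edit_distance) for matches with <= k edits."""
    m = len(pattern)
    prev = list(range(m + 1))          # column for the empty text prefix
    for j, c in enumerate(text, 1):
        curr = [0]                     # free start: any prefix of text
        for i in range(1, m + 1):
            curr.append(min(prev[i] + 1,                          # deletion
                            curr[i - 1] + 1,                      # insertion
                            prev[i - 1] + (pattern[i - 1] != c))) # (mis)match
        if curr[m] <= k:
            yield j, curr[m]
        prev = curr

# e.g. searching for the motif TATAAT while tolerating one error, as
# sequence data with error rates near 5% demands
print(list(approx_find("TATAAT", "GGTATGATCCTATAATG", k=1)))
```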
7

Computational Methods for Maximum Drawdown Options Under Jump-Diffusion

Fagnan, David Erik. January 2011
Recently, the maximum drawdown (MD) has been proposed as an alternative risk measure well suited to capturing downside risk. Furthermore, the maximum drawdown is associated with a Pain ratio and may therefore be a desirable insurance product. This thesis focuses on pricing the discrete maximum drawdown option under jump-diffusion by solving the associated partial integro-differential equation (PIDE). To achieve this, a finite difference method is used to solve a set of one-dimensional PIDEs, and appropriate observation conditions are applied at a set of observation dates. We handle arbitrary strikes on the option for both the absolute and the relative maximum drawdown, and then show that a similarity reduction is possible for the absolute maximum drawdown with zero strike and for the relative maximum drawdown with arbitrary strike. We present numerical tests of validation and convergence for various grid types and interpolation methods. These results agree with previous results for the maximum drawdown and indicate that scaled grids using tri-linear interpolation achieve the best rate of convergence. A comparison with mutual fund fees is performed to illustrate a possible rationalization for why investors continue to purchase such funds despite high management fees.
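As a quick reference for the quantity being priced, this Python sketch evaluates the discrete maximum drawdown of a price path sampled at the observation dates, using the standard definitions: "absolute" is the largest peak-to-trough drop in price units, "relative" divides each drop by the running peak. The example path is invented; the thesis's PIDE machinery prices options on these quantities rather than evaluating them on a realized path.

```python
# Discrete maximum drawdown over a set of observation dates.
def max_drawdowns(prices):
    """Return (absolute_MD, relative_MD) over the observed prices."""
    peak = prices[0]
    abs_md = rel_md = 0.0
    for p in prices[1:]:
        peak = max(peak, p)                    # running maximum
        abs_md = max(abs_md, peak - p)         # largest drop in price units
        rel_md = max(rel_md, (peak - p) / peak)  # largest drop as a fraction
    return abs_md, rel_md

path = [100, 108, 97, 103, 121, 110, 115]   # prices at observation dates
absolute, relative = max_drawdowns(path)
print(f"absolute MD = {absolute:.2f}, relative MD = {relative:.2%}")
# the zero-strike option on the absolute MD pays `absolute`;
# with strike K it pays max(absolute - K, 0)
```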
8

ProLAS: a Novel Dynamic Load Balancing Library for Advanced Scientific Computing

Krishnan, Manoj Kumar. 13 December 2003
Scientific and engineering problems are often large, complex, irregular and data-parallel. The performance of many parallel applications is affected by factors such as the irregular nature of the problem, differences in processor characteristics and runtime loads, non-uniform distribution of data, and unpredictable system behavior, all of which give rise to load imbalance. In general, to achieve high performance, dynamic load balancing strategies are embedded into solution algorithms. Over time, a number of dynamic load balancing algorithms have been implemented in software tools and successfully used in scientific applications. However, most of these tools use an iterative static approach that does not address irregularities arising during application execution, and they incur high scheduling overhead. During the last decade, a number of dynamic loop scheduling strategies have been proposed to address the causes of load imbalance in scientific applications running in parallel and distributed environments, but no single strategy works well for all scientific applications, and it is up to the user to select the best strategy and integrate it into the application. In most applications using dynamic load balancing, the load balancing algorithm is embedded directly in the application, with close coupling between the data structures of the application and the load balancing algorithm. This typical approach has two disadvantages. First, the integration of each newly developed load balancing algorithm into the application must be performed from scratch. Second, it is unlikely that the user has incorporated the optimal load balancing algorithm into the application; moreover, for a given application (across various problem sizes and numbers of processors), it is difficult to assess in advance the advantage of incorporating one load balancing algorithm over another. To overcome these drawbacks, there is a need for an application programming interface (API) for dynamically load balancing scientific applications using the recently developed dynamic loop scheduling algorithms. This thesis describes the design and development of such an API, called ProLAS, which is scalable and independent of the data structures of a host application. ProLAS performance is evaluated theoretically and experimentally (after being used in scientific applications). A qualitative and quantitative analysis of ProLAS is presented by comparing its performance with the state of the art in dynamic load balancing tools (e.g. the CHARM++ library) for parallel applications. The analysis of the experimental results of using ProLAS in several scientific applications indicates that it consistently outperforms the existing technology in dynamic load balancing.
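The design point the abstract argues for — a load balancer decoupled from the application's data structures — can be sketched in a few lines of Python. The scheduler below hands out loop iterations in shrinking chunks (guided self-scheduling, one of the classic dynamic loop scheduling strategies) and knows nothing about the work items; the application supplies only a work function. ProLAS's actual interface is not reproduced here, and the function names are hypothetical.

```python
# A data-structure-agnostic dynamic loop scheduler: iterations of
# [0, n_iterations) are dispensed in guided (shrinking) chunks to worker
# threads, which call back into opaque application code.
from concurrent.futures import ThreadPoolExecutor
import threading

def guided_schedule(n_iterations, n_workers, work_fn):
    """Run work_fn(start, stop) over [0, n_iterations) in guided chunks."""
    lock, next_iter = threading.Lock(), [0]

    def worker():
        while True:
            with lock:
                start = next_iter[0]
                if start >= n_iterations:
                    return
                # chunk shrinks as the remaining work shrinks
                chunk = max(1, (n_iterations - start) // (2 * n_workers))
                next_iter[0] = start + chunk
            work_fn(start, start + chunk)  # application-specific, opaque here

    with ThreadPoolExecutor(n_workers) as pool:
        for _ in range(n_workers):
            pool.submit(worker)

results = []
guided_schedule(1000, 4, lambda a, b: results.append((a, b)))
print(len(results), "chunks, first few:", sorted(results)[:4])
```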
