61

Detecting complex genetic mutations in large human genome data

Alsulaiman, Thamer 01 August 2019 (has links)
All cellular forms of life contain deoxyribonucleic acid (DNA), a molecule that carries the information necessary to perform both basic and complex cellular functions. DNA is replicated to form new tissues and organs and to pass genetic information to future generations. Ideally, replication yields an exact copy of the original DNA, but mistakes made during the replication process introduce accidental changes called mutations. Mutations vary in magnitude, and mutations of any magnitude vary in consequence, from no effect on the organism, to disease initiation (e.g. cancer), or even death. In this thesis, we limit our focus to mutations in human DNA, in particular MMBIR mutations. Recent literature in human genomics has identified microhomology-mediated break-induced replication (MMBIR) as a common mechanism producing complex mutations in DNA. MMBIRFinder is a tool that detects MMBIR regions in yeast DNA. Although MMBIRFinder is successful on yeast DNA, it cannot detect MMBIR mutations in human DNA, chiefly because of the amount of computation required to process the much larger human genome. Our contribution is twofold: 1) we use parallel computation to significantly reduce the processing time of the original MMBIRFinder and address several performance-degrading issues inherent in its design; 2) we introduce a new heuristic that detects MMBIR mutations missed by the original MMBIRFinder, even on small genomes such as yeast.
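
The core test in any MMBIR detector is whether the sequence at a breakpoint shares a short stretch of homology with a template region elsewhere in the reference. The sketch below illustrates that check and the read-level parallelism the abstract alludes to; it is a toy in C++ with OpenMP, not the MMBIRFinder algorithm, and the names (microhomology_len, flag_candidates, min_mh) are hypothetical.

```cpp
// Illustrative only: a simplified microhomology check of the kind that
// underlies MMBIR detection, parallelized over reads with OpenMP.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Length of the longest common prefix of two sequence fragments -- a
// crude proxy for the microhomology shared at a template-switch point.
std::size_t microhomology_len(const std::string& a, const std::string& b) {
    std::size_t n = std::min(a.size(), b.size()), i = 0;
    while (i < n && a[i] == b[i]) ++i;
    return i;
}

// Flag reads whose breakpoint shares at least `min_mh` bases of homology
// with some candidate template region. Reads are independent, so the
// outer loop is trivially parallel -- the kind of parallelism that makes
// human-scale data tractable.
std::vector<char> flag_candidates(const std::vector<std::string>& breakpoints,
                                  const std::vector<std::string>& templates,
                                  std::size_t min_mh) {
    std::vector<char> hit(breakpoints.size(), 0);
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(breakpoints.size()); ++i)
        for (const auto& t : templates)
            if (microhomology_len(breakpoints[i], t) >= min_mh) { hit[i] = 1; break; }
    return hit;
}
```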
62

A Skeleton library for Cell Broadband Engine / Ett Skelettbibliotek för Cell Broadband Engine

Ålind, Markus January 2008 (has links)
The Cell Broadband Engine processor is a powerful processor capable of over 220 GFLOPS. It is highly specialized and can be controlled in detail by the programmer, which makes the Cell significantly more complicated to program than a standard homogeneous multi-core processor such as the Intel Core2 Duo or Quad. This thesis explores the possibility of abstracting away some of the complexities of Cell programming while maintaining high performance. The abstraction is achieved through a library of parallel skeletons implemented in the bulk-synchronous parallel programming environment NestStep. The library includes constructs for user-defined, SIMD-optimized data-parallel skeletons such as map and reduce. The evaluation of the library includes porting a vector-based scientific computation program from sequential C code to the Cell using the library and the NestStep environment. The ported program performs well compared to the sequential original run on a high-end x86 processor. The evaluation also shows that a dot product implemented with the skeleton library is faster than the dot product in the IBM BLAS library for the Cell processor when more than two slave processors are used.
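
To make the idea of a data-parallel skeleton concrete, the following C++ sketch shows a map/reduce skeleton of the general kind the library provides, with the dot product from the evaluation as the usage example. The actual library runs this pattern across the Cell's SPEs with SIMD-optimized element operations inside NestStep; the sketch does not attempt that and is only a shape.

```cpp
// A minimal sketch of a map/reduce data-parallel skeleton in plain C++.
// The skeleton owns the iteration; the user supplies only the per-element
// map function and the binary reduction.
#include <cstddef>
#include <functional>
#include <vector>

template <typename T, typename MapFn, typename RedFn>
T map_reduce(const std::vector<T>& a, const std::vector<T>& b,
             MapFn map2, RedFn reduce2, T init) {
    T acc = init;
    for (std::size_t i = 0; i < a.size(); ++i)   // hidden by the skeleton
        acc = reduce2(acc, map2(a[i], b[i]));
    return acc;
}

// Usage: the dot product the thesis benchmarks against IBM BLAS.
// double dot = map_reduce(x, y, std::multiplies<double>{},
//                         std::plus<double>{}, 0.0);
```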
63

The limits of network transparency in a distributed programming language

Collet, Raphaël 19 December 2007 (has links)
This dissertation presents a study of the extent and limits of network transparency in distributed programming languages. Network transparency means that, as long as no failure occurs, a distributed program gives the same result as if it were executed on a single computer. The programming language may also be network aware, allowing the programmer to control how a program is distributed and how it behaves on the network. Both properties aim at simplifying distributed programming by making the non-functional aspects of a program more modular. We show that network transparency is not only possible but practical: it can be efficient, and it extends smoothly to the case of partial failure. We give a proof of concept with the programming language Oz and the system Mozart, whose distribution support we have reimplemented on top of the Distribution Subsystem (DSS). We have extended the language to control which distribution algorithms a program uses and to reflect partial failures in the language. Both extensions make it possible to handle the non-functional aspects of a program without breaking network transparency.
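
The property itself can be illustrated in a few lines: code written against an interface behaves identically whether the object it uses is local or remote. The C++ stub below is only an illustration of the idea; the dissertation works in Oz/Mozart, and the remote path here is a hypothetical placeholder, not the DSS protocol.

```cpp
// Illustration of network transparency: bump() cannot tell (and need not
// know) whether the Counter lives in this process or on another site.
struct Counter {
    virtual ~Counter() = default;
    virtual int increment() = 0;
};

struct LocalCounter : Counter {
    int n = 0;
    int increment() override { return ++n; }
};

struct RemoteCounter : Counter {   // forwards calls over the network
    int increment() override {
        // Hypothetical placeholder: send "increment" to the hosting site
        // and await the reply. A transparent system hides this exchange.
        return /* reply */ 0;
    }
};

int bump(Counter& c) { return c.increment(); }  // distribution-agnostic
```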
64

Design and performance analysis of MPI-SHARC: a high-speed network service for distributed digital signal processor systems

Kohout, James, January 2001 (has links) (PDF)
Thesis (M.S.)--University of Florida, 2001. Title from first page of PDF file. Document formatted into pages; contains ix, 69 p.; also contains graphics. Vita. Includes bibliographical references (p. 66-68).
65

Implementing a Preconditioned Iterative Linear Solver Using Massively Parallel Graphics Processing Units

Asgari Kamiabad, Amirhassan 26 May 2011 (has links)
The research conducted in this thesis provides a robust implementation of a preconditioned iterative linear solver on programmable graphics processing units (GPUs). Solving a large, sparse linear system is the most computationally demanding part of many widely used power system analyses. This thesis presents a detailed study of iterative linear solvers with a focus on Krylov-based methods. Since the ill-conditioned nature of power system matrices typically requires substantial preconditioning to ensure the robustness of Krylov-based methods, a polynomial preconditioning technique is also studied. The implementation of the Chebyshev polynomial preconditioner and the biconjugate gradient solver on a programmable GPU is presented and discussed in detail. Evaluation of the GPU-based preconditioner and linear solver on a variety of sparse matrices shows significant computational savings relative to a CPU-based implementation of the same preconditioner and to commonly used direct methods.
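
The appeal of a polynomial preconditioner on a GPU is that applying it reduces to repeated matrix-vector products, which parallelize well. Below is a minimal serial C++ sketch of one Chebyshev preconditioner application, following the standard recurrence (Saad, Iterative Methods for Sparse Linear Systems, Alg. 12.1) and assuming estimated eigenvalue bounds lmin and lmax; the thesis's GPU kernels and its integration with the biconjugate gradient solver are not reproduced here.

```cpp
// Approximate z = inv(A) * r with a degree-`steps` Chebyshev polynomial.
// A is supplied as a sparse matrix-vector product; lmin/lmax bound the
// spectrum (e.g. estimated via Gershgorin disks).
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
using SpMV = std::function<Vec(const Vec&)>;  // y = A * x

Vec cheby_precond(const SpMV& A, const Vec& r,
                  double lmin, double lmax, int steps) {
    const double theta = 0.5 * (lmax + lmin);  // center of spectrum
    const double delta = 0.5 * (lmax - lmin);  // half-width
    const double sigma = theta / delta;
    double rho = 1.0 / sigma;
    Vec z(r.size(), 0.0), res = r, d = r;
    for (double& v : d) v /= theta;            // d0 = r / theta
    for (int k = 0; k < steps; ++k) {
        for (std::size_t i = 0; i < z.size(); ++i) z[i] += d[i];
        Vec Ad = A(d);                         // the GPU-friendly kernel
        for (std::size_t i = 0; i < res.size(); ++i) res[i] -= Ad[i];
        const double rho1 = 1.0 / (2.0 * sigma - rho);
        for (std::size_t i = 0; i < d.size(); ++i)
            d[i] = rho1 * rho * d[i] + (2.0 * rho1 / delta) * res[i];
        rho = rho1;
    }
    return z;  // approximate inv(A)*r, used inside each BiCG iteration
}
```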
67

The design and implementation of a region-based parallel programming language

Chamberlain, Bradford L., January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. Vita. Includes bibliographical references (p. 362-373).
68

Achieving robust performance in parallel programming languages

Lewis, E Christopher, January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. Vita. Includes bibliographical references (p. 104-113).
69

Pointer analysis: building a foundation for effective program analysis

Hardekopf, Benjamin Charles 16 October 2012 (has links)
Pointer analysis is a fundamental enabling technology for program analysis. By improving the scalability of precise pointer analysis we can make a positive impact across a wide range of program analyses used for many different purposes, including program verification and model checking, optimization and parallelization, program understanding, hardware synthesis, and more. In this thesis we present a suite of new algorithms aimed at improving pointer analysis scalability. These new algorithms make inclusion-based analysis (the most precise flow- and context-insensitive pointer analysis) over 4x faster while using 7x less memory than the previous state-of-the-art; they also enable flow-sensitive pointer analysis to handle programs with millions of lines of code, two orders of magnitude greater than the previous state-of-the-art. We present a formal framework for describing the space of pointer analysis approximations. The space of possible approximations is complex and multidimensional, and until now has not been well-defined in a formal manner. We believe that the framework is useful as a method to meaningfully compare the precision of the multitude of existing pointer analyses, as well as aiding in the systematic exploration of the entire space of approximations.
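
For readers unfamiliar with inclusion-based analysis: it derives subset constraints from program statements and solves them to a fixpoint. The C++ toy below handles only the address-of (p = &a) and copy (p = q) constraint forms; load/store constraints and the scalability techniques that are the thesis's actual contribution (such as cycle elimination) are omitted.

```cpp
// A toy fixpoint solver for inclusion-based (Andersen-style) pointer
// analysis. Variables are small integer IDs. pts(q) must be a subset of
// pts(p) for every copy constraint p = q.
#include <map>
#include <set>
#include <utility>
#include <vector>

struct Constraints {
    std::vector<std::pair<int,int>> addr;  // p = &a  -> {p, a}
    std::vector<std::pair<int,int>> copy;  // p = q   -> {p, q}
};

std::map<int, std::set<int>> solve(const Constraints& c) {
    std::map<int, std::set<int>> pts;
    for (auto [p, a] : c.addr) pts[p].insert(a);   // seed points-to sets
    bool changed = true;
    while (changed) {                              // iterate to fixpoint
        changed = false;
        for (auto [p, q] : c.copy)
            for (int v : pts[q])                   // propagate along edge
                changed |= pts[p].insert(v).second;
    }
    return pts;
}
```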
70

Lygiagretumų programavimo personaliniuose kompiuteriuose problemos / The problems of parallel programming on personal computers

Ivanikovas, Sergėjus 13 June 2005 (has links)
This work gives an overview of the particularities of parallel programming on the personal computer and surveys the possibilities and advantages of Hyper-Threading technology and the new Pentium 4 processors. The work shows that the introduction of Hyper-Threading and dual-core processors makes parallel computing more accessible on the ordinary personal computer. Parallel programming becomes not only a way of solving difficult tasks but a real possibility to speed up the personal computer and to use its hardware resources more effectively. The work reviews the creation of parallel programs using the OpenMP standard and the particularities of applying the SSE2 processor instruction set, as illustrated in the sketch below. Results of practical tests are given. They indicate that floating-point computation is more effective without multiple threads, and that Hyper-Threading technology shows the best results when working with different types of processes or when exploiting the new processor capabilities.
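
The two mechanisms the thesis evaluates combine naturally: OpenMP distributes loop iterations across hardware threads while SSE2 processes two doubles per instruction. A minimal illustrative example, using only standard OpenMP and SSE2 intrinsics (compile with something like g++ -O2 -fopenmp -msse2):

```cpp
// c[i] = a[i] + b[i]: thread-level parallelism via OpenMP, data-level
// parallelism via SSE2 (two doubles per _mm_add_pd).
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstddef>

void add(const double* a, const double* b, double* c, std::size_t n) {
    const long pairs = static_cast<long>(n / 2);
    #pragma omp parallel for
    for (long i = 0; i < pairs; ++i) {
        __m128d va = _mm_loadu_pd(a + 2 * i);
        __m128d vb = _mm_loadu_pd(b + 2 * i);
        _mm_storeu_pd(c + 2 * i, _mm_add_pd(va, vb));
    }
    if (n % 2) c[n - 1] = a[n - 1] + b[n - 1];  // scalar tail element
}
```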
