  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Bayesian inference methods for next generation DNA sequencing

Shen, Xiaohu, active 21st century 30 September 2014 (has links)
Recently developed next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. To provide a blueprint of a target genome, next-generation sequencing systems typically employ the so-called shotgun sequencing strategy and oversample the genome with a library of relatively short overlapping reads. The order of nucleotides in the short reads is determined by processing the noisy signals generated by the sequencing platforms, and the overlaps between the reads are exploited to assemble the long target genome. Next-generation sequencing utilizes massively parallel array-based technology to speed up the sequencing and reduce the cost. However, the accuracy and lengths of the short reads have yet to surpass those provided by the conventional, slower and costlier Sanger sequencing method.

In this thesis, we first focus on Illumina's sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by a hidden Markov model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on an experimental data set obtained by sequencing the phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that ParticleCall is significantly more computationally efficient than the best-performing unsupervised base calling method currently available, while achieving the same accuracy.

Having addressed the problem of base calling of short reads, we turn our attention to genome assembly. Assembling a genome from acquired short reads is a computationally daunting task even when a reference genome exists. Errors and gaps in the reference, and perfect repeat regions in the target, make the assembly still more challenging and cause inaccuracies.
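The flavor of sequential Monte Carlo base calling described above can be illustrated with a toy sketch. Everything below — the four-channel intensity model, the uniform transition stand-in, and all numbers — is an illustrative assumption, not the thesis's actual ParticleCall signal model.

```python
import math
import random

random.seed(0)
BASES = "ACGT"

def emit_likelihood(base, signal):
    # Unnormalised Gaussian-like likelihood: the channel of the true
    # base is expected to be bright (mean 1.0), the rest dim (0.1).
    mu = [1.0 if b == base else 0.1 for b in BASES]
    return math.exp(-sum((s - m) ** 2 for s, m in zip(signal, mu)))

def particle_filter_calls(signals, n_particles=200):
    calls = []
    for signal in signals:
        # Propagate: sample a candidate base per particle (a uniform
        # stand-in for a real transition model over cycles).
        particles = [random.choice(BASES) for _ in range(n_particles)]
        weights = [emit_likelihood(p, signal) for p in particles]
        total = sum(weights)
        # Resample according to the normalised weights, then call
        # the most frequent base among the survivors.
        survivors = random.choices(particles,
                                   weights=[w / total for w in weights],
                                   k=n_particles)
        calls.append(max(BASES, key=survivors.count))
    return "".join(calls)

# Four clean-ish intensity vectors, one per base in "ACGT".
signals = [[1.0, 0.1, 0.2, 0.1], [0.1, 0.9, 0.1, 0.2],
           [0.2, 0.1, 1.1, 0.1], [0.1, 0.2, 0.1, 0.95]]
print(particle_filter_calls(signals))
```

The same propagate–weight–resample loop, run with a realistic signal model over sequencing cycles, is the essence of particle-filter base calling.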
We formulate reference-guided assembly as an inference problem on a bipartite graph and solve it using a message-passing algorithm. The proposed algorithm can be interpreted as the classical belief propagation scheme under a certain prior. Unlike existing state-of-the-art methods, the proposed algorithm combines the information provided by the reads without needing to know the reliability of the short reads (the so-called quality scores). The relation of the message-passing algorithm to a provably convergent power iteration scheme is discussed. Results on both simulated and experimental data demonstrate that the proposed message-passing algorithm outperforms commonly used state-of-the-art tools, and it nearly achieves the performance of a genie-aided maximum a posteriori (MAP) scheme.

We then consider the reference-free genome assembly problem, i.e., de novo assembly. Various methods for de novo assembly have been proposed in the literature, all of which are very sensitive to errors in the short reads. We develop a novel error-correction method that improves the performance of de novo assembly. The new method relies on a suffix array structure built on the short-read data. It incorporates a hypothesis testing procedure that uses the sum of quality information as the test statistic to improve the accuracy of overlap detection.

Finally, we consider an inference problem in gene regulatory networks. Gene regulatory networks are highly complex dynamical systems comprising biomolecular components which interact with each other and, through those interactions, determine gene expression levels, i.e., the rate of gene transcription. In this thesis, a particle filter with a Markov chain Monte Carlo move step is employed to estimate reaction rate constants in gene regulatory networks modeled by chemical Langevin equations. Simulation studies demonstrate that the proposed technique outperforms previously considered methods while being computationally more efficient.
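The "provably convergent power iteration scheme" mentioned above is the classical method below; this generic sketch is not the thesis's message-passing algorithm, only the standard iteration it is related to.

```python
import numpy as np

def power_iteration(A, iters=200, tol=1e-12):
    """Classical power iteration: repeated multiplication and
    normalisation converges to the dominant eigenvector of A
    (up to sign) when the top eigenvalue is strictly largest
    in magnitude."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        norm = np.linalg.norm(w)
        if norm == 0.0:
            break
        v_new = w / norm
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    # Rayleigh quotient gives the corresponding eigenvalue estimate.
    return v @ A @ v, v

# Small symmetric example: eigenvalues are (7 ± sqrt(5)) / 2.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(round(lam, 4))  # ≈ 4.618, the dominant eigenvalue
```

Interpreting message updates as one such matrix-vector product is a common way to prove convergence of linearised belief propagation schemes.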
The dynamic behavior of gene regulatory networks averaged over a large number of cells can be modeled by ordinary differential equations. For this scenario, we compute an approximation to the Cramér-Rao lower bound on the mean-square error of estimating reaction rates and demonstrate that, when the number of unknown parameters is small, the proposed particle filter can be nearly optimal.

In summary, this thesis presents a set of Bayesian inference methods for base calling and sequence assembly in next-generation DNA sequencing. Experimental studies show the advantages of the proposed algorithms over traditional methods.
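A Cramér-Rao lower bound of the kind computed above can be sketched on a toy one-parameter decay model; the model, noise level and time grid below are illustrative assumptions, not the thesis's ODE network.

```python
import numpy as np

# Toy model: y_i = x0 * exp(-k * t_i) + noise, noise ~ N(0, sigma^2),
# with the rate constant k as the single unknown parameter.
x0, k, sigma = 10.0, 0.5, 0.2
t = np.linspace(0.0, 5.0, 26)

# Sensitivity of the mean trajectory to k:
#   d/dk [x0 * exp(-k t)] = -x0 * t * exp(-k t)
ds_dk = -x0 * t * np.exp(-k * t)

# Fisher information for independent Gaussian noise:
#   I(k) = (1 / sigma^2) * sum_i (ds_i/dk)^2
fisher = np.sum(ds_dk ** 2) / sigma ** 2

# CRLB: no unbiased estimator of k can have variance below 1 / I(k).
crlb = 1.0 / fisher
print(crlb)
```

Comparing an estimator's empirical mean-square error against this bound is how "nearly optimal" is made precise in such studies.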
62

The development of a genetically modified bacteriophage to trace water pollution

Davy, Marjorie January 1999 (has links)
No description available.
63

Molecular biological approaches to the analysis of C1-inhibitor function

Bacon, Louise January 1994 (has links)
No description available.
64

Novel strategies for DNA detection assay

Bourin, Stephanie January 1998 (has links)
No description available.
65

A unified approach to the study of asynchronous communication mechanisms in real-time systems

Clark, Ian George January 2000 (has links)
No description available.
66

Analysis of trace data from fluorescence based Sanger sequencing

Thornley, David John January 1997 (has links)
No description available.
67

The role of the p53 tumour suppressor pathway in central primitive neuroectodermal tumours

Burns, Alice Sin Ying Wai January 1999 (has links)
No description available.
68

Applications of projections to quantitative magnetic resonance imaging

Taylor, Nicola Jane January 1998 (has links)
No description available.
69

Regulation of gene expression by the Wilms' tumour suppressor, WT1

Duarte, Antonio January 1997 (has links)
No description available.
70

Integer programming and heuristic methods for the cell formation problem with part machine sequencing

Papaioannou, Grammatoula January 2007 (has links)
Cell formation has received much attention from academics and practitioners because of its strategic importance to modern manufacturing practices. Existing research on cell formation problems using integer programming (IP) has achieved the target of solving problems that simultaneously optimise machine-cell allocation and part-machine allocation. This thesis presents extensions of an IP model in which part-machine assignment and cell formation are addressed simultaneously, inter-cell movements of parts and machine set-up costs are integrated into the objective function, and an ordered part-machine operation sequence is included. The latter is identified as a neglected parameter of the cell formation problem.

Mathematical IP modelling of cell formation has two main drawbacks: (a) cell formation is a complex and difficult combinatorial optimisation problem, i.e., it is NP-hard (non-deterministic polynomial-time hard), and (b) because of the deterministic nature of mathematical programming, the decision maker is required to specify goals and constraints precisely. The thesis therefore describes a comprehensive study of the cell formation problem in which fuzzy set theory is employed to measure uncertainty. Membership functions are used to express the uncertainty involved linguistically, and aggregation operators are employed to transform the fuzzy models into mathematical programming models.

The core of the research concentrates on the investigation and development of heuristic and metaheuristic approaches. A three-stage randomly generated heuristic for producing an efficient initial solution to the cell formation problem is first developed, together with an iterative improvement heuristic. Numerous data sets are employed, demonstrating their effectiveness. Moreover, an iterative tabu search algorithm is implemented, fed with the same initial solution as the descent heuristic. The iterative procedure and the tabu search algorithm are compared, and the results show the superiority of the latter in stability, computational time and clustering quality.
