791

Inverse Parametric Alignment for Accurate Biological Sequence Comparison

Kim, Eagu January 2008 (has links)
For as long as biologists have been computing alignments of sequences, the question of what values to use for scoring substitutions and gaps has persisted. In practice, substitution scores are usually chosen by convention, and gap penalties are often found by trial and error. In contrast, a rigorous way to determine parameter values that are appropriate for aligning biological sequences is by solving the problem of Inverse Parametric Sequence Alignment. Given examples of biologically correct reference alignments, this is the problem of finding parameter values that make the examples score as close as possible to optimal alignments of their sequences. The reference alignments that are currently available contain regions where the alignment is not specified, which leads to a version of the problem with partial examples. In this dissertation, we develop a new polynomial-time algorithm for Inverse Parametric Sequence Alignment that is simple to implement, fast in practice, and can learn hundreds of parameters simultaneously from hundreds of examples. Computational results with partial examples show that best possible values for all 212 parameters of the standard alignment scoring model for protein sequences can be computed from 200 examples in 4 hours of computation on a standard desktop machine. We also consider a new scoring model with a small number of additional parameters that incorporates predicted secondary structure for the protein sequences. By learning parameter values for this new secondary-structure-based model, we can improve on the alignment accuracy of the standard model by as much as 15% for sequences with less than 25% identity.
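The scoring model whose parameters are being learned — substitution scores plus gap penalties — can be illustrated with a minimal global-alignment scorer. This is a generic Needleman-Wunsch sketch with a linear gap penalty, not the 212-parameter model of the dissertation; the substitution dictionary and gap value below are placeholder assumptions.

```python
def nw_score(a, b, sub, gap):
    # Global alignment score: sub maps residue pairs to substitution
    # scores, gap is a per-residue penalty (linear gap model assumed).
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] - gap
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] - gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = max(D[i - 1][j - 1] + sub[a[i - 1], b[j - 1]],  # match/mismatch
                          D[i - 1][j] - gap,                          # gap in b
                          D[i][j - 1] - gap)                          # gap in a
    return D[m][n]
```

Inverse parametric alignment asks the reverse question: given reference alignments, choose `sub` and `gap` so the references score near-optimally.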
792

List-mode SPECT reconstruction using empirical likelihood

Lehovich, Andre January 2005 (has links)
This dissertation investigates three topics related to image reconstruction from list-mode Anger camera data. Our main focus is the processing of photomultiplier-tube (PMT) signals directly into images. First we look at the use of list-mode calibration data to reconstruct a non-parametric likelihood model relating the object to the data list. The reconstructed model can then be combined with list-mode object data to produce a maximum-likelihood (ML) reconstruction, an approach we call double list-mode reconstruction. This trades off reduced prior assumptions about the properties of the imaging system for greatly increased processing time and increased uncertainty in the reconstruction. Second, we use the list-mode expectation-maximization (EM) algorithm to reconstruct planar projection images directly from PMT data. Images reconstructed by EM are compared with images produced using the faster and more common technique of first producing ML position estimates, then histogramming to form an image. A mathematical model of the human visual system, the channelized Hotelling observer, is used to compare the reconstructions by performing the Rayleigh task, a traditional measure of resolution. EM is found to produce higher-resolution images than the histogram approach, suggesting that information is lost during the position-estimation step. Finally, we investigate which linear parameters of an object are estimable, in other words may be estimated without bias from list-mode data. We extend the notion of a linear system operator, familiar from binned-mode systems, to list-mode systems, and show that the estimable parameters are determined by the range of the adjoint of the system operator. As in the binned-mode case, the list-mode sensitivity functions define "natural pixels" with which to reconstruct the object.
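For context, the binned-mode ML-EM iteration that list-mode EM generalizes can be sketched in a few lines; the system matrix and data below are toy assumptions, not the PMT model of the dissertation.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    # Binned-mode ML-EM for emission data: A is the system matrix
    # (detector bins x voxels), y the measured counts.
    x = np.ones(A.shape[1])          # uniform initial estimate
    sens = A.sum(axis=0)             # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                                     # forward project
        ratio = np.where(proj > 0, y / proj, 0.0)        # data/model ratio
        x *= (A.T @ ratio) / sens                        # multiplicative update
    return x
```

List-mode EM replaces the binned sum over detector elements with a sum over individual recorded events, which is what allows reconstruction directly from PMT signals.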
793

Development and implementation of an artificially intelligent search algorithm for sensor fault detection using neural networks

Singh, Harkirat 30 September 2004 (has links)
This work is aimed at the development of an artificially intelligent search algorithm used in conjunction with an Auto Associative Neural Network (AANN) to help locate and reconstruct faulty sensor inputs in control systems. The AANN can be trained to detect when sensors go faulty, but the problem of locating the faulty sensor still remains. The search algorithm aids the AANN in locating the faulty sensors and reconstructing their actual values. The algorithm uses domain-specific heuristics based on the inherent behavior of the AANN to achieve its task. The algorithm's response to common sensor errors such as drift, shift, and random errors has been studied, and the issue of noise has also been investigated. These areas cover the first part of this work. The second part focuses on the development of a web interface that implements and displays the working of the algorithm. The interface allows any client on the World Wide Web to connect to the engineering software MATLAB. The client can then simulate a drift, shift, or random error using the graphical user interface and observe the response of the algorithm.
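One plausible shape for such a search, sketched here as an assumption rather than the thesis's actual heuristic: substitute each sensor's value with the network's reconstruction and see which substitution most reduces the overall reconstruction residual. The toy "AANN" below is just a mean-consistency model standing in for a trained network.

```python
import numpy as np

def locate_faulty_sensor(aann, reading):
    # Hypothetical residual-substitution search: the sensor whose
    # replacement by its reconstructed value best restores consistency
    # is flagged as faulty.
    recon = aann(reading)
    base = np.linalg.norm(recon - reading)
    best, best_drop = None, 0.0
    for i in range(len(reading)):
        trial = reading.copy()
        trial[i] = recon[i]                              # substitute reconstruction
        drop = base - np.linalg.norm(aann(trial) - trial)
        if drop > best_drop:
            best, best_drop = i, drop
    return best
```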
794

Accuracy aspects of the reaction-diffusion master equation on unstructured meshes

Kieri, Emil January 2011 (has links)
The reaction-diffusion master equation (RDME) is a stochastic model for spatially heterogeneous chemical systems. Stochastic models have proved useful for problems from molecular biology, since copy numbers of participating chemical species are often small, which gives rise to stochastic behaviour. The RDME is a discrete-space model, in contrast to spatially continuous models based on Brownian motion. In this thesis, two accuracy issues of the RDME on unstructured meshes are studied. The first concerns the rates of diffusion events. Errors due to previously used rates are evaluated, and a second-order accurate finite volume method, not previously used in this context, is implemented. The new discretisation improves the accuracy considerably, but unfortunately it puts constraints on the mesh, limiting its current usability. The second issue concerns the rates of bimolecular reactions. Using the macroscopic reaction coefficients, these rates become too low when the spatial resolution is high. Recently, two methods to overcome this problem by calculating mesoscopic reaction rates for Cartesian meshes have been proposed. The methods are compared and evaluated, and are found to work remarkably well. Their possible extension to unstructured meshes is discussed.
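On a uniform 1D mesh the diffusion events of the RDME are first-order jumps with per-molecule rate d/h², where h is the voxel width; the accuracy question in the thesis is precisely what these rates should be on unstructured meshes. A minimal SSA-style sketch on a uniform mesh (a simplifying assumption):

```python
import random

def diffusion_propensities(copies, d, h):
    # Jump propensities for 1D diffusion: rate d / h^2 per molecule
    # to each adjacent voxel (uniform mesh assumed).
    k = d / h**2
    props = []
    for i, n in enumerate(copies):
        if i > 0:
            props.append((i, i - 1, k * n))   # jump left
        if i < len(copies) - 1:
            props.append((i, i + 1, k * n))   # jump right
    return props

def ssa_step(copies, d, h, rng=random.random):
    # Execute one Gillespie diffusion event in place.
    props = diffusion_propensities(copies, d, h)
    r = rng() * sum(a for _, _, a in props)
    for src, dst, a in props:
        r -= a
        if r <= 0:
            copies[src] -= 1
            copies[dst] += 1
            break
    return copies
```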
795

Investigating the Correlation between Swallow Accelerometry Signal Parameters and Anthropometric and Demographic Characteristics of Healthy Adults

Hanna, Fady 24 February 2009 (has links)
Thesis studied correlations between swallowing accelerometry parameters and anthropometrics in 50 healthy participants. Anthropometrics include: age, gender, weight, height, body fat percent, neck circumference and mandibular length. Dual-axis swallowing signals, from a biaxial accelerometer were obtained for 5-saliva and 10-water (5-wet and 5-wet chin-tuck) swallows per participant. Two patient-independent automatic segmentation algorithms using discrete wavelet transforms of swallowing sequences segmented: 1) saliva/wet swallows and 2) wet chin-tuck swallows. Extraction of swallows hinged on dynamic thresholding based on signal statistics. Canonical correlation analysis was performed on sets of anthropometric and swallowing signal variables including: variance, skewness, kurtosis, autocorrelation decay time, energy, scale and peak-amplitude. For wet swallows, significant linear relationships were found between signal and anthropometric variables. In superior-inferior directions, correlations linked weight, age and gender to skewness and signal-memory. In anterior-posterior directions, age was correlated with kurtosis and signal-memory. No significant relationship was observed for dry and wet chin-tuck swallowing
796

Computing sparse multiples of polynomials

Tilak, Hrushikesh 20 August 2010 (has links)
We consider the problem of finding a sparse multiple of a polynomial. Given a polynomial f ∈ F[x] of degree d over a field F, and a desired sparsity t = O(1), our goal is to determine if there exists a multiple h ∈ F[x] of f such that h has at most t non-zero terms, and if so, to find such an h. When F = Q, we give a polynomial-time algorithm in d and the size of coefficients in h. For finding binomial multiples we prove a polynomial bound on the degree of the least degree binomial multiple independent of coefficient size. When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of factoring integers), and this lower bound is tight when t = 2.
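The link between binomial multiples and multiplicative order can be made concrete over F = F₂: a polynomial f with nonzero constant term has the binomial multiple x^k + 1 exactly when x has order k modulo f. A small sketch, representing GF(2) polynomials as integer bitmasks:

```python
def gf2_mod(a, f):
    # Reduce polynomial a modulo f over GF(2); polynomials are int bitmasks.
    while a.bit_length() >= f.bit_length():
        a ^= f << (a.bit_length() - f.bit_length())
    return a

def binomial_order(f, max_k=10**6):
    # Smallest k with x^k ≡ 1 (mod f), i.e. x^k + 1 is a binomial
    # multiple of f; brute-force search up to max_k.
    acc = gf2_mod(0b10, f)           # start with x mod f
    for k in range(1, max_k + 1):
        if acc == 1:
            return k
        acc = gf2_mod(acc << 1, f)   # multiply by x, then reduce
    return None
```

Computing this order efficiently is exactly the hard subproblem the lower bound refers to.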
797

Local Likelihood for Interval-censored and Aggregated Point Process Data

Fan, Chun-Po Steve 03 March 2010 (has links)
The use of the local likelihood method (Tibshirani and Hastie, 1987; Loader, 1996) in the presence of interval-censored or aggregated data leads to a natural consideration of an EM-type strategy, or rather a local EM algorithm. In the thesis, we consider local EM to analyze the point process data that are either interval-censored or aggregated into regional counts. We specifically formulate local EM algorithms for density, intensity and risk estimation and implement the algorithms using a piecewise constant function. We demonstrate that the use of the piecewise constant function at the E-step explicitly results in an iteration that involves an expectation, maximization and smoothing step, or an EMS algorithm considered in Silverman, Jones, Wilson and Nychka (1990). Consequently, we reveal a previously unknown connection between local EM and the EMS algorithm. From a theoretical perspective, local EM and the EMS algorithm complement each other. Although the statistical methodology literature often characterizes EMS methods as ad hoc, local likelihood suggests otherwise as the EMS algorithm arises naturally from a local likelihood consideration in the context of point processes. Moreover, the EMS algorithm not only serves as a convenient implementation of the local EM algorithm but also provides a set of theoretical tools to better understand the role of local EM. In particular, we present results that reinforce the suggestion that the pair of local EM and penalized likelihood are analogous to that of EM and likelihood. Applications include the analysis of bivariate interval-censored data as well as disease mapping for a rare disease, lupus, in the Greater Toronto Area.
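The E, M, and S steps of the EMS iteration can be sketched for a simple binned Poisson model; the mixing matrix, smoothing weights, and 1D grid below are placeholder assumptions for illustrating the structure of the iteration, not the thesis's point-process formulation.

```python
import numpy as np

def ems(A, y, n_iter=50, w=(0.25, 0.5, 0.25)):
    # EMS sketch: an EM (ML) update for binned counts y ~ Poisson(A x),
    # followed by the local smoothing (S) step that distinguishes
    # EMS from plain EM.
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        proj = A @ x
        x = x * (A.T @ np.where(proj > 0, y / proj, 0.0)) / sens  # E + M
        x = np.convolve(x, w, mode="same")                        # S
    return x
```

The thesis's observation is that this smoothing step is not ad hoc: it arises naturally when the E-step of a local EM algorithm is applied to a piecewise constant estimate.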
799

Klaidos skleidimo atgal algoritmo tyrimai / Investigation of the error back-propagation algorithm

Sargelis, Kęstas 30 June 2009 (has links)
This work provides an in-depth analysis of the error back-propagation algorithm, together with the investigations carried out. Neural network theory is analysed in detail. To apply and analyse the algorithm, a program was developed in Visual Studio Web Developer 2008 with various investigation methods that help study the error made by the algorithm. Matlab 7.1 tools were also used to train the neural networks. The investigation analysed a multilayer artificial neural network with one hidden layer, using iris-flower and air-pollution data sets. Comparisons of the results obtained have been made.
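The algorithm under study can be sketched for the same architecture the thesis analyses, a multilayer network with one hidden layer. This is a generic back-propagation sketch with sigmoid units on a toy OR truth table, not the thesis's Visual Studio or Matlab implementation.

```python
import math, random

def train_backprop(data, epochs=500, lr=0.5, seed=1):
    # 2-2-1 network trained by error back-propagation; returns the
    # sum-squared error per epoch so convergence can be inspected.
    random.seed(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    errors = []
    for _ in range(epochs):
        total = 0.0
        for (x0, x1), t in data:
            # forward pass
            h = [sig(w1[j][0] * x0 + w1[j][1] * x1 + b1[j]) for j in range(2)]
            y = sig(w2[0] * h[0] + w2[1] * h[1] + b2)
            total += (y - t) ** 2
            # backward pass: propagate the output error back to each layer
            dy = (y - t) * y * (1 - y)
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x0
                w1[j][1] -= lr * dh[j] * x1
                b1[j] -= lr * dh[j]
            b2 -= lr * dy
        errors.append(total)
    return errors
```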
800

Fast Algorithms for Large-Scale Phylogenetic Reconstruction

Truszkowski, Jakub January 2013 (has links)
One of the most fundamental computational problems in biology is that of inferring evolutionary histories of groups of species from sequence data. Such evolutionary histories, known as phylogenies, are usually represented as binary trees where leaves represent extant species and internal nodes represent their shared ancestors. As the amount of sequence data available to biologists increases, very fast phylogenetic reconstruction algorithms are becoming necessary. Currently, large sequence alignments can contain up to hundreds of thousands of sequences, making traditional methods, such as Neighbour Joining, computationally prohibitive. To address this problem, we have developed three novel fast phylogenetic algorithms. The first algorithm, QTree, is a quartet-based heuristic that runs in O(n log n) time. It is based on a theoretical algorithm that reconstructs the correct tree, with high probability, assuming every quartet is inferred correctly with constant probability. The core of our algorithm is a balanced search tree structure that enables us to locate an edge in the tree in O(log n) time. Our algorithm is several times faster than all the current methods, while its accuracy approaches that of Neighbour Joining. The second algorithm, LSHTree, is the first sub-quadratic time algorithm with theoretical performance guarantees under a Markov model of sequence evolution. Our new algorithm runs in O(n^{1+γ(g)} log^2 n) time, where γ is an increasing function of an upper bound g on the mutation rate along any branch in the phylogeny, and γ(g) < 1 for all g. For phylogenies with very short branches, the running time of our algorithm is close to linear. In experiments, our prototype implementation was more accurate than the current fast algorithms, while being comparably fast. In the final part of this thesis, we apply the algorithmic framework behind LSHTree to the problem of placing large numbers of short sequence reads onto a fixed phylogenetic tree. Our initial results in this area are promising, but there are still many challenges to be resolved.
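Quartet-based reconstruction rests on inferring the topology of each four-species subset. For additive (tree-realizable) distances this is the classical four-point condition, sketched here as a generic illustration of the quartet-inference step, not the QTree heuristic itself.

```python
def quartet_topology(D, a, b, c, d):
    # Four-point condition on a distance matrix D (list of lists):
    # among the three possible pairings, the one minimising the sum
    # of within-pair distances is the induced quartet topology.
    candidates = [
        ((a, b), (c, d), D[a][b] + D[c][d]),
        ((a, c), (b, d), D[a][c] + D[b][d]),
        ((a, d), (b, c), D[a][d] + D[b][c]),
    ]
    p, q, _ = min(candidates, key=lambda t: t[2])
    return p, q
```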
