91
Sequence stratigraphy and depositional history of the upper Cañon del Tule, Las Imagenes, and Lower Cerro Grande Formations, central Parras Basin, northeastern Mexico / Bermúdez Santana, Juan Clemente. January 2003 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2003. / Available also in an electronic version.
92
Graph algorithms for the haplotyping problem / Liu, Yunkai, January 1900 (has links)
Thesis (Ph. D.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains v, 76 p. : ill. Includes abstract. Includes bibliographical references (p. 71-76).
93
Sequence analysis of the small (s) RNA segment of viruses in the genus Orthobunyavirus / Mohamed, Maizan. January 2007 (has links)
Thesis (Ph.D.) - University of St Andrews, November 2007.
94
Assessment of Chenopodium quinoa Willd. genetic diversity in the USDA and CIP-FAQ collections using SSR's and SNP's / Christensen, Shawn A., January 2005 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Plant and Animal Sciences, 2005. / Includes bibliographical references (p. 40-41).
95
Salt tectonics and sequence-stratigraphic history of minibasins near the Sigsbee Escarpment, Gulf of Mexico / Montoya, Patricia. January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
96
The sedimentology and sequence stratigraphy of the Middle Jurassic Beryl Formation, Quad 9, U.K.C.S. / Maxwell, Gregor January 1999 (has links)
Quad 9 of the U.K.C.S., North Sea, is located 215 miles NE of Aberdeen. It contains four producing fields with over 400 mmbbls of oil and NGLs and 5.1 TCF of gas initially in place. The major reservoir unit is the Middle Jurassic (Bajocian to Bathonian) Beryl Formation, a marginal to shallow marine deposit which varies in thickness from 150' to 1100' across the studied area. It was deposited within the Beryl Embayment, a transfer zone between two actively extending basin-bounding faults of the South Viking Graben, prior to the onset of the major rifting phase during the Callovian to Ryazanian. The objectives of the thesis were to provide a revised sedimentological model for the area, accounting for the contrasting sedimentary styles present within the Beryl Formation, and to unify the different correlation schemes used by the operating companies in the area. The study was based on well data from 58 cored and a further 79 uncored sections spanning nine licence blocks within Quad 9. Reservoir engineering, biostratigraphic and structural data have also been used for a fully integrated study. Initial core logging identified 32 facies and 10 trace fossil assemblages, which were subsequently integrated into 14 facies associations. These were then extrapolated into the uncored sections by wireline facies associations. Correlation was initially driven by comparison of cored sections but finalised by an integration of the reservoir engineering and biostratigraphic data. Outcrop work on the Middle Jurassic of Skye and the Campanian of eastern Utah provided an analogue study to accompany the downhole data. Quad 9 can be split into three main areas distinguished by different stratigraphic histories: a southern area consisting of the Buckland and Sorby Fields, a central area consisting of the Beryl, Nevis, Ness and Linnhe Fields, and a northern area consisting of the Bruce and Keith Fields.
97
MULTIPLE SEQUENCES ALIGNMENT FOR PHYLOGENETIC TREE CONSTRUCTION USING GRAPHICS PROCESSING UNITS / He, Jintai 01 January 2008 (has links)
Sequence alignment has become a routine procedure in evolutionary biology for finding evolutionary relationships between primary sequences of DNA, RNA, and protein. The Smith-Waterman and Needleman-Wunsch algorithms address local and global alignment respectively. Both are based on dynamic programming and guarantee optimal results, and they have been in wide use for decades. However, their time and space requirements increase exponentially as the number of sequences to be aligned grows. Here I present a novel approach to improve the performance of sequence alignment by using a graphics processing unit, which is capable of handling large amounts of data in parallel.
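As a point of reference for the dynamic-programming formulation mentioned in this abstract, the following is a minimal Needleman-Wunsch sketch for the global alignment of two sequences; the scoring values (match = 1, mismatch = -1, gap = -2) are arbitrary illustrative choices and are not taken from the thesis.

    # Minimal Needleman-Wunsch global alignment score (illustrative scoring only).
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        n, m = len(a), len(b)
        # score[i][j] = best score for aligning a[:i] with b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag,                    # substitute / match
                                  score[i - 1][j] + gap,   # gap in b
                                  score[i][j - 1] + gap)   # gap in a
        return score[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))

Smith-Waterman differs mainly in clamping every cell at zero and reporting the best-scoring cell anywhere in the table, which yields a local rather than a global alignment; aligning k sequences with the same recurrence requires a k-dimensional table, which is where the exponential growth noted above comes from.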
98
General motion estimation and segmentation / Wu, Siu Fan January 1990 (has links)
In this thesis, estimation of motion from an image sequence is investigated. The emphasis is on the novel use of motion models for describing two-dimensional motion. Special attention is directed towards general motion models which are not restricted to translational motion. In contrast to translational motion, the 2-D motion is described by the model using motion parameters. There are two major areas which can benefit from the study of general motion models. The first one is image sequence processing and compression. In this context, the use of a motion model provides a more compact description of the motion information because the model can be applied to a larger area. The second area is computer vision. The general motion parameters provide clues to the understanding of the environment. This offers a simpler alternative to techniques such as optical flow analysis. A direct approach is adopted here to estimate the motion parameters directly from an image sequence. This has the advantage of avoiding the error caused by the estimation of optical flow. A differential method has been developed for the purpose. This is applied in conjunction with a multi-resolution scheme. An initial estimate is obtained by applying the algorithm to a low-resolution image. The initial estimate is then refined by applying the algorithm to images of higher resolution. In this way, even severe motion can be estimated with high resolution. However, the algorithm is unable to cope with the situation of multiple moving objects, mainly because of the least-squares estimator used. A second algorithm, inspired by the Hough transform, is therefore developed to estimate the motion parameters of multiple objects. By formulating the problem as an optimization problem, the Hough transform is computed only implicitly. This drastically reduces the computational requirement as compared with the Hough transform. The criterion used in optimization is a measure of the degree of match between two images. It has been shown that the measure is a well-behaved function in the vicinity of the motion parameter vectors describing the motion of the objects, depending on the smoothness of the images. Therefore, smoothing an image has the effect of allowing longer-range motion to be estimated. Segmentation of the image according to motion is achieved at the same time. The ability to estimate general motion in the situation of multiple moving objects represents a major step forward in 2-D motion estimation. Finally, the application of motion compensation to the problem of frame rate conversion is considered. The handling of the covered and uncovered background has been investigated. A new algorithm to obtain a pixel value for the pixels in those areas is introduced. Unlike published algorithms, the background is not assumed stationary. This presents a major obstacle which requires the study of occlusion in the image. During the research, the art of motion estimation has been advanced from simple motion vector estimation to a more descriptive level: the ability to point out that a certain area in an image is undergoing a zooming operation is one example. Only low-level information such as image gradient and intensity function is used. In many different situations, problems are caused by the lack of higher-level information. This seems to suggest that general motion estimation is much more than using a general motion model and developing an algorithm to estimate the parameters.
To advance further the state of the art of general motion estimation, it is believed that future research effort should focus on higher level aspects of motion understanding.
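To illustrate what the "general" (parametric) motion model described in this abstract adds over a single translational vector, the sketch below warps pixel coordinates with a six-parameter affine model, which can represent translation, rotation, scaling (zoom) and shear in one parameter vector; the parameter values and image size are invented for the example and do not come from the thesis.

    import numpy as np

    def affine_warp_coords(height, width, p):
        """Map pixel coordinates with a 6-parameter affine motion model.
        p = (a1, a2, a3, a4, a5, a6): x' = a1*x + a2*y + a3, y' = a4*x + a5*y + a6.
        A pure translation is the special case a1 = a5 = 1 and a2 = a4 = 0."""
        ys, xs = np.mgrid[0:height, 0:width]
        a1, a2, a3, a4, a5, a6 = p
        x_new = a1 * xs + a2 * ys + a3
        y_new = a4 * xs + a5 * ys + a6
        return x_new, y_new

    # Example: a 5% zoom about the origin plus a small translation.
    x_new, y_new = affine_warp_coords(240, 320, (1.05, 0.0, 2.0, 0.0, 1.05, -1.0))
    print(x_new.shape)  # (240, 320)

In a multi-resolution scheme of the kind described above, such a parameter vector would be estimated on a coarse, smoothed version of the images and then refined at each finer level.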
99
Allelic sequence diversity at the human beta-globin locus / Fullerton, Stephanie Malia January 1994 (has links)
No description available.
100
Algorithms for ab initio identification and classification of ncRNAs / Platon, Ludovic 30 January 2019 (has links)
Identification of non-coding RNAs (ncRNAs) helps to improve our understanding of biology. The biological functions of a majority of ncRNA classes are known, but other classes remain to be discovered, and identifying and classifying ncRNAs computationally is not a trivial task. The relevant features for each class of ncRNAs rely on multiple heterogeneous sources of data (sequences, secondary structure, interaction with other biological components, etc.), which calls for appropriate methods. During this thesis, we developed methods relying on Self-Organizing Maps (SOM). The SOM is used to analyze and represent the ncRNAs by a map of clusters where the topology of the data is preserved. We proposed a new SOM variant called MSSOM which can handle multiple sources of data, whether numerical or complex (represented by kernels). MSSOM combines data sources by using a SOM for each source and learns, for each cluster, the best combination of sources. We also proposed a supervised variant of SOM with rejection, called SLSOM. SLSOM identifies and classifies the known classes using a multi-layer perceptron on the output of a SOM. The rejection option associated with the output layer allows unreliable predictions to be rejected and potential new classes to be identified. These methods led to the development of two bioinformatics tools. The first is the application of a variant of SLSOM to the discrimination of coding and non-coding RNAs. This method, called IRSOM, has been evaluated on a wide range of species from different kingdoms (plants, animals, bacteria and fungi). Using a simple set of sequence features, we showed that IRSOM separates coding and non-coding RNAs efficiently. With the SOM visualization and the rejection option, we also highlighted and analyzed some ambiguous RNAs in human. The second tool, called CRSOM, combines MSSOM and SLSOM to classify ncRNAs into subclasses by integrating two data sources: sequence k-mer frequencies and a Gaussian kernel over the secondary structure using the edit distance. We show that CRSOM gives results comparable to the reference tool (nRC) without rejection, and better results with the rejection option.
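As a concrete, minimal sketch of the two ingredients this abstract combines, the code below shows an online SOM update together with a distance-based rejection rule that refuses a prediction when the winning unit lies too far from the input. The map size, learning rate, and rejection threshold are arbitrary illustrative values; this is not the IRSOM or CRSOM implementation, which additionally uses a neighbourhood function and a multi-layer perceptron on the SOM output.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, dim = 25, 8                # a 5x5 map flattened to 25 units, 8-d inputs
    weights = rng.normal(size=(n_units, dim))

    def som_step(x, weights, lr=0.1):
        """One online SOM update: move the best-matching unit (BMU) toward x.
        A full SOM would also pull the BMU's map neighbours; omitted for brevity."""
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        weights[bmu] += lr * (x - weights[bmu])
        return bmu

    def predict_with_reject(x, weights, unit_labels, threshold=2.0):
        """Assign the label of the nearest unit, or reject when it is too far away."""
        d = np.linalg.norm(weights - x, axis=1)
        bmu = int(np.argmin(d))
        return unit_labels[bmu] if d[bmu] <= threshold else "reject"

    # Toy usage: fit the map on random data, then classify with the rejection option.
    for x in rng.normal(size=(200, dim)):
        som_step(x, weights)
    unit_labels = ["classA" if i % 2 == 0 else "classB" for i in range(n_units)]
    print(predict_with_reject(rng.normal(size=dim), weights, unit_labels))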