About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. Structure comparison in bioinformatics

Peng, Zeshan. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
2. Multiple structural alignment for proteins

Siu, Wing-yan. January 2008 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2008. / Includes bibliographical references (leaves 61-65). Also available in print.
3. A software framework for single molecule estimation

Abraham, Anish V. January 2008 (has links)
Thesis (M.S.)--University of Texas at Dallas, 2008. / Includes vita. Includes bibliographical references (leaves 77-79)
4. Coverage Analysis in Clinical Next-Generation Sequencing

Odelgard, Anna January 2019 (has links)
With the new way of sequencing by NGS, new tools had to be developed to work with new data formats, to handle larger data sizes than previous techniques produced, and to check the accuracy of the data. Coverage analysis is one important quality control for NGS data: the coverage indicates how many times each base pair has been sequenced and thus how trustworthy each base call is. For clinical purposes every base of interest must be quality controlled, as one wrong base call could affect the patient negatively. Software for coverage analysis with enough accuracy and detail for clinical applications is sparse. Tools such as Samtools can calculate coverage values but do not further process this information into a QC report of each base pair of interest. The aim of this master's thesis has therefore been to create a new coverage analysis report tool, named CAR tool, that extracts coverage values via Samtools and uses these data to produce a report consisting of tables, lists and figures. CAR tool was created to replace the currently used tool, ExCID, at the Clinical Genomics facility at SciLifeLab in Uppsala, and was developed to meet the needs of the bioinformaticians and clinicians there.

CAR tool is written in Python and launched from a terminal window. Its main function is to display coverage breadth values for each region of interest and to extract all sub-regions below a chosen coverage depth threshold. The low coverage regions are then reported together with region name, start and stop positions, length and mean coverage value. To make the tool useful to as many users as possible, several settings can be enabled with flags when calling the tool, such as generating pie charts of each region’s coverage values, filtering reads and bases by quality, or supplying a custom command line for the coverage calculation by Samtools.

The tool has proved to find these low coverage regions very well. Most low regions found are also found by ExCID, the currently used tool; some differences did occur, however, and every such region was verified in IGV, where the coverage values coincided with those found by CAR tool. CAR tool is written to find all low coverage regions even if they are only one base pair long, whereas ExCID instead seems to report larger low coverage regions and does not take very short ones into account. For more about the functions and how to use CAR tool, see the user instructions in the appendix and on GitHub in the repository anod6351.
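As a minimal sketch of the core idea described above (not the CAR tool itself), the snippet below streams per-base depths from `samtools depth` and collects contiguous sub-regions that fall below a chosen depth threshold; the file names, the 20x default threshold and the function name are illustrative assumptions only.

```python
# Minimal sketch: report contiguous low-coverage sub-regions from `samtools depth`.
# Not the CAR tool itself; threshold and file names are examples.
import subprocess

def low_coverage_regions(bam: str, bed: str, threshold: int = 20):
    """Yield (chrom, start, end, mean_depth) for runs of bases below `threshold`."""
    # `samtools depth -a -b regions.bed sample.bam` prints one line per base:
    # chromosome, 1-based position, depth.
    proc = subprocess.Popen(
        ["samtools", "depth", "-a", "-b", bed, bam],
        stdout=subprocess.PIPE, text=True,
    )
    run = None  # current low-coverage run: [chrom, start, end, depth_sum]
    for line in proc.stdout:
        chrom, pos, depth = line.split()
        pos, depth = int(pos), int(depth)
        if depth < threshold:
            if run and run[0] == chrom and pos == run[2] + 1:
                run[2], run[3] = pos, run[3] + depth            # extend the run
            else:
                if run:
                    yield run[0], run[1], run[2], run[3] / (run[2] - run[1] + 1)
                run = [chrom, pos, pos, depth]                  # start a new run
        elif run:
            yield run[0], run[1], run[2], run[3] / (run[2] - run[1] + 1)
            run = None
    if run:
        yield run[0], run[1], run[2], run[3] / (run[2] - run[1] + 1)
    proc.wait()

# Example: regions below 20x within the captured targets.
# for chrom, start, end, mean in low_coverage_regions("sample.bam", "targets.bed"):
#     print(chrom, start, end, f"{mean:.1f}")
```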
5. Implementation of an automatic quality control of derived data files for NONMEM

Sandström, Eric January 2019 (has links)
A pharmacometric analysis must be based on correct data to be valid. Source clinical data is rarely ready to be modelled as is, but rather needs to be reprogrammed to fit the format required by the pharmacometric modelling software. The reprogramming steps include selecting the subsets of data relevant for modelling, deriving new information from the source, and adjusting units and encoding. Sometimes the source data may also be flawed, containing vague definitions and missing or confusing values. In either case, the source data needs to be reprogrammed, followed by extensive quality control to capture any errors or inconsistencies produced along the way. This quality control is a lengthy task that is often performed manually, either by the scientists conducting the pharmacometric study or by independent reviewers. This project presents an automatic data quality control whose purpose is to aid the data curation process and to minimize any potential errors that would otherwise have to be detected by the manual quality control. The automatic quality control is implemented as an R package and is specifically tailored to the needs of Pharmetheus.
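The package described above is written in R; purely as an illustration of the kind of automated consistency checks such a QC can perform, here is a small Python sketch that compares a derived NONMEM-style dataset against its clinical source. The column names (ID, TIME, DV, WT) and the baseline-weight rule are hypothetical examples, not the actual Pharmetheus specification.

```python
# Illustrative only: automated checks of a derived dataset against its source.
# Column names and the baseline-weight derivation rule are hypothetical.
import pandas as pd

def check_derived_against_source(source: pd.DataFrame, derived: pd.DataFrame) -> list[str]:
    """Return human-readable findings; an empty list means all checks passed."""
    findings = []
    # 1. Every subject in the derived file must exist in the source data.
    unknown_ids = set(derived["ID"]) - set(source["ID"])
    if unknown_ids:
        findings.append(f"Derived file contains unknown subject IDs: {sorted(unknown_ids)}")
    # 2. Required columns may not contain missing values.
    for col in ("ID", "TIME", "DV"):
        if derived[col].isna().any():
            findings.append(f"Column {col} has missing values")
    # 3. A derived covariate must match a recomputation from the source, e.g.
    #    baseline weight taken as the first recorded weight per subject.
    baseline_wt = source.sort_values("TIME").groupby("ID")["WT"].first().rename("WT_SRC")
    merged = derived.drop_duplicates("ID").set_index("ID").join(baseline_wt)
    mismatched = merged.index[(merged["WT"] - merged["WT_SRC"]).abs() > 1e-6].tolist()
    if mismatched:
        findings.append(f"Baseline WT differs from source for subjects: {mismatched}")
    return findings
```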
6. Predicting adverse drug reactions in cancer treatment using a neural network based approach

Hillerton, Thomas January 2018 (has links)
No description available.
7. Development of a phylogenomic framework for the krill

Gevorgyan, Arusjak January 2018 (has links)
Over the last few decades, many krill stocks have declined in size and number, likely as a consequence of global climate change (Siegel 2016). A major risk factor is the increased level of carbon dioxide (CO2) in the ocean. A collapse of the krill population has the potential to disrupt the ocean ecosystem, as krill are the main connection between primary producers such as phytoplankton and larger animals (Murphy et al. 2012). The aim of this project is to produce the first phylogenomic framework for krill, with the help of powerful comparative bioinformatics and phylogenomic methods, in order to find and analyse the genes that help krill adapt to their environment. A complication for such studies is that no reference genome sequence is yet available for any krill species. To strengthen and increase trust in our results, two different pipelines were run, each with a different Orthology Assessment Toolkit (OAT), Orthograph and UPhO, in order to establish orthology relationships between transcripts/genes. Since UPhO produces well-supported trees in which the majority of the gene trees match the species tree, it is recommended as the more suitable OAT for generating a robust molecular phylogeny of krill. The second aim of this project was to estimate the level of positive selection in E. superba, in order to lay a foundation for assessing the level of selection acting on protein-coding sequences in krill. As expected, the level of selection in E. superba was quite high, which indicates that krill adapt to their changing environment by positive selection rather than by genetic drift.
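As a toy illustration of what estimating selection on protein-coding sequences involves, the sketch below classifies codon differences between two aligned coding sequences as synonymous or nonsynonymous. It is only a crude proxy, not the method used in the thesis (which would typically rely on dedicated dN/dS software); Biopython is assumed to be available.

```python
# Crude classification of codon differences as synonymous vs nonsynonymous.
# A rough proxy only; proper analyses model substitution rates (e.g. dN/dS).
from Bio.Seq import Seq  # assumes Biopython is installed

def codon_differences(cds_a: str, cds_b: str):
    """Count (synonymous, nonsynonymous) codon differences in an aligned CDS pair."""
    assert len(cds_a) == len(cds_b) and len(cds_a) % 3 == 0
    syn = nonsyn = 0
    for i in range(0, len(cds_a), 3):
        ca, cb = cds_a[i:i + 3].upper(), cds_b[i:i + 3].upper()
        if ca == cb or "-" in ca or "-" in cb:
            continue  # skip identical or gapped codons
        if str(Seq(ca).translate()) == str(Seq(cb).translate()):
            syn += 1      # nucleotide change, same amino acid
        else:
            nonsyn += 1   # amino-acid-changing difference
    return syn, nonsyn

print(codon_differences("ATGGCT", "ATGGCC"))  # -> (1, 0): a silent change in the second codon
```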
8. Evaluation of de novo assembly using PacBio long reads

Che, Huiwen January 2016 (has links)
New sequencing technologies show promise for the construction of complete and accurate genome sequences through de novo assembly, a process that joins reads by overlap into longer contiguous sequences without the need for a reference genome. High-quality de novo assembly leads to a better understanding of genetic variation. The purpose of this thesis is to evaluate human genome sequences obtained from the PacBio sequencing platform, a new technology suitable for de novo assembly of large genomes. The evaluation focuses on comparing sequence identity between our own de novo assemblies and the available human reference and, through that, benchmarking the accuracy of our data. Sequences that are absent from the reference genome are also investigated for potential unannotated genes. We additionally assess complex structural variation using different approaches. Our assemblies show high consensus with the human reference genome, with ~98.6% of the bases in the assemblies mapped to the human reference. We also detect more than ten thousand structural variants, including some large rearrangements, with respect to the reference.
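As a hedged illustration of the kind of mapped-base benchmark described (the thesis does not state its exact pipeline; minimap2 and the PAF format are assumptions here), the sketch below estimates the fraction of assembly bases that align to the reference from a PAF alignment file.

```python
# Sketch: fraction of assembly (query) bases covered by at least one alignment
# in a PAF file, e.g. from `minimap2 ref.fa assembly.fa > aln.paf` (assumed workflow).
from collections import defaultdict
from typing import Optional

def mapped_fraction(paf_path: str, total_bases: Optional[int] = None) -> float:
    lengths = {}                          # contig name -> contig length
    intervals = defaultdict(list)         # contig name -> aligned (start, end) intervals
    with open(paf_path) as paf:
        for line in paf:
            fields = line.rstrip("\n").split("\t")
            # PAF columns 1-4: query name, query length, query start, query end (BED-like).
            name, qlen = fields[0], int(fields[1])
            intervals[name].append((int(fields[2]), int(fields[3])))
            lengths[name] = qlen
    covered = 0
    for ivs in intervals.values():        # merge overlapping intervals per contig
        ivs.sort()
        cur_start, cur_end = ivs[0]
        for start, end in ivs[1:]:
            if start > cur_end:           # gap: close the current merged interval
                covered += cur_end - cur_start
                cur_start, cur_end = start, end
            else:
                cur_end = max(cur_end, end)
        covered += cur_end - cur_start
    # Unaligned contigs never appear in a PAF, so pass the total assembly size
    # explicitly for a stricter denominator.
    return covered / (total_bases or sum(lengths.values()))

# print(f"{mapped_fraction('aln.paf'):.1%} of assembly bases align to the reference")
```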
9. A bioinformatician's view on the evolution of smell perception

Anders, Patrizia January 2006 (has links)
Background: The origin of vertebrate sensory systems still holds many mysteries and thus challenges for bioinformatics. The evolution of the sense of smell in particular poses important puzzles, namely the question whether or not the vomeronasal system is older than the main olfactory system. Here I compare receptor sequences of the two distinct systems in a phylogenetic study to determine their relationships among several different vertebrate species. Results: Receptors of the two olfactory systems share little sequence similarity and prove to be a challenge for multiple sequence alignment. However, recent dramatic improvements in alignment tools allow for better results and higher confidence. Different strategies and tools were employed and compared to derive a high-quality alignment that holds information about the evolutionary relationships between the different receptor types. The resulting maximum-likelihood tree supports the theory that the vomeronasal system is an ancestor of the main olfactory system rather than an evolutionary novelty of the tetrapods. Conclusions: The connections between the two systems of smell perception might be much more fundamental than the common architecture of their receptors. A better understanding of these parallels is desirable, not only with respect to our view of evolution, but also in the context of further exploring the functionality and complexity of odor perception. Along the way, this work offers a practical protocol through the jungle of programs concerned with sequence data and phylogenetic reconstruction.
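As a sketch of the alignment-plus-maximum-likelihood protocol the abstract alludes to (the thesis compared several tools; MAFFT and IQ-TREE are assumed stand-ins here, not necessarily the programs actually used), a minimal Python driver could look like this:

```python
# Hedged sketch: align divergent receptor sequences, then build an ML tree.
# MAFFT L-INS-i and IQ-TREE are assumed stand-ins for the tools compared in the thesis.
import subprocess

def align_and_tree(fasta_in: str, aln_out: str = "receptors.aln.fasta") -> None:
    # High-accuracy iterative alignment, suited to divergent sequences.
    with open(aln_out, "w") as out:
        subprocess.run(["mafft", "--localpair", "--maxiterate", "1000", fasta_in],
                       stdout=out, check=True)
    # Maximum-likelihood tree with automatic model selection and ultrafast bootstrap.
    subprocess.run(["iqtree", "-s", aln_out, "-m", "MFP", "-bb", "1000"], check=True)

# align_and_tree("olfactory_and_vomeronasal_receptors.fasta")
```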
10. Using an ontology to enhance metabolic or signaling pathway comparisons by biological and chemical knowledge

Pohl, Matin January 2006 (has links)
Motivation: As genome-scale efforts to investigate the metabolic networks of miscellaneous organisms are ongoing, the amount of pathway data is growing. At the same time, an increasing amount of gene expression data from microarrays becomes available for reverse engineering, delivering e.g. hypothetical regulatory pathway data. To avoid being overwhelmed by data and to keep track of genuinely new information, analysis tools are needed. One vital task is the comparison of pathways for the detection of similar functionality, overlaps or, in the case of reverse engineering, known data corroborating a hypothetical pathway. A comparison method using ontological knowledge about molecules and reactions offers a more biological point of view, which graph-theoretical approaches have so far missed. Such a comparison approach based on an ontology is described in this report. Results: An algorithm is introduced that compares pathways component by component. The method was applied to two selected databases, and the results showed that it is not satisfactory as a stand-alone method. Further development possibilities are suggested, and steps toward an integrated method combining several approaches are recommended. Availability: The source code, the database snapshots used and pictures can be requested from the author.
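As a toy sketch of the component-by-component comparison idea described above, the snippet below scores two pathway components not by identity but by how many ancestor terms they share in an ontology. The miniature ontology, the pathway contents and the Jaccard-style scoring are made-up examples, not the thesis's data or algorithm.

```python
# Toy sketch: ontology-aware similarity of pathway components.
# The tiny ontology and pathways below are invented for illustration.
parents = {                       # child term -> parent terms
    "glucose": {"hexose"}, "fructose": {"hexose"},
    "hexose": {"monosaccharide"}, "monosaccharide": {"carbohydrate"},
    "carbohydrate": set(),
}

def ancestors(term: str) -> set[str]:
    """All ancestor terms of `term`, including the term itself."""
    seen, frontier = {term}, [term]
    while frontier:
        for parent in parents.get(frontier.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen

def term_similarity(a: str, b: str) -> float:
    """Jaccard overlap of ancestor sets: 1.0 for identical terms, 0.0 for unrelated ones."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    return len(anc_a & anc_b) / len(anc_a | anc_b)

def pathway_similarity(p1: list[str], p2: list[str]) -> float:
    """Average best-match similarity of each component of p1 against p2."""
    return sum(max(term_similarity(c1, c2) for c2 in p2) for c1 in p1) / len(p1)

print(term_similarity("glucose", "fructose"))                    # 0.6: related via 'hexose'
print(pathway_similarity(["glucose"], ["fructose", "carbohydrate"]))
```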
