  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Vasculature reconstruction from 3D cryomicrotome images

Goyal, Ayush January 2013 (has links)
Background: Research in heart disease can be aided by modelling myocardial hemodynamics with knowledge of coronary pressure and vascular resistance measured from the geometry and morphometry of the coronary vasculature. This study presents methods to automatically reconstruct accurate, detailed coronary vascular anatomical models from high-resolution three-dimensional optical fluorescence cryomicrotomography image volumes, for simulating blood flow in coronary arterial trees.

Methods: Images of fluorescent cast and bead particles perfused into the same heart form the vasculature and microsphere datasets, which are employed in a novel combined approach to measure the vasculature and simulate a flow model on the extracted coronary vascular tree for estimating regional myocardial perfusion. The microspheres are used in two capacities: as fiducial point sources for measuring the optical image formation, so that the vasculature dataset can be measured accurately, and as flowing particles for measuring regional myocardial perfusion through the reconstructed vasculature. A new model-based template-matching method of vascular radius estimation is proposed that incorporates a model of the optical fluorescent image formation, measured from the microspheres, and a template of the vessels’ tubular geometry.

Results: The new method reduced the error in vessel radius estimation from 42.9% to 0.6% in a 170-micrometer vessel, compared with the Full-Width Half-Maximum method. Whole-organ porcine coronary vascular trees, automatically reconstructed with the proposed method, contained more than 92,000 vessel segments with radii in the range 0.03–1.9 mm. The discrepancy between the microsphere perfusion measurements and the regional flow estimated with a 1-D steady-state linear static blood flow simulation on the reconstructed vasculature was modelled with daughter-to-parent area ratio and branching angle as parameters. Correcting the flow simulation by incorporating this model of the disproportionate distribution of microspheres reduced the error in the estimation of fractional microsphere distribution in oblique branches with angles of 100°–120° from 24% to 7.4%.
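The Full-Width Half-Maximum baseline the abstract compares against can be illustrated with a small sketch (Python, with made-up numbers; this is not code from the thesis): when a vessel's cross-section is blurred by a Gaussian point-spread function, the half-width at half maximum over-reads the radius once the vessel is comparable in size to the PSF, which is why a model of the image formation is needed for small vessels.

```python
import math
import numpy as np

def blurred_profile(radius_um, psf_sigma_um, x):
    """1-D cross-section of a filled vessel (a top-hat of half-width radius_um)
    convolved with a Gaussian PSF: a difference of error functions."""
    s = psf_sigma_um * math.sqrt(2.0)
    return np.array([0.5 * (math.erf((xi + radius_um) / s)
                            - math.erf((xi - radius_um) / s)) for xi in x])

def fwhm_radius(x, profile):
    """FWHM radius estimate: half the width of the region above half peak."""
    above = x[profile >= profile.max() / 2.0]
    return (above.max() - above.min()) / 2.0

x = np.linspace(-300.0, 300.0, 6001)                     # 0.1 um grid
small = fwhm_radius(x, blurred_profile(20.0, 30.0, x))   # vessel smaller than PSF
large = fwhm_radius(x, blurred_profile(200.0, 30.0, x))  # vessel much larger than PSF
```

For the large vessel the FWHM estimate recovers the true 200 um radius almost exactly, while the 20 um vessel is over-read toward the PSF width — the kind of systematic bias the model-based template matching is designed to remove.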
152

Biomimetic and autonomic server ensemble orchestration

Nakrani, Sunil January 2005 (has links)
This thesis addresses orchestration of servers amongst multiple co-hosted internet services such as e-Banking, e-Auction and e-Retail in hosting centres. The hosting paradigm entails levying fees for hosting third party internet services on servers at guaranteed levels of service performance. The orchestration of server ensemble in hosting centres is considered in the context of maximising the hosting centre's revenue over a lengthy time horizon. The inspiration for the server orchestration approach proposed in this thesis is drawn from nature and generally classed as swarm intelligence, specifically, sophisticated collective behaviour of social insects borne out of primitive interactions amongst members of the group to solve problems beyond the capability of individual members. Consequently, the approach is self-organising, adaptive and robust. A new scheme for server ensemble orchestration is introduced in this thesis. This scheme exploits the many similarities between server orchestration in an internet hosting centre and forager allocation in a honeybee (Apis mellifera) colony. The scheme mimics the way a honeybee colony distributes foragers amongst flower patches to maximise nectar influx, to orchestrate servers amongst hosted internet services to maximise revenue. The scheme is extended by further exploiting inherent feedback loops within the colony to introduce self-tuning and energy-aware server ensemble orchestration. In order to evaluate the new server ensemble orchestration scheme, a collection of server ensemble orchestration methods is developed, including a classical technique that relies on past history to make time varying orchestration decisions and two theoretical techniques that omnisciently make optimal time varying orchestration decisions or an optimal static orchestration decision based on complete knowledge of the future. The efficacy of the new biomimetic scheme is assessed in terms of adaptiveness and versatility. 
The performance study uses representative classes of internet traffic stream behaviour, service users' behaviour, demand intensities, co-hosting of multiple services, and differentiated hosting fee schedules. The biomimetic orchestration scheme is compared with the classical and theoretically optimal orchestration techniques in terms of revenue stream. The study reveals that the new server ensemble orchestration approach is adaptive in widely varying external internet environments. It also highlights the versatility of the biomimetic approach over the classical technique. The self-tuning scheme improves on the original performance, and the energy-aware scheme conserves significant energy with minimal degradation in revenue performance. The simulation results also indicate that the new scheme is competitive with, or better than, the classical and static methods.
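The recruitment idea behind the scheme can be caricatured in a few lines (a hypothetical sketch, not the thesis's actual algorithm; service names and numbers are invented): each hosted service is a "flower patch" whose advert strength is its recent revenue per server, and the server ensemble is redistributed in proportion to those adverts, so more profitable services recruit more "foragers".

```python
def bee_allocate(revenue_per_server, total_servers):
    """Redistribute servers across services in proportion to each service's
    recent revenue per server -- the 'nectar influx' each patch advertises.
    A deterministic caricature of honeybee forager recruitment; assumes all
    revenue rates are positive."""
    total = sum(revenue_per_server.values())
    raw = {s: total_servers * r / total for s, r in revenue_per_server.items()}
    alloc = {s: int(n) for s, n in raw.items()}
    # Hand out servers lost to rounding, largest fractional part first.
    leftovers = total_servers - sum(alloc.values())
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftovers]:
        alloc[s] += 1
    return alloc

demand = {"e-Banking": 4.0, "e-Auction": 1.0, "e-Retail": 3.0}
allocation = bee_allocate(demand, 16)
```

Rerunning this each decision epoch with fresh revenue figures gives the self-organising, feedback-driven behaviour described above, without any central model of future demand.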
153

Confocal single-molecule fluorescence as a tool for investigating biomolecular dynamics in vitro and in vivo

Torella, Joseph Peter January 2011 (has links)
Confocal single-molecule fluorescence is a powerful tool for monitoring conformational dynamics, and has provided new insight into the enzymatic activities of complex biological molecules such as DNA and RNA polymerases. Though useful, such studies are typically qualitative in nature, and performed almost exclusively in highly purified, in vitro settings. In this work, I focus on improving the methodology of confocal single-molecule fluorescence in two broad ways: (i) by enabling the quantitative identification of molecular dynamics in proteins and nucleic acids in vitro, and (ii) by developing the tools needed to perform these analyses in vivo. Toward the first goal, and together with several colleagues, I have developed three novel methods for the quantitative identification of dynamics in biomolecules: (i) Burst Variance Analysis (BVA), which unambiguously identifies dynamics in single-molecule FRET experiments; (ii) Dynamic Probability Density Analysis (PDA), which hypothesis-tests specific kinetic models against smFRET data and extracts rate information; and (iii) a novel molecular counting method useful for studying single-molecule thermodynamics. We validated these methods against Monte Carlo simulations and experimental DNA controls, and demonstrated their practical application in vitro by analyzing the “fingers-closing” conformational change in E. coli DNA Polymerase I; these studies identified unexpected conformational flexibility which may be important to the fidelity of DNA synthesis. To enable similar studies in the context of a living cell, we generated a nuclease-resistant DNA analogue of the Green Fluorescent Protein, or “Green Fluorescent DNA,” and developed an electroporation method to efficiently transfer it into the cytoplasm of E. coli. We demonstrate in vivo confocal detection of smFRET from this construct, which is both bright and photostable in the cellular milieu.
In combination with PDA, BVA and our novel molecular counting method, this Green Fluorescent DNA should enable the characterization of DNA and protein-DNA dynamics in living cells, at the single-molecule level. I conclude by discussing the ways in which these methods may be useful in investigating the dynamics of processes such as transcription, translation and recombination, both in vitro and in vivo.
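The intuition behind Burst Variance Analysis can be sketched numerically (illustrative parameters and simplified simulation; the real method is defined in the thesis and its associated papers): within a burst, the FRET efficiency of consecutive n-photon windows should scatter no more than binomial shot noise if the molecule is static, and more than that if the molecule interconverts between states.

```python
import numpy as np

rng = np.random.default_rng(1)

def window_std(acceptor, n=5):
    """Std of per-window FRET efficiency over consecutive n-photon windows,
    where `acceptor` flags each detected photon as acceptor (1) or donor (0)."""
    k = len(acceptor) // n
    return np.reshape(acceptor[: k * n], (k, n)).mean(axis=1).std()

def shot_noise_std(mean_e, n=5):
    """Expected window std for a static molecule: binomial counting noise only."""
    return np.sqrt(mean_e * (1.0 - mean_e) / n)

# Static molecule at E = 0.5 vs. one switching between E = 0.2 and E = 0.8
# (50-photon dwells), both with the same overall mean efficiency of 0.5.
static = rng.random(20000) < 0.5
states = np.repeat(np.tile([0.2, 0.8], 200), 50)
dynamic = rng.random(20000) < states

static_excess = window_std(static) - shot_noise_std(0.5)
dynamic_excess = window_std(dynamic) - shot_noise_std(0.5)
```

The static trace sits on the shot-noise line while the dynamic one lies well above it — the signature BVA uses to flag conformational dynamics without fitting any kinetic model.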
154

Developing clinical measures of lung function in COPD patients using medical imaging and computational modelling

Doel, Thomas MacArthur Winter January 2012 (has links)
Chronic obstructive pulmonary disease (COPD) describes a range of lung conditions including emphysema, chronic bronchitis and small airways disease. While COPD is a major cause of death and debilitating illness, current clinical assessment methods are inadequate: they are a poor predictor of patient outcome and insensitive to mild disease. A new imaging technology, hyperpolarised xenon MRI, offers the hope of improved diagnostic techniques, based on regional measurements using functional imaging. There is a need for quantitative analysis techniques to assist in the interpretation of these images. The aim of this work is to develop these techniques as part of a clinical trial into hyperpolarised xenon MRI. In this thesis we develop a fully automated pipeline for deriving regional measurements of lung function, making use of the multiple imaging modalities available from the trial. The core of our pipeline is a novel method for automatically segmenting the pulmonary lobes from CT data. This method combines a Hessian-based filter for detecting pulmonary fissures with anatomical cues from segmented lungs, airways and pulmonary vessels. The pipeline also includes methods for segmenting the lungs from CT and MRI data, and the airways from CT data. We apply this lobar map to the xenon MRI data using a multi-modal image registration technique based on automatically segmented lung boundaries, using proton MRI as an intermediate stage. We demonstrate our pipeline by deriving lobar measurements of ventilated volumes and diffusion from hyperpolarised xenon MRI data. In future work, we will use the trial data to further validate the pipeline and investigate the potential of xenon MRI in the clinical assessment of COPD. We also demonstrate how our work can be extended to build personalised computational models of the lung, which can be used to gain insights into the mechanisms of lung disease.
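The fissure-detection step rests on a standard observation about the image Hessian: at a bright sheet-like structure, one eigenvalue is strongly negative while the other two are near zero. A minimal NumPy sketch of that plateness measure on a synthetic volume (illustrative only; the thesis filter adds Gaussian scale-space smoothing, tuned eigenvalue ratios, and the anatomical cues described above):

```python
import numpy as np

def plate_measure(vol):
    """Crude plateness score per voxel: the largest-magnitude Hessian eigenvalue
    minus the next, kept only where that eigenvalue is negative (bright sheet)."""
    g = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gg = np.gradient(g[i])
        for j in range(3):
            H[..., i, j] = gg[j]
    lam = np.linalg.eigvalsh(H)
    lam = np.take_along_axis(lam, np.argsort(np.abs(lam), axis=-1), axis=-1)
    plate = np.abs(lam[..., 2]) - np.abs(lam[..., 1])
    return np.where(lam[..., 2] < 0.0, plate, 0.0)

# A bright plane at z = 10 inside a 21^3 volume.
z = np.arange(21.0)
vol = np.ones((21, 21, 1)) * np.exp(-0.5 * (z - 10.0) ** 2)[None, None, :]
score = plate_measure(vol)
```

The score fires on the synthetic plane and stays silent elsewhere; in the real pipeline this response is combined with the segmented lungs, airways and vessels to resolve ambiguous or incomplete fissures.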
155

Selection along the HIV-1 genome through the CTL mediated immune response

Palmer, Duncan January 2014 (has links)
During human immunodeficiency virus 1 (HIV-1) infection, the viral population is in constant battle with the host immune system. The cytotoxic T-lymphocyte (CTL) response, a branch of the adaptive immune response, is implicated in viral control and can drive viral evolution in the infected host population. Endogenous viral peptides, or ‘epitopes’, are presented to CTLs by human leukocyte antigen (HLA) class I molecules on the surface of infected cells where they may be identified as non-self. Mutations in or proximal to a viral epitope can result in ‘escape’ from CTLs targeting that epitope. The repertoire of epitopes which may be presented is dependent upon host class I HLA types. As such, reversion may occur after transmission due to changes in viral fitness and selection in the context of a new HLA background. Thus, parameters describing the dynamics of CTL escape and reversion are key to understanding how CTL responses within individuals relate to HIV-1 sequence evolution in the infected host population. Escape and reversion can be studied directly using biological assays and longitudinal viral sequence data, or indirectly by considering viral sequences across multiple hosts. Indirect approaches include tree based methods which detect associations between host HLA and viral sequence but do not estimate rates of escape and reversion, and ordinary differential equation (ODE) models which estimate these rates but do not consider the dependency structure inherent in viral sequence data. We introduce two models which estimate escape and reversion rates whilst accounting for the shared ancestry of viral sequence data. For our first model, we lay out an integrated Bayesian approach which combines genealogical inference and an existing epidemiological model to inform escape and reversion rate estimates. Using this model, we find evidence for correlation between escape rate estimates across widely separated geographical regions. 
We also observe a non-linear negative correlation between in vitro replicative capacity and escape rate. Both findings suggest that epistasis does not play a strong role in the escape process. Although our first model worked well, it had some key limitations, which we address in our second method. Notably, by making a series of approximations, we are able to account for recombination and analyse very large datasets that would be computationally infeasible under the first model. We verify our second approach through extensive simulations, and use the method to estimate both drug-associated and HLA-associated selection along portions of the HIV-1 genome. We test the results of the model using existing knowledge, and determine a collection of putative selected sites which warrant further investigation. Finally, we find evidence to support the notion that the CTL response played a role in HIV-1 subtype diversification.
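The ODE-style description of escape and reversion that the indirect approaches build on can be written in closed form (a textbook-style sketch with invented rates, not the thesis's actual model): in HLA-matched hosts the escaped fraction rises at escape rate φ, while in mismatched hosts the variant decays at reversion rate ψ.

```python
import math

def escaped_fraction_matched(t, phi, p0=0.0):
    """Fraction of HLA-matched hosts carrying the escape variant at time t,
    from dp/dt = phi * (1 - p), i.e. p(t) = 1 - (1 - p0) * exp(-phi * t)."""
    return 1.0 - (1.0 - p0) * math.exp(-phi * t)

def escaped_fraction_mismatched(t, psi, p0=1.0):
    """In HLA-mismatched hosts the variant reverts: dp/dt = -psi * p."""
    return p0 * math.exp(-psi * t)

# Illustrative rates (per year) -- not estimates from the thesis.
phi, psi = 0.2, 0.05
esc_5y = escaped_fraction_matched(5.0, phi)      # escape after 5 years
rev_5y = escaped_fraction_mismatched(5.0, psi)   # persistence after transmission
```

The asymmetry between φ and ψ is what lets cross-sectional sequence data carry information about within-host dynamics: variants escaping quickly but reverting slowly accumulate in the population, and the thesis's models estimate both rates while also accounting for the viral genealogy.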
156

Statistical models for neuroimaging meta-analytic inference

Salimi-Khorshidi, Gholamreza January 2011 (has links)
A statistical meta-analysis combines the results of several studies that address a set of related research hypotheses, thus increasing the power and reliability of the inference. Meta-analytic methods are over 50 years old and play an important role in science, pooling evidence from many trials to provide answers that any one trial would have insufficient samples to address. Meanwhile, the number of neuroimaging studies is growing dramatically, with many of these publications containing conflicting results or being based on only a small number of subjects. Hence there has been increasing interest in using meta-analysis methods to find consistent results for a specific functional task, or to predict the results of a study that has not been performed directly. The current state of neuroimaging meta-analysis is limited to coordinate-based meta-analysis (CBMA), i.e., using only the coordinates of activation peaks reported by a group of studies in order to "localize" the brain regions that respond to a certain type of stimulus. This class of meta-analysis suffers from a series of problems and hence cannot produce results as accurate as desired. In this research, we describe the problems that existing CBMA methods suffer from and introduce a hierarchical mixed-effects image-based meta-analysis (IBMA) solution that incorporates the sufficient statistics (i.e., the voxel-wise effect size and its associated uncertainty) from each study. In order to improve the statistical-inference stage of our proposed IBMA method, we introduce a nonparametric technique that is capable of adjusting such an inference for spatial nonstationarity.
Given that in common practice, neuroimaging studies rarely provide the full image data, in an attempt to improve the existing CBMA techniques we introduce a fully automatic model-based approach that employs Gaussian-process regression (GPR) for estimating the meta-analytic statistic image from its corresponding sparse and noisy observations (i.e., the collected foci). To conclude, we introduce a new way to approach neuroimaging meta-analysis that enables the analysis to result in information such as “functional connectivity” and networks of the brain regions’ interactions, rather than just localizing the functions.
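The "sufficient statistics" ingredient of IBMA is the standard inverse-variance-weighted combination, applied voxel-wise. A minimal fixed-effects sketch (the thesis uses a hierarchical mixed-effects model, which additionally estimates a between-study variance term; the numbers below are illustrative):

```python
import numpy as np

def fixed_effects_meta(effects, variances):
    """Voxel-wise inverse-variance-weighted meta-analysis.
    effects, variances: arrays of shape (n_studies, n_voxels)."""
    w = 1.0 / variances
    pooled = (w * effects).sum(axis=0) / w.sum(axis=0)   # weighted mean effect
    pooled_var = 1.0 / w.sum(axis=0)                      # pooled uncertainty
    z = pooled / np.sqrt(pooled_var)                      # meta-analytic z-map
    return pooled, pooled_var, z

# Four studies, three voxels, equal within-study variance of 4.0.
effects = np.array([[1.0, 0.0, 2.0],
                    [3.0, 0.0, 2.0],
                    [1.0, 0.0, 2.0],
                    [3.0, 0.0, 2.0]])
variances = np.full((4, 3), 4.0)
pooled, pooled_var, z = fixed_effects_meta(effects, variances)
```

With equal variances the pooled effect reduces to the plain study mean and the pooled variance shrinks by a factor of n — the gain in power that motivates image-based over coordinate-based pooling when full statistic images are available.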
157

Gene x gene interactions in genome wide association studies

Bhattacharya, Kanishka January 2014 (has links)
Genome wide association studies (GWAS) have revolutionized our approach to mapping genetic determinants of complex human diseases. However, even with success from recent studies, we have typically been able to explain only a fraction of the trait heritability. GWAS are typically analysed by testing for the marginal effects of single variants. Consequently, it has been suggested that gene-gene interactions might contribute to the missing heritability of complex diseases. GWAS incorporating interaction effects have not been routinely applied because of statistical and computational challenges relating to the number of tests performed, genome-wide. To overcome this issue, I have developed novel methodology to allow rapid testing of pairwise interactions in GWAS of complex traits, implemented in the IntRapid software. Simulations demonstrated that the power of this approach was equivalent to computationally demanding exhaustive searches of the genome, but required only a fraction of the computing time. Application of IntRapid to GWAS of a range of complex human traits undertaken by the Wellcome Trust Case Control Consortium (WTCCC) identified several interaction effects at nominal significance, which warrant further investigation in independent studies. In an attempt to fine-map the identified interacting loci, I undertook imputation of the WTCCC genotype data up to the 1000 Genomes Project reference panel (Phase 1 integrated release, March 2012) in the neighbourhood of the lead SNPs. I modified the IntRapid software to take account of imputed genotypes, and identified stronger signals of interaction after imputation at the majority of loci, where the lead SNP often had moved by hundreds of kilobases. The X-chromosome is often overlooked in GWAS of complex human traits, primarily because of the difference in the distribution of genotypes in males and females. 
I have extended IntRapid to allow for interactions with the X chromosome by considering males and females separately, and combining effect estimates across the sexes in a fixed-effects meta-analysis. Application to genotype data from the WTCCC failed to identify any strong signals of association with the X chromosome, despite known epidemiological differences between the sexes for the traits considered. The novel methods developed as part of this doctoral work provide a user-friendly, computationally efficient and powerful way of implementing genome-wide gene-gene interaction studies. Further work would be required to allow for more complex interaction modelling and to deal with the associated computational burden, particularly when using next-generation sequencing (NGS) data, which include a much larger set of SNPs. However, IntRapid is demonstrably efficient in exhaustively searching for pairwise interactions in GWAS of complex traits, potentially leading to novel insights into the genetic architecture and biology of human disease.
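The core pairwise test can be sketched as an ordinary least-squares regression of a quantitative trait on two genotype dosages plus their product, reporting the t-statistic on the product term (an illustrative sketch on simulated data; IntRapid's speed comes from algebraic shortcuts that avoid fitting every pair this naively):

```python
import numpy as np

def interaction_t(y, g1, g2):
    """t-statistic for the g1*g2 coefficient in y ~ 1 + g1 + g2 + g1*g2."""
    X = np.column_stack([np.ones_like(y), g1, g2, g1 * g2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)            # coefficient covariance
    return beta[3] / np.sqrt(cov[3, 3])

rng = np.random.default_rng(7)
n = 4000
g1 = rng.binomial(2, 0.3, n).astype(float)   # genotype dosages 0/1/2
g2 = rng.binomial(2, 0.3, n).astype(float)
noise = rng.normal(size=n)
t_with = interaction_t(g1 * g2 + noise, g1, g2)   # true interaction present
t_null = interaction_t(g1 + g2 + noise, g1, g2)   # purely additive trait
```

The additive trait leaves the interaction statistic near zero while the multiplicative one drives it far into the tail — and it is exactly the cost of computing this statistic for every SNP pair, genome-wide, that the fast approximations are designed to remove.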
158

Human population history and its interplay with natural selection

Siska, Veronika January 2019 (has links)
The complex demographic changes that underlie the expansion of anatomically modern humans out of Africa have important consequences for the dynamics of natural selection and our ability to detect it. In this thesis, I aimed to refine our knowledge of human population history using ancient genomes, and then used a climate-informed, spatially explicit framework to explore the interplay between complex demographies and selection. I first analysed a high-coverage genome from Upper Palaeolithic Romania from ~37.8 kya, and demonstrated an early diversification of multiple lineages shortly after the out-of-Africa expansion (Chapter 2). I then investigated Late Upper Palaeolithic (~13.3 ky old) and Mesolithic (~9.7 ky old) samples from the Caucasus and a Late Upper Palaeolithic (~13.7 ky old) sample from Western Europe, and found that these two groups belong to distinct lineages that also diverged shortly after the out-of-Africa expansion, ~45–60 ky ago (Chapter 3). Finally, I used East Asian samples from ~7.7 ky ago to show that there has been a greater degree of genetic continuity in this region compared to Europe (Chapter 4). In the second part of my thesis, I used a climate-informed, spatially explicit demographic model that captures the out-of-Africa expansion to explore natural selection. I first investigated whether the model can represent the confounding effect of demography on selection statistics, when applied to neutral parts of the genome (Chapter 5). Whilst the overlap between different selection statistics was somewhat underestimated by the model, the relationship between signals from different populations is generally well captured. I then modelled natural selection in the same framework and investigated the spatial distribution of two genetic variants associated with a protective effect against malaria, sickle-cell anaemia and β⁰ thalassemia (Chapter 6).
I found that although this model can reproduce the disjoint ranges of different variants typical of the former, it is incompatible with the overlapping distributions characteristic of the latter. Furthermore, our model is compatible with the inferred single origin of sickle-cell disease in most regions, but it cannot reproduce the presence of this disorder in India without long-distance migrations.
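The sickle-cell dynamics the spatial model carries forward come down to classic heterozygote advantage: AS carriers are protected against malaria while SS homozygotes suffer severe disease, so the allele settles at an interior equilibrium. A standard single-deme selection recursion illustrates this (fitness values are invented for illustration, not those used in the thesis):

```python
def next_freq(p, w_ss, w_as, w_aa):
    """One generation of viability selection on the sickle allele S at frequency p."""
    q = 1.0 - p
    w_bar = p * p * w_ss + 2.0 * p * q * w_as + q * q * w_aa
    return p * (p * w_ss + q * w_as) / w_bar

# Malaria setting: AS carriers fittest, SS severely affected, AA exposed to malaria.
w_ss, w_as, w_aa = 0.2, 1.0, 0.9
p = 0.01
for _ in range(2000):
    p = next_freq(p, w_ss, w_as, w_aa)

# Analytic overdominance equilibrium: s_aa / (s_aa + s_ss), with s = 1 - w.
p_hat = (1.0 - w_aa) / ((1.0 - w_aa) + (1.0 - w_ss))
```

The recursion converges to the analytic balance point (1/9 here) regardless of the starting frequency; embedding this local dynamic in a spatially explicit, climate-informed landscape is what allows the thesis to ask whether observed variant ranges and origins are reproducible without extra ingredients such as long-distance migration.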
159

Universal Computation and Memory by Neural Switching / Universalcomputer und Speicher mittels neuronaler Schaltvorgänge

Schittler Neves, Fabio 28 October 2010 (has links)
No description available.
160

Stochastic modelling and simulation in cell biology

Szekely, Tamas January 2014 (has links)
Modelling and simulation are essential to modern research in cell biology. This thesis follows a journey starting from the construction of new stochastic methods for discrete biochemical systems to using them to simulate a population of interacting haematopoietic stem cell lineages. The first part of this thesis is on discrete stochastic methods. We develop two new methods, the stochastic extrapolation framework and the Stochastic Bulirsch-Stoer methods. These are based on the Richardson extrapolation technique, which is widely used in ordinary differential equation solvers. We believed that it would also be useful in the stochastic regime, and this turned out to be true. The stochastic extrapolation framework is a scheme that admits any stochastic method with a fixed stepsize and known global error expansion. It can improve the weak order of the moments of these methods by cancelling the leading terms in the global error. Using numerical simulations, we demonstrate that this is the case up to second order, and postulate that this also follows for higher order. Our simulations show that extrapolation can greatly improve the accuracy of a numerical method. The Stochastic Bulirsch-Stoer method is another highly accurate stochastic solver. Furthermore, using numerical simulations we find that it is able to better retain its high accuracy for larger timesteps than competing methods, meaning it remains accurate even when simulation time is speeded up. This is a useful property for simulating the complex systems that researchers are often interested in today. The second part of the thesis is concerned with modelling a haematopoietic stem cell system, which consists of many interacting niche lineages. We use a vectorised tau-leap method to examine the differences between a deterministic and a stochastic model of the system, and investigate how coupling niche lineages affects the dynamics of the system at the homeostatic state as well as after a perturbation. 
We find that larger coupling allows the system to find the optimal steady-state blood cell levels. In addition, when the perturbation is applied randomly to the entire system, larger coupling also results in smaller post-perturbation cell fluctuations compared to non-coupled cells. In brief, this thesis contains four main sets of contributions: two new high-accuracy discrete stochastic methods that have been numerically tested; an improvement, usable with any leaping method, that introduces vectorisation together with a common stepsize-adaptation scheme; and an investigation of the effects of coupling lineages in a heterogeneous population of haematopoietic stem cell niche lineages.
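Richardson extrapolation, the deterministic idea the new stochastic methods transplant, cancels the leading error term by combining two step sizes. For explicit Euler, whose global error is O(h), combining runs at h and h/2 as 2·y_{h/2} − y_h removes the O(h) term (a deterministic illustration of the principle, not the stochastic extrapolation framework itself):

```python
import math

def euler(f, y0, t_end, h):
    """Explicit Euler with fixed step h for dy/dt = f(y)."""
    y = y0
    for _ in range(round(t_end / h)):
        y += h * f(y)
    return y

f = lambda y: -y              # dy/dt = -y, exact solution exp(-t)
exact = math.exp(-1.0)

y_h = euler(f, 1.0, 1.0, 0.1)
y_h2 = euler(f, 1.0, 1.0, 0.05)
y_extrap = 2.0 * y_h2 - y_h   # cancels the leading O(h) error term

err_h = abs(y_h - exact)
err_extrap = abs(y_extrap - exact)
```

The extrapolated value is far more accurate than either raw run; the thesis's contribution is showing that the same cancellation applies to the moments of discrete stochastic simulations, where the "error" to be cancelled is the weak (distributional) error rather than a pathwise one.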
