  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Méthodes d'analyse génétique de traits quantitatifs corrélés : application à l'étude de la densité minérale osseuse / Statistical methods for genetic analysis of correlated quantitative traits : application to the study of bone mineral density

Saint Pierre, Aude 03 January 2011
La plupart des maladies humaines ont une étiologie complexe avec des facteurs génétiques et environnementaux qui interagissent. Utiliser des phénotypes corrélés peut augmenter la puissance de détection de locus de trait quantitatif. Ce travail propose d’évaluer différentes approches d’analyse bivariée pour des traits corrélés en utilisant l’information apportée par les marqueurs au niveau de la liaison et de l’association. Le gain relatif de ces approches est comparé aux analyses univariées. Ce travail a été appliqué à la variation de la densité osseuse à deux sites squelettiques dans une cohorte d’hommes sélectionnés pour des valeurs phénotypiques extrêmes. Nos résultats montrent l’intérêt d’utiliser des approches bivariées, en particulier pour l’analyse d’association. Par ailleurs, dans le cadre du groupe de travail GAW16, nous avons comparé les performances relatives de trois méthodes d’association dans des données familiales. / The majority of complex diseases in humans are likely determined by both genetic and environmental factors. Using correlated phenotypes may increase the power to map the underlying Quantitative Trait Loci (QTLs). This work aims to evaluate and compare the performance of bivariate methods for detecting QTLs in correlated phenotypes by linkage and association analyses. We applied these methods to data on Bone Mineral Density (BMD) variation, measured at two skeletal sites, in a sample of males selected for extreme trait values. Our results demonstrate the relative gain, in particular for association analysis, of bivariate approaches when compared to univariate analyses. Finally, we study the performance of association methods to detect QTLs in the GAW16 simulated family data.
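The bivariate-versus-univariate comparison described above can be illustrated with a minimal simulation. This is a sketch with invented effect sizes and sample size, not the cohort data or the thesis's actual models: two correlated traits share a QTL, and per-trait regressions are compared against a two-trait Wilks' lambda test (exact F for a single predictor).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate a biallelic QTL (minor allele frequency 0.3, hypothetical effect
# sizes) influencing two traits correlated through a shared factor.
n = 500
g = rng.binomial(2, 0.3, size=n).astype(float)
shared = rng.normal(size=n)                      # shared environmental factor
y1 = 0.3 * g + 0.7 * shared + rng.normal(size=n)
y2 = 0.3 * g + 0.7 * shared + rng.normal(size=n)
Y = np.column_stack([y1, y2])

# Univariate tests: simple linear regression of each trait on genotype.
p_uni = [stats.linregress(g, y).pvalue for y in (y1, y2)]

# Bivariate test: Wilks' lambda comparing the two-trait regression on
# genotype against the intercept-only model (exact F when the hypothesis
# has one degree of freedom).
X_full = np.column_stack([np.ones(n), g])
B = np.linalg.lstsq(X_full, Y, rcond=None)[0]
E_full = Y - X_full @ B                          # full-model residuals
E_red = Y - Y.mean(axis=0)                       # intercept-only residuals
lam = np.linalg.det(E_full.T @ E_full) / np.linalg.det(E_red.T @ E_red)
p_traits = 2
F = (1 - lam) / lam * (n - 1 - p_traits) / p_traits
p_biv = stats.f.sf(F, p_traits, n - 1 - p_traits)
print(p_uni, p_biv)
```

With a QTL acting on both traits, the joint test pools the evidence across traits, which is the kind of gain the abstract reports for bivariate association analysis.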
32

Pharmacometric Methods and Novel Models for Discrete Data

Plan, Elodie L January 2011
Pharmacodynamic processes and disease progression are increasingly characterized with pharmacometric models. However, modelling options for discrete-type responses remain limited, although these response variables are commonly encountered clinical endpoints. Discrete data are generally ordinal (e.g. symptom severity), count (i.e. event frequency), or time-to-event (i.e. event occurrence). Underlying assumptions accompanying discrete data models need investigation, and possibly adaptation, in order to expand their use. Moreover, because these models are highly non-linear, estimation with linearization-based maximum likelihood methods may be biased. The aim of this thesis was to explore pharmacometric methods and novel models for discrete data through (i) the investigation of the benefits of treating discrete data with different modelling approaches, (ii) evaluations of the performance of several estimation methods for discrete models, and (iii) the development of novel models for handling complex discrete data recorded during (pre-)clinical studies. A simulation study indicated that approaches such as a truncated Poisson model and a logit-transformed continuous model were adequate for treating ordinal data ranked on a 0-10 scale. Features handling serial correlation and underdispersion were developed for these models, which were subsequently fitted to real pain scores. The performance of nine estimation methods was studied for dose-response continuous models. Other types of serially correlated count models were studied for the analysis of overdispersed data represented by the number of epilepsy seizures per day. For these types of models, the commonly used Laplace estimation method presented a bias, whereas the adaptive Gaussian quadrature method did not.
Count models were also compared to repeated time-to-event models when the exact time of gastroesophageal symptom occurrence was known.
Two new model structures handling repeated time-to-categorical events, i.e. events with an ordinal severity aspect, were introduced. Laplace and two expectation-maximisation estimation methods were found to perform well for frequent repeated time-to-event models. In conclusion, this thesis presents approaches, estimation methods, and diagnostics adapted to discrete data. Novel models and diagnostics were developed where lacking and applied to biological observations.
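The overdispersion that motivates the count models above (variance exceeding the mean, as in daily seizure counts) can be sketched with a gamma-Poisson mixture, which is exactly a negative binomial. These are hypothetical rates, not the thesis data, and simple moment/mean estimators stand in for the mixed-effects estimation methods the thesis actually compares:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Overdispersed daily counts: each subject-day has its own Poisson rate
# drawn from a gamma distribution (a gamma-Poisson mixture).
n_days = 1000
lam_i = rng.gamma(shape=2.0, scale=2.0, size=n_days)  # latent rates
counts = rng.poisson(lam_i)

mean, var = counts.mean(), counts.var()
print(f"mean={mean:.2f}, var={var:.2f}")   # var > mean signals overdispersion

# Poisson MLE: lambda-hat is the sample mean.
ll_pois = stats.poisson.logpmf(counts, mean).sum()

# Negative binomial by method of moments: var = mu + mu^2 / r.
r = mean**2 / (var - mean)
p = r / (r + mean)
ll_nb = stats.nbinom.logpmf(counts, r, p).sum()

print(f"logL Poisson={ll_pois:.1f}, logL NegBin={ll_nb:.1f}")
```

The negative binomial attains a clearly higher log-likelihood on such data, which is why a plain Poisson model is inadequate for overdispersed counts.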
33

Medical Image Registration and Stereo Vision Using Mutual Information

Fookes, Clinton Brian January 2003
Image registration is a fundamental problem that can be found in a diverse range of fields within the research community. It is used in areas such as engineering, science, medicine, robotics, computer vision and image processing, which often require the process of developing a spatial mapping between sets of data. Registration plays a crucial role in the medical imaging field where continual advances in imaging modalities, including MRI, CT and PET, allow the generation of 3D images that explicitly outline detailed in vivo information of not only human anatomy, but also human function. Mutual Information (MI) is a popular entropy-based similarity measure which has found use in a large number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications as it does not assume the existence of any specific relationship between image intensities. It only assumes a statistical dependence. The basic concept behind any approach using MI is to find a transformation, which when applied to an image, will maximise the MI between two images. This thesis presents research using MI in three major topics encompassed by the computer vision and medical imaging field: rigid image registration, stereo vision, and non-rigid image registration. In the rigid domain, a novel gradient-based registration algorithm (MIGH) is proposed that uses Parzen windows to estimate image density functions and Gauss-Hermite quadrature to estimate the image entropies. The use of this quadrature technique provides an effective and efficient way of estimating entropy while bypassing the need to draw a second sample of image intensities (a procedure required in previous Parzen-based MI registration approaches). It is possible to achieve identical results with the MIGH algorithm when compared to current state of the art MI-based techniques. 
These results are achieved using half the previously required sample sizes, thus doubling the statistical power of the registration algorithm. Furthermore, the MIGH technique improves algorithm complexity by up to an order of N, where N represents the number of samples extracted from the images. In stereo vision, a popular passive method of depth perception, new extensions have been proposed in order to increase the robustness of MI-based stereo matching algorithms. Firstly, prior probabilities are incorporated into the MI measure to considerably increase the statistical power of the matching windows. The statistical power, directly related to the number of samples, can become too low when small matching windows are utilised. These priors, which are calculated from the global joint histogram, are tuned to a two-level hierarchical approach. A 2D match surface, in which the match score is computed for every possible combination of template and matching windows, is also utilised to enforce left-right consistency and uniqueness constraints. These additions to MI-based stereo matching significantly enhance the algorithm's ability to detect correct matches while decreasing computation time and improving the accuracy, particularly when matching across multi-spectral stereo pairs. MI has also recently found use in the non-rigid domain due to the need to compute multimodal non-rigid transformations. The viscous fluid algorithm is perhaps the best method for recovering large local mis-registrations between two images. However, this model can only be used on images from the same modality as it assumes similar intensity values between images. Consequently, a hybrid MI-Fluid algorithm is proposed for multimodal non-rigid registration.
MI is incorporated via a block matching procedure that generates a sparse deformation field to drive the viscous fluid algorithm. This algorithm is also compared to two other popular local registration techniques, namely Gaussian convolution and the thin-plate spline warp, and is shown to produce comparable results. An improved block matching procedure is also proposed whereby a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler is used to optimally locate grid points of interest. These grid points are concentrated in regions of high information and sparser in regions of low information, whereas previous methods utilise only a uniform distribution of grid points throughout the image.
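The core property MI-based registration relies on can be shown with a minimal histogram-based estimator. This plug-in sketch is not the Parzen-window/Gauss-Hermite estimator the thesis proposes; it simply illustrates why MI survives non-linear intensity remappings between modalities:

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """MI from the joint intensity histogram (plug-in entropy estimate)."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)                 # marginal of img1 intensities
    py = pxy.sum(axis=0)                 # marginal of img2 intensities
    nz = pxy > 0                         # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
# A monotone, non-linear intensity remapping (as between modalities)
# preserves statistical dependence, so MI stays high.
remapped = np.sqrt(fixed)
unrelated = rng.random((64, 64))

print(mutual_information(fixed, remapped))    # high
print(mutual_information(fixed, unrelated))   # near zero
```

A registration loop would evaluate this measure over candidate transformations and keep the one maximising MI, which is the basic concept the abstract describes.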
34

Optimisation des méthodes statistiques d'analyse de la variabilité des caractères à l'aide d'informations génomiques / Optimization of statistical methods using genomic data for QTL detection

Jacquin, Laval 10 October 2014
L’avènement du génotypage à haut débit permet aujourd’hui de mieux exploiter le phénomène d’association, appelé déséquilibre de liaison (LD), qui existe entre les allèles de différents loci sur le génome. Dans ce contexte, l’utilité de certains modèles utilisés en cartographie de locus à effets quantitatifs (QTL) est remise en question. Les objectifs de ce travail étaient de discriminer entre des modèles utilisés en routine en cartographie et d’apporter des éclaircissements sur la meilleure façon d’exploiter le LD, par l’utilisation d’haplotypes, afin d’optimiser les modèles basés sur ce concept. On montre que les modèles uni-marqueur de liaison, développés en génétique il y a une vingtaine d’années, présentent peu d’intérêt aujourd’hui avec le génotypage à haut débit. Dans ce contexte, on montre que les modèles uni-marqueur d’association comportent plus d’avantages que les modèles uni-marqueur de liaison, surtout pour des QTL ayant un effet petit ou modéré sur le phénotype, à condition de bien maîtriser la structure génétique entre individus. Les puissances et les robustesses statistiques de ces modèles ont été étudiées, à la fois sur le plan théorique et par simulations, afin de valider les résultats obtenus pour la comparaison de l’association avec la liaison. Toutefois, les modèles uni-marqueur ne sont pas aussi efficaces que les modèles utilisant des haplotypes dans la prise en compte du LD pour une cartographie fine de QTL. Des propriétés mathématiques reliées à la cartographie de QTL par l’exploitation du LD multiallélique capté par les modèles haplotypiques ont été explicitées et étudiées à l’aide d’une distance matricielle définie entre deux positions sur le génome. Cette distance a été exprimée algébriquement comme une fonction des coefficients du LD multiallélique.
Les propriétés mathématiques liées à cette fonction montrent qu’il est difficile de bien exploiter le LD multiallélique, pour un génotypage à haut débit, si l’on ne tient pas compte uniquement de la similarité totale entre haplotypes. Des études sur données réelles et simulées ont illustré ces propriétés et montrent une corrélation supérieure à 0.9 entre une statistique basée sur la distance matricielle et des résultats de cartographie. Cette forte corrélation a donné lieu à la proposition d’une méthode, basée sur la distance matricielle, qui aide à discriminer entre les modèles utilisés en cartographie. / The advent of high-throughput genotyping nowadays allows better exploitation of the association phenomenon, called linkage disequilibrium (LD), between alleles of different loci on the genome. In this context, the usefulness of some models to fine-map quantitative trait loci (QTL) is questioned. The aims of this work were to discriminate between models routinely used for QTL mapping and to provide insight into the best way to exploit LD, when using haplotypes, in order to optimize haplotype-based models. We show that single-marker linkage models, developed twenty years ago, are of little interest today with the advent of high-throughput genotyping. In this context, we show that single-marker association models are more advantageous than single-marker linkage models, especially for QTL with a small or moderate effect on the phenotype. The statistical power and robustness of these models have been studied both theoretically and by simulations, in order to validate the comparison of single-marker association models with single-marker linkage models. However, single-marker models are less effective than haplotype-based models at exploiting LD for fine mapping of QTL.
Mathematical properties related to the multiallelic LD captured by haplotype-based models have been derived and studied using a matrix distance defined between two loci on the genome. This distance has been expressed algebraically as a function of the multiallelic LD coefficients. The mathematical properties of this function show that it is difficult to exploit multiallelic LD well, with high-throughput genotyping, if one accounts for both partial and total similarity between haplotypes rather than total similarity only. Studies on real and simulated data illustrate these properties and show a correlation above 0.9 between a statistic based on the matrix distance and mapping results. Hence a new method based on the matrix distance is proposed to help discriminate between the models used for mapping.
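For two biallelic loci, the standard pairwise LD coefficients can be computed directly from haplotype frequencies. These are the textbook biallelic definitions (D, D′, r²), shown for orientation only; they are not the thesis's multiallelic matrix distance, and the frequencies below are invented:

```python
def ld_stats(p_ab, p_a, p_b):
    """Pairwise LD between two biallelic loci, given the haplotype
    frequency p_ab = freq(A,B) and allele frequencies p_a, p_b."""
    D = p_ab - p_a * p_b                          # raw LD coefficient
    if D >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = D / d_max if d_max > 0 else 0.0     # normalised LD (D')
    r2 = D**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return D, d_prime, r2

# Complete LD: allele A always travels with allele B.
print(ld_stats(p_ab=0.3, p_a=0.3, p_b=0.3))   # D' = 1, r2 = 1
# Linkage equilibrium: haplotype frequency equals the product of
# allele frequencies, so D = 0.
print(ld_stats(p_ab=0.09, p_a=0.3, p_b=0.3))  # D = 0
```

The multiallelic coefficients the abstract refers to generalise this per-pair D to all allele (or haplotype) combinations at the two positions.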
35

A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA

Senteney, Michael H. January 2020
No description available.
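The record carries no abstract, but the procedure named in the title can be sketched: use Monte Carlo simulation to see how the power of a multiple comparison procedure in one-way ANOVA grows with the per-group sample size. This sketch uses Bonferroni-corrected pairwise t-tests with invented group means; the thesis may well study other procedures (e.g. Tukey's HSD) and designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mc_power(n_per_group, means, sd=1.0, alpha=0.05, n_sim=1000):
    """Monte Carlo power of Bonferroni-corrected pairwise t-tests:
    the fraction of simulations in which every truly different pair
    of group means is declared significant."""
    k = len(means)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    alpha_adj = alpha / len(pairs)            # Bonferroni adjustment
    hits = 0
    for _ in range(n_sim):
        groups = [rng.normal(m, sd, n_per_group) for m in means]
        detected = all(
            stats.ttest_ind(groups[i], groups[j]).pvalue < alpha_adj
            for i, j in pairs
            if means[i] != means[j]
        )
        hits += detected
    return hits / n_sim

# Scan candidate sample sizes; pick the smallest reaching the target power.
for n in (10, 20, 40):
    print(n, mc_power(n, means=[0.0, 0.5, 1.0]))
```

A sample-size determination then amounts to increasing n until the estimated power crosses the desired threshold (commonly 0.80).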
36

Genetics of ankylosing spondylitis

Karaderi, Tugce January 2012
Ankylosing spondylitis (AS) is a common inflammatory arthritis of the spine and other affected joints, which is highly heritable, being strongly influenced by HLA-B27 status as well as hundreds of mostly unknown genetic variants of smaller effect. The aim of my research was to confirm some of the previously observed genetic associations and to identify new associations, many of which are in biological pathways relevant to AS pathogenesis, most notably the IL-23/Th17 axis (IL23R) and antigen presentation (ERAP1 and ERAP2). Studies presented in this thesis include replication and refinement of several potential associations initially identified by earlier GWAS (WTCCC-TASC, 2007 and TASC, 2010). I conducted an extended study of IL23R association with AS and undertook a meta-analysis, confirming the association between AS and IL23R (non-synonymous SNP rs11209026, p=1.5 × 10⁻⁹, OR=0.61). An extensive re-sequencing and fine-mapping project, including a meta-analysis, to replicate and refine the association of TNFRSF1A with AS was also undertaken; a novel variant in intron 6 was identified and a weak association with a low-frequency variant, rs4149584 (p=0.01, OR=1.58), was detected. Somewhat stronger associations were seen with rs4149577 (p=0.002, OR=0.91) and rs4149578 (p=0.015, OR=1.14) in the meta-analysis. Associations at several additional loci had been identified by a more recent GWAS (WTCCC2-TASC, 2011). I used in silico techniques, including imputation using a denser panel of variants from the 1000 Genomes Project, conditional analysis, and rare/low-frequency variant analysis, to refine these associations. Imputation analysis (1782 cases/5167 controls) revealed novel associations with ERAP2 (rs4869313, p=7.3 × 10⁻⁸, OR=0.79) and several additional candidate loci including IL6R, UBE2L3 and 2p16.3.
Ten SNPs were then directly typed in an independent sample (1804 cases/1848 controls) to replicate selected associations and to determine the imputation accuracy. I established that imputation using the 1000 Genomes Project pilot data was largely reliable, specifically for common variants (genotype concordance ≈97%). However, more accurate imputation of low-frequency variants may require larger reference populations, such as the most recent 1000 Genomes reference panels. The results of my research provide a better understanding of the complex genetics of AS and help identify future targets for genetic and functional studies.
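The odds ratios and p-values quoted above come from case-control association tests. A minimal allele-count version looks like the following; the counts are hypothetical and chosen for illustration, not taken from the study data:

```python
import numpy as np
from scipy import stats

# 2x2 allele-count table: rows = cases/controls,
# columns = risk allele / other allele (invented numbers).
table = np.array([[1200, 2408],    # cases
                  [ 900, 2796]])   # controls

# Odds ratio from the cross-product of the table.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# Chi-square test of allelic association (no continuity correction,
# matching the usual GWAS convention for large counts).
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"OR={odds_ratio:.2f}, chi2={chi2:.1f}, p={p:.2e}")
```

GWAS summaries such as "rs4149584 (p=0.01, OR=1.58)" report exactly these two quantities, typically from a test of this kind or a logistic regression that additionally adjusts for covariates.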
