21

Pharmacogénétique de l'Imatinib dans la Leucémie Myéloïde Chronique et Données Censurées par Intervalles en présence de Compétition / Pharmacogenetics of Imatinib in Chronic Myeloid Leukemia and Interval Censored Competing Risks Data

Delord, Marc 05 November 2015
Imatinib in the treatment of chronic myeloid leukemia (CML) is a success of targeted therapy in oncology. The aim of this therapy is to block the biochemical processes leading to disease development; this strategy reduces the risk of disease progression and allows patients to avoid extensive and hazardous treatments such as hematopoietic stem cell transplantation. However, even though imatinib efficacy has been demonstrated in a clinical setting, a significant proportion of patients does not achieve the levels of molecular response judged optimal. The objective of this thesis is to test the hypothesis of an association between polymorphisms of genes involved in drug absorption and metabolism and the molecular response in chronic-phase chronic myeloid leukemia treated with imatinib. In order to evaluate patients' molecular response, blood samples are taken every 3 months for biomarker assessment. This type of follow-up produces interval-censored data. Since patients also remain at risk of disease progression, or may interrupt their treatment because of poor tolerance, the response of interest may no longer be observable under the studied treatment; the data thus produced are interval-censored in a competing-risks setting. To properly handle such data, we propose a method based on multiple imputation. The main idea is to convert the interval-censored data into multiple sets of potentially right-censored data, analyse them with the methods available for right-censored data, and combine the results following the multiple imputation rules.
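As a rough illustration of the imputation strategy described in this abstract (not the author's implementation), the following Python sketch converts interval-censored observations into right-censored datasets by drawing an event time inside each censoring interval, then pools estimates with Rubin's rules; the uniform within-interval draw and all numbers are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

def impute_interval_censored(left, right, event, n_imputations=10):
    """For each imputation, draw an event time uniformly inside [left, right]
    for interval-censored rows; rows with right = inf are kept right-censored
    at their last visit. The uniform draw is a simplifying assumption for
    illustration, not the method developed in the thesis."""
    finite_right = np.where(np.isfinite(right), right, left)
    datasets = []
    for _ in range(n_imputations):
        draws = rng.uniform(left, finite_right)           # time drawn inside the interval
        time = np.where(np.isfinite(right), draws, left)  # censored rows keep the last visit
        datasets.append((time, event))
    return datasets

def pool_estimates(estimates, variances):
    """Rubin's rules: pooled point estimate and total variance
    (average within-imputation variance plus between-imputation variance)."""
    m = len(estimates)
    q_bar = float(np.mean(estimates))
    u_bar = float(np.mean(variances))
    b = float(np.var(estimates, ddof=1))
    return q_bar, u_bar + (1.0 + 1.0 / m) * b

# Toy data: visits every 3 months; the response is only known to occur between two visits.
left = np.array([3.0, 6.0, 9.0])        # last visit without the response
right = np.array([6.0, np.inf, 12.0])   # first visit with the response (inf = never observed)
event = np.array([1, 0, 1])             # 1 = molecular response, 0 = censored or competing event
for time, ev in impute_interval_censored(left, right, event, n_imputations=3):
    print(np.round(time, 2), ev)
print(pool_estimates([0.50, 0.55, 0.52], [0.010, 0.012, 0.011]))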
22

Lois a priori non-informatives et la modélisation par mélange / Non-informative priors and modelization by mixtures

Kamary, Kaniav 15 March 2016
One of the major applications of statistics is the validation and comparison of probabilistic models in light of the data. This branch of statistics has been developed since its formalization at the end of the 19th century by pioneers like Gosset, Pearson and Fisher. In the Bayesian approach, the standard solution to model comparison is the Bayes factor, a ratio of marginal likelihoods, whatever the models being evaluated; it is obtained by a mathematical argument based on a loss function. Despite the frequent use of the Bayes factor and of its equivalent, the posterior probability of models, by the Bayesian community, it is problematic in some cases. First, this rule is highly dependent on the prior modeling (or, equivalently, it lacks an absolute calibration), even with large datasets; since the selection of a prior density plays a vital role in Bayesian statistics, one of the difficulties with the traditional handling of Bayesian tests is a discontinuity in the use of improper priors, which are not justified in most testing situations. The first part of this thesis gives a general review of non-informative priors and their features, and demonstrates the overall stability of posterior distributions by reassessing the examples of [Seaman III 2012]. Second, and independently, Bayes factors are difficult to compute except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to address this problem, with solutions borrowed from statistical physics, such as the path sampling method of [Gelman 1998], and from signal processing. The existing solutions are not, however, universal, and a reassessment of these methods, followed by the development of alternatives, constitutes another part of the thesis. We then consider a novel paradigm for Bayesian hypothesis testing and Bayesian model comparison, defining an alternative to the traditional construction of posterior probabilities that a given hypothesis is true or that the data originate from a specific model: the models under comparison are treated as components of a mixture model. By replacing the original testing problem with an estimation problem that focuses on the probability weight of a given model within the mixture, we analyse the sensitivity of the resulting posterior distribution of the weights to various prior modelings of those weights, and stress that a major appeal of this perspective is that generic improper priors become acceptable while not putting convergence in jeopardy. MCMC methods such as the Metropolis-Hastings algorithm and the Gibbs sampler, together with empirical approximations of the probability, are used. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the convergence rates of the posterior mean of the weight and of the corresponding posterior probability are quite similar. In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions by creating a new parameterization centred on the mean and variance of the mixture itself. This enables us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper and provide MCMC implementations that exhibit the expected exchangeability of the components. The analyses rely on MCMC methods such as the Metropolis-within-Gibbs algorithm, adaptive MCMC and parallel tempering. This part of the thesis is accompanied by the R package Ultimixt, which implements a generic reference Bayesian analysis of unidimensional Gaussian mixtures obtained through a location-scale parameterization of the model. The package can be applied to produce a Bayesian analysis of Gaussian mixtures with an arbitrary number of components, with no need to specify the prior distribution.
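A minimal sketch of the mixture-based testing idea summarized above, not the thesis code: two candidate models for the same data are embedded as components of a mixture, and a Gibbs sampler targets the posterior of the mixture weight. The two fixed Gaussian candidates, the Beta(a, a) prior on the weight and all numerical settings are assumptions chosen to keep the example short.

import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_test(x, mu0=0.0, mu1=1.0, a=0.5, n_iter=5000, burn=1000):
    """Gibbs sampler for the weight alpha in the encompassing mixture
    alpha * N(mu0, 1) + (1 - alpha) * N(mu1, 1), with prior alpha ~ Beta(a, a).
    The posterior of alpha replaces the usual Bayes-factor summary; both
    candidate models are fully specified here to keep the sketch short."""
    n = len(x)
    alpha = 0.5
    draws = []
    for it in range(n_iter):
        # Allocate each observation to one of the two component models.
        w0 = alpha * normal_pdf(x, mu0)
        w1 = (1.0 - alpha) * normal_pdf(x, mu1)
        z0 = rng.random(n) < w0 / (w0 + w1)   # True -> allocated to model 0
        n0 = int(z0.sum())
        # Conjugate update of the mixture weight given the allocations.
        alpha = rng.beta(a + n0, a + (n - n0))
        if it >= burn:
            draws.append(alpha)
    return np.array(draws)

# Data generated from the second model: the posterior of alpha should pile up near 0.
x = rng.normal(1.0, 1.0, size=200)
print("posterior mean of alpha:", round(mixture_test(x).mean(), 3))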
23

Breast cancer risk and genetic ancestry: a case-control study in Uruguay

Bonilla, Carolina, Bertoni, Bernardo, Hidalgo, Pedro C., Artagaveytia, Nora, Ackermann, Elizabeth, Barreto, Isabel, Cancela, Paula, Cappetta, Mónica, Egaña, Ana, Figueiro, Gonzalo, Heinzen, Silvina, Hooker, Stanley, Román, Estela, Sans, Mónica, Kittles, Rick A. January 2015
BACKGROUND: Uruguay exhibits one of the highest rates of breast cancer in Latin America, similar to those of developed nations, the reasons for which are not completely understood. In this study we investigated the effect that ancestral background has on breast cancer susceptibility among Uruguayan women. METHODS: We carried out a case-control study of 328 (164 cases, 164 controls) women enrolled in public hospitals and private clinics across the country. We estimated ancestral proportions using a panel of nuclear and mitochondrial ancestry informative markers (AIMs) and tested their association with breast cancer risk. RESULTS: Nuclear individual ancestry in cases was (mean ± SD) 9.8 ± 7.6% African, 13.2 ± 10.2% Native American and 77.1 ± 13.1% European, and in controls 9.1 ± 7.5% African, 14.7 ± 11.2% Native American and 76.2 ± 14.2% European. There was no evidence of a difference in nuclear or mitochondrial ancestry between cases and controls. However, European mitochondrial haplogroup H was associated with breast cancer (OR = 2.0; 95% CI 1.1, 3.5). CONCLUSIONS: We have not found evidence that overall genetic ancestry differs between breast cancer patients and controls in Uruguay but we detected an association of the disease with a European mitochondrial lineage, which warrants further investigation.
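For reference, the odds ratio and confidence interval quoted above are of the kind computed from a 2x2 case-control table; the short sketch below shows the standard Wald calculation, with invented counts that are not the study's data.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts of haplogroup H carriers among cases and controls (not the study's data).
print(odds_ratio_ci(a=70, b=94, c=50, d=114))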
24

Gynnar konformitet människor? : Samband mellan konformitet och personlighetsindexar / Does conformity benefit people? : The relationship between conformity and personality indices

Johansson, Gustav, Gunes, Betty January 2017
Conformity refers to people adjusting their behaviour to that of others, and it is often regarded as something negative in Western societies. The aim of the study was to examine whether conformity benefits people when others are considered to contribute useful information, to construct a measurement instrument for normative and informational social influence, and to investigate relationships between personality indices and conformity. A questionnaire study was conducted with a total of 83 participants from a university college in central Sweden. The participants were divided into two groups, where the conformity group, unlike the control group, was shown a percentage figure for each knowledge question. The results showed that conformity occurred and that it paid to follow the crowd on easy and moderately difficult knowledge questions. Furthermore, there were no relationships between the personality indices and conformity. The conclusion was that people should not regard conformity as something negative, but should sometimes take advantage of what the majority has to offer in everyday situations. Hopefully, the measurement instrument constructed here will be used in future research.
25

Discovering Compact and Informative Structures through Data Partitioning

Fiterau, Madalina 01 September 2015
In many practical scenarios, prediction for high-dimensional observations can be accurately performed using only a fraction of the existing features. However, the set of relevant predictive features, known as the sparsity pattern, varies across data. For instance, features that are informative for a subset of observations might be useless for the rest. In fact, in such cases, the dataset can be seen as an aggregation of samples belonging to several low-dimensional sub-models, potentially due to different generative processes. My thesis introduces several techniques for identifying sparse predictive structures and the areas of the feature space where these structures are effective. This information allows the training of models which perform better than those obtained through traditional feature selection. We formalize Informative Projection Recovery, the problem of extracting a set of low-dimensional projections of data which jointly form an accurate solution to a given learning task. Our solution to this problem is a regression-based algorithm that identifies informative projections by optimizing over a matrix of point-wise loss estimators. It generalizes to a number of machine learning problems, offering solutions to classification, clustering and regression tasks. Experiments show that our method can discover and leverage low-dimensional structure, yielding accurate and compact models. Our method is particularly useful in applications involving multivariate numeric data in which expert assessment of the results is of the essence. Additionally, we developed an active learning framework which works with the obtained compact models in finding unlabeled data deemed to be worth expert evaluation. For this purpose, we enhance standard active selection criteria using the information encapsulated by the trained model. The advantage of our approach is that the labeling effort is expended mainly on samples which benefit models from the hypothesis class we are considering. Additionally, the domain experts benefit from the availability of informative axis aligned projections at the time of labeling. Experiments show that this results in an improved learning rate over standard selection criteria, both for synthetic data and real-world data from the clinical domain, while the comprehensible view of the data supports the labeling process and helps preempt labeling errors.
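A loose, simplified reading of the informative-projection idea described above, not the algorithm from the thesis: cross-validated k-nearest-neighbour errors stand in for the point-wise loss estimators, each point is assigned to its best axis-aligned 2-D projection, and the most frequently chosen projections are kept. The use of scikit-learn, the k-NN loss and the toy data are all assumptions made for illustration.

import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict

def pointwise_loss(X2d, y, k=5):
    """Crude point-wise loss estimator: cross-validated 0/1 error of a k-NN
    classifier restricted to one 2-D axis-aligned projection."""
    pred = cross_val_predict(KNeighborsClassifier(n_neighbors=k), X2d, y, cv=5)
    return (pred != y).astype(float)

def recover_projections(X, y, n_keep=2):
    """Score every axis-aligned 2-D projection, assign each point to its
    lowest-loss projection, and keep the projections that cover most points."""
    pairs = list(combinations(range(X.shape[1]), 2))
    losses = np.column_stack([pointwise_loss(X[:, list(p)], y) for p in pairs])
    best = losses.argmin(axis=1)                      # best projection per point
    counts = np.bincount(best, minlength=len(pairs))
    return [pairs[i] for i in counts.argsort()[::-1][:n_keep]]

# Toy data: the label depends on features (0, 1) for half the points and on (2, 3) for the rest.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = np.where(np.arange(400) < 200, X[:, 0] + X[:, 1] > 0, X[:, 2] - X[:, 3] > 0).astype(int)
print(recover_projections(X, y))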
26

Um estudo sobre o espaço de trabalho informativo e o acompanhamento em equipes ágeis de desenvolvimento de software / A study on informative workspaces and tracking in agile development teams

Oliveira, Renan de Melo 24 January 2012
Agile methods such as Extreme Programming [Beck, 1999; Beck and Andres, 2006], Scrum [Schwaber, 2008], Crystal Clear [Cockburn, 2005] and Lean Software Development [Poppendieck and Poppendieck, 2007] contain several references to managing and displaying relevant metrics and other information in the software development workspace. In this work, we call this kind of activity agile tracking, relating it to the tracker role defined by Beck [1999]. We noted the importance, deeply associated with agile methods, of performing actions (practices) based on a set of principles used as guidelines [Poppendieck and Poppendieck, 2007]. Taking this into account, we performed a literature review to discuss the agile principles that could affect the execution of agile-tracking tasks, and we also describe work directly related to metrics, both in agile methods and in software engineering in general. Even so, we could not find empirical research aimed at raising or understanding the aspects related to successfully performing this kind of task in development environments, which could be helpful for managing information and informative workspaces. To address this goal, we carried out an empirical study using a sequential mixed-methods approach [Creswell, 2009], applied to a set of fifteen agile teams gathered in the IME-USP "Laboratory of Extreme Programming" course in 2010 and 2011. The research was performed in four sequential phases. In the first phase, we made suggestions to the development teams regarding agile tracking, using an approach based on action research [Thiollent, 2004], in order to gather valuable aspects of its application. Based on these results, we clustered some of these aspects as "heuristics for agile tracking", following the model of Hartmann and Dymond [2006]. In phase two, we applied a survey to evaluate the validity of the proposed heuristics. In phase three, we conducted semi-structured interviews with team members to understand the reasons behind the validity of the proposed heuristics, analysed with grounded theory coding techniques [Strauss and Corbin, 2008]. In phase four, we reapplied the phase-two survey in a different environment to triangulate the evaluation of the heuristics. As the final result of this empirical research, a set of heuristics for agile tracking was established, together with quantitative evaluations of their aspects in two environments and several qualitative considerations about their use. We also map both the heuristics and their related concepts onto the available literature, highlighting aspects that already existed but were expanded by this research, as well as aspects not yet discussed that can be considered new in the area.
27

Dinâmica da Mistura Étnica em Comunidades Remanescentes de Quilombo Brasileiras / Inter-Ethnic Admixture Dynamics in Brazilian Quilombo Remnant Communities

Luizon, Marcelo Rizzatti 24 October 2007
In spite of the high degree of inter-ethnic admixture that characterizes the formation of the Brazilian population, small isolated groups, mainly represented by indigenous Amerindian tribes and by communities known as quilombo remnants, can still be found. Barra (BA), São Gonçalo (BA) and Valongo (SC) are communities with different demographic histories of formation. AIMs (Ancestry Informative Markers) are capable of disclosing such differences because they present large frequency differentials between the major parental population groups (Africans, Amerindians and Europeans), which makes them the most discriminating polymorphisms for inter-ethnic admixture estimates. In this work, eight AIMs were tested in three quilombo remnant communities and compared with two urban Brazilian population samples. One of these markers, the CYP1A1*2C allele, was typed in seven villages of four tribes from the Brazilian Central Amazon, which are characterized by low admixture with non-Amerindian people (2-3%), completing the analysis of the other seven markers previously performed in these Amerindian populations. The aims, besides the formal description of these populations, were to compare differences between the quilombo communities and to assess the relative efficiency of these markers in studies of this kind. Comparison of the CYP1A1*2C allele frequencies between Amerindians and worldwide populations confirms this allele as an excellent AIM for setting Amerindians apart from Europeans and Africans, an outstanding feature for admixture estimates in tri-hybrid Brazilian populations. The frequencies of the eight AIMs (FY-Null, RB, LPL, AT3, Sb19.3, APO, PV92 and CYP1A1*2C) were then estimated in the quilombo remnant communities of Barra (n=47), São Gonçalo (n=51) and Valongo (n=25) and in the urban samples of Jequié (n=47) and Hemosc (Hemocentro de Santa Catarina, n=25), from phenotypes determined by PCR and PCR-RFLP. Statistical analyses employed the GENEPOP, DISPAN, GDA, STRUCTURE, MVSP and ADMIX 2 and 3 packages. Allele and genotype frequencies differentiate all the quilombo remnant and urban samples, a fact corroborated by the pairwise FST values (p<0.01). Other FST estimates reveal similarities between Barra and Africans and between Hemosc and Europeans, which are supported by the estimates of the African component in Barra (95%) and of the European component in Hemosc (83%), as well as by the principal component analyses. In the latter, the FY locus was the variable with the greatest loading on the first principal component and PV92 the locus with the greatest loading on the second. This method proved particularly suitable, given that in both analyses the first two principal components explained more than 95% of the total variance. The estimates of the African, European and Amerindian components in São Gonçalo (68%, 22% and 10%) and Jequié (52%, 31% and 17%) show that the AIMs yield higher estimates of the African contribution than those obtained with autosomal STRs, Y-STRs and classical markers in the same populations. The African component estimated in Valongo (68%) was lower than the one obtained with classical markers. This can be taken as evidence of the greater effectiveness of these markers in quantifying the African component, since the increase in the estimates was not generalized and therefore probably not biased. We conclude that AIMs are more effective for estimating the relative proportions of the different components that formed these populations, as they lead to more realistic estimates.
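As an illustration of how admixture proportions of the kind reported above can be estimated from AIM allele frequencies (one standard least-squares formulation, not necessarily the ADMIX implementation used in the thesis), consider the following sketch; the parental and admixed frequencies are invented.

import numpy as np
from scipy.optimize import nnls

def admixture_proportions(p_admixed, p_parental):
    """Non-negative least-squares estimate of proportions m with
    p_admixed ~ p_parental @ m, rescaled so that the proportions sum to 1.
    p_admixed: (n_markers,) allele frequencies in the admixed population.
    p_parental: (n_markers, n_parental) allele frequencies in the parental groups."""
    m, _ = nnls(p_parental, p_admixed)
    return m / m.sum()

# Invented allele frequencies at five markers (columns: African, European, Amerindian).
p_parental = np.array([
    [0.95, 0.02, 0.01],
    [0.70, 0.25, 0.10],
    [0.10, 0.50, 0.80],
    [0.30, 0.60, 0.20],
    [0.85, 0.40, 0.05],
])
true_m = np.array([0.68, 0.22, 0.10])
p_admixed = p_parental @ true_m
print(np.round(admixture_proportions(p_admixed, p_parental), 3))   # recovers ~[0.68, 0.22, 0.10]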
28

Comment modéliser les systèmes aquifères au sein du cycle hydrologique ? : une approche « multi-observables » à différentes échelles / How to model groundwater systems in the hydrological cycle? : an approach at different scales with different observed data types

Guillaumot, Luca 20 December 2018
Groundwater systems constitute the underground part of the hydrological cycle. They transfer rainfall infiltrated through soils over variable distances and, after a characteristic time ranging from a month to thousands of years, return to the surface, sustaining rivers and part of the evapotranspiration. Groundwater is thus a major water resource for humans and ecosystems. Predicting its response to human and climate pressures faces two difficulties: (1) the scarcity of direct information on the highly heterogeneous geological media and (2) the complexity of the exchanges between the surface and depth. The challenge is therefore to develop models that best represent the processes at the different spatiotemporal scales. To address this issue, we study the informative content of different observation types (piezometry, streamflow, surface deformation, ...) to assess how they can improve model parametrization. Our work is based on the hydrological modelling of the Ploemeur site (local scale) and of the Rhine basin (continental scale). In both cases, simple models are developed using analytical and numerical solutions, and the ModFlow model was also coupled to a hydrological model. At the small scale, the results show the value of different types of transient, multidisciplinary data for constraining the processes. At the large scale, the developed model and the observations help clarify the role of groundwater systems in surface water availability. Both approaches highlight a control of the flows at different scales by topography, geology and heterogeneity.
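As a loose illustration of the kind of lumped model such studies often start from (not one of the models developed in the thesis), the sketch below simulates a linear-reservoir aquifer whose storage drains to the surface with a characteristic time tau; the recharge series and parameter values are invented.

import numpy as np

def linear_reservoir(recharge, tau, s0=0.0, dt=1.0):
    """Explicit time stepping of dS/dt = R - S/tau, with baseflow Q = S/tau.
    recharge: recharge per time step, tau: characteristic drainage time,
    s0: initial storage. Returns the storage and baseflow series."""
    storage = np.empty(len(recharge))
    s = s0
    for i, r in enumerate(recharge):
        s = s + dt * (r - s / tau)   # water balance over one step
        storage[i] = s
    return storage, storage / tau

# Invented monthly recharge (mm) and a 12-month characteristic drainage time.
recharge = np.array([60, 50, 40, 20, 5, 0, 0, 0, 10, 30, 50, 60], dtype=float)
storage, baseflow = linear_reservoir(recharge, tau=12.0)
print(np.round(baseflow, 1))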
29

Lärande bildlexikon : Ett interaktivt sätt att lära sig / A learning picture dictionary : an interactive way to learn

Gustafsson, Jessica January 2009
A picture dictionary provides visual stimulation and turns learning into an interactive game. But picture dictionaries are not only for children; they are also made for young people and adults. They can tell stories and fairy tales in an entertaining way, and even retell history; their purpose is decided solely by their creator. This report is about the creation of a picture dictionary. Its purpose is to educate young people about everyday life; anything you may come across in everyday life is in this dictionary. The company behind the dictionary is Euroway Media Business AB; they have long worked with websites but now want to move into new territory where they can expand their skills. In a later project, the dictionary will be turned into an application and integrated into a website that is currently under construction. The important things to keep in mind when creating a picture dictionary are to keep a common thread throughout the work, to carry out plenty of surveys to stay on track and, finally, to be creative. Well-established methods, such as questionnaires and personas, were used during the project to collect data, examine the target group and evaluate the dictionary. The result is a well-structured dictionary with enough pictures and categories to be educational; graphically, it is appealing and highlights the dictionary's content.
30

On Estimating Topology and Divergence Times in Phylogenetics

Svennblad, Bodil January 2008
This PhD thesis consists of an introduction and five papers dealing with statistical methods in phylogenetics.
A phylogenetic tree describes the evolutionary relationships among species, assuming that they share a common ancestor and that evolution takes place in a tree-like manner. Our aim is to reconstruct the evolutionary relationships from aligned DNA sequences.
In the first two papers we investigate two measures of confidence for likelihood-based methods: bootstrap frequencies with Maximum Likelihood (ML) and Bayesian posterior probabilities. We show that an earlier claimed approximate equivalence between them holds under certain conditions, but not in the current implementations of the two methods.
In the following two papers the divergence times of the internal nodes are considered. The ML estimate of the divergence time of the root is improved if longer sequences are analyzed or if more taxa are added. We show that the gain in precision is faster with longer sequences than with more taxa. We also show that the algorithm of the software package PATHd8 may give biased estimates if the global molecular clock is violated; a change of the algorithm to obtain unbiased estimates is therefore suggested.
The last paper deals with non-informative priors when using the Bayesian approach in phylogenetics. The term is not uniquely defined in the literature. We adopt the idea of data-translated likelihoods and derive the so-called Jeffreys prior for branch lengths using the Jukes-Cantor model of evolution.
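To make the last point concrete, here is a brief sketch of how a Jeffreys prior on a branch length can be obtained under the Jukes-Cantor model for a single pairwise sequence comparison; this is a textbook-style derivation under our own simplifying assumptions and may differ from the data-translated-likelihood construction used in the thesis. Under JC69, the probability that a site differs between two sequences separated by branch length t (in expected substitutions per site) is p(t), and treating the n sites as independent Bernoulli(p(t)) observations gives the Fisher information and the Jeffreys prior:

\[
  p(t) = \frac{3}{4}\left(1 - e^{-4t/3}\right), \qquad
  I(t) = \frac{n\,\left(p'(t)\right)^2}{p(t)\left(1 - p(t)\right)}
       = \frac{n\,e^{-8t/3}}{p(t)\left(1 - p(t)\right)},
\]
\[
  \pi_J(t) \propto \sqrt{I(t)} \propto \frac{e^{-4t/3}}{\sqrt{p(t)\left(1 - p(t)\right)}}, \qquad t > 0.
\]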
