  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.

Architecture of human complex trait variation

Xin, Xiachi January 2018 (has links)
A complex trait is a trait or disease controlled by both genetic and environmental factors, along with their interactions. Trait architecture encompasses the genetic variants and environmental causes of variation in the trait or disease, their effects on it, and the mechanisms by which these factors interact at the molecular and organismal levels. Understanding trait architecture matters from both a biological and a health perspective. In this thesis, I focused on the influence of familial environmental factors on complex trait architecture alongside the genetic components. I performed a variety of studies on the architecture of anthropometric and cardio-metabolic traits, such as height, body mass index, high-density lipoprotein content of blood and blood pressure, using phenotype measurements, single nucleotide polymorphism (SNP) data and genealogical information from a cohort of 20,000 individuals of recent Scottish descent. I extended a variance component method that simultaneously estimates SNP-associated heritability and total heritability while accounting for familial environmental effects shared among siblings, couples and nuclear family members. I found that most of the missing heritability could be explained by including closely related individuals in the analysis and accounting for these close relationships, and that, beyond genetics, couple and sibling environmental effects are significant additional contributors to the complex trait variation investigated. Subsequently, I accounted for couple and sibling environmental effects in Genome-Wide Association Study (GWAS) and prediction models. Adding couple and sibling information improved both GWAS performance and prediction accuracy for most traits investigated, especially traits related to obesity. 
Since couple environmental effects as modelled in my study might, in fact, reflect the combined effect of assortative mating and a shared couple environment, I explored further the dissection of couple effects according to their origin. I extended assortative mating theory by deriving the expected resemblance between an individual and the in-laws of their first-degree relatives. Using this expected resemblance, I developed a novel pedigree-based method that jointly estimates heritability and the degree of assortative mating. I have shown in this thesis that, for anthropometric and cardio-metabolic traits, environmental factors shared by siblings and couples have important effects on trait variation, and that appropriate modelling of such effects may improve the outcome of genetic analyses and our understanding of the causes of trait variation. This thesis also indicates that future studies of trait architecture should not be limited to genetics, because environment, as well as mate choice, can be a major contributor to trait variation, although the architecture varies from trait to trait.
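The couple-environment component discussed above can be illustrated with a toy moment estimator: if spouses share an environmental effect but, for simplicity, no genes and no assortative mating, their phenotypic covariance estimates the couple-environment variance. A minimal sketch under those assumptions (all variances are illustrative; this is not the thesis's REML variance-component method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_couples = 5000
h2, c2 = 0.4, 0.15   # illustrative heritability and couple-environment share

g = rng.normal(0.0, np.sqrt(h2), (n_couples, 2))   # independent genetic values
c = rng.normal(0.0, np.sqrt(c2), (n_couples, 1))   # environment shared within a couple
e = rng.normal(0.0, np.sqrt(1 - h2 - c2), (n_couples, 2))
y = g + c + e                                      # phenotypes of both partners

# Under these assumptions spouses share only the couple environment, so
# their phenotypic covariance is a moment estimator of its variance.
c2_hat = np.cov(y[:, 0], y[:, 1])[0, 1]
```

With assortative mating or shared genetics, this covariance would conflate the two sources, which is precisely the confounding the thesis sets out to dissect.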

Sobre o emprego e a análise estatística do delineamento em blocos aumentados no melhoramento genético vegetal. / Application and statistical analysis of augmented block design in plant breeding.

Duarte, João Batista 19 May 2000 (has links)
This research investigates experimental and statistical problems related to the use of the augmented block design in plant breeding programs. These designs contain two categories of treatments: the checks (generally commercial varieties, replicated over blocks) and the additional or new treatments (usually unreplicated), the latter being the new genetic materials under evaluation for selection purposes. The relation of this design to block designs in general, the fundamentals required for its application and analysis, and related topics (mixed models, variance components and spatial analysis of experiments) are reviewed in Chapter 1. The analyses are based on a group of 41 trials of soybean inbred lines (at selfing generations F5 or later) from the Soybean Breeding Program of the Department of Genetics (ESALQ/USP). Preference was given to trials with a complete field map, which allows each observation to be associated with its spatial position in the experimental area. 
Analyses and discussions refer only to grain yield (kg/ha). These trials are usually conducted at preliminary stages, when genotypes can be statistically assumed to have random effects, or at intermediate and final stages, when they are preferentially treated as fixed effects. The first point investigated was the influence of these assumptions on the estimates of genotypic means and on the ranking of lines for selection purposes (Chapter 2). These assumptions open the possibility of considering either fixed or mixed models for analysis (intrablock analysis, or analyses recovering interblock or intergenotypic information). Chapter 3 presents the corresponding analytical procedures and their consequences for the ranking and selection of genotypes. Under mixed models, a fundamental step is the estimation of variance components, for which several procedures are available; these may lead to quite different estimates of the same parameter, especially when experiments are unbalanced. Given the lack of specific information on this point, and since these designs are naturally unbalanced, computer simulations were run to evaluate the properties of the main available estimators: ANOVA, MIVQUE(0), ML and REML (Chapter 4). An additional characteristic of these designs, at the initial stages of breeding programs, is the use of small plots, necessary because seed is scarce. This, together with the common practice of arranging check plots systematically in the field and/or grouping genetically related treatments in sets, motivated the application of spatial statistical procedures that do not rely on the classical assumption of spatial independence among observations (Chapter 5). 
The main results and conclusions are: i) the intrablock analysis (fixed model) can provide an inadequate ranking of genotypic means if genotypes have random effects and, especially, if they stem from different reference populations (Chapter 2); ii) the classification of the new genetic treatments relative to the check varieties may change considerably depending on the model adopted, especially when the intrablock analysis is compared with analyses recovering intergenotypic information (Chapter 3); iii) the MIVQUE(0) method generally furnished the most efficient estimates of variance components, particularly when genotypes derive from populations with low genotypic variance and experiments are small (Chapter 4); and iv) when observations are spatially correlated, the discrimination among genotypes and the ranking of genetic treatments for selection can be substantially improved through spatial statistical analysis (Chapter 5).
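The sensitivity of genotype rankings to the fixed- versus random-effects assumption can be illustrated with the classic BLUP shrinkage factor: under a random-effects model, an entry's mean deviation is shrunk toward the overall mean by its reliability, which grows with replication. A minimal sketch with hypothetical variance components and replication counts (not the thesis's full mixed-model analysis of the augmented design):

```python
sig2_g, sig2_e = 0.5, 1.0   # hypothetical genotypic and residual variances

def blup(entry_mean, r):
    """Shrink an entry mean (as a deviation from the overall mean) toward
    zero by its reliability, which grows with the number of replicates r."""
    return entry_mean * sig2_g / (sig2_g + sig2_e / r)

# A check replicated 10 times vs. a new line observed once:
check, new_line = 0.8, 0.9                   # raw means; the new line ranks first
print(blup(check, 10) > blup(new_line, 1))   # True: the ranking flips
```

Because checks and new lines are replicated unequally, shrinkage differs between them, which is one way the fixed and mixed analyses can disagree on which entries to select.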

Avaliação do pensamento crítico e da presença cognitiva em fórum de discussão online / Evaluation of critical thinking and cognitive presence in online discussion forum

Araujo, Elenise Maria de 14 November 2014 (has links)
The usually large number of posts in online educational forums leads to information overload, which hinders forum organization and communication among members while undermining the effectiveness of learning assessment. Considering the need for a methodological solution applicable to the context of Brazil's online distance learning (ODL), a literature review was conducted on organizational models and processes for analyzing messages, covering indicators of cognitive presence skills (Garrison et al., 2001) and of critical thinking and the dynamics of Socratic questioning (Paul, 1993). This study proposes a new forum model, the Cognitive Socratic Forum (FSC), comprising a method for assessing students' cognitive presence and critical-thinking levels in online higher education. The methodological procedures and evaluation grid were applied to two subjects of a continuing-education course on ODL in 2011, 2012 and 2013 at a higher education institution associated with UAB (Brazil's Open University). The methodology involved principles of lexicometry; documentary, textual and statistical analysis; linear regression models; and complex networks. A total of 3,400 messages posted by 544 students were analyzed to investigate dependency relationships between the response variable 'grade' and the co-variables: concepts, Garrison's categories, Socratic topics, vocabulary richness, and classroom (tutor). Two linear regression models and 136 complex networks were constructed, confirming the ANOVA tests used to assess student performance in the forums. The co-variables concepts, Garrison's categories, and classroom showed significant positive associations with students' final grades. 
It may therefore be concluded that the proposed FSC model promotes students' ability to express and communicate the concepts covered in the course and contributes to effectively assessing their cognitive presence and critical-thinking skills in online forums.
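One of the co-variables above, vocabulary richness, is a lexicometric measure. The abstract does not specify its exact formula, but a common simple choice is the type-token ratio, sketched below (the tokenizing regex and the sample post are illustrative assumptions, not taken from the thesis):

```python
import re

def vocabulary_richness(message: str) -> float:
    """Type-token ratio: distinct words / total words (0.0 when empty)."""
    tokens = re.findall(r"[a-záéíóúâêôãõç]+", message.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

post = "critical thinking requires questioning questioning and more questioning"
print(vocabulary_richness(post))   # → 0.75 (6 distinct words out of 8)
```

In practice such a score would be computed per message or per student and fed, alongside the other co-variables, into a regression model of the final grade.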

Effets de site, endommagement et érosion des pentes dans les zones épicentrales des chaînes de montagnes actives / Topographic site effects, weakening and erosion in seismically active mountain belt

Rault, Claire 16 April 2019 (has links)
Landslides are a major natural hazard causing significant damage and casualties. Earthquakes are one of their main triggers in active mountain belts. In the epicentral area, passing seismic waves perturb the local stress field, which can push slopes past their stability threshold. 
Co-seismic slope failure probability thus depends on complex interactions between the ground motion and the slope's geology and geometry. Few seismic data are available on mountain slopes, and the resolution of ground-motion models is generally low; yet site effects can make ground motion vary strongly from one ridge to another. We document site effects across topography and show the complexity of slope responses to earthquakes using a seismic network deployed across a Taiwanese ridge. Six broadband seismometers were set along the profile of this 3 km wide ridge. From March 2015 to June 2016, more than 2,200 earthquakes (magnitude Ml > 3, hypocentral distance < 200 km) were recorded. Although the sites lie within a few hundred meters of each other, each shows a distinct characteristic response arising from a complex combination of its geology and topography. At the intermediate frequencies whose wavelengths can affect slope stability, ground-motion amplification is mostly related to the local geology, and the topographic effect appears comparatively negligible, as shown by standard indicators measured at the stations (PGA, PGV, Arias intensity, SSR). However, the duration of strong ground motion at ridge crests and at the slope toe seems related to resonance effects and surface-wave generation caused by the geometry of the topography. The strong contribution of geology to co-seismic landslide triggering is demonstrated by analyzing landslide positions along hillslopes for the landslides triggered by the Northridge (Mw 6.7, 1994, USA), Chi-Chi (Mw 7.6, 1999, Taiwan) and Wenchuan (Mw 7.9, 2008, China) earthquakes. Although co-seismic landslides are statistically located higher on hillslopes than rainfall-induced landslides, we show that this tendency is strongly modulated by geology. 
Depending on the "attractors" present in the watershed, such as faults or strong lithological contrasts, slope failures tend to occur higher or lower on hillslopes, where the failure probability is highest. Slope mechanical properties are poorly constrained in mountain areas: geotechnical parameters are usually estimated from regional geological maps, yet even for the same lithology they can differ strongly from one basin to another. Considering a simple frictional model of seismic slope stability, we propose to invert Coulomb-type parameters at the basin scale from the slope distributions of the landslides triggered by the Northridge, Chi-Chi and Wenchuan earthquakes. The spatial variation of these parameters is, to first order, consistent with lithology and soil depth.
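The "simple frictional model" mentioned above can be illustrated with the standard infinite-slope Coulomb factor of safety, where failure occurs when the driving shear stress exceeds the cohesive plus frictional resistance. A minimal sketch with illustrative parameter values (not the thesis's actual inversion scheme):

```python
import math

def factor_of_safety(slope_deg, friction_deg, cohesion_kpa=0.0,
                     unit_weight=26.0, depth_m=2.0):
    """Infinite-slope Coulomb model: resisting / driving shear stress on a
    failure plane at depth_m (unit_weight in kN/m^3, cohesion in kPa)."""
    s, f = math.radians(slope_deg), math.radians(friction_deg)
    driving = unit_weight * depth_m * math.sin(s) * math.cos(s)
    resisting = cohesion_kpa + unit_weight * depth_m * math.cos(s) ** 2 * math.tan(f)
    return resisting / driving

# Cohesionless case: failure (FS < 1) exactly when the slope angle exceeds
# the friction angle, since FS then reduces to tan(phi) / tan(theta).
print(factor_of_safety(35.0, 30.0) < 1.0)   # True
```

An inversion in this spirit would search for the Coulomb parameters that best explain which slope angles in a basin actually failed during the earthquake.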

Aide à l'analyse fiabiliste d'une pile à combustible par la simulation / PEMFC multi-physical modelling and guidelines to evaluate the consequences of parameter uncertainty on the fuel cell performance

Noguer, Nicolas 07 July 2015 (has links)
The operation of the proton exchange membrane fuel cell (PEMFC) is subject to inherent uncertainty in various material, design and control parameters, which leads to performance variability and impacts cell reliability. Inaccuracies in the building process of the fuel cell (in the fabrication of the cell components and in the assembly of the complete stack) and fluctuations in the operating parameters (e.g. cell and gas temperatures, gas pressures, flows and relative humidity) affect the electrical performance of the cell (i.e. cell voltage) as well as its reliability and durability. For a given application, the choice of materials for the various components of the electrochemical cell and the cell design choices (geometrical characteristics and sizes of the components) correspond to trade-offs between maximal electrical performance, minimal fuel consumption, high lifespan and reliability targets, and minimal cost. In this PhD thesis, a novel method is proposed to help evaluate the reliability of a PEMFC stack. The aim is to guarantee a target level of electrical performance considered sufficient to meet the application requirements. The approach is based on close coupling between physical modelling and statistical reliability analysis. 
The complexity of the physical phenomena involved in the fuel cell is taken into account through the development of a dynamical, symbolic, acausal modeling tool including physical and semi-empirical parameters as well. The proposed knowledge PEMFC model is one-dimensional, non-isothermal and it includes a two-phase fluidic flow representation (each reactant is considered as a mix of gases and liquid water) in order to better take into account the complexity of the water management in the cell. The modeling is implemented using the MODELICA language and the DYMOLA software; one of the advantages of this simulation tool is that it allows an effective connection between multi-physical modeling and statistical treatments. In this perspective, the modeling is done with the aim of having as much relevant physical parameters as possible (classified in our work as operating, intrinsic, and semi-empirical parameters). The different effects of these parameters on the PEMFC electrical behavior can be observed and the performance sensitivity can be determined by considering some statistical distributions of input parameters, which is a step towards reliability analysis.A detailed physical and reliability analysis is conducted by introducing (as an example) an uncertainty rate in the porosity value of the cathodic Gas Diffusion Layer (coefficients of variance equal to 1%, 5% and 10%). The study of the uncertainty consequences on the cell voltage and electrical impedance is done through a design of numerical experiments and with the use of various statistical analysis tools, namely: graphs of the average effects, statistical sensitivity analyses (ANOVAs), graphs displaying the coefficients of variances linked with the statistical distributions observed in the inputs and outputs of the deterministic model. 
In this example of analysis and in the considered cell operating conditions, the provisional reliability rate (probability that the cell voltage is higher than 0.68V) is estimated to 91% with an input coefficient of variance equal to 10%.
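The coupling between a physical model and Monte Carlo sampling described above can be illustrated with a minimal sketch. The `cell_voltage` surrogate below is a hypothetical stand-in for the thesis's Modelica/Dymola cell model (its shape and constants are invented for illustration); only the sampling logic mirrors the method: draw the GDL porosity from a distribution with a given coefficient of variation and count how often the voltage stays above the 0.68 V threshold.

```python
import random

def cell_voltage(porosity):
    """Hypothetical surrogate for the Modelica/Dymola cell model: the
    voltage peaks at the nominal porosity and falls off on either side.
    The shape and constants are invented for illustration only."""
    d = porosity - 0.6  # deviation from an assumed nominal GDL porosity
    return 0.75 - 2.0 * d * d - 0.5 * abs(d)

def reliability(nominal_porosity=0.6, cv=0.10, threshold=0.68,
                n=100_000, seed=42):
    """Monte Carlo estimate of P(cell voltage > threshold) when the GDL
    porosity is drawn from a normal distribution with coefficient of
    variation cv (sigma = cv * mean)."""
    rng = random.Random(seed)
    sigma = cv * nominal_porosity
    hits = sum(
        1 for _ in range(n)
        if cell_voltage(rng.gauss(nominal_porosity, sigma)) > threshold
    )
    return hits / n
```

On this invented surrogate, a 10% input coefficient of variation yields a reliability of roughly 90%, the same order as the 91% reported for the real model; the agreement is illustrative, not a reproduction of the thesis result.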
297

A Quantitative Analysis of Shape Characteristics of Marine Snow Particles with Interactive Visualization: Validation of Assumptions in Coagulation Models

Dave, Palak P. 28 June 2018 (has links)
The Deepwater Horizon oil spill that began on April 20, 2010, in the Gulf of Mexico was the largest marine oil spill in the history of the petroleum industry. The spill triggered an unexpected and prolonged sedimentation event of oil-associated marine snow to the seafloor, driven by coagulation among oil-associated marine particles. Marine scientists are developing models of the coagulation of marine particles and oil in order to estimate the amount of oil that may reach the seafloor along with marine particles. These models use certain assumptions about the shape and texture parameters of marine particles, assumptions that may not be based on accurate information or that may change during and after an oil spill. The work performed here provides a quantitative analysis of the assumptions used in modeling the coagulation of marine particles. It also investigates the changes in model parameters (shape and texture) during and after the Deepwater Horizon oil spill across different seasons (spring and summer). An Interactive Visualization Application was developed for data exploration and visual analysis of trends in these parameters, and an Interactive Statistical Analysis Application was developed to produce statistical summaries of the parameter values.
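Coagulation models commonly idealize particles as spheres, so one simple quantitative check is to compare measured shape descriptors against their spherical limits. The helpers below are a minimal sketch using standard image-analysis descriptors (the specific descriptors are an assumption, not taken from the thesis): circularity 4πA/P² equals 1.0 for a perfect circle and drops for irregular marine snow aggregates.

```python
import math

def circularity(area, perimeter):
    """Circularity 4*pi*A / P**2: exactly 1.0 for a circle, smaller for
    the irregular, porous aggregates typical of marine snow."""
    return 4.0 * math.pi * area / perimeter ** 2

def aspect_ratio(major_axis, minor_axis):
    """Ratio of fitted-ellipse axes: 1.0 for a circle, larger for
    elongated particles."""
    return major_axis / minor_axis
```

For example, a unit circle (area π, perimeter 2π) scores 1.0, while a square of side 1 (area 1, perimeter 4) scores π/4, quantifying its departure from the spherical assumption.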
298

Analys av ljudspektroskopisignaler med artificiella neurala eller bayesiska nätverk / Analysis of Acoustic Spectroscopy Signals using Artificial Neural or Bayesian Networks

Hagqvist, Petter January 2010 (has links)
When analyzing fluids using acoustic spectroscopy, there is a need for multivariate methods that predict properties such as viscosity and density from acoustic spectra. The use of artificial neural networks and Bayesian networks for this purpose is examined through theoretical and practical investigations. Preprocessing and partitioning of the data, along with a handful of linear and non-linear multivariate analysis methods, are described and implemented. The prediction errors of the different methods are compared, and PLS (Partial Least Squares) emerges as the strongest candidate for predicting the sought-after properties.
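PLS, the method that came out on top, can be sketched compactly. Below is a minimal single-response PLS (PLS1, NIPALS algorithm), an illustrative stand-in rather than the thesis's actual toolchain; in practice a library implementation such as scikit-learn's `PLSRegression` would normally be used. Here `X` plays the role of the acoustic spectra and `y` a property such as viscosity.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal single-response PLS (PLS1, NIPALS): returns coefficients
    b and centering terms so that y is approximated by
    (X - x_mean) @ b + y_mean."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                    # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                       # score vector
        tt = t @ t
        p = Xc.T @ t / tt                # X loading
        q = (yc @ t) / tt                # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)  # regression coefficients
    return b, x_mean, y_mean

def pls1_predict(X, b, x_mean, y_mean):
    return (np.asarray(X, dtype=float) - x_mean) @ b + y_mean
```

On noiseless data whose spectra and target share a low-dimensional latent structure, a matching number of components recovers the relationship essentially exactly; with real spectra, the component count is chosen by cross-validation.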
299

Quantitative vulnerability analysis of electric power networks

Holmgren, Åke J. January 2006 (has links)
Disturbances in the supply of electric power can have serious implications for everyday life as well as for national (homeland) security. A power outage can be initiated by natural disasters, adverse weather, technical failures, human errors, sabotage, terrorism, and acts of war. The vulnerability of a system is described as a sensitivity to threats and hazards, and is measured by P(Q(t) > q), i.e. the probability of at least one disturbance with negative societal consequences Q larger than some critical value q, during a given period of time (0, t]. The aim of the thesis is to present methods for quantitative vulnerability analysis of electric power delivery networks to enable effective strategies for prevention, mitigation, response, and recovery to be developed. Paper I provides a framework for vulnerability assessment of infrastructure systems. The paper discusses concepts and perspectives for developing a methodology for vulnerability analysis, and gives examples related to power systems. Paper II analyzes the vulnerability of power delivery systems by means of statistical analysis of Swedish disturbance data. It is demonstrated that the size of large disturbances follows a power law, and that the occurrence of disturbances can be modeled as a Poisson process. Paper III models electric power delivery systems as graphs. Statistical measures for characterizing the structure of two empirical transmission systems are calculated, and a structural vulnerability analysis is performed, i.e. a study of the connectivity of the graph when vertices and edges are disabled. Paper IV discusses the origin of power laws in complex systems in terms of their structure and the dynamics of disturbance propagation. A branching process is used to model the structure of a power distribution system, and it is shown that the disturbance size in this analytical network model follows a power law.
Paper V shows how the interaction between an antagonist and the defender of a power system can be modeled as a game. A numerical example is presented, and it is studied whether there exists a dominant defense strategy and an optimal allocation of resources between protection of components and recovery.
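The structural vulnerability analysis of Paper III, studying graph connectivity as vertices are disabled, can be sketched with a small stand-alone helper (an illustration, not the thesis code): track the size of the largest connected component of the network graph as vertices are removed.

```python
from collections import deque

def largest_component(nodes, edges, removed=frozenset()):
    """Size of the largest connected component of an undirected graph
    after deleting the vertices in `removed` (and their edges)."""
    adj = {n: set() for n in nodes if n not in removed}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        # breadth-first search over one component
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best
```

A star-shaped network illustrates the idea: removing a leaf barely changes connectivity, while removing the hub shatters the graph into isolated vertices, the kind of asymmetry such an analysis is designed to expose.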
300

Impacts Of Climate Change On Water Resources On Eastern Mountainous Region Of Turkey

Guventurk, Abdulkadir 01 March 2013 (has links) (PDF)
Temperature and precipitation are the most important indicators of climate change. Especially in basins fed by snow, shifts of melting to earlier times affect the streamflow. An increase in temperature causes snowmelt to shift to earlier times, so that the hydrologic regime of the river system changes, which in turn alters the climatic conditions of the region. In this study, the shifts in snowmelt timing are analyzed for 15 selected streamflow stations located in the Euphrates, Tigris, Aras, and Çoruh basins in Eastern Anatolia, Turkey, over the period from 1970 to 2010. The shifts in snowmelt runoff are determined with the Center Time (CT) method. Meteorological stations representative of the basin characteristics of the stream gauge stations are also selected for use in the analyses. To relate CT shifts to temperature and precipitation changes, trend analyses are applied to the temperature, precipitation, and streamflow data. In addition, for each pair of stream gauge and meteorological stations and each year, the days with daily average temperature below freezing, and the wet days below freezing, occurring before the CT are analyzed; these pre-CT days within a year can be indirectly linked to snowy days and the accumulated snow amount. The complete analyses show significant warming at each station in the region and no important trends in annual precipitation, although meaningful seasonal changes in precipitation are observed at a few stations. Regional warming and the associated changes in precipitation and snowmelt runoff cause significant shifts of snowmelt runoff to earlier times: eight out of the fifteen stream gauge stations in the Euphrates, Tigris, and Aras basins showed significant time shifts according to statistical trend tests.
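The Center Time statistic used above can be stated in one line: CT is the flow-weighted mean date of the annual hydrograph, CT = Σ tᵢqᵢ / Σ qᵢ, so a year-over-year decrease in CT indicates earlier snowmelt runoff. A minimal sketch (variable names are ours, not the study's):

```python
def center_time(days, flows):
    """Center Time (CT): flow-weighted mean date of the hydrograph,
    CT = sum(t_i * q_i) / sum(q_i), with days given as day-of-year."""
    total = sum(flows)
    return sum(d * q for d, q in zip(days, flows)) / total
```

A uniform hydrograph places CT at mid-year, while flow concentrated early in the year pulls CT toward smaller day numbers, which is exactly the earlier-melt signal the study tests for.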
