51

Conflation Of CFD And Building Thermal Simulation To Estimate Indian Thermal Comfort Level

Manikandan, K 01 1900 (has links) (PDF)
In residential and commercial buildings, most of the energy is used to provide a thermally comfortable environment for the occupants. Recent research on green buildings focuses on reducing the energy consumed by the air-conditioners and fans that produce this environment. Thermal comfort is defined as the condition of mind which expresses satisfaction with the thermal environment. The human body continuously produces metabolic heat, and its core temperature must be maintained within a narrow range; the heat generated inside the body must therefore be lost to the environment to maintain thermal equilibrium. Heat is lost from the body through conduction, convection, radiation and evaporation via the skin and respiration. These losses are influenced by environmental factors (air temperature, air velocity, relative humidity and mean radiant temperature), physiological factors (activity level, posture and sweat rate) and clothing factors (thermal insulation value, evaporative resistance and microenvironment volume). When the body is in thermal equilibrium with its surroundings, heat production equals heat loss and thermal comfort is maintained. The level of thermal comfort can be measured by various indices that combine several of these parameters. Of these, Fanger's PMV (Predicted Mean Vote) – PPD (Predicted Percentage of Dissatisfied) index is the one recommended by ASHRAE and ISO. The PMV–PPD index was derived from experiments on acclimated European and American subjects. Several researchers have argued that the index is not valid for tropical regions, while others have reported good agreement with it in those regions. Its validity for Indian subjects has not yet been examined.
The PMV–PPD index can be validated through a human heat balance experiment in which the individual heat losses are calculated from measured parameters. In the human heat balance, convective heat transfer plays the major role whenever air movement exists around the body. The convective heat loss depends on the convective heat transfer coefficient, which is a function of the driving force of the convection. In this work, Computational Fluid Dynamics (CFD) techniques were used to determine the convective heat transfer coefficient of the human body in a standing posture under natural convection. The CFD analysis of the heat and fluid flow around the human body proceeded as follows. An anthropometric digital human manikin was modelled in GAMBIT together with a test room; the model was meshed with tetrahedral elements and exported to FLUENT for analysis. Simulations were run at ambient temperatures from 16 °C to 32 °C in increments of 2 °C. The Boussinesq approximation was used to model natural convection, and the Surface-to-Surface model to simulate radiation. The surrounding wall temperature was set equal to the ambient temperature, and the sum of the convective and radiative heat losses calculated from the ASHRAE model was imposed as the heat flux from the manikin's surface. From the simulations, the local skin temperatures were extracted and the temperature and velocity distributions were analysed. The results show that skin temperature increases with ambient temperature, and that the thicknesses of the hydrodynamic and thermal boundary layers increase with height along the manikin. From the Nusselt number analogy, the convective heat transfer coefficients of the individual manikin segments were calculated, and their relation to the temperature difference was derived by regression analysis.
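As an illustration of how a convective coefficient follows from a Nusselt-number analogy, the sketch below uses the classical laminar vertical-plate correlation Nu = 0.59 Ra^(1/4); the air properties and the correlation are textbook values, not the manikin-specific regression derived in the thesis.

```python
# Illustrative natural-convection coefficient for a vertical surface.
# Classical flat-plate correlation, NOT the thesis's manikin regression.
def convective_coefficient(t_surface, t_ambient, height,
                           k_air=0.026, nu_air=1.6e-5, pr_air=0.71):
    """Return h_c in W/(m^2 K) for a vertical surface of the given height [m]."""
    t_film = 0.5 * (t_surface + t_ambient)        # film temperature [deg C]
    beta = 1.0 / (t_film + 273.15)                # ideal-gas expansion coeff. [1/K]
    d_t = abs(t_surface - t_ambient)
    grashof = 9.81 * beta * d_t * height**3 / nu_air**2
    rayleigh = grashof * pr_air
    nusselt = 0.59 * rayleigh**0.25               # laminar vertical-plate correlation
    return nusselt * k_air / height               # h_c = Nu * k / L
```

For a 1.7 m surface at skin-like temperatures this yields a coefficient of a few W/(m² K), the right order of magnitude for free convection around a standing person.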
The relation obtained for the convective heat transfer coefficient was validated against previous experimental results reported in the literature for the same conditions, and it agrees well with those experimental relations. The characteristics of the human thermal plume were also studied; the plume velocity was found to increase with ambient temperature. Using the Grashof number, the flow around the manikin was examined: it is laminar up to abdomen level, turbulent from shoulder level upwards, and in transition between these two levels. The PMV model was then validated for tropical countries, and for Indians in particular, by a heat balance experiment on Indian subjects. The experiment was conducted on forty male subjects at different ambient temperatures in a closed room with low air movement. Local skin temperature, relative humidity, air velocity and globe temperature were measured, and a thermal sensation vote was collected from every subject under every condition. The convective heat loss was calculated using the coefficient obtained from the present computational simulation. The radiative heat loss was calculated for two cases: in case one, the mean radiant temperature was taken equal to the ambient temperature; in case two, it was calculated from the globe temperature. The other heat losses were calculated from the basic formulae and the relations given by ASHRAE based on Fanger's assumptions, and the validity of those assumptions was examined. The collected sensation votes and the calculated PMV were then compared to validate the PMV–PPD index for Indians. The experimental results show considerable variation between the comfort level calculated from the measured parameters and that based on Fanger's assumptions.
When the mean radiant temperature was taken equal to the ambient temperature for the indoor condition, the calculated comfort level varied more than the actual one, whereas the comfort level calculated from the globe temperature agreed well with that derived from the collected sensation votes. It was therefore concluded that the ASHRAE model is valid for Indians provided the radiation is measured accurately. Using the ASHRAE model, the wall emissivity required at different ambient temperatures was then determined from CFD simulation. In the ASHRAE model, the emissivity of the surrounding walls plays the major role in the radiative heat loss from the human body; hence recent research has focused on low-emissivity wall paints. A computational study was carried out to determine the wall emissivity required to keep the occupant thermally comfortable at low energy consumption. Simulations were run at ambient temperatures from 16 °C to 40 °C in increments of 4 °C, with surrounding wall emissivities from 0.0 to 1.0 in increments of 0.2. From these simulations, the change in mean skin temperature with wall emissivity was obtained for every ambient condition. The mean skin temperature required for a particular activity level was compared with the simulation results, and from this the required wall emissivity at each ambient condition was determined. If the surrounding walls have the required emissivity, the heat/cold strain on the human body decreases and thermal comfort can be achieved with low energy consumption. (Note: the title given on the CD is "Computation of Required Wall Emissivity for Low Energy Consumption in Buildings Using ASHRAE Model Validated for Indian Thermal Comfort".)
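The role of wall emissivity in the radiative exchange can be sketched with the standard two-surface enclosure formula; the body and wall areas and emissivities below are illustrative assumptions, not values taken from the thesis.

```python
# Radiative heat loss from a body inside an enclosure (two-surface model).
# Areas and emissivities are illustrative defaults, not thesis values.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiative_loss(t_skin_c, t_wall_c, eps_body=0.95, eps_wall=0.9,
                   a_body=1.8, a_wall=54.0):
    """Net radiative exchange [W]; wall emissivity enters via eps_eff."""
    eps_eff = 1.0 / (1.0 / eps_body + (a_body / a_wall) * (1.0 / eps_wall - 1.0))
    ts, tw = t_skin_c + 273.15, t_wall_c + 273.15
    return eps_eff * SIGMA * a_body * (ts**4 - tw**4)
```

Lowering the wall emissivity reduces the effective exchange factor, which is the mechanism the low-emissivity wall-paint study exploits.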
52

Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis

Olson, Brent 05 1900 (has links)
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
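The attenuation caused by coarse categorization can be illustrated with a small simulation (a simplified sketch, not the study's MI/CFA procedure): discretize normally distributed scores onto a k-point scale and correlate them with the original continuous scores.

```python
# Sketch: how much information a k-point scale retains about a continuous score.
# Simplified illustration, not the study's simulation or CFA models.
import random

def discretize(x, points, lo=-3.0, hi=3.0):
    """Map a continuous score onto an equally spaced k-point scale."""
    x = min(max(x, lo), hi)
    step = (hi - lo) / (points - 1)
    return round((x - lo) / step) * step + lo

def attenuation(points, n=10000, seed=1):
    """Pearson correlation between continuous scores and their k-point version."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [discretize(x, points) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5
```

Consistent with the abstract's conclusion, 3-point scales attenuate the score substantially while scales beyond roughly a dozen points are nearly indistinguishable from continuous measurement.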
53

Estimation simplifiée de la variance dans le cas de l’échantillonnage à deux phases

Béliveau, Audrey 08 1900 (has links)
In this thesis we study the problem of variance estimation for the double expansion estimator and for calibration estimators under two-phase designs. We suggest using a variance decomposition different from the one usually employed in two-phase sampling, which leads to a simplified variance estimator. We study the conditions under which the simplified variance estimators are valid. To do so, we consider the following particular cases: (1) a Poisson design at the second phase, (2) a two-stage design, (3) simple random sampling without replacement at both phases, and (4) simple random sampling without replacement at the second phase. We show that a crucial condition for the simplified variance estimator to be valid in cases (1) and (2) is that the first-phase sampling fraction be negligible (or small). We also show in cases (3) and (4) that, for some calibration estimators, the simplified variance estimator is valid when the first-phase sampling fraction is small, provided the sample size is large enough. Furthermore, we show that the simplified variance estimators can be obtained in an alternative way using the reversed approach (Fay, 1991; Shao and Steel, 1999). Finally, we conduct simulation studies to support the theoretical results.
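A minimal sketch of the double expansion estimator under two-phase simple random sampling (an illustration of the estimator under study, not of the simplified variance formulas themselves):

```python
# Double expansion estimator of a population total under two-phase SRS:
# each second-phase unit is weighted by 1/(pi1 * pi2).
import random

def double_expansion(y_pop, n1, n2, seed=0):
    """Estimate the total of y_pop from a phase-1 SRS of n1, subsampled to n2."""
    rng = random.Random(seed)
    N = len(y_pop)
    s1 = rng.sample(range(N), n1)      # first-phase sample
    s2 = rng.sample(s1, n2)            # second-phase subsample
    pi1, pi2 = n1 / N, n2 / n1         # inclusion probabilities at each phase
    return sum(y_pop[k] for k in s2) / (pi1 * pi2)
```

Averaged over repeated draws, the estimator is unbiased for the population total, which the test below checks by simulation.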
54

Estimation utilisant les polynômes de Bernstein

Tchouake Tchuiguep, Hervé 03 1900 (has links)
This thesis presents the Bernstein estimators, which are recent alternatives to the classical estimators of the distribution function and the density. More precisely, we study their various properties and compare them with those of the empirical distribution function and of the kernel estimator. We derive an asymptotic expression for the first two moments of the Bernstein estimator of the distribution function. As with the classical estimators, we show that this estimator satisfies the Chung–Smirnov property under certain conditions. We then show that the Bernstein estimator improves on the empirical distribution function in terms of mean squared error. Turning to the asymptotic behaviour of the Bernstein estimators, we show that, for a suitable choice of the degree of the polynomial, they are asymptotically normal. Numerical studies on some classical distributions confirm that the Bernstein estimators may be preferable to the classical ones.
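The Bernstein estimator of the distribution function smooths the empirical CDF with Bernstein polynomial weights; for data supported on [0, 1] it can be sketched as:

```python
# Bernstein polynomial smoothing of the empirical CDF on [0, 1]:
# F_m(x) = sum_k F_n(k/m) * C(m, k) * x^k * (1 - x)^(m - k)
from math import comb

def ecdf(data, x):
    """Empirical distribution function at x."""
    return sum(d <= x for d in data) / len(data)

def bernstein_cdf(data, x, m=20):
    """Degree-m Bernstein estimator of the CDF at x in [0, 1]."""
    return sum(ecdf(data, k / m) * comb(m, k) * x**k * (1 - x)**(m - k)
               for k in range(m + 1))
```

Unlike the step-function ECDF, the result is a smooth polynomial in x, which is the source of the mean-squared-error improvement discussed in the abstract.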
55

Distribuição de Poisson bivariada aplicada à previsão de resultados esportivos

Silva, Wesley Bertoli da 23 April 2014 (has links)
The modelling of paired count data is a topic that has been frequently discussed in several threads of research; bivariate counts, such as sports scores, are a particular case. In this work we present the bivariate Poisson distribution for modelling positively correlated scores. Possible independence between the counts is also addressed through the double Poisson model, which arises as a special case of the bivariate Poisson model. The main characteristics and properties of these models are presented, and a simulation study is conducted to evaluate the behaviour of the estimates for different sample sizes. Considering the possibility of modelling the parameters through predictor variables, we present the structure of the bivariate Poisson regression model in the general case, as well as the structure of an effects model for application to sports data. In particular, we consider applications to data from the 2012 Brazilian Championship Serie A, for which the effects are estimated under both the double Poisson and the bivariate Poisson models. Once the fits are obtained, the probabilities of each score are estimated and forecasts for the outcomes are derived. To obtain more accurate forecasts, we present the weighted likelihood method, which makes it possible to weight the relevance of the data according to the time at which they were observed.
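A bivariate Poisson pair with positive correlation can be generated by trivariate reduction: X = Z1 + Z3 and Y = Z2 + Z3 with independent Poisson Z's, so that Cov(X, Y) = λ3. A self-contained sketch:

```python
# Bivariate Poisson scores via trivariate reduction; the shared component
# Z3 induces the positive correlation between the two counts.
import random

def rpois(rng, lam):
    """Knuth's multiplicative Poisson sampler (fine for small lambda)."""
    limit = 2.718281828459045 ** (-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bivariate_poisson(rng, lam1, lam2, lam3):
    """Return (X, Y) with X ~ Poisson(lam1+lam3), Y ~ Poisson(lam2+lam3),
    and Cov(X, Y) = lam3."""
    z3 = rpois(rng, lam3)
    return rpois(rng, lam1) + z3, rpois(rng, lam2) + z3
```

Setting λ3 = 0 recovers the double Poisson (independence) model mentioned in the abstract as a special case.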
57

Design- och simuleringsstudie av flödeshus och sensorkropp / A design and simulation study of a sensor body and flow housing

Larsson Sparr, Klara, Muhonen, Mathias January 2020 (has links)
In this project a concept for flow measurement has been developed, with an internal sensor body and a maintained flow speed. The measurement method consists of a sensor body in a flow housing, where the flow is measured using conventional pitot-tube calculations. Two solutions are presented; they differ in the design of the sensor body. The cross-section of the sensor body is similar in both: one solution is rotationally symmetrical about the centre of the tube, while the other runs from wall to wall, centred in the tube. To maintain the flow speed, calculations were made to shape the flow housing so that the cross-sectional area of the flow corresponded to the area of the tube without the sensor body. These calculations also included a compensation factor for the increased solid surface area, since these surfaces create boundary layers that lower the flow speed and that vary with the design of the sensor body. Comparisons between the concept generated in this project and the commissioner's current products were made, and they identified several areas in which the concept could complement existing products.
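The two ingredients of the concept, the pitot-tube speed calculation and the area-preserving housing bore, can be sketched as follows. The formulas are the standard ones; the function names, the default air density, and the omission of the boundary-layer compensation factor are simplifying assumptions, not values from the report.

```python
# Pitot-tube flow speed and an area-preserving housing bore.
# Standard formulas; boundary-layer compensation is deliberately omitted.
import math

def pitot_velocity(delta_p, rho=1.204):
    """Flow speed [m/s] from measured dynamic pressure [Pa]: v = sqrt(2*dp/rho)."""
    return math.sqrt(2.0 * delta_p / rho)

def housing_diameter(pipe_d, body_frontal_area):
    """Housing bore [m] that keeps the free flow area equal to the plain pipe's."""
    pipe_area = math.pi * (pipe_d / 2.0) ** 2
    return 2.0 * math.sqrt((pipe_area + body_frontal_area) / math.pi)
```

Widening the bore by exactly the sensor body's frontal area is what keeps the mean flow speed unchanged, before the boundary-layer correction described in the abstract is applied.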
58

Realisation of genomic selection in the honey bee

Bernstein, Richard 27 July 2022 (has links)
Genomic selection is routine practice for several important livestock species, but not yet for the honey bee, owing to the peculiarities of this species. For honey bees, a specialized genetic relationship matrix is required for the prediction of breeding values, since their mating biology involves uncertain paternity, diploid queens, and haploid drones. This thesis presents a novel algorithm for the efficient computation of the inverse of the numerator relationship matrix and of the coefficients of inbreeding on large data sets. The method was used to estimate genomic and pedigree-based breeding values in a simulation study; the accuracy and bias of the estimated breeding values were evaluated for various sizes of the reference population. Subsequently, the genetic gain in the initial cycle of breeding programs was evaluated for several breeding schemes employing genomic or pedigree-based selection. A considerably higher genetic gain than with pedigree-based selection was achieved with genomic preselection, in which queens were genotyped early in life and only the candidates with the highest genomic breeding values were admitted for mating or phenotyping. On a real data set of about 3000 genotyped queens, pedigree-based and genomic breeding values were predicted for six economically relevant traits. Three traits showed significantly higher prediction accuracy with genomic than with pedigree-based methods, and the differences among the six traits could be explained mainly by their genetic parameters and the limited size of the reference population. The results show that genomic selection can be applied in honey bees, and the thesis provides appropriate breeding schemes and mathematical methods for its implementation.
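For orientation, the tabular method for the standard numerator relationship matrix is sketched below. Note that this is the textbook diploid construction, not the honey-bee-specific matrix (uncertain paternity, haploid drones) that the thesis develops.

```python
# Tabular method for the standard diploid numerator relationship matrix A.
# NOT the honey-bee-specific matrix from the thesis; a textbook sketch only.
def numerator_relationship(pedigree):
    """pedigree maps individual -> (sire, dam), with parents listed before
    their offspring and None for an unknown parent. Returns A as nested lists."""
    ids = list(pedigree)
    idx = {a: i for i, a in enumerate(ids)}
    n = len(ids)
    A = [[0.0] * n for _ in range(n)]
    for i, a in enumerate(ids):
        s, d = pedigree[a]
        si = idx[s] if s is not None else None
        di = idx[d] if d is not None else None
        for j in range(i):
            aij = 0.0
            if si is not None:
                aij += 0.5 * A[j][si]    # half the relationship to the sire
            if di is not None:
                aij += 0.5 * A[j][di]    # half the relationship to the dam
            A[i][j] = A[j][i] = aij
        # diagonal: 1 + inbreeding = 1 + half the parents' relationship
        A[i][i] = 1.0 + (0.5 * A[si][di] if si is not None and di is not None else 0.0)
    return A
```

The honey-bee case replaces the fixed 0.5 contributions with species-specific coefficients, but the recursive tabular structure is the same.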
59

Design and analysis of sugarcane breeding experiments: a case study / Delineamento e análise de experimentos de melhoramento com cana de açúcar: um estudo de caso

Santos, Alessandra dos 26 May 2017 (has links)
One purpose of breeding programs is the selection of the best test lines. The accuracy of selection can be improved by using an optimal design and models that fit the data well. Achieving this is not easy, especially in large experiments that assess more than one hundred lines without replication, owing to limited material and area and to high costs: the many parameters of the complex variance structure must then be fitted from the limited number of replicated check varieties. The main objectives of this thesis were to model 21 sugarcane trials provided by the Centro de Tecnologia Canavieira (CTC, a Brazilian sugarcane company) and to evaluate the design employed, which uses a large number of unreplicated test lines (new varieties) and systematically replicated check (commercial) lines. A linear mixed model was used to identify the three major components of spatial variation in the plot errors and the competition effects at the genetic and residual levels. The test lines were taken as random effects and the check lines as fixed, because they come from different processes. Both single-trial and joint analyses were developed, because the trials could be grouped into two types: (i) one longitudinal data set (two cuts) and (ii) five regional groups of experiments (each group a region with three sites). In a study of alternative designs, a fixed trial size was assumed in order to compare the efficiency of the unreplicated design employed in these 21 trials with spatially optimized unreplicated designs, p-rep designs with checks, and a spatially optimized p-rep design. To investigate models and designs, four simulation studies were run to assess mainly (i) the fitted model under competition effects at the genetic level, (ii) the accuracy of estimation in separate versus joint analyses, (iii) the relation between sugarcane lodging and negative residual correlation, and (iv) design efficiency. The main information obtained from the simulation studies was: the number of times the model algorithm converged; the variance parameter estimates; the correlations between the direct genetic EBLUPs and the true direct genetic effects; the assertiveness of selection, or average similarity, where similarity was measured as the percentage of the 30 test lines with the highest direct genetic EBLUPs that are among the true (generated) 30 best test lines; and the heritability estimates or genetic gain.
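The similarity measure used to score selection can be sketched directly (a hypothetical helper mirroring the definition in the text, not code from the thesis):

```python
# Similarity: percentage of the top-`top` lines by estimated genetic value
# that are also among the true top-`top` lines.
def selection_similarity(true_effects, estimates, top=30):
    """Both arguments are parallel lists of genetic values, one per test line."""
    def best(values):
        # indices of the `top` largest values
        return set(sorted(range(len(values)), key=lambda i: -values[i])[:top])
    return 100.0 * len(best(true_effects) & best(estimates)) / top
```

Perfect estimation gives 100%, estimation that inverts the ranking gives 0%, and realistic EBLUPs fall in between, which is what the thesis's simulation studies quantify.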
60

Design and analysis of sugarcane breeding experiments: a case study

Alessandra dos Santos, 26 May 2017
One purpose of breeding programs is the selection of the best test lines. The accuracy of selection can be improved by using an optimal design and models that fit the data well. Achieving this is not easy, especially in large experiments that assess more than one hundred lines without replication, owing to limited material, limited area and high costs; estimation of the many parameters in the complex variance structure therefore relies on the few replicated plots of check varieties. The main objectives of this thesis were to model 21 sugarcane trials provided by the Centro de Tecnologia Canavieira (CTC, a Brazilian sugarcane company) and to evaluate the design employed, which uses a large number of unreplicated test lines (new varieties) and systematically replicated check (commercial) lines. A linear mixed model was used, identifying three major components: spatial variation in the plot errors and competition effects at both the genetic and residual levels. Test lines were treated as random effects and check lines as fixed effects, because they come from different processes. Separate and joint analyses were developed because the trials could be grouped into two types: (i) one longitudinal data set (two cuts) and (ii) five regional groups of experiments (each group a region with three sites). In a study of alternative designs, a fixed trial size was assumed in order to compare the efficiency of the unreplicated design employed in these 21 trials against spatially optimized unreplicated designs, partially replicated (p-rep) designs with checks, and spatially optimized p-rep designs. Four simulation studies were carried out to assess (i) the fitted models under competition effects at the genetic level; (ii) the accuracy of estimates from separate versus joint analyses; (iii) the relation between sugarcane lodging and negative residual correlation; and (iv) design efficiency.
To conclude, the main information obtained from the simulation studies was: the number of times the algorithm converged for each model analyzed; the variance parameter estimates; the correlations between the direct genetic EBLUPs and the true direct genetic effects; the assertiveness of selection, or average similarity, where similarity was measured as the percentage of the 30 test lines with the highest direct genetic EBLUPs that are among the true (generated) 30 best test lines; and the heritability estimates or the genetic gain.
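The similarity (assertiveness of selection) measure described above can be sketched as a top-30 set overlap; this is an illustrative reconstruction, and the function name and simulated values are assumptions, not code from the thesis:

```python
import numpy as np

def selection_similarity(eblups, true_effects, k=30):
    """Percentage of the k lines with the highest direct genetic
    EBLUPs that are among the k truly best lines."""
    selected = set(np.argsort(eblups)[-k:])      # top-k predicted lines
    best = set(np.argsort(true_effects)[-k:])    # top-k true lines
    return 100.0 * len(selected & best) / k

# Illustrative use: 110 simulated lines with noisy predictions.
rng = np.random.default_rng(0)
true_g = rng.normal(size=110)                    # simulated true effects
noisy = true_g + rng.normal(scale=0.5, size=110) # noisy EBLUP-like values
print(selection_similarity(noisy, true_g))       # between 0 and 100
```

Perfect prediction gives 100%, complete disagreement 0%, so the measure directly reflects how many of the truly best lines a breeder would actually select.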
