181 |
[en] ALGORITHMS FOR ONLINE PORTFOLIO SELECTION PROBLEM / [pt] ALGORITMOS PARA O PROBLEMA DE SELEÇÃO ONLINE DE PORTFÓLIOS. Cordeiro e Silva, Charles Kubudi. 15 April 2019 (has links)
[en] Online portfolio selection is a financial engineering problem that consists of sequentially allocating capital among a set of assets in order to maximize long-term cumulative return. With recent advances in machine learning, several algorithms have been proposed to address this problem. Some follow a Follow-the-Winner (FTW) methodology, which increases the weights of well-performing stocks on the hypothesis that their uptrend will continue; others follow the inverse Follow-the-Loser (FTL) methodology, which increases the weights of poorly performing stocks, betting on a price reversal. State-of-the-art FTW algorithms have a theoretical guarantee of asymptotically approaching the performance of the best single stock chosen in hindsight, while FTL algorithms are empirically observed to perform better. Our work explores the idea of learning when to use each of the two categories, using online learning algorithms flexible enough to assume both behaviours. We review the literature on indicators of memory in financial time series and their possible explicit use for choosing between FTW and FTL. We then propose a method for learning between these two categories in an online, dynamic manner, for use within online learning algorithms. In our experiments, the proposed method outperforms the established UCRP benchmark with an excess return of 36.76 percent.
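The contrast between the two families can be made concrete with a generic, illustrative sketch (not the algorithms studied in the thesis): an exponentiated-gradient step as the FTW-style update, and a crude mean-reversion reweighting as the FTL-style update. The learning rate and the one-day price relatives are invented for the example.

```python
import numpy as np

def ftw_update(weights, price_relatives, eta=0.05):
    """Follow-the-Winner: exponentiated-gradient step that shifts
    capital toward assets that just performed well."""
    w = weights * np.exp(eta * price_relatives / (weights @ price_relatives))
    return w / w.sum()

def ftl_update(weights, price_relatives, eps=1e-8):
    """Follow-the-Loser: reweight toward assets whose price just fell,
    betting on mean reversion (a crude PAMR/OLMAR-style heuristic)."""
    w = weights / (price_relatives + eps)
    return w / w.sum()

w = np.full(3, 1 / 3)              # start from the uniform (UCRP) portfolio
x = np.array([1.10, 0.95, 1.00])   # one day of price relatives

w_ftw = ftw_update(w, x)
w_ftl = ftl_update(w, x)
# FTW raises the weight of the winning asset; FTL raises the loser's.
assert w_ftw[0] > w[0] and w_ftl[1] > w[1]
```

A regime-switching scheme of the kind the thesis proposes would then choose, at each step, which of the two updates to trust.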
|
182 |
General Weighted Optimality of Designed Experiments. Stallings, Jonathan W. 22 April 2014 (has links)
Design problems involve finding optimal plans that minimize cost and maximize information about the effects of changing experimental variables on some response. Information is typically measured through statistically meaningful functions, or criteria, of a design's corresponding information matrix. The most common criteria implicitly assume equal interest in all effects and certain forms of information matrices tend to optimize them. However, these criteria can be poor assessments of a design when there is unequal interest in the experimental effects. Morgan and Wang (2010) addressed this potential pitfall by developing a concise weighting system based on quadratic forms of a diagonal matrix W that allows a researcher to specify relative importance of information for any effects. They were then able to generate a broad class of weighted optimality criteria that evaluate a design's ability to maximize the weighted information, ultimately targeting those designs that efficiently estimate effects assigned larger weight.
This dissertation considers a much broader class of potential weighting systems, and hence weighted criteria, by allowing W to be any symmetric, positive definite matrix. Assuming the response and experimental effects may be expressed as a general linear model, we provide a survey of the standard approach to optimal designs based on real-valued, convex functions of information matrices. Motivated by this approach, we introduce fundamental definitions and preliminary results underlying the theory of general weighted optimality.
A class of weight matrices is established that allows an experimenter to directly assign weights to a set of estimable functions and we show how optimality of transformed models may be placed under a weighted optimality context. Straightforward modifications to SAS PROC OPTEX are shown to provide an algorithmic search procedure for weighted optimal designs, including A-optimal incomplete block designs. Finally, a general theory is given for design optimization when only a subset of all estimable functions is assumed to be in the model. We use this to develop a weighted criterion to search for A-optimal completely randomized designs for baseline factorial effects assuming all high-order interactions are negligible. / Ph. D.
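The weighting idea can be illustrated with a generic sketch (not Morgan and Wang's exact formulation): with a symmetric, positive definite W, a weighted A-type criterion can be taken as trace(W M^-1), a weighted sum of the variances of the effect estimators. The toy information matrices below are invented for the example.

```python
import numpy as np

def weighted_a_value(info_matrix, W):
    """Weighted A-criterion: trace(W M^-1), a weighted sum of the
    variances of the effect estimators. Smaller is better."""
    return float(np.trace(W @ np.linalg.inv(info_matrix)))

# Two hypothetical 2-effect designs with the same total information.
M1 = np.diag([4.0, 2.0])   # more information on effect 1
M2 = np.diag([2.0, 4.0])   # more information on effect 2

W_equal = np.eye(2)            # classical (unweighted) A-optimality
W_skew = np.diag([3.0, 1.0])   # effect 1 is three times as important

# Under equal weights the designs tie; under unequal weights M1 wins,
# since it estimates the heavily weighted effect more precisely.
assert abs(weighted_a_value(M1, W_equal) - weighted_a_value(M2, W_equal)) < 1e-12
assert weighted_a_value(M1, W_skew) < weighted_a_value(M2, W_skew)
```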
|
183 |
Evaluation of phenotypic and genetic trends in weaning weight in Angus and Hereford populations in Virginia. Nadarajah, Kanagasabai. January 1985 (has links)
Total weaning weight records of 29,832 Angus and 15,765 Hereford calves born during 1953 through 1983 in Virginia were used to evaluate phenotypic and genetic trends for adjusted weaning weight (AWWT), weaning weight ratio (WWR) and deviation of AWWT from the mean AWWT of the contemporaries (DEVN). Two approaches, namely regression techniques and the maximum likelihood (ML) procedure, were taken to estimate the above trends.
The estimates of annual phenotypic trend for AWWT in the Angus and Hereford breeds were .96 and .82 kg/yr, respectively. The sire and dam genetic trends obtained from both approaches for the traits of interest were positive and significant; however, the estimates from the regression analyses were slightly higher than those from the ML procedure. The estimates of one-half of the sire genetic trends obtained from the ML procedure for WWR and DEVN were .40 ± .04 ratio units/yr and .72 ± .07 kg/yr in the Angus breed, and the corresponding values for the Hereford breed were .25 ± .06 ratio units/yr and .45 ± .12 kg/yr. The estimates of one-half of the dam trends for the corresponding traits were .32 ± .02 ratio units/yr and .55 ± .04 kg/yr for Angus and .21 ± .03 ratio units/yr and .30 ± .07 kg/yr for Herefords. The application of adjustment factors (to eliminate the bias due to non-random mating and culling levels) to estimates of sire genetic trends in the regression analyses produced estimates more similar to those obtained from the ML procedure. The average annual genetic trends over the study period from the ML procedure for AWWT were 1.27 kg/yr for Angus and .75 kg/yr for Herefords. / Ph. D.
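The regression approach to trend estimation can be sketched in its simplest form: the annual genetic trend is the slope of mean breeding value regressed on birth year. The breeding-value figures below are invented for illustration and are not the thesis data.

```python
import numpy as np

# Hypothetical mean estimated breeding values (kg) for adjusted weaning
# weight, by birth year. Illustrative numbers only, not the thesis data.
years = np.arange(1953, 1961)
mean_ebv = np.array([0.0, 1.1, 2.3, 3.4, 4.1, 5.3, 6.2, 7.5])

# Annual genetic trend = slope of mean breeding value on birth year.
trend, intercept = np.polyfit(years, mean_ebv, 1)
print(f"estimated genetic trend: {trend:.2f} kg/yr")
assert 0.9 < trend < 1.2
```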
|
184 |
The assessment of the quality of science education textbooks: conceptual framework and instruments for analysis. Swanepoel, Sarita. 04 1900 (has links)
Science and technology are constantly transforming our day-to-day living. Science education has become of vital importance to prepare learners for this ever-changing world. Unfortunately, science education in South Africa is hampered by under-qualified and inexperienced teachers. Textbooks of good quality can assist teachers and learners and facilitate the development of science teachers. For this reason, thorough assessment of textbooks is needed to inform the selection of good textbooks.

An investigation revealed that the available textbook evaluation instruments are not suitable for the evaluation of physical science textbooks in the South African context. An instrument is needed that focuses on science education textbooks and that prescribes the criteria, weights, evaluation procedure and rating scheme that can ensure justifiable, transparent, reliable and valid evaluation results. This study utilised elements from the Analytic Hierarchy Process (AHP) to develop such an instrument and verified the reliability and validity of the instrument's evaluation results.

Development of the Instrument for the Evaluation of Science Education Textbooks started with the formulation of criteria. Characteristics that influence the quality of textbooks were identified from the literature, existing evaluation instruments and stakeholders' concerns. In accordance with the AHP, these characteristics or criteria were divided into categories or branches to give a hierarchical structure. Subject experts verified the content validity of the hierarchy.

Expert science teachers compared the importance of different criteria. The data were used to derive weights for the different criteria with the Expert Choice computer application. A rubric was formulated to act as rating scheme and score sheet. During the textbook evaluation process the ratings were transferred to a spreadsheet that computed the scores for the quality of a textbook as a whole as well as for the different categories.

The instrument was tested on a small scale, adjusted and then applied on a larger scale. The results of different analysts were compared to verify the reliability of the instrument. Triangulation with the opinions of teachers who had used the textbooks confirmed the validity of the evaluation results obtained with the instrument. Future investigations on the evaluation instrument can include the use of different rating scales and limiting the number of criteria. / Thesis (M. Ed. (Didactics))
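The AHP step of deriving criterion weights from expert pairwise comparisons can be sketched generically: the weights are the normalised principal eigenvector of the comparison matrix. The judgements below are hypothetical, not the teachers' actual comparisons, and Expert Choice performs this computation internally.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise-comparison matrix
    as its principal eigenvector, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Hypothetical judgements: content is 3x as important as layout and
# 5x as important as price; layout is 2x as important as price.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

w = ahp_weights(A)
assert w[0] > w[1] > w[2] and abs(w.sum() - 1) < 1e-9
```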
|
185 |
Computational modelling of the neural systems involved in schizophrenia. Thurnham, A. J. January 2008 (has links)
The aim of this thesis is to improve our understanding of the neural systems involved in schizophrenia by suggesting possible avenues for future computational modelling, in an attempt to make sense of the vast number of studies of the symptoms and cognitive deficits associated with the disorder. This multidisciplinary research has covered three different levels of analysis: abnormalities in the microscopic brain structure, dopamine dysfunction at a neurochemical level, and interactions between cortical and subcortical brain areas, connected by cortico-basal ganglia circuit loops; and has culminated in the production of five models that provide useful clarification in this difficult field. My thesis comprises three major relevant modelling themes. Firstly, in Chapter 3 I looked at an existing neural network model addressing the Neurodevelopmental Hypothesis of Schizophrenia by Hoffman and McGlashan (1997). However, it soon became clear that such models were overly simplistic and brittle when it came to replication. While they focused on hallucinations and connectivity in the frontal lobes they ignored other symptoms and the evidence of reductions in volume of the temporal lobes in schizophrenia. No mention was made of the considerable evidence of dysfunction of the dopamine system and associated areas, such as the basal ganglia. This led to my second line of reasoning: dopamine dysfunction. Initially I helped create a novel model of dopamine neuron firing based on the Computational Substrate for Incentive Salience by McClure, Daw and Montague (2003), incorporating temporal difference (TD) reward prediction errors (Chapter 5). I adapted this model in Chapter 6 to address the ongoing debate as to whether or not dopamine encodes uncertainty in the delay period between presentation of a conditioned stimulus and receipt of a reward, as demonstrated by sustained activation seen in single dopamine neuron recordings (Fiorillo, Tobler & Schultz 2003).
An answer to this question could result in a better understanding of the nature of dopamine signaling, with implications for the psychopathology of cognitive disorders, like schizophrenia, for which dopamine is commonly regarded as having a primary role. Computational modelling enabled me to suggest that while sustained activation is common in single trials, there is the possibility that it increases with increasing probability, in which case dopamine may not be encoding uncertainty in this manner. Importantly, these predictions can be tested and verified by experimental data. My third modelling theme arose as a result of the limitations to using TD alone to account for a reinforcement learning account of action control in the brain. In Chapter 8 I introduce a dual weighted artificial neural network, originally designed by Hinton and Plaut (1987) to address the problem of catastrophic forgetting in multilayer artificial neural networks. I suggest an alternative use for a model with fast and slow weights to address the problem of arbitration between two systems of control. This novel approach is capable of combining the benefits of model free and model based learning in one simple model, without need for a homunculus and may have important implications in addressing how both goal directed and stimulus response learning may coexist. Modelling cortical-subcortical loops offers the potential of incorporating both the symptoms and cognitive deficits associated with schizophrenia by taking into account the interactions between midbrain/striatum and cortical areas.
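The TD reward-prediction-error mechanism referred to above can be illustrated with a textbook tapped-delay-line simulation (a generic sketch, not the McClure, Daw and Montague model itself): after learning, the prediction error migrates from the time of reward delivery to cue onset, the classic dopamine signature.

```python
import numpy as np

# Tapped-delay-line TD(0) simulation of a conditioning trial: a cue at
# t=0 predicts a reward at t=5. Parameters are illustrative and are not
# taken from the thesis's models.
T, trials, alpha, gamma = 6, 500, 0.1, 1.0
V = np.zeros(T + 1)                # learned value of each step in the trial
for _ in range(trials):
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0         # reward on the final step
        delta = r + gamma * V[t + 1] - V[t]    # TD reward prediction error
        V[t] += alpha * delta

# After learning, the error has moved to the (still unpredicted) cue
# onset, while the fully predicted reward elicits almost no error.
delta_cue = 0.0 + gamma * V[1] - 0.0           # error at an unexpected cue
delta_reward = 1.0 + gamma * V[T] - V[T - 1]   # error at reward delivery
assert V[0] > 0.9 and abs(delta_reward) < 0.1
```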
|
186 |
META-ANALYSIS OF GENE EXPRESSION STUDIES. Siangphoe, Umaporn. 01 January 2015 (has links)
Combining effect sizes from individual studies using random-effects models is commonly done in high-dimensional gene expression data. However, unknown study heterogeneity can arise from inconsistency of sample qualities and experimental conditions, and high heterogeneity of effect sizes can reduce the statistical power of the models. We proposed two new methods for random-effects estimation, along with measures of model variation and of the strength of study heterogeneity. We then developed a statistical technique to test for significance of random effects and identify heterogeneous genes. We also proposed another meta-analytic approach that incorporates informative weights in the random-effects meta-analysis models. We compared the proposed methods with the standard and existing meta-analytic techniques in the classical and Bayesian frameworks, and we demonstrate our results through a series of simulations and an application to gene expression studies of neurodegenerative diseases.
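For context, the standard random-effects combination the abstract takes as its starting point looks like the classical DerSimonian-Laird estimator, shown below as a generic baseline rather than any of the new methods proposed here; the per-study effect sizes are invented.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Standard random-effects meta-analysis for one gene:
    DerSimonian-Laird estimate of the between-study variance tau^2,
    then an inverse-variance weighted combined effect."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed) ** 2)     # Cochran's heterogeneity Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)              # truncated at zero
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    combined = np.sum(w_star * effects) / np.sum(w_star)
    return combined, tau2

effects = np.array([0.8, 1.2, 0.3, 1.5])      # one gene, four studies
variances = np.array([0.04, 0.05, 0.06, 0.05])
combined, tau2 = dersimonian_laird(effects, variances)
assert tau2 > 0          # heterogeneous studies give a positive tau^2
```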
|
187 |
Inférence basée sur le plan pour l'estimation de petits domaines / Design-based inference for small area estimation. Randrianasolo, Toky. 18 November 2013 (has links)
The strong demand for results at a detailed geographic level, particularly from national surveys, has highlighted the fragility of estimates for small areas. This thesis addresses the issue with specific methods based on the sampling design, all of which rest on building new weights for each statistical unit. The first method optimizes the re-weighting of a survey subsample included in an area. The second is based on the construction of weights that depend on the statistical units as well as on the areas: it splits the sampling weights of the overall estimator while satisfying two constraints: 1/ the sum of the estimates over every partition into areas equals the overall estimate; 2/ the system of weights for a given area satisfies calibration properties on auxiliary variables known at the level of the area. The split estimator thus obtained behaves much like the well-known blup (best linear unbiased predictor) estimator. The third method rewrites the blup estimator, although it is model-based, as a homogeneous linear estimator from a design-based approach, yielding new modified blup estimators. Their precision, estimated by simulation with an application to real data, is quite close to that of the standard blup estimator. The methods developed in this thesis are then applied to the estimation of local mobility indicators from the 2007-2008 French National Travel Survey. When the size of an area is small in the sample, the estimates obtained with the first method are not precise enough, whereas the precision remains satisfactory for the two other methods.
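The calibration property in constraint 2/ can be illustrated with a generic linear (GREG-type) calibration of design weights, not the thesis's own estimators; the sample, the design weights and the known auxiliary total are all made up.

```python
import numpy as np

def calibrate(d, X, totals):
    """Linear (GREG-type) calibration: adjust design weights d so that
    the weighted totals of the auxiliary variables X match the known
    population totals, moving d as little as possible (chi-square sense)."""
    lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - X.T @ d)
    return d * (1.0 + X @ lam)

# Five sampled units, design weight 10 each; one auxiliary variable
# whose known population total is 180 (illustrative numbers).
d = np.full(5, 10.0)
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
w = calibrate(d, X, np.array([180.0]))

# The calibrated weights reproduce the known auxiliary total exactly.
assert abs((w * X[:, 0]).sum() - 180.0) < 1e-8
```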
|
188 |
Hodnocení vybraných států EU / Evaluation of selected EU countries. Machková, Radka. January 2010 (has links)
This diploma thesis evaluates fifteen selected EU countries using methods of multicriteria decision-making. The topic is elaborated from the perspective of a student of the University of Economics, Prague, who is choosing a suitable country in which to gain experience abroad. Students' preferences were recorded in a questionnaire and aggregated using the direct weight-estimation method. The ORESTE, WSA, TOPSIS, PROMETHEE II and MAPPAC methods are described in the thesis and applied to the data using Sanna, a Microsoft Excel add-in. Since the questionnaire also asks for the students' subjective ranking of countries, the two rankings (subjective and calculated) can be compared to determine whether they agree and whether the students' decision-making is consistent.
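Of the methods listed, TOPSIS is the easiest to sketch: alternatives are ranked by their relative closeness to an ideal solution. The country scores and weights below are hypothetical, not the questionnaire data, and all criteria are assumed already oriented so that larger is better.

```python
import numpy as np

def topsis(scores, weights):
    """Minimal TOPSIS: rows of `scores` are alternatives, columns are
    benefit criteria. Returns relative closeness to the ideal in [0, 1];
    higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalisation
    v = norm * weights                               # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Three hypothetical countries scored on two criteria, with weights
# standing in for the questionnaire-derived weights.
scores = np.array([[7.0, 8.0],
                   [5.0, 9.0],
                   [9.0, 4.0]])
closeness = topsis(scores, np.array([0.6, 0.4]))
ranking = np.argsort(-closeness)   # best alternative first
```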
|
189 |
Utilização de aprendizado de máquina para classificação de bactérias através de proteínas ribossomais / Machine learning for the classification of bacteria through ribosomal proteins. Tomachewski, Douglas. 04 September 2017 (has links)
Identification of microorganisms in the health and agriculture fields is essential to understand the composition and development of the environment. New techniques seek to identify these microorganisms with greater accuracy and speed, and at lower cost. One technique increasingly studied and used today is the identification of microorganisms through mass spectra generated by mass spectrometry. Mass spectra can provide a recognition profile for a microorganism, using the peaks corresponding to the most abundant molecular masses recorded in the spectrum. By analyzing the peaks it is possible to designate a pattern, like a fingerprint, to recognize a microorganism; this technique is known as Peptide Mass Fingerprint (PMF). Another way to identify a mass spectrum, and the approach adopted in this work, is through the peaks that are expected to appear in the spectrum. To predict the expected peaks, the estimated molecular weights of ribosomal proteins were calculated. These so-called housekeeping proteins are responsible for basic cellular functioning. Besides being abundant in the prokaryotic content, they are highly conserved, not altering their physiology across different environments or cell stages. The estimated weights formed a presumed database containing all the information obtained from the NCBI repository. This presumed database was generalized to the species level of the taxonomy and then submitted to a machine learning algorithm. With this, it was possible to obtain a classificatory model of microorganisms based on ribosomal protein values. Using the model generated by machine learning, a software tool called Ribopeaks was developed, able to classify microorganisms at the species level with an accuracy of 94.83%, considering related species. At the genus level, the model reached 98.69% accuracy. Values of biological ribosomal molecular masses taken from the literature were also tested against the model, yielding overall accuracies of 84.48% at the species level and 90.51% at the genus level.
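The two building blocks described above, predicting a protein's expected mass from its sequence and matching observed spectral peaks against a presumed database, can be sketched as follows. The residue masses are standard average values (rounded); the sequence, the database entries and the tolerance are invented, and the thesis's actual classifier is a machine-learned model rather than this nearest-neighbour lookup.

```python
# Average amino-acid residue masses (Da), rounded; one water is added
# per chain. The toy database and tolerance below are hypothetical.
RESIDUE_MASS = {
    'G': 57.05, 'A': 71.08, 'S': 87.08, 'P': 97.12, 'V': 99.13,
    'T': 101.10, 'C': 103.14, 'L': 113.16, 'I': 113.16, 'N': 114.10,
    'D': 115.09, 'Q': 128.13, 'K': 128.17, 'E': 129.12, 'M': 131.19,
    'H': 137.14, 'F': 147.18, 'R': 156.19, 'Y': 163.18, 'W': 186.21,
}
WATER = 18.02

def protein_mass(seq):
    """Estimated average molecular mass (Da) of a protein sequence."""
    return sum(RESIDUE_MASS[a] for a in seq) + WATER

def classify_peak(peak_mass, database, tolerance=5.0):
    """Match a spectral peak against presumed ribosomal-protein masses;
    return the nearest species within tolerance, else None."""
    species, mass = min(database.items(), key=lambda kv: abs(kv[1] - peak_mass))
    return species if abs(mass - peak_mass) <= tolerance else None

db = {'species_a': 5301.2, 'species_b': 5420.7}   # hypothetical entries
assert classify_peak(5302.0, db) == 'species_a'
assert classify_peak(6000.0, db) is None
```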
|
190 |
Objetivos de seleção e valores econômicos para bovinos Nelore em sistema de ciclo completo / Breeding Objectives and Economic Values for Nellore Cattle in a Full-Cycle System. Moreira, Heverton Luís. 15 September 2015 (links)
The objective of this study was to define selection objectives and estimate economic values for traits of economic importance in a beef cattle system raised in a full-cycle scheme. We also aimed to estimate genetic parameters for productive, reproductive and carcass-quality traits evaluated in the Nellore Brazil breeding program. The bio-economic model was developed from information on the production system and on biological parameters in order to stabilize the herd and calculate the number of animals in each category, obtain information on the productive and economic performance (revenue and expenses) of the livestock system, and finally obtain the economic values of the traits in the selection objective for the full-cycle system; interlinked Microsoft Excel spreadsheets were used for this purpose. The economic values were estimated by simulating a 1% increase in the value of each trait in the selection objective while keeping the others constant. Genetic parameters were estimated by Restricted Maximum Likelihood (REML) under an animal model in the WOMBAT software, and annual genetic gain for reproduction, carcass and growth traits was estimated by linear regression of breeding value on year of birth. The bio-economic model was effective in estimating the revenue and expenditure sources of the production system, and the estimated economic values, in order of importance for the complete cycle, were R$ 3.69 for slaughter weight of males (SWM), R$ 3.63 for weaning weight of males (WWM), R$ 3.58 for weaning rate (WR), R$ 3.40 for slaughter weight of females (SWF), R$ 2.30 for weaning weight of females (WWF) and R$ 0.13 for adult cow weight (ACW). SWM therefore had the greatest impact on the production system; all traits, however, yielded a positive economic return, with the exception of ACW, which was practically null. Heritability estimates for production, reproduction and carcass-quality traits were favorable to genetic progress through direct selection. The estimated correlations show that males with a larger scrotal perimeter tend to be heavier and to have higher carcass yield and finishing. The empirical selection process used by the Nellore Brazil program is efficient, according to the positive genetic progress seen in the trend estimates. All traits evaluated in the full-cycle system thus had positive economic importance, indicating that selection would increase profitability; the genetically evaluated traits could be included as selection criteria, contributing to maximizing the expected response for the traits in the selection objective.
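The 1% perturbation scheme for economic values can be sketched with a toy profit function standing in for the spreadsheet bio-economic model; all prices, costs and trait values below are invented and do not reflect the thesis's equations.

```python
def profit(slaughter_weight_male, weaning_rate, price_per_kg=3.5,
           cost_per_head=500.0, herd_size=100):
    """Toy profit function of a full-cycle herd: an illustrative stand-in
    for the spreadsheet bio-economic model, not the thesis's equations."""
    calves = herd_size * weaning_rate
    revenue = calves * slaughter_weight_male * price_per_kg
    return revenue - herd_size * cost_per_head

def economic_value(trait, base):
    """Economic value: change in profit per unit change of the trait,
    simulated by a 1% increase while holding the others constant."""
    bumped = dict(base, **{trait: base[trait] * 1.01})
    return (profit(**bumped) - profit(**base)) / (base[trait] * 0.01)

base = {'slaughter_weight_male': 480.0, 'weaning_rate': 0.75}
ev_weight = economic_value('slaughter_weight_male', base)
ev_rate = economic_value('weaning_rate', base)
assert ev_weight > 0 and ev_rate > 0   # both traits raise profit here
```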
|