81

Statistical modelling of data from performance of broiler chickens / Modelagem estatística de dados de desempenho de frangos de corte

Hilario, Reginaldo Francisco 30 August 2018 (has links)
Experiments with broiler chickens are common today: the large market demand for chicken meat has created a need to improve the factors involved in broiler production. Many studies have been carried out to improve management techniques, and these studies rely on statistical methods and techniques of analysis. In studies comparing treatments, it is not uncommon to observe a lack of significant effect even when there is evidence pointing to the significance of the effects. To avoid this, careful planning before conducting the experiment is essential. In this context, a study of the power of the F test was carried out, emphasizing the relationships among test power, sample size, mean difference to be detected and variance for chicken weight data. In the analysis of data from experiments with mixed-sex broilers in which the experimental unit is the pen (box), the models usually employed do not take into account the variability between the sexes of the birds, which affects the precision of inference about the population of interest. We propose a model for the total weight per pen that takes the sex composition of the broilers into account.
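The trade-off described here among F-test power, sample size, detectable mean difference and variance can be sketched numerically. A minimal Python example using the noncentral F distribution for a one-way layout; the treatment means, standard deviation and pen counts are illustrative, not taken from the thesis:

```python
import numpy as np
from scipy import stats

def f_test_power(group_means, sigma, n_per_group, alpha=0.05):
    """Power of the one-way ANOVA F test via the noncentral F distribution."""
    k = len(group_means)
    grand_mean = np.mean(group_means)
    # Noncentrality parameter: n * sum((mu_i - mu_bar)^2) / sigma^2
    ncp = n_per_group * np.sum((np.array(group_means) - grand_mean) ** 2) / sigma ** 2
    df1, df2 = k - 1, k * (n_per_group - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)

# Illustrative values: 4 treatments, detect a 50 g shift in mean pen weight, SD = 120 g
for n in (4, 6, 8, 12):
    print(n, round(f_test_power([2500, 2500, 2500, 2550], sigma=120, n_per_group=n), 3))
```

Varying `n_per_group`, the spread of the means or `sigma` reproduces the qualitative relationships the study examines.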
82

Estudo da reprodutibilidade do exame de microscopia especular de córnea em amostras com diferentes números de células / Reproducibility study of the corneal specular microscope in samples with different number of cells

Holzchuh, Ricardo 19 August 2011 (has links)
INTRODUCTION: The corneal endothelium plays an essential role in the physiology of the cornea. Morphological data generated by the specular microscope, such as endothelial cell density (CD), average cell area (ACA), coefficient of variation (CV) and percentage of hexagonal cells (HEX), are important for assessing corneal status. For a standardized and reproducible analysis of these morphological data, a statistical sampling software package, Cells Analyzer PAT. REQ. (CA), was used. PURPOSE: To determine normal reference values for CD, ACA, CV and HEX; to determine the percentage of marked and excluded endothelial cells when the examiner counts 40, 100 or 150 cells in a single endothelial image, and when the number of cells is determined by the statistical software; to describe the confidence interval (CI) of these morphological variables; and to determine the sampling error of each study group. METHODS: Cross-sectional study. Specular microscopy (Konan non-contact NONCON ROBO® SP-8000) was performed on 122 eyes of 61 patients with cataract (63.97 ± 8.15 years old), and the images were analyzed statistically with CA. Each image underwent standard cell counting: 40 cells were counted in Group 40, 100 cells in Group 100 and 150 cells in Group 150. In Group CA, the number of counted cells, spread over different images, was determined by the statistical software so that the calculated relative error stayed below the planned value of 0.05. The effect of the number of counted cells on the CI of the endothelial variables was studied. RESULTS: The average normal reference value for CD was 2395.37 ± 294.34 cells/mm², ACA 423.64 ± 51.09 µm², CV 0.40 ± 0.04 and HEX 54.77 ± 4.19%. The percentage of endothelial cells excluded from the analysis was 51.20% in Group 40, 35.07% in Group 100 and 29.83% in Group 150. The average number of cells initially calculated by CA was 247.48 ± 51.61, and the average number of cells actually included at the end of the sampling process was 425.25 ± 102.24. The sampling (relative) error was 0.157 ± 0.031 in Group 40, 0.093 ± 0.024 in Group 100, 0.075 ± 0.010 in Group 150 and 0.037 ± 0.005 in Group CA. Increasing the number of marked cells reduced the width of the CI (right and left eyes, respectively) by 75.79% and 77.39% for CD, 75.95% and 77.37% for ACA, 72.72% and 76.92% for CV, and 75.93% and 76.71% for HEX. CONCLUSION: The normal reference values were CD 2395.37 ± 294.34 cells/mm², ACA 423.64 ± 51.09 µm², CV 0.40 ± 0.04 and HEX 54.77 ± 4.19%. The percentage of excluded cells was 51.20% in Group 40, 35.07% in Group 100 and 29.83% in Group 150. The CA software considered examinations reliable when 425.25 ± 102.24 cells were marked across two to five specular images (calculated relative error of 0.037 ± 0.005). Increasing the number of marked cells reduced the width of the CI for all endothelial variables assessed by specular microscopy.
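The stopping rule applied by the CA software, keep marking cells until the calculated relative error of the estimate falls below the planned 0.05, can be illustrated with a simplified sketch. The cell areas are simulated and the normal-approximation error formula is an assumption; this is not the actual Cells Analyzer algorithm:

```python
import numpy as np

def relative_error(areas):
    """Relative error of the mean cell area: (1.96 * SE) / mean, normal approximation."""
    areas = np.asarray(areas, dtype=float)
    se = areas.std(ddof=1) / np.sqrt(len(areas))
    return 1.96 * se / areas.mean()

rng = np.random.default_rng(1)
cells = list(rng.normal(424.0, 51.0, size=40))        # start with 40 marked cells
while relative_error(cells) > 0.05:                   # planned relative error
    cells.extend(rng.normal(424.0, 51.0, size=10))    # mark 10 more cells, possibly in a new image

print(len(cells), round(relative_error(cells), 3))
```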
84

Estimates of Statistical Power and Accuracy for Latent Trajectory Class Enumeration in the Growth Mixture Model

Brown, Eric C 09 June 2003 (has links)
This study employed Monte Carlo simulation to investigate the ability of the growth mixture model (GMM) to correctly identify models based on a "true" two-class pseudo-population from alternative models consisting of "false" one- and three-latent trajectory classes. This ability was assessed in terms of statistical power, defined as the proportion of replications that correctly identified the two-class model as having optimal fit to the data compared to the one-class model, and accuracy, which was defined as the proportion of replications that correctly identified the two-class model over both one- and three-class models. Estimates of power and accuracy were adjusted by empirically derived critical values to reflect nominal Type I error rates of α = .05. Six experimental conditions were examined: (a) standardized between-class differences in growth parameters, (b) percentage of total variance explained by growth parameters, (c) correlation between intercepts and slopes, (d) sample size, (e) number of repeated measures, and (f) planned missingness. Estimates of statistical power and accuracy were related to a measure of the degree of separation and distinction between latent trajectory classes (λ2), which approximated a chi-square-based noncentrality parameter. Model selection relied on four criteria: (a) the Bayesian information criterion (BIC), (b) the sample-size adjusted BIC (ABIC), (c) the Akaike information criterion (AIC), and (d) the likelihood ratio test (LRT). Results showed that power and accuracy of the GMM to correctly enumerate latent trajectory classes were positively related to greater between-class separation, greater proportion of total variance explained by growth parameters, larger sample sizes, greater numbers of repeated measures, and larger negative correlations between intercepts and slopes; and inversely related to greater proportions of missing data. Results of the Monte Carlo simulations were field tested using specific design and population characteristics from an evaluation of a longitudinal demonstration project. This test compared estimates of power and accuracy generated via Monte Carlo simulation to estimates predicted from a regression of derived λ2 values. Results of this motivating example indicated that knowledge of λ2 can be useful in the two-class case for predicting power and accuracy without extensive Monte Carlo simulations.
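The class-enumeration step, fitting one-, two- and three-class models and letting the information criteria choose, can be illustrated with a simplified finite mixture of intercept/slope pairs in scikit-learn. This is only a stand-in for a full growth mixture model, and the data are simulated from a hypothetical two-class pseudo-population:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulate intercept/slope pairs from a "true" two-class pseudo-population
class1 = rng.multivariate_normal([10.0, 1.0], np.diag([1.0, 0.1]), size=300)
class2 = rng.multivariate_normal([14.0, -0.5], np.diag([1.0, 0.1]), size=300)
growth_factors = np.vstack([class1, class2])

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(growth_factors)
    print(f"classes={k}  BIC={gm.bic(growth_factors):.1f}  AIC={gm.aic(growth_factors):.1f}")
# The class count with the lowest BIC/AIC is the enumerated solution
# (two classes here, provided the between-class separation is adequate).
```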
85

Efficient strategies for collecting posture data using observation and direct measurement / Effektiva strategier för insamling av data om arbetsställningar geom observation och direkta mätning

Liv, Per January 2012 (has links)
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of assessing exposure is the selected measurement strategy; this includes issues related to the number of measurements required to give sufficient information, and to the allocation of measurement efforts, both over time and between subjects, in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature considering chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper arm postures during work, data which have been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper arm elevation measurements between and within working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of the estimated mean exposure level. The results showed that it was more efficient to use a higher number of shorter measurement periods spread across a working day than to use a smaller number of longer uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day. Paper II evaluated sampling strategies for the purpose of determining posture variance components with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power in an observation-based study, and that observations can be distributed between several observers without loss in power, provided that all observers contribute data to both of the compared groups, and that the statistical analysis model acknowledges observer variability.
Paper IV discussed calibration of an inferior exposure assessment method against a superior "gold standard" method, with a particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained using the inferior instrument through calibration, as well as for determining the additional uncertainty of the eventual exposure value introduced through calibration. In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well-informed decisions. It is common in the literature that postural exposure is assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading out the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both for random methodological variance (paper III) and systematic observation errors (bias) (paper IV).
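The bootstrap comparison of measurement strategies in papers I and II can be sketched roughly as follows. This is a toy example with one simulated working day of upper-arm elevation values; the thesis resamples real inclinometer data with a more elaborate design:

```python
import numpy as np

rng = np.random.default_rng(42)
minutes = np.arange(480)
# Simulated upper-arm elevation (degrees): a slowly varying task pattern plus noise,
# so values close in time are more alike than values far apart.
day = 30 + 15 * np.sin(minutes / 480 * 3 * np.pi) + rng.normal(0, 10, size=480)

def bootstrap_se_of_mean(day, n_periods, period_len, n_boot=2000):
    """SE of the estimated daily mean when the sample consists of n_periods
    randomly placed measurement periods of period_len minutes each."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, len(day) - period_len + 1, size=n_periods)
        sample = np.concatenate([day[s:s + period_len] for s in starts])
        means[b] = sample.mean()
    return means.std(ddof=1)

# Same total sample time (120 min), allocated differently across the day
for n_periods, period_len in [(1, 120), (4, 30), (12, 10)]:
    se = bootstrap_se_of_mean(day, n_periods, period_len)
    print(f"{n_periods:2d} x {period_len:3d} min  SE = {se:.2f}")
```

With autocorrelated data like this, many short periods spread across the day yield a smaller standard error than one long uninterrupted period of the same total duration, mirroring the finding of paper I.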
86

Multiscale fractality with application and statistical modeling and estimation for computer experiment of nano-particle fabrication

Woo, Hin Kyeol 24 August 2012 (has links)
The first chapter proposes multifractal analysis to measure the inhomogeneity of regularity of a 1H-NMR spectrum using wavelet-based multifractal tools. The geometric summaries of the multifractal spectrum are informative and, as such, are employed to discriminate 1H-NMR spectra associated with different treatments. The methodology is applied to evaluate the effect of sulfur amino acids. The second part of this thesis provides the essential engineering background of a nano-particle fabrication process. The third chapter introduces a constrained random-effects model. Since certain combinations of process variables result in unproductive process outcomes, a logistic model is used to characterize this behavior; for the cases with productive outcomes, a normal regression model serves as the second part of the model. Additionally, random effects are included in both the logistic and the normal regression components to describe the potential spatial correlation among the data. The chapter develops a way to approximate the likelihood function and to find estimates that maximize the approximated likelihood. The last chapter presents a method for deciding the sample size in a multi-layer system, a series of nested layers that become progressively smaller; the focus is on deciding the sample size in each layer. The sample-size decision has several objectives, the most important being that the sample size should be large enough to point the search in the right direction for the next layer. In particular, the bottom layer, which is the smallest neighborhood around the optimum, should meet the tolerance requirement. Testing the hypothesis of whether the next layer includes the optimum yields the required sample size.
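The two-part structure of the constrained model in the third chapter, a logistic part for whether a run is productive and a normal regression part for productive outcomes, can be sketched as follows, ignoring the random effects and spatial correlation. The data, predictors and coefficients are simulated; this is not the likelihood approximation developed in the thesis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
X = sm.add_constant(rng.uniform(-1, 1, size=(n, 2)))          # intercept + two process variables
# Some combinations of process variables give unproductive runs (logistic part)
productive = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 2.0 * X[:, 1]))))
# Productive runs have a continuous outcome (normal regression part)
y = np.where(productive == 1,
             X @ np.array([1.0, 0.8, -0.5]) + rng.normal(0, 0.3, n),
             np.nan)

logit_part = sm.Logit(productive, X).fit(disp=0)                    # P(productive)
normal_part = sm.OLS(y[productive == 1], X[productive == 1]).fit()  # outcome | productive

print(logit_part.params.round(2))
print(normal_part.params.round(2))
```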
87

Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah

Brungard, Colby W. 01 May 2009 (has links)
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEM) and spectral satellite data were used to represent factors controlling soil development and distribution. We investigated optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximated the full range of variability of environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. Random forests was used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] error) were within acceptable levels. Quantitative covariate importance was useful in determining what factors were important for soil distribution. Random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
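A minimal sketch of the random forest and out-of-bag (OOB) error step described above, using scikit-learn. The covariates and soil classes are simulated here; the study used 300 cLHS field sites with DEM- and spectral-derived covariates, and `class_weight="balanced"` is only a rough stand-in for the balanced and weighted random forests it investigated:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# Simulated environmental covariates (e.g., slope, wetness index, band ratios) and soil classes
X = rng.normal(size=(300, 6))
y = rng.integers(0, 4, size=300)

rf = RandomForestClassifier(
    n_estimators=500,
    oob_score=True,            # out-of-bag error, as reported in the study
    class_weight="balanced",   # rough stand-in for balanced/weighted random forests
    random_state=0,
).fit(X, y)

print("OOB error:", round(1 - rf.oob_score_, 3))
print("covariate importance:", rf.feature_importances_.round(3))
```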
88

Contributions to Imputation Methods Based on Ranks and to Treatment Selection Methods in Personalized Medicine

Matsouaka, Roland Albert January 2012 (has links)
The chapters of this thesis focus on two different issues that arise in clinical trials and propose novel methods to address them. The first issue arises in the analysis of data with non-ignorable missing observations. The second concerns the development of methods that give physicians better tools to understand and treat diseases efficiently by using each patient's characteristics and personal biomedical profile. Inherent to most clinical trials is the issue of missing data, especially data that arise when patients drop out of the study without further measurements. Proper handling of missing data is crucial in all statistical analyses because disregarding missing observations can lead to biased results. In the first two chapters of this thesis, we deal with the "worst-rank score" missing-data imputation technique in pretest-posttest clinical trials. Subjects are randomly assigned to two treatments, and the response is recorded at baseline prior to treatment (pretest response) and after a pre-specified follow-up period (posttest response). The treatment effect is then assessed on the change in response from baseline to the end of follow-up. Subjects with a missing response at the end of follow-up are assigned values that are worse than any observed response (worst-rank score). Data analysis is then conducted using the Wilcoxon-Mann-Whitney test. In the first chapter, we derive explicit closed-form formulas for power and sample size calculations using both tied and untied worst-rank score imputation, where the worst-rank scores are either a fixed value (tied score) or depend on the time of withdrawal (untied score). We use simulations to demonstrate the validity of these formulas. In addition, we examine and compare four different simplification approaches to estimating sample sizes. These approaches depend on whether data from the literature or from a pilot study are available. In the second chapter, we introduce the weighted Wilcoxon-Mann-Whitney test on the untied worst-rank score (composite) outcome. First, we demonstrate that the weighted test is exactly the ordinary Wilcoxon-Mann-Whitney test when the weights are equal. Then, we derive optimal weights that maximize the power of the corresponding weighted Wilcoxon-Mann-Whitney test. We show, using simulations, that the weighted test is more powerful than the ordinary test. Furthermore, we propose two different step-wise procedures for analyzing data with the weighted test and assess their performance through simulation studies. Finally, we illustrate the new approach using data from a recent randomized clinical trial of normobaric oxygen therapy in patients with acute ischemic stroke. The third and last chapter of this thesis concerns the development of robust methods for identifying treatment groups in personalized medicine. Physicians often have to use a trial-and-error approach to find the most effective medication for their patients. Personalized medicine methods aim at tailoring strategies for disease prevention, detection or treatment to each individual subject's personal characteristics and medical profile. This would result in (1) better diagnosis and earlier interventions, (2) maximum therapeutic benefit and reduced adverse events, (3) more effective therapy, and (4) more efficient drug development. Novel methods have been proposed to identify subgroups of patients who would benefit from a given treatment.
In the last chapter of this thesis, we develop a robust method for treatment assignment for future patients based on the expected total outcome. In addition, we provide a method to assess the incremental value of new covariate(s) in improving treatment assignment. We evaluate the accuracy of our methods through simulation studies and illustrate them with two examples using data from two HIV/AIDS clinical trials.
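The worst-rank score device from the first two chapters can be illustrated in a few lines: subjects who drop out receive scores worse than any observed change, ordered by time of withdrawal for the untied version, and the two arms are then compared with the Wilcoxon-Mann-Whitney test. A simplified sketch with simulated data, not the authors' closed-form power and sample size formulas:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
n = 60
change_t = rng.normal(0.5, 1.0, n)         # change from baseline, treated arm
change_c = rng.normal(0.0, 1.0, n)         # change from baseline, control arm
change_t[rng.random(n) < 0.10] = np.nan    # dropouts have no posttest response
change_c[rng.random(n) < 0.20] = np.nan
time_t = rng.uniform(0, 90, n)             # time of withdrawal (days), for untied scores
time_c = rng.uniform(0, 90, n)

# Pool both arms so dropouts get scores worse than ANY observed change,
# ordered by withdrawal time (earlier dropout = worse score).
change = np.concatenate([change_t, change_c])
time = np.concatenate([time_t, time_c])
scores = change.copy()
missing = np.isnan(change)
scores[missing] = np.nanmin(change) - 1.0 - (time[missing].max() - time[missing])

u, p = mannwhitneyu(scores[:n], scores[n:], alternative="two-sided")
print(round(u, 1), round(p, 4))
```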
89

Power and Bias in Hierarchical Linear Growth Models: More Measurements for Fewer People

Haardoerfer, Regine 12 February 2010 (has links)
Hierarchical Linear Modeling (HLM) sample size recommendations are mostly made with traditional group-design research in mind, as HLM has been used almost exclusively in group-design studies. Single-case research can benefit from hierarchical linear growth modeling, but sample size recommendations for growth modeling with HLM are scarce and generally do not consider the sample size combinations typical of single-case research. The purpose of this Monte Carlo simulation study was to expand sample size research in hierarchical linear growth modeling to suit single-case designs by testing larger level-1 sample sizes (N1), ranging from 10 to 80, and smaller level-2 sample sizes (N2), from 5 to 35, in the presence of autocorrelation, in order to investigate bias and power. Estimates of the fixed effects were good for all tested sample-size combinations, irrespective of the strength of the predictor-outcome correlations or the level of autocorrelation. Such low sample sizes, however, especially in the presence of autocorrelation, produced neither good estimates of the variances nor adequate power rates. Power rates were at least adequate for conditions in which N2 = 20 and N1 = 80, or N2 = 25 and N1 = 50, when the squared autocorrelation was .25. Conditions with lower autocorrelation provided adequate or high power with N2 = 15 and N1 = 50. In addition, conditions with high autocorrelation produced less than perfect power rates to detect the level-1 variance.
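The simulation logic behind such a study, generating N2 subjects with N1 autocorrelated repeated measures, fitting a linear growth model and tracking how often the growth effect is detected, can be sketched as follows. This is a bare-bones random-intercept illustration in statsmodels, not the authors' full HLM design or parameter grid:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

def simulate_subject(n1, slope=0.3, rho=0.5):
    """One subject: linear growth plus AR(1) level-1 errors with autocorrelation rho."""
    e = np.zeros(n1)
    e[0] = rng.normal()
    for t in range(1, n1):
        e[t] = rho * e[t - 1] + rng.normal() * np.sqrt(1 - rho ** 2)
    time = np.arange(n1)
    return rng.normal(0, 1) + slope * time + e   # random intercept + growth + error

def one_replication(n2=15, n1=50):
    rows = [{"subject": i, "time": t, "y": y}
            for i in range(n2)
            for t, y in enumerate(simulate_subject(n1))]
    data = pd.DataFrame(rows)
    fit = smf.mixedlm("y ~ time", data, groups=data["subject"]).fit()
    return fit.pvalues["time"] < 0.05

power = np.mean([one_replication() for _ in range(100)])  # more replications in practice
print("estimated power:", power)
```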
90

Amostragem do ácaro-do-bronzeado Dichopelmus notus Keifer (Acari, Eriophydae) na cultura da erva-mate em Chapecó, Santa Catarina / Sampling of the bronzing mite Dichopelmus notus Keifer (Acari, Eriophydae) in erva-mate crops in Chapecó, Santa Catarina

Vieira Neto, João 10 April 2006 (has links)
Erva-mate (mate) is a forest species that occurs naturally in the temperate and subtropical regions of South America. In Brazil it occurs mainly in the states of Rio Grande do Sul, Paraná and Santa Catarina. Its leaves and branches are used mainly as raw material in the preparation of teas. For many years the exploitation of erva-mate was restricted to native stands, but recently it has also been cultivated as a monoculture, a system that favors the development of pests. The bronzing mite, Dichopelmus notus (Keifer, 1959) (Acari, Eriophydae), a pest specific to this crop that was formerly found at low population levels, now causes premature leaf fall and death of the shoot tips because of high infestations, with heavy losses to growers. This mite has been considered one of the main pests of the erva-mate crop in both Argentina and Brazil. Given the importance of erva-mate and the increasing infestation by this mite, it is necessary to seek alternatives and technologies that maximize the profitability of the crop. This work aimed to select a sampling methodology for monitoring infestation levels of the bronzing mite in erva-mate plantations and to clarify aspects of its bioecology. The study was carried out in a ten-year-old plantation, spaced 2.5 x 4.0 m with plants 1.5 m tall, located in the municipality of Chapecó, Santa Catarina state, Brazil. In three areas of approximately 2,500 m², about 100 m apart, 30 plants were selected at random. Every two weeks, from 27/01/2004 to 10/01/2005, infestation by D. notus was evaluated on 18 mature leaves on each of ten plants per area: six in the upper third, six in the middle third and six in the lower third of the canopy, with three on the outer and three on the inner part of each third. Evaluations were carried out directly in the field, using a 10x magnifying lens with a fixed 1 cm² field. The results showed that: mite infestation is aggregated; the mean number of mites per cm² of leaf can be estimated, with a 15% precision level, from three leaves on each of 30 plants in one-hectare plots from February to April; the mites concentrate on the outer part of the canopy in the upper and middle thirds; the population of D. notus is positively correlated with minimum and maximum temperatures and negatively correlated with rainfall, relative humidity and wind speed; and the Normal Approximation Model with Continuity Correction should preferentially be used when constructing binomial sequential sampling plans for D. notus.
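The normal approximation with continuity correction that the study recommends for constructing binomial sequential sampling plans rests on approximating binomial tail probabilities. A generic illustration follows; the sample size, infestation proportion and threshold are made up, not the plan parameters from the thesis:

```python
from math import sqrt
from scipy.stats import binom, norm

def binom_cdf_cc(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p), normal approximation with continuity correction."""
    mu, sd = n * p, sqrt(n * p * (1 - p))
    return norm.cdf((x + 0.5 - mu) / sd)

# e.g., probability that at most 8 of 30 sampled leaves are infested when the true
# proportion of infested leaves is 0.35
n, p, x = 30, 0.35, 8
print("exact binomial :", round(binom.cdf(x, n, p), 4))
print("normal + c.c.  :", round(binom_cdf_cc(x, n, p), 4))
```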
