About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
241

Reconciliação ilusória: compensação de erros por amostragem manual. / Illusory reconciliation: error compensation in manual sampling.

El Hajj, Thammiris Mohamad 03 June 2013 (has links)
No contexto da indústria mineral, reconciliação pode ser definida como a prática de comparar a massa e o teor médio de minério previstos pelos modelos geológicos com a massa e o teor gerados na usina de beneficiamento. Esta prática tem se mostrado cada vez mais importante, visto que, quando corretamente executada, aumenta a confiabilidade no planejamento de curto prazo e otimiza as operações de lavra e beneficiamento do minério. No entanto, a utilidade da reconciliação depende da qualidade e confiabilidade dos dados de entrada, gerados por diferentes métodos de amostragem. Uma boa reconciliação pode ser ilusória. Em muitos casos, erros cometidos em determinado ponto do processo são compensados por erros cometidos em outros pontos, resultando em reconciliações excelentes. Entretanto, esse fato mascara os erros do sistema que, mais cedo ou mais tarde, podem se revelar. Frequentemente, os erros de amostragem podem levar a uma análise errônea do sistema de reconciliação, gerando consequências graves à operação, principalmente quando a lavra alcança regiões mais pobres ou mais heterogêneas do depósito. Como uma boa estimativa só é possível com práticas corretas de amostragem, a confiabilidade dos resultados de reconciliação depende da representatividade das amostras que os geraram. Este trabalho analisa as práticas de amostragem manual em uma mina de cobre e ouro em Goiás e propõe um método mais confiável para fins de reconciliação. Os resultados mostram que a reconciliação aparentemente excelente entre mina e usina é ilusória, consequência da compensação de diversos erros devidos às práticas de coleta de amostras para o planejamento de curto prazo. / In the mining industry context, reconciliation can be defined as the practice of comparing the tonnage and average grade of ore predicted by the geological models with the tonnage and grade generated by the processing or metallurgical plant. This practice has become increasingly important since, when correctly executed, it improves the reliability of short-term planning and helps optimize the mining and processing operations. However, the usefulness of reconciliation relies on the quality and reliability of the input data, which are generated by different sampling methods. A successful reconciliation can be illusory. In many cases, errors generated at one point of the process are offset by errors generated at other points, resulting in excellent reconciliations. However, this can hide compensating biases in the system that may surface later. Very often, sampling errors are masked and may lead to an erroneous analysis of the reconciliation system, with serious consequences for the operation, especially when mining reaches poorer or more heterogeneous areas of the deposit. Since good estimation is only possible with correct sampling practices, the reliability of the reconciliation results depends on the representativeness of the samples that generated them. This work analyzes the manual sampling practices carried out at a copper and gold mine in Goiás and proposes a more reliable sampling method for reconciliation purposes. The results show that the apparently excellent reconciliation between the mine and the plant is in fact illusory, a consequence of the compensation of many errors introduced by the sampling practices used for short-term planning.
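The compensation effect described in this abstract can be illustrated with a small numeric sketch. All figures below are hypothetical and are not taken from the thesis; they only show how two sizable biases in the same direction cancel in the mine-to-plant ratio:

```python
# Hypothetical sketch (not from the thesis): two sizable sampling biases
# can still yield a reconciliation factor deceptively close to 1.

def reconciliation_factor(predicted_metal: float, produced_metal: float) -> float:
    """Metal reported by the plant divided by metal predicted by the model."""
    return produced_metal / predicted_metal

tonnage, grade = 100_000.0, 1.20      # t of ore, g/t (hypothetical block)
true_metal = tonnage * grade          # grams of contained metal, unknown in practice

model_metal = true_metal * 1.10       # grade-control model biased +10%
plant_metal = true_metal * 1.08       # plant sampling biased +8%

f = reconciliation_factor(model_metal, plant_metal)
print(f"reconciliation factor: {f:.3f}")                       # ~0.98: looks excellent
print(f"model error vs truth: {model_metal / true_metal - 1:+.0%}")
print(f"plant error vs truth: {plant_metal / true_metal - 1:+.0%}")
```

Both estimates are roughly 10% wrong against the (unobservable) true metal content, yet the factor reported to management is near 1 — the "illusory reconciliation" of the title.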
242

Amostragem intencional / Intentional Sampling

Nagae, Catia Yumi 26 September 2007 (has links)
Neste trabalho apresentamos o método de amostragem intencional via otimização. Tal método baseia-se na fundamentação de que devemos controlar a seleção amostral sempre que houver conhecimento suficiente para garantir boas inferências de quantidades conhecidas e de alguma forma correlacionadas com aquelas desconhecidas e de interesse. Para a resolução dos problemas de otimização foram utilizadas técnicas de programação linear. Três aplicações foram apresentadas e em todas elas notou-se que o procedimento de amostragem intencional produziu amostras com bom balanceamento entre as composições amostrais e de referência. / In this work we present a method of intentional sampling via optimization. The method is based on the principle that we should control sample selection whenever there is enough knowledge to guarantee good inferences about known quantities that are in some way correlated with the unknown quantities of interest. Linear programming techniques were used to solve the optimization problems. Three applications are presented, and in all of them the intentional sampling procedure produced samples with good balance between the sample and reference compositions.
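The optimization step can be sketched as a linear program. This is a minimal illustration with invented data, using SciPy's `linprog`; the thesis's actual formulation may differ. Inclusion weights are chosen so that the weighted sample means of known auxiliary variables match the population means:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, p = 200, 20, 3                      # population size, sample size, covariates
X = rng.normal(size=(N, p))               # known auxiliary variables
mu = X.mean(axis=0)                       # known population means to be matched

# Decision variables: inclusion weights w_1..w_N in [0, 1] plus a slack t >= 0.
# Minimize t subject to |sum_i w_i x_ij / n - mu_j| <= t and sum_i w_i = n.
c = np.r_[np.zeros(N), 1.0]
A_ub = np.block([[ X.T / n, -np.ones((p, 1))],
                 [-X.T / n, -np.ones((p, 1))]])
b_ub = np.r_[mu, -mu]
A_eq = np.r_[np.ones(N), 0.0].reshape(1, -1)
b_eq = [n]
bounds = [(0, 1)] * N + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
sample = np.argsort(res.x[:N])[-n:]       # take the n units with largest weights
print("weighted balance gap:", res.fun)
```

The weighted solution balances the auxiliary means exactly here (the uniform weight n/N is always feasible with zero gap); rounding the weights to an integer sample of size n, as in the last line, trades some of that balance for a concrete selection.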
243

Bayesian approach to variable sampling plans for the Weibull distribution with censoring.

January 1996 (has links)
by Jian-Wei Chen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 84-86).

Contents:
Chapter 1 -- Introduction
  1.1 Introduction -- p.1
  1.2 Bayesian approach to single variable sampling plan for the exponential distribution -- p.3
  1.3 Outline of the thesis -- p.7
Chapter 2 -- Single Variable Sampling Plan With Type II Censoring
  2.1 Model -- p.10
  2.2 Loss function and finite algorithm -- p.13
  2.3 Numerical examples and sensitivity analysis -- p.17
Chapter 3 -- Double Variable Sampling Plan With Type II Censoring
  3.1 Model -- p.25
  3.2 Loss function and Bayes risk -- p.27
  3.3 Discretization method and numerical analysis -- p.33
Chapter 4 -- Bayesian Approach to Single Variable Sampling Plans for General Life Distribution with Type I Censoring
  4.1 Model -- p.42
  4.2 The case of the Weibull distribution -- p.47
  4.3 The case of the two-parameter exponential distribution -- p.49
  4.4 The case of the gamma distribution -- p.52
  4.5 Numerical examples and sensitivity analysis -- p.54
Chapter 5 -- Discussions
  5.1 Comparison between Bayesian variable sampling plans and OC curve sampling plans -- p.63
  5.2 Comparison between single and double sampling plans -- p.64
  5.3 Comparison of both models -- p.66
  5.4 Choice of parameters and coefficients -- p.66
Appendix -- p.78
References -- p.84
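The Type II censoring model at the core of Chapters 2 and 3 can be sketched through the censored Weibull log-likelihood: a life test on n items is stopped at the r-th failure, and the n - r survivors contribute only their survival probability. This is an illustrative reconstruction with invented data, not the thesis's Bayes-risk computation:

```python
import numpy as np
from scipy.stats import weibull_min

def type2_censored_loglik(shape: float, scale: float,
                          failures: np.ndarray, n: int) -> float:
    """Log-likelihood of the first r ordered failure times out of n items
    under Type II censoring: the n - r survivors are censored at the
    time of the last observed failure."""
    r = len(failures)
    t_r = failures[-1]
    dens = weibull_min.logpdf(failures, shape, scale=scale).sum()   # observed failures
    surv = (n - r) * weibull_min.logsf(t_r, shape, scale=scale)     # censored survivors
    return dens + surv

rng = np.random.default_rng(1)
n, r = 30, 10
lifetimes = np.sort(weibull_min.rvs(1.5, scale=100.0, size=n, random_state=rng))
failures = lifetimes[:r]                  # the test stops at the r-th failure

ll = type2_censored_loglik(1.5, 100.0, failures, n)
print("censored log-likelihood:", ll)
```

In the Bayesian plans of the thesis, this likelihood would be combined with a prior on the Weibull parameters and a loss function to choose (n, r) and the acceptance rule; that machinery is beyond this sketch.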
244

Parameter estimation when outliers may be present in normal data

Quimby, Barbara Bitz January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
245

Multi-defect inspection

Su, Jinn-Yen January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
246

Binary plankton recognition using random sampling. / CUHK electronic theses & dissertations collection

January 2006 (has links)
Among the three proposed methods (i.e., random subspace, bagging, and pairwise classification), the pairwise classification method produces the highest accuracy at the expense of more computation time for training classifiers. The random subspace method and the bagging approach have similar performance. To recognize a testing plankton pattern, the computational costs of these methods are alike. / Due to the complexity of the plankton recognition problem, it is difficult to pursue a single optimal classifier that meets all the requirements. In this work, instead of developing a single sophisticated classifier, we propose an ensemble learning framework based on random sampling techniques, including random subspace and bagging. In the random subspace method, a set of low-dimensional subspaces is generated by randomly sampling the feature space, and multiple classifiers constructed from these random subspaces are combined to yield a powerful classifier. In the bagging approach, a number of independent bootstrap replicates are generated by randomly sampling with replacement from the training set. A classifier is trained on each replicate, and the final result is produced by integrating all the classifiers using majority voting. With random sampling, the constructed classifiers are stable, and the multiple classifiers cover the entire feature space or the whole training set without losing discriminative information; thus, good performance can be achieved. Experimental results demonstrate the effectiveness of the random sampling techniques for improving system performance. / On the other hand, in previous approaches the samples of all the plankton classes are normally used to train a single classifier. It may be difficult to select one feature space that optimally represents and classifies all the patterns, so the overall accuracy rate may be low. In this work, we propose a pairwise classification framework in which the complex multi-class plankton recognition problem is transformed into a set of two-class problems. Such a problem decomposition leads to a number of simpler classification problems to be solved, and it provides an approach for independent feature selection for each pair of classes. This is the first time such a framework has been introduced in plankton recognition. We achieve nearly perfect classification accuracy on every pairwise classifier with fewer selected features, since it is easier to select an optimal feature vector to discriminate two-class patterns. The ensemble of these pairwise classifiers increases the overall performance. A high accuracy rate of 94.49% is obtained from a collection of more than 3000 plankton images, making it comparable with what a trained biologist can achieve using conventional manual techniques. / Plankton, including phytoplankton and zooplankton, form the base of the food chain in the ocean and are a fundamental component of marine ecosystem dynamics. The rapid mapping of plankton abundance, together with taxonomic and size composition, can help oceanographic researchers understand how climate change and human activities affect marine ecosystems. / Recently, the University of South Florida developed the Shadowed Image Particle Profiling and Evaluation Recorder (SIPPER), an underwater video system that can continuously capture magnified plankton images in the ocean. The SIPPER images differ from those used in most previous research in four aspects: (i) the images are much noisier; (ii) the objects are deformable and often partially occluded; (iii) the images are projection variant, i.e., they are video records of three-dimensional objects in arbitrary positions and orientations; and (iv) the images are binary and thus lack texture information. / To deal with these difficulties, we implement three of the most valuable general features (i.e., moment invariants, Fourier descriptors, and granulometries) and propose a set of specific features, such as circular projections, boundary smoothness, and object density, to form a more complete description of the binary plankton patterns. These features are translation, scale, and rotation invariant, and they are less sensitive to noise. High-quality features benefit the overall performance of the plankton recognition system. / Since all the features are extracted from the same plankton pattern, they may contain much redundant information, as well as noise. Different types of features are incompatible in length and scale, and the combined feature vector has a high dimensionality. To make the best use of these features for binary SIPPER plankton image classification, we propose a two-stage PCA-based scheme for feature selection, combination, and normalization. The first-stage PCA is used to compact every long feature vector by removing redundant information and reducing noise, and the second-stage PCA is employed to compact the combined feature vector by eliminating the correlated information among different types of features. In addition, we normalize every component in the combined feature vector to the same scale according to its mean value and variance. In doing so, we reduce the computation time of the later recognition stage and improve the classification accuracy. / Zhao Feng. / "May 2006." / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6666. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 121-136). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
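The two random sampling ensembles this abstract describes, random subspace (random feature subsets) and bagging (bootstrap replicates of the training set), can be sketched with scikit-learn. The synthetic data and decision-tree base classifiers below are stand-ins, not the thesis's SIPPER features or classifiers:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in data playing the role of the extracted plankton feature vectors.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random subspace: each base classifier sees a random half of the features.
subspace = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             max_features=0.5, bootstrap=False, random_state=0)
# Bagging: each base classifier sees a bootstrap replicate of the training set;
# predictions are combined by (soft) majority voting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            bootstrap=True, random_state=0)

scores = {}
for name, clf in [("random subspace", subspace), ("bagging", bagging)]:
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
    print(name, "accuracy:", scores[name])
```

As the abstract reports for the real system, the two ensembles typically land in the same accuracy range; the pairwise (one-vs-one) decomposition it also proposes would instead train one two-class classifier per pair of classes, each with its own feature selection.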
247

The development of non-lethal sampling methodology to investigate salmonid host immune responses to ectoparasites

Chance, Rachel J. January 2018 (has links)
No description available.
248

Occurrence sampling technique to develop a pattern for staffing a university residence hall foodservice

Bryant, Julia Ann January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
249

Amostragem intencional / Intentional Sampling

Catia Yumi Nagae 26 September 2007 (has links)
Neste trabalho apresentamos o método de amostragem intencional via otimização. Tal método baseia-se na fundamentação de que devemos controlar a seleção amostral sempre que houver conhecimento suficiente para garantir boas inferências de quantidades conhecidas e de alguma forma correlacionadas com aquelas desconhecidas e de interesse. Para a resolução dos problemas de otimização foram utilizadas técnicas de programação linear. Três aplicações foram apresentadas e em todas elas notou-se que o procedimento de amostragem intencional produziu amostras com bom balanceamento entre as composições amostrais e de referência. / In this work we present a method of intentional sampling via optimization. The method is based on the principle that we should control sample selection whenever there is enough knowledge to guarantee good inferences about known quantities that are in some way correlated with the unknown quantities of interest. Linear programming techniques were used to solve the optimization problems. Three applications are presented, and in all of them the intentional sampling procedure produced samples with good balance between the sample and reference compositions.
250

Análise de regressão incorporando o esquema amostral / Regression analysis incorporating the sample design

Cléber da Costa Figueiredo 22 June 2004 (has links)
Neste trabalho estudamos modelos lineares de regressão para a análise de dados obtidos de pesquisas amostrais complexas. Foram considerados aspectos teóricos e aplicações a conjuntos de dados reais por meio do uso do aplicativo SUDAAN e da biblioteca ADAC da linguagem R. Nas aplicações foram abordados os modelos de regressão normal e logística. Foram realizados também estudos comparativos dos métodos estudados com os que assumem que as observações são selecionadas segundo amostragem aleatória simples. / We studied linear regression models for data analysis when the data set comes from a complex sample survey. We considered theoretical aspects and applications to real data sets using the SUDAAN software and the ADAC library for the R language. The applications involved normal and logistic regression models. The methods studied were also compared with those that assume the observations are selected by simple random sampling.
