211 |
Joint Resampling and Restoration of Hexagonally Sampled Images Using Adaptive Wiener Filter / Burada, Ranga. January 2015 (has links)
No description available.
|
212 |
Contributions to the theory of unequal probability sampling / Lundquist, Anders. January 2009 (has links)
This thesis consists of five papers related to the theory of unequal probability sampling from a finite population. Generally, it is assumed that we wish to make model-assisted inference, i.e. the inclusion probability for each unit in the population is prescribed before the sample is selected. The sample is then selected using some random mechanism, the sampling design. Mostly, the thesis is focused on three particular unequal probability sampling designs: the conditional Poisson (CP) design, the Sampford design, and the Pareto design. They have different advantages and drawbacks: the CP design is a maximum entropy design, but it is difficult to determine sampling parameters which yield prescribed inclusion probabilities; the Sampford design yields prescribed inclusion probabilities but may be hard to sample from; and the Pareto design makes sample selection very easy, but it is very difficult to determine sampling parameters which yield prescribed inclusion probabilities. These three designs are compared probabilistically and found to be close to each other under certain conditions. In particular, the Sampford and Pareto designs are probabilistically close to each other. Some effort is devoted to analytically adjusting the CP and Pareto designs so that they yield inclusion probabilities close to the prescribed ones. The results of the adjustments are in general very good. Some iterative procedures are suggested to improve the results even further. Further, balanced unequal probability sampling is considered. In this kind of sampling, samples are given a positive probability of selection only if they satisfy some balancing conditions. The balancing conditions are given by information from auxiliary variables. Most of the attention is devoted to a slightly less general but practically important case. Also in this case the inclusion probabilities are prescribed in advance, making the choice of sampling parameters important. A complication which arises in the context of choosing sampling parameters is that certain probability distributions need to be calculated, and exact calculation turns out to be practically impossible except for very small cases. It is proposed that Markov chain Monte Carlo (MCMC) methods be used for obtaining approximations to the relevant probability distributions, and also for sample selection. MCMC methods for sample selection do not occur very frequently in the sampling literature today, making this a fairly novel approach.
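As a concrete illustration of the last of the three designs, here is a minimal sketch (Python/NumPy; the function name and the empirical check are ours, not the thesis') of Pareto sampling via Rosén's ranking variables. In line with the abstract, the realized inclusion probabilities only approximate the prescribed ones, which is exactly the discrepancy the analytical adjustments target.

```python
import numpy as np

def pareto_sample(pi, rng=None):
    """Draw one fixed-size Pareto sample.

    pi : target inclusion probabilities (0 < pi_i < 1), summing to the
         desired sample size n.  Returns indices of the n selected units.
    """
    rng = np.random.default_rng(rng)
    pi = np.asarray(pi, dtype=float)
    n = int(round(pi.sum()))
    u = rng.uniform(size=pi.size)
    # Rosen's ranking variables: uniform odds divided by target odds.
    q = (u / (1.0 - u)) / (pi / (1.0 - pi))
    return np.argsort(q)[:n]

# Empirical check: realized inclusion probabilities are close to, but not
# exactly, the prescribed ones -- the discrepancy the thesis adjusts for.
pi = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.5])   # sums to n = 2
counts = np.zeros(pi.size)
for _ in range(100_000):
    counts[pareto_sample(pi)] += 1
print(counts / 100_000)                          # compare with pi
```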
|
213 |
Ensemble for Deterministic Sampling with positive weights: Uncertainty quantification with deterministically chosen samples / Sahlberg, Arne. January 2016 (has links)
Knowing the uncertainty of a calculated result is always important, but especially so when performing calculations for safety analysis. A traditional way of propagating the uncertainty of input parameters is Monte Carlo (MC) methods. A quicker alternative to MC, especially useful when computations are heavy, is Deterministic Sampling (DS). DS works by hand-picking a small set of samples, rather than randomizing a large set as in MC methods. The samples and their corresponding weights are chosen to represent the uncertainty one wants to propagate by encoding the first few statistical moments of the parameters' distributions. Finding a suitable ensemble for DS is not easy, however. Given a large enough set of samples, one can always calculate weights to encode the first couple of moments, but there is good reason to want an ensemble with only positive weights. How to choose the ensemble for DS so that all weights are positive is the problem investigated in this project. Several methods for generating such ensembles have been derived, and an algorithm for calculating weights while forcing them to be positive has been found. The methods and generated ensembles have been tested for use in uncertainty propagation in many different cases, and the ensemble sizes have been compared. In general, encoding two or four moments in an ensemble seems to be enough to get a good result for the propagated mean value and standard deviation. Regarding size, the most favorable case is when the parameters are independent and have symmetrical distributions. In short, DS can work as a quicker alternative to MC methods in uncertainty propagation as well as in other applications.
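A minimal sketch of the idea, assuming the favorable case identified above (independent parameters with symmetric distributions): a 2d-point ensemble with equal, strictly positive weights that encodes the first two moments exactly (and all odd moments by symmetry). This illustrates the general approach, not the specific ensembles or weight algorithm derived in the thesis.

```python
import numpy as np

def symmetric_ensemble(mean, std):
    """2d deterministic samples with equal positive weights encoding the
    first two moments of d independent, symmetrically distributed parameters."""
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    d = mean.size
    offsets = np.sqrt(d) * np.diag(std)        # one +/- pair per parameter
    samples = np.vstack([mean + offsets, mean - offsets])
    weights = np.full(2 * d, 1.0 / (2 * d))    # positive by construction
    return samples, weights

def propagate(f, samples, weights):
    """Weighted mean and standard deviation of f over the ensemble."""
    y = np.array([f(s) for s in samples])
    m = np.sum(weights * y)
    return m, np.sqrt(np.sum(weights * (y - m) ** 2))

# Example: propagate two uncertain parameters through a nonlinear model.
samples, weights = symmetric_ensemble([1.0, 2.0], [0.1, 0.3])
print(propagate(lambda x: x[0] * np.exp(0.1 * x[1]), samples, weights))
```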
|
214 |
Sound Sampling: the protection of parts of musical works and performances under the Swiss Copyright Act / Wegener, Poto. January 2007 (has links) (PDF)
Doctoral thesis, University of Basel, 2006. / Commercial edition of the Basel thesis, 2006. Includes bibliography.
|
215 |
On unequal probability sampling designs / Grafström, Anton. January 2010 (has links)
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
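For context, a minimal sketch (Python/NumPy) of the classical rejective implementation of the Sampford design; this is not one of the two new implementations presented in Paper II, which are not reproduced here. Its potentially low acceptance rate is what makes improved implementations worthwhile in practice.

```python
import numpy as np

def sampford_sample(pi, rng=None, max_tries=100_000):
    """Rejective implementation of the Sampford design.

    One unit is drawn with probabilities pi_i / n; the remaining n - 1 draws
    (with replacement) use probabilities proportional to pi_i / (1 - pi_i).
    An attempt is accepted only if all n units are distinct.
    """
    rng = np.random.default_rng(rng)
    pi = np.asarray(pi, dtype=float)
    N = pi.size
    n = int(round(pi.sum()))
    p_first = pi / n
    odds = pi / (1.0 - pi)
    p_rest = odds / odds.sum()
    for _ in range(max_tries):
        first = rng.choice(N, p=p_first)
        rest = rng.choice(N, size=n - 1, p=p_rest)
        sample = {first, *rest}
        if len(sample) == n:             # accept only all-distinct draws
            return np.array(sorted(sample))
    raise RuntimeError("acceptance rate too low for this pi vector")

print(sampford_sample([0.1, 0.2, 0.3, 0.4, 0.5, 0.5], rng=1))
```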
|
216 |
Bayesian estimation of abundance based on removal sampling with variability of the sampling rate: case study of questing Ixodes ricinus ticks / Bord, Séverine. 17 June 2014 (has links)
The estimation of animal abundance is essential to understand population dynamics, species interactions, and disease patterns in populations, and to estimate the risk of pathogen transmission. Several sampling methods, such as single counts, distance sampling, removal sampling, or capture-mark-recapture, can be used to estimate abundance. In this study, we investigated the abundance of Ixodes ricinus ticks, which are involved in the transmission of many pathogens. Tick abundance is commonly estimated by the number of nymphs captured during a single observation (a cloth dragged over a given surface). In this case, analyses of abundance patterns assume that the probability of detecting a tick, hence the sampling rate, remains constant across observations. In practice, however, this assumption is often not satisfied, as the sampling rate may fluctuate between observation plots; yet this variation is never taken into account in estimates of tick abundance. Using a removal sampling (RS) design, (i) we showed that the sampling rate and the usual abundance indicator (based on a single drag observation per spot) were both influenced by environmental conditions; (ii) we proposed a method to determine the abundance indicator, based on the cumulative number of captures, that is least affected by sampling-rate variations; (iii) using a hierarchical Bayesian model, we estimated simultaneously the abundance on each observation spot and the sampling rate according to the type of vegetation and the time of sampling. The sampling rate varied between 33.9% and 47.4% for shrubs and between 53.6% and 66.7% for dead leaves, the former being significantly lower. In addition, we showed that the RS model tends to an iid Poisson model as the population size N0 tends to infinity, which causes identifiability problems when estimating N0 and the sampling rate τ. We also showed that (i) Bayesian estimators diverge under vague priors, and (ii) Beta(a, b) priors on τ with a > 2 lead to convergent estimators. Finally, we give recommendations on the choice of the prior for τ in order to obtain good estimates of N0 or τ. We discuss the relevance of RS methods for ticks and possible perspectives: (i) estimating the acarological risk represented by the population of potentially active ticks on a given observation spot, and (ii) estimating the risk at the scale of a whole plot, i.e., how to distribute the sampling effort between the number of observation spots and the number of consecutive samplings per spot.
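A minimal sketch of the removal-sampling model for a single observation spot, assuming a simple grid approximation in place of the thesis' full hierarchical model with vegetation and time effects: successive catches are Binomial(N0 − previous catches, τ), with a flat prior on N0 and a Beta(a, b) prior on τ with a > 2, following the prior recommendation above. The catch data are hypothetical.

```python
import numpy as np
from scipy import stats

def removal_posterior(catches, n_max=500, a=3.0, b=3.0):
    """Grid posterior for (N0, tau) in the removal-sampling model
    C_k ~ Binomial(N0 - previous catches, tau), with a flat prior on N0
    and a Beta(a, b) prior on tau (a > 2 to avoid divergent estimators)."""
    catches = np.asarray(catches)
    n_grid = np.arange(catches.sum(), n_max + 1)    # N0 >= total catch
    tau_grid = np.linspace(0.001, 0.999, 999)
    log_post = np.zeros((n_grid.size, tau_grid.size))
    log_post += stats.beta.logpdf(tau_grid, a, b)   # prior on tau
    removed = 0
    for c in catches:
        remaining = n_grid[:, None] - removed
        log_post += stats.binom.logpmf(c, remaining, tau_grid[None, :])
        removed += c
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return n_grid, tau_grid, post

# Hypothetical successive catches on one spot.
n_grid, tau_grid, post = removal_posterior([35, 22, 11])
print("E[N0]  =", (post.sum(axis=1) * n_grid).sum())
print("E[tau] =", (post.sum(axis=0) * tau_grid).sum())
```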
|
217 |
Sampling and analysis uncertainty in the quality control of water resources monitoring / Santana, Rogério Visquetti de. January 2018 (has links)
Advisor: Prof. Dr. Roseli Frederigi Benassi / Co-advisor: Prof. Dr. Lucia Helena Gomes Coelho / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Ciência e Tecnologia Ambiental, Santo André, 2018. / Environmental monitoring of water resources is built on analytical results for ecological variables, obtained through the acquisition of samples. These samples should represent as faithfully as possible the conditions and characteristics of the environment under study, to ensure that decisions based on them are sound. This study therefore evaluates the efficiency of sampling-uncertainty estimation as a strategy to guarantee the representativeness of the collection procedures used in environmental studies. To this end, samples collected and analyzed in duplicate were evaluated, and the results were used to determine the sampling uncertainty following the procedure defined by EURACHEM in the guide "Measurement uncertainty arising from sampling". The collection points evaluated belong to CETESB's surface water quality monitoring network: 11 located on rivers crossing the São Paulo metropolitan region (Pinheiros, Tamanduateí, and Tietê) and 9 located in the Billings reservoir. The resulting data set was screened through the percent difference between duplicates, so that outliers could be excluded, and the retained results were used to determine the uncertainties. Using a two-level uncertainty calculation, the sampling and analysis uncertainties were both estimated, and the former proved larger than the latter. It is concluded that, although the methodology is adequate for evaluating the influence of sampling and of analysis separately, the sampling design applied does not allow the sources of uncertainty present at each point to be assessed, since it pools points with quite different hydrological and limnological characteristics.
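A minimal sketch of the two-level duplicate calculation described above, assuming a balanced design in which each monitoring point yields two duplicate samples, each analyzed twice; the data are hypothetical, and the full EURACHEM treatment (nested ANOVA with outlier screening) may differ in detail.

```python
import numpy as np

def duplicate_uncertainty(x):
    """Sampling and analysis standard uncertainties from a balanced
    duplicate design (the EURACHEM 'duplicate method').

    x : array of shape (points, 2, 2) -- for each monitoring point,
        two duplicate samples, each analyzed twice.
    """
    x = np.asarray(x, dtype=float)
    # Analysis variance from repeated analyses of the same sample:
    # the difference of two analyses has variance 2 * s2_analysis.
    d_ana = x[:, :, 0] - x[:, :, 1]
    s2_analysis = np.mean(d_ana ** 2) / 2.0
    # The difference of the two sample means per point has variance
    # 2 * s2_sampling + s2_analysis, so subtract the analytical share.
    m = x.mean(axis=2)
    d_samp = m[:, 0] - m[:, 1]
    s2_sampling = max(np.mean(d_samp ** 2) - s2_analysis, 0.0) / 2.0
    return np.sqrt(s2_sampling), np.sqrt(s2_analysis)

# Hypothetical duplicate results for three monitoring points.
x = [[[10.2, 10.4], [11.0, 10.8]],
     [[ 8.1,  8.0], [ 7.5,  7.7]],
     [[12.3, 12.6], [13.1, 13.0]]]
u_samp, u_ana = duplicate_uncertainty(x)
print(f"u_sampling = {u_samp:.3f}, u_analysis = {u_ana:.3f}")
```

With real monitoring data, u_sampling exceeding u_analysis would reproduce the study's finding that sampling dominates the measurement uncertainty.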
|
218 |
Process Monitoring with Multivariate Data: Varying Sample Sizes and Linear Profiles / Kim, Keunpyo. 01 December 2003 (has links)
Multivariate control charts are used to monitor a process when more than one quality variable associated with the process is being observed. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the most commonly recommended tools for multivariate process monitoring. The standard practice, when using the MEWMA control chart, is to take samples of fixed size at regular sampling intervals for each variable. In the first part of this dissertation, MEWMA control charts based on sequential sampling schemes with two possible stages are investigated. When sequential sampling with two possible stages is used, observations at a sampling point are taken in two groups, and the number of groups actually taken is a random variable that depends on the data. The basic idea is that sampling starts with a small initial group of observations, and no additional sampling is done at this point if there is no indication of a problem with the process. But if there is some indication of a problem with the process, then an additional group of observations is taken at this sampling point. The performance of the sequential sampling (SS) MEWMA control chart is compared to the performance of standard control charts. It is shown that the SS MEWMA chart is substantially more efficient in detecting changes in the process mean vector than standard control charts that do not use sequential sampling. The situation is also considered where different variables may have different measurement costs. MEWMA control charts with unequal sample sizes based on differing measurement costs are investigated in order to improve the performance of process monitoring. Sequential sampling plans are applied to MEWMA control charts with unequal sample sizes and compared to the standard MEWMA control charts with a fixed sample size. The steady-state average time to signal (SSATS) is computed using simulation and compared for some selected sets of sample sizes. When different variables have significantly different measurement costs, using unequal sample sizes can be more cost effective than using the same fixed sample size for each variable.
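For reference, a minimal sketch (Python/NumPy) of the standard fixed-sample MEWMA recursion on which the sequential sampling variants build; the two-stage sampling logic itself is not shown, and the control limit in the demo is illustrative rather than calibrated to a specific in-control ARL.

```python
import numpy as np

def mewma_statistics(X, mu0, sigma, lam=0.1):
    """MEWMA charting statistics for a sequence of observation vectors.

    Z_t  = lam * (X_t - mu0) + (1 - lam) * Z_{t-1}
    T2_t = Z_t' Sigma_Z^{-1} Z_t, with asymptotic Sigma_Z = lam/(2-lam)*Sigma.
    A signal is raised when T2_t exceeds a limit h set for the desired ARL.
    """
    X = np.asarray(X, dtype=float)
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * np.asarray(sigma, dtype=float))
    z = np.zeros(X.shape[1])
    t2 = []
    for x in X:
        z = lam * (x - mu0) + (1.0 - lam) * z
        t2.append(z @ sigma_z_inv @ z)
    return np.array(t2)

# In-control data followed by a small shift in the mean vector at t = 100.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=200)
X[100:] += [0.5, 0.5]
t2 = mewma_statistics(X, mu0=[0.0, 0.0], sigma=np.eye(2))
print("first exceedance of h = 8.66 (illustrative limit) at t =",
      int(np.argmax(t2 > 8.66)))
```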
In the second part of this dissertation, control chart methods are proposed for process monitoring when the quality of a process or product is characterized by a linear function. In the historical analysis of Phase I data, methods including the use of a bivariate T² chart to check for stability of the regression coefficients, in conjunction with a univariate Shewhart chart to check for stability of the variation about the regression line, are recommended. The use of three univariate control charts in Phase II is recommended. These three charts are used to monitor the Y-intercept, the slope, and the variance of the deviations about the regression line, respectively. A simulation study shows that this type of Phase II method can detect sustained shifts in the parameters better than competing methods in terms of average run length (ARL) performance. The monitoring of linear profiles is also related to the control charting of regression-adjusted variables and other methods. / Ph. D.
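A minimal sketch of the per-profile statistics that the recommended Phase II scheme monitors with three separate univariate charts; centring the design points makes the intercept and slope estimators independent. The charting rules themselves (limits, EWMA smoothing) are omitted, and the demo data are simulated.

```python
import numpy as np

def profile_estimates(x, Y):
    """Per-profile least-squares intercept, slope and residual variance.

    x : shared design points (n,); Y : observed profiles, shape (t, n).
    Centring x decorrelates the intercept and slope estimators, so each
    statistic can be monitored on its own univariate chart.
    """
    xc = x - x.mean()
    Y = np.asarray(Y, dtype=float)
    b1 = Y @ xc / (xc @ xc)        # slopes
    b0 = Y.mean(axis=1)            # intercepts of the centred model
    resid = Y - b0[:, None] - np.outer(b1, xc)
    mse = (resid ** 2).sum(axis=1) / (x.size - 2)
    return b0, b1, mse

# Fifty simulated in-control profiles Y = 3 + 2x + noise.
x = np.array([2.0, 4.0, 6.0, 8.0])
rng = np.random.default_rng(1)
Y = 3.0 + 2.0 * x + rng.normal(0.0, 1.0, size=(50, x.size))
b0, b1, mse = profile_estimates(x, Y)
print(b0.mean(), b1.mean(), mse.mean())   # each series feeds its own chart
```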
|
219 |
Development of a Novel Probe for Engine Ingestion Sampling in Parallel With Initial Developments of a High-speed Particle-laden Jet / Collins, Addison Scott. 07 December 2021 (has links)
Particle ingestion remains an important concern for turbine engines, specifically those in aircraft. Sand and related particles tend to become suspended in air, posing an omnipresent health threat to engine components. This issue is most prevalent during operation in sandy environments at low altitudes. Takeoffs and landings can blow a significant quantity of particulates into the air; these particulates may then be ingested by the engine. Helicopters and other Vertical Takeoff and Landing (VTOL) aircraft are at high risk of engine damage in these conditions. Compressor blades are especially vulnerable, as they may encounter the largest of particles. Robust and thorough experimental and computational studies have been conducted to understand the relationships between particle type, shape, and size and their effects on compressor and turbine blade wear. However, there is a lack of literature that focuses on sampling particles directly from the flow inside an engine. Instead, experimental studies that estimate the trajectories and behavior of particles are based upon the resulting erosion of blades and the expected aerodynamics and physics of the region. It is important to close this gap to fully understand the role of particulates in eroding engine components. This study investigated the performance of a particle-sampling probe designed to collect particles after the first compressor stage of a Rolls-Royce Allison Model 250 turboshaft engine. The engine was not used in this investigation; rather, a rig that creates a particle-laden jet was developed in order to determine probe sampling sensitivity with respect to varying angles of attack and flow Mach number. Particle image velocimetry (PIV) was utilized to understand the aerodynamic effects of the probe on smaller particles. / Master of Science / Aircraft jet engines are constantly exposed to particles suspended in the atmosphere. Most jet engines contain several stages of spinning blades. The first series of stages near the front of the engine comprise the compressor, while the series towards the end of the engine comprise the turbine. Engines depend on compressor blades to add energy to the flow via compression and turbine blades to extract energy from the flow after combustion. Thus, they are critical for the successful operation of the engine. The constant impact of airborne particulates against these blades causes erosion, which alters blade geometry and thereby engine performance. Depending on the turbine inlet temperature, particles may melt and clog the cooling passages in turbine blades, causing serious damage as the blades reach temperatures above their intended operating regime. These damages inhibit the ability of the engine to operate properly and pose a serious safety risk if left unchecked. In literature, experimental engine erosion correlations and numerical models of particle trajectories through the engine have been developed; however, none of these studies collected particles directly from the compressor region of the engine. In this study, a probe was developed and evaluated for the purpose of sampling particulates between the first and second compressor stages of a Rolls-Royce Allison Model 250 turboshaft engine. The probe's efficacy and aerodynamic properties were analyzed such that the probe will provide processable data when inserted into the engine. The methods to obtain this data include particle-sampling and particle image velocimetry (PIV).
|
220 |
Pneumatic probe sampling of Kansas farm-stored sorghum / Meagher, R. L. (Robert L.). January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|