1 |
Standardization of Street Sampling Units to Improve Street Tree Population Estimates Derived by i-Tree Streets Inventory Software
Patterson, Mason Foushee 29 June 2012 (has links)
Street trees are a subpopulation of the urban forest resource and exist in the rights-of-way adjacent to public roads in a municipality. Benefit-cost analyses have shown that the annual benefits provided by the average street tree far outweigh the costs of planting and maintenance. City and municipal foresters spend a majority of their time and resources managing street tree populations. Sample street tree inventories are a common method of estimating municipal street tree populations for the purposes of making urban forest policy, planning, and management decisions.
i-Tree Streets is a suite of software tools capable of producing estimates of street tree abundance and value from a sample of street trees taken along randomly selected sections (segments) of public streets. During sample street tree inventories conducted by Virginia Tech Urban Forestry, it was observed that the lengths of the sample streets recommended by i-Tree varied greatly within most municipalities, leading to concern about the impact of street length variation on sampling precision.
This project was conducted to improve i-Tree Streets by changing the recommended sampling protocol without altering the software. Complete street tree censuses were obtained from 7 localities and standardized using GIS. The effects of standardizing street segments to 3 different lengths prior to sampling on the accuracy and precision of i-Tree Streets estimates were investigated through computer simulations and analysis of changes in the variation in the number of trees per street segment, as a basis for recommending procedural changes.
It was found that standardizing street segments significantly improved the precision of i-Tree Streets estimates. Based on the results of this investigation, it is generally recommended that street segments be standardized to 91m (300 ft) prior to conducting a sample inventory. Standardizing to 91m will significantly reduce the number of trees, the number of street segments, and the percentage of total street segments that must be sampled to achieve an estimate with a 10% relative standard error.
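The precision argument can be illustrated with a small Monte Carlo sketch in Python. The street network, tree counts, splitting rule, and sample sizes below are hypothetical assumptions for illustration; this is not the census data, the simulation code, or the i-Tree software used in the study.

```python
import math
import random
import statistics

rng = random.Random(0)

# Hypothetical street network: each original segment has a random length (m)
# and a tree count roughly proportional to that length.
lengths = [rng.uniform(30, 600) for _ in range(800)]
variable_counts = [max(0, round(rng.gauss(length / 25, 3))) for length in lengths]

# Standardize: split each segment into 91 m pieces and spread its trees
# at random across the pieces (same trees, more and shorter segments).
standardized_counts = []
for length, trees in zip(lengths, variable_counts):
    pieces = max(1, math.ceil(length / 91))
    counts = [0] * pieces
    for _ in range(trees):
        counts[rng.randrange(pieces)] += 1
    standardized_counts.extend(counts)

def relative_standard_error(counts, n_sample, n_trials=2000, seed=1):
    """Monte Carlo relative standard error of the estimated tree total under
    simple random sampling of n_sample segments without replacement."""
    r = random.Random(seed)
    estimates = [statistics.mean(r.sample(counts, n_sample)) * len(counts)
                 for _ in range(n_trials)]
    return statistics.stdev(estimates) / sum(counts)

for label, counts in [("variable-length segments", variable_counts),
                      ("standardized 91 m segments", standardized_counts)]:
    # Smallest sample size (in steps of 5) that reaches roughly a 10% RSE.
    n = 5
    while relative_standard_error(counts, n) > 0.10:
        n += 5
    share = 100 * n / len(counts)
    print(f"{label}: about {n} of {len(counts)} segments ({share:.1f}%) for a 10% RSE")
```

Because the standardized segments carry far more uniform tree counts, the sketch reproduces the qualitative result: fewer segments, a smaller share of all segments, and fewer trees need to be measured to hit the same precision target.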
The effectiveness of standardization and the associated processing time can be computed from municipal attributes before standardization, so practitioners can weigh the marginal gains in field time against the costs in processing time. Automating the standardization procedures or conducting an optimization study of segment length would further increase the efficiency and marginal gains associated with street segment standardization. / Master of Science
|
2 |
Development of methodology to correct sampling error associated with FRM PM10 samplers
Chen, Jing 15 May 2009 (has links)
Currently, accurate emission data for particulate matter (PM) are lacking in agricultural air quality studies (USDA-AAQTF, 2000). PM samplers, however, tend to overestimate the concentration of most agricultural dusts because of the interaction between the particle size distribution (PSD) and the performance characteristics of the sampler (Buser, 2004). This research attempts to find a practical method to characterize and correct this error for the Federal Reference Method (FRM) PM10 sampler.
First, a new dust wind tunnel testing facility that satisfies the USEPA's requirements for testing PM10 samplers was designed, built, and evaluated. Second, the wind tunnel testing protocol using poly-dispersed aerosol as the test dust was shown to provide results consistent with mono-dispersed dusts. Third, this study quantified the variation of oversampling ratios for the various cut points and slopes of FRM PM10 samplers and proposed an averaged oversampling ratio as a correction factor for various ranges of PSD. Finally, a method of using total suspended particulate (TSP) samplers as a field reference for determining PM10 concentrations and aerosol PSD was explored computationally.
Overall, this dissertation successfully developed a methodology to correct the sampling error associated with the FRM PM10 sampler: (1) wind tunnel testing facilities and a protocol for experimental evaluation of samplers; (2) the variation of the oversampling ratios of FRM PM10 samplers for computational evaluation of samplers; (3) the evaluation of TSP sampler effectiveness as a potential field reference for field evaluation of samplers.
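The oversampling mechanism can be sketched numerically: weight an assumed lognormal mass distribution of the dust by an idealized pre-separator penetration curve defined by a cutpoint and slope, and compare the collected mass with the mass of particles at or below 10 µm. The curve shape, the MMD/GSD values, and the sharp-cut definition of "true PM10" are simplifying assumptions for illustration, not the FRM specification or the correction factors derived in the dissertation.

```python
import math

def lognormal_mass_pdf(d, mmd, gsd):
    """Mass-weighted lognormal particle size distribution over aerodynamic
    diameter d (micrometers), given mass median diameter (MMD) and
    geometric standard deviation (GSD)."""
    s = math.log(gsd)
    return (math.exp(-0.5 * ((math.log(d) - math.log(mmd)) / s) ** 2)
            / (d * s * math.sqrt(2.0 * math.pi)))

def penetration(d, cutpoint=10.0, slope=1.5):
    """Idealized fractional sampling effectiveness of a PM10 pre-separator,
    modeled as a complementary lognormal ogive with the given cutpoint (d50)
    and slope -- a simplifying assumption, not the exact FRM curve."""
    z = (math.log(d) - math.log(cutpoint)) / math.log(slope)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def oversampling_ratio(mmd, gsd, d_max=100.0, n=20000):
    """Mass collected by the sampler divided by the mass of particles at or
    below 10 um ('true' PM10 here), by simple numerical integration."""
    dd = d_max / n
    collected = true_pm10 = 0.0
    for i in range(1, n + 1):
        d = i * dd
        mass = lognormal_mass_pdf(d, mmd, gsd) * dd
        collected += penetration(d) * mass
        if d <= 10.0:
            true_pm10 += mass
    return collected / true_pm10

# Hypothetical coarse agricultural dust vs. a finer ambient-like dust.
print(f"MMD 20 um, GSD 2.0: ratio = {oversampling_ratio(20.0, 2.0):.2f}")  # > 1
print(f"MMD  5 um, GSD 1.8: ratio = {oversampling_ratio(5.0, 1.8):.2f}")   # ~ 1
```

Averaging such ratios over representative PSD ranges is the spirit in which a single correction factor can be proposed for a class of dusts.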
|
3 |
Quantifying the uncertainty caused by sampling, modeling, and field measurements in the estimation of AGB with information of the national forest inventory in Durango, Mexico
Trucíos Caciano, Ramón 20 April 2020 (has links)
No description available.
|
4 |
Theoretical and Statistical Approaches to Understand Human Mitochondrial DNA Heteroplasmy Inheritance
Wonnapinij, Passorn 07 May 2010 (has links)
Mitochondrial DNA (mtDNA) mutations have been widely observed to cause a variety of human diseases, especially late-onset neurodegenerative disorders. The prevalence of mitochondrial diseases caused by mtDNA mutations is approximately 1 in 5,000 of the population. There is no effective way to treat patients carrying a pathogenic mtDNA mutation; therefore, preventing transmission of mutant mtDNA has become an important strategy. However, transmission of human mtDNA mutations is complicated by a large intergenerational random shift in heteroplasmy level, causing uncertainty in genetic counseling. The aim of this dissertation is to gain insight into how human mtDNA heteroplasmy is inherited.
By working closely with our experimental collaborators, a computational simulation of mouse embryogenesis was developed in our lab using their measurements of mouse mtDNA copy number. This experimental-computational interplay shows that the variation in offspring heteroplasmy level is largely generated by random partitioning of mtDNA molecules during pre- and early postimplantation development.
By adapting a set of probability functions developed to describe the segregation of allele frequencies under a pure random drift process, we can now model the mtDNA heteroplasmy distribution using parameters estimated from experimental data. The absence of an estimate of the sampling error of mtDNA heteroplasmy variance may strongly affect the biological interpretation drawn from this higher-order statistic, so we have developed three different methods to estimate sampling error values for mtDNA heteroplasmy variance. Applying this error estimation to the comparison of mouse and human mtDNA heteroplasmy variance reveals differences in the mitochondrial genetic bottleneck between these organisms.
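The abstract does not state which three estimators were developed; as a generic illustration of attaching a sampling error to a variance estimate, the sketch below uses a bootstrap over hypothetical offspring heteroplasmy measurements.

```python
import random
import statistics

def bootstrap_se_of_variance(heteroplasmy, n_boot=10000, seed=42):
    """Bootstrap standard error of the sample variance of heteroplasmy
    levels (fraction of mutant mtDNA per offspring). A generic resampling
    approach, not necessarily one of the three methods in the dissertation."""
    rng = random.Random(seed)
    n = len(heteroplasmy)
    boot_vars = []
    for _ in range(n_boot):
        resample = [rng.choice(heteroplasmy) for _ in range(n)]
        boot_vars.append(statistics.variance(resample))
    return statistics.stdev(boot_vars)

# Hypothetical offspring heteroplasmy levels from one heteroplasmic mother.
offspring = [0.12, 0.25, 0.31, 0.08, 0.44, 0.19, 0.27, 0.35, 0.15, 0.22]
v = statistics.variance(offspring)
se = bootstrap_se_of_variance(offspring)
print(f"heteroplasmy variance = {v:.4f} +/- {se:.4f} (bootstrap SE)")
```

With only a handful of offspring per family, the standard error is a substantial fraction of the variance itself, which is why comparisons of heteroplasmy variance between species need an explicit error estimate.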
In humans, mothers who carry a high proportion of the m.3243A>G mutation tend to have fewer daughters than sons. This offspring gender bias has been revealed by applying basic statistical tests to human clinical pedigrees carrying this mtDNA mutation. This gender bias may partially determine the mtDNA mutation level among female family members.
In conclusion, the application of population genetic theory, statistical analysis, and computational simulation helps us gain an understanding of human mtDNA heteroplasmy inheritance. The results of these studies would be of benefit to both scientific research and clinical application. / Ph. D.
|
5 |
Extension of Particle Image Velocimetry to Full-Scale Turbofan Engine Bypass Duct Flows
George, William Mallory 10 July 2017 (has links)
Fan system efficiency for modern aircraft engine designs is increasing to the point that bypass duct geometry is becoming a significant contributor to system losses and could ultimately become a limiting factor. To investigate this, a number of methods are available to provide qualitative and quantitative analysis of the flow around the loss mechanisms present in the duct. Particle image velocimetry (PIV) is a strong candidate among experimental techniques to address this challenge: its use has been documented in many other locations within the engine, and it can provide high-spatial-resolution data over large fields of view. In this work it is shown that these characteristics allow the PIV user to reduce the spatial sampling error associated with sparsely spaced point measurements in a large measurement region with high-order gradients and small-scale flow phenomena. A synthetic flow featuring such attributes was generated by computational fluid dynamics (CFD) and was sampled by a virtual PIV system and a virtual generic point measurement system. The PIV sampling technique estimated the average integrated velocity field about five times more accurately than the point measurement sampling because of the large errors that existed between the point measurement locations.
Despite its advantages, implementation of PIV can be a significant challenge, especially for internal measurements where optical access is limited. To reduce the time and cost associated with iterating through experiment designs, a software package was developed that incorporates basic optics principles and fundamental PIV relationships and calculates experimental output parameters of interest, such as the camera field of view and the amount of scattered light that reaches the camera sensor. The program can be used to judge the likelihood of success of a proposed PIV experiment design by comparing the output parameters with those calculated from benchmark experiments.
The primary experiment in this work focused on the wake structure of the aft support strut in the bypass duct of the Pratt and Whitney Canada JT15D-1 and comprised three parts: a simulated engine environment was created to provide a proof of concept of the PIV experiment design; the PIV experiment was repeated in the full-scale engine at four fan speeds ranging from engine idle up to 80% of the maximum corrected fan speed; and, finally, a CFD simulation was performed with simplifying assumptions to provide insight and perspective into the formation of the wake structures observed in the PIV data.
Both computational and experimental results illustrate a non-uniform wake structure downstream of the support strut and support the hypothesis that the junction of the strut and the engine core wall creates a wake structure separate from that created by the strut main body. The PIV data also show that the wake structure moves in the circumferential direction at higher fan speeds, possibly due to bulk swirl present in the engine or a pressure differential created by the support strut. The experiment highlights the advantages of using PIV, but also illustrates a number of the implementation challenges present, most notably those associated with consistently providing a sufficient number of seeding particles in the measurement region. The experiment is also, to the author's knowledge, the first to document the use of PIV in a full-scale turbofan engine bypass duct. / Master of Science
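The spatial sampling argument can be reproduced in miniature: integrate a synthetic wake-like velocity profile from a dense, PIV-like grid and from a handful of probe points, and compare the errors in the estimated average velocity. The profile shape, sample counts, and resulting error magnitudes are hypothetical and are not taken from the CFD field or the measurements in the thesis.

```python
import math

def velocity(y):
    """Synthetic wake-like profile on y in [0, 1]: a uniform stream with two
    narrow velocity deficits (hypothetical, standing in for strut wakes)."""
    return (100.0
            - 30.0 * math.exp(-((y - 0.3) / 0.02) ** 2)
            - 20.0 * math.exp(-((y - 0.7) / 0.03) ** 2))

def average_from_samples(n_points):
    """Spatial average of the profile estimated from n equally spaced
    samples (midpoint rule)."""
    return sum(velocity((i + 0.5) / n_points) for i in range(n_points)) / n_points

truth = average_from_samples(200_000)      # dense reference value
piv_like = average_from_samples(2_000)     # dense vector grid, PIV-like
sparse_probe = average_from_samples(8)     # a few traversing-probe stations

print(f"PIV-like sampling error:     {abs(piv_like - truth) / truth:.4%}")
print(f"sparse-point sampling error: {abs(sparse_probe - truth) / truth:.4%}")
```

Because the narrow deficits fall between (or squarely on) the sparse probe stations, the sparse estimate of the average is badly biased, while the dense grid resolves them and integrates accurately; this is the essence of the spatial sampling error that PIV mitigates.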
|
6 |
Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution
Kanyongo, Gibbs Y. 16 March 2012 (has links) (PDF)
No description available.
|
7 |
Optimizing Sample Design for Approximate Query Processing
Rösch, Philipp, Lehner, Wolfgang 30 November 2020 (has links)
The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. While primarily designed for advising samples for a few queries specified by an expert, the sample advisor is extended in two ways. The first extension enhances its applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases its effectiveness by merging samples in case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
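A toy version of the merging extension, under assumed data structures and a deliberately simple footprint-based cost (not the cost model from the paper): pieces of sample advice whose column sets overlap are greedily merged whenever the merged sample saves space and stays within the memory bound.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Advice:
    """One piece of sample advice: the columns the sample must cover and an
    (assumed) sample size in rows."""
    columns: frozenset
    rows: int

def footprint(piece):
    # Toy cost: rows times number of stored columns.
    return piece.rows * len(piece.columns)

def merge(a, b):
    # Assumed merge rule: cover both column sets, keep the larger row count.
    return Advice(a.columns | b.columns, max(a.rows, b.rows))

def merge_overlapping(advice, memory_bound):
    """Greedily merge overlapping pieces of advice while the merge saves
    space and the merged piece stays within the memory bound."""
    advice = list(advice)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(advice, 2):
            if not (a.columns & b.columns):
                continue  # only merge pieces whose column sets overlap
            m = merge(a, b)
            if footprint(m) < footprint(a) + footprint(b) and footprint(m) <= memory_bound:
                advice.remove(a)
                advice.remove(b)
                advice.append(m)
                improved = True
                break
    return advice

queries = [
    Advice(frozenset({"customer", "region"}), 50_000),
    Advice(frozenset({"region", "product"}), 40_000),
    Advice(frozenset({"product", "date"}), 30_000),
]
for piece in merge_overlapping(queries, memory_bound=200_000):
    print(sorted(piece.columns), piece.rows)
```

The paper's actual advisor trades estimation error against memory rather than raw footprint, but the greedy merge-while-it-helps structure conveys the flavor of the heuristic solutions.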
|
8 |
Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution
Kanyongo, Gibbs Y. 16 March 2012 (links)
No description available.
|
9 |
Addressing an old issue from a new methodological perspective : a proposition on how to deal with bias due to multilevel measurement error in the estimation of the effects of school composition
Televantou, Ioulia January 2014 (has links)
In educational effectiveness studies, school-level aggregates of students' characteristics (e.g. achievement) are often used to assess the impact of school composition on students' outcomes – school compositional effects. Empirical findings on the magnitude and direction of school compositional effects have not been consistent. Relevant methodological studies raise the issue of under-specification at level 1 in compositional models, which arises when the student-level indicator on which the aggregation is based is mis-measured. This phenomenon has been shown to bias compositional effect estimates, leading to misleading effects of the aggregated variables – phantom compositional effects. My thesis, consisting of three separate studies, presents an advanced methodological framework that can be used to investigate the effect of school composition net of measurement error bias.
In Study 1, I quantify the impact of failing to account for measurement error on school compositional effects as used in value-added models of educational effectiveness to explain relative school effects. Building on previous studies, multilevel structural equation models are incorporated to control for measurement error and/or sampling error. Study 1a, based on a large sample of English primary students in years one and four (9,059 students from 593 schools), reveals a small, significant and negative compositional effect on students' subsequent mathematics achievement that becomes more negative after controlling for measurement error. Study 1b, based on a large study of Cypriot primary students in year four (1,694 students in 59 schools), shows a small, positive but statistically significant effect that becomes non-significant after controlling for measurement error.
Further analyses with the English data (Study 2) demonstrate a negative compositional effect of school average mathematics achievement on subsequent mathematics self-concept – a Big Fish Little Pond Effect (BFLPE). Adjustments for measurement and sampling error result in more negative BFLPEs. The originality of Study 2 lies in verifying BFLPEs for students as young as five to eight/nine years old. Bridging the findings on students' mathematics self-concept (Study 2) and the findings on students' mathematics achievement (Study 1a), I demonstrate that the prevalence of BFLPEs in the English data partly explains the negative compositional effect of school average mathematics achievement on students' subsequent mathematics achievement.
Lastly, in Study 3 I consider an alternative approach to school accountability from conventional value-added models, namely the Regression Discontinuity (RD) approach. Specifically, I use the English TIMSS 1995 primary (years four and five) and secondary (years eight and nine) data to investigate the effect of one extra year of schooling on students' mathematics achievement and the variability across schools in their absolute effects. The extent to which school composition, as given by school average achievement, correlates with schools' added-year effects is addressed. Importantly, the robustness of the RD estimates to measurement error bias is demonstrated.
My findings have important methodological, substantive and theoretical implications for ongoing debates on school compositional effects on students' outcomes, because nearly all previous research has been based on traditional approaches to multilevel models, which are positively biased due to the failure to control for measurement error.
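The phantom compositional effect can be demonstrated with a short simulation (illustrative only; the thesis uses multilevel structural equation models rather than this naive pooled regression): when the student-level score is measured with error, regressing the outcome on the observed score and its school mean yields a spurious aggregate effect even though the data are generated with no true compositional effect. The reliability, sample sizes, and effect sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_schools, n_students = 200, 30
reliability = 0.6          # assumed reliability of the student test score
true_within_effect = 0.5   # true student-level effect
true_compositional = 0.0   # no true school compositional effect

# True ability: school means differ, students vary around their school mean.
school_mean_ability = rng.normal(0.0, 1.0, n_schools)
ability = rng.normal(school_mean_ability.repeat(n_students), 1.0)
school_id = np.arange(n_schools).repeat(n_students)

# Outcome depends only on the student's own true ability.
y = true_within_effect * ability + rng.normal(0.0, 1.0, ability.size)

# Observed score = true ability + measurement error, with the error variance
# chosen so that var(true) / var(observed) equals the assumed reliability.
error_sd = np.sqrt(ability.var() * (1.0 - reliability) / reliability)
observed = ability + rng.normal(0.0, error_sd, ability.size)

# Naive pooled OLS: outcome on the observed score and its school mean.
school_mean_observed = np.array([observed[school_id == s].mean() for s in range(n_schools)])
X = np.column_stack([np.ones_like(y), observed, school_mean_observed[school_id]])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"student-level estimate: {beta[1]:.3f} (true {true_within_effect})")
print(f"compositional estimate: {beta[2]:.3f} (true {true_compositional}) <- phantom effect")
```

The attenuated individual coefficient and the non-zero school-mean coefficient arise purely from the unreliability of the level-1 score; correcting for that unreliability (and for sampling error in the school mean) removes the artifact, which is the role the multilevel SEM adjustments play in Studies 1 and 2.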
|
10 |
Uma contribuição para o estudo sobre o erro não amostral na pesquisa de mercado
Zanotta, Egydio Barbosa 25 October 2013 (links)
The main objective of this study is to present a contribution to the study of non-sampling error in marketing research. To reach this goal we used bibliographical research to identify and classify the non-sampling errors present in a research project, assuming that such errors occur at each step of the process. Finally, we sought ways to address these problems, taking into account the most recent advances in knowledge in this area. It is important to emphasize that we also included notes and theories originating from our own experience, encouraged by the opinion of our thesis advisor. / The central objective of this study is to provide a contribution to the study of non-sampling error in market research. To achieve this objective, we used a bibliographic review to identify and classify the non-sampling errors present in a research project, starting from the assumption that such errors are present at every stage. We then investigated how to address these errors on the basis of the state of the art. It is worth noting that, thanks to the encouragement received from my advisor, we included notes on theories drawn from our own experience and knowledge.
|