241

Developing a basis for characterizing precision of estimates produced from non-probability samples on continuous domains

Cooper, Cynthia 20 February 2006
Graduation date: 2006 / This research addresses sample-process variance estimation on continuous domains, and for non-probability samples in particular. The motivation for the research is a scenario in which a program has collected non-probability samples for which there is interest in characterizing how much an extrapolation to the domain would vary given similarly arranged collections of observations. This research does not address the risk of bias, and a key assumption is that the observations could represent the response on the domain of interest. This excludes any hot-spot monitoring programs. The research is presented as a collection of three manuscripts. The first (to be published in Environmetrics (2006)) reviews and compares model- and design-based approaches for sampling and estimation in the context of continuous domains and promotes a model-assisted sample-process variance estimator. The next two manuscripts are written as companion papers. With the objective of quantifying the uncertainty of an estimator based on a non-probability sample, the proposed approach is to first characterize a class of sets of locations that are similarly arranged to the collection of locations in the non-probability sample, and then to predict the variability of an estimate over that class of sets using the covariance structure indicated by the non-probability sample (assuming that covariance structure is indicative of the covariance structure on the study region). The first of the companion papers discusses characterizing classes of similarly arranged sets with the specification of a metric density. Goodness-of-fit tests are demonstrated on several types of patterns (dispersed, random and clustered) and on a non-probability collection of locations surveyed by the Oregon Department of Fish and Wildlife on the Alsea River basin in Oregon. The second paper addresses predicting the variability of an estimate over sets in a class of sets, using a Monte Carlo process on a simulated response with an appropriate covariance structure.
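The second companion paper's core idea — predicting the variability of an estimate over a class of similarly arranged sets by Monte Carlo on a simulated response — can be sketched roughly as below. This is a minimal illustration, not the thesis's procedure: the class of similarly arranged sets is crudely approximated by random translations of the observed pattern, and the exponential covariance, unit-square domain and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(pts, sill=1.0, scale=0.2):
    # Exponential covariance between all pairs of locations in the unit square.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return sill * np.exp(-d / scale)

def mc_variance(pattern, n_sets=500):
    # Variance of the sample mean over sets "similarly arranged" to the
    # observed pattern, here approximated by random wrapped translations,
    # with a fresh Gaussian-field response simulated for each set.
    estimates = []
    for _ in range(n_sets):
        pts = (pattern + rng.uniform(0, 1, size=2)) % 1.0
        y = rng.multivariate_normal(np.zeros(len(pts)), exp_cov(pts))
        estimates.append(y.mean())
    return np.var(estimates, ddof=1)

observed = rng.uniform(0, 1, size=(30, 2))   # stand-in for survey locations
print(mc_variance(observed))
```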
242

Non-uniform sampling: algorithms and architectures

Luo, Chenchi 09 November 2012
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand on ADCs for higher speed and resolution. The most fundamental challenge in this progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that, when sampling uniformly, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques. Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, channel mismatches make the time-interleaved ADC (TIADC) an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to performance and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block in the calibration architecture, the design of the Farrow-structured adjustable fractional delay (FD) filter is investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters: it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses. Inspired by the theory of compressive sensing, another contribution of this thesis is the use of randomization as a means to overcome the Nyquist rate limit. The thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal, showing that the aliases of the original signal can be shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, so that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework is established to associate the probability mass function of the random sampling intervals or jitters with this alias-shaping effect. Based on the framework, the thesis proposes three random sampling architectures, i.e., SAR, ramp, and level-crossing ADCs, that can be easily implemented from the corresponding standard ADC architectures. Detailed models and simulations are presented to verify the effectiveness of the proposed architectures. A new reconstruction algorithm, called successive sine matching pursuit, is also proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid so that classic signal processing techniques can be applied afterwards.
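To make the Farrow structure concrete, below is a minimal sketch of a plain cubic-Lagrange Farrow fractional delay filter — a textbook baseline, not the thesis's modified, bandwidth-optimized design. The point it illustrates is that the subfilters are fixed, while the adjustable delay d enters only through a Horner-style recombination of their outputs.

```python
import numpy as np

def lagrange_farrow_coeffs(order=3):
    # Fixed Farrow subfilters for a Lagrange fractional-delay interpolator.
    # Writing each Lagrange tap weight as h_k(d) = sum_m C[m, k] * d**m and
    # using h_k(j) = delta_{kj} at the integer nodes, C is an inverse
    # Vandermonde matrix.
    nodes = np.arange(order + 1)
    V = np.vander(nodes, increasing=True)   # V[i, m] = nodes[i]**m
    return np.linalg.inv(V)                 # C[m, k]

def farrow_fractional_delay(x, d, C):
    # v_m = output of the m-th fixed subfilter; the adjustable delay d only
    # appears in the Horner combination, which is the Farrow structure.
    v = [np.convolve(x, C[m], mode="same") for m in range(C.shape[0])]
    y = v[-1]
    for m in range(C.shape[0] - 2, -1, -1):
        y = y * d + v[m]
    return y        # x delayed by d samples, plus a fixed integer offset

C = lagrange_farrow_coeffs(order=3)
t = np.arange(64)
x = np.sin(2 * np.pi * 0.05 * t)
y = farrow_fractional_delay(x, 0.3, C)
```

An optimized design such as the one the thesis proposes would replace the inverse-Vandermonde coefficients with subfilters fitted over a target band of frequencies and delays, but the runtime structure stays the same.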
243

Determinação de contaminantes em óleo diesel por ICP-OES empregando a microextração líquido-líquido dispersiva em fase reversa / Determination of contaminants in diesel oil by ICP-OES using the reverse-phase dispersive liquid-liquid microextraction

Delpino, Isabela Solana 30 March 2017
This work developed a method employing reversed-phase dispersive liquid-liquid microextraction (RP-DLLME) for the extraction and preconcentration of Al, Cd, Cu, Fe, Mn, Ni and Zn in diesel oil samples, with determination of the analytes by inductively coupled plasma optical emission spectrometry (ICP-OES). For the extraction and preconcentration step, a mixture of disperser and extractor solvents was added directly to the heated sample. The phases were then separated by centrifugation, and the sedimented phase was withdrawn for determination of the analytes by ICP-OES. The method was developed using experimental design and process optimization, initially with a fractional factorial design and then with a central composite rotatable design. The variables studied were extraction temperature (60, 70 and 80 °C), sample mass (5, 10 and 15 g), volume of the extraction solution (0.5, 1 and 1.5 mL), extractor concentration (1, 1.5 and 2 mol L-1) and proportion of disperser in the extraction solution (60, 70 and 80%). Quantitative results were obtained under the following conditions: i) extraction temperature: 70 °C; ii) sample mass: 8.5 g; iii) volume of the extraction solution: 1 mL; iv) HNO3 concentration: 2 mol L-1; and v) disperser proportion: 70% (v/v). All experiments were performed by adding 1.0 µg g-1 of Al, Cd, Cu, Fe, Mn, Ni and Zn directly to the diesel samples, using a Conostan® multi-element lubricating oil standard (100 mg L-1); results were expressed as analyte recovery (%). Calibration solutions were prepared in aqueous solution and the extracts were determined directly by ICP-OES. The limits of quantification (LOQ) for Al, Cd, Cu, Fe, Mn, Ni and Zn were 0.0492, 0.0031, 0.0031, 0.0140, 0.0008, 0.0049 and 0.0093 µg g-1, respectively. The accuracy of the method was evaluated by spike-recovery tests; quantitative recoveries were obtained for all analytes, and relative standard deviations were below 7%. The method was then applied to six samples of commercial diesel oil (S-10 and S-500); the concentrations of Al, Fe and Zn were in the range of 0.026 to 0.150 µg g-1, and the remaining analytes were below the LOQ of the method. The method developed using RP-DLLME with subsequent determination by ICP-OES therefore showed adequate precision and accuracy for the determination of Al, Cd, Cu, Fe, Mn, Ni and Zn in diesel oil at low concentrations, minimizing reagent consumption and, consequently, waste generation, while being fast and simple to execute, making it suitable for routine analysis.
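For reference, the spike-recovery arithmetic behind the accuracy assessment is straightforward; the sketch below uses hypothetical measured and blank values, not the study's data.

```python
# Spike-recovery check (illustrative numbers, not from the study).
spiked = 1.0      # µg g-1 of each analyte added to the sample
measured = 0.96   # µg g-1 found in the spiked sample (hypothetical)
blank = 0.02      # µg g-1 found in the unspiked sample (hypothetical)

recovery = 100 * (measured - blank) / spiked
print(f"recovery = {recovery:.1f} %")   # quantitative if close to 100 %
```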
244

A Cox proportional hazard model for mid-point imputed interval censored data

Gwaze, Arnold Rumosa January 2011
There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to have happened between two examination times. However, because research has concentrated on right-censored data, statistical tests and techniques for right censoring are plentiful, while methods for interval censoring are far less abundant. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, a method which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and the interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
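A minimal sketch of the approach described — impute the midpoint of each observed interval, then reuse standard right-censoring machinery to fit the Cox model — assuming the Python `lifelines` package and toy data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Interval-censored toy data: each event is only known to fall in (left, right].
df = pd.DataFrame({
    "left":  [2.0, 5.0, 1.0, 4.0, 3.0, 6.0],
    "right": [4.0, 7.0, 3.0, 8.0, 5.0, 9.0],
    "event": [1, 1, 0, 1, 0, 1],   # 0 = right-censored at time `left`
    "age":   [54, 61, 47, 58, 50, 66],
})

# Mid-point imputation: replace each event interval by its midpoint, then
# treat the data as exactly observed and fit a standard Cox model.
df["time"] = df["left"].where(df["event"] == 0, (df["left"] + df["right"]) / 2)

cph = CoxPHFitter()
cph.fit(df[["time", "event", "age"]], duration_col="time", event_col="event")
cph.print_summary()
```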
245

Computation of estimates in a complex survey sample design

Maremba, Thanyani Alpheus January 2019
Thesis (M.Sc. (Statistics)) -- University of Limpopo, 2019 / This research study demonstrates the complexity involved in complex survey sample design (CSSD). Furthermore, the study proposes methods to account for each step taken in sampling and at the estimation stage, using the theory of survey sampling, CSSD-based case studies and practical implementation based on census attributes. CSSD methods are designed to improve statistical efficiency, reduce costs and improve precision for sub-group analyses relative to simple random sampling (SRS). They are commonly used by statistical agencies as well as development and aid organisations. CSSDs provide one of the most challenging fields for applying statistical methodology. Researchers encounter a vast diversity of unique practical problems in the course of studying populations. These include, inter alia: non-sampling errors, specific population structures, contaminated distributions of study variables, unsatisfactory sample sizes, incorporation of auxiliary information available on many levels, simultaneous estimation of characteristics in various sub-populations, integration of data from many waves or phases of the survey, and incompletely specified sampling procedures accompanying published data. While the study has not exhausted all the available real-life scenarios, it outlines potential problems, illustrates them with examples, and suggests appropriate approaches at each stage. Dealing with the attributes of CSSDs mentioned above brings about the need to formulate sophisticated statistical procedures dedicated to the specific conditions of a sample survey. CSSD methodologies give rise to a wide variety of approaches and procedures that borrow strength from virtually all branches of statistics. The application of various statistical methods, from sample design to weighting and estimation, ensures that optimal estimates of a population and of various domains are obtained from the sample data. CSSDs are probability sampling methodologies from which inferences are drawn about the population. The methods used in producing estimates include adjustment for unequal probability of selection (resulting from stratification, clustering and probability proportional to size (PPS)), non-response adjustments and benchmarking to auxiliary totals. When estimates of survey totals, means and proportions are computed using various methods, the results do not differ, provided the estimates are calculated for planned domains that are taken into account in sample design and benchmarking. In contrast, when measures of precision such as standard errors and coefficients of variation are produced, they yield different results depending on the extent to which the design information is incorporated during estimation. The literature reveals that most statistical computer packages assume an SRS design when estimating variances. The replication method was therefore used to calculate measures of precision that take into account all the sampling parameters and weighting adjustments computed in the CSSD process. The creation of replicate weights and the estimation of variances were done using WesVar, a statistical computer package capable of producing statistical inference from data collected through CSSD methods. Keywords: Complex sampling, Survey design, Probability sampling, Probability proportional to size, Stratification, Area sampling, Cluster sampling.
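As one concrete instance of the replication idea (WesVar offers several replication schemes; this generic delete-one jackknife for a weighted total is a sketch, not WesVar's implementation):

```python
import numpy as np

def jk1_total_and_variance(y, w):
    # Delete-one (JK1) jackknife: drop each unit in turn, rescale the
    # remaining weights, and measure how much the weighted total moves.
    # The design weights thereby enter the variance, instead of assuming
    # simple random sampling as many packages do by default.
    n = len(y)
    theta = np.sum(w * y)                      # full-sample weighted total
    reps = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        w_rep = w[keep] * n / (n - 1)          # replicate weights
        reps[i] = np.sum(w_rep * y[keep])
    var = (n - 1) / n * np.sum((reps - theta) ** 2)
    return theta, var

rng = np.random.default_rng(1)
y = rng.normal(50, 10, size=40)                # study variable
w = rng.uniform(1, 5, size=40)                 # design weights
theta, var = jk1_total_and_variance(y, w)
print(theta, var ** 0.5)                       # total and its standard error
```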
246

Determining the most appropiate [sic] sampling interval for a Shewhart X-chart

Vining, G. Geoffrey January 1986
A common problem encountered in practice is determining when it is appropriate to change the sampling interval for control charts. This thesis examines this problem for Shewhart X̅ charts. Duncan's economic model (1956) is used to develop a relationship between the most appropriate sampling interval and the present rate of "disturbances," where a disturbance is a shift to an out-of-control state. A procedure is proposed which switches the interval to convenient values whenever a shift in the rate of disturbances is detected. An example using simulation demonstrates the procedure. / M.S.
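A rough sketch of such a switching procedure is below. The 1/sqrt(rate) scaling used to map the disturbance rate to a target interval is an illustrative placeholder, not the relationship derived from Duncan's model in the thesis.

```python
import numpy as np

CONVENIENT_INTERVALS = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # hours

def choose_interval(disturbance_rate, k=1.0):
    # Shrink the interval as disturbances (shifts to an out-of-control
    # state) become more frequent, then snap to a convenient value.
    target = k / np.sqrt(disturbance_rate)    # placeholder scaling
    return CONVENIENT_INTERVALS[np.argmin(np.abs(CONVENIENT_INTERVALS - target))]

# Estimate the current disturbance rate from recent out-of-control signals
# (hypothetical numbers) and switch only when the chosen interval changes.
recent_shifts = 3
window_hours = 160.0
print(choose_interval(recent_shifts / window_hours))
```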
247

A study of sex/age ratios in wild ungulate populations : an approach to designing an appropriate sampling strategy for estimating the structure of wild ungulate populations on Rooipoort Nature Reserve

Laubscher, Sarah-Jane March 2000
Thesis (MSc)--Stellenbosch University, 2000. / This thesis investigates the population structure of a number of ungulate species occurring on the Rooipoort private reserve. Specifically, the study estimates, from the data collected, the ratio of males to females and of calves to cows within each species population under observation. Data were also analysed to ascertain the distribution patterns of the species in question in relation to vegetation type and habitat. Distribution data were additionally compared with distribution data collected on Rooipoort in an earlier period, to determine whether any change had occurred in the distribution patterns of the ungulates concerned. Through analysis of both the sex/age data and the distribution data, one of the main objectives of the study was to determine the most appropriate time of year, the duration, and the management costs involved in undertaking sex/age counts on Rooipoort. Results of the study were also compared with existing population models of ungulates on the reserve. The results indicated that a single monthly sex/age count or, in some cases, even three consecutive monthly counts to determine age ratios would be insufficient to deliver a reliable estimate of population structure; a number of counts would have to be carried out throughout the year to obtain reliable estimates. Distribution data revealed that all habitat/vegetation types on Rooipoort would have to be covered in order to sample all of the species in question effectively.
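A small simulation makes the sampling argument concrete: single-month calf/cow ratio estimates scatter widely around the true value, while pooling counts across the year stabilizes the estimate. All numbers here are hypothetical, not from the Rooipoort data.

```python
import numpy as np

rng = np.random.default_rng(7)
true_ratio = 0.40                                  # true calves-per-cow ratio

cows_seen = rng.poisson(60, size=12)               # cows counted each month
calves_seen = rng.binomial(cows_seen, true_ratio)  # calves counted each month

monthly = calves_seen / cows_seen                  # single-month estimates
annual = calves_seen.sum() / cows_seen.sum()       # pooled over the year
print(monthly.round(2))   # individual months scatter widely around 0.40
print(round(annual, 2))   # repeated counts give a far more stable estimate
```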
248

Incorporating measurement error and density gradients in distance sampling surveys

Marques, Tiago Andre Lamas Oliveira January 2007
Distance sampling is one of the most commonly used methods for estimating density and abundance. Conventional methods are based on the distances of detected animals from the center of point transects or the center line of line transects. These distances are used to model a detection function: the probability of detecting an animal, given its distance from the line or point. The probability of detecting an animal in the covered area is given by the mean value of the detection function over the available distances. Given this probability, a Horvitz-Thompson-like estimator of abundance for the covered area follows, hence using a model-based framework. Inferences for the wider survey region are justified using the survey design. Conventional distance sampling methods are based on a set of assumptions. In this thesis I present results that extend distance sampling on two fronts. Firstly, estimators are derived for situations in which there is measurement error in the distances. These estimators use information about the measurement error in two ways: (1) a biased estimator based on the contaminated distances is multiplied by an appropriate correction factor, which is a function of the errors (PDF approach), and (2) the errors are cast into a likelihood framework that allows parameter estimation in their presence (likelihood approach). Secondly, methods are developed that relax the conventional assumption that the distribution of animals is independent of distance from the lines or points (usually guaranteed by appropriate survey design). In particular, the new methods deal with the case where animal density gradients are caused by non-random sampler allocation, for example transects placed along linear features such as roads or streams. This is dealt with separately for line and point transects, and at a later stage an approach for combining the two is presented. A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
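The conventional baseline summarized in the first paragraph can be sketched in a few lines: fit a half-normal detection function to line-transect distances by maximum likelihood, then form the Horvitz-Thompson-like density estimate. The distances here are hypothetical, and this shows only conventional distance sampling, not the thesis's measurement-error or density-gradient extensions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Perpendicular distances (km) of detections from the line (hypothetical),
# truncated at w; L is total transect length (km).
x = np.array([0.01, 0.03, 0.02, 0.07, 0.05, 0.01, 0.04, 0.06, 0.02, 0.03])
w, L = 0.1, 50.0
grid = np.linspace(0, w, 1000)

def negloglik(sigma):
    # Half-normal detection g(x) = exp(-x^2 / (2 sigma^2)); each detected
    # distance has density g(x) normalized over [0, w].
    g = np.exp(-x**2 / (2 * sigma**2))
    norm = np.trapz(np.exp(-grid**2 / (2 * sigma**2)), grid)
    return -np.sum(np.log(g / norm))

sigma = minimize_scalar(negloglik, bounds=(1e-4, w), method="bounded").x

# Effective strip half-width mu = integral of g over [0, w]; the mean
# detection probability is mu / w, giving the Horvitz-Thompson-like estimate.
mu = np.trapz(np.exp(-grid**2 / (2 * sigma**2)), grid)
D_hat = len(x) / (2 * L * mu)      # estimated density, animals per km^2
print(sigma, D_hat)
```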
249

The relationship between pollen rain, vegetation, climate, meteorological factors and land-use in the PWV, Transvaal

Cadman, Ann January 1991
A two-year analysis of pollen rain was conducted in the Pretoria-Witwatersrand-Vereeniging district of the Transvaal, South Africa. Poaceae was the major component of the pollen assemblage, comprising 52% regionally. Of the total pollen count, 58.8% was non-seasonal and present throughout the year. During the analysis it became apparent that fungal spores dominated the atmospheric content, accounting for 94% of total airspora, considered here to include pollen and fungal spores. [Abbreviated abstract. Open document to view full version.] / AC2017
250

Development of a grassland monitoring system for the management of the Wolkberg Wilderness Area.

Coombes, Peter John. January 1991
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, for the degree of Master of Science. / This study aimed to investigate, within the contemporary philosophy of science, key aspects of the paradigm formulated by the national Vegetation Monitoring Work team (VMW), and thereby develop a grassland monitoring system to place the management of the Wolkberg Wilderness Area (WWA) on a testable basis. [Abbreviated abstract.] / AC2017
