191

Labeled Sampling Consensus: A Novel Algorithm for Robustly Fitting Multiple Structures Using Compressed Sampling

Messina, Carl J 01 January 2011 (has links)
The ability to robustly fit structures in datasets that contain outliers is a very important task in image processing, pattern recognition and computer vision. Random Sample Consensus, or RANSAC, is a very popular method for this task, due to its ability to handle over 50% outliers. The problem with RANSAC is that it is only capable of finding a single structure. Therefore, if a dataset contains multiple structures, they must be found sequentially by finding the best fit, removing the points, and repeating the process. However, removing incorrect points from the dataset could prove disastrous. This thesis offers a novel approach to sampling consensus that extends its ability to discover multiple structures in a single iteration through the dataset. The process introduced is an unsupervised method, requiring no previous knowledge of the distribution of the input data. It uniquely assigns labels to different instances of similar structures; the algorithm is thus called Labeled Sampling Consensus, or L-SAC. These unique instances tend to cluster around one another, allowing the individual structures to be extracted using simple clustering techniques. Since divisions instead of modes are analyzed, only a single instance of a structure need be recovered. This ability of L-SAC allows a novel sampling procedure to be presented that “compresses” the number of samples required relative to traditional sampling schemes while ensuring all structures have been found. L-SAC is a flexible framework that can be applied to many problem domains.
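
As a point of reference for the approach L-SAC extends, the following is a minimal sketch (not the thesis's algorithm) of classical single-structure RANSAC fitting a 2D line; the function name, iteration count, and tolerance are illustrative. To recover several structures with this baseline one would have to fit, strip inliers, and repeat — exactly the sequential procedure the thesis argues against.

    import numpy as np

    def ransac_line(points, n_iter=500, inlier_tol=0.05, seed=None):
        """Classical single-structure RANSAC for a 2D line ax + by + c = 0 (a^2 + b^2 = 1)."""
        rng = np.random.default_rng(seed)
        best_model, best_inliers = None, np.array([], dtype=int)
        for _ in range(n_iter):
            # Minimal sample: two distinct points determine a candidate line.
            i, j = rng.choice(len(points), size=2, replace=False)
            p, q = points[i], points[j]
            dx, dy = q - p
            norm = np.hypot(dx, dy)
            if norm == 0.0:
                continue
            a, b = -dy / norm, dx / norm          # unit normal of the line
            c = -(a * p[0] + b * p[1])
            # Consensus set: points within inlier_tol of the candidate line.
            dist = np.abs(points @ np.array([a, b]) + c)
            inliers = np.flatnonzero(dist < inlier_tol)
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b, c), inliers
        return best_model, best_inliers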
192

The small-sample power of some nonparametric tests

Gibbons, Jean Dickinson January 1962 (has links)
I. Small-Sample Power of the One-Sample Sign Test for Approximately Normal Distributions. The power function of the one-sided, one-sample sign test is studied for populations which deviate from exact normality, either by skewness, kurtosis, or both. The terms of the Edgeworth asymptotic expansion of order more than N^(-3/2) are used to represent the population density. Three sets of hypotheses and alternatives, concerning the location of (1) the median, (2) the median as approximated by the mean and coefficient of skewness, and (3) the mean, are considered in an attempt to make valid comparisons between the power of the sign test and Student's t test under the same conditions. Numerical results are given for samples of size 10, significance level .05, and several combinations of the coefficients of skewness and kurtosis. II. Power of Two-Sample Rank Tests on the Equality of Two Distribution Functions. A comparative study is made of the power of two-sample rank tests of the hypothesis that both samples are drawn from the same population. The general alternative is that the variables from one population are stochastically larger than the variables from the other. One of the alternatives considered is that the variables in the first sample are distributed as the smallest of k variates with distribution F, and the variables in the second sample are distributed as the largest of these k, i.e., H₁: H = 1 - (1-F)^k, G = F^k. These two alternative distributions are mutually symmetric if F is symmetrical. Formulae, independent of F, are presented for the evaluation of the probability under H₁ of any joint arrangement of the variables from the two samples. A theorem is proved concerning the equality of the probabilities of certain pairs of orderings under assumptions of mutually symmetric populations. The other alternative is that both samples are normally distributed with the same variance but different means, the standardized difference between the two extreme distributions in the first alternative corresponding to the difference between the means. Numerical results of power are tabulated for small sample sizes, k = 2, 3 and 4, and significance levels .01, .05 and .10. The rank tests considered are the most powerful rank test, the one- and two-sided Wilcoxon tests, Terry's c₁ test, the one- and two-sided median tests, the Wald-Wolfowitz runs test, and two new tests called the Psi test and the Gamma test. The two-sample rank test which is locally most powerful against any alternative expressing an arbitrary functional relationship between the two population distribution functions and an unspecified parameter θ is derived and its asymptotic properties studied. The method is applied to two specific functional alternatives, H₁*: H = (1-θ)F^k + θ[1 - (1-F)^k], G = F^k, and H₁**: H = 1 - (1-F)^(1+θ), G = F^(1+θ), where θ ≥ 0, which are similar to the alternative of two extreme distributions. The resulting test statistics are the Gamma test and the Psi test, respectively. The latter test is shown to have desirable small-sample properties. The asymptotic power functions of the Wilcoxon and Wald-Wolfowitz tests are compared for the alternative of two extreme distributions with k = 2, equal sample sizes and significance level .05. / Ph. D.
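
The first part of the abstract lends itself to a small worked example. Under an exactly normal alternative (rather than the Edgeworth densities the thesis studies), the exact power of the one-sided sign test reduces to a binomial computation; the sketch below uses the thesis's n = 10 and α = .05, with all names and the shift value illustrative.

    from scipy.stats import binom, norm

    def sign_test_power(n, alpha, shift, sigma=1.0):
        """Exact power of the one-sided, one-sample sign test of H0: median = 0
        against the normal shift alternative N(shift, sigma^2), shift > 0."""
        # Critical value: smallest c with P(Bin(n, 1/2) >= c) <= alpha.
        c = min(k for k in range(n + 2) if binom.sf(k - 1, n, 0.5) <= alpha)
        p1 = norm.sf(0.0, loc=shift, scale=sigma)  # P(X > 0) under the alternative
        return binom.sf(c - 1, n, p1)              # P(at least c positive signs)

    print(sign_test_power(n=10, alpha=0.05, shift=0.5))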
193

A comparison of sequential probability ratio and generalized attributes acceptance sampling plans

Shaparenko, Raymond Allen January 1983 (has links)
A comparison is made between Wald's Sequential Probability Ratio (SPR) sampling plan, several generalized attributes acceptance sampling plans, and a curtailed single sampling plan. The plans are evaluated with a cost function combining the Average Sample Number (ASN) and the variance of an estimator for the proportion of defective items in a lot. Using a numerical calculation of the defined cost function, the curtailed single sampling plan and a generalized attributes acceptance sampling plan are shown to be better than Wald's SPR in a number of instances for representative operating characteristics. Strictly in terms of ASN, however, Wald's SPR is shown to be better. A computer program is devised which gives a good approximation of the variance of the estimator used for Wald's SPR. / M.S.
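
For readers unfamiliar with Wald's plan, here is a minimal sketch of the binomial sequential probability ratio test; the boundaries are the standard Wald approximations, and the function and parameter names are illustrative, not the thesis's program.

    import math

    def wald_sprt(p0, p1, alpha, beta, items):
        """Wald's SPRT for lot proportion defective, H0: p = p0 vs H1: p = p1 (p1 > p0).
        `items` yields 1 for a defective inspected item and 0 for a good one."""
        log_a = math.log((1 - beta) / alpha)   # cross upward  -> reject the lot
        log_b = math.log(beta / (1 - alpha))   # cross downward -> accept the lot
        llr, n = 0.0, 0
        for defective in items:
            n += 1  # n at termination, averaged over lots, gives the ASN
            llr += math.log(p1 / p0) if defective else math.log((1 - p1) / (1 - p0))
            if llr >= log_a:
                return "reject", n
            if llr <= log_b:
                return "accept", n
        return "undecided", n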
194

The application of statistical quality control to the centrifugal casting of iron pipe

Whaley, Paul Arthur January 1947 (has links)
M.S.
195

Sample and counting variations associated with x-ray flourescence [sic] analysis

Davis, Robert Loyal January 1966 (has links)
M.S.
196

Design and regression estimation in double sampling

Tan, Edith Estillore January 1987 (has links)
Two methods developed to improve regression estimation in double sampling under the superpopulation model approach are examined. One method proposes the use of an alternative double sample regression estimator. The other recommends the use of nonrandom, purposive subsampling plans. Both methods aim to reduce the mean squared errors of regression estimators in double sampling. A major criticism against the superpopulation model approach is its strong dependence on the correctness of the assumed model. Thus, two purposive subsampling plans were considered. The first plan designed subsamples based on the assumption that the superpopulation model was a first order linear model. The second plan selected subsamples that guarded against the occurrence of a second order model. As expected, the designed subsamples without protection can be very sensitive to the presence of a second order linear model. On the other hand, the designed subsamples with protection rendered the double sample regression estimators robust not only to a second order superpopulation model but also fairly robust to other slight model deviations such as variance misspecification. Therefore, the use of designed subsamples with protection against a second order model is suggested whenever a first order superpopulation model is uncertain. Under designed subsamples with or without protection, the alternative double sample regression estimator is not found to be more efficient than the usual double sample regression estimator found in most sampling textbooks. However, the alternative estimator has shown itself to be more efficient under simple random subsampling when the correlation between variables is weak and subsamples are small. / Ph. D.
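
For context, a minimal sketch of the usual textbook regression estimator under double sampling — the benchmark against which the thesis's alternative estimator is compared; the alternative itself is not reproduced here — might look as follows, with illustrative names:

    import numpy as np

    def double_sample_regression_mean(x_phase1, x_sub, y_sub):
        """Usual regression estimator of the mean of y under double sampling:
        a large, cheap first-phase sample measures only x; y is measured on a
        subsample. Inputs are 1-D NumPy arrays."""
        # Least-squares slope fitted on the subsample.
        b = np.cov(x_sub, y_sub, ddof=1)[0, 1] / np.var(x_sub, ddof=1)
        # Shift the subsample mean of y by the slope times the gap between
        # the first-phase and subsample means of x.
        return y_sub.mean() + b * (x_phase1.mean() - x_sub.mean())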
197

Variable sampling interval control charts

Amin, Raid Widad January 1987 (has links)
Process control charts are widely used to display sample data from a process for purposes of determining whether a process is in control, bringing an out-of-control process into control, and monitoring a process to make sure that it stays in control. The usual practice in maintaining a control chart is to take samples from the process at fixed-length sampling intervals. This research investigates a modification of the standard practice in which the sampling interval, or time between samples, is not fixed but can vary depending on what is observed from the data. Variable sampling interval (VSI) process control procedures are considered for monitoring the outcome of a production process. The time until the next sample depends on what is observed in the current sample: sampling is less frequent when the process is at a high level of quality, and vice versa. Properties such as the average number of samples until signal, the average time to signal, and the variance of the time to signal are developed for the variable sampling interval Shewhart and cusum charts. A Markov chain is utilized to approximate the average time to signal and the corresponding variance for the cusum charts. Properties of the variable sampling interval Shewhart chart are investigated through renewal theory and Markov chain approaches for the cases of a sudden and a gradual shift in the process mean, respectively. Also considered is the case of a shift occurring in the time between two samples, without the simplifying assumption that the process mean remains the same from time zero onward. For such a case, the adjusted time to signal and its variance are developed for both the Shewhart and cusum charts. Results show that the variable sampling interval control charts are considerably more efficient than the corresponding fixed sampling interval control charts. It is preferable to use only two sampling intervals, which keeps the complexity of the chart to a reasonable level and has practical implications. This feature should make such charts very appealing for use in industry and other fields of application where control charts are used. / Ph. D.
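
The two-interval rule the study recommends is simple to state in code. A minimal sketch follows; the warning limit and the two interval lengths are illustrative values, not the thesis's.

    def vsi_next_interval(z, warning=1.0, short=0.25, long=2.0):
        """Two-interval VSI rule for a Shewhart chart: sample again quickly when
        the standardized statistic z falls out near the warning limit, and after
        the longer interval when it is near the center line."""
        return short if abs(z) > warning else long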
198

Adaptive Sampling for Targeted Software Testing

Shah, Abhishek January 2024 (has links)
Targeted software testing is a critical task in the development of secure software. Its core challenge is to generate many inputs that reach specific target locations in a given program's code. This task is NP-hard in theory, and real-world programs contain very large input spaces and many lines of code, making it difficult in practice as well. In this thesis, I introduce a new approach to targeted software testing based on adaptive sampling. The key insight is to reduce the original problem to a sequence of approximate counting problems, and I apply this approach in two stages. First, to find a single target-reaching input when no such input is given, I develop a new search algorithm, MC2, that adaptively uses approximate-count feedback to narrow down which input region is more likely to contain a target-reaching input using probabilistic bisection. Second, given a single target-reaching input, I develop a new set-approximation algorithm, ProgramSampler, that adaptively learns an approximation of the set of target-reaching inputs from approximate-count feedback; the set approximation can then be efficiently and uniformly sampled for many target-reaching inputs. Backed by theoretical guarantees, these techniques have been highly effective in practice, outperforming existing methods on average by 1-2 orders of magnitude.
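
To convey the flavor of the first stage — this is not MC2 itself, whose bisection is probabilistic and whose counts come from the program under test; every name and constant here is illustrative — a count-guided bisection over an integer input range can be sketched as:

    import random

    def count_guided_bisection(reaches_target, lo, hi, samples=200, rounds=20):
        """Search [lo, hi) for an input on which reaches_target(x) is True by
        repeatedly halving the range and keeping the half whose sampled
        hit-count estimate is larger."""
        while rounds > 0 and hi - lo > samples:
            rounds -= 1
            mid = (lo + hi) // 2
            hits_lo = sum(reaches_target(random.randrange(lo, mid)) for _ in range(samples))
            hits_hi = sum(reaches_target(random.randrange(mid, hi)) for _ in range(samples))
            lo, hi = (lo, mid) if hits_lo >= hits_hi else (mid, hi)
        # Fall back to a direct scan of the surviving range.
        return next((x for x in range(lo, hi) if reaches_target(x)), None)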
199

An empirical survey of certain key aspects of the use of statistical sampling by South African Registered Auditors accredited by the Johannesburg Securities Exchange

Swanepoel, Elmarie 12 1900 (has links)
Thesis (MAcc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: The quality of external audits has increasingly come under the spotlight over the last decade as a result of a number of audit failures. The use of scientifically based statistical sampling as a sampling technique is allowed, but not required, by the International Standards on Auditing. The science behind this sampling technique can add to the credibility and quality of the audit. Accordingly, the main objective of this study was to explore certain key aspects of the use of statistical sampling as a sampling technique in the audits of financial statements done by South African Registered Auditors accredited by the Johannesburg Stock Exchange (JSE). A literature review of the most recent local and international studies related to the key aspects addressed in this study was done. An empirical study was then done by means of a questionnaire sent to the JSE-accredited auditing firms for completion. The questionnaire focused on what was allowed by the firms’ audit methodologies regarding the key aspects investigated in this study, not on the actual usage of statistical sampling in audits performed by the firms. The following main conclusions were drawn in respect of the four key aspects that were investigated: 1. In investigating the extent to which statistical sampling is used by auditing firms, it was found that the majority of them were allowed to use the principles of statistical sampling. Upon further investigation it was found that only 38% were explicitly allowed to use it in all three sampling steps (size determination, selection of items and evaluation of results). The evaluation step was identified as the most problematic statistical sampling phase. 2. Two reasons why auditors decided not to use statistical sampling as a sampling technique were identified, namely the perceived inefficiency (costliness) of the statistical sampling process, and a lack of understanding, training and experience in the use thereof. 3. In investigating how professional judgement is exercised in the use of statistical sampling, it was found that the audit methodologies of the majority of the auditing firms prescribed the precision and confidence levels to be used; only a minority indicated that they were allowed to adjust these levels using their professional judgement. The partner in charge of the audit was identified as typically responsible for final authorisation of the sampling approach to be followed. 4. It was found that approximately a third of the auditing firms did not use computer software for assistance in using statistical sampling. The majority of the auditing firms did, however, have a written guide on how to use statistical sampling in practice available as a resource to staff. The value of this study lies in its contribution to the existing body of knowledge in South Africa regarding the use of statistical sampling in auditing. Stakeholders in statistical sampling as an auditing technique that can benefit from this study include Registered Auditors in practice, academics, and, from regulatory, education and training perspectives, the Independent Regulatory Board for Auditors and the South African Institute of Chartered Accountants.
200

A Simulation Study Comparing Various Confidence Intervals for the Mean of Voucher Populations in Accounting

Lee, Ihn Shik 12 1900 (has links)
This research examined the performance of three parametric methods for confidence intervals: the classical, the Bonferroni, and the bootstrap-t method, as applied to estimating the mean of voucher populations in accounting. Usually auditing populations do not follow standard models. The population for accounting audits generally is a nonstandard mixture distribution in which the audit data set contains a large number of zero values and a comparatively small number of nonzero errors. This study assumed a situation in which only overstatement errors exist. The nonzero errors were assumed to be normally, exponentially, and uniformly distributed. Five indicators of performance were used. The classical method was found to be unreliable. The Bonferroni method was conservative for all population conditions. The bootstrap-t method was excellent in terms of reliability, but the lower limit of the confidence intervals produced by this method was unstable for all population conditions. The classical method provided the shortest average width of the confidence intervals among the three methods. This study provided initial evidence as to how the parametric bootstrap-t method performs when applied to the nonstandard distribution of audit populations of line items. Further research should provide a reliable confidence interval for a wider variety of accounting populations.
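
Of the three methods studied, the bootstrap-t is the least standard, so a minimal sketch may help. It is a generic implementation for a zero-inflated sample of audit errors, not the study's simulation code, and all names are illustrative.

    import numpy as np

    def bootstrap_t_ci(x, alpha=0.05, n_boot=2000, seed=None):
        """Bootstrap-t confidence interval for the population mean of x."""
        rng = np.random.default_rng(seed)
        n = len(x)
        xbar = x.mean()
        se = x.std(ddof=1) / np.sqrt(n)
        t_star = np.empty(n_boot)
        for i in range(n_boot):
            xb = rng.choice(x, size=n, replace=True)
            se_b = xb.std(ddof=1) / np.sqrt(n)
            # A zero-variance resample (e.g., all zero vouchers) is possible in
            # zero-inflated data; treat its studentized statistic as 0 here.
            t_star[i] = (xb.mean() - xbar) / se_b if se_b > 0 else 0.0
        q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
        # Note the reversal: the upper t* quantile sets the lower limit.
        return xbar - q_hi * se, xbar - q_lo * se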
