About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Implementing confidence bands for simple linear regression in the statistical laboratory PLOTTER program

Arheart, Kristopher Lee January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
2

Test case prioritization

Malishevsky, Alexey Grigorievich 19 June 2003 (has links)
Regression testing is an expensive software engineering activity intended to provide confidence that modifications to a software system have not introduced faults. Test case prioritization techniques help to reduce regression testing cost by ordering test cases in a way that better achieves testing objectives. In this thesis, we are interested in prioritizing to maximize a test suite's rate of fault detection, measured by a metric, APFD, with the goal of detecting regression faults as early as possible during testing. In previous work, several prioritization techniques using low-level code coverage information had been developed. These techniques try to maximize APFD over a sequence of software releases rather than targeting a particular release, and their effectiveness was empirically evaluated. We present a larger set of prioritization techniques that use information at arbitrary granularity levels and incorporate modification information, targeting prioritization at a particular software release. Our empirical studies show significant improvements in the rate of fault detection over randomly ordered test suites. Previous work on prioritization assumed uniform test costs and fault severities, which might not be realistic in many practical cases. We present a new cost-cognizant metric, APFD_c, and prioritization techniques, together with approaches for measuring and estimating these costs. Our empirical studies evaluate prioritization in a cost-cognizant environment. Prioritization techniques have been developed independently with little consideration of their similarities. We present a general prioritization framework that allows us to express existing prioritization techniques by a framework algorithm using parameters and specific functions. Previous research assumed that prioritization was always beneficial if it improved the APFD metric. We introduce a prioritization cost-benefit model that more accurately captures relevant cost and benefit factors and allows practitioners to assess whether it is economical to employ prioritization. Prioritization effectiveness varies across programs, versions, and test suites. We empirically investigate several of these factors on substantial software systems and present a classification-tree-based predictor that can help select the most appropriate prioritization technique in advance. Together, these results improve our understanding of test case prioritization and of the processes by which it is performed. / Graduation date: 2004
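The APFD (Average Percentage of Faults Detected) metric referenced in this abstract has a standard closed form: for a suite of n tests and m faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position in the ordering of the first test that exposes fault i. The sketch below only illustrates that formula and is not code from the thesis; the test names and fault-detection data are hypothetical.

```python
# Illustrative sketch of the APFD metric (not code from the thesis).

def apfd(test_order, detects):
    """Compute APFD for a given test ordering.

    test_order : list of test names, in execution order.
    detects    : dict mapping each fault id to the set of tests that expose it.
    """
    n = len(test_order)
    m = len(detects)
    position = {test: i + 1 for i, test in enumerate(test_order)}  # 1-based positions
    # TF_i: position of the first test in the ordering that exposes fault i.
    tf_sum = sum(min(position[t] for t in exposing_tests)
                 for exposing_tests in detects.values())
    return 1.0 - tf_sum / (n * m) + 1.0 / (2 * n)

# Hypothetical example: 4 tests, 3 faults.
faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t2", "t3"}}
print(apfd(["t1", "t2", "t3", "t4"], faults))  # original order
print(apfd(["t3", "t1", "t2", "t4"], faults))  # reordered: faults exposed earlier, higher APFD
```

Reordering the suite so that fault-revealing tests run earlier raises APFD, which is exactly what the prioritization techniques above aim to do.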
3

A disc-oriented graphics system applied to interactive regression analysis.

Thibault, Philippe C. January 1972 (has links)
No description available.
4

An evaluation of various plotting positions

Rys, Margaret J. (Margaret Joanna) January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Industrial Engineering.
5

The effect of sampling error on the interpretation of a least squares regression relating phosphorus and chlorophyll

Beedell, David C. (David Charles) January 1995 (has links)
Least squares linear regression is a common tool in ecological research. One of the central assumptions of least squares linear regression is that the independent variable is measured without error. But this variable is measured with error whenever it is a sample mean. The significance of such contraventions is not regularly assessed in ecological studies. A simulation program was made to provide such an assessment. The program requires a hypothetical data set, and using estimates of S² it scatters the hypothetical data to simulate the effect of sampling error. A regression line is drawn through the scattered data, and SSE and r² are measured. This is repeated numerous times (e.g. 1000) to generate probability distributions for r² and SSE. From these distributions it is possible to assess the likelihood of the hypothetical data resulting in a given SSE or r². The method was applied to survey data used in a published TP-CHLa regression (Pace 1984). Beginning with a hypothetical, linear data set (r² = 1), simulated scatter due to sampling exceeded the SSE from the regression through the survey data about 30% of the time. Thus chances are 3 out of 10 that the level of uncertainty found in the surveyed TP-CHLa relationship would be observed if the true relationship were perfectly linear. If this is so, more precise and more comprehensive models will only be possible when better estimates of the means are available. This simulation approach should apply to all least squares regression studies that use sampled means, and should be especially relevant to studies that use log-transformed values.
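The simulation described in this abstract is easy to reproduce in outline. The sketch below is a hypothetical reconstruction, not the thesis program: it starts from a perfectly linear data set, scatters the independent variable with normal noise whose variance stands in for the S² estimates, refits an ordinary least squares line on each repetition, and accumulates distributions of SSE and r². The noise model, sample size, and the observed SSE value are all assumptions for illustration.

```python
# Hypothetical reconstruction of the simulation described above (not the thesis code).
# Assumes sampling error in the independent variable is normal with known variance.
import numpy as np

rng = np.random.default_rng(0)

# Perfectly linear "true" data set (r^2 = 1), e.g. log-transformed TP vs. CHLa.
x_true = np.linspace(0.0, 2.0, 30)
y = 1.5 * x_true + 0.2

sampling_var = 0.05          # stand-in for the S^2 estimates of the x sample means
n_reps = 1000

sse_dist, r2_dist = [], []
for _ in range(n_reps):
    # Scatter the independent variable to simulate sampling error in the means.
    x_obs = x_true + rng.normal(0.0, np.sqrt(sampling_var), size=x_true.size)
    slope, intercept = np.polyfit(x_obs, y, 1)     # ordinary least squares fit
    y_hat = slope * x_obs + intercept
    sse = np.sum((y - y_hat) ** 2)
    r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)
    sse_dist.append(sse)
    r2_dist.append(r2)

# Probability that scatter from sampling error alone exceeds an observed SSE
# (the 0.8 threshold is a made-up value standing in for the survey regression's SSE).
observed_sse = 0.8
print(np.mean(np.array(sse_dist) >= observed_sse))
```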
6

The Applications of Regression Analysis in Auditing and Computer Systems

Hubbard, Larry D. 05 1900 (has links)
This thesis describes regression analysis and shows how it can be used in account auditing and in computer system performance analysis. The study first introduces regression analysis techniques and statistics. Then, the use of regression analysis in auditing to detect "out of line" accounts and to determine audit sample size is discussed. These applications led to the concept of using regression analysis to predict job completion times in a computer system. The feasibility of this application of regression analysis was tested by constructing a predictive model to estimate job completion times using a computer system simulator. The predictive model's performance for the various job streams simulated shows that job completion time prediction is a feasible application for regression analysis.
7

Application of a Geographical Information System to Estimate the Magnitude and Frequency of Floods in the Sandy and Clackamas River Basins, Oregon

Brownell, Dorie Lynn 26 May 1995 (has links)
A geographical information system (GIS) was used to develop a regression model designed to predict flood magnitudes in the Sandy and Clackamas river basins in Oregon. Manual methods of data assembly, input, storage, manipulation and analysis traditionally used to estimate basin characteristics were replaced with automated techniques using GIS-based computer hardware and software components. Separate GIS data layers representing (1) stream gage locations, (2) drainage basin boundaries, (3) hydrography, (4) water bodies, (5) precipitation, (6) landuse/land cover, (7) elevation and (8) soils were created and stored in a GIS data base. Several GIS computer programs were written to automate the spatial analysis process needed in the estimation of basin characteristic values using the various GIS data layers. Twelve basin characteristic data parameters were computed and used as independent variables in the regression model. Streamflow data from 19 gaged sites in the Sandy and Clackamas basins were used in a log Pearson Type III analysis to define flood magnitudes at 2-, 5-, 10-, 25-, 50- and 100-year recurrence intervals. Flood magnitudes were used as dependent variables and regressed against different sets of basin characteristics (independent variables) to determine the most significant independent variables used to explain peak discharge. Drainage area, average annual precipitation and percent area above 5000 feet proved to be the most significant explanatory variables for defining peak discharge characteristics in the Sandy and Clackamas river basins. The study demonstrated that a GIS can be successfully applied in the development of basin characteristics for a flood frequency analysis and can achieve the same level of accuracy as manual methods. Use of GIS technology reduced the time and cost associated with manual methods and allowed for more in-depth development and calibration of the regression model. With the development of GIS data layers and the use of GIS-based computer programs to automate the calculation of explanatory variables, regression equations can be developed and applied more quickly and easily. GIS proved to be ideally suited for flood frequency modeling applications by providing advanced computerized techniques for spatial analysis and data base management.
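As a rough illustration of the final regression step described in this abstract, the sketch below fits flood quantiles (taken as given here, standing in for the log-Pearson Type III estimates) against basin characteristics in log space, which is the usual form of regional flood-frequency equations. Every number and variable name is invented for illustration; this is not the study's model or data.

```python
# Hypothetical sketch of a regional flood-frequency regression (not the study's code or data).
# Fits log10(Q_100) = b0 + b1*log10(area) + b2*log10(precip) + b3*pct_above_5000ft.
import numpy as np

# Made-up basin characteristics for a handful of gaged sites.
area = np.array([120.0, 45.0, 300.0, 80.0, 210.0, 150.0])       # drainage area, mi^2
precip = np.array([90.0, 75.0, 110.0, 80.0, 100.0, 95.0])       # average annual precipitation, in
pct_high = np.array([5.0, 0.0, 20.0, 2.0, 12.0, 8.0])           # percent of area above 5000 ft
q100 = np.array([9000.0, 3200.0, 26000.0, 5600.0, 17000.0, 12000.0])  # 100-year peak, cfs

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(precip), pct_high])
coeffs, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

# Predict the 100-year peak discharge for an ungaged basin (hypothetical characteristics).
x_new = np.array([1.0, np.log10(175.0), np.log10(98.0), 10.0])
print(10 ** (x_new @ coeffs))
```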
8

Change-effects analysis for effective testing and validation of evolving software

Santelices, Raul A. 17 May 2012 (has links)
The constant modification of software during its life cycle poses many challenges for developers and testers because changes might not behave as expected or may introduce erroneous side effects. For those reasons, it is of critical importance to analyze, test, and validate software every time it changes. The most common method for validating modified software is regression testing, which identifies differences in the behavior of software caused by changes and determines the correctness of those differences. Most research to this date has focused on the efficiency of regression testing by selecting and prioritizing existing test cases affected by changes. However, little attention has been given to finding whether the test suite adequately tests the effects of changes (i.e., behavior differences in the modified software) and which of those effects are missed during testing. In practice, it is necessary to augment the test suite to exercise the untested effects. The thesis of this research is that the effects of changes on software behavior can be computed with enough precision to help testers analyze the consequences of changes and augment test suites effectively. To demonstrate this thesis, this dissertation uses novel insights to develop a fundamental understanding of how changes affect the behavior of software. Based on these foundations, the dissertation defines and studies new techniques that detect these effects in cost-effective ways. These techniques support test-suite augmentation by (1) identifying the effects of individual changes that should be tested, (2) identifying the combined effects of multiple changes that occur during testing, and (3) optimizing the computation of these effects.
