151 |
Participative goals and assigned goals on inspection performance / Sanne, Murli L, January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
152 |
Multi-defect inspection / Su, Jinn-Yen, January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
153 |
Quality control of Dangshen (黨參, Codonopsis Radix) / 胡建丹 (Hu, Jiandan), 01 January 2013 (has links)
No description available.
|
154 |
Heat transfer studies on canned particulate viscous fluids during end-over-end rotation / Meng, Yang, 1968-, January 2006 (has links)
No description available.
|
155 |
The consequences of drug-related problems in paediatrics / Easton-Carter, Kylie, 1973-, January 2001 (has links)
Abstract not available
|
156 |
Investigation into value difference within the professional culture of nursing / Cook, Peter, 1947-, January 1995 (has links) (PDF)
No description available.
|
157 |
Building a framework for improving data quality in engineering asset management / Lin, Chih Shien, January 2008 (has links)
Asset managers recognise that high-quality engineering data is the key enabler in gaining control of engineering assets. Although they consider accurate, timely and relevant data critical to the quality of their asset management (AM) decisions, evidence abounds of large variations in the quality of data associated with AM. The question therefore arises as to what factors influence data quality (DQ) in engineering AM. Accordingly, the main goal of this research is to investigate DQ issues associated with AM, and to develop an AM-specific DQ framework of factors affecting DQ in AM. The framework is aimed at providing structured guidance for AM organisations to understand, identify and mitigate their DQ problems in a systematic way, and to help them create an information orientation that achieves greater AM performance.
|
158 |
An experimental study of cost cognizant test case prioritization / Goel, Amit, 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, that incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems.
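To make the two metrics concrete, the following minimal Python sketch (illustrative, not code from the thesis) computes APFD and APFD_c for one test ordering, using the standard formulations from the prioritization literature and assuming every fault is detected by some test in the suite; all function names and the sample data are made up.

```python
from typing import Dict, Sequence, Set

def first_detection_positions(order: Sequence[str],
                              detects: Dict[str, Set[int]]) -> Dict[int, int]:
    # Map each fault to TF_i: the 1-based position of the first test that
    # exposes it. detects[test] is the set of faults that test reveals.
    tf: Dict[int, int] = {}
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, set()):
            tf.setdefault(fault, pos)
    return tf

def apfd(order, detects, num_faults):
    # Classic APFD, which assumes uniform test and fault costs:
    #   APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n)
    n = len(order)
    tf = first_detection_positions(order, detects)
    return 1 - sum(tf.values()) / (n * num_faults) + 1 / (2 * n)

def apfd_c(order, detects, cost, severity):
    # Cost-cognizant variant: weight each fault i by its cost (severity f_i)
    # and each test j by its execution cost t_j. A fault first detected at
    # position TF_i earns the test costs from TF_i to the end of the suite,
    # minus half the cost of test TF_i itself.
    n = len(order)
    tf = first_detection_positions(order, detects)
    total_test_cost = sum(cost[t] for t in order)
    total_fault_cost = sum(severity.values())
    earned = 0.0
    for fault, pos in tf.items():
        remaining = sum(cost[order[j]] for j in range(pos - 1, n))
        earned += severity[fault] * (remaining - 0.5 * cost[order[pos - 1]])
    return earned / (total_test_cost * total_fault_cost)

# Tiny illustrative example (made-up data):
order = ["t3", "t1", "t2"]
detects = {"t1": {1}, "t3": {2}}               # fault IDs each test exposes
cost = {"t1": 4.0, "t2": 1.0, "t3": 2.0}       # test execution costs
severity = {1: 3.0, 2: 1.0}                    # fault costs
print(apfd(order, detects, num_faults=2))      # ~0.667
print(apfd_c(order, detects, cost, severity))  # ~0.536
```

With uniform costs (every t_j and f_i equal), APFD_c reduces algebraically to APFD, which is why the former is presented as a generalization of the latter.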
In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques, and tradeoffs between techniques.
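The thesis's own techniques are not reproduced here, but a plausible shape for a feedback-based, cost-cognizant technique is a greedy ordering by estimated value per unit cost. In this hedged sketch, structural coverage stands in for the estimated fault-detection value (the thesis's actual estimators, built from operational profiles and timing data, would replace it), and all names are illustrative.

```python
def value_per_cost_order(tests, coverage, cost):
    # Greedy, feedback-based sketch: repeatedly pick the test with the most
    # not-yet-covered elements per unit execution cost, updating the covered
    # set after each pick (the "feedback" the study's findings refer to).
    remaining = set(tests)
    covered = set()
    order = []
    while remaining:
        def gain(t):
            return len(coverage[t] - covered) / cost[t]
        best = max(remaining, key=gain)
        if gain(best) == 0.0:
            # Nothing new to cover: schedule the leftovers cheapest-first.
            order.extend(sorted(remaining, key=lambda t: cost[t]))
            break
        order.append(best)
        remaining.remove(best)
        covered |= coverage[best]
    return order
```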
The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at the function and statement levels; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work. / Graduation date: 2003
|
159 |
Test case prioritization / Chu, Chengyun, 1974-, 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
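As one concrete instance of such an objective function, a simple "total coverage" ordering can be sketched as follows (illustrative, not the paper's code):

```python
def total_coverage_order(tests, coverage):
    # "Total coverage" prioritization: schedule the tests that cover the
    # most program elements first, ignoring overlap between tests (so no
    # feedback from earlier picks).
    return sorted(tests, key=lambda t: len(coverage[t]), reverse=True)
```

An "additional coverage" variant would instead re-rank after each pick to favour elements not yet covered, giving the feedback behaviour discussed in the entry above.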
|
160 |
Economically optimal control charts for two stage sampling / Hall, Kathryn B., 23 January 1990
Control charts are designed to monitor population parameters. Selecting a control chart sampling plan involves determining the frequency of samples, the size of each sample, and the critical values that decide when the system is sending an out-of-control signal. Since the main use of control charts is in industry, a widely accepted measure of a good sampling plan is one that minimizes the total cost of operating the system per unit time.
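As a sketch of the decision rule that such a plan parameterizes, assuming known in-control parameters and illustrative names:

```python
import statistics

def xbar_signal(sample, mu0, sigma, k):
    # X-bar chart decision rule: signal out-of-control when the sample mean
    # falls outside mu0 +/- k * sigma / sqrt(n). mu0 and sigma are the
    # in-control process mean and standard deviation; k is the chart's
    # critical value (k = 3 gives the classical 3-sigma limits).
    half_width = k * sigma / len(sample) ** 0.5
    return abs(statistics.fmean(sample) - mu0) > half_width

print(xbar_signal([10.2, 9.8, 10.5, 10.1], mu0=10.0, sigma=0.4, k=3.0))  # False
```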
Methods for selecting control chart sampling plans for economically optimal X̄ charts are well established. These plans focus on single-stage sampling at each sampling period. However, some populations naturally call for two-stage sampling. Here, the cost of operating a system per unit time is redefined in terms of two-stage sampling plans, and computer search techniques are developed to determine the control chart parameters. First, the sample sizes and critical values are fixed, and Newton's method is used to determine the optimal time between samples. Then, a Hooke-Jeeves search is used to simultaneously determine the optimal critical value, sample sizes, and time between samples; the time between samples must be re-optimized whenever any of the other three parameters changes. Alternative methods are also discussed.
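A generic Hooke-Jeeves pattern search of the kind described might look like the sketch below. It is illustrative rather than the thesis's algorithm; in this application, the objective f would be the expected cost per unit time as a function of the critical value, the sample sizes (rounded to integers in practice), and the time between samples.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    # Hooke-Jeeves pattern search for minimizing f over R^d: exploratory
    # moves perturb one coordinate at a time; a successful exploration
    # triggers a pattern move extrapolating along the improving direction;
    # failure shrinks the step size until it falls below tol.
    def explore(base):
        x = list(base)
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol:
        new = explore(base)
        if f(new) < f(base):
            # Pattern move: extrapolate past `new` along the improving
            # direction, then explore again from that point.
            pattern = [2 * a - b for a, b in zip(new, base)]
            candidate = explore(pattern)
            base = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink
    return base

# Illustrative check on a smooth bowl: converges to about [1.0, -2.0].
print(hooke_jeeves(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0]))
```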
Information from a single sample is usually used to control shifts in both the process mean and variance. With two-stage sampling, this means two additional control charts are used, one for each variance component. The computer algorithm developed for selecting parameters for X̄ charts is adapted by expanding the Hooke-Jeeves search region to a six-dimensional space, now over three critical values, the sample sizes for both stages of sampling, and the time between samples.
These methods are applied to a real data set that requires two-stage sampling. A representative analysis of the sensitivity of the optimal sampling scheme to the input parameters completes the paper. / Graduation date: 1990
|