101 |
Heat transfer studies on canned particulate viscous fluids during end-over-end rotation / by Yang Meng. Meng, Yang, 1968- January 2006 (has links)
No description available.
|
102 |
The consequences of drug related problems in paediatrics. Easton-Carter, Kylie, 1973- January 2001 (has links)
Abstract not available
|
103 |
Investigation into value difference within the professional culture of nursing. Cook, Peter, 1947- January 1995 (has links) (PDF)
No description available.
|
104 |
Building a framework for improving data quality in engineering asset management. Lin, Chih Shien January 2008 (has links)
Asset managers recognise that high-quality engineering data is the key enabler in gaining control of engineering assets. Although they consider accurate, timely and relevant data as critical to the quality of their asset management (AM) decisions, evidence of large variations in data quality (DQ) associated with AM abounds. Therefore, the question arises as to what factors influence DQ in engineering AM. Accordingly, the main goal of this research is to investigate DQ issues associated with AM, and to develop an AM-specific DQ framework of factors affecting DQ in AM. The framework is aimed at providing structured guidance for AM organisations to understand, identify and mitigate their DQ problems in a systematic way, and to help them create an information orientation that achieves greater AM performance.
|
105 |
An experimental study of cost cognizant test case prioritization. Goel, Amit 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, that incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems.

In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques and tradeoffs between techniques.

The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at the function and statement levels; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work. / Graduation date: 2003
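As a concrete illustration of the cost-cognizant idea, the sketch below computes a cost-weighted rate of fault detection following the commonly published formulation of APFD_c (fault severities weighted by the test cost remaining after each fault is first detected). It is a generic sketch with illustrative inputs, not necessarily the exact definition or tooling used in the thesis.

```python
def apfd_c(test_costs, fault_severities, first_detecting_test):
    """Cost-cognizant weighted average percentage of faults detected.

    test_costs:            per-test costs, in execution order (length n)
    fault_severities:      per-fault costs/severities (length m)
    first_detecting_test:  for each fault, the 0-based index of the first
                           test in the ordering that detects it
    """
    total_test_cost = sum(test_costs)
    total_fault_cost = sum(fault_severities)

    numerator = 0.0
    for severity, tf in zip(fault_severities, first_detecting_test):
        # Test cost remaining from the detecting test onward, counting only
        # half of the detecting test itself (it reveals the fault partway through).
        remaining = sum(test_costs[tf:]) - 0.5 * test_costs[tf]
        numerator += severity * remaining

    return numerator / (total_test_cost * total_fault_cost)


# Example: three tests, two faults; fault 0 is found by test 1, fault 1 by test 0.
print(apfd_c([2.0, 5.0, 1.0], [3.0, 1.0], [1, 0]))
```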
|
106 |
Test case prioritization. Chu, Chengyun, 1974- 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
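As an example of the kind of technique studied in this line of work, the sketch below implements a simple greedy "additional coverage" prioritization: at each step it schedules the not-yet-ordered test that covers the most statements left uncovered, resetting once nothing new can be covered. The coverage data structure and tie-breaking are illustrative assumptions, not the thesis's exact techniques.

```python
def prioritize_by_additional_coverage(coverage):
    """Greedy test-case prioritization by additional statement coverage.

    coverage: dict mapping test name -> set of covered statement ids.
    Returns the test names in prioritized order.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test that adds the most statements not yet covered.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain and covered:
            # No remaining test adds new coverage; reset and re-rank
            # the rest (the "additional" strategy).
            covered = set()
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order


# Example usage with a toy coverage matrix.
cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2}}
print(prioritize_by_additional_coverage(cov))  # ['t1', 't2', 't3', 't4']
```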
|
107 |
Effective Static Debugging via Componential Set-Based Analysis. January 1997 (has links)
Sophisticated software systems are inherently complex. Understanding, debugging and maintaining such systems requires inferring high-level characteristics of the system's behavior from a myriad of low-level details. For large systems, this quickly becomes an extremely difficult task.

MrSpidey is a static debugger that augments the programmer's ability to deal with such complex systems. It statically analyzes the program and uses the results of the analysis to identify and highlight any program operation that may cause a run-time fault. The programmer can then investigate each potential fault site and, using the graphical explanation facilities of MrSpidey, determine if the fault will really happen or whether the corresponding correctness proof is beyond the analysis's capabilities. In practice, MrSpidey has proven to be an effective tool for debugging programs under development and understanding existing programs.

The key technology underlying MrSpidey is componential set-based analysis. This is a constraint-based, whole-program analysis for object-oriented and functional programs. The analysis first processes each program component (e.g., module or package) independently, generating and simplifying a constraint system describing the data flow behavior of that component. The analysis then combines and solves these simplified constraint systems to yield invariants characterizing the run-time behavior of the entire program. This component-wise approach yields an analysis that handles significantly larger programs than previous analyses of comparable accuracy.

The simplification of constraint systems raises a number of questions. In particular, we need to ensure that simplification preserves the observable behavior, or solution space, of a constraint system. This dissertation provides a complete proof-theoretic and algorithmic characterization of the observable behavior of constraint systems, and establishes a close connection between the observable equivalence of constraint systems and the equivalence of regular tree grammars. We exploit this connection to develop a complete algorithm for deciding the observable equivalence of constraint systems, and to adapt a variety of algorithms for simplifying regular tree grammars to the problem of simplifying constraint systems. The resulting constraint simplification algorithms yield an order of magnitude reduction in the size of constraint systems for typical program expressions.
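To give a flavour of what solving a set-constraint system involves, the following toy sketch propagates constraints of the forms {value} <= var and var1 <= var2 to a least fixpoint. Componential set-based analysis derives far richer constraints from program syntax and simplifies them per component as described above; this sketch only illustrates the basic propagation step, and its constraint forms are assumptions made for illustration.

```python
def solve_set_constraints(constants, subsets):
    """Naive fixpoint solver for a toy system of set constraints.

    constants: list of (value, var) pairs meaning  {value} <= var
    subsets:   list of (src, dst) pairs meaning      src   <= dst
    Returns a dict mapping each variable to its least solution (a set of values).
    """
    solution = {}
    for value, var in constants:
        solution.setdefault(var, set()).add(value)
    for src, dst in subsets:
        solution.setdefault(src, set())
        solution.setdefault(dst, set())

    changed = True
    while changed:
        changed = False
        for src, dst in subsets:
            # Propagate everything known about src into dst.
            missing = solution[src] - solution[dst]
            if missing:
                solution[dst] |= missing
                changed = True
    return solution


# Toy flow: x receives 1 and 2, y receives everything x does, z everything y does.
print(solve_set_constraints(constants=[(1, "x"), (2, "x")],
                            subsets=[("x", "y"), ("y", "z")]))
```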
|
108 |
Optimal Quality Control for Oligo-arrays Using Genetic Algorithm. Li, Ya-hui 17 August 2004 (has links)
Oligo arrays are a high-throughput technology widely used in many areas of biological and medical research for quantitative and highly parallel measurements of gene expression. When one faulty step occurs during the synthesis process, it affects all probes using that step. In this thesis, a two-phase genetic algorithm (GA) is proposed to design optimal quality control of oligo arrays for detecting any single faulty step. The first phase performs a wide search to obtain approximate solutions, and the second phase performs a local search on those approximate solutions to reach the optimal solution. In addition, the proposed algorithm can hold many non-duplicate individuals and search multiple regions in parallel. The superior searching capability of the two-phase GA helps us to find the 275 non-equireplicate cases settled by the hill-climbing algorithm. Furthermore, the proposed algorithm also uncovers five more open issues.
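The two-phase structure described above can be sketched generically: a wide genetic search followed by a mutation-only local refinement of the best individual. The encoding, parameters, and toy objective below are placeholder assumptions, not the thesis's actual oligo-array quality-control design problem.

```python
import random

def two_phase_ga(fitness, genome_length, pop_size=50, wide_gens=100, local_gens=50):
    """Generic two-phase genetic algorithm sketch.

    Phase 1 searches widely with crossover and a higher mutation rate;
    phase 2 refines the best individual with mutation-only local search.
    `fitness` maps a 0/1 list of length `genome_length` to a number (higher is better).
    """
    def mutate(ind, rate):
        return [1 - g if random.random() < rate else g for g in ind]

    def crossover(a, b):
        cut = random.randrange(1, genome_length)
        return a[:cut] + b[cut:]

    # Phase 1: wide search over a population of random individuals.
    pop = [[random.randint(0, 1) for _ in range(genome_length)] for _ in range(pop_size)]
    for _ in range(wide_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)), 0.05)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children

    # Phase 2: local search around the best approximate solution found so far.
    best = max(pop, key=fitness)
    for _ in range(local_gens):
        candidate = mutate(best, 0.01)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best


# Toy objective: maximize the number of 1s in the genome.
print(sum(two_phase_ga(lambda ind: sum(ind), genome_length=20)))
```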
|
109 |
Department of Defense quality management systems and ISO 9000:2000 / Lucius, Tommie J. January 2002 (has links) (PDF)
Thesis (M.S.)--Naval Postgraduate School, 2002. / Thesis advisor(s): Michael Boudreau, Ira Lewis. Includes bibliographical references (p. 129-133). Also available online.
|
110 |
An investigation of the type I error rates and power of standard and alternative multivariate tests on means under homogeneous and heterogeneous covariance matrices and multivariate normality and nonnormality / Yockey, Ron David, January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 316-324). Available also in a digital version from Dissertation Abstracts.
|