11 |
System Validation via Constraint Modeling / Waters, Richard C., 01 February 1988 (has links)
Constraint modeling could be a very important system validation method because its abilities are complementary to both testing and code inspection. In particular, even though the ability of constraint modeling to find errors is limited by the simplifications that are introduced when making a constraint model, constraint modeling can locate important classes of errors that are caused by non-local faults (and are therefore hard to find with code inspection) and that manifest themselves as failures only in unusual situations (and are therefore hard to find with testing).
|
12 |
Upgrade and validation of PHX2MCNP for criticality analysis calculations for spent fuel storage pools / Larsson, Cecilia, January 2010 (has links)
A few years ago Westinghouse started developing a new method for criticality calculations for spent nuclear fuel storage pools, called "PHOENIX-to-MCNP" (PHX2MCNP). PHX2MCNP transfers burn-up data from the code PHOENIX into MCNP in order to calculate criticality. This thesis describes work to further validate the new method: first by validating the software MCNP5 at water temperatures above room temperature and, in a second step, by continuing the development of the method through a new feature added to the old script. Finally, two studies were made, one examining the effect of decay time on criticality and one examining the possibility of limiting the number of transferred isotopes used in the calculations. MCNP was validated against 31 experiments and a statistical evaluation of the results was performed. The evaluation showed no correlation between the water temperature of the pool and the criticality, demonstrating that MCNP5 can be used for criticality calculations in storage pools at higher water temperatures. The new version of the script, PHX2MCNP version 2, can distribute the burnable absorber gadolinium into several radial zones in one pin. The decay time study showed that, as expected, the maximum criticality occurs immediately after takeout from the reactor. The last study, evaluating the possibility of limiting the isotopes transferred from PHOENIX to MCNP, showed that Case A, the case with the smallest number of isotopes, is conservative for all sections of the fuel element. Case A, which contains only some of the actinides plus the strongest of the burnable absorbers, gadolinium-155, could therefore be used in future calculations. Finally, the need for further validation of the method is discussed.
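As a rough illustration of the kind of statistical evaluation described above, the sketch below tests for a correlation between pool water temperature and calculated criticality (k-eff). The data values, array names, and significance threshold are hypothetical placeholders, not figures from the thesis.

```python
# Hypothetical data: the thesis evaluated 31 benchmark experiments; six
# made-up points are used here purely to show the shape of the check.
import numpy as np
from scipy import stats

water_temp_c = np.array([20.0, 25.0, 40.0, 55.0, 70.0, 85.0])       # hypothetical
k_eff = np.array([0.9931, 0.9942, 0.9928, 0.9935, 0.9940, 0.9930])  # hypothetical

# Test whether calculated k-eff varies systematically with temperature.
r, p_value = stats.pearsonr(water_temp_c, k_eff)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
if p_value > 0.05:  # illustrative threshold
    print("No statistically significant temperature/criticality correlation.")
```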
|
13 |
A Framework for Validating Reusable Behavioral Models in Engineering Design / Malak, Richard J., Jr., 28 April 2005 (has links)
Designers commonly use computer-based modeling and simulation methods to predict artifact behavior. Such predictions are central to engineering decision making. As such, determining how well they correspond to actual artifact behavior is a problem of critical importance. A significant aspect of this problem is determining whether the model used to generate the behavioral predictions (i.e., the behavioral model) reflects the relevant physical phenomena. The process of doing this is referred to as behavioral model validation.
Prior works take an integrated approach to validation in which model creators and model users interact throughout the modeling and simulation process. Although effective for many problems, this type of approach is not appropriate for model reuse scenarios. Model validation requires knowledge about the model and its use. In model reuse scenarios, model creators and model users operate in independent processes with limited inter-process communication. The core challenge to behavioral model validation in this setting is that, in general, neither model creators nor model users possess the requisite knowledge to perform behavioral model validation.
Presented in this thesis is a conceptual framework for validating reusable behavioral models in model reuse scenarios. This framework solves the problem of creator-user separation by defining specific validation responsibilities for each party and an interface by which they communicate. This interface consists of a formal description of the model's limitations and the domain over which these limitations are known to be true. The framework is illustrated through basic engineering examples.
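A minimal sketch of what such a creator-user interface could look like in code, assuming a hypothetical Python rendering: the model creator publishes a formal validity domain alongside the model, and the model user checks each query against it before trusting a prediction. All class and parameter names here are illustrative, not the thesis's notation.

```python
# Sketch only: ValidityDomain and ReusableBehavioralModel are hypothetical
# names standing in for the framework's creator/user interface.
from dataclasses import dataclass

@dataclass
class ValidityDomain:
    """Input ranges over which the model's limitations are characterized."""
    bounds: dict[str, tuple[float, float]]

    def contains(self, inputs: dict[str, float]) -> bool:
        return all(lo <= inputs[name] <= hi
                   for name, (lo, hi) in self.bounds.items())

@dataclass
class ReusableBehavioralModel:
    predict: callable       # supplied by the model creator
    domain: ValidityDomain  # creator's formal description of known limitations

    def validated_predict(self, inputs: dict[str, float]) -> float:
        # Model user's responsibility: only trust predictions inside the domain.
        if not self.domain.contains(inputs):
            raise ValueError("Query outside the model's validated domain")
        return self.predict(inputs)

# Hypothetical use: a torque model validated only for speeds of 100-3000 rpm.
model = ReusableBehavioralModel(
    predict=lambda x: 0.05 * x["speed_rpm"],
    domain=ValidityDomain({"speed_rpm": (100.0, 3000.0)}),
)
print(model.validated_predict({"speed_rpm": 1500.0}))
```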
|
14 |
Reliable and Efficient Programming of Distributed Parallel Architectures (Programmation fiable et efficace des architectures parallèles distribuées) / Jézéquel, Jean-Marc, January 1997 (has links) (PDF)
Habilitation thesis (Habilitation à diriger des recherches): Computer Science: Rennes 1: 1997. / Bibliography pp. 119-132.
|
15 |
Denotational Translation Validation / Govereau, Paul, 02 January 2013 (has links)
In this dissertation we present a simple and scalable system for validating the correctness of low-level program transformations. Proving that program transformations are correct is crucial to the development of security-critical software tools. We achieve a simple and scalable design by compiling sequential low-level programs to synchronous data-flow programs. These data-flow programs are a denotation of the original programs, representing all of the relevant aspects of the program semantics. We then check that the two denotations are equivalent, which implies that the program transformation is semantics-preserving. Our denotations are computed by means of symbolic analysis. In order to achieve our design, we have extended symbolic analysis to arbitrary control-flow graphs. To this end, we have designed an intermediate language called Synchronous Value Graphs (SVG), which is capable of representing our denotations for arbitrary control-flow graphs; we have built an algorithm for computing SVG from normal assembly language; and we have given a formal model of SVG which allows us to simplify and compare denotations. Finally, we report on our experiments with LLVM M.D., a prototype denotational translation validator for the LLVM optimization framework. / Engineering and Applied Sciences
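The following toy sketch is in the spirit of this approach, not the actual SVG or LLVM M.D. implementation: both the original and the transformed straight-line program are symbolically evaluated to normalized expression trees (a crude stand-in for a denotation), which are then compared for equality. The instruction format and rewrite rules are hypothetical simplifications.

```python
# Translation validation by symbolic analysis, heavily simplified.

def symbolic_eval(program, inputs):
    """Run three-address code over symbolic values; return env of expressions."""
    env = {name: ("sym", name) for name in inputs}
    for dst, op, a, b in program:
        env[dst] = normalize((op, env.get(a, a), env.get(b, b)))
    return env

def normalize(expr):
    """Tiny rewrite system: canonicalize commutative ops, fold x*2 -> x+x."""
    if not isinstance(expr, tuple) or expr[0] == "sym":
        return expr
    op, lhs, rhs = expr
    if op in ("add", "mul") and repr(lhs) > repr(rhs):
        lhs, rhs = rhs, lhs                  # commutativity
    if op == "mul" and rhs == 2:
        return normalize(("add", lhs, lhs))  # strength-reduction identity
    return (op, lhs, rhs)

original    = [("t", "mul", "x", 2)]         # t := x * 2
transformed = [("t", "add", "x", "x")]       # t := x + x  (after optimization)

# Compare the normalized "denotations" of the two programs.
d1 = symbolic_eval(original, ["x"])
d2 = symbolic_eval(transformed, ["x"])
print("equivalent" if d1["t"] == d2["t"] else "potential miscompilation")
```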
|
16 |
Quantitative data validation (automated visual evaluations) / Martin, Anthony John Michael, January 1999 (links)
Historically, validation has been performed on a case-study basis employing visual evaluations, gradually inspiring confidence through continual application. At present, the method of visual evaluation is the most prevalent form of data analysis, as the brain is the best pattern-recognition device known. However, the human visual/perceptual system is a complicated mechanism, prone to many types of physical and psychological influences. Fatigue is a major source of inaccuracy in the results of subjects performing complex visual evaluation tasks, whilst physical and experiential differences, along with age, have an enormous bearing on the visual evaluation results of different subjects. It is to this end that automated methods of validation must be developed to produce repeatable, quantitative and objective verification results. This thesis details the development of the Feature Selective Validation (FSV) method. The FSV method comprises two component measures based on amplitude differences and feature differences. These measures are combined, employing a measured level of subjectivity, to form an overall assessment of the comparison in question, or global difference. The three measures within the FSV method are strengthened by statistical analysis in the form of confidence levels based on amplitude, feature or global discrepancies between compared signals. Highly detailed diagnostic information on the location and magnitude of discrepancies is also made available through the employment of graphical (discrete) representations of the three measures. The FSV method also benefits from the ability to mirror human perception, whilst producing information which directly relates human variability and the confidence associated with it. The FSV method builds on the common language of engineers and scientists alike, employing categories which relate to human interpretations of comparisons, namely: 'ideal', 'excellent', 'very good', 'good', 'fair', 'poor' and 'extremely poor'. Quantitative
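As a loose illustration of the two-component structure described above, the sketch below computes an amplitude-difference measure from the signals themselves and a feature-difference measure from their derivatives, combines them into a global difference measure, and maps the result onto the natural-language categories. The formulas and category thresholds are deliberate simplifications for illustration, not the published FSV definitions.

```python
import numpy as np

def adm(x, y):
    """Point-by-point amplitude disagreement, normalized by mean amplitude."""
    scale = np.mean(np.abs(x)) + np.mean(np.abs(y))
    return np.abs(x - y) / scale

def fdm(x, y):
    """Point-by-point feature (slope) disagreement from first derivatives."""
    dx, dy = np.gradient(x), np.gradient(y)
    scale = np.mean(np.abs(dx)) + np.mean(np.abs(dy))
    return np.abs(dx - dy) / scale

def gdm(x, y):
    """Combine amplitude and feature disagreement into a global measure."""
    return np.sqrt(adm(x, y) ** 2 + fdm(x, y) ** 2)

def grade(value):
    """Map a mean GDM onto FSV-style natural-language categories.
    Thresholds here are illustrative, not standardized boundaries."""
    cats = [(0.1, "excellent"), (0.2, "very good"), (0.4, "good"),
            (0.8, "fair"), (1.6, "poor")]
    return next((name for cut, name in cats if value <= cut), "extremely poor")

t = np.linspace(0, 1, 200)
measured  = np.sin(2 * np.pi * 3 * t)
simulated = 0.95 * np.sin(2 * np.pi * 3 * t + 0.1)
print(grade(np.mean(gdm(measured, simulated))))
```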
|
17 |
Model comparison and assessment by cross validation / Shen, Hui, 11 1900 (has links)
Cross validation (CV) is widely used for model assessment and comparison. In this thesis, we first review and compare three v-fold CV strategies: best single CV, repeated and averaged CV, and double CV. The mean squared errors of the CV strategies in estimating the best predictive performance are illustrated using simulated and real data examples. The results show that repeated and averaged CV is a good strategy and outperforms the other two CV strategies for finite samples, in terms of both the mean squared error in estimating prediction accuracy and the probability of choosing an optimal model.
In practice, when we need to compare many models, the repeated and averaged CV strategy is not computationally feasible. We develop an efficient sequential methodology for model comparison based on CV, which also takes into account the randomness in CV. The number of models is reduced via an adaptive, multiplicity-adjusted sequential algorithm in which poor performers are quickly eliminated. By exploiting matching of individual observations, it is sometimes even possible to establish the statistically significant inferiority of some models with just one execution of CV. This adaptive and computationally efficient methodology is demonstrated on a large cheminformatics data set from PubChem.
Cross-validated mean squared error (CVMSE) is widely used to estimate the prediction mean squared error (MSE) of statistical methods. For linear models, we show how CVMSE depends on the number of folds, v, used in cross validation, the number of observations, and the number of model parameters. We establish that the bias of CVMSE in estimating the true MSE decreases with v and increases with model complexity. In particular, the bias may be very substantial for models with many parameters relative to the number of observations, even if v is large. These results are used to correct CVMSE for its bias. We compare our proposed bias correction with that of Burman (1989) through simulated and real examples. We also illustrate that our method of correcting for the bias of CVMSE may change the results of model selection.
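A minimal sketch of the repeated and averaged v-fold CV strategy discussed above, assuming a placeholder model and data set: v-fold CV is run several times with independent random fold assignments and the resulting error estimates are averaged, which damps the randomness of any single split.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Placeholder data and model, standing in for whatever is being assessed.
X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)
model = LinearRegression()

v, n_repeats = 10, 20
estimates = []
for rep in range(n_repeats):
    folds = KFold(n_splits=v, shuffle=True, random_state=rep)  # fresh split
    mse = -cross_val_score(model, X, y, cv=folds,
                           scoring="neg_mean_squared_error")
    estimates.append(mse.mean())  # one v-fold CVMSE estimate per repeat

print(f"repeated-and-averaged CVMSE: {np.mean(estimates):.3f} "
      f"(spread across single runs: {np.std(estimates):.3f})")
```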
|
18 |
Development and Validation of a Home Literacy Questionnaire to Assess Emergent Reading Skills of Pre-School Children / Curry, Jennifer E., Unknown Date
No description available.
|
19 |
Examining Thinking Skills in the Context of Large-scale Assessments Using a Validation Approach / Hachey, Krystal, 30 April 2014 (has links)
Large Scale Assessments (LSAs) of student achievement in education serve a variety of purposes, such as comparing educational programs, providing accountability measures, and assessing achievement on a broad range of curriculum standards. In addition to measuring content-related processes such as mathematics or reading, LSAs also focus on thinking-related skills such as lower-level thinking (e.g., understanding concepts) and problem solving. The purpose of the current study was to deconstruct and clarify the mechanisms that make up an LSA, including thinking skills and assessment perspectives, using a validation approach based on the work of Messick (1995) and Kane (1990). The study asked, when examining the design and student data of two LSAs in reading: (a) what common thinking skills are assessed, and (b) what are the LSAs' underlying assessment perspectives? Content analyses were carried out on two LSAs that purported to assess thinking skills in reading: the Pan-Canadian Assessment Program (PCAP) and the Educational Quality and Accountability Office (EQAO). As the two LSAs evaluated reading, the link between reading and thinking was also addressed. Conceptual models were developed and used to examine the assessment framework, test booklets, and scoring guides of the two assessments. In addition, a nonlinear factor analysis was conducted on the EQAO item-level data from the test booklets to examine the dimensionality of the LSA. The most prominent thinking skill referenced in the qualitative analysis of the assessment frameworks, test booklets, and scoring guides was critical thinking, while results from the quantitative analysis revealed that two factors best represented the item-level EQAO data. Overall, the tools provided in the current study can help inform both researchers and practitioners about the interaction between the assessment approach and related thinking skills.
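As a simplified stand-in for the dimensionality check reported above (the thesis used a nonlinear factor analysis suited to item-level data), the sketch below compares one- and two-factor linear factor-analysis fits on hypothetical item responses via average log-likelihood; the data are random placeholders, not EQAO records.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical item-response matrix: 500 students x 12 scored reading items.
items = rng.integers(0, 2, size=(500, 12)).astype(float)

# Compare how well 1- and 2-factor models account for the item covariation.
for k in (1, 2):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(items)
    print(f"{k}-factor model: mean log-likelihood = {fa.score(items):.3f}")
```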
|