About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Formal specification and verification of a JVM and its bytecode verifier

Liu, Hanbing 28 August 2008 (has links)
Not available / text
22

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics indicate that testing consumes more than half of a programmer's professional life, yet few programmers like testing, fewer like test design, and only 5% of their education is devoted to testing. The main goal of this research is to evaluate the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions (testing software with no tool versus testing software with a command-line based testing tool) in terms of the time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions (no tool, a command-line based testing tool, and an interactive GUI tool with added functionality) in terms of the time and the number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
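As context for the 80% and 95% targets above: statement coverage is simply the fraction of executable statements that a test suite exercises. A minimal sketch of the measurement follows; the statement identifiers and threshold check are invented for illustration and are not taken from the study.

```java
import java.util.Set;

// Statement coverage as used in the experiments: the fraction of executable
// statements exercised by a test suite, compared against a target threshold.
public final class StatementCoverage {
    static double coverage(Set<Integer> executableStatements, Set<Integer> executedStatements) {
        long hit = executedStatements.stream().filter(executableStatements::contains).count();
        return (double) hit / executableStatements.size();
    }

    public static void main(String[] args) {
        Set<Integer> all = Set.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // invented statement ids
        Set<Integer> run = Set.of(1, 2, 3, 4, 5, 6, 7, 8);        // ids reached by the tests
        System.out.println(coverage(all, run) >= 0.80);           // true: 8/10 = 80%
    }
}
```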
23

A kernel to support computer-aided verification of embedded software

Grobler, Leon D 03 1900 (has links)
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2006. / Formal methods, such as model checking, have the potential to improve the reliability of software. Abstract models of systems are subjected to formal analysis, often revealing subtle defects not discovered by traditional testing.
24

Testing and maintenance of graphical user interfaces

Lelli leitao, Valeria 19 November 2015 (has links)
The software engineering community has, from its beginnings, paid special attention to software quality and reliability. Numerous software testing techniques have been developed to characterize and detect errors in software. Fault models identify and characterize the errors that can affect the different parts of a piece of software. In addition, software quality criteria and their measures make it possible to assess the quality of software code and to detect, early on, code that is potentially error-prone. Static and dynamic analysis techniques examine the software at rest and at run time, respectively, to find errors or take quality measurements. In this thesis, we argue that the same attention must be paid to the quality and reliability of user interfaces (or human-machine interfaces, GUIs) in the software engineering sense of the term. This thesis therefore makes two contributions in the area of user interface testing and maintenance: 1. Classification and mutation of user interface errors. 2. Quality of user interface code. We first propose a GUI fault model. This model was designed from standard GUI concepts in order to identify and classify GUI faults. Through an empirical study of existing Java code, we showed the existence of a recurring bad practice in the development of GUI controllers, the objects that transform the events produced by the user interface into actions. We characterize this new bad practice, which we have named Blob listener in reference to the Blob anti-pattern. We also propose a static analysis that automatically identifies the presence of the Blob listener in Java Swing interface code. / The software engineering community pays special attention to the quality and reliability of software systems. Software testing techniques have been developed to find errors in code. Software quality criteria and measurement techniques have also been assessed to detect error-prone code. In this thesis, we argue that the same attention has to be paid to the quality and reliability of GUIs, from a software engineering point of view. We specifically make two contributions on this topic. First, GUIs can be affected by errors stemming from development mistakes. The first contribution of this thesis is a fault model that identifies and classifies GUI faults. We show that GUI faults are diverse and imply different testing techniques to be detected. Second, like any code artifact, GUI code should be analyzed statically to detect implementation defects and design smells. As for the second contribution, we focus on design smells that can affect GUIs specifically. We identify and characterize a new type of design smell, called Blob listener. It occurs when a GUI listener, which gathers events to treat and transform into commands, can produce more than one command. We propose a systematic static code analysis procedure that searches for Blob listeners, which we implement in a tool called InspectorGuidget. The experiments we conducted exhibit positive results regarding the ability of InspectorGuidget to detect Blob listeners. To counteract the use of Blob listeners, we propose good coding practices for the development of GUI listeners.
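The Blob listener smell described above concerns GUI listeners such as Java Swing's ActionListener. The following is a minimal hypothetical sketch of what such a listener looks like; the widget names and commands are invented, and this is not code from the thesis or from InspectorGuidget.

```java
import java.awt.FlowLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

// Hypothetical illustration of the "Blob listener" smell: one listener is
// registered on several widgets and dispatches on the event source, so a
// single listener ends up producing several unrelated commands.
public class BlobListenerExample implements ActionListener {
    private final JButton open = new JButton("Open");
    private final JButton save = new JButton("Save");
    private final JButton quit = new JButton("Quit");

    BlobListenerExample() {
        // The same listener handles all three buttons.
        open.addActionListener(this);
        save.addActionListener(this);
        quit.addActionListener(this);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        // Dispatching on the event source to decide which command to run is
        // the pattern the abstract names a Blob listener.
        Object src = e.getSource();
        if (src == open) {
            openFile();
        } else if (src == save) {
            saveFile();
        } else if (src == quit) {
            System.exit(0);
        }
    }

    private void openFile() { /* invented command */ }
    private void saveFile() { /* invented command */ }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Blob listener demo");
            BlobListenerExample ui = new BlobListenerExample();
            frame.setLayout(new FlowLayout());
            frame.add(ui.open);
            frame.add(ui.save);
            frame.add(ui.quit);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

One natural remedy is the opposite structure: one listener per widget, each producing a single command.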
25

Efficient Neural Network Verification Using Branch and Bound

Wang, Shiqi January 2022 (has links)
Neural networks have demonstrated great success in modern machine learning systems. However, they remain susceptible to incorrect corner-case behaviors, often behaving unpredictably and producing surprisingly wrong results. It is therefore desirable to formally guarantee their trustworthiness with respect to certain robustness properties when they are applied in safety- or security-sensitive systems like autonomous vehicles and aircraft. Unfortunately, the task is extremely challenging due to the complexity of neural networks, and traditional formal methods were not efficient enough to verify practical properties. Recently, the Branch and Bound (BaB) framework has been extended to neural network verification and has shown great success in accelerating verification. This dissertation focuses on state-of-the-art neural network verifiers using BaB. We first introduce two efficient neural network verifiers, ReluVal and Neurify, which use a basic BaB approach involving two main steps: (1) they recursively split the original verification problem into easier independent subproblems by splitting input or hidden neurons; (2) for each subproblem, we propose an efficient and tight bound propagation method called symbolic interval analysis, which produces sound estimated output bounds using convex linear relaxations. Both ReluVal and Neurify are three orders of magnitude faster than the previous state-of-the-art formal analysis systems on standard verification benchmarks. However, basic BaB approaches like Neurify have to encode each subproblem as a Linear Programming (LP) problem and solve it with expensive LP solvers, which significantly limits overall efficiency. This is because each BaB step introduces neuron split constraints (e.g., a ReLU neuron constrained to be larger or smaller than 0), which are hard to handle with existing efficient bound propagation methods. We propose novel bound propagation methods, 𝛼-CROWN and its improved variant 𝛽-CROWN, which solve the verification problem by optimizing Lagrangian multipliers 𝛼 and 𝛽 with gradient ascent, without calling any expensive LP solvers. They are built on previous work CROWN, a generalized efficient bound propagation method using linear relaxation. BaB verification using 𝛼-CROWN and 𝛽-CROWN can not only provide tighter output estimates than most bound propagation methods but also fully leverage GPU acceleration with massive parallelization. Combining our methods with BaB yields the state-of-the-art verifier 𝛼,𝛽-CROWN (alpha-beta-CROWN), the winning tool of the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. 𝛼,𝛽-CROWN can be three orders of magnitude faster than LP-solver-based BaB verifiers and is notably faster than all existing approaches on GPUs. We further generalize 𝛽-CROWN and propose an efficient iterative approach that can tighten all intermediate-layer bounds under neuron split constraints and strengthen bound tightness without LP solvers. Used within BaB, this new approach can greatly improve the efficiency of 𝛼,𝛽-CROWN, especially on several challenging benchmarks. Lastly, we study verifiable training, which incorporates verification properties into training procedures to enhance the verifiable robustness of trained models and to scale verification to larger models and datasets. We propose two general verifiable training frameworks: (1) MixTrain, which can significantly improve the efficiency and scalability of verifiable training, and (2) adaptive verifiable training, which can improve the verifiable robustness of trained models by accounting for label similarity. The combination of verifiable training and BaB-based verifiers opens promising directions for more efficient and scalable neural network verification.
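The bound propagation the abstract refers to can be illustrated with plain interval arithmetic, a much coarser relative of the symbolic interval analysis and linear relaxations used by ReluVal, Neurify and CROWN. The sketch below (weights, bias and input box are invented) propagates an input box through one affine layer followed by ReLU.

```java
// Simplified interval bound propagation through one affine layer + ReLU.
// This is plain interval arithmetic, a coarser method than the symbolic
// interval analysis and linear relaxations described in the abstract.
public final class IntervalPropagation {
    // Given input bounds lo/hi, weights W and bias b, compute sound output bounds.
    static double[][] propagate(double[] lo, double[] hi, double[][] W, double[] b) {
        int outDim = W.length, inDim = lo.length;
        double[] outLo = new double[outDim];
        double[] outHi = new double[outDim];
        for (int j = 0; j < outDim; j++) {
            double l = b[j], h = b[j];
            for (int i = 0; i < inDim; i++) {
                double w = W[j][i];
                if (w >= 0) { l += w * lo[i]; h += w * hi[i]; }
                else        { l += w * hi[i]; h += w * lo[i]; }
            }
            // ReLU is monotone, so clamping both bounds at 0 keeps them sound.
            outLo[j] = Math.max(0.0, l);
            outHi[j] = Math.max(0.0, h);
        }
        return new double[][] { outLo, outHi };
    }

    public static void main(String[] args) {
        double[] lo = {-1.0, -1.0}, hi = {1.0, 1.0};   // input box
        double[][] W = {{1.0, -2.0}, {0.5, 0.5}};      // invented weights
        double[] b = {0.0, -0.25};
        double[][] bounds = propagate(lo, hi, W, b);
        System.out.println("neuron 0 bounds: [" + bounds[0][0] + ", " + bounds[1][0] + "]");
    }
}
```

A BaB verifier refines such bounds by recursively splitting the input box or individual ReLU neurons and propagating each subproblem separately.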
26

A comparison of two different model checking techniques

Bull, J. J. D 12 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2003. / ENGLISH ABSTRACT: Model checking is a computer-aided verification technique that is used to automatically verify properties of the formal description of a system. This technique has been applied successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to detect by using traditional testing techniques. The conventional method of applying model checking is to construct a model manually either before or after the implementation of a system. Constructing such a model requires time, skill and experience. An alternative method is to derive a model from an implementation automatically. In this thesis two techniques of applying model checking to reactive systems are compared, both of which have advantages as well as drawbacks. Two specific strategies are compared in the area of protocol development: 1. Structuring a protocol as a transition system, modelling the system, and then deriving an implementation from the model. 2. Automatically translating implementation code to a verifiable model. Structuring a reactive system as a transition system makes it possible to verify the control flow of the system at implementation level, as opposed to verifying the control flow at an abstract level. The result is a closer correspondence between implementation and specification (model). At the same time testing, which is restricted to small, independent code fragments that manipulate data, is simplified significantly. The construction of a model often takes too long; therefore, verification results may no longer be applicable when they become available. To address this problem, the technique of automated model extraction was suggested. This technique aims to reduce the time required to construct a model by minimising manual input during model construction. A transition system is a low-level formalism and direct execution through interpretation is feasible. However, the overhead of interpretation is the major disadvantage of this technique. With automated model extraction there are disadvantages too. For example, differences between the implementation and specification languages, such as constructs present in the implementation language that cannot be expressed in the modelling language, make the development of an automated model extraction tool extremely difficult. In conclusion, the two techniques are compared against a set of software development considerations. Since a specific technique is not always preferable, guidelines are proposed to help select the best approach in different circumstances. / AFRIKAANS SUMMARY: Model checking is a computer-aided verification technique used to verify properties of a formal specification of a system. The technique has been applied successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to detect when traditional testing techniques are used. Traditionally, model checking is applied by building a model before or after the implementation of a system. Building a model takes time, skill and experience. An alternative method is to derive a model automatically from an implementation. In this thesis two application techniques of model checking are compared, where both techniques have advantages as well as disadvantages. Two strategies are compared in the area of protocol development: 1. Structuring a protocol as a transition system, modelling it, and then deriving an implementation from the model. 2. Automatically deriving a verifiable model from an implementation. Structuring a reactive system as a transition system makes it possible to verify the control flow at implementation level, in contrast to verification of control flow at an abstract level. The result is a closer link between the implementation and the specification. At the same time testing, which is restricted to small, independent code segments that manipulate data, is simplified significantly. The construction of a model sometimes takes too long; consequently, by the time the verification results become available, they may no longer apply to the current version of the implementation. To address this problem, a technique for automatically deriving models from implementations was proposed. The aim of the technique is to reduce the time it takes to build a model by keeping manual input to a minimum. A transition system is a low-level formalism and direct execution through interpretation is feasible. The overhead of the interpreter is, however, the greatest disadvantage of the technique. There are also disadvantages to consider regarding the technique of automatically deriving models from implementations. For example, differences between the implementation language and the specification language, such as constructs used in the implementation language that cannot be represented in the modelling language, make the development of a model extractor extremely difficult. Consequently, the two techniques are compared against a set of software development considerations. Because a specific technique cannot always be preferred, guidelines are proposed to help choose the best technique in different circumstances.
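The first strategy treats the protocol as an explicit transition system that can be checked directly. As a minimal sketch of that idea (the states, transitions and safety property are invented, and this is not code from the thesis), checking that a bad state is unreachable amounts to a search over the transition relation.

```java
import java.util.*;

// Minimal explicit-state reachability check over a transition system:
// verify that no "bad" state is reachable from the initial state.
public final class TinyModelChecker {
    // Invented example: states are strings, transitions form a small protocol model.
    static final Map<String, List<String>> TRANSITIONS = Map.of(
        "idle",    List.of("sending"),
        "sending", List.of("waitAck", "idle"),
        "waitAck", List.of("idle", "error")
    );

    static boolean badStateReachable(String initial, String bad) {
        Set<String> visited = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.add(initial);
        while (!frontier.isEmpty()) {
            String s = frontier.poll();
            if (!visited.add(s)) continue;                            // already explored
            if (s.equals(bad)) return true;                           // property violated
            frontier.addAll(TRANSITIONS.getOrDefault(s, List.of()));  // expand successors
        }
        return false;                                                 // bad state unreachable
    }

    public static void main(String[] args) {
        System.out.println("error reachable: " + badStateReachable("idle", "error"));
    }
}
```

The second strategy, automated model extraction, would aim to derive a model like TRANSITIONS from the implementation code instead of writing it by hand.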
27

Investigating the non-termination of affine loops

Durant, Kevin 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: The search for non-terminating paths within a program is a crucial part of software verification, as the detection of an infinite path is often the only manner of falsifying program termination; the failure of a termination prover to verify termination does not necessarily imply that a program is non-terminating. This document describes the development and implementation of two focussed techniques for investigating the non-termination of affine loops. The developed techniques depend on the known non-termination concepts of recurrent sets and Jordan matrix decomposition, respectively, and imply the decidability of non-termination for single-variable and cyclic affine loops. Furthermore, the techniques prove to be practically capable methods both for locating non-terminating paths and for generating preconditions for non-termination. / AFRIKAANS SUMMARY: Software verification requires either a proof that a program terminates, or the detection of infinite executions. In this thesis we develop and implement two techniques to decide the non-termination property of affine loops. The techniques developed are based on concepts such as Jordan matrix decomposition and recurrent sets, which have been used in the past to investigate the termination of loops. The techniques can be used to decide the termination behaviour of both single-variable and cyclic affine loops. Practically all non-terminating affine loops can be identified, and the conditions under which this non-terminating behaviour arises can be described.
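One of the two concepts the abstract relies on, recurrent sets, can be illustrated concretely. In the hedged sketch below (the loop, guard and candidate set are invented, and this is not the thesis's algorithm), a set is recurrent for a single-variable affine loop if it lies inside the loop guard and the loop body maps it back into itself; any execution that enters such a set never terminates.

```java
// Check whether the interval [l, +inf) is a recurrent set for the affine loop
//     while (x > c) { x = a*x + b; }
// with a >= 0. Conditions: the interval lies inside the guard (l > c), and the
// body maps it into itself, i.e. min over x >= l of (a*x + b) = a*l + b >= l.
public final class RecurrentSetCheck {
    static boolean isRecurrent(double a, double b, double c, double l) {
        if (a < 0) throw new IllegalArgumentException("sketch assumes a >= 0");
        boolean insideGuard = l > c;
        boolean closedUnderBody = a * l + b >= l;
        return insideGuard && closedUnderBody;
    }

    public static void main(String[] args) {
        // Example: while (x > 0) x = 2*x + 1;  the set [1, +inf) is recurrent,
        // so any run starting with x >= 1 never terminates.
        System.out.println(isRecurrent(2.0, 1.0, 0.0, 1.0)); // true
    }
}
```

For multi-variable (cyclic) affine loops of the form x := Ax + b, the other technique mentioned in the abstract works with the Jordan decomposition of the update matrix instead of a hand-picked interval.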
28

Post-silicon Functional Validation with Virtual Prototypes

Cong, Kai 03 June 2015 (has links)
Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, higher levels of integration and decreasing time-to-market. According to recent reports, post-silicon validation effort comprises more than 50% of the overall development effort of a 65nm SoC. Though post-silicon validation covers many aspects ranging from electronic properties of hardware to performance and power consumption of whole systems, a central task remains validating the functional correctness of both hardware and its integration with software. There are several key challenges to achieving accelerated and low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation for a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there are no effective software robustness testing approaches to ensure the quality of hardware/software integration. We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating the test cases on the virtual prototypes. Such analysis is first conducted on the initial test suite assembled by the user and subsequently on the expanded test suite, which includes test cases that are automatically generated. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are automatically generated to improve the test coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure the fidelity of the early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking. We further extend the test case generation framework to generate and inject fault scenarios with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, an automatic driver fault injection approach is developed to support runtime fault generation and injection for driver robustness testing. Since virtual prototypes enable early driver development, our automatic driver fault injection approach can be applied to driver testing in both pre-silicon and post-silicon stages. For preliminary evaluation, we have applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We have conducted coverage analysis for a suite of common tests on both the virtual prototypes and silicon devices. The results show that our approach can estimate the test coverage with high fidelity. Based on the coverage estimation, we have employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement. We also detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect. After applying the virtual prototype-based fault injection approach to the virtual prototypes of three widely used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs.
For automatic driver fault injection, we have applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices. After testing all these drivers, we found 28 distinct bugs.
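Conformance checking, as described above, issues the same test to the virtual prototype and to the silicon device and flags divergences in observable state. A minimal hedged sketch of that idea follows; the interface, register names and fake device are invented and are not the dissertation's framework.

```java
import java.util.*;
import java.util.stream.Collectors;

// Minimal sketch of conformance checking: run the same test case against a
// virtual prototype and against the silicon device, then compare observable state.
public final class ConformanceCheck {
    // Both the virtual prototype and the real device are driven through the same interface.
    interface DeviceUnderTest {
        void apply(List<String> testCommands);   // run one test case
        Map<String, Integer> readRegisters();    // observable register state afterwards
    }

    // Registers whose values differ after the test: candidate inconsistencies.
    static List<String> inconsistencies(DeviceUnderTest virtualPrototype,
                                        DeviceUnderTest silicon,
                                        List<String> testCase) {
        virtualPrototype.apply(testCase);
        silicon.apply(testCase);
        Map<String, Integer> expected = virtualPrototype.readRegisters();
        Map<String, Integer> actual = silicon.readRegisters();
        return expected.keySet().stream()
                .filter(reg -> !Objects.equals(expected.get(reg), actual.get(reg)))
                .collect(Collectors.toList());
    }

    // Invented stand-in used only to make the sketch executable.
    static final class FakeDevice implements DeviceUnderTest {
        private final Map<String, Integer> regs = new HashMap<>();
        private final int txIncrement;
        FakeDevice(int txIncrement) { this.txIncrement = txIncrement; regs.put("TX_COUNT", 0); }
        public void apply(List<String> cmds) {
            for (String c : cmds) if (c.equals("send")) regs.merge("TX_COUNT", txIncrement, Integer::sum);
        }
        public Map<String, Integer> readRegisters() { return regs; }
    }

    public static void main(String[] args) {
        // The "silicon" counts differently than the model, so the divergence is reported.
        DeviceUnderTest model = new FakeDevice(1);
        DeviceUnderTest silicon = new FakeDevice(2);
        System.out.println(inconsistencies(model, silicon, List.of("send", "send"))); // [TX_COUNT]
    }
}
```

Each reported difference then points at either a virtual prototype defect or a silicon device defect, which matches how the abstract interprets its 20 detected inconsistencies.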
29

Towards effective and efficient temporal verification in grid workflow systems

Chen, Jinjun, n/a January 2007 (has links)
In grid architecture, a grid workflow system is a type of high-level grid middleware which aims to support large-scale sophisticated scientific or business processes in a variety of complex e-science or e-business applications such as climate modelling, disaster recovery, medical surgery, high-energy physics, international stock market modelling and so on. Such sophisticated processes often contain hundreds of thousands of computation- or data-intensive activities and take a long time to complete. In reality, they are normally time constrained. Correspondingly, temporal constraints are enforced when they are modelled or redesigned as grid workflow specifications at build time. The main types of temporal constraints include upper bound, lower bound and fixed-time. Temporal verification is then conducted so that any temporal violations can be identified and handled in time. Conventional temporal verification research and practice have presented some basic concepts and approaches. However, they have not paid sufficient attention to overall temporal verification effectiveness and efficiency. In the context of the grid economy, any resources used for executing grid workflows must be paid for. Therefore, resources should mainly be used for the execution of the grid workflow itself rather than for temporal verification. Poor temporal verification effectiveness or efficiency would cause more resources to be diverted to temporal verification. Hence, temporal verification effectiveness and efficiency become a prominent issue and deserve an in-depth investigation. This thesis systematically investigates the limitations of conventional temporal verification in terms of temporal verification effectiveness and efficiency. A detailed analysis of effectiveness and efficiency is conducted for each step of a temporal verification cycle. There are four steps in total: Step 1 - defining temporal consistency; Step 2 - assigning temporal constraints; Step 3 - selecting appropriate checkpoints; and Step 4 - verifying temporal constraints. Based on this investigation and analysis, we propose some new concepts and develop a set of innovative methods and algorithms towards more effective and efficient temporal verification. Comparisons, quantitative evaluations and/or mathematical proofs are also presented at each step of the temporal verification cycle. These demonstrate that our new concepts, innovative methods and algorithms can significantly improve overall temporal verification effectiveness and efficiency. Specifically, in Step 1, we analyse the limitations of the two temporal consistency states defined by conventional verification work. We then propose four new states towards better temporal verification effectiveness. In Step 2, we analyse the necessity of a number of temporal constraints in terms of temporal verification effectiveness. We then design a novel algorithm for assigning a series of fine-grained temporal constraints within a few user-set coarse-grained ones. In Step 3, we discuss the problem of existing representative checkpoint selection strategies in terms of temporal verification effectiveness and efficiency: they often ignore some necessary checkpoints and/or select some unnecessary ones. To solve this problem, we develop an innovative strategy and corresponding algorithms which select only sufficient and necessary checkpoints. In Step 4, we investigate a phenomenon which is ignored by existing temporal verification work, namely temporal dependency. Temporal dependency means that temporal constraints are often dependent on each other in terms of their verification. We analyse its impact on overall temporal verification effectiveness and efficiency. Based on this, we develop some novel temporal verification algorithms which can significantly improve overall temporal verification effectiveness and efficiency. Finally, we present an extension of our research on handling temporal verification results, since these results are based on our four new temporal consistency states. The major contributions of this research are that we have provided a set of new concepts, innovative methods and algorithms for temporal verification in grid workflow systems. With these, we can significantly improve overall temporal verification effectiveness and efficiency. This would eventually improve the overall performance and usability of grid workflow systems, because temporal verification can be viewed as a service or function of grid workflow systems. Consequently, by deploying the new concepts, innovative methods and algorithms, grid workflow systems will be able to better support large-scale sophisticated scientific and business processes in complex e-science and e-business applications in the context of the grid economy.
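The basic check behind upper-bound temporal verification at a checkpoint can be sketched as follows (the data model is invented; the thesis's fine-grained constraint assignment and checkpoint selection strategies are considerably more involved): the time already consumed plus the estimated remaining time must not exceed the upper bound.

```java
// Basic upper-bound temporal verification at a checkpoint: the time already
// consumed plus the estimated time of the remaining activities must not
// exceed the upper-bound constraint; otherwise a temporal violation is flagged.
public final class TemporalCheckpoint {
    static boolean consistentAtCheckpoint(double[] actualDurationsSoFar,
                                          double[] estimatedRemainingDurations,
                                          double upperBound) {
        double elapsed = 0.0;
        for (double d : actualDurationsSoFar) elapsed += d;
        double remaining = 0.0;
        for (double d : estimatedRemainingDurations) remaining += d;
        return elapsed + remaining <= upperBound;
    }

    public static void main(String[] args) {
        // Invented example: three activities completed, two still to run,
        // against an upper bound of 100 time units.
        double[] done = {20.0, 15.0, 30.0};
        double[] todo = {25.0, 20.0};
        System.out.println(consistentAtCheckpoint(done, todo, 100.0)); // false: 110 > 100
    }
}
```

Checkpoint selection, Step 3 above, then amounts to deciding at which activities running this kind of check is both necessary and sufficient.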
30

Verification and validation of computer simulations with the purpose of licensing a pebble bed modular reactor

Bollen, Rob 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: The Pebble Bed Modular Reactor is a new and inherently safe concept for a nuclear power generation plant. In order to obtain the necessary licences to build and operate this reactor, numerous design and safety analyses need to be performed. The results of these analyses must be supported with substantial proof to provide the nuclear authorities with a sufficient level of confidence in these results to be able to grant the required licences. Besides the obvious need for a sufficient level of confidence in the safety analyses, the analyses concerned with investment protection also need to be reliable from the investors' point of view. The process to be followed to provide confidence in these analyses is the verification and validation process. It is aimed at presenting reliable material against which to compare the results from the simulations. This material for comparison will consist of a combination of results from experimental data, extracts from actual plant data, analytical solutions and independently developed solutions for the simulation of the event to be analysed. Besides comparison with these alternative sources of information, confidence in the results will also be built by providing validated statements on the accuracy of the results and the boundary conditions with which the simulations need to comply. Numerous standards exist that address the verification and validation of computer software, for instance from organisations such as the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE). The focal points of the verification and validation of the design and safety analyses performed on typical PBMR modes and states, and the requirements imposed by both the local and overseas nuclear regulators, are not entirely covered by these standards. For this reason, PBMR developed a systematic and disciplined approach for the preparation of the Verification and Validation Plan, aimed at capturing the essence of the analyses. This approach aims to make a definite division between software development and the development of technical analyses, while still using similar processes for the verification and validation. The reasoning behind this is that technical analyses are performed by engineers and scientists who should be responsible only for the verification and validation of the models and data they use, but not for the software they depend on. Software engineers should be concerned with the delivery of qualified software to be used in the technical analyses. The PBMR verification and validation process is applicable to both hand calculations and computer-aided analyses, addressing specific requirements in clearly defined stages of the software and Technical Analysis life cycles. The verification and validation effort of the Technical Analysis activity is divided into the verification and validation of models and data, the review of calculational tasks, and the verification and validation of software, with the applicable information to be validated captured in registers or databases. The resulting processes are as simple as possible, concise and practical. Effective use of resources is ensured, and internationally accepted standards have been incorporated, which helps all stakeholders, including investors, nuclear regulators and the public, to have confidence in the process. / AFRIKAANS SUMMARY: The Pebble Bed Modular Reactor is a new concept for a nuclear power station that is inherently safe. It is being developed by PBMR (Pty) Ltd. To obtain the necessary licences to build and operate such a reactor, a considerable number of design and safety analyses must be performed. The results these analyses deliver must be supported by irrefutable evidence to give the authorities a sufficient level of confidence in the results, so that they can grant the required licences. Besides the obvious need for a sufficient level of confidence in the results of the safety analyses, the analyses associated with protecting investors' investments must be just as reliable. The process followed to build confidence in the results of the analyses is the process of verification and validation. This process is aimed at presenting reliable comparison material for simulations. This comparison material for the event under investigation will consist of any combination of information obtained from test set-ups, measured in existing installations, calculated analytically, and obtained by a third party independently of the original developers. Confidence in the results of the analyses will be built not only through comparison with these alternative sources of information, but also by providing the results with a validated statement that indicates their accuracy and summarises the boundary conditions with which the simulations must also comply. A considerable number of internationally accepted standards exist that address the verification and validation of computer software. These standards come from bodies such as the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE), also of the United States. The attention required by the South African and overseas nuclear regulators for the conditions that apply specifically to pebble bed reactors is, however, not entirely addressed by those standards. The PBMR company therefore developed a systematic approach to preparing verification and validation plans that can capture the essence of the analyses. This approach is aimed at making a clear distinction between the development of software and the development of technical analyses, while similar processes will still be used for verification and validation. The reason for this is that technical analyses are performed by engineers and scientists who can be held responsible only for the verification and validation of their own models and data, but not for the verification and validation of the software they use. Engineers who specialise in software development should be responsible for providing software that can be qualified by the regulators, so that it can be used in technical safety analyses. The PBMR verification and validation process is suitable for hand calculations as well as computer-supported analyses. This process addresses specific requirements at distinct stages in the life cycles of software development and of technical analyses. The verification and validation work for technical analysis activities is divided into the verification and validation of models and data, the checking of calculations, and the verification and validation of software, with the relevant information to be validated collected in registers or databases. The processes that resulted from this have been kept as simple as possible, concise and practical. In this way effective use of resources is ensured. Internationally accepted standards have been used, which will promote confidence in the process among all those involved, including investors, the authorities and the public.
