201.
Otimização dos processos de calibração e validação do modelo CROPGRO-Soybean / Optimization of the CROPGRO-Soybean model calibration and validation processes. Fensterseifer, Cesar Augusto Jarutais, 06 December 2016.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Crop models are important tools for improving the management and yield of agricultural systems. These improvements help meet the growing demand for food and fuel without expanding crop areas. The conventional approach to calibrating and validating a crop model considers anywhere from a few to many experiments; few experiments can lead to high uncertainty, while a large number of experiments is too expensive. Traditionally, the classical procedure splits an experimental dataset into two parts, one to calibrate and one to validate the model. However, if only a few experiments are available, splitting them can increase the uncertainty of the simulated performance; on the other hand, calibrating and validating the model with many experiments is too expensive and time-consuming. Methods that optimize these procedures, reducing processing time and costs while keeping performance reliable, are therefore always welcome. The first chapter of this study evaluates and compares a statistically robust method with the classical calibration/validation procedure. Both procedures were applied to estimate the genetic coefficients of the CROPGRO-Soybean model using multiple experiments: the classical split-sample procedure used the three most detailed experiments for calibration, while the leave-one-out cross-validation method was applied to all 21 experiments with the NA 5909 RG variety, grown at eight sites in the state of Rio Grande do Sul, southern Brazil, over the 2010/2011 to 2013/2014 growing seasons. Cross-validation reduced the average RMSE of the classical procedure from 2.6, 4.6, 4.8, 7.3, 10.2, 677, and 551 to 1.1, 4.1, 4.1, 6.2, 6.3, 347, and 447 for emergence, R1, R3, R5, R7 (days), grains.m-2, and kg.ha-1, respectively. The estimated ecotype and genetic coefficients were stable across the 21 experiments. Considering the wide range of environmental conditions, the CROPGRO-Soybean model provided robust predictions of phenology, biomass, and grain yield.
Finally, to improve the performance of the calibration/validation procedure, the cross-validation method should be used whenever possible. The main objectives of the second chapter were to evaluate the calibration/validation uncertainties obtained with different numbers of experiments and to find the minimum number of experiments required for a reliable CROPGRO-Soybean simulation. This study also used 21 field experiments (BMX Potência RR variety) sown at eight locations in Southern Brazil between 2010 and 2014. The experiments were grouped into four classes: individual sowings, season/year per location, experimental sites, and all data together. As the grouping level increased, the RRMSE (%) of the developmental stages decreased from 22.2% (individual sowings) to 7.8% (all data together). Using only one individual-sowing experiment could lead to RRMSEs of 28.4%, 48%, and 36% for R1, LAI, and yield, respectively. The largest decrease, however, occurred from individual sowings to season/year per location; in some cases the relative errors were more than halved. Considering the high financial cost and time demand of the last two classes, it is therefore recommended to use at least the season/year-per-location class (early, recommended, and late sowing dates). This makes it possible to understand the behavior of the variety while avoiding the high cost of many experiments and keeping the model's performance reliable.
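The leave-one-out cross-validation scheme described above can be sketched in a few lines. The outline below is purely illustrative: the crop model is replaced by a hypothetical linear stand-in (yield responding to a weather index), not the actual CROPGRO-Soybean model, and the data are invented.

```python
import math

def rmse(observed, simulated):
    """Root mean square error between paired observed and simulated values."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (the 'calibration' step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def leave_one_out_rmse(experiments):
    """For each experiment, calibrate on the n-1 others, predict the held-out
    one, and accumulate the prediction errors into a single RMSE."""
    preds, obs = [], []
    for i, held_out in enumerate(experiments):
        train = experiments[:i] + experiments[i + 1:]
        a, b = fit_line([e["weather"] for e in train], [e["yield"] for e in train])
        preds.append(a * held_out["weather"] + b)
        obs.append(held_out["yield"])
    return rmse(obs, preds)

# Toy stand-in for a crop model: yield responds linearly to a weather index,
# so leave-one-out predictions should be near-perfect.
experiments = [{"weather": w, "yield": 2.0 * w + 1.0} for w in range(1, 8)]
print(round(leave_one_out_rmse(experiments), 6))  # → 0.0
```

Because every experiment contributes once as validation data and n-1 times as calibration data, no observation is wasted, which is the appeal of this scheme when only a few experiments are available.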
202.
Environnement générique pour la validation de simulations médicales / A generic framework for validation of medical simulations. Deram, Aurélien, 23 October 2012.
Numerous models have been developed to describe the mechanical behavior of soft tissues in medical simulations for training, planning, and intra-operative guidance of medical and surgical procedures. Verification, validation, and evaluation are crucial steps toward the clinical acceptance of simulation results. These tasks, often based on comparisons between simulation results and experimental data or other simulations, are made difficult by the wide range of available modeling techniques, the number of possible assumptions, and the difficulty of performing usable validation experiments. A comparison framework is proposed, based on an analysis of the modeling process and on a generic description of both the constitutive elements of a simulation (e.g., geometry, loads, stability criterion) and its results (from simulations or experiments). The generic description of simulations allows comparisons between different modeling techniques (e.g., mass-spring, finite element) implemented on various simulation platforms. Thanks to the common description of results, comparisons can be performed against real experiments, other simulation results, or previous versions of a model, using a set of metrics to quantify both accuracy and computational efficiency. The description of results also facilitates the sharing of validation experiments. The usability of the method is shown on several validation and model-comparison experiments. The framework is then used to investigate the influence of modeling assumptions and parameters in a biomechanical finite element model of an in-vivo tissue aspiration device used to characterize constitutive laws. This study gives clues toward improving the predictions of the characterization device.
203.
Gaia : de la validation des données aux paramètres du Red Clump / Gaia: from data validation to Red Clump parameters. Ruiz-Dern, Laura, 08 November 2016.
The Gaia mission of the European Space Agency (ESA) aims to map our galaxy with unprecedented astrometric precision. It is therefore very important that the published data be rigorously validated to ensure optimal catalogue quality. These validations are performed by one of the teams of coordination unit CU9 of the Gaia DPAC Consortium (Data Processing and Analysis Consortium), commissioned by ESA to produce the Gaia catalogue. As part of this thesis, we implemented all the infrastructure necessary to validate the Gaia catalogue by comparison with external catalogues; it manages all the interactions with the global validation environment and with the Gaia database. We then developed a set of statistical tests to validate the data of the first Gaia catalogue (DR1). These tests concern, in particular, the homogeneity of the data on the sky and the quality of the positions and photometry of all the DR1 stars (more than a billion stars, V < 20), as well as the parallaxes and proper motions of the Tycho-Gaia Astrometric Solution (TGAS) stars, around two million stars common to the Gaia and Tycho-2 catalogues (V < 12). These DR1 statistical tests are operational and were recently applied to preliminary data. This has already improved the data (and thus the quality of the catalogue) and characterized their statistical properties; this characterization is essential for a correct scientific exploitation of the data. The first Gaia catalogue will be released in late summer 2016. Among the objects observed by Gaia is a particularly interesting population of stars, the Red Clump (RC) stars, widely used as distance standards. We developed and tested two methods to model the colour-colour (CC) and effective temperature-colour relations in all photometric bands, from the ultraviolet to the near-infrared: 1. using theoretical models, and 2. empirically, based on a Markov Chain Monte Carlo (MCMC) method. They will allow us to characterize the RC in the Gaia G band upon publication of the catalogue. For this we carefully selected samples of stars with good photometric quality, good spectroscopically determined metallicity, homogeneous effective temperatures, and low interstellar extinction. From these CC and temperature-colour calibrations, we then developed a Maximum Likelihood method to derive the absolute magnitudes, temperatures, and extinctions of RC stars. The resulting colours and extinctions were tested on stars with spectroscopically measured effective temperatures and extinctions determined from Diffuse Interstellar Bands (DIB). These intrinsic properties of RC stars will make it possible to characterize the Gaia RC and to calibrate, in the Gaia G band, the absolute magnitude of this standard candle, a first essential step in determining distances in the Universe.
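As an illustration of the kind of Maximum Likelihood estimation mentioned above, the sketch below fits a single standard-candle magnitude to noisy measurements by grid search over a Gaussian log-likelihood. All numbers, the noise model, and the single-parameter setup are hypothetical simplifications; the thesis's actual method also derives temperatures and extinctions.

```python
import math
import random

def log_likelihood(mu, data, sigma):
    """Gaussian log-likelihood of an absolute magnitude mu, given noisy
    magnitude measurements with known uncertainty sigma."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def mle_magnitude(data, sigma, lo=-2.0, hi=2.0, steps=401):
    """Grid-search maximum-likelihood estimate of the magnitude."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda mu: log_likelihood(mu, data, sigma))

random.seed(42)
true_mag, sigma = 0.5, 0.1  # hypothetical standard-candle magnitude and scatter
data = [random.gauss(true_mag, sigma) for _ in range(200)]
estimate = mle_magnitude(data, sigma)
print(abs(estimate - true_mag) < 0.05)  # → True
```

For a Gaussian likelihood with known scatter this estimator converges to the sample mean; the grid search is used only to keep the maximization explicit.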
204.
Développement et validation d'un indicateur holistique clinique multidimensionnel rassemblant une évaluation clinimétrique et un indicateur de qualité de vie spécifique à la maladie de Huntington / Development and validation of a multidimensional clinical holistic indicator combining a clinical assessment and a quality of life indicator specific to Huntington's disease. Clay, Emilie, 07 December 2016.
Huntington's disease is a neurodegenerative disease for which there is presently no cure; it causes gradual physical, emotional, and cognitive deterioration. To collect clinical, economic, and quality-of-life data, an international cross-sectional study called the Euro-HDB (European Huntington's Disease Burden) was set up in France, Italy, the United States, Poland, Germany, and Spain. A set of questionnaires was developed for this data collection. This thesis focuses on the development and validation of two specific questionnaires: a quality-of-life questionnaire named H-QoL-I (Huntington Quality of Life Instrument) and a clinimetric questionnaire named H-CSRI (Huntington Clinical Self-Reported Instrument). Discussion groups and semi-structured interviews with Huntington's disease patients, caregivers, and specialized health professionals made it possible to establish the conceptual framework and to develop the first version of the quality-of-life instrument. The clinimetric questionnaire is a self-reported adaptation of a standard tool usually used by clinicians to evaluate the clinical status of the patient (the UHDRS). Both classical test theory and item response theory were used to assess the psychometric properties of the questionnaires (validity and reliability). Overall, both tools demonstrated good psychometric properties and good cross-cultural validity. H-QoL-I and H-CSRI are now available and provide a validated means of following the progression of a patient's disease.
205.
Validation d'architectures temps-réel pour la robotique autonome / Real-time architecture validation for autonomous robots. Gobillot, Nicolas, 29 April 2016.
A robot is a complex system combining hardware and software. To simplify robot design, the whole system is split into separate modules that are then assembled into the complete system. However, this design simplicity is counterbalanced by the complexity of making the system safe, both functionally and temporally. Scheduling-analysis tools and methods exist for task-based software: they check that a set of tasks, run on specific hardware, meets its timing constraints. However, these methods treat tasks as monolithic entities, without taking their internal structure into account, which can make the analyses overly pessimistic and unsuitable for robotic applications. In this work, we model the internal structure of tasks as state machines and use these state machines in the schedulability analysis to improve its precision. This thesis shows that decomposing monolithic tasks improves the precision of scheduling analyses. Moreover, the tools developed during this work were tested on a real autonomous mobile-robot use case.
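A minimal example of the kind of task-level schedulability check discussed above is the classic Liu and Layland utilization bound for rate-monotonic scheduling. This is a textbook sufficient test on monolithic tasks, precisely the coarse-grained style of analysis the thesis refines, not the state-machine-based method it develops.

```python
def rm_utilization_test(tasks):
    """Sufficient schedulability test for rate-monotonic scheduling.

    tasks: list of (worst_case_execution_time, period) pairs.
    A task set is guaranteed schedulable if its total utilization does not
    exceed the Liu & Layland bound n * (2**(1/n) - 1). The test is
    sufficient but not necessary: exceeding the bound is inconclusive,
    which is one source of the pessimism mentioned above.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Three periodic tasks, each given as (execution time, period) in milliseconds.
print(rm_utilization_test([(1, 10), (2, 20), (3, 50)]))  # → True
print(rm_utilization_test([(6, 10), (5, 20), (4, 50)]))  # → False
```

For three tasks the bound is about 0.78; the first set uses 26% of the processor and passes, while the second uses 93% and fails the sufficient test.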
206.
Validation d'un test de mesure de bioaccessibilité : application à quatre éléments traces métalliques dans les sols : As, Cd, Pb et Sb / Validation of a bioaccessibility test: application to four trace metals in soils: As, Cd, Pb and Sb. Caboche, Julien, 28 September 2009.
The management of contaminated sites and soils is based on assessing exposure to pollutants. Experience shows that direct exposure routes, notably soil ingestion by children, generate the highest risk levels. Currently, being based on the total pollutant concentration in soil, risk assessment tends to overestimate exposure, because only a fraction of the substance can actually penetrate the body. The aim of this study is to demonstrate that the in vitro UBM (Unified BARGE Method) bioaccessibility test is relevant for estimating the bioavailable fraction of trace metals in soils. For this, it is necessary to show that the solubilization of contaminants in the gastrointestinal tract is a limiting step in the oral bioavailability process, and that bioaccessibility measurements are correlated with bioavailability measurements. For 15 soils selected from three different contaminated sites, the study shows that bioavailability is highly variable for lead (8% to 82%), cadmium (12% to 91%), and arsenic (3% to 78%). For antimony, the relative bioavailability and bioaccessibility values are very low (below 20%) regardless of the contrasting soil characteristics; these conditions therefore do not allow the in vitro test to be validated for antimony. For the three other contaminants, the correlation results show that bioaccessibility is the limiting step of bioavailability and that the UBM test is relevant for estimating the bioaccessibility of these elements in soils. Our study also highlights the impact of the soil matrix on variations in bioaccessibility values: the distribution of contaminants among the soil's different bearing phases was shown to be a major and robust parameter explaining the variations in bioaccessibility for all the soils studied. The results highlight that the in vitro UBM test can provide an alternative to in vivo investigations for refining trace-metal exposure levels following soil ingestion.
207.
Analýza nástrojů pro ověření přístupnosti internetových stránek dle amerického zákona Sekce 508 / Accessibility evaluation tools analysis according to the U.S. law Section 508. Novák, Jiří, January 2013.
The main goal of this thesis is a comparison of tools used for web-page accessibility evaluation, mainly their ability to find the accessibility issues defined in the US law Section 508, paragraph § 1194.22, "Web-based intranet and internet information and applications". A new web page containing the accessibility issues described in paragraph § 1194.22 was created for the analysis and used to evaluate how many issues each tool can find. Based on the analysis results, every tool is scored and the final ranking is decided; the analysis results and the final ranking are the main outcome of this thesis. The theoretical part of the thesis describes accessibility in detail and introduces the main disabilities affecting people's ability to use computers. A detailed description of all Section 508 rules is also provided, consisting of a translation of every rule into Czech, a short description of the rule's meaning, and the conditions under which the rule is met. The practical part describes the test web page, the analysis process, and the results in detail.
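As an illustration of the kind of automated check such evaluation tools perform, the sketch below flags images that lack a text equivalent, one requirement of § 1194.22 (rule (a)). It covers a single rule on a hypothetical page snippet and is not one of the tools evaluated in the thesis.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements lacking an alt attribute, a check for
    Section 508 § 1194.22(a), which requires a text equivalent for
    every non-text element."""

    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
  <img src="spacer.gif">
</body></html>
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.violations)  # → 2
```

Real evaluation tools implement dozens of such rules, and many § 1194.22 requirements (e.g., meaningful reading order or sensible link text) cannot be fully verified automatically, which is one reason tool results differ and are worth comparing.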
208.
Requirements Validation Techniques: Factors Influencing Them. Peddireddy, Santosh Kumar Reddy; Nidamanuri, Sri Ram, January 2021.
Context: Requirements validation is a phase of the software development life cycle in which requirements are validated to remove inconsistency and incompleteness. Stakeholders are involved in the validation process to ensure the requirements are suitable for the product. Requirements validation techniques are used to validate the requirements, and selecting a technique in light of the factors that need to be considered during validation makes the process better. This thesis examines the factors that influence the selection of requirements validation techniques and analyzes which factors are the most critical. Objectives: Our research aims to find the factors influencing the selection of requirements validation techniques and to evaluate the critical factors among them. To achieve this goal, we pursue the following objectives: to obtain a list of validation techniques currently used by organizations, and to enlist the factors that influence the choice of requirements validation technique. Methods: To identify the influencing factors and evaluate the critical ones, we conducted both a literature review and a survey. Results: From the literature review, two articles were taken as our start set, and through snowball sampling a total of fifty-four articles were found relevant to the study. From the results of the literature review we formulated a questionnaire and conducted a survey, gathering a total of thirty-three responses; the survey yields the factors influencing requirements validation techniques. Conclusions: The factors obtained from the survey present a mixed picture: each factor has its own criticality in different aspects of validation, so selecting a single critical factor during the selection of a requirements validation technique is not possible. Instead, we shortlisted the critical factors that have the most influence on the selection of requirements validation techniques.
209.
Prevention of Input Validation Vulnerabilities on the Client-Side: A Comparison Between Validating in AngularJS and React Applications. Strålberg, Linda, January 2019.
The aim of this research was to test the JavaScript library React and the framework AngularJS against each other with regard to validation response time and validation robustness. The experiments in this work were performed to support developers in deciding which library or framework to use. There are many aspects to consider when choosing a library or framework beyond the security- and response-time-related aspects addressed in this work, but this work can, among other information, offer developers yet another viewpoint. The results showed no difference between them security-wise, but validating in a React application was somewhat faster than in an AngularJS application.
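As an illustration of the kind of input validation rule such experiments exercise (shown here in Python rather than JavaScript, purely as a sketch, and with a hypothetical rule not taken from the thesis), a robust application enforces a conservative whitelist and also re-applies it on the server, since client-side validation alone can be bypassed.

```python
import re

# Hypothetical whitelist rule: 3-20 characters, letters, digits, underscore.
# Client-side frameworks apply rules like this per keystroke, but the same
# rule must be re-enforced server-side to be robust.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value):
    """Reject non-strings, empty or overly long input, markup characters,
    and anything outside the conservative whitelist."""
    if not isinstance(value, str):
        return False
    return USERNAME_RE.fullmatch(value) is not None

print(validate_username("linda_2019"))                  # → True
print(validate_username("<script>alert(1)</script>"))   # → False
print(validate_username(""))                            # → False
```

Whitelisting (accepting only known-good input) is generally preferred over blacklisting known-bad patterns, because a blacklist must anticipate every attack variant while a whitelist does not.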
210.
Talking about Narrative Messages: The Interaction between Elaboration and Interpersonal Validation. Rader, Kara, 13 November 2020.
No description available.