21 |
A methodology for modeling the verification, validation, and testing process for launch vehicles
Sudol, Alicia, 07 January 2016 (has links)
Completing the development process and getting to first flight has become a difficult hurdle for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during the verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry-standard VVT planning is largely subjective, with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that quantitatively captures the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions.
This method is applied to alternative VVT strategies for the Space Shuttle Main Engine, using the actual test history as a baseline, to demonstrate how the impact of rework can be mitigated. Results indicate that rework cost is the primary driver of overall project risk and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this application problem demonstrates the merits of the methodology in evaluating VVT strategies and in providing a risk-informed decision-making framework for the verification, validation, and testing process of launch vehicle systems.
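To make the rework mechanism concrete, the sketch below is a minimal Monte Carlo stand-in for the discrete event simulation described in the abstract: a small dependency structure matrix holds rework probabilities between notional VVT activities, rework cycles are drawn at random, and a simple quadratic penalty summarizes cost risk. All activities, costs, probabilities, and targets are invented placeholders, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative VVT activities with placeholder costs ($M) and durations (months).
activities = ["component test", "subsystem test", "integrated hot-fire test"]
cost = np.array([5.0, 12.0, 30.0])
duration = np.array([2.0, 4.0, 6.0])

# Dependency structure matrix: P[i, j] is the (assumed) probability that a
# problem discovered in activity j forces rework of upstream activity i.
P = np.array([
    [0.00, 0.10, 0.05],
    [0.00, 0.00, 0.15],
    [0.00, 0.00, 0.00],
])

def simulate_vvt(n_runs=10_000, max_cycles=10):
    """Monte Carlo projection of total VVT cost and schedule including rework."""
    totals = np.zeros((n_runs, 2))                    # columns: cost, schedule
    for r in range(n_runs):
        c, s = cost.sum(), duration.sum()             # planned baseline
        for j in range(len(activities)):              # execute activities in order
            for _ in range(max_cycles):               # possible rework cycles
                rework = rng.random(len(activities)) < P[:, j]
                if not rework.any():
                    break
                c += cost[rework].sum()               # repeat triggered upstream work
                s += duration[rework].sum()
        totals[r] = (c, s)
    return totals

totals = simulate_vvt()
print("cost:     mean %.1f $M, 95th pct %.1f $M"
      % (totals[:, 0].mean(), np.percentile(totals[:, 0], 95)))
print("schedule: mean %.1f mo, 95th pct %.1f mo"
      % (totals[:, 1].mean(), np.percentile(totals[:, 1], 95)))

# Illustrative quadratic risk impact: squared relative overrun beyond a target.
target_cost = 55.0
risk = np.mean((np.maximum(totals[:, 0] - target_cost, 0.0) / target_cost) ** 2)
print("cost risk index: %.4f" % risk)
```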
|
23 |
Determinação de endotoxina bacteriana (pirogênio) em radiofármacos pelo método de formação de gel. Validação / Determination of bacterial endotoxin (pyrogen) in radiopharmaceuticals by the gel clot method. Validation
FUKUMORI, NEUZA T.O., 09 October 2014 (has links)
Before the Limulus Amebocyte Lysate (LAL) assay, the only way to assess pyrogenicity in parenteral drugs and medical devices was the rabbit pyrogen test of the United States Pharmacopeia (USP). For radiopharmaceuticals in particular, the LAL assay is the method of choice for determining bacterial endotoxin (pyrogen). The aim of this work was to validate the gel clot method for some radiopharmaceuticals without measurable interference. The Food and Drug Administration (FDA) LAL guideline defines interference as a condition that causes a significant difference between the gelation endpoints of the positive water control and positive product control series when a standard endotoxin is used. Experiments were performed according to the USP bacterial endotoxins test on m-iodobenzylguanidine-131I, on the radioisotopes Gallium-67 and Thallium-201, and on the lyophilized reagents DTPA, Phytate, GHA, SAH, and Colloidal Sn. The Maximum Valid Dilution (MVD) was calculated for each product based on its clinical dose, and serial dilutions below the MVD were evaluated in duplicate to detect interference. The labeled sensitivity of the LAL reagent was 0.125 EU mL-1 (Endotoxin Units per milliliter). For validation, a dilution series of endotoxin standard (ES) at concentrations from 0.5 to 0.03 EU mL-1 was prepared in quadruplicate to confirm the sensitivity of the LAL reagent. The same dilution series was prepared with the endotoxin standard and the product diluted 100-fold, in three consecutive batches of each radiopharmaceutical. m-Iodobenzylguanidine-131I, Gallium-67, Thallium-201, DTPA, SAH, and Colloidal Sn were compatible with the method at the 1:100 dilution factor. Phytate and GHA showed interference in the gel clot assay. Other endotoxin determination techniques, such as the chromogenic (color development) and turbidimetric (turbidity development) assays, were evaluated to obtain qualitative and quantitative information on endotoxin concentrations in the samples. / Dissertation (Master's) / IPEN/D / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
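For readers unfamiliar with the dilution scheme, the sketch below shows how a Maximum Valid Dilution can be derived from an endotoxin limit and the labeled LAL sensitivity, assuming the usual USP relationship MVD = endotoxin limit / λ and an assumed 175 EU/V limit for radiopharmaceuticals; the dose volume and the resulting MVD are placeholder numbers, not figures from this dissertation.

```python
def max_valid_dilution(endotoxin_limit_eu_per_ml: float,
                       lal_sensitivity_eu_per_ml: float) -> float:
    """MVD = endotoxin limit / labeled LAL reagent sensitivity (lambda)."""
    return endotoxin_limit_eu_per_ml / lal_sensitivity_eu_per_ml

# Assumed radiopharmaceutical limit of 175 EU per maximum dose volume V (mL);
# V below is a placeholder, not a value from this work.
V = 10.0                      # maximum recommended dose volume, mL (assumed)
limit = 175.0 / V             # endotoxin limit, EU mL-1
lam = 0.125                   # labeled LAL reagent sensitivity, EU mL-1

mvd = max_valid_dilution(limit, lam)
print(f"MVD = 1:{mvd:.0f}")   # test dilutions (e.g. 1:100) must not exceed this
```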
|
24 |
Validation of an In Vitro Mutagenicity Assay Based on Pulmonary Epithelial Cells from the Transgenic MutaMouse: Intra-Laboratory Variability and Metabolic Competence
Hanna, Joleen, January 2018 (has links)
Genetic toxicity tests used for regulatory screening must be rigorously validated to ensure accuracy, reliability and relevance. Hence, prior to establishment of an internationally-accepted test guideline, a new assay must undergo multi-stage validation. An in vitro transgene mutagenicity assay based on an immortalized cell line derived from MutaMouse lung (i.e., FE1 cells) is currently undergoing formal validation. FE1 cells retain a lacZ transgene in a λgt10 shuttle vector that can be retrieved for scoring of chemically-induced mutations. This work contributes to validation of the in vitro transgene (lacZ) mutagenicity assay in MutaMouse FE1 cells. More specifically, the work includes an intra-laboratory variability study, and a follow-up study to assess the endogenous metabolic capacity of FE1 cells. The former is essential to determine assay reliability, the latter to define the range of chemicals that can be reliably screened without an exogenous metabolic activation mixture (i.e., rat liver S9). The intra-laboratory variability assessment revealed minimal variability; thus, assay reproducibility can be deemed acceptable. Assessment of metabolic capacity involved exposure of FE1 cells to 5 known mutagens, and subsequent assessment of changes in the expression of genes involved in xenobiotic metabolism; induced transgene mutant frequency (±S9) was assessed in parallel. The results revealed that the FE1 cell line is capable of mobilising several Phase I and Phase II gene products known to be involved in the bioactivation of mutagens. Collectively, the results presented support the contention that the FE1 cell mutagenicity assay can be deemed reliable and reproducible. Consequently, the assay is an excellent candidate for continued validation, and eventual establishment of an OECD (Organization for Economic Cooperation and Development) Test Guideline.
|
25 |
Model comparison and assessment by cross validation
Shen, Hui, 11 1900 (has links)
Cross validation (CV) is widely used for model assessment and comparison. In this thesis, we first review and compare three v-fold CV strategies: best single CV, repeated and averaged CV, and double CV. The mean squared errors of the CV strategies in estimating the best predictive performance are illustrated using simulated and real data examples. The results show that repeated and averaged CV is a good strategy that outperforms the other two CV strategies for finite samples, both in the mean squared error of the estimated prediction accuracy and in the probability of choosing an optimal model.
In practice, when many models must be compared, the repeated and averaged CV strategy is not computationally feasible. We develop an efficient sequential methodology for model comparison based on CV that also takes into account the randomness in CV. The number of models is reduced via an adaptive, multiplicity-adjusted sequential algorithm in which poor performers are quickly eliminated. By exploiting the matching of individual observations, it is sometimes even possible to establish the statistically significant inferiority of some models with just one execution of CV. This adaptive and computationally efficient methodology is demonstrated on a large cheminformatics data set from PubChem.
Cross-validated mean squared error (CVMSE) is widely used to estimate the prediction mean squared error (MSE) of statistical methods. For linear models, we show how CVMSE depends on the number of folds v used in cross validation, the number of observations, and the number of model parameters. We establish that the bias of CVMSE in estimating the true MSE decreases with v and increases with model complexity. In particular, the bias may be very substantial for models with many parameters relative to the number of observations, even when v is large. These results are used to correct CVMSE for its bias. We compare our proposed bias correction with that of Burman (1989) through simulated and real examples, and illustrate that correcting CVMSE for its bias may change the results of model selection. / Faculty of Science / Department of Statistics / Graduate
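As a minimal illustration of the "repeated and averaged" strategy favoured above, the sketch below reruns v-fold CV over many random splits with scikit-learn and averages the resulting error estimates; the model, data, and number of repeats are arbitrary stand-ins rather than the thesis's simulation settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Synthetic regression data standing in for a real application.
X, y = make_regression(n_samples=200, n_features=10, noise=1.0, random_state=0)

# Repeated and averaged v-fold CV: rerun 5-fold CV with different random
# splits, then average, which damps the split-to-split randomness of one run.
cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=1)
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=cv)
cv_mse = -scores.mean()
print(f"repeated/averaged 5-fold CV estimate of prediction MSE: {cv_mse:.3f}")
```

Averaging over repeated splits reduces the variance introduced by any single random partition, which is the property the thesis exploits when comparing CV strategies and models.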
|
26 |
Examining Thinking Skills in the Context of Large-scale Assessments Using a Validation Approach
Hachey, Krystal, January 2014 (has links)
Large-scale assessments (LSAs) of student achievement in education serve a variety of purposes, such as comparing educational programs, providing accountability measures, and assessing achievement on a broad range of curriculum standards. In addition to measuring content-related processes such as mathematics or reading, LSAs also focus on thinking-related skills such as lower-level thinking (e.g., understanding concepts) and problem solving. The purpose of the current study was to deconstruct and clarify the mechanisms that make up an LSA, including thinking skills and assessment perspectives, using a validation approach based on the work of Messick (1995) and Kane (1990). Two questions guided the examination of the design and student data of two LSAs in reading: (a) what common thinking skills are assessed? and (b) what are the LSAs' underlying assessment perspectives? Content analyses were carried out on two LSAs that purported to assess thinking skills in reading: the Pan-Canadian Assessment Program (PCAP) and the Educational Quality and Accountability Office (EQAO) assessment. Because the two LSAs evaluated reading, the link between reading and thinking was also addressed. Conceptual models were developed and used to examine the assessment framework, test booklets, and scoring guide of the two assessments. In addition, a nonlinear factor analysis was conducted on the EQAO item-level data from the test booklets to examine the dimensionality of the LSA. After qualitative analysis of the assessment frameworks, test booklets, and scoring guides, the most prominent thinking skill referenced was critical thinking, while results from the quantitative analysis revealed that two factors best represented the item-level EQAO data. Overall, the tools provided in the current study can help inform both researchers and practitioners about the interaction between the assessment approach and related thinking skills.
|
27 |
Development and Validation of a Case-finding Questionnaire to Identify Undiagnosed Chronic Obstructive Pulmonary Disease (COPD) and Asthma
Huynh, Chau, 17 September 2021 (links)
Background: Undiagnosed chronic obstructive pulmonary disease (COPD) and asthma remain prevalent health issues. The global and Canadian prevalence estimates currently reported for obstructive lung disease do not reflect the true prevalence, since undiagnosed cases remain missed and uncounted. Spirometry is viewed as the current gold standard for diagnosing obstructive lung disease; however, barriers associated with inaccessibility and underuse have contributed to undiagnosed lung disease. While guidelines advise against spirometry for asymptomatic persons, active case-finding for persons at risk and those presenting with symptoms has been recommended. Given that early treatment and management have the potential to improve health-related quality of life and reduce the progression of lung decline, identifying undiagnosed lung disease is critical to preventing adverse health outcomes. To date, this is the first study to incorporate both obstructive lung diseases into a single case-finding instrument.
Objective: To develop and validate a case-finding questionnaire to identify undiagnosed COPD and asthma in community-dwelling adults, and to prospectively evaluate reliability and predictive performance.
Methods: This study uses data from the Undiagnosed Chronic Obstructive Pulmonary Disease and Asthma Population (UCAP) study collected from June 2017 to March 2020. Eligible participants were >18 years of age, had a history of chronic respiratory symptoms, and had no previous physician diagnosis of obstructive lung disease. The presence of obstructive lung disease was confirmed with spirometry. Multinomial logistic regression and recursive partitioning were used to develop a case-finding questionnaire, with candidate predictors drawn from six questionnaires completed during the spirometry visit. The diagnostic accuracy of the models was used to evaluate performance. The risk score was externally validated in a cohort of participants recruited between October 2020 and January 2021 at study sites that remained open during the COVID-19 pandemic.
Results: The derivation cohort included 1615 participants, of whom 136 were ultimately diagnosed with asthma and 195 with COPD. A 13-item questionnaire was developed using logistic regression, covering age, pack-years of cigarette smoking, wheeze, cough, sleep, chest tightness, level of tiredness, physical activity limitation, occupational exposure, primary or second-hand smoke exposure, frequency of chest attacks, and salbutamol medication use. Internal validation showed an area under the curve (AUC) of 0.79 (0.70-0.90) for COPD and 0.64 (0.45-0.80) for asthma. At a predicted probability of 6% or greater, specificity was 17% for no obstructive lung disease, sensitivity was 91% for asthma, and sensitivity was 96% for COPD. The external cohort included 74 subjects, 8 diagnosed with COPD and 6 with asthma. The AUC was 0.89 (95% CI: 0.62-0.90) for COPD and 0.65 (95% CI: 0.63-0.72) for asthma. Sensitivity was 100% for both asthma and COPD, specificity was 13%, and the positive predictive value was 23%.
Conclusion: The 13-item case-finding questionnaire was shown to be reliable, with modest predictive ability in identifying COPD and asthma. Prospective evaluation within the ongoing UCAP study will recruit a larger sample to re-evaluate predictive performance.
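A hedged sketch of the modeling step follows: a multinomial logistic regression over three outcomes (no obstructive lung disease, asthma, COPD) fit on questionnaire-style predictors, with case-finding done by thresholding the predicted probabilities in the spirit of the 6% rule above. The features, coefficients, and data are invented placeholders, not the UCAP data or the exact 13-item specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder questionnaire predictors (e.g. age, pack-years, symptom scores).
n = 1000
X = rng.normal(size=(n, 6))

# Simulated outcomes: 0 = no obstructive lung disease, 1 = asthma, 2 = COPD.
beta_asthma = np.array([0.8, 0.5, 0.0, 0.0, 0.3, 0.0])
beta_copd = np.array([1.0, 0.7, 0.4, 0.0, 0.0, 0.2])
logits = np.column_stack([np.zeros(n), X @ beta_asthma - 2.5, X @ beta_copd - 2.0])
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

# Multinomial logistic regression over the three diagnostic classes.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Case-finding rule: refer anyone whose predicted probability of asthma or COPD
# meets a low threshold (6% here), trading specificity for high sensitivity.
pred = model.predict_proba(X)
flagged = (pred[:, 1] >= 0.06) | (pred[:, 2] >= 0.06)
print(f"flagged for confirmatory spirometry: {flagged.mean():.1%} of respondents")
```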
|
28 |
An Optical Flow Based Approach to Validating Dynamic Structural Finite Element Models of Biological Organs Using 4D Medical Images - the Aortic Valve as an Example
Gibney, Emma, 25 November 2021 (links)
Recent developments in the use of finite element methods to analyze biological structures have created a need for a standardized method of validating these models. The purpose of this thesis was to develop a system to effectively and efficiently validate biological finite element models using 4D medical images. The aortic valve was chosen as the biological model for testing because any solution that could manage the complexity of the valve's motion would likely work for simpler biological models. The proposed validation method involved three steps: estimating a voxel displacement field using a direct method of 3D motion estimation, converting the voxel displacement field into a nodal displacement field, and validating the results of a finite element model by comparing its nodal displacement field to the nodal displacement field derived from the medical images. The proposed validation method was implemented using synthetic 4D CT images of an aortic valve based on an existing finite element model, with the results of that model serving as the ground truth. Three direct motion estimation methods were implemented within the first step and compared: 3D Horn-Schunck optical flow, 3D Brox optical flow, and the demons method. A multilevel scheme with a variable scale constant was integrated into each of these motion estimation methods so that larger displacement magnitudes could be captured. Horn-Schunck optical flow was found to best capture the motion of the aortic valve throughout a cardiac cycle. The proposed validation method tracked the aorta nodes effectively through an entire cardiac cycle and tracked leaflet nodes through large displacements until the valve closed. Although the general trend of the aortic valve's motion was captured by the validation method using synthetic medical images, node-to-node comparison was not entirely reliable. Comparison of the general trend was still superior to current validation approaches for biological finite element models because it considered the motion of the entire structure.
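As an illustration of the first step (direct voxel displacement estimation), the sketch below implements a bare-bones, single-level 3D Horn-Schunck iteration on synthetic volumes; it omits the multilevel scheme and variable scale constant used in the thesis and is not the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck_3d(vol1, vol2, alpha=1.0, n_iter=100):
    """Single-level 3D Horn-Schunck optical flow between two volumes.

    Returns per-voxel displacement components (u, v, w) along axes 0, 1, 2.
    """
    vol1 = vol1.astype(np.float64)
    vol2 = vol2.astype(np.float64)

    # Spatio-temporal intensity gradients (spatial gradients averaged over frames).
    Ix = 0.5 * (np.gradient(vol1, axis=0) + np.gradient(vol2, axis=0))
    Iy = 0.5 * (np.gradient(vol1, axis=1) + np.gradient(vol2, axis=1))
    Iz = 0.5 * (np.gradient(vol1, axis=2) + np.gradient(vol2, axis=2))
    It = vol2 - vol1

    u = np.zeros_like(vol1)
    v = np.zeros_like(vol1)
    w = np.zeros_like(vol1)
    denom = alpha ** 2 + Ix ** 2 + Iy ** 2 + Iz ** 2

    for _ in range(n_iter):
        # Neighbourhood averages enforce the smoothness term.
        u_bar = uniform_filter(u, size=3)
        v_bar = uniform_filter(v, size=3)
        w_bar = uniform_filter(w, size=3)
        t = (Ix * u_bar + Iy * v_bar + Iz * w_bar + It) / denom
        u = u_bar - Ix * t
        v = v_bar - Iy * t
        w = w_bar - Iz * t
    return u, v, w

# Tiny synthetic example: a bright blob shifted by one voxel along axis 0.
vol1 = np.zeros((32, 32, 32))
vol1[12:18, 14:20, 14:20] = 1.0
vol2 = np.roll(vol1, shift=1, axis=0)
u, v, w = horn_schunck_3d(vol1, vol2, alpha=0.5, n_iter=200)
print("mean estimated displacement inside the blob:", u[vol1 > 0].mean())
```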
|
29 |
How Do We Assess Perceived Stigma? Initial Validation of a New Measure
Williams, Stacey L., 01 August 2011 (links)
No description available.
|
30 |
Do Procedure Codes within Population-based Administrative Datasets Accurately Identify Patients Undergoing Cystectomy with Urinary Diversion?
Ross, James, 01 February 2024 (links)
Abstract
Introduction
Cystectomy with urinary diversion (i.e. bladder removal surgery) is commonly studied using large health administrative databases. These databases often use diagnosis or procedure codes with unknown accuracy to identify cystectomy patients, thereby resulting in significant misclassification bias. The primary objective of this study is to develop a predictive model that returns an accurate probability that a patient recorded in the discharge abstract database has undergone cystectomy with urinary diversion, stratified by type of urinary diversion (continent vs incontinent). Secondary objectives are: 1) to internally validate the predictive model and determine its accuracy using a cohort of all adults admitted to The Ottawa Hospital (TOH) within the study period; and 2) to compare the accuracy of this model to that of code-based algorithms used in previously published studies to identify cystectomy.
Methods
A gold standard reference cohort (GSC) of all patients who underwent cystectomy and urinary diversion at TOH between 2009 and 2019 was created using the SIMS registry within the TOH data warehouse, which captures all primary surgical procedures performed. The GSC was then confirmed by manual chart review to ensure accuracy. Through ICES, the GSC was linked to the provincial Discharge Abstract Database (DAD), physician billing records (OHIP), and the Ontario Cancer Registry (OCR), and a new combined dataset containing all admissions at TOH during the study period was created. Clinical information, billing codes, and intervention codes within these databases were reviewed, and the co-variables thought to be predictive of cystectomy were selected a priori. A multinomial logistic regression model (the Ottawa Cystectomy Identification Model, or OCIM) was created using these co-variables to determine the probability that a patient underwent cystectomy, stratified by continent vs incontinent diversion, during an admission in the DAD. Using the OCIM and bootstrap imputation methods, co-variable values and 95% confidence intervals were calculated. The values of these same co-variables were then measured using a code algorithm (the presence of either a procedure code or billing code for cystectomy with incontinent or continent diversion). Misclassification bias was then measured by comparing the co-variable values obtained from the OCIM or the code algorithm to the true values obtained from the gold standard reference cohort.
Results
Five hundred patients were included in the GSC [median age 68.0 (IQR 13.0); 75.6% male; 55.6% incontinent diversion]. The prevalence of cystectomy within the DAD over the study period was 0.12% (500/428697 total admissions). Sensitivity and positive predictive values for cystectomy codes were 97.1% and 58.6% for incontinent diversions and 100.0% and 48.4% for continent diversions, respectively. The OCIM accurately predicted the probability of cystectomy with incontinent diversion (c-statistic [C] 0.999, Integrated Calibration Index [ICI] 0.000) and cystectomy with continent diversion (C 1.000, ICI 0.000). Misclassification bias was lower when cystectomy patients were identified using the OCIM with bootstrap imputation than when the code algorithm was used alone.
Conclusions
A model using administrative data accurately returned the probability that cystectomy by diversion type occurred during a hospitalization. Using this model to impute cystectomy status minimized misclassification bias.
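As a hedged illustration of the bootstrap imputation step described in the Methods, the sketch below draws each admission's cystectomy status from a model-predicted probability in every bootstrap replicate and summarizes a co-variable among the imputed cases with a percentile interval; the cohort, probabilities, and co-variable are invented placeholders rather than OCIM or DAD values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder cohort: model-predicted probability of cystectomy for each
# admission, plus one co-variable of interest (e.g. age). Not real DAD data.
n = 50_000
p_cystectomy = rng.beta(0.05, 40.0, size=n)        # mostly near zero, rare cases
age = rng.normal(65, 12, size=n)

def bootstrap_imputation(p, covariable, n_boot=500):
    """Mean co-variable among imputed cystectomy cases, with 95% percentile CI."""
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        imputed = rng.random(len(p)) < p            # draw case status from the model
        estimates[b] = covariable[imputed].mean()
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

mean_age, ci = bootstrap_imputation(p_cystectomy, age)
print(f"mean age among imputed cystectomy cases: {mean_age:.1f} "
      f"(95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```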
|