21

A methodology for modeling the verification, validation, and testing process for launch vehicles

Sudol, Alicia 07 January 2016
Completing the development process and getting to first flight has become a difficult hurdle for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry standard VVT planning is largely subjective with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that will quantitatively capture the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions. This method is applied to alternative VVT strategies for the Space Shuttle Main Engine to demonstrate how the impact of rework can be mitigated, using the actual test history as a baseline. Results indicate rework cost to be the primary driver in overall project risk, and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this final application problem demonstrates the merits of this methodology in evaluating VVT strategies and providing a risk-informed decision making framework for the verification, validation, and testing process of launch vehicle systems.
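The modeling chain this abstract describes — rework probabilities encoded in a dependency structure matrix (DSM), a discrete event simulation of rework cycles, and a quadratic risk function applied to the resulting output distributions — can be sketched compactly. The following Python sketch uses invented activity costs, durations, and rework probabilities, plus a simplified quadratic risk measure; it illustrates the structure of the approach, not the author's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VVT activities: name -> (cost, duration) per execution.
activities = {
    "component test": (1.0, 2.0),
    "subsystem test": (3.0, 4.0),
    "system test": (8.0, 6.0),
}
names = list(activities)

# Dependency structure matrix: P[i, j] = probability that executing
# activity j reveals a problem forcing rework of earlier activity i.
P = np.array([
    [0.00, 0.15, 0.05],
    [0.00, 0.00, 0.20],
    [0.00, 0.00, 0.00],
])

def simulate_once(max_rework_cycles=20):
    """One discrete-event pass through the VVT plan with random rework."""
    cost = schedule = 0.0
    queue = list(range(len(names)))      # planned execution order
    cycles = 0
    while queue and cycles < max_rework_cycles:
        j = queue.pop(0)
        c, d = activities[names[j]]
        cost += c
        schedule += d
        for i in range(j):               # upstream activities may be redone
            if rng.random() < P[i, j]:
                queue[:0] = [i, j]       # redo i, then repeat j
                cycles += 1
                break
    return cost, schedule

samples = np.array([simulate_once() for _ in range(10_000)])
baseline_cost = sum(c for c, _ in activities.values())
overrun = samples[:, 0] - baseline_cost

# Quadratic risk impact: large overruns are penalized disproportionately.
risk = np.mean((np.maximum(overrun, 0.0) / baseline_cost) ** 2)
print(f"mean cost {samples[:, 0].mean():.2f}  "
      f"mean schedule {samples[:, 1].mean():.2f}  risk {risk:.4f}")
```

Sweeping the rework probabilities or adding up-front tests to the plan shifts the overrun distribution, which is the cost-of-testing versus cost-of-rework trade-off the thesis evaluates.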
22

Determinação de endotoxina bacteriana (pirogênio) em radiofármacos pelo método de formação de gel. Validação / Determination of bacterial endotoxin (pyrogen) in radiopharmaceuticals by the gel clot method. Validation

FUKUMORI, NEUZA T.O. 09 October 2014
No description available. / Master's dissertation (IPEN/D) / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
23

Determinação de endotoxina bacteriana (pirogênio) em radiofármacos pelo método de formação de gel. Validação / Determination of bacterial endotoxin (pyrogen) in radiopharmaceuticals by the gel clot method. Validation

FUKUMORI, NEUZA T.O. 09 October 2014
Before the Limulus Amebocyte Lysate (LAL) test, the only way to assess pyrogenicity in parenteral drugs and medical devices was the United States Pharmacopeia (USP) rabbit pyrogen test. For radiopharmaceuticals in particular, the LAL test is the method of choice for determining bacterial endotoxin (pyrogen). The aim of this work was to validate the gel clot method for several radiopharmaceuticals without measurable interference. The Food and Drug Administration (FDA) LAL method guideline defines interference as a condition that causes a significant difference between the gelation endpoints of the positive water control and positive product control series when a standard endotoxin is used. Experiments were performed according to the USP bacterial endotoxins test on 131I-m-iodobenzylguanidine, on the radioisotopes Gallium-67 and Thallium-201, and on the lyophilized reagents DTPA, Phytate, GHA, SAH (human serum albumin), and Colloidal Tin. The Maximum Valid Dilution (MVD) was calculated for each product based on its clinical dose, and serial dilutions below the MVD were evaluated in duplicate to detect interference. The labeled sensitivity of the LAL reagent was 0.125 EU mL-1 (Endotoxin Units per milliliter). For validation, a dilution series of endotoxin standard (ES) at concentrations from 0.5 to 0.03 EU mL-1 was tested in quadruplicate to confirm the sensitivity of the LAL reagent. The same dilution series was prepared with the ES and the product diluted 100-fold, in three consecutive batches of each radiopharmaceutical. The products 131I-m-iodobenzylguanidine, Gallium-67, Thallium-201, DTPA, SAH, and Colloidal Tin were compatible with the method at a 1:100 dilution factor; Phytate and GHA interfered with the gel clot assay. Other endotoxin determination techniques, such as the chromogenic (color development) and turbidimetric (turbidity development) assays, were evaluated to obtain qualitative and quantitative information on endotoxin concentrations in the samples. / Master's dissertation (IPEN/D) / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
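For orientation, the MVD calculation mentioned above follows a standard USP pattern: an endotoxin limit for the product (for radiopharmaceuticals, commonly taken as 175/V EU mL-1, with V the maximum administered volume in mL) divided by the labeled LAL sensitivity λ. A minimal Python sketch under those assumptions, with a hypothetical dose volume:

```python
def endotoxin_limit(max_dose_volume_ml: float) -> float:
    """Endotoxin limit for a radiopharmaceutical in EU/mL, using the
    common USP-style allowance of 175 EU per maximum dose volume V (mL)."""
    return 175.0 / max_dose_volume_ml

def maximum_valid_dilution(limit_eu_per_ml: float,
                           lal_sensitivity_eu_per_ml: float = 0.125) -> float:
    """MVD = endotoxin limit / labeled LAL sensitivity (lambda)."""
    return limit_eu_per_ml / lal_sensitivity_eu_per_ml

# Hypothetical product with a 10 mL maximum administered volume.
el = endotoxin_limit(max_dose_volume_ml=10.0)
mvd = maximum_valid_dilution(el)
print(f"endotoxin limit = {el:.1f} EU/mL, MVD = 1:{mvd:.0f}")
# -> endotoxin limit = 17.5 EU/mL, MVD = 1:140, so a 1:100 test dilution
#    (the factor validated in the work) stays within the MVD.
```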
24

Validation of an In Vitro Mutagenicity Assay Based on Pulmonary Epithelial Cells from the Transgenic MutaMouse: Intra-Laboratory Variability and Metabolic Competence

Hanna, Joleen January 2018
Genetic toxicity tests used for regulatory screening must be rigorously validated to ensure accuracy, reliability and relevance. Hence, prior to establishment of an internationally accepted test guideline, a new assay must undergo multi-stage validation. An in vitro transgene mutagenicity assay based on an immortalized cell line derived from MutaMouse lung (i.e., FE1 cells) is currently undergoing formal validation. FE1 cells retain a lacZ transgene in a λgt10 shuttle vector that can be retrieved for scoring of chemically induced mutations. This work contributes to validation of the in vitro transgene (lacZ) mutagenicity assay in MutaMouse FE1 cells. More specifically, the work includes an intra-laboratory variability study and a follow-up study to assess the endogenous metabolic capacity of FE1 cells. The former is essential to determine assay reliability, the latter to define the range of chemicals that can be reliably screened without an exogenous metabolic activation mixture (i.e., rat liver S9). The intra-laboratory variability assessment revealed minimal variability; thus, assay reproducibility can be deemed acceptable. Assessment of metabolic capacity involved exposing FE1 cells to five known mutagens and then assessing changes in the expression of genes involved in xenobiotic metabolism; induced transgene mutant frequency (±S9) was assessed in parallel. The results revealed that the FE1 cell line is capable of mobilising several Phase I and Phase II gene products known to be involved in the bioactivation of mutagens. Collectively, the results presented support the contention that the FE1 cell mutagenicity assay is reliable and reproducible. Consequently, the assay is an excellent candidate for continued validation and eventual establishment of an OECD (Organisation for Economic Co-operation and Development) Test Guideline.
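The transgene readout underlying the assay reduces to a simple ratio of mutant to total plaques scored. A toy calculation with invented plaque counts (not data from the study), showing induced mutant frequency and fold change over control, with and without S9:

```python
def mutant_frequency(mutant_plaques: int, total_plaques: int) -> float:
    """lacZ transgene mutant frequency: mutant plaques per plaque scored."""
    return mutant_plaques / total_plaques

# Invented plaque counts for a solvent control and a test mutagen,
# with and without the exogenous activation mixture (rat liver S9).
control = mutant_frequency(12, 250_000)
treated_no_s9 = mutant_frequency(95, 240_000)
treated_s9 = mutant_frequency(38, 245_000)

for label, mf in [("control", control), ("-S9", treated_no_s9), ("+S9", treated_s9)]:
    print(f"{label:>8}: MF = {mf:.2e}  fold change = {mf / control:.1f}")
```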
25

Model comparison and assessment by cross validation

Shen, Hui 11 1900
Cross validation (CV) is widely used for model assessment and comparison. In this thesis, we first review and compare three v-fold CV strategies: best single CV, repeated and averaged CV, and double CV. The mean squared errors of the CV strategies in estimating the best predictive performance are illustrated using simulated and real data examples. The results show that repeated and averaged CV is a good strategy and outperforms the other two CV strategies for finite samples in terms of the mean squared error in estimating prediction accuracy and the probability of choosing an optimal model. In practice, when we need to compare many models, conducting the repeated and averaged CV strategy is not computationally feasible. We develop an efficient sequential methodology for model comparison based on CV that also takes into account the randomness in CV. The number of models is reduced via an adaptive, multiplicity-adjusted sequential algorithm, where poor performers are quickly eliminated. By exploiting matching of individual observations, it is sometimes even possible to establish the statistically significant inferiority of some models with just one execution of CV. This adaptive and computationally efficient methodology is demonstrated on a large cheminformatics data set from PubChem. Cross-validated mean squared error (CVMSE) is widely used to estimate the prediction mean squared error (MSE) of statistical methods. For linear models, we show how CVMSE depends on the number of folds v used in cross validation, the number of observations, and the number of model parameters. We establish that the bias of CVMSE in estimating the true MSE decreases with v and increases with model complexity. In particular, the bias may be very substantial for models with many parameters relative to the number of observations, even if v is large. These results are used to correct CVMSE for its bias. We compare our proposed bias correction with that of Burman (1989) through simulated and real examples. We also illustrate that our method of correcting for the bias of CVMSE may change the results of model selection. / Faculty of Science / Department of Statistics / Graduate
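As a concrete illustration of the "repeated and averaged" strategy the thesis compares, the sketch below runs v-fold CV several times with different random splits and averages the resulting CVMSE values. It uses scikit-learn and synthetic linear-model data; it is a sketch of the general technique, not the thesis code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Synthetic linear-model data: n observations, p predictors.
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

def cvmse(X, y, v=10, seed=0):
    """Cross-validated MSE from a single execution of v-fold CV."""
    sq_errors = []
    for train, test in KFold(n_splits=v, shuffle=True, random_state=seed).split(X):
        model = LinearRegression().fit(X[train], y[train])
        sq_errors.extend((y[test] - model.predict(X[test])) ** 2)
    return float(np.mean(sq_errors))

# "Repeated and averaged" CV: rerun v-fold CV over different random splits
# and average, damping the split-to-split randomness of any single CV run.
repeats = [cvmse(X, y, v=10, seed=s) for s in range(20)]
print(f"single CV: {repeats[0]:.3f}; repeated-and-averaged: {np.mean(repeats):.3f}")
```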
26

Examining Thinking Skills in the Context of Large-scale Assessments Using a Validation Approach

Hachey, Krystal January 2014
Large-scale assessments (LSAs) of student achievement in education serve a variety of purposes, such as comparing educational programs, providing accountability measures, and assessing achievement on a broad range of curriculum standards. In addition to measuring content-related processes such as mathematics or reading, LSAs also focus on thinking-related skills, from lower-level thinking (e.g., understanding concepts) to problem solving. The purpose of the current study was to deconstruct and clarify the mechanisms that make up an LSA, including thinking skills and assessment perspectives, using a validation approach based on the work of Messick (1995) and Kane (1990). Two questions guided the examination of the design and student data of two LSAs in reading: (a) what common thinking skills are assessed? and (b) what are the LSAs' underlying assessment perspectives? Content analyses were carried out on two LSAs that purported to assess thinking skills in reading: the Pan-Canadian Assessment Program (PCAP) and the Educational Quality and Accountability Office (EQAO) assessment. As the two LSAs evaluated reading, the link between reading and thinking was also addressed. Conceptual models were developed and used to examine the assessment framework, test booklets, and scoring guide of the two assessments. In addition, a nonlinear factor analysis was conducted on the EQAO item-level data from the test booklets to examine the dimensionality of the LSA. The most prominent thinking skill referenced across the assessment frameworks, test booklets, and scoring guides was critical thinking, while results from the quantitative analysis revealed that two factors best represented the item-level EQAO data. Overall, the tools provided in the current study can help inform both researchers and practitioners about the interaction between the assessment approach and related thinking skills.
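As a rough illustration of the dimensionality check described above, the sketch below fits factor models with one to three factors to simulated binary item responses and compares fit. Note the substitution: the thesis used nonlinear factor analysis, which is better suited to dichotomous items; scikit-learn's linear FactorAnalysis is used here only as a readily available proxy, and the data are invented.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)

# Simulated binary item responses (n students x k reading items) driven by
# two latent skills -- a stand-in for item-level LSA data.
n, k = 2000, 15
theta = rng.normal(size=(n, 2))                      # latent abilities
loadings = rng.uniform(0.4, 1.0, size=(2, k)) * (rng.random((2, k)) < 0.6)
responses = (theta @ loadings + rng.normal(size=(n, k)) > 0).astype(float)

# Compare average log-likelihood across factor counts to judge dimensionality.
for m in (1, 2, 3):
    fa = FactorAnalysis(n_components=m, random_state=0).fit(responses)
    print(f"{m} factor(s): avg log-likelihood = {fa.score(responses):.3f}")
```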
27

Development and Validation of a Case-finding Questionnaire to Identify Undiagnosed Chronic Obstructive Pulmonary Disease (COPD) and Asthma

Huynh, Chau 17 September 2021
Background: Undiagnosed chronic obstructive pulmonary disease (COPD) and asthma remain prevalent health issues. The current global and Canadian prevalence reported for obstructive lung disease (OLD) does not reflect the true prevalence, since undiagnosed cases remain missed and uncounted. Spirometry testing is viewed as the current gold standard for diagnosing obstructive lung disease; however, barriers associated with inaccessibility and underuse have contributed to undiagnosed lung disease. While guidelines advise against spirometry for asymptomatic persons, active case-finding for persons at risk and those presenting with symptoms has been recommended. Given that early treatment and management have the potential to improve health-related quality of life and reduce the progression of lung decline, identifying undiagnosed lung disease is critical to preventing adverse health outcomes. To date, this marks the first study to incorporate both obstructive lung diseases into a single case-finding instrument. Objective: To develop and validate a case-finding questionnaire to identify undiagnosed COPD and asthma in community-dwelling adults, and to prospectively evaluate its reliability and predictive performance. Methods: This study uses data obtained from the Undiagnosed Chronic Obstructive Pulmonary Disease and Asthma Population (UCAP) study from June 2017 to March 2020. Eligible participants were >18 years old, had a history of chronic respiratory symptoms, and had no previous physician diagnosis of obstructive lung disease. The presence of obstructive lung disease was confirmed with spirometry. Multinomial logistic regression and recursive partitioning were used to develop a case-finding questionnaire; predictors were drawn from six questionnaires completed during the spirometry visit. Diagnostic accuracy of the models was used to evaluate performance. The risk score was externally validated in a cohort of participants recruited between October 2020 and January 2021 at study sites open during the COVID-19 pandemic. Results: The derivation cohort included 1615 participants, with 136 ultimately diagnosed with asthma and 195 diagnosed with COPD. A 13-item questionnaire was developed using logistic regression, covering age, pack-years of cigarette smoking, wheeze, cough, sleep, chest tightness, level of tiredness, physical activity limitation, occupational exposure, primary or second-hand smoke exposure, frequency of chest attacks, and salbutamol medication. Internal validation showed an area under the curve (AUC) of 0.79 (0.70-0.90) for COPD and 0.64 (0.45-0.80) for asthma. At a predicted probability of greater than or equal to 6%, specificity was 17% for no OLD, sensitivity was 91% for asthma, and sensitivity was 96% for COPD. The external cohort included 74 subjects, with 8 diagnosed with COPD and 6 diagnosed with asthma. The AUC was 0.89 (95% CI: 0.62-0.90) for COPD and 0.65 (95% CI: 0.63-0.72) for asthma. Sensitivity was 100% for both asthma and COPD, specificity was 13%, and the positive predictive value was 23%. Conclusion: The 13-item case-finding questionnaire was shown to be reliable and to have modest predictive ability in identifying COPD and asthma. Prospective evaluation within the UCAP study is ongoing to recruit a larger sample and re-evaluate predictive performance.
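A schematic of the modeling-and-evaluation loop described above, using synthetic stand-ins for the 13 questionnaire items (not the UCAP data or the published risk score): fit a multinomial logistic regression over the three outcome classes, compute one-vs-rest AUCs, and apply the >= 6% probability threshold as the case-finding rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-ins for the 13 questionnaire items (age, pack-years,
# wheeze, ...); outcome classes: 0 = no OLD, 1 = asthma, 2 = COPD.
n = 1615
X = rng.normal(size=(n, 13))
y = rng.choice([0, 1, 2], size=n, p=[0.80, 0.08, 0.12])

# Multinomial logistic regression (softmax is the default with lbfgs).
model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)   # columns: P(no OLD), P(asthma), P(COPD)

# One-vs-rest AUC per disease class, as in the abstract's evaluation.
for cls, label in [(1, "asthma"), (2, "COPD")]:
    print(f"{label} AUC: {roc_auc_score(y == cls, prob[:, cls]):.2f}")

# Case-finding rule: refer anyone whose predicted probability of either
# disease meets the >= 6% threshold -- trading specificity for sensitivity.
flagged = (prob[:, 1] >= 0.06) | (prob[:, 2] >= 0.06)
print(f"flagged for confirmatory spirometry: {flagged.mean():.0%}")
```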
28

An Optical Flow Based Approach to Validating Dynamic Structural Finite Element Models of Biological Organs Using 4D Medical Images - the Aortic Valve as an Example

Gibney, Emma 25 November 2021
Recent developments in the use of finite element methods to analyze biological structures have created a need for a standardized method to validate these models. The purpose of this thesis was to develop a system to effectively and efficiently validate biological finite element models using 4D medical images. The aortic valve was chosen as the biological model for testing, as any solution that could manage the complexity of the valve's motion would likely work for simpler biological models. The proposed validation method involved three steps: estimating a voxel displacement field using a direct method of 3D motion estimation, converting the voxel displacement field into a nodal displacement field, and validating the results of a finite element model by comparing its nodal displacement field to the nodal displacement field from the medical images. The proposed validation method was implemented using synthetic 4D CT images of an aortic valve based on an existing finite element model, where the ground truth was the results of that existing model. Three direct motion estimation methods were implemented within the first step of the method and compared: 3D Horn-Schunck optical flow, 3D Brox optical flow, and the demons method. A multilevel scheme with a variable scale constant was integrated into each of these motion estimation methods so that larger displacement magnitudes could be captured. Horn-Schunck optical flow was found to best capture the motion of the aortic valve throughout a cardiac cycle. The proposed validation method tracked the aorta nodes effectively through an entire cardiac cycle and tracked leaflet nodes through large displacements until the valve closed. Although the validation method captured the general trend of the aortic valve's motion using synthetic medical images, node-to-node comparison was not entirely reliable. Comparison of the general trend was still superior to current validation methods for biological finite element models, as it considered the motion of the entire structure.
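The core of the first step is the Horn-Schunck iteration, which the thesis extends to 3D with a multilevel scheme. A minimal 2D version is sketched below to show the regularized update; the toy example displaces a bright square by one pixel and recovers the horizontal flow. This is a generic textbook implementation, not the thesis code.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.5, n_iter=200):
    """2D Horn-Schunck optical flow between two frames. (The thesis uses a
    3D extension with a multilevel scheme; this shows the core iteration.)"""
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = np.gradient(I1, axis=1)          # spatial gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    neighbour_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_bar, v_bar = neighbour_avg(u), neighbour_avg(v)
        # Jacobi update of the regularized optical-flow equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy check: a bright square shifted one pixel to the right should yield
# flow pointing in the +x direction.
I1 = np.zeros((32, 32))
I1[12:20, 12:20] = 1.0
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print(f"mean horizontal flow in the square: {u[12:20, 12:20].mean():.2f}")
```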
29

How Do We Assess Perceived Stigma? Initial Validation of a New Measure

Williams, Stacey L. 01 August 2011
No description available.
30

Validation of the Tranquillity Rating Prediction Tool (TRAPT): comparative studies in UK and Hong Kong

Watts, Gregory R., Marafa, L. 23 August 2017
The Tranquillity Rating Prediction Tool (TRAPT) has been used to predict the quality of tranquillity in outdoor urban areas from two significant factors: the average level of anthropogenic noise and the percentage of natural features in view. The method has a number of applications, including producing tranquillity contours that can inform decisions regarding the impact of new anthropogenic noise sources or of developments causing visual intrusion. The method was intended for use mainly in outdoor areas, yet it was developed using responses from UK volunteers to video clips viewed indoors. Because the volunteers for that study were all UK residents, it was important to calibrate responses for other ethnic groups, who may respond differently depending on cultural background. To address these issues, further studies were performed in Hong Kong using the same video recording played back under the same conditions as in the UK study. The Hong Kong study recruited three groups: residents of Hong Kong, residents of Mainland China, and a diverse group from 16 different nations. There was good agreement among all these groups, with average tranquillity ratings for the different locations differing by less than one scale point in most cases. / The study was supported by the Bradford Centre for Sustainable Environments at the University of Bradford and by the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC/GRF. CUHK 449612)
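Published versions of TRAPT take a simple linear form in the two factors named above. The sketch below shows that structure only; the coefficients are illustrative placeholders, not the calibrated values from the papers.

```python
def tranquillity_rating(noise_level_db: float, natural_features_pct: float) -> float:
    """TRAPT-style tranquillity prediction on a 0-10 scale.
    The linear structure follows the abstract (rating falls with anthropogenic
    noise level, rises with percentage of natural features in view); the
    coefficients below are illustrative placeholders, not the published values.
    """
    tr = 9.68 + 0.041 * natural_features_pct - 0.146 * noise_level_db
    return min(max(tr, 0.0), 10.0)   # clamp to the rating scale

# A quiet park versus a busy roadside.
print(tranquillity_rating(noise_level_db=50, natural_features_pct=80))   # ~5.7
print(tranquillity_rating(noise_level_db=70, natural_features_pct=10))   # 0.0
```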
