71 |
Examination of posterior predictive check and bootstrap as population model validation tools
Desai, Amit V. 01 January 2008 (has links) (PDF)
Drug development is time consuming and expensive, with high failure rates. It takes 10-15 years for a drug to go from discovery to approval, and the mean cost of developing a drug is $1.5 billion. Pharmacometric (PM) models play a pivotal role in knowledge-driven drug development, and these models require validation prior to application. The purpose of the current study was to evaluate the posterior predictive check (PPC) and the bootstrap as population model validation tools. The PPC was evaluated to determine whether it was able to distinguish between population pharmacokinetic (PPK) models that were developed/estimated from influence data and models that were not. The bootstrap was examined to see whether there was a correspondence between the root mean squared prediction errors (RMSPE) for serum concentrations when estimated by external prediction methods and when estimated by the standard bootstrap. In the case of the PPC, C_last, C_first-C_last and C_mid values from the initial data sets were compared to the corresponding posterior distributions. For models estimated from data without influence observations, on average 76%, 30% and 52% of the values from the posterior distributions were below the initial C_last, C_first-C_last and C_mid values, respectively; for models estimated from influence data, the corresponding figures were on average 93%, 13% and 67%. The PPC was thus able to classify models derived from influence versus no-influence data. In the case of the bootstrap, when the original model was used to predict into the external data, the WRMSPE for drug 1, drug 2, drug 3, drug 4 and the simulated data set was 10.40 mg/L, 20.36 mg/L, 0.72 mg/L, 15.27 mg/L and 14.24 mg/L, respectively. From the bootstrap, the improved WRMSPE for drug 1, drug 2, drug 3, drug 4 and the simulated data set was 9.35 mg/L, 19.85 mg/L, 0.50 mg/L, 14.44 mg/L and 13.98 mg/L, respectively. The bootstrap provided estimates of WRMSPE that corresponded to those of the external validation methods. From the results obtained, it was concluded that both the PPC and the bootstrap have value as validation tools.
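The standard bootstrap idea used in this work can be sketched in a few lines. The following is a minimal illustration with hypothetical data and a stand-in model, not the thesis's PPK models or its WRMSPE weighting: resample the data with replacement and recompute the prediction error in each resample.

```python
import numpy as np

rng = np.random.default_rng(42)

def rmspe(y_obs, y_pred):
    """Root mean squared prediction error."""
    return np.sqrt(np.mean((y_obs - y_pred) ** 2))

# Hypothetical observed serum concentrations and model predictions (mg/L)
conc_obs = rng.lognormal(mean=1.0, sigma=0.3, size=50)
conc_pred = conc_obs * rng.normal(loc=1.0, scale=0.1, size=50)  # stand-in model

# Standard nonparametric bootstrap of the prediction error
n_boot = 2000
boot_errors = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, conc_obs.size, size=conc_obs.size)
    boot_errors[b] = rmspe(conc_obs[idx], conc_pred[idx])

print(f"apparent RMSPE:       {rmspe(conc_obs, conc_pred):.3f} mg/L")
print(f"bootstrap mean RMSPE: {boot_errors.mean():.3f} mg/L")
print(f"bootstrap 95% interval: ({np.percentile(boot_errors, 2.5):.3f}, "
      f"{np.percentile(boot_errors, 97.5):.3f}) mg/L")
```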
|
72 |
Improving the validation of a railway vehicle model in the virtual certification process / Förbättring av valideringen av en spårfordonsmodell i den virtuella certifieringsprocessen
de Leeuw, Bente January 2021 (has links)
Before a vehicle can be placed in service, it has to complete an authorisation process. At the moment, this process largely depends on physical tests, which makes it expensive and slow. With new technologies and improved simulations, the process can be shortened and its costs lowered. The validation of a vehicle model, however, is often limited by the available data. Often the measured rail profiles are not available, and a new UIC60 profile is therefore used for the simulations. Since the track has usually been in service and shows signs of wear and damage, this work investigates the influence of the rail profiles on the validation of a railway vehicle model. The current validation methods of the European norm are used to compare simulated values with forces and accelerations available from vehicle measurements. In the first step, 25 track sections with different curve radii were simulated with a measured rail profile every 100 meters. In the next step, the same sections were simulated using the standard UIC60 rail profile. The results show that the use of measured rail profiles has a positive influence on the simulation outcome. In the final step, a single narrow curve was simulated to show the effect of standard and worn rail profiles. Four different wear stages of the rail profile were simulated and compared to the available vehicle measurements. These simulations show that a medium-worn rail profile gives the most accurate results.
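As a rough illustration of the comparison step, the sketch below computes a simple RMS deviation between hypothetical measured and simulated force signals for two rail-profile choices; the signals, numbers, and metric are assumptions for illustration, not the norm's actual validation quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)

# Hypothetical measured and simulated lateral wheel-rail force signals [kN]
force_measured = 20.0 + 5.0 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.8, t.size)
force_sim_nominal = 20.0 + 4.2 * np.sin(2 * np.pi * 0.5 * t)  # nominal UIC60 profile
force_sim_worn = 20.0 + 4.8 * np.sin(2 * np.pi * 0.5 * t)     # measured/worn profile

def rms_deviation(sim, meas):
    """RMS deviation between simulated and measured signals."""
    return np.sqrt(np.mean((sim - meas) ** 2))

for label, sim in [("nominal UIC60", force_sim_nominal), ("worn profile", force_sim_worn)]:
    print(f"{label}: RMS deviation = {rms_deviation(sim, force_measured):.2f} kN")
```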
|
73 |
Constructing and Validating the Productivity in Online Course Redesign (POCR) Model for Higher Education
Laguardia, Eric D 01 January 2024 (has links) (PDF)
Explored and reported as three distinct but interrelated studies, this dissertation endeavored to develop a technology (i.e., process) to address the challenges faced by higher education faculty charged with updating or otherwise redesigning online courses. Considering the growing prevalence of online learning in higher education, calls on faculty with limited pedagogical training to design effective instruction are increasingly commonplace. Through a developmental research approach, these studies chronicled the Productivity in Online Course Redesign (POCR) model's construction via practical and theoretical means as well as its validation via expert review and field evaluation. The first of three contributing publications recorded the POCR model’s initial conceptualization and internal validation via review by an expert panel. Realized as a case study, the second contributing publication utilized field evaluation procedures to test the POCR model in a real-world setting, thus externally validating the model. The final contributing publication detailed a model refinement effort in which instructional design principles aligned with the Community of Inquiry framework were integrated to provide users with additional pedagogical support. The integration underwent internal validation via Delphi review. Deemed a valid model in both a conceptual and a practical sense, the POCR model shows promise as a tool for faculty who wish to engage with course design more efficiently and systematically.
|
74 |
Estudo e implementação de métodos de validação de modelos matemáticos aplicados no desenvolvimento de sistemas de controle de processos industriais. / Research and implementation of mathematical model validation methods applied in the development of industrial process control systems.
Alvarado, Christiam Segundo Morales 22 June 2017 (has links)
The validation of linear models is an important step in a System Identification project, since the correct choice of model to represent most of the process dynamics, within a finite set of identification techniques and around an operating point, is what enables the successful development of predictive and robust controllers. For this reason, the main objective of this thesis is the development of a linear model validation method whose assessment tools are statistical methods, dynamic evaluations, and analysis of model robustness. The main component of the proposed linear model validation system is a fuzzy system that analyzes the results obtained by the tools used in the validation stage. The System Identification project is based on real operating data from a pH neutralization pilot plant, located in the Industrial Process Control Laboratory of the Escola Politécnica of the University of São Paulo, Brazil. To verify the validation results, all models are tested in a QDMC (Quadratic Dynamic Matrix Control) predictive controller following a reference trajectory. The criteria used to evaluate the QDMC controller's performance for each model were the controller's response speed and the minimum-variance index of the process variable. The results show that the reliability of the designed validation system was 85.71% for loops with low nonlinearity and 50% for loops with high nonlinearity in a real process, with respect to the performance indices obtained by the QDMC controller.
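The fuzzy-aggregation idea can be sketched as follows: compute a standard fit statistic for a model and grade it with triangular membership functions. The membership functions, thresholds, and data here are assumptions for illustration, not the fuzzy system developed in the thesis.

```python
import numpy as np

def fit_percentage(y_meas, y_sim):
    """Normalized fit (%), a common model-validation statistic."""
    return 100.0 * (1.0 - np.linalg.norm(y_meas - y_sim)
                    / np.linalg.norm(y_meas - y_meas.mean()))

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Hypothetical measured output and simulated model output
rng = np.random.default_rng(1)
y_meas = np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.05, 200)
y_sim = np.sin(np.linspace(0, 6, 200))

fit = fit_percentage(y_meas, y_sim)
grades = {                       # assumed membership functions
    "poor": tri(fit, 0, 25, 55),
    "acceptable": tri(fit, 45, 70, 90),
    "good": tri(fit, 80, 100, 120),
}
verdict = max(grades, key=grades.get)
print(f"fit = {fit:.1f}% -> model judged '{verdict}'")
```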
|
75 |
Confidence-based model validation for reliability assessment and its integration with reliability-based design optimization
Moon, Min-Yeong 01 August 2017 (has links)
Conventional reliability analysis methods assume that a simulation model is able to represent the real physics accurately. However, this assumption may not always hold, as the simulation model could be biased due to simplifications and idealizations. Simulation models are approximate mathematical representations of real-world systems and thus cannot exactly imitate them. The accuracy of a simulation model is especially critical when it is used for reliability calculation. Therefore, a simulation model should be validated using prototype testing results before it is used for reliability analysis. In practical engineering situations, however, experimental output data for model validation are limited due to the significant cost of running a large number of physical tests. Thus, the model validation needs to account for the uncertainty induced by insufficient experimental output data as well as the inherent variability existing in the physical system, and hence in the experimental test results. Therefore, in this study, a confidence-based model validation method has been developed that captures the variability and the uncertainty, and that corrects model bias at a user-specified target confidence level. Reliability assessment using the confidence-based model validation can provide a conservative estimate of the reliability of a system, with confidence, when only insufficient experimental output data are available.
Without confidence-based model validation, a design obtained from conventional reliability-based design optimization (RBDO) could either fail to satisfy the target reliability or be overly conservative. Simulation model validation is therefore necessary to obtain a reliable optimum product through the RBDO process. In this study, the developed confidence-based model validation is integrated into the RBDO process to provide a truly confident RBDO optimum design: a conservative optimum at the target confidence level. However, it is challenging to obtain steady convergence in the RBDO process with confidence-based model validation, because the feasible domain changes as the design moves (i.e., a moving-target problem). To resolve this issue, a practical optimization procedure, which terminates the RBDO process once the target reliability is satisfied, is proposed. In addition, efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with the confidence-based model validation. Numerical examples demonstrate that the proposed RBDO approach obtains a conservative and practical optimum design that satisfies the target reliability of the designed product given a limited number of experimental output data.
Thus far, while the simulation model might be biased, it has been assumed that the distribution models for input variables and parameters are correct. In real practical applications, however, only limited test data are available for modeling the input distributions of material properties, manufacturing tolerances, operational loads, etc. (parameter uncertainty), and, as before, only a limited number of output test data is available. Therefore, reliability needs to be estimated by considering parameter uncertainty as well as the biased simulation model. Computational methods and a process are developed to obtain a confidence-based reliability assessment. The insufficient input and output test data induce uncertainties in the input distribution models and output distributions, respectively. These uncertainties, which arise from lack of knowledge (the insufficient test data), are different from the inherent input distributions and corresponding output variability, which are the natural randomness of the physical system.
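A minimal sketch of the notion of a confidence-based (conservative) reliability estimate under scarce output test data, using a plain bootstrap lower bound on a fitted-normal reliability; the data, distribution choice, and bound are assumptions for illustration, not the methods developed in the thesis.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)

# Hypothetical experimental outputs from only 15 physical tests;
# failure occurs when the output exceeds the threshold
output_data = rng.normal(loc=80.0, scale=8.0, size=15)
threshold = 100.0

def reliability(samples, limit):
    """P(output < limit) under a normal fit to the samples."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return 0.5 * (1.0 + erf((limit - mu) / (sigma * sqrt(2.0))))

# Bootstrap the small data set to capture the uncertainty caused by having
# so few tests; a low percentile gives a conservative estimate at the
# target confidence level
n_boot, confidence = 5000, 0.95
boot = np.array([
    reliability(rng.choice(output_data, size=output_data.size, replace=True),
                threshold)
    for _ in range(n_boot)
])
print(f"point estimate:     {reliability(output_data, threshold):.4f}")
print(f"conservative ({confidence:.0%}): {np.percentile(boot, 100 * (1 - confidence)):.4f}")
```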
|
76 |
Enabling Timing Analysis of Complex Embedded Software Systems
Kraft, Johan January 2010 (has links)
Cars, trains, trucks, telecom networks and industrial robots are examples of products relying on complex embedded software systems, running on embedded computers. Such systems may consist of millions of lines of program code developed by hundreds of engineers over many years, often decades. Over the long life-cycle of such systems, the main part of the product development costs is typically not the initial development, but the software maintenance, i.e., improvements and corrections of defects, over the years. A major maintenance cost is the verification of the system after changes have been applied, which often requires a huge amount of testing. However, today's techniques are not sufficient, as defects are often found post-release, by the customers. This area is therefore of high relevance for industry. Complex embedded systems often control machinery where timing is crucial for accuracy and safety. Such systems therefore have important timing requirements, such as maximum response times. However, when maintaining complex embedded software systems, it is difficult to predict how changes may impact the system's run-time behavior and timing, e.g., response times. Analytical and formal methods for timing analysis exist, but are often hard to apply in practice to complex embedded systems, for several reasons. As a result, the industrial practice for deciding the suitability of a proposed change, with respect to its run-time impact, is to rely on the subjective judgment of experienced developers and architects. This is a risky and inefficient trial-and-error approach, which may waste large amounts of person-hours on implementing unsuitable software designs with potential timing or performance problems. Such problems generally cannot be detected until late stages of testing, when the updated software system can be tested at system level under realistic conditions, and even then they are easy to miss. If products are released containing software with latent timing errors, the result may be huge costs, such as car recalls, or even accidents. Even when such problems are found through testing, they necessitate design changes late in the development project, which cause delays and increase costs. This thesis presents an approach for impact analysis with respect to run-time behavior, such as timing and performance, for complex embedded systems. The impact analysis is performed through optimizing simulation, where the simulation models are automatically generated from the system implementation. This approach allows the consequences of proposed designs, for new or modified features, to be predicted by prototyping the change in the simulation model at a high level of abstraction, e.g., by increasing the execution time of a particular task. Thereby, designs leading to timing, performance, or resource-usage problems can be identified early, before implementation, and late redesigns are avoided, which improves development efficiency and predictability as well as software quality. The contributions presented in this thesis are within four areas related to simulation-based analysis of complex embedded systems: (1) simulation and simulation optimization techniques, (2) automated extraction of simulation models from source code, (3) methods for validation of such simulation models, and (4) run-time recording techniques for model extraction, impact analysis and model validation purposes.
Several tools have been developed during this work, of which two are being commercialized by the spin-off company Percepio AB. Note that the Katana approach, in area (2), is the subject of a recent patent application (patent pending). / PROGRESS
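To make the prototyping idea concrete, here is a toy sketch (not the thesis's tooling): a time-stepped simulation of two fixed-priority tasks in which one task's execution time is increased to represent a proposed change, and the worst observed response times are compared.

```python
def simulate(tasks, horizon):
    """Time-stepped simulation of fixed-priority preemptive scheduling.

    tasks: list of (period, exec_time) tuples, highest priority first.
    Assumes each job finishes before its next release (true here).
    Returns the worst observed response time per task.
    """
    n = len(tasks)
    remaining = [0] * n   # remaining execution time of the current job
    release = [0] * n     # release time of the current job
    worst = [0] * n
    for t in range(horizon):
        for i, (period, exec_time) in enumerate(tasks):
            if t % period == 0:          # a new job is released
                remaining[i], release[i] = exec_time, t
        for i in range(n):               # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:    # job done: record its response time
                    worst[i] = max(worst[i], t + 1 - release[i])
                break
    return worst

# Prototype a change: the low-priority task's execution time grows from 2 to 4
baseline = [(5, 1), (10, 2)]
modified = [(5, 1), (10, 4)]
print("baseline worst response times:", simulate(baseline, 1000))
print("modified worst response times:", simulate(modified, 1000))
```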
|
77 |
Model Validation and Discovery for Complex Stochastic Systems
Jha, Sumit Kumar 02 July 2010 (has links)
In this thesis, we study two fundamental problems that arise in the modeling of stochastic systems: (i) Validation of stochastic models against behavioral specifications such as temporal logics, and (ii) Discovery of kinetic parameters of stochastic biochemical models from behavioral specifications.
We present a new Bayesian algorithm for Statistical Model Checking of stochastic systems based on a sequential version of Jeffreys’ Bayes Factor test. We argue that the Bayesian approach is more suited for application domains like systems biology modeling, where distributions on nuisance parameters and priors may be known. We prove that our Bayesian Statistical Model Checking algorithm terminates for a large subclass of prior probabilities. We also characterize the Type I/II errors associated with our algorithm. We experimentally demonstrate that this algorithm is suitable for the analysis of complex biochemical models like those written in the BioNetGen language. We then argue that i.i.d. sampling based Statistical Model Checking algorithms are not an effective way to study rare behaviors of stochastic models and present another Bayesian Statistical Model Checking algorithm that can incorporate non-i.i.d. sampling strategies.
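The flavor of such a sequential Bayesian test can be sketched with a Bernoulli/Beta formulation; the property probability, threshold, prior, and stopping rule below are assumptions for illustration and omit the thesis's treatment of Jeffreys' test and its termination guarantees.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

def sample_property(p_true=0.82):
    """Stand-in for simulating one trace and checking the property on it."""
    return rng.random() < p_true

# H1: P(property) >= theta   vs   H0: P(property) < theta
theta, bf_threshold = 0.75, 100.0
a, b = 1.0, 1.0                 # Beta(1, 1) prior on the unknown probability

successes = trials = 0
for _ in range(100_000):        # cap on the number of simulation runs
    successes += sample_property()
    trials += 1
    post = beta(a + successes, b + trials - successes)
    p_h1 = min(post.sf(theta), 1 - 1e-12)    # posterior mass of H1 (clipped)
    prior_odds = (1 - theta) / theta         # prior mass ratio under Beta(1, 1)
    bayes_factor = (p_h1 / (1 - p_h1)) / prior_odds
    if bayes_factor > bf_threshold or bayes_factor < 1 / bf_threshold:
        break

verdict = "accept H1" if bayes_factor > bf_threshold else "accept H0"
print(f"{verdict} after {trials} samples (Bayes factor = {bayes_factor:.1f})")
```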
We also present algorithms for synthesis of chemical kinetic parameters of stochastic biochemical models from high level behavioral specifications. We consider the setting where a modeler knows facts that must hold on the stochastic model but is not confident about some of the kinetic parameters in her model. We suggest algorithms for discovering these kinetic parameters from facts stated in appropriate formal probabilistic specification languages. Our algorithms are based on our theoretical results characterizing the probability of a specification being true on a stochastic biochemical model. We have applied this algorithm to discover kinetic parameters for biochemical models with as many as six unknown parameters.
|
78 |
Stochastic Modelling of Random Variables with an Application in Financial Risk Management.
Moldovan, Max January 2003 (has links)
The problem of determining whether or not a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. The following study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density that represents this distribution is derived, characterised and applied for validation of the financial model.
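For intuition, model validation by hypothesis testing can be sketched with a two-sample Kolmogorov-Smirnov test between empirical returns and samples simulated from a candidate model; the data and distributions here are hypothetical and unrelated to the density derived in the thesis.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(11)

# Hypothetical "empirical" returns conditional on a realised-volatility level,
# and samples simulated from a candidate theoretical model
empirical = rng.standard_t(df=5, size=2000) * 0.01
model_samples = rng.normal(loc=0.0, scale=0.012, size=2000)

# Two-sample Kolmogorov-Smirnov test: a small p-value rejects the model's
# distribution as a representation of the data
stat, p_value = ks_2samp(empirical, model_samples)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4g}")
print("model rejected" if p_value < 0.05 else "model not rejected")
```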
|
79 |
An assessment of recent changes in catchment sediment sources and sinks, central Queensland, Australia
Hughes, Andrew Owen, Physical, Environmental & Mathematical Sciences, Australian Defence Force Academy, UNSW January 2009 (has links)
Spatial and temporal information on catchment sediment sources and sinks can provide an improved understanding of catchment response to human-induced disturbances. This is essential for the implementation of well-targeted catchment-management decisions. This thesis investigates the nature and timing of catchment response to human activities by examining changes in sediment sources and sinks in a dry-tropical subcatchment of the Great Barrier Reef (GBR) catchment area, in northeastern Australia. Changes in catchment sediment sources, both in terms of spatial provenance and erosion type, are determined using sediment tracing techniques. Results indicate that changes in sediment source contributions over the last 250 years can be linked directly to changes in catchment land use. Sheetwash and rill erosion from cultivated land (40-60%) and channel erosion from grazed areas (30-80%) currently contribute most sediment to the river system. Channel erosion, on a basin-wide scale, appears to be more important than previously considered in this region of Australia. Optically stimulated luminescence and 137Cs dating are used to determine pre- and post-European settlement (ca. 1850) alluvial sedimentation rates. The limitations of using 137Cs as a floodplain sediment dating tool in a low-fallout environment, dominated by sediment derived from channel and cultivation sources, are identified. Low-magnitude increases in post-disturbance floodplain sedimentation rates (3 to 4 times) are attributed to the naturally high sediment loads in the dry tropics. These low increases suggest that previous predictions which reflect order-of-magnitude increases in post-disturbance sediment yields are likely to be overestimates. In-channel bench deposits, formed since European settlement, are common features that appear to be important stores of recently eroded material. The spatially distributed erosion/sediment-yield model SedNet is applied, both with generic input parameters and with locally derived data, and outputs are evaluated against available empirically-derived data. The results suggest that previous model estimates using generic input parameters overestimate post-disturbance and underestimate pre-disturbance sediment yields, exaggerating the impact of European catchment disturbance. This is likely to have important implications for both local-scale and catchment-wide management scenarios in the GBR region. Suggestions are made for future study and for the collection of important empirical data to enable more accurate model performance.
|
80 |
Modeling and Simulation of Brake Squeal in Disc Brake Assembly / Modellering och simulering av bromsskrik i skivbromsar
Nilman, Jenny January 2018 (has links)
Brake squeal is an old and well-known problem in the vehicle industry and a frequent source of customer complaints. Although brake squeal does not usually affect the performance of the brakes, it is still important to address the problem and to predict the brakes' tendency to squeal at an early stage in the design process. Brake squeal is usually defined as a sustained, high-frequency vibration of the brake components due to the braking action. Using finite element (FE) simulations, it should be possible to predict at what frequencies the brakes tend to emit sound. The method chosen for the analysis was complex eigenvalue analysis (CEA), since it is a well-known tool for predicting unstable modes in FE analysis. The results from the CEA were evaluated against measured data from an earlier study. Even though four main mechanisms have been formulated to explain the onset of squeal, the main focus in this project was modal coupling, since it is the central mechanism in the CEA. A validation of the key components in the model was performed before the analysis, in order to achieve better correlation between the FE model and reality. A parametric study was conducted with the CEA to investigate how material properties and operating parameters affected the brakes' tendency to squeal. The following parameters were included in the analysis: coefficient of friction, brake force, damping, rotational velocity, and Young's modulus of different components. The results from the CEA did not exactly reproduce the noise frequencies captured in the experimental tests; the discrepancy is believed to be mainly due to problems in the calibration process of the components in the model. The results did, however, show that the most effective way to reduce the brakes' tendency to squeal was to lower the coefficient of friction, while varying the Young's modulus of different components showed inconsistent effects on the tendency to squeal. By adding damping, one of the main disadvantages of the CEA, the over-prediction of the number of unstable modes, was minimized.
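The core of the CEA can be illustrated on a toy two-degree-of-freedom system in which friction makes the stiffness matrix non-symmetric, so that eigenvalues of the state matrix can acquire positive real parts (unstable, squeal-prone modes). All matrices and numbers below are hypothetical, not the calibrated brake model of the thesis.

```python
import numpy as np

def cea_eigenvalues(mu, m=1.0, c=0.1, k=1.0, kc=0.6):
    """Complex eigenvalues of a 2-DOF system with friction coupling mu."""
    M = np.diag([m, m])
    C = np.diag([c, c])
    K = np.array([[k, mu * kc],
                  [-mu * kc, k]])    # friction makes K non-symmetric
    # First-order (state-space) form z' = A z with z = [x, x']
    A = np.block([
        [np.zeros((2, 2)), np.eye(2)],
        [-np.linalg.solve(M, K), -np.linalg.solve(M, C)],
    ])
    return np.linalg.eigvals(A)

# Positive real parts indicate unstable (squeal-prone) modes; lowering the
# coefficient of friction reduces or removes them
for mu in (0.1, 0.3, 0.5):
    eig = cea_eigenvalues(mu)
    print(f"mu = {mu}: max Re(lambda) = {eig.real.max():+.4f}, "
          f"{int(np.sum(eig.real > 0))} unstable eigenvalue(s)")
```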
|