91 |
Enabling Digital Twinning via Information-Theoretic Machine Learning-Based Inference IntelligenceJeongwon Seo (8458191) 30 November 2023 (has links)
<p dir="ltr">Nuclear energy, renowned for its clean, carbon-free attributes and cost-effectiveness, stands as a pivotal pillar in the global quest for sustainable energy sources. Additionally, nuclear power, being a spatially concentrated industry, offers an unparalleled energy density compared to other energy sources. Despite these advantages, a nuclear power plant (NPP) that is not operated safely can suffer long-term shutdowns, expose workers to radiation, contaminate surrounding areas, or even cause a national-scale disaster, as witnessed in the Chernobyl accident of 1986. Ensuring the safe operation of nuclear reactors is therefore paramount, and given the intricate tradeoff between safety and economics, economic considerations are often sacrificed in favor of safety.</p><p dir="ltr">In this context, it becomes crucial to develop technologies that ensure NPPs’ safety while optimizing their operational efficiency, thereby minimizing the sacrifice of economic benefits. In response to this critical need, scientists introduced the term “digital twin (DT)”, derived from the concept of product lifecycle management. In its original formulation, the DT model comprises the physical product, its digital representation, data flowing from the physical to the digital twin, and information flowing from the digital to the physical twin. Various nuclear stakeholders, such as reactor designers, researchers, operators, and regulators, are pursuing DT technologies, which are expected to enable NPPs to be monitored and operated/controlled in an automated and reliable manner. DT is being actively sought given its wide potential, including increased operational effectiveness, enhanced safety and reliability, and uncertainty reduction.</p><p dir="ltr">While a number of technical challenges must be overcome to successfully implement DT technology, this Ph.D. work limits its focus to one of the DT’s top challenges, model validation, which ensures that model predictions can be trusted for a given application, e.g., the domain envisaged for code usage. Model validation is also a key regulatory requirement in support of the various developmental stages from conceptual design to deployment, licensing, operation, and safety. For a given model to be validated, the regulatory process requires the consolidation of two independent sources of knowledge: measurements collected under experimental conditions, and code predictions that model the same experimental conditions.</p><p dir="ltr">This consolidation must bridge the experimental and computational domains in an optimal manner, considering the characteristics of predictor and target responses. Successful model validation necessitates a complete data analytics pipeline, generally including data preprocessing, data analysis (model training), and result interpretation. Therefore, this Ph.D. work begins by revisiting fundamental concepts such as uncertainty classification, sensitivity analysis (SA), similarity/representativity metrics, and outlier rejection techniques, which serve as robust cornerstones of validation analysis.</p><p dir="ltr">The ultimate goal of this Ph.D. work is to develop an intelligent inference framework that infers/predicts given responses while adaptively handling various levels of data complexity, e.g., residual shape, nonlinearity, and heteroscedasticity. These studies are expected to significantly advance DT technology, enabling support for various levels of operational autonomy in both existing and first-of-a-kind reactor designs. This extends to critical aspects such as nuclear criticality safety, nuclear fuel depletion dynamics, spent nuclear fuel (SNF) analysis, and the introduction of new fuel designs, such as high burnup fuel and high-assay low-enriched uranium (HALEU) fuel. 
These advancements are crucial in scenarios where constructing new experiments is costly, time-consuming, or infeasible, as with new reactor systems or high-consequence events like criticality accidents.</p>
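One of the similarity/representativity metrics revisited in this work can be sketched concretely. A classical representativity factor between an experiment e and a target application a, assuming shared uncertain parameters with covariance matrix C_alpha and sensitivity vectors S_e and S_a (this specific form is a common convention in nuclear validation, not necessarily the exact metric used in the thesis), is:

```latex
r_{ea} = \frac{S_e^{\top} C_{\alpha}\, S_a}
              {\sqrt{\left(S_e^{\top} C_{\alpha}\, S_e\right)\left(S_a^{\top} C_{\alpha}\, S_a\right)}}
```

A value of r_ea close to 1 indicates that the experiment's prediction uncertainty is strongly correlated with the application's, i.e., the experiment is representative of the application domain.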
|
92 |
PVactVal: A Validation Approach for Agent-based Modeling of Residential Photovoltaic AdoptionJohanning, Simon, Abitz, Daniel, Schulte, Emily, Scheller, Fabian, Bruckner, Thomas 19 October 2023 (has links)
Agent-based simulation models are an important tool for studying the effectiveness of policy interventions on the uptake of residential photovoltaic systems by households, a cornerstone of the sustainable energy system transition. In order for these models to be trustworthy, they require rigorous validation. However, the canonical approach of validating simulation models through calibration, choosing parameters that minimize the difference between model results and reference data, fails when the model is subject to many stochastic influences. The residential photovoltaic diffusion model PVact features numerous stochastic influences that prevent straightforward optimization-driven calibration. From the analysis of the results of a case study of the cities of Dresden and Leipzig (Germany), based on three error metrics (mean average error, root mean square error, and cumulative average error), this research identifies a parameter range where stochastic fluctuations exceed the differences between the results of different parameterizations, so that a minimization-based calibration approach fails. Based on this observation, an approach is developed that aggregates model behavior across multiple simulation runs and parameter combinations to compare results between scenarios representing different future developments or policy interventions of interest.
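As a rough sketch of how the three error metrics and the run-aggregation step could look in code (the metric implementations, in particular the definition of the cumulative average error, are assumptions for illustration and are not taken from the PVact codebase):

```python
import numpy as np

def error_metrics(model_series, reference_series):
    """Per-run error metrics between simulated and observed adoption counts."""
    diff = np.asarray(model_series, float) - np.asarray(reference_series, float)
    mae = np.mean(np.abs(diff))                        # mean average error
    rmse = np.sqrt(np.mean(diff ** 2))                 # root mean square error
    # Assumed definition: mean absolute error of the running average of residuals.
    cae = np.mean(np.abs(np.cumsum(diff) / np.arange(1, diff.size + 1)))
    return mae, rmse, cae

def aggregate_runs(runs, reference_series):
    """Aggregate metrics over repeated stochastic runs of one parameterization."""
    metrics = np.array([error_metrics(run, reference_series) for run in runs])
    return metrics.mean(axis=0), metrics.std(axis=0)

# Illustrative data: yearly cumulative PV adoptions and 20 noisy stochastic runs.
reference = np.array([5.0, 9.0, 14.0, 22.0, 31.0])
runs = [reference + np.random.default_rng(s).normal(0.0, 2.0, 5) for s in range(20)]
mean_err, spread = aggregate_runs(runs, reference)
```

Comparing mean_err against spread across parameterizations mirrors the paper's observation: where the run-to-run spread dominates, minimization-based calibration cannot distinguish parameter settings.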
|
93 |
Modeling and Control of a Hybrid-Electric Vehicle for Drivability and Fuel Economy ImprovementsKoprubasi, Kerem 16 September 2008 (has links)
No description available.
|
94 |
Data-Fitted Generic Second Order Macroscopic Traffic Flow ModelsFan, Shimao January 2013 (has links)
The Aw-Rascle-Zhang (ARZ) model has become a favored "second order" macroscopic traffic model, correcting several shortcomings of the Payne-Whitham (PW) model. The ARZ model possesses a family of flow rate versus density (FD) curves, rather than the single curve of the "first order" Lighthill-Whitham-Richards (LWR) model. This is more realistic, especially in the congested traffic regime, where historic fundamental diagram data points are observed to be set-valued. However, the ARZ model also has obvious shortcomings; e.g., it admits multiple maximum traffic densities, whereas the maximum density should be a property of the road. Instead, we propose a Generalized ARZ (GARZ) model under a generic framework of "second order" macroscopic models to overcome the drawbacks of the ARZ model. A systematic approach is presented to design generic "second order" models from historic data; e.g., we construct a family of flow rate curves by fitting to data. Based on the GARZ model, we then propose a phase-transition-like model that allows the flow rate curves to coincide in the free-flow regime. The resulting model is called the Collapsed GARZ (CGARZ) model. The CGARZ model keeps the flavor of phase-transition models in the sense that it assumes a single FD function in the free-flow phase; note, however, that there is no real phase transition in the CGARZ model. To investigate to what extent the new generic "second order" models (GARZ, CGARZ) improve the prediction accuracy of macroscopic models, we compare the proposed models with two types of LWR models and their "second order" generalizations, given by the ARZ model, via a three-detector problem test. In this test framework, the initial and boundary conditions are derived from real traffic data. 
In terms of using historic traffic data, a statistical technique, kernel density estimation, is applied to obtain density and velocity distributions from trajectory data, and cubic interpolation is employed to formulate boundary conditions from single-loop sensor data. Moreover, a relaxation term is added to the momentum equation of selected "second order" models to address further unrealistic aspects of homogeneous models. Using these inhomogeneous "second order" models, we study which choices of the relaxation term τ are realistic. / Mathematics
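The two data-processing steps named here, kernel density estimation on trajectory data and cubic interpolation of single-loop sensor data, can be sketched as follows; the sample values are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.interpolate import CubicSpline

# Hypothetical trajectory sample: vehicle velocities (m/s) seen in one time bin.
velocities = np.array([8.1, 9.4, 7.7, 10.2, 8.8, 9.9, 7.5, 9.1])
kde = gaussian_kde(velocities)            # smooth velocity distribution
v_grid = np.linspace(5.0, 13.0, 200)
density = kde(v_grid)                     # estimated pdf on the grid

# Hypothetical single-loop counts: sparse flow-rate samples every 30 s; a cubic
# spline turns them into a smooth boundary condition q(t) for the PDE solver.
t_sensor = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
q_sensor = np.array([0.42, 0.55, 0.61, 0.50, 0.44])   # veh/s, illustrative
q_boundary = CubicSpline(t_sensor, q_sensor)
q_mid = float(q_boundary(45.0))           # interpolated inflow at t = 45 s
```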
|
95 |
Direct Assessment and Investigation of Nonlinear and Nonlocal Turbulent Constitutive Relations in Three-Dimensional Boundary Layer FlowGargiulo, Aldo 12 July 2023 (has links)
Three-dimensional (3D) turbulent boundary layers (TBLs) play a crucial role in determining the aerodynamic properties of most aero-mechanical devices. However, accurately predicting these flows remains a challenge due to the complex nonlinear and nonlocal physics involved, which makes it difficult to develop universally applicable models. This limitation is particularly significant as industry increasingly relies on simulations to make decisions in high-consequence environments, such as the certification of aircraft, while high-fidelity simulation methods that do not rely on modeling are prohibitively expensive. To address this challenge, it is essential to gain a better understanding of the physics underlying 3D TBLs. This research aims to improve the predictive accuracy of turbulence models in 3D TBLs by examining the impact of the model assumptions underpinning turbulent constitutive relations, which are fundamental building blocks of every turbulence model. Specifically, the study focuses on the relevance and necessity of nonlinear and nonlocal model assumptions for accurately predicting 3D TBLs. The study considers the attached 3D boundary layer flow over the Benchmark Validation Experiment for RANS/LES Investigations (BeVERLI) Hill as a test case, together with corresponding particle image velocimetry data. In a first step, the BeVERLI Hill experiment is comprehensively described and the important characteristics of the flow over the hill are elucidated, including its complex symmetry-breaking behavior. Reynolds-averaged Navier-Stokes simulations of the case using standard eddy viscosity models are then presented to establish the baseline behavior of local and linear constitutive relations, i.e., the standard Boussinesq approximation. The tested eddy viscosity models fail in the highly accelerated hilltop region of the BeVERLI Hill and near separation. 
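The "standard Boussinesq approximation" referenced here is the linear, local constitutive relation that the tested eddy viscosity models share; in index notation it reads:

```latex
-\overline{u_i' u_j'} = \nu_t \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right) - \frac{2}{3}\, k\, \delta_{ij}
```

where ν_t is the scalar eddy viscosity, k the turbulent kinetic energy, and δ_ij the Kronecker delta. Nonlinear relations such as QCR augment the right-hand side with terms quadratic in the mean velocity gradient.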
In a further step, several nonlinear and nonlocal turbulent constitutive relations, including the QCR model, the model by Gatski and Speziale, and the difference-quotient model by Egolf, are used to gauge the impact of nonlinearities and nonlocalities in the modeling of 3D TBLs. It is shown that nonlinear and nonlocal approaches are essential for effective 3D TBL modeling; however, simplified reduced-order models may predict 3D TBLs accurately without high computational cost. A constitutive relation with local second-order nonlinear mean-strain terms and simplified nonlocal terms may provide such a minimal model. In a final step, the structure and response of non-equilibrium turbulence to continuous straining are studied to reveal new scaling laws and structural models. / Doctor of Philosophy / Airplanes and other flying objects rely on the way air flows around them to generate lift and stay in the sky. This airflow can be very complex, especially close to the surface of the object, where it is affected by friction with the object. This friction generates a layer of air called a boundary layer, which can become turbulent and lead to complex patterns of airflow. The boundary layer arises because friction between the air and the surface of the object causes the air molecules to "stick" to the surface, creating a layer of slow-moving air that slows the flow of air around the object. This loss of momentum creates drag, one of the main factors resisting the motion of objects through the air. The slowing of the airflow in the boundary layer is due to the viscosity of the air, a measure of how resistant the air is to deformation: the molecules in the air tend to stick together, making it difficult for them to move past each other. This resistance causes the momentum of the air to be lost as it flows over the surface of the object, because air molecules close to the surface "pull" on the ones farther away. Understanding how turbulent boundary layers (TBLs) work is essential to accurately predict the airflow around these objects using computer simulations. However, this is challenging because TBLs involve complex physics that are difficult to model accurately. This research focuses on a specific type of TBL, the three-dimensional (3D) TBL, and examines how different modeling assumptions affect the accuracy of computer simulations that predict this type of airflow. It is found that more complex models accounting for nonlinear and nonlocal physics can predict 3D TBLs more accurately; however, these models are computationally expensive, and simpler models can work well enough at lower cost. This research further establishes important physical relations of the mechanisms pertaining to 3D TBLs that could support the advancement of current models.
|
96 |
Um método de refinamento para desenvolvimento de software embarcado: uma abordagem baseada em UML-RT e especificações formais. / A refinement method for embedded software development: an approach based on UML-RT and formal specifications.Polido, Marcelo Figueiredo 18 May 2007 (has links)
Neste trabalho é apresentado um método de refinamento para especificações de sistemas embarcados, baseado na linguagem de especificação gráfica UML-RT e na linguagem de especificação formal CSP-OZ. A linguagem UML-RT é utilizada para descrever a arquitetura de sistemas de tempo real distribuídos e esses mapeados para uma especificação formal através de CSP-OZ. A linguagem de especificação formal CSP-OZ é a combinação da linguagem orientada a objetos Object-Z e a álgebra de processos CSP, que descreve o comportamento de processos concorrentes. O método de refinamento proposto é baseado na integração de dois métodos: o de bi-simulação, para refinar a parte comportamental da especificação descrita por CSP; e o de equivalência de especificações, para refinar as estruturas de dados descritas por Object-Z, permitindo assim que características de orientação a objetos possam ser utilizadas. Com o método proposto é possível refinar especificações e, conseqüentemente, verificá-las com sua implementação. O desenvolvimento desse método é rigoroso, incluindo a definição formal para um metamodelo da UML-RT. Um exemplo detalhado é apresentado no final deste trabalho. / In this work, a method for refining embedded systems specifications is introduced, based on the graphical specification language UML-RT and the formal specification language CSP-OZ. UML-RT is used to model real-time distributed architecture systems, which are then mapped onto formal specifications using CSP-OZ. The CSP-OZ formal specification language is a combination of the state-based object-oriented language Object-Z and the CSP process algebra, which describes behavioral models of concurrent processes. The rationale of the proposed refinement method is twofold: the use of bisimulation to refine the behavioral part, and the specification-matching algorithm to refine the state-based part, supporting object-oriented characteristics. 
Building on this, an equivalence between the specification-matching algorithm and simulation rules is shown. Using the proposed method it is possible to refine CSP-OZ specifications and verify them against their implementations. The development of the proposed refinement method is rigorous, including a formal definition of a UML-RT metamodel. A detailed case study is given at the end of this work.
|
97 |
A behavioral ecology of fishermen : hidden stories from trajectory data in the Northern Humboldt Current System / Une écologie du comportement des pêcheurs : histoires cachées à partir des données de trajectoires dans le système de Courant de HumboldtJoo Arakawa, Rocío 19 December 2013 (has links)
Ce travail propose une contribution originale à la compréhension du comportement spatial des pêcheurs, basée sur les paradigmes de l'écologie comportementale et de l'écologie du mouvement. En s'appuyant sur des données du 'Vessel Monitoring System', nous étudions le comportement des pêcheurs d'anchois du Pérou à des échelles différentes: (1) les modes comportementaux au sein des voyages de pêche (i.e. recherche, pêche et trajet), (2) les patrons comportementaux parmi les voyages de pêche, (3) les patrons comportementaux par saison de pêche conditionnés par des scénarios écosystémiques et (4) les patrons spatiaux des positions de modes comportementaux, que nous utilisons pour la création de cartes de probabilité de présence d'anchois. Pour la première échelle, nous comparons plusieurs modèles Markoviens (modèles de Markov et semi-Markov cachés) et discriminatifs (forêts aléatoires, machines à vecteurs de support et réseaux de neurones artificiels) pour inférer les modes comportementaux associés aux trajectoires VMS. L'utilisation d'un ensemble de données pour lesquelles les modes comportementaux sont connus (grâce aux données collectées par des observateurs embarqués), nous permet d'entraîner les modèles dans un cadre supervisé et de les valider. Les modèles de semi-Markov cachés sont les plus performants, et sont retenus pour inférer les modes comportementaux sur l'ensemble de données VMS. Pour la deuxième échelle, nous caractérisons chaque voyage de pêche par plusieurs descripteurs, y compris le temps passé dans chaque mode comportemental. En utilisant une analyse de classification hiérarchique, les patrons des voyages de pêche sont classés en groupes associés à des zones de gestion, aux segments de la flottille et aux personnalités des capitaines. Pour la troisième échelle, nous analysons comment les conditions écologiques donnent forme au comportement des pêcheurs à l'échelle d'une saison de pêche. 
Via des analyses de co-inertie, nous trouvons des associations significatives entre les dynamiques spatiales des pêcheurs, des anchois et de l'environnement, et nous caractérisons la réponse comportementale des pêcheurs selon des scénarios environnementaux contrastés. Pour la quatrième échelle, nous étudions si le comportement spatial des pêcheurs reflète dans une certaine mesure la répartition spatiale de l'anchois. Nous construisons un indicateur de la présence d'anchois à l'aide des modes comportementaux géo-référencés inférés à partir des données VMS. Ce travail propose enfin une vision plus large du comportement de pêcheurs: les pêcheurs ne sont pas seulement des agents économiques, ils sont aussi des fourrageurs, conditionnés par la variabilité dans l'écosystème. Pour conclure, nous discutons de la façon dont ces résultats peuvent avoir de l'importance pour la gestion de la pêche, des analyses de comportement collectif et des modèles end-to-end. / This work proposes an original contribution to the understanding of fishermen spatial behavior, based on the behavioral ecology and movement ecology paradigms. Through the analysis of Vessel Monitoring System (VMS) data, we characterized the spatial behavior of Peruvian anchovy fishermen at different scales: (1) the behavioral modes within fishing trips (i.e., searching, fishing and cruising); (2) the behavioral patterns among fishing trips; (3) the behavioral patterns by fishing season conditioned by ecosystem scenarios; and (4) the computation of maps of anchovy presence proxy from the spatial patterns of behavioral mode positions. At the first scale considered, we compared several Markovian (hidden Markov and semi-Markov models) and discriminative models (random forests, support vector machines and artificial neural networks) for inferring the behavioral modes associated with VMS tracks. 
The models were trained under a supervised setting and validated using tracks for which behavioral modes were known (from on-board observer records). Hidden semi-Markov models performed best and were retained for inferring the behavioral modes on the entire VMS dataset. At the second scale considered, each fishing trip was characterized by several features, including the time spent within each behavioral mode. Using a clustering analysis, fishing trip patterns were classified into groups associated with management zones, fleet segments and skippers' personalities. At the third scale considered, we analyzed how ecological conditions shaped fishermen behavior. By means of co-inertia analyses, we found significant associations between fishermen, anchovy and environmental spatial dynamics, and fishermen behavioral responses were characterized according to contrasting environmental scenarios. At the fourth scale considered, we investigated whether the spatial behavior of fishermen reflected to some extent the spatial distribution of anchovy. Finally, this work provides a wider view of fishermen behavior: fishermen are not only economic agents but also foragers, constrained by ecosystem variability. To conclude, we discuss how these findings may be of importance for fisheries management, collective behavior analyses and end-to-end models.
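As a sketch of the discriminative side of this model comparison, a random forest on simple per-position track features could look as follows (the features, labels and data are synthetic stand-ins, and recall that the hidden semi-Markov models, not this baseline, performed best in the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic per-position VMS features: speed (knots) and turning angle (rad),
# with labels 0 = cruising, 1 = searching, 2 = fishing (as from observers).
speed = np.r_[rng.normal(10, 1, 100), rng.normal(4, 1, 100), rng.normal(1, 0.5, 100)]
turn = np.r_[rng.normal(0, 0.1, 100), rng.normal(0, 0.8, 100), rng.normal(0, 1.2, 100)]
X = np.column_stack([speed, turn])
y = np.repeat([0, 1, 2], 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # supervised validation, as in the study
```

Unlike the Markovian models, this baseline classifies each position independently and ignores the temporal structure of the trip, which is one reason semi-Markov models can outperform it on real tracks.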
|
99 |
Covariate Model Building in Nonlinear Mixed Effects ModelsRibbing, Jakob January 2007 (has links)
<p>Population pharmacokinetic-pharmacodynamic (PK-PD) models can be fitted using nonlinear mixed effects modelling (NONMEM). This is an efficient way of learning about drugs and diseases from data collected in clinical trials. Identifying covariates that explain differences between patients is important for discovering patient subpopulations at risk of sub-therapeutic or toxic effects and for treatment individualization. Stepwise covariate modelling (SCM) is commonly used to this end. The aim of this thesis work was to evaluate SCM and to develop alternative approaches. A further aim was to develop a mechanistic PK-PD model describing fasting plasma glucose, fasting insulin, insulin sensitivity and beta-cell mass.</p><p>The lasso is a penalized estimation method that performs covariate selection simultaneously with shrinkage estimation. The lasso was implemented within NONMEM as an alternative to SCM and is discussed in comparison with that method. Further, various ways of incorporating information and propagating knowledge from previous studies into an analysis were investigated. To compare the different approaches, investigations were made under varying, replicated conditions; in their course, more than one million NONMEM analyses were performed on simulated data. Due to selection bias, SCM performed poorly when analysing small datasets or rare subgroups. In these situations, the lasso method in NONMEM performed better, was faster, and additionally validated the covariate model. Alternatively, the performance of SCM can be improved by propagating knowledge or incorporating information from previously analysed studies and by population optimal design.</p><p>A model was also developed on a physiological/mechanistic basis to fit data from three phase II/III studies of the investigational drug tesaglitazar. This model described fasting glucose and insulin levels well, despite heterogeneous patient groups ranging from non-diabetic insulin-resistant subjects to patients with advanced diabetes. The model predictions of beta-cell mass and insulin sensitivity were in good agreement with values in the literature.</p>
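The lasso approach described above, covariate selection performed jointly with shrinkage, can be illustrated outside NONMEM with an ordinary linear lasso; the covariates and the simulated clearance relation below are invented for illustration and are not the thesis data:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 60                                    # small dataset, where stepwise selection is bias-prone
weight = rng.normal(70, 12, n)
age = rng.normal(55, 10, n)
crcl = rng.normal(90, 20, n)              # creatinine clearance, an illustrative covariate
X = StandardScaler().fit_transform(np.column_stack([weight, age, crcl]))
# Simulated truth: log clearance depends on weight only; age and crcl are noise.
log_cl = 0.3 * X[:, 0] + rng.normal(0.0, 0.1, n)

model = Lasso(alpha=0.05).fit(X, log_cl)  # L1 penalty shrinks spurious effects to zero
selected = [name for name, c in zip(["WT", "AGE", "CRCL"], model.coef_) if c != 0.0]
```

Unlike stepwise selection, the penalty both selects and shrinks in a single fit, which is what counters the selection bias described above when datasets are small.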
|