1 |
Predictive uncertainty in infrared marker-based dynamic tumor tracking with Vero4DRT / Vero4DRTを用いた赤外線反射マーカーに基づく動体追尾照射の予測誤差
Akimoto, Mami 23 March 2015 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Degree no. Kō 18867 / Med. no. 3978 / Shinsei||I||1008 (University Library) / 31818 / Kyoto University Graduate School of Medicine, Division of Medical Science / Examining committee: Prof. Minoru Suzuki, Prof. Tomohiro Kuroda, Prof. Kaori Togashi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
|
2 |
CountNet3D: A 3D Computer Vision Approach to Infer Counts of Occluded Objects with Quantified Uncertainty
Nelson, Stephen W. 30 August 2023 (has links) (PDF)
3D scene understanding is an important problem that has seen great progress in recent years, largely due to the development of state-of-the-art methods for 3D object detection. However, the performance of 3D object detectors can suffer in scenarios with extreme occlusion of objects or a large number of object classes. In this paper, we study the problem of inferring 3D counts from densely packed scenes with heterogeneous objects. This problem has applications to important tasks such as inventory management and automatic crop yield estimation. We propose a novel regression-based method, CountNet3D, that uses mature 2D object detectors for fine-grained classification and localization, and a PointNet backbone for geometric embedding. The network processes fused data from images and point clouds for end-to-end learning of counts. We perform experiments on a novel synthetic dataset for inventory management in retail, which we construct and make publicly available to the community, as well as on a proprietary dataset of real-world scenes that we collected. In addition, we run experiments to quantify the uncertainty of the models and evaluate the confidence of our predictions. Our results show that regression-based 3D counting methods systematically outperform detection-based methods, and reveal that learning directly from raw point clouds greatly assists count estimation under extreme occlusion.
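The abstract's core idea, regressing a total count from fused image and point-cloud features rather than summing per-object detections, can be illustrated with a minimal sketch. Everything here (function names, feature shapes, the linear head) is hypothetical and not CountNet3D's actual architecture:

```python
def fuse_and_count(detector_scores, point_features, weights, bias):
    """Toy count regressor: concatenate 2D-detector class scores with a
    mean-pooled point-cloud embedding, then apply a linear head whose
    output is clamped to the non-negative counts domain."""
    n_points = len(point_features)
    # Mean-pool the per-point geometric features (stand-in for a PointNet backbone).
    pooled = [sum(col) / n_points for col in zip(*point_features)]
    fused = list(detector_scores) + pooled
    raw = sum(w * x for w, x in zip(weights, fused)) + bias
    return max(0.0, raw)  # a count cannot be negative

# Hypothetical inputs: 2 class scores, 2 points with 2-D features each.
count = fuse_and_count([1.0, 2.0], [[0.0, 2.0], [2.0, 0.0]], [1.0, 1.0, 1.0, 1.0], 0.0)
```

In the real method the pooling and head are learned end-to-end; the point of the sketch is only that occluded objects still contribute point-cloud evidence to the regressed count even when no individual detection fires.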
|
3 |
Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device
Freeman, Jacob Andrew 07 September 2012 (has links)
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study.
To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°.
The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base-flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post-processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, an evolutionary algorithm (EA) and dividing rectangles (DIRECT), along with twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods require fewer function evaluations than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA.
Because the S-A model performs well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study, which includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to the use of a relatively coarse (six-million-cell) grid. That is, the best design's drag coefficient estimate is within +15% and -42% of the true value; however, its improvement relative to the no-flaps baseline is accurate to within 3-9% uncertainty. / Ph. D.
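The wind-averaged treatment of uncertain speed and direction described above can be sketched as a simple Monte Carlo propagation. The drag model below is a hypothetical stand-in for the study's CFD/surrogate evaluations, and the sampling distributions are illustrative, not those used in the dissertation:

```python
import random
import statistics

def wind_averaged_drag(cd_model, speed_mean, speed_sd, n=10000, seed=0):
    """Propagate uncertain wind speed (Gaussian) and direction (uniform yaw)
    through a drag model by Monte Carlo sampling; returns the mean drag
    coefficient and its sample standard deviation."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        speed = rng.gauss(speed_mean, speed_sd)
        yaw = rng.uniform(0.0, 360.0)  # wind direction treated as aleatory
        samples.append(cd_model(speed, yaw))
    return statistics.fmean(samples), statistics.stdev(samples)
```

A surrogate fitted to RANS results would replace `cd_model`; the spread of the samples is one ingredient of a total-predictive-uncertainty estimate, alongside numerical and model-form error.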
|
4 |
Neural Network Approximations to Solution Operators for Partial Differential Equations
Nickolas D Winovich (11192079) 28 July 2021 (has links)
<div>In this work, we introduce a framework for constructing light-weight neural network approximations to the solution operators for partial differential equations (PDEs). Using a data-driven offline training procedure, the resulting operator network models are able to effectively reduce the computational demands of traditional numerical methods into a single forward-pass of a neural network. Importantly, the network models can be calibrated to specific distributions of input data in order to reflect properties of real-world data encountered in practice. The networks thus provide specialized solvers tailored to specific use-cases, and while being more restrictive in scope when compared to more generally-applicable numerical methods (e.g. procedures valid for entire function spaces), the operator networks are capable of producing approximations significantly faster as a result of their specialization.</div><div><br></div><div>In addition, the network architectures are designed to place pointwise posterior distributions over the observed solutions; this setup facilitates simultaneous training and uncertainty quantification for the network solutions, allowing the models to provide pointwise uncertainties along with their predictions. An analysis of the predictive uncertainties is presented with experimental evidence establishing the validity of the uncertainty quantification schema for a collection of linear and nonlinear PDE systems. The reliability of the uncertainty estimates is also validated in the context of both in-distribution and out-of-distribution test data.</div><div><br></div><div>The proposed neural network training procedure is assessed using a novel convolutional encoder-decoder model, ConvPDE-UQ, in addition to an existing fully-connected approach, DeepONet. 
The convolutional framework is shown to provide accurate approximations to PDE solutions on varying domains, but is restricted by assumptions of uniform observation data and homogeneous boundary conditions. The fully-connected DeepONet framework provides a method for handling unstructured observation data and is also shown to provide accurate approximations for PDE systems with inhomogeneous boundary conditions; however, the resulting networks are constrained to a fixed domain due to the unstructured nature of the observation data which they accommodate. These two approaches thus provide complementary frameworks for constructing PDE-based operator networks which facilitate the real-time approximation of solutions to PDE systems for a broad range of target applications.</div>
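The pointwise posterior setup described above amounts to training the network to output a mean and variance at each point and minimizing the Gaussian negative log-likelihood. A minimal sketch of that loss follows; the dissertation's exact parameterization may differ, e.g. in how positivity of the variance is enforced:

```python
import math

def gaussian_nll(y_true, mean, var):
    """Pointwise negative log-likelihood of y_true under Normal(mean, var).
    Minimizing this jointly trains the solution estimate (mean) and the
    pointwise uncertainty (var): over-confident small variances are
    penalized by the squared-error term, inflated ones by the log term."""
    return 0.5 * (math.log(2.0 * math.pi * var) + (y_true - mean) ** 2 / var)
```

Summed over grid points, this loss lets a single forward pass return both a PDE solution approximation and a per-point error bar, which is what makes the in-distribution and out-of-distribution reliability checks in the abstract possible.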
|
5 |
Exploring the use of conceptual catchment models in assessing irrigation water availability for grape growing in the semi-arid Andes / Apport des modèles hydrologiques conceptuels à l’estimation de la disponibilité en eau pour l’irrigation de la vigne dans les Andes semi-arides
Hublart, Paul 30 November 2015 (links)
This thesis investigates the use of lumped catchment models to assess water availability for irrigation in the upland areas of northern-central Chile (30°S). Here, most of the annual water supply falls as snow in the high Cordillera during a few winter storms. Seasonal snowpacks serve as natural reservoirs, accumulating water during the winter and sustaining streams and aquifers during the summer, when irrigation demand in the cultivated valleys is at its peak. At the inter-annual timescale, the influence of the ENSO and PDO phenomena results in the occurrence of extremely wet and dry years.
In addition, irrigated areas and grape growing have expanded dramatically since the early 1980s. To evaluate the usefulness of explicitly accounting for changes in irrigation water use in lumped catchment models, an integrated modeling framework was developed and different ways of quantifying/reducing model uncertainty were explored. Natural streamflow was simulated using an empirical hydrological model and a snowmelt routine. In parallel, seasonal and inter-annual variations in irrigation requirements were estimated using several process-based phenological models and a simple soil-water balance approach. Overall, this resulted in a low-dimensional, holistic approach based on the same level of mathematical abstraction and process representation as in most commonly used catchment models. To improve model reliability and usefulness under varying or changing climate conditions, particular attention was paid to the effects of extreme temperatures on crop phenology and the contribution of sublimation losses to the water balance at high elevations. This conceptual framework was tested in a typical semi-arid Andean catchment (1512 km², 820–5500 m a.s.l.) over a 20-year simulation period encompassing a wide range of climate and water-use conditions (changes in grape varieties, irrigated areas, and irrigation techniques). Model evaluation was performed from a Bayesian perspective assuming auto-correlated, heteroscedastic, and non-Gaussian residuals. Different criteria and data sources were used to verify model assumptions in terms of efficiency, internal consistency, statistical reliability, and sharpness of the predictive uncertainty bands. Alternatively, a multiple-hypothesis, multi-criteria modeling framework was also developed to quantify the importance of model non-uniqueness and structural inadequacy from a non-probabilistic perspective.
On the whole, incorporating the effects of irrigation water-use led to new interactions between the hydrological parameters of the modeling framework and improved reliability of streamflow predictions during low-flow periods. Finally, a sensitivity analysis to changes in climate conditions was conducted to evaluate the potential impacts of increasing temperatures and atmospheric CO2 on the hydrological behavior of the catchment and the capacity to meet future water demands.
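The simple soil-water balance mentioned above can be sketched as a single-bucket model that tallies the irrigation needed to offset crop evapotranspiration. This is an illustrative simplification under assumed units (e.g. mm per day); the thesis's actual formulation and parameter names may differ:

```python
def irrigation_requirement(rain, et_crop, capacity, initial=None):
    """One-bucket soil-water balance: each day add rainfall (capped at the
    soil's storage capacity), subtract crop evapotranspiration, and record
    the deficit that irrigation must supply to keep storage non-negative."""
    store = capacity if initial is None else initial
    need = 0.0
    for p, et in zip(rain, et_crop):
        store = min(capacity, store + p) - et
        if store < 0.0:
            need -= store  # irrigation covers the shortfall
            store = 0.0
    return need

# Two dry days with 5 mm/day crop demand and a 6 mm store: 4 mm must be irrigated.
demand = irrigation_requirement(rain=[0.0, 0.0], et_crop=[5.0, 5.0], capacity=6.0)
```

Driving such a bucket with phenology-dependent crop demands, and feeding the resulting abstractions back into the catchment model, is what lets the framework couple water availability with water use.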
|
6 |
Smart Quality Assurance System for Additive Manufacturing using Data-driven based Parameter-Signature-Quality Framework
Law, Andrew Chung Chee 02 August 2022 (has links)
Additive manufacturing (AM) technology is a key emerging field transforming how customized products with complex shapes are manufactured. AM is the process of layering materials to produce objects from three-dimensional (3D) models. AM technology can be used to print objects with complicated geometries and a broad range of material properties. However, the issue of ensuring the quality of printed products during the process remains an obstacle to industry-level adoption. Furthermore, the characteristics of AM processes typically involve complex process dynamics and interactions between machine parameters and desired qualities. The issues associated with quality assurance in AM processes underscore the need for research into smart quality assurance systems.
To study the complex physics behind process interaction challenges in AM processes, this dissertation proposes the development of a data-driven smart quality assurance framework that incorporates in-process sensing and machine learning-based modeling by correlating the relationships among parameters, signatures, and quality. High-fidelity AM simulation data and the increasing use of sensors in AM processes help simulate and monitor the occurrence of defects during a process and open doors for data-driven approaches such as machine learning to make inferences about quality and predict possible failure consequences.
To address the research gaps associated with quality assurance for AM processes, this dissertation proposes several data-driven approaches based on the design of experiments (DoE), forward prediction modeling, and an inverse design methodology. The proposed approaches were validated for AM processes such as fused filament fabrication (FFF) using polymer and hydrogel materials and laser powder bed fusion (LPBF) using common metal materials. The following three novel smart quality assurance systems based on a parameter–signature–quality (PSQ) framework are proposed:
1. A customized in-process sensing platform with a DoE-based process optimization approach was proposed to learn and optimize the relationships among process parameters, process signatures, and part quality during bioprinting processes. This approach was applied to layer-porosity quantification and quality assurance for polymer and hydrogel scaffold printing using an FFF process.
2. A data-driven surrogate model that can be informed using high-fidelity physical-based modeling was proposed to develop a parameter–signature–quality framework for the forward prediction problem of estimating the quality of metal additive-printed parts. The framework was applied to residual stress prediction for metal parts based on process parameters and thermal history with reheating effects simulated for the LPBF process.
3. Deep-ensemble-based neural networks with active learning were developed to predict and recommend optimal process parameter values, achieving the inverse design of desired mechanical responses of final built parts in metal AM processes with fewer training samples. The methodology was applied to a metal AM process simulation in which the optimal process parameter values for multiple desired mechanical responses are recommended based on a smaller number of simulation samples. / Doctor of Philosophy / Additive manufacturing (AM) is the process of layering materials to produce objects from three-dimensional (3D) models. AM technology can be used to print objects with complicated geometries and a broad range of material properties. However, the issue of ensuring the quality of printed products during the process remains a challenge to industry-level adoption. Furthermore, the characteristics of AM processes typically involve complex process dynamics and interactions between machine parameters and the desired quality. The issues associated with quality assurance in AM processes underscore the need for research into smart quality assurance systems.
To study the complex physics behind process interaction challenges in AM processes, this dissertation proposes a data-driven smart quality assurance framework that incorporates in-process sensing and machine-learning-based modeling by correlating the relationships among process parameters, sensor signatures, and parts quality. Several data-driven approaches based on the design of experiments (DoE), forward prediction modeling, and an inverse design methodology are proposed to address the research gaps associated with implementing a smart quality assurance system for AM processes. The proposed parameter–signature–quality (PSQ) framework was validated using bioprinting and metal AM processes for printing with polymer, hydrogel, and metal materials.
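The deep-ensemble active-learning loop in item 3 above can be illustrated by its acquisition step: query the candidate process parameters on which the ensemble members disagree most. The predictors below are hypothetical stand-ins, not the dissertation's trained networks:

```python
import statistics

def next_sample(candidates, ensemble):
    """Variance-based acquisition: score each candidate parameter setting by
    the population variance of the ensemble's predictions and return the
    most contentious one, i.e. where a new (simulation) label is expected
    to be most informative."""
    def disagreement(x):
        preds = [model(x) for model in ensemble]
        return statistics.pvariance(preds)
    return max(candidates, key=disagreement)
```

Repeating this step and retraining the ensemble on each newly simulated sample is what lets the inverse-design recommendations be made from a smaller number of expensive AM simulations.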
|