1.
Development of Surrogate Model for FEM Error Prediction using Deep Learning
Jain, Siddharth, 07 July 2022
This research is a proof-of-concept study to develop a surrogate model, using deep learning (DL), to predict the solution error for a given model with a given mesh. For this research, we have taken the von Mises stress contours and have predicted two different types of error indicator contours, namely (i) the von Mises error indicator (MISESERI) and (ii) the energy density error indicator (ENDENERI). Error indicators are designed to identify the areas of the solution domain where the gradient has not been properly captured; an indicator uses the spatial gradient distribution of the existing solution on a given mesh to estimate the error. Because errors in the finite element solution arise from poor meshing and the nature of the finite element method, these error indicators are leveraged to study and reduce those errors through an adaptive remeshing scheme. Adaptive remeshing is an iterative and computationally expensive process that reduces the error computed during the post-processing step. To overcome this limitation we propose replacing it with data-driven techniques. We have introduced an image-processing-based surrogate model designed to solve an image-to-image regression problem using convolutional neural networks (CNNs): it takes a 256 × 256 colored image of the von Mises stress contour and outputs the required error indicator. To train this model with good generalization performance we have developed four different geometries for each of three case studies: (i) a quarter plate with a hole, (ii) a simply supported plate with multiple holes, and (iii) a simply supported stiffened plate. The entire research is implemented in a three-phase approach. Phase I involves the design and development of a CNN trained on stress contour images with their corresponding von Mises stress values volume-averaged over the entire domain.
Phase II involves developing a surrogate model to perform image-to-image regression, and the final phase, phase III, involves extending the capabilities of phase II to make the surrogate model more generalized and robust. The final surrogate model, trained on the global dataset of 12,000 images, consists of three autoencoders, one encoder-decoder assembly, and two multi-output regression neural networks. With a training error of less than 1%, the neural network shows good memorization and generalization performance. Our final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on the testing datasets. Thus, this present study can be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks. / Master of Science / This research is a proof-of-concept study to develop an image-processing-based neural network (NN) model to solve an image-to-image regression problem. In finite element analysis (FEA), errors arise from poor meshing and the nature of the finite element method, and error indicators are used to study and reduce them. For this research, we have predicted two different types of error indicator contours by using stress images as inputs to the NN model. In popular FEA packages, an adaptive remeshing scheme is used to optimize mesh quality by iteratively computing error indicators, making the process computationally expensive. To overcome this limitation we propose replacing it with convolutional neural networks (CNNs), which are particularly suited to image-based data. To train our CNN model with good generalization performance we have developed four different geometries with varying load cases. The entire research is implemented in a three-phase approach. Phase I involves the design and development of a CNN model to perform initial training on small images.
Phase II involves developing an assembled neural network to perform image-to-image regression, and the final phase, phase III, involves extending the capabilities of phase II for more generalized and robust results. With a training error of less than 1%, the neural network shows good memorization and generalization performance. Our final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on the testing datasets. Thus, this present study can be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks.
2.
Numerical simulation of diaphragm rupture
Petrie-Repar, Paul J, Unknown Date
The results from computer simulations of the gas-dynamic processes that occur during and after the rupture of diaphragms within shock tubes and expansion tubes are presented. A two-dimensional and axisymmetric finite-volume code that solves the unsteady Euler equations for inviscid compressible flow was used to perform the simulations. The flow domains were represented as unstructured meshes of triangular cells, and solution-adaptive remeshing was used to focus computational effort in regions where the flow-field gradients were high. The ability of the code to produce accurate solutions to the Euler equations was verified by examining the following test cases: supersonic vortex flow between two arcs, an ideal shock tube, and supersonic flow over a cone. The ideal shock tube problem was studied in detail, in particular the shock speed. The computed shock speed was accurate when the initial pressure ratio was low. When the initial pressure ratio was high, the flow was difficult to resolve because of the large density ratio at the contact surface, where significant numerical diffusion occurred. However, solution-adaptive remeshing was used to control the error and reasonable estimates for the shock speed were obtained. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm within a shock tube. The development of the flow, in particular the contact surface, was examined and found to be strongly dependent on the initial pressure ratio across the diaphragm. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed via the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and were closer to the experimental measurements.
This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments. The code also has the ability to simulate two-dimensional fluid-structure interactions. To achieve this, the Euler equations are solved for a general moving frame of reference. Mesh management during a simulation is important. This includes the ability to automatically generate a new mesh when the current mesh becomes distorted (due to the motion of the structures) and to transfer the solution from the old mesh to the new one. The shock-induced rupture of thin diaphragms was examined. Previous one-dimensional models are flawed because they do not simultaneously consider the diaphragm mass and allow the upstream gas to penetrate the diaphragm. Two multi-dimensional models which allow the upstream gas to penetrate are described. The first model assumes the diaphragm vaporises immediately after the arrival of the incident shock. The second model assumes the diaphragm shatters into a number of pieces which can be treated as rigid bodies. The results from both models are compared with experimental data.
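The one-dimensional shock speed that the multi-dimensional simulations were compared against comes from the classical ideal shock tube relation linking the initial diaphragm pressure ratio p4/p1 to the shock pressure ratio p2/p1. As a hedged illustration of that baseline (standard textbook gas dynamics, not code from the thesis), the sketch below solves the relation by bisection and recovers the shock Mach number and speed:

```python
import math

# Ideal shock tube: given the diaphragm pressure ratio p4/p1, find the
# shock pressure ratio p2/p1 from the classical relation, then the shock
# Mach number and speed. Textbook one-dimensional theory, not thesis code.

def p4_over_p1(p21, g1=1.4, g4=1.4, a1_over_a4=1.0):
    """Diaphragm pressure ratio implied by a given shock pressure ratio p21."""
    num = (g4 - 1.0) * a1_over_a4 * (p21 - 1.0)
    den = math.sqrt(2.0 * g1 * (2.0 * g1 + (g1 + 1.0) * (p21 - 1.0)))
    term = 1.0 - num / den
    if term <= 0.0:
        return float('inf')   # outside the physical branch
    return p21 * term ** (-2.0 * g4 / (g4 - 1.0))

def shock_speed(p41, a1=1.0, g1=1.4, g4=1.4, a1_over_a4=1.0):
    """Bisect for p2/p1 in (1, p4/p1), then convert to shock speed W = Ms * a1."""
    lo, hi = 1.0 + 1e-12, p41
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p4_over_p1(mid, g1, g4, a1_over_a4) < p41:
            lo = mid
        else:
            hi = mid
    p21 = 0.5 * (lo + hi)
    Ms = math.sqrt((g1 + 1.0) / (2.0 * g1) * (p21 - 1.0) + 1.0)
    return Ms * a1, p21

# Diatomic gas both sides, equal sound speeds, diaphragm pressure ratio 10
W, p21 = shock_speed(p41=10.0)
```

The experiments cited in the abstract measured shock speeds above this one-dimensional prediction at high p4/p1, which is the discrepancy the multi-dimensional simulations help explain.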
3.
Sketch-based intuitive 3D model deformations
Bao, Xin, January 2014
In 3D modelling software, deformations are used to add, remove, or modify geometric features of existing 3D models to create new models with similar but slightly different details. Traditional techniques for deforming virtual 3D models require users to explicitly define control points and regions of interest (ROIs), and to define precisely how the ROIs are deformed by the control points. The awkwardness of defining these factors in traditional 3D modelling software makes it difficult for people with limited experience of 3D modelling to deform existing 3D models as they intend. As applications which require virtual 3D model processing become more and more widespread, it becomes increasingly desirable to lower the "difficulty of use" threshold of 3D model deformations. This thesis argues that the user experience, in terms of intuitiveness and ease of use, of a user interface for deforming virtual 3D models can be greatly enhanced by sketch-based 3D model deformation techniques, which require a minimal amount of interaction while preserving the plausibility of the deformation results and the responsiveness of the algorithms on modern consumer-grade computing devices. A prototype system for sketch-based 3D model deformations was developed and implemented to support this hypothesis; it allows the user to perform a deformation using a single deforming stroke, eliminating the need to explicitly select control points, the ROI, and the deforming operation. GPU-based acceleration has been employed to optimise the runtime performance of the system, so that it is responsive enough for real-time interaction. Studies of the runtime performance and usability of the prototype system were conducted to provide evidence supporting the hypothesis.
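As a minimal illustration of how a single stroke can drive a deformation without explicit control points or a user-selected ROI (a toy 2D sketch under assumed geometry, not the thesis's actual algorithm), the snippet below pulls the vertices of a polyline toward a deforming stroke with a Gaussian falloff on distance, so the region of interest is implied by proximity to the stroke:

```python
import math

# Toy 2D version of a sketch-driven deformation: each vertex is pulled
# toward the nearest sample of the user's stroke, weighted by a Gaussian
# falloff on distance, so the ROI is implicit rather than user-selected.
# Illustrative only -- not the algorithm from the thesis.

def closest_point_on_stroke(p, stroke):
    """Nearest stroke sample to vertex p (stroke = list of (x, y) samples)."""
    return min(stroke, key=lambda s: (s[0] - p[0]) ** 2 + (s[1] - p[1]) ** 2)

def deform(vertices, stroke, sigma=0.5):
    """Move each vertex toward the stroke; the Gaussian weight decays with
    distance, so far-away vertices are left (almost) untouched."""
    out = []
    for p in vertices:
        q = closest_point_on_stroke(p, stroke)
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        w = math.exp(-d2 / (2.0 * sigma ** 2))   # implicit ROI via falloff
        out.append((p[0] + w * (q[0] - p[0]), p[1] + w * (q[1] - p[1])))
    return out

# Flat polyline on y = 0; the stroke pulls its middle upward
vertices = [(x / 4.0, 0.0) for x in range(9)]   # x in [0, 2]
stroke = [(1.0, 0.5)]
deformed = deform(vertices, stroke, sigma=0.3)
```

With this falloff the middle vertex rises noticeably while the endpoints barely move; a real system would also preserve surface detail (for example via Laplacian coordinates) rather than displacing vertices independently.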