1

Generalizable surrogate models for the improved early-stage exploration of structural design alternatives in building construction

Nourbakhsh, Mehdi 27 May 2016 (has links)
The optimization of complex structures is extremely time consuming. To obtain their optimization results, researchers often wait several hours or even days. Then, if they have to make a slight change in their input parameters, they must run their optimization problem again. This iterative process of defining a problem and finding a set of optimized solutions may take several days and sometimes several weeks. Therefore, to reduce optimization time, researchers have developed various approximation-based models that predict the results of time-consuming analysis. These simple analytical models, known as “meta- or surrogate models,” are based on data available from limited analysis runs. These “models of the model” seek to approximate computation-intensive functions within a considerably shorter time than expensive simulation codes that require significant computing power. One of the limitations of metamodels (or, interchangeably, surrogate models) developed for the structural approximation of trusses and space frames is a lack of generalizability. Since such metamodels are exclusively designed for a specific structure, they can predict the performance of only the structures for which they are designed. For instance, if a metamodel is designed for a ten-bar truss, it cannot predict the analysis results of another ten-bar truss with different boundary conditions. In addition, they cannot be re-used if the topology of a structure changes (e.g., from a ten-bar truss to a twelve-bar truss). If designers change the topology, they must generate new sample data and re-train their model. Therefore, the predictability of these exclusive models is limited. The objective of this study is to create, test, and validate generalizable metamodels that predict the results of finite element analysis by combining analysis data from structures with various geometries. Developing these models requires two main steps: feature generation and model creation. In the first step, using 11 features for nodes and three for members, the physical representations of four types of domes, slabs, and walls were transformed into numerical values. Then, by randomly varying the cross-sectional area, the stress value of each member was recorded. In the second step, these feature vectors were used to create, test, and verify various metamodels in an examination of four hypotheses. The results of the hypotheses show that with generalizable metamodels, the analysis data from various structures can be combined and used for predicting the performance of the members of those structures or of new structures within the same class of geometry. For instance, given the same radius for all domes, a metamodel generated from the analysis data of a 700-, 980-, and 1,525-member dome can predict the structural performance of the members of these domes or of a new dome with 250 members. In addition, the results show that generalizable metamodels are able to predict the results of a finite element analysis more closely than metamodels exclusively created for a specific structure. A case study was selected to examine the application of generalizable metamodels for the early-stage exploration of structural design alternatives in a construction project. The results illustrate that optimization with generalizable metamodels reduces the time and cost of the project, fostering more efficient planning and more rapid decision-making by architects, contractors, and engineers at the early stage of construction projects.
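The following is a minimal, hypothetical sketch (not the author's code) of the feature-based approach described in this abstract: member-level feature vectors pooled from several structures train a single regression metamodel that then predicts member stresses for held-out members. The feature count, model choice, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one member of some training structure,
# described by geometric/topological features (e.g. end-node coordinates, length,
# cross-sectional area, connectivity), pooled across several domes.
n_members, n_features = 3000, 14          # 11 node-derived + 3 member features (assumed)
X = rng.random((n_members, n_features))
y = rng.random(n_members)                 # member stress from FEA (placeholder values)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a generalizable metamodel: it sees members from many structures at once.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict stresses for members not used in training (here, the held-out split).
print("R^2 on held-out members:", model.score(X_test, y_test))
```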
2

Time-averaged Surrogate Modeling for Small Scale Propellers Based on High-Fidelity CFD Simulations

Carroll, Joseph Ray 14 December 2013 (has links)
Many Small Unmanned Aerial Vehicles (SUAVs) are driven by small scale, fixed blade propellers. The flow produced by the propeller, known as the propeller slipstream, can have a significant impact on SUAV aerodynamics. In the design and analysis process for SUAVs, numerous Computational Fluid Dynamic (CFD) simulations of the coupled aircraft and propeller are often conducted, which require a time-averaged, steady-state approximation of the propeller for computational efficiency. Most steady-state propeller models apply an actuator disk of momentum sources to model the thrust and swirl imparted to the flow field by a propeller. These momentum source models are based on simplified theories which lack accuracy. Currently, the most common momentum source models are based on blade element theory. Blade element theory discretizes the propeller blade into airfoil sections and assumes them to behave as two-dimensional (2D) airfoils. Blade element theory neglects many 3D flow effects that can greatly affect propeller performance, limiting its accuracy and range of application. The research work in this dissertation uses a surrogate modeling method to develop a more accurate momentum source propeller model. Surrogate models for the time-averaged thrust and swirl produced by each blade element are trained from a database of time-accurate, high-fidelity 3D CFD propeller simulations. Since the surrogate models are trained from these high-fidelity CFD simulations, various 3D effects on propellers are inherently accounted for, such as tip loss, hub loss, post-stall effects, and element interaction. These efficient polynomial response surface surrogate models are functions of local flow properties at the blade elements and are embedded into 3D CFD simulations as locally adaptive momentum source terms. Results of the radial distribution of thrust and swirl for the steady-state surrogate propeller model are compared to those of time-dependent, high-fidelity 3D CFD propeller simulations for various aircraft-propeller coupled situations. This surrogate propeller model, which is dependent on local flow field properties, simulates the time-averaged flow field produced by the propeller at a momentum-source-term level of detail. Due to the nature of the training cases, it also captures the accuracy of time-dependent 3D CFD propeller simulations but at a much lower cost.
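Below is a small illustrative sketch, not taken from the dissertation, of fitting a polynomial response-surface surrogate that maps local blade-element flow properties to a time-averaged source term. The chosen inputs, ranges, and synthetic data are assumptions standing in for CFD results.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Placeholder training samples standing in for time-accurate 3D CFD results:
# inputs are local flow properties at a blade element, output is the
# time-averaged thrust source term contributed by that element.
axial_velocity = rng.uniform(5.0, 25.0, 200)      # m/s (assumed range)
radial_station = rng.uniform(0.1, 1.0, 200)       # r/R
X = np.column_stack([axial_velocity, radial_station])
thrust_source = 1.5 - 0.02 * axial_velocity + 3.0 * radial_station**2  # synthetic trend

# Quadratic response surface: cheap to evaluate inside a steady-state CFD loop.
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, thrust_source)

# Query the surrogate with local flow properties at a new blade element.
print(surrogate.predict([[12.0, 0.7]]))
```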
3

Advanced Machine Learning for Surrogate Modeling in Complex Engineering Systems

Lee, Cheol Hei 02 August 2023 (has links)
Surrogate models are indispensable in the analysis of engineering systems. The quality of a surrogate model is determined by the data quality and the model class, but achieving high standards of both is challenging in complex engineering systems. Heterogeneity, implicit constraints, and extreme events are typical examples of the factors that complicate systems, yet they have been underestimated or disregarded in machine learning. This dissertation is dedicated to tackling the challenges in surrogate modeling of complex engineering systems by developing the following machine learning methodologies. (i) Partitioned active learning partitions the design space according to heterogeneity in response features, thereby exploiting localized models to measure the informativeness of unlabeled data. (ii) For systems with implicit constraints, failure-averse active learning incorporates constraint outputs to estimate the safe region and avoid undesirable failures in learning the target function. (iii) Multi-output extreme spatial learning enables modeling and simulating extreme events in composite fuselage assembly. The proposed methods were applied to real-world case studies and outperformed benchmark methods. / Doctor of Philosophy / Data-driven decisions are ubiquitous in the engineering domain, and data-driven models are fundamental to them. Active learning is a subdomain of machine learning that enables data-efficient modeling, and extreme spatial modeling is suitable for analyzing rare events. Although they are superb techniques for data-driven modeling, existing methods cannot effectively address modern engineering systems complicated by heterogeneity, implicit constraints, and rare events. This dissertation is dedicated to advancing active learning and extreme spatial modeling for complex engineering systems by proposing three methodologies. The first method is partitioned active learning, which efficiently learns systems whose behavior changes across the design space by localizing the information measurement. Second, failure-averse active learning is established to learn systems subject to implicit constraints, which cannot be solved analytically, while minimizing constraint violations. Lastly, the multi-output extreme spatial model is developed to model and simulate rare events associated with extremely large values in the aircraft manufacturing system. The proposed methods overcome the limitations of existing methods and outperform benchmark methods in the case studies.
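As a rough illustration of the partitioned active learning idea (a sketch under assumed data and models, not the dissertation's implementation): the design space is partitioned, a localized Gaussian process is fit in each partition, and the next design to label is the one with the largest local predictive uncertainty.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)

# Toy heterogeneous response: behaves differently on the two halves of the design space.
def simulate(x):
    return np.where(x[:, 0] < 0.5, np.sin(8 * x[:, 0]), 2.0 * x[:, 0] ** 2) + 0.3 * x[:, 1]

X_labeled = rng.random((12, 2))
y_labeled = simulate(X_labeled)
X_pool = rng.random((300, 2))                      # candidate (unlabeled) designs

# 1) Partition the design space (a stand-in for response-feature-based partitioning).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_labeled)

# 2) Fit a localized GP surrogate in each partition.
local_models = []
for k in range(2):
    mask = kmeans.labels_ == k
    gp = GaussianProcessRegressor().fit(X_labeled[mask], y_labeled[mask])
    local_models.append(gp)

# 3) Score unlabeled candidates by the predictive uncertainty of their local model.
pool_labels = kmeans.predict(X_pool)
scores = np.empty(len(X_pool))
for k, gp in enumerate(local_models):
    idx = pool_labels == k
    _, std = gp.predict(X_pool[idx], return_std=True)
    scores[idx] = std

x_next = X_pool[np.argmax(scores)]                 # most informative design to label next
print("next design to simulate:", x_next)
```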
4

Machine Learning Applications in Structural Analysis and Design

Seo, Junhyeon 05 October 2022 (has links)
Artificial intelligence (AI) has progressed significantly during the last several decades, along with the rapid advancements in computational power. This advanced technology is currently being employed in various engineering fields, not just in computer science. In aerospace engineering, AI and machine learning (ML), a major branch of AI, are now playing an important role in various applications, such as automated systems, unmanned aerial vehicles, optimal structural design, etc. This dissertation focuses on employing AI in structural engineering to develop lighter and safer aircraft structures and to address challenges involving structural optimization and analysis. Therefore, various ML applications are studied in this research to provide novel frameworks for structural optimization, analysis, and design. First, the application of a deep-learning-based (DL) convolutional neural network (CNN) was studied to develop a surrogate model for providing optimum structural topology. Typically, conventional structural topology optimization requires a large number of computations due to the iterative finite element analyses (FEAs) needed to obtain optimal structural layouts under given load and boundary conditions. The surrogate model proposed in this study predicts the material density layout from the static analysis results of the initial geometry, without performing iterative FEAs. The developed surrogate models were validated with various example cases. Using the proposed method, the total calculation time was reduced by 98% compared to conventional topology optimization once the CNN had been trained. The predicted layouts have structural performance equal to that of the optimum structures derived by conventional topology optimization, which are considered "ground truths". Secondly, reinforcement learning (RL) is studied to create a stand-alone AI system that can design a structure from trial-and-error experience. RL is one of the major ML branches that mimic human behavior, specifically how human beings solve problems based on their experience. The main RL algorithm assumes that the human problem-solving process can be improved by earning positive and negative rewards from good and bad experiences, respectively. Therefore, this algorithm can be applied to solve structural design problems, whereby engineers can improve the structural design by finding its weaknesses and enhancing them using a trial-and-error approach. To prove this concept, an AI system with the RL algorithm was implemented to derive the optimum truss structure using continuous and discrete cross-section choices under a set of given constraints. This study also proposed a unique reward function system to handle the constraints in structural design problems. As a result, an independent AI system can be developed from the experience-based training process, and this system can design the structure by itself without significant human intervention. Finally, this dissertation proposes an ML-based classification tool to categorize the vibrational mode shapes of tires. In general, tire vibration significantly affects driving quality, such as stability, ride comfort, noise performance, etc. Therefore, a comprehensive study identifying the vibrational features is necessary to design a high-performance tire by considering the geometry, material, and operating conditions. Typically, the vibrational characteristics can be obtained from modal tests or numerical analysis.
These identified modal characteristics can be used to categorize the tire mode shapes and determine which specific modes cause poorer driving performance. This study suggests a method to develop an ML-based classification tool that can efficiently categorize the mode shapes using advanced feature recognition and classification algorithms. The best-performing classification tool can accurately predict the tire category without manual effort. Therefore, the proposed classification tool can be used to categorize the tire mode shapes for subsequent tire performance studies and to improve the design process by reducing the time and resources needed for expensive calculations or experiments. / Doctor of Philosophy / Artificial intelligence (AI) has significantly progressed during the last several decades with the rapid advancement of computational capabilities. This advanced technology is currently applied to problems in various engineering fields, not just problems in computer science. Machine learning (ML), a major branch of AI, is actively applied to mechanical/structural problems since an ML model can replace a physical system with a surrogate model, which can be used to predict, control, and optimize its behavior. This dissertation provides a new framework to design and analyze structures using ML-based techniques. In particular, the latest ML technologies, such as convolutional neural networks, widely used for image processing and feature recognition, are applied to replace numerical calculations in structural optimization and analysis with an ML-based system. Also, this dissertation shows how to develop a smart system that can design a structure by itself using reinforcement learning, which is also utilized for autonomous driving systems and robot walking algorithms. Finally, this dissertation suggests an ML-based classification approach to categorize complex vibration modes of a structure.
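A minimal sketch of the kind of CNN surrogate for topology optimization described above, written in PyTorch; the architecture, input channels, and data are purely illustrative assumptions, not the dissertation's model.

```python
import torch
import torch.nn as nn

# Input: per-pixel static-analysis fields on the initial (full) design domain
# (e.g. strain-energy density, displacement components); output: predicted
# optimal material density in [0, 1] at each pixel. Channel choices are assumed.
class TopologySurrogate(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TopologySurrogate()
fields = torch.randn(8, 3, 64, 64)        # a batch of placeholder analysis fields
target = torch.rand(8, 1, 64, 64)         # placeholder "ground truth" densities

# One training step against densities from conventional topology optimization.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.binary_cross_entropy(model(fields), target)
loss.backward()
optimizer.step()
print(float(loss))
```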
5

Control, Localization, and Shock Optimization of Icosahedral Tensegrity Systems

Layer, Brett 05 June 2024 (has links) (PDF)
Exploring the design space of tensegrity systems is the basis of the work presented in this thesis. The areas explored as part of this research include the optimization of tensegrity structures to minimize their size given payload shock constraints, and the control and locomotion of an icosahedral tensegrity system using movable masses, with the system localized after each move by combining an accelerometer with geometric knowledge of the icosahedral structure. In the optimization design space, a simplified model was created to represent an icosahedral tensegrity structure. This was done by assuming that a system of springs could represent an icosahedral system with enough fidelity to be useful for optimization. These results were then validated and tested. The most extensive part of the research performed concerned the control of a tensegrity icosahedron. This structure utilized novel locomotion techniques that allow it to move by changing its center of mass. Essentially, instead of actuating the system by changing the length of the strings that make up the system state, the system's center of mass is moved using movable masses. These masses allow the system to rotate about one of its base pivot points. A controller was also created that allows the system to reach a target point if the state of the system is known. Finally, work was done to localize the structure by combining a motion model based on the geometry of the structure and a measurement model based on accelerometer readings during the movement of the structure into an extended Kalman filter (EKF). This EKF was then used to localize the structure based on the predicted motion model and the measurement model prescribed by the accelerometer. This allowed the system's state to be estimated to within 3 standard deviations of the uncertainty of the motion and measurement models. Additional work was also done to build a physical model of the system. This work includes making a bar through which the movable masses can pass, creating an accelerometer model to roughly determine the system's state, and tracking the system's displacement using some steady-state model assumptions.
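The EKF-based localization described above can be summarized with a short sketch; the state definition, motion step, and noise levels below are illustrative assumptions rather than the thesis's actual models.

```python
import numpy as np

# A minimal EKF sketch: a geometry-based motion model predicts the displacement
# produced by one tip-over step, and an accelerometer-derived displacement
# measurement corrects the prediction. All matrices and noise levels are placeholders.

x = np.zeros(2)                 # state: planar position of the structure
P = np.eye(2) * 0.01            # state covariance
Q = np.eye(2) * 0.02            # motion-model (process) noise
R = np.eye(2) * 0.05            # accelerometer (measurement) noise
H = np.eye(2)                   # measurement model: position observed directly

def predict(x, P, step):
    """Geometric motion model: one roll moves the centroid by a known step vector."""
    x_pred = x + step           # linear here, so the Jacobian F is the identity
    P_pred = P + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z):
    """Correct the prediction with the accelerometer-derived position estimate z."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for step, z in [(np.array([0.10, 0.0]), np.array([0.09, 0.01])),
                (np.array([0.0, 0.10]), np.array([0.02, 0.12]))]:
    x, P = predict(x, P, step)
    x, P = update(x, P, z)
    print("estimated position:", x, "uncertainty (diag):", np.diag(P))
```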
6

Využití umělých neuronových sítí k urychlení evolučních algoritmů / Utilizing artificial neural networks to accelerate evolutionary algorithms

Wimberský, Antonín January 2011 (has links)
In the present work, we study the possibility of using artificial neural networks to accelerate evolutionary algorithms. The improvement consists in reducing the number of calls to the fitness function, whose evaluation is very time-consuming and expensive in some kinds of optimization problems. We use a neural network as a regression model that estimates fitness during a run of the evolutionary algorithm. Alongside the regression model, we also work with the real fitness function, which is used to re-evaluate individuals selected according to a chosen strategy. The individuals re-evaluated with the real fitness function are then used to improve the regression model. Because a significant number of individuals are evaluated only with the regression model, the number of calls to the real fitness function needed to find a good solution to the optimization problem is substantially reduced.
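A compact sketch of the surrogate-assisted evolutionary loop described in this abstract (illustrative only; the network size, selection strategy, and test function are assumptions): most individuals are ranked with the neural-network regression model, and only a few promising ones are re-evaluated with the real fitness and used to refit the model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def real_fitness(x):
    """Stand-in for an expensive fitness function (lower is better)."""
    return np.sum((x - 0.3) ** 2, axis=1)

dim, pop_size, n_reevaluate = 5, 40, 5

# Initial archive evaluated with the real fitness; it trains the regression model.
X_arch = rng.random((30, dim))
y_arch = real_fitness(X_arch)
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
surrogate.fit(X_arch, y_arch)

population = rng.random((pop_size, dim))
for generation in range(10):
    # Variation: Gaussian mutation of the current population.
    offspring = np.clip(population + rng.normal(0.0, 0.1, population.shape), 0.0, 1.0)

    # Cheap pre-selection: most individuals are ranked by the surrogate only.
    predicted = surrogate.predict(offspring)
    best_idx = np.argsort(predicted)[:n_reevaluate]

    # Only the most promising individuals are re-evaluated with the real fitness
    # and added to the archive, which is then used to refit the surrogate.
    X_arch = np.vstack([X_arch, offspring[best_idx]])
    y_arch = np.concatenate([y_arch, real_fitness(offspring[best_idx])])
    surrogate.fit(X_arch, y_arch)

    # Survivor selection based on surrogate-predicted fitness.
    population = offspring[np.argsort(predicted)[:pop_size]]

print("real fitness calls:", len(y_arch), "best archived fitness:", y_arch.min())
```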
7

Parallel multi-modal optimal design and sensitivity assessment for electric power systems

Yazdanpanah Goharrizi, Ali 05 April 2016 (has links)
This thesis proposes a novel algorithm to optimize multi-modal, nonlinear, black-box objective functions for electric power system design using an electromagnetic transients (EMT) simulator. The algorithm discovers multiple local optimal solutions for a given complex power system and then generates accurate surrogate models of the objective function around each discovered local optimum. These surrogate models represent the local behaviour of the objective function and can be used in subsequent stages of sensitivity analysis. Using surrogate models instead of intensive transient simulation during sensitivity analysis reduces computational intensity and simulation time. This makes the proposed algorithm particularly suited for the optimization of computationally expensive black-box functions. The stages of the algorithm can be implemented independently, and hence the computations can be done in parallel. Therefore, the algorithm is implemented in a parallel environment to gain significant speed-up in the design of electric power systems. Comparative studies in terms of objective function evaluations and computation time are provided. Using several multi-modal benchmark objective functions, the superiority of the proposed algorithm over other recently developed algorithms is demonstrated. Additionally, the application of the algorithm to the design of complex electric power systems is demonstrated through several examples. The case studies show that the parallelized algorithm provides computational savings of up to 39 times compared to the conventional sequential approach. / May 2016
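A simplified sketch of the overall idea (not the thesis's algorithm or EMT simulator): multi-start local search discovers several local optima of a multi-modal black-box function, and a cheap local surrogate fitted around each optimum is then used for sensitivity assessment. The objective, sampling radius, and model choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

def objective(x):
    """Stand-in for an expensive, multi-modal black-box objective (e.g. an EMT run)."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

# 1) Multi-start local search to discover several distinct local optima.
optima = []
for start in rng.uniform(-2, 2, size=(20, 2)):
    res = minimize(objective, start, method="Nelder-Mead")
    if not any(np.linalg.norm(res.x - o) < 0.3 for o in optima):
        optima.append(res.x)

# 2) Around each local optimum, fit a quadratic surrogate of the objective and use it
#    (instead of further expensive simulations) for local sensitivity assessment.
for x_star in optima:
    samples = x_star + rng.normal(0.0, 0.05, size=(30, 2))
    values = np.array([objective(s) for s in samples])
    local_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    local_model.fit(samples, values)

    # Finite-difference sensitivities evaluated on the cheap surrogate.
    eps = 1e-3
    grads = [(local_model.predict([x_star + eps * e])[0]
              - local_model.predict([x_star - eps * e])[0]) / (2 * eps)
             for e in np.eye(2)]
    print("local optimum:", np.round(x_star, 3), "surrogate sensitivities:", np.round(grads, 3))
```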
8

Multiscale modeling of multimaterial systems using a Kriging based approach

Sen, Oishik 01 December 2016 (has links)
This thesis presents a framework for multiscale modeling of multimaterial flows using surrogate modeling techniques in the particular context of shocks interacting with clusters of particles. The work builds a framework for bridging scales in shock-particle interaction by using ensembles of resolved mesoscale computations of shocked particle-laden flows. The information from mesoscale models is “lifted” by constructing metamodels of the closure terms. The thesis analyzes several issues pertaining to surrogate-based multiscale modeling frameworks. First, to create surrogate models, the effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method, and a Dynamic Kriging (DKG) method, is evaluated. The rate of convergence of the error when these techniques are used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver. After this, closure laws for drag are constructed in the form of surrogate models derived from real-time resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging method (DKG) and the Modified Bayesian Kriging method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e., using the least number of mesoscale simulations. It is shown that, unlike the DKG method, the MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. In macroscale models for shock-particle interactions, Subgrid Particle Reynolds’ Stress Equivalent (SPARSE) terms arise because of velocity fluctuations due to fluid-particle interaction at the subgrid/meso scales. Mesoscale computations are performed to calculate the SPARSE terms and the kinetic energy of the fluctuations for different values of Mach number and particle volume fraction. Closure laws for the SPARSE terms are constructed using the MBKG method. It is found that the directions normal and parallel to that of shock propagation are the principal directions of the SPARSE tensor. It is also found that the kinetic energy of the fluctuations is independent of the particle volume fraction and is 12-15% of the incoming shock kinetic energy for higher Mach numbers. Finally, the thesis addresses the cost of performing large ensembles of resolved mesoscale computations for constructing surrogates. Variable-fidelity techniques are used to construct an initial surrogate from ensembles of coarse-grid, relatively inexpensive computations, while the use of resolved high-fidelity simulations is limited to the correction of the initial surrogate. Different variable-fidelity techniques, viz. the Space Mapping method, RBFs, and the MBKG method, are evaluated based on their ability to correct the initial surrogate.
It is found that the MBKG method uses the least number of resolved mesoscale computations to correct the low-fidelity metamodel. Instead of the 56 high-fidelity computations otherwise needed to obtain a surrogate, the MBKG method constructs surrogates from only 15 resolved computations, resulting in a drastic reduction in computational cost.
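As a rough illustration of a Kriging-type closure surrogate of the kind evaluated in this thesis, here is a sketch with synthetic data; the kernel and input ranges are assumptions, and this is ordinary Gaussian process regression rather than the MBKG method itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Sparse, noisy training data standing in for resolved mesoscale computations:
# inputs are (Mach number, particle volume fraction), output is a drag coefficient.
mach = rng.uniform(1.2, 3.0, 15)
volume_fraction = rng.uniform(0.05, 0.30, 15)
X = np.column_stack([mach, volume_fraction])
drag = 2.0 + 0.8 * mach + 4.0 * volume_fraction + rng.normal(0.0, 0.05, 15)  # synthetic

# A Kriging-style surrogate (Gaussian process regression); the white-noise term
# provides some robustness to noisy input data, loosely echoing the discussion above.
kernel = 1.0 * RBF(length_scale=[1.0, 0.1]) + WhiteKernel(noise_level=0.01)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(X, drag)

# Query the closure law at a new operating condition, with a confidence estimate.
mean, std = surrogate.predict(np.array([[2.2, 0.15]]), return_std=True)
print(f"predicted drag: {mean[0]:.3f} +/- {std[0]:.3f}")
```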
9

Aplicação de um modelo substituto para otimização estrutural topológica com restrição de tensão e estimativa de erro a posteriori / Application of a surrogate model for stress-constrained structural topology optimization with a posteriori error estimation

Varella, Guilherme January 2015 (has links)
This work presents a topology optimization methodology aimed at reducing the volume of a three-dimensional structure subject to stress constraints. Structural analysis is performed with the finite element method; stresses are computed at the Gaussian integration points and then smoothed over the mesh. To avoid problems associated with stress singularities, a stress relaxation method that penalizes the constitutive tensor is applied. The p-norm is used to approximate the maximum function and serves as the global stress constraint. The Zienkiewicz-Zhu error estimator is used to compute the stress error, which is taken into account in the p-norm calculation, making the optimization process more robust. The optimization is solved by Sequential Linear Programming, with all derivatives calculated analytically. A criterion is proposed for removing low-density elements, which proved efficient, contributing to well-defined structures and significantly reducing the computational time. The checkerboard instability phenomenon is circumvented with a linear density filter. To reduce the time spent computing derivatives and improve the performance of the optimization process, a surrogate model is proposed and used in the inner iterations of the Sequential Linear Programming. The surrogate model does not reduce the computation time of each iteration, but it considerably reduces the number of derivative evaluations. The proposed algorithm was tested by optimizing four structures and, where possible, compared with variations of the method and with results from other authors, confirming the validity of the methodology.
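A short numerical sketch of the normalized p-norm global stress measure and its analytic sensitivity, of the kind used in the work above; the stress values, allowable stress, and exponent are placeholder assumptions.

```python
import numpy as np

# The normalized p-norm aggregates per-element stress ratios into a single
# differentiable constraint that approximates the maximum. Stress values here
# are placeholders, not FEA output.
rng = np.random.default_rng(6)

sigma = rng.uniform(50.0, 180.0, 500)      # smoothed element stresses [MPa]
sigma_allow = 200.0                        # allowable stress [MPa]
p = 8                                      # p-norm exponent

ratios = sigma / sigma_allow
n = ratios.size

# Normalized p-norm: dividing by n**(1/p) keeps the measure close to the true maximum.
sigma_pn = (np.sum(ratios ** p) / n) ** (1.0 / p)

# Analytic sensitivity of the p-norm with respect to each element stress,
# the kind of derivative fed to Sequential Linear Programming.
d_sigma_pn = (sigma_pn ** (1 - p)) * (ratios ** (p - 1)) / (n * sigma_allow)

print("max ratio:", ratios.max(), "p-norm estimate:", sigma_pn)
print("largest sensitivity:", d_sigma_pn.max())
```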
