1

Multilevel Design Optimization of Automotive Structures Using Dummy- and Vehicle-Based Responses

Gandikota, Imtiaz Shareef 17 August 2013 (has links)
A computationally efficient multilevel decomposition and optimization framework is developed for application to automotive structures. A full scale finite element (FE) model of a passenger car along with a dummy and occupant restraint system (ORS) is used to analyze crashworthiness and occupant safety criteria in two crash scenarios. The vehicle and ORS models are incorporated into a decomposed multilevel framework and optimized with mass and occupant injury criteria as objectives. A surrogate modeling technique is used to approximate the computationally expensive nonlinear FE responses. A multilevel target matching optimization problem is formulated to obtain a design satisfying system level performance targets. A balance is sought between crashworthiness and structural rigidity while minimizing the overall mass of the vehicle. Two separate design problems involving crash and crash+vibration are considered. A major finding of this study is that it is possible to achieve greater weight savings by including dummy-based responses in the optimization problem.
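The core pattern in this abstract — sample the expensive FE model once, fit a surrogate, then minimize mass against an injury-criterion target on the surrogate — can be sketched as follows. This is a minimal illustration, not the thesis's actual framework: the quadratic fe_injury_response, the mass model, and all variable names are invented stand-ins for the FE crash responses.

```python
# Sketch: surrogate-assisted design optimization under an injury target.
# fe_injury_response stands in for an expensive nonlinear FE crash run.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def fe_injury_response(x):
    """Placeholder for a dummy-based injury criterion from FE simulation."""
    return 1000.0 - 150.0 * x[0] - 90.0 * x[1] + 40.0 * x[0] * x[1]

def mass(x):
    """Vehicle mass grows with panel gauges x (normalized thicknesses)."""
    return 120.0 + 35.0 * x[0] + 50.0 * x[1]

# Design of experiments: sample the expensive model once, up front.
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.array([fe_injury_response(x) for x in X])

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

# Minimize mass subject to a system-level injury target on the surrogate.
target = 900.0
res = minimize(
    mass,
    x0=[0.5, 0.5],
    bounds=[(0, 1), (0, 1)],
    constraints={"type": "ineq",
                 "fun": lambda x: target - surrogate.predict([x])[0]},
)
print("design:", res.x, "mass:", res.fun)
```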
2

A Trust Region Filter Algorithm for Surrogate-based Optimization

Eason, John P. 01 April 2018 (has links)
Modern nonlinear programming solvers can efficiently handle very large scale optimization problems when accurate derivative information is available. However, black box or derivative free modeling components are often unavoidable in practice when the modeled phenomena may cross length and time scales. This work is motivated by examples in chemical process optimization where most unit operations have well-known equation oriented representations, but some portion of the model (e.g. a complex reactor model) may only be available with an external function call. The concept of a surrogate model is frequently used to solve this type of problem. A surrogate model is an equation oriented approximation of the black box that allows traditional derivative based optimization to be applied directly. However, optimization tends to exploit approximation errors in the surrogate model, leading to inaccurate solutions and repeated rebuilding of the surrogate model. Even if the surrogate model is perfectly accurate at the solution, this only guarantees that the original problem is feasible. Since optimality conditions require gradient information, a higher degree of accuracy is required. In this work, we consider the general problem of hybrid glass box/black box optimization, or gray box optimization, with a focus on guaranteeing that a surrogate-based optimization strategy converges to optimal points of the original detailed model. We first propose an algorithm that combines ideas from SQP filter methods and derivative free trust region methods to solve this class of problems. The black box portion of the model is replaced by a sequence of surrogate models in trust region subproblems. By carefully managing surrogate model construction, the algorithm is guaranteed to converge to true optimal solutions. Then, we discuss how this algorithm can be modified for effective application to practical problems. Performance is demonstrated on a test set of benchmarks as well as a set of case studies relating to chemical process optimization. In particular, application to the oxycombustion carbon capture power generation process leads to significant efficiency improvements. Finally, extensions of surrogate-based optimization to other contexts are explored through a case study with physical properties.
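The trust-region management that surrogate-based algorithms like this one build on can be illustrated with a minimal sketch: optimize a locally fitted surrogate inside the trust region, compare predicted against actual improvement on the black box, and grow or shrink the region accordingly. The filter mechanism and convergence safeguards of the thesis are omitted here; black_box and all names are illustrative stand-ins.

```python
# Sketch of a basic trust-region loop with a locally rebuilt surrogate.
import numpy as np

def black_box(x):          # expensive model (stand-in)
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.4) ** 2

def build_surrogate(center, radius, n=15, rng=np.random.default_rng(1)):
    """Fit a local quadratic surrogate by least squares around `center`."""
    X = center + rng.uniform(-radius, radius, size=(n, 2))
    basis = np.column_stack([np.ones(n), X, X ** 2])   # 1, x1, x2, x1^2, x2^2
    coef, *_ = np.linalg.lstsq(basis, [black_box(x) for x in X], rcond=None)
    return lambda x: coef @ np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2])

x, radius = np.array([3.0, 3.0]), 1.0
for _ in range(25):
    m = build_surrogate(x, radius)
    # Cheap subproblem: best point among trust-region corners and center.
    cand = x + radius * np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)])
    step = min(cand, key=m)
    actual = black_box(x) - black_box(step)
    predicted = m(x) - m(step)
    rho = actual / predicted if predicted > 1e-12 else 0.0
    if rho > 0.1:                     # accept step, maybe expand region
        x, radius = step, radius * (2.0 if rho > 0.75 else 1.0)
    else:                             # reject step, shrink region
        radius *= 0.5
print("approx. minimizer:", x)
```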
3

Simulation based exploration of a loading strategy for a LHD-vehicle / Simuleringsbaserad utforskning av styrstrategier för frontlastare

Lindmark, Daniel January 2016 (has links)
Optimizing the loading process of a front loader vehicle is a challenging task. The design space is large and depends on the design of the vehicle, the strategy of the loading process, the nature of the material to load, etcetera. Finding an optimal loading strategy with respect to production and equipment damage would greatly improve production and reduce environmental impact in mining and construction. In this thesis, a method for exploring the design space of a loading strategy is presented. The loading strategy depends on four design variables that control the shape of the trajectory relative to the shape of the pile. The responses investigated are production, vehicle damage, and work interruptions due to rock spill. Using multi-body dynamics simulations, many different strategies can be tested at little cost. The results of these simulations are then used to build surrogate models of the original unknown function. The surrogate models are used to visualize and explore the design space and to construct Pareto fronts for the competing responses. The surrogate models were able to predict the production function from the simulations well. The damage and rock spill surrogate models were moderately good at predicting the simulations, but still good enough to explore how the design variables affect the responses. The produced Pareto fronts make it easy for the decision maker to compare sets of design variables and choose an optimal design for the loading strategy.
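A rough sketch of the Pareto-front construction described above: evaluate surrogate predictions for the competing responses over many sampled design variables and keep the non-dominated designs. The two response functions below are invented stand-ins for surrogates trained on multibody-dynamics simulations.

```python
# Sketch: non-dominated sorting over surrogate predictions to trade
# production against vehicle damage. Responses here are illustrative.
import numpy as np

rng = np.random.default_rng(2)
designs = rng.uniform(0.0, 1.0, size=(500, 4))   # 4 trajectory design variables

production = designs @ [0.5, 0.3, 0.1, 0.1] + 0.05 * rng.standard_normal(500)
damage = (1.0 - designs[:, 0]) ** 2 + designs[:, 1] * designs[:, 2]

# Maximize production, minimize damage -> minimize (-production, damage).
objs = np.column_stack([-production, damage])

def pareto_mask(objs):
    """True for points not dominated by any other point (all-minimize)."""
    n = len(objs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = (np.all(objs <= objs[i], axis=1)
                     & np.any(objs < objs[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask

front = designs[pareto_mask(objs)]
print(f"{len(front)} non-dominated loading strategies out of {len(designs)}")
```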
4

Using surrogate models to analyze the impact of geometry on the energy efficiency of buildings

Bhatta, Bhumika 22 December 2021 (has links)
In recent times, data-driven approaches to parametrically optimize and explore building geometry have proven to be a powerful tool that can replace computationally expensive and time-consuming simulations for energy prediction in the early design process. In this research, we explore the use of surrogate models, i.e. efficient statistical approximations of expensive physics-based building simulation models, to lower the computational burden of large-scale building geometry analysis. We try different approaches and techniques to train a machine learning model using multiple datasets to analyze the impact of geometry and envelope features on the energy efficiency of buildings. These contributions are presented in the form of two conference papers and one journal paper (being prepared for submission) that iteratively build up the underlying methodology. The first conference paper contains preliminary experiments using 4 manually generated building geometries for office buildings. Data were generated by simulating various building samples in EnergyPlus for different geometries. We used the generated data to train a machine learning model using support vector regression. We trained two separate models for predicting heating and cooling loads. The lesson learned from this first experiment was that the predictive accuracy of the models was poor, due to insufficient geometric features to explain the variability in geometry and a lack of sufficient data for varied geometries. The second conference paper developed a novel dataset of 38,000 building energy models for varied geometry using 2D images of real-world residences. We developed a workflow in the Grasshopper/Rhino environment which can convert 2D images of a floor plan into a vector format and then into a building energy model ready to be simulated in EnergyPlus. The workflow can also extract up to 20 geometric features from the model, to be used as features in the machine learning process. We used these features and the simulation results to train a neural network-based surrogate model. A sensitivity analysis was performed to understand the impact and importance of each feature on the energy use of the building. From the results of the experiment, we found that off-the-shelf neural network-based surrogates provided with engineered features can emulate the desired simulation outputs very well. We also repeated the experiment for 6 different climatic zones across Canada to understand the impact of geometric features across various climates; these findings are presented in an appendix. In the journal paper, we explored two different methodologies to train surrogate models: monolithic and component-based. We explored the component-based modeling technique as it allows the model to be more versatile if we need to add more components to it, ultimately increasing the usability of the model. We conducted further experiments by adding complexity to the geometry surrogate model. We introduced 10 envelope features as an input to the surrogate along with the 20 geometric features. We trained 6 different surrogate models using different datasets by varying geometric and envelope features. From the results of the experiment, we found that the monolithic model performs best, but the component-based surrogate also falls into an acceptable range of accuracy.
From the overall results across the three papers, we see that simple neural network-based surrogate models perform very well at emulating simulation outcomes over a wide variety of geometries and envelope features. / Graduate
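As a sketch of the first paper's setup — separate support vector regression surrogates for heating and cooling loads trained on simulation samples — the following uses synthetic stand-in data in place of EnergyPlus outputs; the four feature names and load formulas are illustrative.

```python
# Sketch: two SVR surrogates (heating, cooling) on synthetic stand-in data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Features: floor area, window-to-wall ratio, aspect ratio, no. of floors.
X = rng.uniform([50, 0.1, 0.5, 1], [2000, 0.9, 3.0, 10], size=(800, 4))
heating = 0.04 * X[:, 0] * (1 + X[:, 1]) / X[:, 3] + rng.normal(0, 1, 800)
cooling = 0.03 * X[:, 0] * X[:, 1] * X[:, 2] + rng.normal(0, 1, 800)

X_tr, X_te, h_tr, h_te, c_tr, c_te = train_test_split(
    X, heating, cooling, random_state=0)

heat_model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X_tr, h_tr)
cool_model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X_tr, c_tr)

print("heating R^2:", round(heat_model.score(X_te, h_te), 3))
print("cooling R^2:", round(cool_model.score(X_te, c_te), 3))
```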
5

Síntese ótima de processos não isotérmicos de tratamento de efluentes. / Optimal synthesis of non-isothermal wastewater treatment processes.

Graciano, José Eduardo Alves 24 February 2012 (has links)
Process synthesis methodologies based on superstructure optimization are more powerful than heuristics-based design approaches, since they take into account the many possible ways of constructing the process. It is common practice to represent the equipment within these superstructures with simplified models (such as yield models), because using phenomenological models would be impractical due to the computational times involved and the technical difficulty of programming the unit modules and thermodynamic property packages of a commercial simulator within an optimization platform such as GAMS. Highly accurate surrogate (response surface) models that do not require long computational times can approximate phenomenological models and be introduced into synthesis problems, yielding more precise solutions. In this work, surrogate models were built for two processes commonly found in effluent treatment systems of petroleum refineries: a cooling tower and a steam stripper. The data used to fit the surrogate models were generated by simulating semi-phenomenological or phenomenological models of these units.
An original methodology for generating surrogate models is proposed: unlike previous works, which use black-box surrogate models, a gray-box surrogate model is employed to represent the stripper column, which improves the model's ability to fit the data set and results in smaller prediction errors. Two classic types of surrogate model were used inside the gray-box models: a neural network based model and a polynomial model. Both proved capable of representing the phenomenological model with small errors while allowing the synthesis problem to be solved in a reasonable computational time (on the order of 10 seconds). It is also noted that several locally optimal solutions are obtained; these differ slightly depending on the approach used, but qualitatively correspond to the same set of solutions.
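The gray-box idea described above can be sketched as a simplified physical relation plus a data-driven correction fitted to the residual against the rigorous model. The "stripper" physics below is a made-up stand-in, as are all names; the polynomial correction mirrors one of the two classic surrogate types mentioned.

```python
# Sketch: gray-box surrogate = cheap physics + fitted polynomial residual.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

def rigorous_model(feed, steam):
    """Stand-in for the phenomenological stripper simulation."""
    return 1.0 - np.exp(-2.0 * steam / feed) + 0.05 * np.sin(feed * steam)

def simplified_model(feed, steam):
    """Cheap physics retained inside the gray-box surrogate."""
    return 1.0 - np.exp(-2.0 * steam / feed)

feed = rng.uniform(1.0, 5.0, 200)
steam = rng.uniform(0.5, 3.0, 200)
X = np.column_stack([feed, steam])
residual = rigorous_model(feed, steam) - simplified_model(feed, steam)

correction = make_pipeline(PolynomialFeatures(3), LinearRegression())
correction.fit(X, residual)

def gray_box(feed, steam):
    return simplified_model(feed, steam) + correction.predict([[feed, steam]])[0]

print("gray-box:", gray_box(3.0, 1.5), "rigorous:", rigorous_model(3.0, 1.5))
```

Because the correction only has to capture the residual rather than the whole response, it typically fits the data set better than a pure black-box surrogate of the same complexity, which is the advantage the abstract claims.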
6

RELIABILITY AND RISK ASSESSMENT OF NETWORKED URBAN INFRASTRUCTURE SYSTEMS UNDER NATURAL HAZARDS

Rokneddin, Keivan 16 September 2013 (has links)
Modern societies increasingly depend on the reliable functioning of urban infrastructure systems in the aftermath of natural disasters such as hurricane and earthquake events. Apart from a sizable capital for maintenance and expansion, the reliable performance of infrastructure systems under extreme hazards also requires strategic planning and effective resource assignment. Hence, efficient system reliability and risk assessment methods are needed to provide insights to system stakeholders to understand infrastructure performance under different hazard scenarios and accordingly make informed decisions in response to them. Moreover, efficient assignment of limited financial and human resources for maintenance and retrofit actions requires new methods to identify critical system components under extreme events. Infrastructure systems such as highway bridge networks are spatially distributed systems with many linked components. Therefore, network models describing them as mathematical graphs with nodes and links naturally apply to study their performance. Owing to their complex topology, general system reliability methods are ineffective for evaluating the reliability of large infrastructure systems. This research develops computationally efficient methods such as a modified Markov Chain Monte Carlo simulation algorithm for network reliability, and proposes a network reliability framework (BRAN: Bridge Reliability Assessment in Networks) that is applicable to large and complex highway bridge systems. Since the responses of system components to hazard scenario events are often correlated, the BRAN framework enables accounting for correlated component failure probabilities stemming from different correlation sources. Failure correlations from non-hazard sources are particularly emphasized, as they potentially have a significant impact on network reliability estimates, and yet they have often been ignored or only partially considered in the literature of infrastructure system reliability. The developed network reliability framework is also used for probabilistic risk assessment, where network reliability is assigned as the network performance metric. Risk analysis studies may require a prohibitively large number of simulations for large and complex infrastructure systems, as they involve evaluating the network reliability for multiple hazard scenarios. This thesis addresses this challenge by developing network surrogate models with statistical learning tools such as random forests. The surrogate models can replace network reliability simulations in a risk analysis framework, and significantly reduce computation times. Therefore, the proposed approach provides an alternative to the established methods to enhance the computational efficiency of risk assessments, by developing a surrogate model of the complex system at hand rather than reducing the number of analyzed hazard scenarios by either hazard consistent scenario generation or importance sampling. Nevertheless, the application of surrogate models can be combined with scenario reduction methods to further improve analysis efficiency. To address the problem of prioritizing system components for maintenance and retrofit actions, two advanced metrics are developed in this research to rank the criticality of system components.
Both developed metrics combine system component fragilities with the topological characteristics of the network, and provide rankings that are either conditioned on specific hazard scenarios or probabilistic, based on the preference of infrastructure system stakeholders. In either form, they offer enhanced efficiency and practical applicability compared to existing methods. The developed frameworks for network reliability evaluation, risk assessment, and component prioritization are intended to address important gaps in state-of-the-art management and planning for infrastructure systems under natural hazards. Their application can enhance public safety by informing the decision-making process for expansion, maintenance, and retrofit actions for infrastructure systems.
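The surrogate step described above — replacing repeated network-reliability simulations with a statistical learning model over hazard scenarios — might look roughly like this. The mc_reliability stand-in and its scenario parameters (magnitude, distance, correlation level) are invented for illustration and do not reproduce the BRAN framework.

```python
# Sketch: random-forest surrogate mapping hazard scenarios to reliability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def mc_reliability(scenario):
    """Pretend Monte Carlo estimate of network connectivity reliability."""
    magnitude, distance, corr = scenario
    p_fail = 1.0 / (1.0 + np.exp(-(magnitude - 6.5) + 0.05 * distance))
    return (1.0 - p_fail) ** (1.0 + 2.0 * corr)  # correlation hurts reliability

scenarios = rng.uniform([5.0, 1.0, 0.0], [8.0, 100.0, 1.0], size=(1000, 3))
reliability = np.array([mc_reliability(s) for s in scenarios])

X_tr, X_te, y_tr, y_te = train_test_split(scenarios, reliability, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(forest.score(X_te, y_te), 3))
```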
7

A Hybrid Optimization Scheme for Helicopters with Composite Rotor Blades

Ku, Jieun 18 May 2007 (has links)
Rotorcraft optimization is a challenging problem due to its conflicting requirements among many disciplines and highly coupled design variables affecting the overall design. Also, the design process for a composite rotor blade is often ambiguous because of its vast design space. Furthermore, analytical tools do not produce acceptable results compared with flight test when it comes to aerodynamics and aeroelasticity unless realistic models are used, which leads to excessive computer time per iteration. To meet these requirements, computationally efficient yet realistic tools for rotorcraft analysis, such as VABS and DYMORE, were used as analysis tools. These tools decompose a three-dimensional problem into a two-dimensional cross-sectional analysis and a one-dimensional beam analysis. Also, to eliminate human interaction between iterations, a previously developed VABS-ANSYS macro was modified and automated. The automated tool shortened the computer time needed to generate the VABS input file for each analysis from hours to seconds. MATLAB was used as the wrapper tool to integrate VABS, DYMORE and the VABS-ANSYS macro into the methodology. This methodology uses genetic algorithms and gradient-based methods as optimization schemes. The baseline model is the rotor system of the generic Georgia Tech Helicopter (GTH), which is a three-bladed, soft-in-plane, bearingless rotor system. The resulting methodology is a two-level optimization, global and local. Previous studies showed that when stiffnesses are used as design variables in optimization, these values act as if they are independent and produce design requirements that cannot be achieved by local-level optimization. To force design variables at the global level to stay within the feasible design space of the local level, a surrogate model was adapted into the methodology. For the surrogate model, different "design of experiments" (DOE) methods were tested to find the most computationally efficient DOE method. The response surface method (RSM) and Kriging were tested for the optimization problem. The results show that using the surrogate model speeds up the optimization process and that the Kriging model shows superior performance over RSM models. As a result, the global-level optimizer produces requirements that the local optimizer can achieve.
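The Kriging-versus-RSM comparison mentioned above amounts to fitting both surrogate types to the same design of experiments and comparing held-out accuracy; a minimal sketch follows, with a made-up "blade stiffness" response standing in for the VABS/DYMORE outputs.

```python
# Sketch: compare a Kriging (Gaussian process) surrogate with a quadratic
# response surface model on the same sampled design of experiments.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.uniform(0.0, 1.0, size=(120, 3))                  # e.g. ply parameters
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2 + 0.3 * X[:, 2]    # "stiffness" response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

kriging = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.3) + WhiteKernel(1e-6), normalize_y=True)
rsm = make_pipeline(PolynomialFeatures(2), LinearRegression())

for name, model in [("Kriging", kriging), ("quadratic RSM", rsm)]:
    model.fit(X_tr, y_tr)
    print(name, "held-out R^2:", round(model.score(X_te, y_te), 3))
```

On nonlinear responses like this one, the interpolating Kriging model usually scores higher than the fixed-order polynomial, consistent with the finding reported in the abstract.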
8

Improved accuracy of surrogate models using output postprocessing

Andersson, Daniel January 2007 (has links)
Using surrogate approximations (e.g. Kriging interpolation or artificial neural networks) is an established technique for decreasing the execution time of simulation optimization problems. However, constructing surrogate approximations can be impossible when facing complex simulation inputs, and instead one is forced to use a surrogate model, which explicitly attempts to simulate the inner workings of the underlying simulation model. This dissertation has investigated whether postprocessing the output of a surrogate model with an artificial neural network can increase its accuracy and value in simulation optimization problems. Results indicate that the technique has potential: when output postprocessing was enabled, the accuracy of the surrogate model increased, i.e. its output more closely matched the output of the real simulation model. No apparent improvement in optimization performance could be observed, however. It was speculated that this was due to either the optimization algorithm used not taking advantage of the improved accuracy of the surrogate model, or the fact that the improved accuracy of the surrogate model was too small to make any measurable impact. Further investigation of these issues must be conducted in order to get a better understanding of the pros and cons of the technique.
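The postprocessing technique investigated here can be sketched as training a small neural network to map (input, surrogate output) pairs to the true simulation output, correcting systematic surrogate error. Both "models" below are illustrative stand-ins, not the dissertation's actual simulation.

```python
# Sketch: neural-network postprocessing of a biased surrogate's output.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

def real_simulation(x):
    return np.sin(3 * x) + 0.2 * x ** 2

def crude_surrogate(x):
    return np.sin(3 * x) * 0.7 + 0.3          # systematically biased

x = rng.uniform(-2, 2, 400)
y_surr = crude_surrogate(x)
y_true = real_simulation(x)

# Postprocessor maps (input, surrogate output) -> true output.
post = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
post.fit(np.column_stack([x, y_surr]), y_true)

x_new = np.array([0.5])
raw = crude_surrogate(x_new)
corrected = post.predict(np.column_stack([x_new, raw]))
print("raw error:", abs(raw - real_simulation(x_new))[0])
print("corrected error:", abs(corrected - real_simulation(x_new))[0])
```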
9

Development of optimization methods to solve computationally expensive problems

Isaacs, Amitay, Engineering & Information Technology, Australian Defence Force Academy, UNSW January 2009 (has links)
Evolutionary algorithms (EAs) are population based heuristic optimization methods used to solve single and multi-objective optimization problems. They can simultaneously search multiple regions to find global optimum solutions. As EAs do not require gradient information for the search, they can be applied to optimization problems involving functions of real, integer, or discrete variables. One of the drawbacks of EAs is that they require evaluations of numerous candidate solutions for convergence. Most real life engineering design optimization problems involve highly nonlinear objective and constraint functions arising out of computationally expensive simulations. For such problems, the computational cost of optimization using EAs can become quite prohibitive. This has stimulated the research into improving the efficiency of EAs reported herein. In this thesis, two major improvements are suggested for EAs. The first is the use of spatial surrogate models to replace the expensive simulations for the evaluation of candidate solutions; the other is a novel constraint handling technique. These modifications to EAs are tested on a number of numerical benchmarks and engineering examples using a fixed number of evaluations, and the results are compared with a basic EA. In addition, the spatial surrogates are used in a truss design application. A generic framework for using spatial surrogate modeling is proposed. Multiple types of surrogate models are used for better approximation performance, and a prediction accuracy based validation is used to ensure that the approximations do not misguide the evolutionary search. Two EAs are proposed using spatial surrogate models for evaluation and evolution. For numerical benchmarks, the spatial surrogate assisted EAs obtain significantly better (even orders of magnitude better) results than the basic EA, and on average 5-20% improvements in the objective value are observed for engineering examples. Most EAs use constraint handling schemes that prefer feasible solutions over infeasible solutions. In the proposed infeasibility driven evolutionary algorithm (IDEA), a few infeasible solutions are maintained in the population to augment the evolutionary search through the infeasible regions along with the feasible regions to accelerate convergence. The studies on single and multi-objective test problems demonstrate the faster convergence of IDEA over EA. In addition, the infeasible solutions in the population can be used for trade-off studies. Finally, a discrete structures optimization (DSO) algorithm is proposed for sizing and topology optimization of trusses. In DSO, topology optimization and sizing optimization are separated to speed up the search for the optimum design. The optimum topology is identified using a strain energy based material removal procedure. The topology optimization process correctly identifies the optimum topology for 2-D and 3-D trusses using fewer than 200 function evaluations. The sizing optimization is performed later to find the optimum cross-sectional areas of structural elements. In surrogate assisted DSO (SDSO), spatial surrogates are used to accelerate the sizing optimization. The truss designs obtained using SDSO are very close (within 7% of the weight) to the best reported in the literature using only a fraction of the function evaluations (less than 7%).
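The IDEA concept — retaining a few low-violation infeasible solutions in the population so the search approaches constraint boundaries from both sides — can be sketched with a toy constrained problem; the selection scheme below is deliberately simplified and is not the thesis's algorithm.

```python
# Sketch: an EA that keeps a small quota of the best infeasible solutions,
# in the spirit of IDEA, on a toy 2-D constrained minimization problem.
import numpy as np

rng = np.random.default_rng(8)

def objective(x):
    return (x[:, 0] - 2.0) ** 2 + (x[:, 1] - 2.0) ** 2

def violation(x):                      # constraint: x0 + x1 <= 3
    return np.maximum(0.0, x[:, 0] + x[:, 1] - 3.0)

pop = rng.uniform(0.0, 4.0, size=(40, 2))
for gen in range(100):
    children = np.clip(pop + rng.normal(0.0, 0.2, pop.shape), 0.0, 4.0)
    combined = np.vstack([pop, children])
    f, v = objective(combined), violation(combined)
    feasible, infeasible = np.where(v == 0)[0], np.where(v > 0)[0]
    # Keep mostly best feasible, plus a few low-violation infeasible points.
    keep_feas = feasible[np.argsort(f[feasible])][:32]
    keep_inf = infeasible[np.argsort(v[infeasible])][:8]
    pop = combined[np.concatenate([keep_feas, keep_inf])][:40]

best = pop[np.argmin(objective(pop) + 1e6 * violation(pop))]
print("best feasible point:", best, "objective:", objective(best[None])[0])
```

The retained infeasible quota keeps selection pressure on the constraint boundary, where the constrained optimum of this toy problem lies, which is the mechanism the abstract credits for IDEA's faster convergence.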
