131
Finite Element Analysis and Genetic Algorithm Optimization Design for the Actuator Placement on a Large Adaptive Structure
Sheng, Lizeng, 29 December 2004
The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures: the development and application of finite element analysis and genetic algorithms for the optimal design of large-scale adaptive structures. We first review some basic concepts in the finite element method and genetic algorithms, along with the research on smart structures. We then propose a solution methodology for a critical problem in the design of the next generation of large-scale adaptive structures: the optimal placement of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to deriving a finite element formulation, the dissertation presents techniques for general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element, and a full derivation of the DKT element is presented. A geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under combined thermal and electrical loads. Next, we solve the optimization problem of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining a single set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under any one of the four thermal loads. A series of genetic algorithms, GA Versions 1, 2, and 3, were developed to find the optimal locations of piezoelectric actuators from on the order of 10^21 to 10^56 candidate placements. Introducing a variable-population approach, we improve the flexibility of the selection operation in genetic algorithms. Incorporating mutation and hill climbing into micro-genetic algorithms, we develop a more efficient genetic algorithm. Through extensive numerical experiments, we find that the design search space for the optimal placement of a large number of actuators is highly multi-modal and that the most distinctive feature of genetic algorithms is their robustness: they give results that are random but vary only slightly. Genetic algorithms can obtain an adequate solution with a limited number of evaluations; to obtain the highest-quality solution, multiple runs with different random seeds are necessary. The investigation time can be significantly reduced using very coarse-grained parallel computing. Overall, the methodology of combining finite element analysis with genetic algorithm optimization provides a robust solution approach to the challenging problem of optimally placing a large number of actuators in the design of the next generation of adaptive structures. / Ph.D.
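As an illustration of the combinatorial search described above, the sketch below applies a small elitist genetic algorithm with mutation and hill climbing to the selection of k actuator sites from a large candidate set. It is only a schematic of the approach, not the dissertation's code: the function evaluate_rms_deformation is a made-up synthetic objective standing in for the shell finite element thermal analysis.

    import math
    import random

    def evaluate_rms_deformation(sites):
        # Hypothetical synthetic objective standing in for the shell finite
        # element thermal analysis: rewards "effective", well-spread sites.
        gain = sum(math.sin(0.37 * s) ** 2 for s in sites)
        spread = min(abs(a - b) for a in sites for b in sites if a != b)
        return 1.0 / (1.0 + gain) + 0.1 / (1.0 + spread)

    def random_individual(n_candidates, k):
        return sorted(random.sample(range(n_candidates), k))

    def crossover_and_mutate(a, b, n_candidates, k, mutation_rate):
        pool = sorted(set(a) | set(b))                 # union of the parents' sites
        child = sorted(random.sample(pool, k))         # child inherits k sites from parents
        if random.random() < mutation_rate:            # mutation: swap one site at random
            child[random.randrange(k)] = random.randrange(n_candidates)
            child = sorted(set(child))
            while len(child) < k:                      # repair to exactly k distinct sites
                child.append(random.randrange(n_candidates))
                child = sorted(set(child))
        return child

    def micro_ga(n_candidates=500, k=20, pop_size=5, generations=300, mutation_rate=0.1):
        pop = [random_individual(n_candidates, k) for _ in range(pop_size)]
        best = min(pop, key=evaluate_rms_deformation)
        for _ in range(generations):
            parents = sorted(pop, key=evaluate_rms_deformation)[:2]
            # Elitist micro-population: keep the best, regenerate the rest.
            pop = [list(best)] + [crossover_and_mutate(parents[0], parents[1],
                                                       n_candidates, k, mutation_rate)
                                  for _ in range(pop_size - 1)]
            best = min(pop, key=evaluate_rms_deformation)
        # Hill climbing: accept single-site exchanges that improve the best design.
        for _ in range(200):
            trial = list(best)
            trial[random.randrange(k)] = random.randrange(n_candidates)
            trial = sorted(set(trial))
            if len(trial) == k and evaluate_rms_deformation(trial) < evaluate_rms_deformation(best):
                best = trial
        return best, evaluate_rms_deformation(best)

    if __name__ == "__main__":
        sites, score = micro_ga()
        print(f"selected {len(sites)} actuator sites, synthetic objective = {score:.4f}")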
132
Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission
Naidu, Siddhesh Ajay, 28 April 2024
<p dir="ltr">Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the criticality of their failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, a MDA model is developed for an Earth-Mars mission by employing the general mission analysis tool (GMAT) to model the mission trajectory and rocket propulsion analysis (RPA) to design the engines. By utilizing this direct MDA model, the deterministic optimization (DO) of the system is performed first and yields a design that completed the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date for a 20-day window and the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instance in the Earth inertial frame. The amount of fuel consumed during the TCM ranges from 1 to 250 kg, while during the MOI, the amount of fuel consumed ranges from 350 to 810 kg. The usage of the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of utilizing surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian processes (GP) models trained on data from the MDA model were implemented into the UQ framework and their results were compared to those of the direct MDA method. When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time. When compared to the direct MDA, the time of flight range matched well. While the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to perform reliability-based design optimization (RBDO) feasibly for the spacecraft and trajectory system. For the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity for performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and also resulted in a reduced time of flight by 0.55 days.</p>
133
INKJET PRINTING: FACING CHALLENGES AND ITS NEW APPLICATIONS IN COATING INDUSTRY
Poozesh, Sadegh, 01 January 2015
This study is devoted to some of the most important issues in advancing inkjet printing for possible application in the coating industry, with a focus on piezoelectric drop-on-demand (DOD) inkjet technology. Current problems, namely liquid filament breakup, satellite droplet formation, and the reduction of droplet sizes, are discussed and potential solutions identified. For satellite droplets, it is shown that liquid filament breakup behavior can be predicted using a combination of two dimensionless groups: the Weber number (We) and the Ohnesorge number (Oh), or equivalently the Reynolds number (Re) and the Weber number (We). All of these depend only on the ejected liquid properties and the velocity waveform at the print-head inlet. These new criteria are shown to have merit in comparison to currently used criteria for identifying filament physical features, such as length and diameter, that control the formation of subsequent droplets. In addition, this study performs scaling analyses for the design and operation of inkjet print heads. Because droplet sizes from inkjet nozzles are typically on the order of the nozzle dimensions, a numerical simulation is carried out to provide insight into how to reduce droplet sizes by employing a novel input waveform impressed on the print-head liquid inflow, without changing the nozzle geometry. A regime map for characterizing the generation of small droplets based on We and a non-dimensional frequency Ω is proposed and discussed. In an attempt to advance inkjet printing technology for coating purposes, a prototype was designed and then tested numerically. The numerical simulation showed that the proposed prototype could be useful for coating purposes by repeatedly producing mono-dispersed droplets with controllable size and spacing. Finally, the influence of two independent piezoelectric characteristics, the maximum head displacement and the corresponding frequency, was investigated to examine filament breakup quality, and favorable piezoelectric displacements and frequencies were identified.
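For reference, the dimensionless groups named above follow directly from the ejected liquid properties and jetting conditions. The sketch below uses illustrative, water-like property values, not data from the study.

    import math

    def jetting_numbers(rho, mu, sigma, v, d):
        """Weber, Reynolds, and Ohnesorge numbers for a liquid filament of diameter d.

        rho   : density [kg/m^3]
        mu    : dynamic viscosity [Pa s]
        sigma : surface tension [N/m]
        v     : jetting velocity [m/s]
        d     : nozzle (filament) diameter [m]
        """
        we = rho * v**2 * d / sigma            # inertia vs. surface tension
        re = rho * v * d / mu                  # inertia vs. viscosity
        oh = mu / math.sqrt(rho * sigma * d)   # equivalently sqrt(We) / Re
        return we, re, oh

    # Illustrative values only: roughly water through a 50-micron nozzle at 5 m/s.
    we, re, oh = jetting_numbers(rho=1000.0, mu=1.0e-3, sigma=0.072, v=5.0, d=50e-6)
    print(f"We = {we:.2f}, Re = {re:.1f}, Oh = {oh:.4f}")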
134
Multi-criteria analysis in naval ship design
Anil, Kivanc A., 03 1900
Approved for public release; distribution is unlimited. / Numerous optimization problems involve systems with multiple and often contradictory criteria. Such contradictory criteria have been an issue in marine and naval engineering design studies for many years, and the problem becomes more important when one considers novel ship types with very limited or no operational record. A number of approaches have been proposed to overcome these multiple-criteria design optimization problems. This thesis follows the Parameter Space Investigation (PSI) technique to address them. The PSI method is implemented with a software package called MOVI (Multi-criteria Optimization and Vector Identification). Two marine/naval engineering design optimization models were investigated using the PSI technique along with the MOVI software. The first example was a bulk carrier design model that had previously been studied with other optimization methods. This model, selected for its relatively small dimensionality and the availability of existing studies, was used to demonstrate and validate the features of the proposed approach. A more realistic example was based on the "MIT Functional Ship Design Synthesis Model," with a greater number of parameters, criteria, and functional constraints. A series of optimization studies conducted for this model demonstrated that the proposed approach can be implemented in a naval ship design environment and can lead to a broad exploration of the design parameter space with minimal computational effort. / Lieutenant Junior Grade, Turkish Navy
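A minimal sketch of the Parameter Space Investigation idea, assuming SciPy's quasi-Monte Carlo module: the parameter space is probed with a uniform Sobol sequence, designs violating functional constraints are discarded, and the non-dominated feasible designs are kept. The two-parameter objective and constraint functions are purely illustrative stand-ins, not the bulk carrier or MIT synthesis models.

    import numpy as np
    from scipy.stats import qmc

    # Illustrative two-parameter, two-criteria model (not the thesis models):
    # x = (length [m], block coefficient); minimize cost and maximize deadweight.
    def criteria(x):
        length, cb = x
        cost = 0.05 * length**1.5 + 40.0 * cb          # notional acquisition cost
        deadweight = 0.8 * length * cb * 25.0          # notional deadweight [t]
        return np.array([cost, -deadweight])           # both cast as "minimize"

    def feasible(x):
        length, cb = x
        return 0.55 <= cb <= 0.85 and length <= 180.0  # notional functional constraints

    # 1. Probe the parameter space with a uniform (Sobol) sequence.
    sampler = qmc.Sobol(d=2, scramble=True, seed=1)
    unit = sampler.random_base2(m=10)                  # 1024 trial points in [0, 1)^2
    points = qmc.scale(unit, l_bounds=[80.0, 0.5], u_bounds=[200.0, 0.9])

    # 2. Keep feasible designs, then extract the non-dominated (Pareto) set.
    designs = [(x, criteria(x)) for x in points if feasible(x)]
    pareto = [d for d in designs
              if not any(np.all(o[1] <= d[1]) and np.any(o[1] < d[1]) for o in designs)]
    print(f"{len(designs)} feasible designs, {len(pareto)} on the Pareto frontier")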
135
Otimização multidisciplinar em projeto de asas flexíveis / Multidisciplinary design optimization of flexible wings
Caixeta Júnior, Paulo Roberto, 23 November 2006
The aeronautical industry is always trying to speed up technological advances in order to survive in extremely competitive markets. In this scenario, the use of design tools that accelerate the development of new aircraft becomes essential, and current computational resources allow a great increase in the number of design tools that assist the work of aeronautical engineers. In essence, the design of an aircraft is a multidisciplinary task, which stimulates the development of computational tools that work with different areas at the same time. Among them, multidisciplinary design optimization (MDO) stands out, combining optimization methods with mathematical models of distinct areas of a design to find compromise solutions. The present work introduces MDO and discusses some possible applications of this methodology. An MDO system was implemented for the design of flexible wings, considering dynamic aeroelasticity constraints and the structural mass. The goal is to find ideal flexural and torsional stiffness distributions of the wing structure that maximize the critical flutter speed and minimize the structural mass. To do so, a structural dynamics model based on the finite element method, a nonstationary aerodynamic model based on strip theory and Theodorsen's two-dimensional solutions, a flutter prediction model based on the K method, and an optimizer based on a genetic algorithm (GA) were employed. Details of each model, the constraints applied, and the way the models interact with each other throughout the optimization are presented. An analysis is made to choose the GA optimization parameters, followed by the evaluation of two cases to verify the functionality of the implemented system. The results obtained demonstrate an efficient methodology, capable of searching for optimal solutions to the proposed problems, which with the right adjustments can be of great value in accelerating the development of new aircraft.
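As a small illustration of the unsteady-aerodynamics ingredient mentioned above, Theodorsen's circulation function C(k) can be evaluated from Hankel functions of the reduced frequency k (standard theory, assuming SciPy; not code from the thesis).

    from scipy.special import hankel2

    def theodorsen(k):
        """Theodorsen's circulation function C(k) for reduced frequency k > 0."""
        h0 = hankel2(0, k)                  # Hankel functions of the second kind
        h1 = hankel2(1, k)
        return h1 / (h1 + 1j * h0)

    # C(k) -> 1 in the quasi-steady limit (k -> 0) and -> 0.5 as k grows large.
    for k in (0.01, 0.1, 0.5, 2.0):
        c = theodorsen(k)
        print(f"k = {k:4.2f}: C(k) = {c.real:.3f} {c.imag:+.3f}j")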
136
Development of Safety Standards for CubeSat Propulsion Systems
Cheney, Liam Jon, 28 February 2014
The CubeSat community has begun to develop and implement propulsion systems. This movement represents a new capability which may satisfy mission needs such as orbital and constellation maintenance, formation flight, de-orbit, and even interplanetary travel. With the freedom and capability granted by propulsion systems, CubeSat providers must accept new responsibilities in proportion to the potential hazards that propulsion systems may present.
The Cal Poly CubeSat program publishes and maintains the CubeSat Design Specification (CDS). It wishes to help the CubeSat community safely and responsibly expand its capabilities to include propulsive designs. For this reason, the author embarked on the task of developing a draft set of safety standards for CubeSat propulsion systems.
Wherever possible, the standards are based on existing documents. The author provides an overview of certain concepts in systems safety with respect to the classification of hazards, the determination of required fault tolerances, and the use of inhibits to satisfy fault tolerance requirements. Hazards that could exist during ground operations and through launch are discussed with respect to hazardous materials and pressure systems. Most of the standards related to Range Safety are drawn from AFSPCMAN 91-710. Having reviewed a range of hypothetical propulsion system architectures with an engineer from Range Safety at Vandenberg Air Force Base, the author compiled a case study. Many aspects of orbital safety are then addressed: the risk of collision with the host vehicle and with third-party satellites, along with the trackability of CubeSats using propulsion systems. Some recommendations are given for working with the Joint Functional Component Command for Space (JFCC SPACE), thanks to the input of two engineers who work with the Joint Space Operations Center (JSpOC). Command security is discussed as an important aspect of a mission that implements a propulsion system. The author also discusses end-of-life procedures such as safing and de-orbit operations. The orbital safety standards are intended to promote “good citizenship.”
The author steps through each proposed standard and offers justification. The author is confident that these standards will set the stage for a dialogue in the CubeSat community which will lead to the formulation of a reasonable and comprehensive set of standards. The author hopes that the discussions given throughout this document will help CubeSat developers to visualize the path to flight readiness so that they can get started on the right foot.
137
Optimal allocation of thermodynamic irreversibility for the integrated design of propulsion and thermal management systems
Maser, Adam Charles, 13 November 2012
More electric aircraft systems, high-power avionics, and a reduction in heat sink capacity have placed a larger emphasis on correctly satisfying aircraft thermal management requirements during conceptual design. Thermal management systems must be capable of dealing with these rising heat loads while simultaneously meeting mission performance requirements. Since all subsystem power and cooling requirements are ultimately traced back to the engine, the growing interactions between the propulsion and thermal management systems are becoming more significant. As a result, it is necessary to consider their integrated performance during the conceptual design of the aircraft gas turbine engine cycle to ensure that thermal requirements are met. This can be accomplished by using thermodynamic modeling and simulation to investigate the subsystem interactions while conducting the design trades necessary to establish the engine cycle. As the foundation for this research, a parsimonious, transparent thermodynamic model of propulsion and thermal management system performance was created with a focus on capturing the physics that have the largest impact on propulsion design choices. A key aspect of this approach is the incorporation of physics-based formulations involving the concurrent use of the first and second laws of thermodynamics to achieve a clearer view of the component-level losses. This is facilitated by the direct prediction of the exergy destruction distribution throughout the integrated system and the resulting quantification of available-work losses over the time history of the mission. The characterization of the thermodynamic irreversibility distribution gives the designer an absolute and consistent view of the tradeoffs associated with the design of the system. Consequently, this leads directly to the question of the optimal allocation of irreversibility across each of the components. An irreversibility allocation approach based on the economic concept of resource allocation is demonstrated for a canonical propulsion and thermal management system architecture. By posing the problem in economic terms, exergy destruction is treated as a true common currency to barter for improved efficiency, cost, and performance. This enables the propulsion system designer to better fulfill system-level requirements and to create a system more robust to future requirements.
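As a small worked example of the exergy bookkeeping described above, the exergy destruction of each component follows from the Gouy-Stodola relation, X_dest = T0 * S_gen. The component states below are made up for illustration and are not values from the thesis.

    # Gouy-Stodola bookkeeping for a notional two-component system. Each entry lists
    # mass flow [kg/s], specific entropy change [kJ/kg-K], heat rejected [kW], and the
    # boundary temperature [K] at which that heat leaves the component.
    T0 = 288.15  # dead-state (ambient) temperature [K]

    components = {
        "compressor":     {"mdot": 20.0, "ds": 0.012,  "q_out": 0.0,   "t_b": 288.15},
        "heat_exchanger": {"mdot": 20.0, "ds": -0.030, "q_out": 650.0, "t_b": 320.0},
    }

    def exergy_destruction(c):
        # Entropy generation rate: S_gen = mdot * ds + Q_out / T_b   [kW/K]
        s_gen = c["mdot"] * c["ds"] + c["q_out"] / c["t_b"]
        return T0 * s_gen                                # X_dest = T0 * S_gen  [kW]

    total = 0.0
    for name, comp in components.items():
        x_dest = exergy_destruction(comp)
        total += x_dest
        print(f"{name:14s}: exergy destruction = {x_dest:7.1f} kW")
    print(f"{'system total':14s}: exergy destruction = {total:7.1f} kW")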
138
Enabling methods for the design and optimization of detection architectures
Payan, Alexia Paule Marie-Renee, 08 April 2013
The surveillance of geographic borders and critical infrastructures using limited sensor capability has always been a challenging task in many homeland security applications. While geographic borders may be very long and may run through isolated areas, critical assets may be large and numerous and may be located in highly populated areas. As a result, it is virtually impossible to secure each and every mile of border around the country, and each and every critical infrastructure inside the country. Most often, a compromise must be made between the percentage of border or critical asset covered by surveillance systems and the induced cost. Although threats to homeland security can take many forms, those involving illegal penetration of the air, land, and maritime domains under the cover of day-to-day activities have been identified as being of particular interest. For instance, the proliferation of drug smuggling, illegal immigration, international organized crime, resource exploitation, and, more recently, modern piracy requires the strengthening of land border and maritime awareness and leads to increasingly complex and challenging national security environments. The complexity and challenges associated with the above mission and with the protection of the homeland explain why a methodology enabling the design and optimization of distributed detection system architectures, able to provide accurate scanning of the air, land, and maritime domains in a specific geographic and climatic environment, is a primary concern for the defense and protection community. This thesis proposes a methodology aimed at addressing the aforementioned gaps and challenges. In particular, the methodology reformulates the problem in clear terms so as to facilitate the subsequent modeling and simulation of potential operational scenarios. The needs and challenges involved in the proposed study are investigated, and a detailed description of a multidisciplinary strategy for the design and optimization of detection architectures in terms of detection performance and cost is provided. This implies the creation of a framework for the modeling and simulation of notional scenarios, as well as the development of improved methods for the accurate optimization of detection architectures. More precisely, the thesis describes a new approach to determining detection architectures able to provide effective coverage of a given geographical environment at a minimum cost, by optimizing the appropriate number, types, and locations of surveillance and detection systems. The objective of the optimization is twofold. First, given the topography of the terrain under study, several promising locations are determined for each sensor system based on the percentage of terrain it covers. Second, architectures of sensor systems able to effectively cover large percentages of the terrain at minimal cost are determined by optimizing the number, types, and locations of each detection system in the architecture. To do so, a modified genetic algorithm and a modified particle swarm optimization algorithm are investigated and their ability to provide consistent results is compared. Ultimately, the modified particle swarm optimization algorithm is used to obtain a Pareto frontier of detection architectures able to satisfy varying customer preferences on coverage performance and related cost.
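The coverage-versus-cost tradeoff at the heart of this problem can be illustrated with a deliberately simple greedy baseline on a made-up terrain grid; this is not the modified genetic algorithm or particle swarm optimizer developed in the thesis, just a sketch of the bookkeeping involved.

    import numpy as np

    rng = np.random.default_rng(3)
    GRID = 50                                     # terrain discretized into 50 x 50 cells

    # Candidate sensor placements: location, detection radius [cells], and cost.
    candidates = [{"xy": rng.uniform(0, GRID, 2),
                   "radius": rng.uniform(4.0, 10.0),
                   "cost": rng.uniform(1.0, 5.0)} for _ in range(60)]

    xx, yy = np.meshgrid(np.arange(GRID), np.arange(GRID))

    def footprint(c):
        # Boolean mask of terrain cells within the sensor's detection radius.
        return (xx - c["xy"][0]) ** 2 + (yy - c["xy"][1]) ** 2 <= c["radius"] ** 2

    covered = np.zeros((GRID, GRID), dtype=bool)
    chosen, total_cost = [], 0.0
    while covered.mean() < 0.90:                  # target: 90 % terrain coverage
        remaining = [i for i in range(len(candidates)) if i not in chosen]
        if not remaining:
            break
        # Pick the remaining candidate with the best (newly covered cells) / cost ratio.
        best = max(remaining, key=lambda i: (footprint(candidates[i]) & ~covered).sum()
                                            / candidates[i]["cost"])
        if (footprint(candidates[best]) & ~covered).sum() == 0:
            break                                 # no remaining candidate adds coverage
        covered |= footprint(candidates[best])
        chosen.append(best)
        total_cost += candidates[best]["cost"]

    print(f"{len(chosen)} sensors, coverage = {covered.mean():.1%}, total cost = {total_cost:.1f}")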
139
Conceptual Design and Technical Risk Analysis of Quiet Commercial Aircraft Using Physics-Based Noise Analysis Methods
Olson, Erik Davin, 19 May 2006
An approach was developed that allows design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid tradeoff and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft effective perceived noise level (EPNL) at the Federal Aviation Regulations Part 36 certification measurement locations, using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The analysis process was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints, and takeoff and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage of using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing the physics-based analysis proved to be the generation of design geometry at a level of detail sufficient for high-fidelity analysis.
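A minimal sketch of the response-surface-plus-Monte-Carlo pattern described above, assuming NumPy only: a quadratic RSE is fit by least squares to a few runs of a hypothetical physics_noise_model placeholder (standing in for the physics-based noise analysis) and then sampled cheaply to estimate variability. The variables and numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)

    def physics_noise_model(fpr, bpr):
        # Hypothetical stand-in for the physics-based EPNL analysis [EPNdB],
        # as a function of fan pressure ratio (fpr) and bypass ratio (bpr).
        return 95.0 + 6.0 * (fpr - 1.6) ** 2 - 0.8 * bpr + 0.05 * bpr**2

    def quad_features(x):
        fpr, bpr = x[:, 0], x[:, 1]
        return np.column_stack([np.ones_like(fpr), fpr, bpr, fpr * bpr, fpr**2, bpr**2])

    # 1. Design of experiments: a few "expensive" evaluations to fit the quadratic RSE.
    X_doe = np.column_stack([rng.uniform(1.4, 1.8, 25), rng.uniform(5.0, 12.0, 25)])
    y_doe = np.array([physics_noise_model(*x) for x in X_doe])
    coef, *_ = np.linalg.lstsq(quad_features(X_doe), y_doe, rcond=None)

    # 2. Monte Carlo on the cheap RSE to estimate variability under uncertainty.
    X_mc = np.column_stack([rng.normal(1.6, 0.03, 50_000), rng.normal(8.0, 0.5, 50_000)])
    epnl = quad_features(X_mc) @ coef
    print(f"EPNL mean = {epnl.mean():.2f} EPNdB, std dev = {epnl.std():.2f} EPNdB")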
140
A Systematic Process for Adaptive Concept Exploration
Nixon, Janel Nicole, 29 November 2006
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, quantitative analyses become much more practical for decision-making in the very early stages of design, where traditionally they are viewed as too expensive and cumbersome for concept evaluation.
The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information.
The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources to generate that information. In comparison to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system while using fewer resources to generate that representation. For this reason, the SPACE method acts as an enabler for decision making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. Thus, the approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
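A minimal sketch of the sequential, adaptive-sampling idea described above, written as a generic active-learning loop with a Gaussian process (assuming scikit-learn); it is not the SPACE implementation itself. After an initial batch, each new sample is taken where the current model is least certain, so effort concentrates on the more informative portions of the design space. The function expensive_analysis is a hypothetical stand-in.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_analysis(x):
        # Hypothetical stand-in for a design analysis with localized behavior.
        return np.sin(8.0 * x) / (1.0 + 10.0 * (x - 0.7) ** 2)

    rng = np.random.default_rng(11)
    X = rng.uniform(0.0, 1.0, 5).reshape(-1, 1)           # small initial sample
    y = expensive_analysis(X).ravel()
    pool = np.linspace(0.0, 1.0, 201).reshape(-1, 1)      # candidate sample sites

    for _ in range(15):                                   # sequential adaptation
        gp = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(X, y)
        _, std = gp.predict(pool, return_std=True)
        x_new = pool[np.argmax(std)].reshape(1, -1)       # sample where the model is least certain
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_analysis(x_new).ravel())

    print(f"{len(X)} samples taken; final max predictive std = {std.max():.4f}")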