201.
Reliability-Based Design Optimization of Nonlinear Beam-Columns. Li, Zhongwei, 30 April 2018
This dissertation addresses the ultimate strength analysis of nonlinear beam-columns under axial compression, the sensitivity of the ultimate strength, structural optimization and reliability analysis using ultimate strength analysis, and Reliability-Based Design Optimization (RBDO) of nonlinear beam-columns. The ultimate strength analysis is based on nonlinear beam theory with material and geometric nonlinearities. A nonlinear constitutive law is developed for an elastic-perfectly-plastic beam cross-section consisting of a base plate and a T-bar stiffener. The analysis method is validated against commercial nonlinear finite element analysis. A new direct solving method is developed, which combines the original governing equations with their derivatives with respect to a deformation metric and solves for the ultimate strength directly. Structural optimization and reliability analysis use a gradient-based algorithm and therefore need accurate sensitivities of the ultimate strength with respect to the design variables. The semi-analytic sensitivity of the ultimate strength is calculated from a linear set of analytical sensitivity equations that reuse the Jacobian matrix of the direct solving method. The derivatives of the structural residual equations in the sensitivity equation set are calculated using the complex-step method. The semi-analytic sensitivity is more robust and efficient than finite-difference sensitivity. The design variables are the cross-sectional geometric parameters. Random variables include material properties, geometric parameters, initial deflection, and a nondeterministic load. Failure probabilities calculated by ultimate strength reliability analysis are validated by Monte Carlo simulation. A double-loop RBDO minimizes structural weight subject to a reliability index constraint. The sensitivity of the reliability index with respect to the design variables is calculated from the gradient of the limit state function at the solution of the reliability analysis.
By using the ultimate strength direct solving method, the semi-analytic sensitivity, and a gradient-based optimization algorithm, the RBDO method is found to be robust and efficient for nonlinear beam-columns. The direct solving method, semi-analytic sensitivity, structural optimization, reliability analysis, and RBDO method can be applied to more complicated engineering structures, including stiffened panels and aerospace/ocean structures. / Ph. D. / This dissertation presents a Reliability-Based Design Optimization (RBDO) procedure for nonlinear beam-columns. The beam-column cross-section has an asymmetric I shape, and the nonlinear material model allows plastic deformation. Structural optimization minimizes the structural weight while maintaining an ultimate strength level, i.e., the maximum load the member can carry. In reality, the geometric parameters and material properties of the beam-column vary from their design values, and these uncertain variations affect the strength of the structure. Structural reliability analysis accounts for these uncertainties in structural design, and the reliability index is a measure of the structure's probability of failure that takes them into account. RBDO minimizes the structural weight while maintaining the reliability level of the beam-column. A novel numerical method is presented that solves an explicit set of equations to obtain the maximum strength of the beam-column directly. By using this method, the RBDO procedure is found to be efficient and robust.
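The complex-step method mentioned in this abstract is a generic numerical trick that can be sketched independently of the beam-column code. The function below is an illustration of the technique, not the dissertation's implementation:

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """df/dx via the complex-step method: Im(f(x + i*h)) / h.
    Unlike finite differences there is no subtractive cancellation,
    so h can be tiny and the result is accurate to machine precision."""
    return f(complex(x, h)).imag / h

# Example: f(x) = x * exp(x), whose exact derivative is (1 + x) * exp(x).
f = lambda x: x * cmath.exp(x)
d_numeric = complex_step_derivative(f, 1.0)
d_exact = 2.0 * cmath.e  # (1 + 1) * exp(1)
```

Applied to the residual equations of a structural solver, the same one-line formula yields the derivative terms that enter the semi-analytic sensitivity system without the step-size tuning that finite differences require.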
202.
Improving Aerospace System Robustness: Integration of Singular Value Decomposition and Network Centrality. Contreras, Ricardo S., 01 June 2024
This thesis presents an approach to understanding and enhancing the robustness of aerospace systems through the integration of network analysis, the Design Structure Matrix (DSM), Singular Value Decomposition (SVD), and robustness simulation techniques. This methodology, called the Importance Hierarchy method, provides a comprehensive framework for decomposing the interactions of aerospace systems, focusing on identifying and evaluating critical components and interactions.
The Importance Hierarchy method was used to study the Autonomous Flight Termination Unit (AFTU) by analyzing its components and interdependencies. The AFTU was selected because it had recently completed a Preliminary Design Review (PDR), and its design and operational functions were well known. During the PDR phase, the GPS circuit card assembly was found to be vulnerable to the extreme vibration environment. The network analysis in this research likewise identified the GPS unit as a critical component, owing to its pivotal role and interdependencies with other components. Had this analysis been performed during the PDR phase, it could therefore have saved time and money.
The findings from this research can contribute significantly to the aerospace industry by providing a robust framework for strategic decision-making, visually communicating complex system dynamics, and understanding the robustness of aerospace systems. This approach is instrumental in mitigating risks and ensuring aerospace systems' long-term reliability and safety.
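As a hedged illustration of the core idea (the component list and DSM below are invented, not the AFTU's), the leading singular vector of a symmetric DSM adjacency matrix ranks components by connectivity importance, flagging the hub much as the thesis flags the GPS card:

```python
import numpy as np

# Hypothetical 5-component system: component 0 ("GPS card") interacts with
# every other component, and components 1 and 2 also interact with each other.
labels = ["GPS card", "Power", "RF", "Logic", "Chassis"]
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
], dtype=float)

# For a symmetric DSM the leading left singular vector coincides (up to sign)
# with the principal eigenvector, i.e. eigenvector centrality.
U, s, Vt = np.linalg.svd(A)
scores = np.abs(U[:, 0])
ranking = sorted(zip(labels, scores), key=lambda t: -t[1])
```

The most heavily interconnected component rises to the top of `ranking`; on a real DSM this ordering would seed the Importance Hierarchy analysis described above.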
203.
Conducted EMI Noise Prediction and Filter Design Optimization. Wang, Zijian, 04 October 2016
The power factor correction (PFC) converter is a type of switching mode power supply (SMPS) widely used in offline front-end converters for distributed power systems to reduce grid harmonic distortion. With the fast development of information technology and multimedia systems, high frequency PFC power supplies for servers, desktops, laptops, flat-panel TVs, etc. are required for more efficient power delivery within limited spaces. Therefore, the critical conduction mode (CRM) PFC converter has become more and more popular for these information technology applications, due to its inherent zero-voltage soft switching (ZVS) and negligible diode reverse recovery. With the emergence of high voltage GaN devices, achieving soft switching for high frequency PFC converters is a top priority, and the trend of adopting the CRM PFC converter is becoming clearer.
However, stringent electromagnetic interference (EMI) regulations apply worldwide, and the CRM PFC converter faces several challenges in meeting the EMI standards. First, its switching frequency varies during the half line cycle and has a very wide range that depends on the AC line RMS voltage and the load. This makes it unlike the traditional constant-frequency PFC converter, so the knowledge and experience of the EMI characteristics of the traditional constant-frequency PFC converter cannot be directly applied to the CRM PFC converter.
Second, for the CRM PFC converter, the switching frequency also depends on the inductance of the boost inductor. This means the EMI spectrum of the CRM PFC converter is tightly related to the boost inductor selection during the design of the PFC power stage. Therefore, unlike the traditional constant-frequency PFC converter, the selection of the boost inductor is also part of the EMI filter design process, and EMI filter optimization should begin at the same time as the power stage design.
Third, since the EMI filter optimization needs to begin before the prototype of the CRM PFC converter is completed, the traditional measurement-based EMI filter design becomes much more complex and time-consuming when applied to the CRM PFC converter. Therefore, a new methodology must be developed to evaluate the EMI performance of the CRM PFC converter, simplify the EMI filter design process, and achieve EMI filter optimization.
To overcome these challenges, a novel mathematical analysis method for the variable frequency PFC converter is proposed in this dissertation. Based on the mathematical analysis, the quasi-peak EMI noise, which is specifically required in most EMI regulation standards, is investigated and accurately predicted for the first time. A complete approximate model is derived to predict the quasi-peak DM EMI noise of the CRM PFC converter, and experiments are carried out to verify the validity of the prediction. Based on the DM EMI noise prediction, a worst case analysis is carried out, and the worst DM EMI noise case over all input line and load conditions can be found to avoid overdesign of the EMI filter. Based on the discovered worst case, criteria to ease the DM EMI filter design procedure of the CRM boost PFC are given for different boost inductor selections. An optimized design procedure of the EMI filter for the front-end converter is then discussed, and experiments are carried out to verify the validity of the whole methodology. / Ph. D. / The power factor correction (PFC) converter is widely used in offline front-end converters for distributed power systems to reduce grid harmonic distortion. With the fast development of information technology and multimedia systems, high frequency PFC power supplies for servers, desktops, laptops, flat-panel TVs, etc. are required for more efficient power delivery within limited spaces. Therefore, the critical conduction mode (CRM) PFC converter has become more and more popular for these information technology applications.
However, stringent electromagnetic interference (EMI) regulations apply worldwide, and the CRM PFC converter faces many challenges in meeting the EMI standards. To overcome these challenges, a novel mathematical analysis method for the variable frequency PFC converter is proposed in this dissertation. A complete approximate model is derived to predict the quasi-peak DM EMI noise of the CRM PFC converter, and experiments are carried out to verify the validity of the prediction. Based on the DM EMI noise prediction, a worst case analysis is carried out, and based on the discovered worst case, criteria to ease the DM EMI filter design procedure of the CRM boost PFC are given for different boost inductor selections. An optimized design procedure of the EMI filter for the front-end converter is then discussed, and experiments verify the validity of the whole methodology.
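The line-cycle frequency variation at the heart of the CRM EMI challenge can be sketched with the textbook constant-on-time approximation for an ideal CRM boost PFC. The component values below are illustrative assumptions, not values from the dissertation:

```python
import math

def crm_boost_freq(theta, v_pk=325.0, v_out=400.0, L=150e-6, p_in=300.0):
    """Instantaneous switching frequency of an ideal CRM boost PFC over the
    line cycle (theta = line angle), using the constant on-time approximation:
        t_on = 2 * L * P_in / V_rms**2
        f_sw = (V_out - v_in) / (t_on * V_out)
    """
    v_rms = v_pk / math.sqrt(2)
    t_on = 2 * L * p_in / v_rms**2          # roughly constant over the cycle
    v_in = v_pk * abs(math.sin(theta))      # rectified instantaneous input
    return (v_out - v_in) / (t_on * v_out)  # Hz
```

The frequency is highest near the line zero crossing and lowest at the line peak, which is why the EMI spectrum, and hence the filter design, depends on both the operating point and the chosen inductance L.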
204.
The Effect of Reducing Cruise Altitude on the Topology and Emissions of a Commercial Transport Aircraft. McDonald, Melea E., 02 September 2010
In recent years, research has been conducted on alternative commercial transonic aircraft configurations, such as the strut-braced wing and truss-braced wing designs, in order to improve aircraft performance and reduce the impact of aircraft emissions compared to a typical cantilever wing design. Research performed by Virginia Tech in conjunction with NASA Langley Research Center shows that these alternative configurations result in a 20% or greater reduction in fuel consumption, and thus emissions. Another option to reduce the impact of emissions on the environment is to reduce the aircraft cruise altitude, where fewer nitrogen oxides are released into the atmosphere and contrail formation is less likely. The following study was performed using multidisciplinary design optimization (MDO) in ModelCenter for cantilever wing, strut-braced wing, and truss-braced wing designs, optimized for minimum takeoff gross weight at 7730 NM range and for minimum fuel weight at 7730 and 4000 NM ranges, at cruise altitudes of 25,000, 30,000, and 35,000 ft. For the longer range, both objective functions exhibit a large penalty in fuel weight and takeoff gross weight when reducing cruise altitude, due to the increased drag from the fixed fuselage. For the shorter range, there was only a slight increase in takeoff gross weight even though there was a large increase in fuel weight at decreased cruise altitudes. Thus, the benefits of reducing cruise altitude were offset by increased fuel weight. Either a two-jury truss-braced wing or a telescopic strut could be studied to reduce the fuel penalty. / Master of Science
205.
Structural Optimization and Design of a Strut-Braced Wing Aircraft. Naghshineh-Pour, Amir H., 15 December 1998
A significant improvement can be achieved in the performance of transonic transport aircraft using Multidisciplinary Design Optimization (MDO) by implementing truss-braced wing concepts in combination with other advanced technologies and novel design innovations. A considerable reduction in drag can be obtained by using a high aspect ratio wing with thin airfoil sections and tip-mounted engines. However, such wing structures could suffer from a significant weight penalty. Thus, the use of an external strut or a truss bracing is promising for weight reduction.
Due to the unconventional nature of the proposed concept, commonly available wing weight equations for transport aircraft will not be sufficiently accurate. Hence, a bending material weight calculation procedure was developed to take into account the influence of the strut upon the wing weight, and this was coupled to the Flight Optimization System (FLOPS) for total wing weight estimation. The wing bending material weight for single-strut configurations is estimated by modeling the wing structure as an idealized double-plate model using a piecewise linear load method.
Two maneuver load conditions, 2.5g and -1.0g with a factor of safety of 1.5, and a 2.0g taxi bump are considered as the critical load conditions for determining the wing bending material weight. Preliminary analyses showed that buckling of the strut under the -1.0g load condition is the critical structural challenge. To address this issue, an innovative design strategy introduces a telescoping sleeve mechanism that allows the strut to be inactive during negative-g maneuvers and active during positive-g maneuvers. Further wing weight reduction is obtained by optimizing the strut force, the strut offset length, and the wing-strut junction location. The best configuration shows a 9.2% savings in takeoff gross weight, an 18.2% savings in wing weight and a 15.4% savings in fuel weight compared to a cantilever wing counterpart. / Master of Science
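The strut buckling check that motivates the telescoping-sleeve idea can be illustrated with the classical Euler column formula. The section properties and the -1.0g strut load below are invented for illustration, not taken from the thesis:

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler buckling load of a slender column: P_cr = pi^2 * E * I / (K*L)^2,
    with K = 1 for pinned-pinned end conditions."""
    return math.pi**2 * E * I / (K * L)**2

# Hypothetical aluminum strut, 6 m long (all numbers illustrative):
E = 70e9       # Pa, Young's modulus
I = 2.0e-6     # m^4, section second moment of area
L = 6.0        # m, strut length
P_cr = euler_critical_load(E, I, L)

# Assumed compressive strut load under the -1.0g condition (illustrative):
P_compression = 9.0e5  # N
strut_buckles = P_compression > P_cr
```

When `strut_buckles` is true, stiffening the strut against buckling would add weight; letting a telescoping sleeve unload it during negative-g maneuvers sidesteps the buckling limit entirely, which is the design strategy described above.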
206.
Metamodel-based collaborative optimization framework. Zadeh, Parviz M., Toropov, V.V., Wood, Alastair S., 2009
This paper focuses on metamodel-based collaborative optimization (CO). The objective is to improve the computational efficiency of CO in order to handle multidisciplinary design optimization problems utilising high fidelity models. To address these issues, two levels of metamodel building techniques are proposed: metamodels in the disciplinary optimization are based on multi-fidelity modelling (the interaction of low and high fidelity models), and for the system level optimization a combination of a global metamodel based on the moving least squares method and a trust region strategy is introduced. The proposed method is demonstrated on a continuous fiber-reinforced composite beam test problem. Results show that the methods introduced in this paper provide an effective way of improving the computational efficiency of CO based on high fidelity simulation models.
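The moving least squares idea behind the system-level metamodel can be sketched in one dimension. The Gaussian weight width `h` and the sample data below are assumptions for illustration, not from the paper:

```python
import numpy as np

def mls_predict(X, y, x0, h=0.5):
    """Moving least squares with a linear basis [1, x] and Gaussian weights
    centred at the query point x0 (1-D for clarity). At each query a small
    weighted least-squares fit is solved, so the surrogate bends locally."""
    w = np.exp(-((X - x0) / h) ** 2)           # locality weights
    B = np.column_stack([np.ones_like(X), X])  # linear basis at the samples
    W = np.diag(w)
    # Weighted normal equations: (B^T W B) a = B^T W y
    a = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    return a[0] + a[1] * x0

X = np.linspace(0.0, 1.0, 11)
y = 2.0 * X + 1.0              # a linear "high-fidelity" response
pred = mls_predict(X, y, 0.37)
```

With a linear basis the fit reproduces a linear response exactly; on nonlinear responses the local weighting lets the metamodel track curvature better than a single global polynomial, which is the property exploited at the CO system level.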
207.
Application of analytical target cascading for engine calibration optimization problem. Kianifar, Mohammed R., Campean, Felician, 08 1900
This paper presents the development of an Analytical Target Cascading (ATC) Multidisciplinary Design Optimization (MDO) framework for a steady-state engine calibration optimization problem. The novelty of this work is the use of the ATC framework to formulate the complex multi-objective engine calibration problem, delivering a considerable enhancement over the conventional 2-stage calibration optimization approach [1]. A case study of the steady-state calibration optimization of a Gasoline Direct Injection (GDI) engine was used to analyse the calibration problem as an ATC formulation. The case study results provided useful insight into the efficiency of the ATC approach in delivering superior calibration solutions in terms of "global" system level objectives (e.g. improved fuel economy and reduced particulate emissions), while meeting "local" subsystem level requirements (such as combustion stability and exhaust gas temperature constraints). The ATC structure facilitated the articulation of engineering preference for smooth calibration maps via the ATC linking variables, with the potential to deliver important time savings for the overall calibration development process.
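The ATC coordination pattern of system-level targets and subsystem responses can be shown on a toy problem. Everything below (targets, local costs, the penalty weight) is invented for illustration and has nothing to do with the paper's engine model:

```python
# Toy analytical target cascading: the system level wants the combined
# response y1 + y2 to hit a target of 10, while each subsystem pays a local
# cost y_i**2. Consistency between targets t_i and responses y_i is enforced
# with a quadratic penalty of weight w; both levels minimize in closed form.
w = 10.0
t1 = t2 = y1 = y2 = 0.0
for _ in range(200):
    # Subsystem level: min_y  y**2 + w*(y - t)**2   ->   y = w*t / (1 + w)
    y1 = w * t1 / (1 + w)
    y2 = w * t2 / (1 + w)
    # System level: min_{t1,t2} (t1 + t2 - 10)**2 + w*((t1-y1)**2 + (t2-y2)**2)
    # Stationarity gives t1 - y1 = t2 - y2 = d and a scalar equation for the
    # target sum s = t1 + t2:
    s = (20 + w * (y1 + y2)) / (2 + w)
    d = (s - y1 - y2) / 2.0
    t1, t2 = y1 + d, y2 + d
# The iteration contracts; its fixed point is y1 = y2 = 10*w / (2 + 3*w).
```

The alternation between levels, with linking variables pulled toward consistency by the penalty, is the mechanism the paper scales up to the multi-objective engine calibration problem.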
208.
Exploring the Feasibility of the Resonance Corridor Method for Post Mission Disposal of High-LEO Constellations. Porter, Payton G., 01 June 2024
In the upcoming decade, the proliferation of high-LEO constellations is expected to exceed 20,000 objects, yet comprehensive Post Mission Disposal (PMD) strategies for these constellations are currently lacking. Given the inherent challenges of efficiently deorbiting satellites from high-LEO orbits, there is an urgent need to explore innovative approaches. Building upon insights garnered from the ReDSHIFT project and anticipating the proliferation of high-LEO constellations such as OneWeb, TeleSat, and GuoWang, this thesis investigates the potential viability of the Resonance Corridor Method for PMD. The investigation encompasses key metrics, including deorbit timelines and Δv requirements to meet regulatory standards or recommendations, with comparisons drawn against alternative methods such as perigee decrease and graveyard orbit solutions. Through this analysis, scenarios emerge where the Resonance Corridor Method demonstrates advantages, offering feasible Δv values while ensuring compliance with regulatory standards and recommendations. The findings categorize high-LEO constellation shells into specific disposal feasibility groups, providing valuable insight into how space sustainability practices can be incorporated into spacecraft design to align with evolving space debris mitigation standards. Additionally, certain altitude-inclination combinations are found to naturally align with the resonance corridor method, while others necessitate minor architectural adjustments to optimize effectiveness.
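The perigee decrease option used above as a comparison baseline has a simple impulsive estimate from the vis-viva equation. The altitudes chosen below are illustrative of a high-LEO shell, not values from the thesis:

```python
import math

MU = 398600.4418   # km^3/s^2, Earth gravitational parameter
R_E = 6378.137     # km, Earth equatorial radius

def perigee_decrease_dv(alt_circ_km, alt_perigee_km):
    """Impulsive delta-v to drop the perigee of an initially circular orbit
    with a single retrograde burn, via the vis-viva equation."""
    r = R_E + alt_circ_km
    rp = R_E + alt_perigee_km
    a = (r + rp) / 2.0                        # transfer-ellipse semi-major axis
    v_circ = math.sqrt(MU / r)                # speed on the circular orbit
    v_apo = math.sqrt(MU * (2.0 / r - 1.0 / a))  # apoapsis speed of transfer
    return v_circ - v_apo                     # km/s

# Hypothetical shell at 1200 km lowering its perigee to 200 km:
dv = perigee_decrease_dv(1200.0, 200.0)
```

Estimates like this (a few hundred m/s for high-LEO shells) set the propellant budget that a resonance corridor strategy must beat or match to be attractive.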
209.
Methods for parameterizing and exploring Pareto frontiers using barycentric coordinates. Daskilewicz, Matthew John, 08 April 2013
The research objective of this dissertation is to create and demonstrate methods for parameterizing the Pareto frontiers of continuous multi-attribute design problems using barycentric coordinates, and in doing so, to enable intuitive exploration of optimal trade spaces. This work is enabled by two observations about Pareto frontiers that have not been previously addressed in the engineering design literature. First, the observation that the mapping between non-dominated designs and Pareto efficient response vectors is a bijection almost everywhere suggests that points on the Pareto frontier can be inverted to find their corresponding design variable vectors. Second, the observation that certain common classes of Pareto frontiers are topologically equivalent to simplices suggests that a barycentric coordinate system will be more useful for parameterizing the frontier than the Cartesian coordinate systems typically used to parameterize the design and objective spaces.
By defining such a coordinate system, the design problem may be reformulated from y = f(x) to (y, x) = g(p), where x is a vector of design variables, y is a vector of attributes and p is a vector of barycentric coordinates. Exploration of the design problem using p as the independent variables has the following desirable properties:
1) Every vector p corresponds to a particular Pareto efficient design, and every Pareto efficient design corresponds to a particular vector p.
2) The number of p-coordinates is equal to the number of attributes, regardless of the number of design variables.
3) Each attribute y_i has a corresponding coordinate p_i such that increasing the value of p_i corresponds to a motion along the Pareto frontier that improves y_i monotonically.
The primary contribution of this work is the development of three methods for forming a barycentric coordinate system on the Pareto frontier, two of which are entirely original. The first method, named "non-domination level coordinates," constructs a coordinate system based on the (k-1)-attribute non-domination levels of a discretely sampled Pareto frontier. The second method is based on a modification to an existing "normal boundary intersection" multi-objective optimizer that adaptively redistributes its search basepoints in order to sample from the entire frontier uniformly; the weights associated with each basepoint can then serve as a coordinate system on the frontier. The third method, named "Pareto simplex self-organizing maps," uses a modified self-organizing-map training algorithm with a barycentric-grid node topology to iteratively conform a coordinate grid to the sampled Pareto frontier.
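The non-domination levels underlying the first method can be sketched directly. This is a minimal reference implementation for a minimization problem, not the dissertation's code:

```python
def dominates(p, q):
    """p dominates q (minimization): no worse in every attribute, strictly
    better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_domination_levels(points):
    """Assign each point its non-domination level: level 1 is the Pareto
    front, level 2 the front after removing level 1, and so on."""
    remaining = dict(enumerate(points))
    levels = {}
    level = 1
    while remaining:
        front = [i for i, p in remaining.items()
                 if not any(dominates(q, p)
                            for j, q in remaining.items() if j != i)]
        for i in front:
            levels[i] = level
            del remaining[i]
        level += 1
    return [levels[i] for i in range(len(points))]

pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
lv = non_domination_levels(pts)  # (3, 3) is dominated by (2, 2)
```

Repeating this ranking on (k-1)-attribute projections of the sampled frontier yields the per-attribute level indices that the first method normalizes into barycentric coordinates.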
210.
Development and Design Optimization of Laminated Composite Structures Using Failure Mechanism Based Failure Criterion. Naik, G. Narayana, 12 1900
In recent years, the use of composites has been increasing in most fields of engineering, such as aerospace, automotive, civil construction, marine, and prosthetics, because of their light weight, very high specific strength and stiffness, corrosion resistance, high thermal resistance, etc. The specific strength of fibers is many times that of metals, and thus laminated fiber reinforced plastics have emerged as attractive materials for many engineering applications. Though the uses of composites are enormous, there is always an element of fuzziness in the design of composites. Composite structures are required to be designed to resist high stresses, and this requires a reliable failure criterion. The anisotropic behaviour of composites makes it very difficult to formulate failure criteria and to verify them experimentally, since that requires performing the necessary bi-axial tests and plotting the failure envelopes. Failure criteria are usually based on certain assumptions that are sometimes questionable, because the failure process in composites is quite complex. Failure in a composite is normally governed by initiating failure mechanisms such as fiber breaks, fiber compressive failure, matrix cracks, matrix crushing, delamination, disbonds, or a combination of these. The initiating failure mechanisms are the ones responsible for initiating failure in a laminated composite; they generally depend on the type of loading, geometry, material properties, conditions of manufacture, boundary conditions, weather conditions, etc. Since composite materials exhibit directional properties, their applications and failure conditions should be properly examined and, in addition, robust computational tools have to be exploited in the design of structural components for efficient utilisation of these materials.
Design of structural components requires reliable failure criteria for safe design. Several failure criteria are available for the design of composite laminates, but none of the available anisotropic strength criteria represents observed results accurately enough to be employed confidently by itself in design. Most failure criteria are validated against available uniaxial test data, whereas in practical situations laminates are subjected to at least biaxial states of stress. Since biaxial test data are very difficult and time consuming to generate, it is a necessity to develop computational tools for modelling the biaxial behaviour of composite laminates. Understanding the initiating failure mechanisms and developing reliable failure criteria is an essential prerequisite for effective utilization of composite materials. Most failure criteria consider uniaxial test data with constant shear stress to develop failure envelopes, but in reality structures are subjected to biaxial normal stresses as well as shear stresses. Hence, one can develop different failure envelopes depending upon the percentage of shear stress content.
As mentioned earlier, safe design of composite structural components requires a reliable failure criterion. Currently two broad approaches, namely (1) damage tolerance based design and (2) failure criteria based design, are in use for the design of laminated structures in the aerospace industry. Both approaches have limitations. Damage tolerance based design suffers from the lack of a proper definition of damage and the inability of analytical tools to handle realistic damage. Failure criteria based design, although relatively more attractive in view of its simplicity, forces the designer to use unverified design points in stress space, resulting in unpredictable failure conditions. Generally, failure envelopes are constructed using 4 or 5 experimental constants. In this type of approach, small experimental errors in these constants lead to large shifts in the failure boundaries, raising doubts about the reliability of the boundary in some segments. Further, the envelopes contain segments that have no experimental support and so can lead to either conservative or nonconservative designs. A conservative design carries extra weight, a situation not acceptable in the aerospace industry, whereas a nonconservative design is obviously prohibitive, as it implies failure. Hence, both damage tolerance based design and failure criteria based design have limitations, and a new method that combines the advantages of both approaches is desirable. This issue has been thoroughly debated at many international conferences on composites, where several pioneers in the composite industry indicated the need for further research on the development of reliable failure criteria. This motivated the present research towards the development of a new failure criterion for the design of composite structures.
Several expert meetings have been held worldwide to assess existing failure theories and computer codes for the design of composite structures. One such meeting, on 'Failure of Polymeric Composites and Structures: Mechanisms and Criteria for the Prediction of Performance', was held at St. Albans (UK) in 1991 by the UK Science & Engineering Council and the UK Institute of Mechanical Engineers. After thorough deliberations it was concluded that:
1. There is no universal definition of failure of composites.
2. There is little or no faith in the failure criteria that are in current use, and
3. There is a need to carry out a World Wide Failure Exercise (WWFE).
Based on the experts' suggestions, Hinton and Soden initiated the WWFE in consultation with Prof. Bryan Harris (Editor, Journal of Composite Science and Technology) as a program to obtain a comparative assessment of existing failure criteria and codes, with the following aims:
1. Establishing the current level of maturity of theories for predicting the failure response of fiber reinforced plastic (FRP) laminates.
2. Closing the knowledge gap between theoreticians and design practitioners in this field.
3. Stimulating the composites’ community into providing design engineers with more robust and accurate failure prediction methods, and the confidence to use them.
The organisers invited pioneers in the composite industry to the WWFE program. Among them, Professor Hashin declined to participate and wrote to the organisers saying: 'My only work in this subject relates to failure criteria of unidirectional fiber composites, not to laminates. I do not believe that even the most complete information about failure of single plies is sufficient to predict the failure of a laminate, consisting of such plies. A laminate is a structure which undergoes a complex damage process (mostly of cracking) until it finally fails. The analysis of such a process is a prerequisite for failure analysis. While significant advances have been made in this direction we have not yet arrived at the practical goal of failure prediction.'
Another important conference, Composites for the Next Millennium (Proceedings of the Symposium in honor of S.W. Tsai on his 70th Birthday, Tours, France, July 2-3, 1999, p. 19), reached conclusions similar to those of the 1991 UK meeting. Referring to the article 'Predicting Failure in Composite Laminates: the background to the exercise' by M.J. Hinton & P.D. Soden (Composites Science and Technology, Vol. 58, No. 7 (1998), p. 1005), Paul A. Lagace and S. Mark Spearing pointed out that after over thirty years of work, 'the' composite failure criterion is still an elusive entity: numerous researchers have produced dozens of approaches, and hundreds of papers, manuscripts and reports have been written and presentations made to address the latest thoughts, add data to accumulated knowledge bases and continue the scholarly debate.
Thus, the outcome of these expert meetings is that there is a need to develop new failure theories, and due to the complexities associated with experimentation, especially obtaining bi-axial data, computational methods are the only viable alternative. Currently, biaxial data on composites are very limited, as biaxial testing of laminates is very difficult and standardization of biaxial data is yet to be done. All these expert comments and suggestions motivated the present research towards the development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', based on initiating failure mechanisms.
The objectives of the thesis are:
1. Identification of failure criteria appropriate to each specific initiating failure mechanism, and assignment of a specific criterion to each mechanism,
2. Use of the 'failure mechanism based design' method for composite pressurant tanks and its evaluation, by comparing it with some standard 'failure criteria' based designs from the point of view of the overall weight of the pressurant tank,
3. Development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', without shear stress content, and the corresponding failure envelope,
4. Development of different failure envelopes including the effect of shear stress, depending upon the percentage of shear stress content, and
5. Design of composite laminates with the Failure Mechanism Based Failure Criterion using optimization techniques such as Genetic Algorithms (GA) and Vector Evaluated Particle Swarm Optimization (VEPSO), and comparison of the designs with those from other failure criteria such as the Tsai-Wu and Maximum Stress criteria.
The following paragraphs describe the achievement of these objectives.
In chapter 2, a rectangular panel subjected to boundary displacements is used as an example to illustrate the concept of failure mechanism based design. Composite laminates are generally designed using a failure criterion based on a set of standard experimental strength values. Failure of composite laminates involves different failure mechanisms depending upon the stress state, so different failure mechanisms become dominant at different points on the failure envelope. The use of a single failure criterion, as is normally done in designing laminates, is unlikely to be satisfactory for all combinations of stresses. As an alternative, this thesis suggests using a simple failure criterion to identify the dominant failure mechanism and then designing the laminate using the appropriate failure mechanism based criterion. A complete 3-D stress analysis has been carried out using the general purpose NISA finite element software. Comparison of results using standard failure criteria such as Maximum Stress, Maximum Strain, Tsai-Wu, Yamada-Sun, Maximum Fiber Strain, Grumman, O'Brien, and Lagace indicates substantial differences in predicting first ply failure. Results for failure load factors based on the failure mechanism based approach are included. Identifying the failure mechanism at highly stressed regions and designing the component to withstand an artificial defect representative of this failure mechanism provides a realistic approach to achieving the necessary strength without adding unnecessary weight to the structure.
This indicates that the failure mechanism based design approach offers a reliable way of assessing critically stressed regions and eliminating the uncertainties associated with the choice of failure criterion.
In chapter 3, the failure mechanism based design approach is applied to composite pressurant tanks for the upper stages of launch vehicles and the propulsion systems of spacecraft. The problem is studied by introducing an artificial matrix crack, representative of the initiating failure mechanism, in the highly stressed regions, and the strain energy release rate (SERR) is calculated. The total SERR is 3330.23 J/m², far above the critical value Gc (135 J/m²), which means the crack will grow further. The failure load fraction at which the crack has a tendency to grow is estimated to be 0.04054. The results indicate significant differences in the failure load fraction between failure criteria. Comparison with the Failure Mechanism Based Criterion (FMBC) clearly indicates that matrix cracks occur at loads much below the design load, yet the fibers are able to carry the design load.
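The crack-growth check described above can be sketched as follows, using the SERR values quoted in the text. One assumption is labeled in the comments: the failure load fraction is taken as the simple ratio Gc/G_total, which reproduces the 0.04054 reported here, rather than being recomputed from the finite element model.

```python
# Sketch of the fracture-mechanics check above (values from the text).
# Assumption: the failure load fraction is taken as the ratio Gc / G_total,
# which reproduces the 0.04054 reported in the thesis.
G_TOTAL = 3330.23  # total strain energy release rate, J/m^2
G_C = 135.0        # critical SERR of the material, J/m^2

def crack_will_grow(g_total: float, g_c: float) -> bool:
    """A crack grows when the total SERR exceeds the critical value."""
    return g_total > g_c

def failure_load_fraction(g_total: float, g_c: float) -> float:
    """Fraction of the design load at which the crack tends to grow."""
    return g_c / g_total

print(crack_will_grow(G_TOTAL, G_C))                  # True: 3330.23 > 135
print(round(failure_load_fraction(G_TOTAL, G_C), 5))  # 0.04054
```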
In chapter 4, a Failure Mechanism Based Failure Criterion (FMBFC) is proposed for the development of failure envelopes for unidirectional composite plies. A representative volume element of the laminate under local loading is modelled micromechanically to predict the experimentally determined strengths, and this model is then used to predict points on the failure envelope in the neighborhood of the experimental points. The NISA finite element software is used to determine the stresses in the representative volume element, and from these micro-stresses the strength of the lamina is predicted. A correction factor matches the prediction of the present model to the experimentally determined strength, so that the model can be expected to predict the strength accurately in the neighborhood of the experimental points. A procedure for constructing the failure envelope in stress space is outlined, and the results are compared with some of the standard failure criteria widely used in the composite industry. Comparison with the Tsai-Wu failure criterion shows significant differences, particularly in the third quadrant, where the ply is under biaxial compressive loading. Comparison with the Maximum Stress criterion indicates better correlation. The failure-mechanism-based approach thus opens a new possibility of constructing reliable failure envelopes for biaxial loading applications using standard uniaxial test data.
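The two reference criteria against which the FMBFC is compared can be evaluated in a few lines for a plane-stress ply state. The strength values below are illustrative carbon/epoxy numbers assumed for demonstration, not the material data used in the thesis; the interaction term F12 uses the common default of -0.5*sqrt(F11*F22).

```python
import math

# Illustrative unidirectional ply strengths (MPa); assumed values for
# demonstration only, not the material data used in the thesis.
Xt, Xc = 1500.0, 1200.0   # longitudinal tensile / compressive strength
Yt, Yc = 50.0, 250.0      # transverse tensile / compressive strength
S = 70.0                  # in-plane shear strength

def tsai_wu_index(s1, s2, t12):
    """Tsai-Wu failure index for a plane-stress ply state; >= 1 means failure."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

def max_stress_fails(s1, s2, t12):
    """Maximum Stress criterion: any component exceeding its strength fails."""
    return s1 > Xt or s1 < -Xc or s2 > Yt or s2 < -Yc or abs(t12) > S

# Third-quadrant state (biaxial compression), where the thesis reports the
# largest differences between criteria.
state = (-1000.0, -180.0, 0.0)
print(round(tsai_wu_index(*state), 3), max_stress_fails(*state))
```

Sweeping such states around the stress plane and recording where each index first reaches 1 is one way to trace the failure envelopes being compared in this chapter.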
In chapter 5, the new failure criterion developed in chapter 4 for the no-shear-stress condition is extended to obtain failure envelopes in the presence of shear stress. The approach is based on micromechanical analysis of composites: a representative volume, consisting of a fiber surrounded by matrix in the appropriate volume fraction, is modeled using 3-D finite elements to predict the strengths. Different failure envelopes are developed by varying the applied shear stress from 0% to 100% of the shear strength in steps of 25%. Results obtained from this approach are compared with the Tsai-Wu and Maximum Stress failure criteria; the predicted strengths match more closely with the Maximum Stress criterion. Hence, it can be concluded that the influence of shear stress on failure of the lamina is of little consequence as far as the prediction of laminate strengths is concerned.
In chapter 6, the failure mechanism based failure criterion developed by the authors is used for the design optimization of laminates, and the percentage savings in total laminate weight is presented. The design optimization is performed using Genetic Algorithms (GA), one of the robust tools available for the optimum design of composite laminates. Genetic algorithms employ techniques originating in biology and depend on the application of Darwin's principle of survival of the fittest. When a population of biological creatures is permitted to evolve over generations, individual characteristics that are beneficial for survival tend to be passed on to future generations, since individuals carrying them get more chances to breed. In biological populations, these characteristics are stored in chromosomal strings. The mechanics of natural genetics is derived from operations that result in an arranged yet randomized exchange of genetic information between the chromosomal strings of the reproducing parents, and consists of reproduction, crossover, mutation, and inversion of the chromosomal strings. Here, minimization of the weight of composite laminates for given loading and material properties is considered. The genetic algorithm is able to select the ply orientations, single-ply thickness, number of plies, and stacking sequence of the layers.
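The GA mechanics described above (selection, crossover, mutation over ply stacks of variable length) can be sketched as follows. The strength model, required loads, and per-ply weight here are toy assumptions standing in for the FMB/Tsai-Wu strength analyses of the thesis; only the optimization skeleton is representative.

```python
import random

random.seed(0)
ANGLES = [0, 45, -45, 90]   # candidate ply orientations
PLY_WEIGHT = 1.0            # weight per ply (arbitrary units, assumed)
LOAD = (8.0, 3.0)           # required (axial, transverse) capacity, toy units

def capacity(stack):
    """Toy strength model: each ply contributes along its dominant axis.
    This stands in for the FMB-based strength check used in the thesis."""
    ax = sum(1.0 if a == 0 else 0.5 if abs(a) == 45 else 0.1 for a in stack)
    tr = sum(1.0 if a == 90 else 0.5 if abs(a) == 45 else 0.1 for a in stack)
    return ax, tr

def fitness(stack):
    """Lower is better: laminate weight plus a large penalty on failure."""
    ax, tr = capacity(stack)
    penalty = 1000.0 if (ax < LOAD[0] or tr < LOAD[1]) else 0.0
    return len(stack) * PLY_WEIGHT + penalty

def crossover(a, b):
    """Single-point crossover of two ply stacks."""
    if min(len(a), len(b)) < 2:
        return a[:]
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(stack):
    """Remove, reorient, or add a ply at random."""
    s, op = stack[:], random.random()
    if op < 0.4 and len(s) > 1:
        s.pop(random.randrange(len(s)))
    elif op < 0.8:
        s[random.randrange(len(s))] = random.choice(ANGLES)
    else:
        s.append(random.choice(ANGLES))
    return s

pop = [[random.choice(ANGLES) for _ in range(16)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]      # survival of the fittest
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
print(len(best), fitness(best))
```

The penalty term encodes the failure constraint, so the GA trades plies against feasibility exactly as the weight-minimization problem in the chapter requires.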
In this chapter, minimum weight design of composite laminates is presented using the Failure Mechanism Based (FMB), Maximum Stress, and Tsai-Wu failure criteria. The objective is to demonstrate the effectiveness of the newly proposed FMB Failure Criterion (FMBFC) in composite design. The FMBFC considers different failure mechanisms, such as fiber breaks, matrix cracks, fiber compressive failure, and matrix crushing, which are relevant under different loading conditions. The FMB and Maximum Stress failure criteria predict up to 43 percent savings in laminate weight compared to the Tsai-Wu failure criterion in some quadrants of the failure envelope. The Tsai-Wu failure criterion overpredicts the weight of the laminate by up to 86 percent in the third quadrant of the failure envelope compared to the FMB and Maximum Stress failure criteria, when the laminate is subjected to biaxial compressive loading. The FMB and Maximum Stress failure criteria give comparable weight estimates, and the FMBFC can be considered for use in the strength design of composite structures.
In chapter 7, particle swarm optimization is used for the design optimization of composite laminates. Particle swarm optimization (PSO) is a meta-heuristic inspired by the flocking behaviour of birds, and its application to composite design optimization problems has not yet been extensively explored. Composite laminate optimization typically consists of determining the number of layers, the stacking sequence, and the ply thickness that give the desired properties. This chapter details the use of the Vector Evaluated Particle Swarm Optimization (VEPSO) algorithm, a multi-objective variant of PSO, for composite laminate design optimization. VEPSO is a modern coevolutionary algorithm that employs multiple swarms to handle the multiple objectives; information migration between these swarms ensures that a global optimum solution is reached. The current problem is formulated as a classical multi-objective optimization problem, with the objectives of minimizing the weight of the component for a required strength and minimizing the total cost incurred, such that the component does not fail. An optimum configuration for a multi-layered unidirectional carbon/epoxy laminate is determined using VEPSO, and results are presented for various loading configurations. VEPSO predicts the same minimum weight and percentage weight savings as GA for all loading conditions. Small differences between VEPSO and GA appear for some loading and stacking sequence configurations, mainly due to the random selection of swarm particles and the random generation of populations, respectively; these differences can be mitigated by repeated runs of the same programme.
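The VEPSO scheme described above, one swarm per objective with each swarm guided by the other swarm's global best, can be sketched on a standard bi-objective test problem. The two quadratic objectives below stand in for the weight and cost objectives; all constants are assumed values for illustration, not parameters from the thesis.

```python
import random

random.seed(1)

# Two toy objectives standing in for weight and cost (Schaffer-style problem);
# the Pareto-optimal decision values lie between their two minimizers, 0 and 2.
def f1(x): return x * x              # e.g. weight objective (assumed)
def f2(x): return (x - 2.0) ** 2     # e.g. cost objective (assumed)

N, ITERS = 20, 200                   # particles per swarm, iterations
W, C1, C2 = 0.6, 1.5, 1.5            # inertia and acceleration coefficients
LO, HI = -10.0, 10.0                 # search bounds

def make_swarm():
    xs = [random.uniform(LO, HI) for _ in range(N)]
    return {"x": xs, "v": [0.0] * N, "pbest": xs[:]}

swarms = [(make_swarm(), f1), (make_swarm(), f2)]
for _ in range(ITERS):
    # Information migration: each swarm's global best guides the OTHER swarm.
    gbests = [min(s["pbest"], key=f) for s, f in swarms]
    for k, (s, f) in enumerate(swarms):
        g = gbests[1 - k]
        for i in range(N):
            s["v"][i] = (W * s["v"][i]
                         + C1 * random.random() * (s["pbest"][i] - s["x"][i])
                         + C2 * random.random() * (g - s["x"][i]))
            s["x"][i] = min(HI, max(LO, s["x"][i] + s["v"][i]))
            if f(s["x"][i]) < f(s["pbest"][i]):   # personal best on OWN objective
                s["pbest"][i] = s["x"][i]

best = [min(s["pbest"], key=f) for s, f in swarms]
print([round(b, 2) for b in best])
```

Because personal bests are scored on a swarm's own objective while the attractor comes from the other swarm, the population is pulled toward compromise solutions between the two minimizers, which is the coevolutionary mechanism the chapter relies on.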
The thesis concludes by highlighting the future scope of several potential applications based on the developments reported here.