201

Software for Multidisciplinary Design Optimization of Truss-Braced Wing Aircraft with Deep Learning based Transonic Flutter Prediction Model

Khan, Kamrul Hasan 20 November 2023 (has links)
This study presents DELWARX, a novel Python-based framework for multidisciplinary design optimization (MDO) in a distributed computing environment. DELWARX includes a transonic flutter analysis approach that is computationally efficient yet accurate enough for conceptual design and optimization studies; it is designed for large-aspect-ratio wings and attached flow. The framework employs particle swarm optimization with penalty functions to explore optimal Transonic Truss-Braced Wing (TTBW) aircraft designs for a Boeing 737-800 type of mission, with a cruise Mach number of 0.8, a range of 3,115 nautical miles, and 162 passengers, using two different objective functions, the fuel weight and the maximum take-off gross weight, while satisfying all required constraints. Proper memory management is applied to effectively address memory-related issues, which are often a limiting factor in distributed computing. The parallel MDO implementation using 60 processors reduced the wall-clock time by 96%, around 24 times faster than optimization on a single processor. The results include a comparison of TTBW designs for medium-range missions with and without the flutter constraint. Importantly, the framework achieves very low computation times due to its parallel optimization capability, retains all the functionality of the previous Virginia Tech MDO framework, and replaces the previously employed linear flutter analysis with a more accurate nonlinear transonic flutter computation. These features of DELWARX are expected to enable more accurate MDO studies for innovative transport aircraft configurations operating in the transonic flight regime. High-fidelity CFD simulation is performed to verify the results obtained from the extended strip-theory-based aerodynamic analysis method.
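The particle-swarm-with-penalty strategy described above can be sketched in a few lines. This is a generic illustration with a made-up two-variable test problem and static penalty, not the DELWARX implementation or its actual objectives and constraints:

```python
import numpy as np

def pso_penalty(objective, constraints, bounds, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, penalty=1e6, seed=0):
    """Minimize objective(x) subject to g(x) <= 0 for each g in constraints,
    using a basic particle swarm with a static penalty function."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))

    def penalized(xi):
        # Each violated constraint adds a large penalty to the objective.
        viol = sum(max(0.0, g(xi)) for g in constraints)
        return objective(xi) + penalty * viol

    pbest = x.copy()
    pbest_f = np.array([penalized(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([penalized(xi) for xi in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = x[f.argmin()].copy(), f.min()
    return gbest, gbest_f

# Demo (hypothetical constrained problem): minimize (x-2)^2 + (y-1)^2
# subject to x + y <= 2; the constrained optimum is (1.5, 0.5).
best_x, best_f = pso_penalty(lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2,
                             [lambda x: x[0] + x[1] - 2],
                             [(0.0, 4.0), (0.0, 4.0)])
```

A static penalty like this simply makes infeasible designs unattractive to the swarm; the framework's actual objective evaluations (weights, flutter constraints) are far more involved.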
An approach is presented to develop a deep neural network (DNN)-based surrogate model for fast and accurate prediction of flutter constraints in the multidisciplinary design optimization (MDO) of Transonic Truss-Braced Wing (TTBW) aircraft in the transonic region. Integration of the surrogate model in the MDO framework yields lower computation times than the MDO with nonlinear flutter analysis, and the surrogate models accurately predict the optimum design. The wall-clock time of the design analysis was reduced by a factor of 1,500 compared to the nonlinear flutter analysis implemented in the previous framework, DELWARX. / Doctor of Philosophy / The current study presents DELWARX, a novel Python-based framework specifically engineered for the optimization of aircraft designs, with a primary focus on enhancing the performance of aircraft wings under transonic conditions (speeds approaching the speed of sound). This advancement is particularly pertinent for aircraft with a mission analogous to the Boeing 737-800, which necessitates a harmonious balance between speed, range, passenger capacity, and fuel efficiency. A salient feature of DELWARX is its adeptness in analyzing and optimizing wing flutter, a critical issue where wings may experience hazardous vibrations at certain velocities. This is particularly vital for wings characterized by a high aspect ratio (wings that are long and narrow), which present a substantial challenge in the domain of aircraft design. DELWARX surpasses preceding methodologies by implementing a sophisticated computational technique known as particle swarm optimization, analogous to the collective movement observed in bird flocks, integrated with penalty functions that serve to exclude design solutions failing to meet predefined standards. This approach is akin to navigating a maze in which certain pathways are rendered inaccessible by constraints.
The efficiency of DELWARX is markedly enhanced by its ability to distribute computational tasks across 60 processors, achieving a computation speed that is 24 times faster than that of a single-processor operation. This distribution results in a significant reduction of overall computation time by 96%, representing a substantial advancement in processing efficiency. Further, DELWARX introduces an enhanced level of precision in its operations. It supplants former methods of flutter analysis with a more sophisticated, nonlinear approach tailored for transonic speeds. Consequently, the framework's predictions and optimization strategies for aircraft wing designs are imbued with increased reliability and accuracy. Moreover, DELWARX integrates a Deep Neural Network (DNN), an advanced form of artificial intelligence, to swiftly and precisely predict flutter constraints. This integration manifests as a highly intelligent system capable of instantaneously estimating the performance of various designs, thereby expediting the optimization process. DELWARX employs high-fidelity Computational Fluid Dynamics (CFD) simulations to verify its findings. These simulations utilize intricate models of the airflow over aircraft wings, thereby ensuring that the optimized designs are not only theoretically sound but also pragmatically effective. In conclusion, DELWARX represents a significant leap in the field of multidisciplinary design optimization. It offers a robust and efficient tool for the design of aircraft wings, especially in the context of transonic flight. This framework heralds a new era in the optimization of aircraft designs, enabling more innovative and efficient solutions in the aerospace industry.
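The surrogate idea in the abstract (replacing an expensive flutter analysis with a trained network) can be illustrated with a minimal one-hidden-layer network in NumPy. The data, target function, and network size below are synthetic stand-ins; the thesis's actual DNN architecture, inputs, and flutter outputs are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two "design variables" and a smooth response
# playing the role of an expensive flutter-analysis output.
X = rng.uniform(-1.0, 1.0, (200, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1]

# One-hidden-layer tanh network trained by full-batch gradient descent on MSE.
n_hidden = 32
W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y                           # MSE gradient w.r.t. predictions
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = np.array([err.mean()])
    dh = (err[:, None] * W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def surrogate(x):
    """Cheap prediction standing in for the expensive flutter analysis."""
    return (np.tanh(np.asarray(x) @ W1 + b1) @ W2 + b2).item()
```

Once trained, each call to `surrogate` costs microseconds, which is the source of the large speedup the abstract reports when such a model replaces repeated nonlinear flutter evaluations inside the optimization loop.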
202

The Effect of Reducing Cruise Altitude on the Topology and Emissions of a Commercial Transport Aircraft

McDonald, Melea E. 02 September 2010 (has links)
In recent years, research has been conducted on alternative commercial transonic aircraft configurations, such as the strut-braced wing and truss-braced wing designs, in order to improve aircraft performance and reduce the impact of aircraft emissions as compared to a typical cantilever wing design. Research performed by Virginia Tech in conjunction with NASA Langley Research Center shows that these alternative configurations result in a 20% or greater reduction in fuel consumption, and thus emissions. Another option to reduce the impact of emissions on the environment is to reduce the aircraft cruise altitude, where fewer nitrogen oxides are released into the atmosphere and contrail formation is less likely. The following study was performed using multidisciplinary design optimization (MDO) in ModelCenter™ for cantilever wing, strut-braced wing, and truss-braced wing designs, optimized for minimum takeoff gross weight at 7730 NM range and minimum fuel weight at 7730 and 4000 NM ranges, at cruise altitudes of 25,000, 30,000, and 35,000 ft. For the longer range, both objective functions exhibit a large penalty in fuel weight and takeoff gross weight when reducing cruise altitude, due to the increased drag from the fixed fuselage. For the shorter range, there was only a slight increase in takeoff gross weight even though there was a large increase in fuel weight for decreased cruise altitudes. Thus, the benefits of reducing cruise altitude were offset by increased fuel weight. Either a two-jury truss-braced wing or a telescopic strut could be studied to reduce the fuel penalty. / Master of Science
203

Structural Optimization and Design of a Strut-Braced Wing Aircraft

Naghshineh-Pour, Amir H. 15 December 1998 (has links)
A significant improvement can be achieved in the performance of transonic transport aircraft using Multidisciplinary Design Optimization (MDO) by implementing truss-braced wing concepts in combination with other advanced technologies and novel design innovations. A considerable reduction in drag can be obtained by using a high-aspect-ratio wing with thin airfoil sections and tip-mounted engines. However, such wing structures could suffer from a significant weight penalty, so the use of an external strut or truss bracing is promising for weight reduction. Due to the unconventional nature of the proposed concept, commonly available wing weight equations for transport aircraft are not sufficiently accurate. Hence, a bending material weight calculation procedure was developed to account for the influence of the strut upon the wing weight, and this was coupled to the Flight Optimization System (FLOPS) for total wing weight estimation. The wing bending material weight for single-strut configurations is estimated by modeling the wing structure as an idealized double-plate model using a piecewise linear load method. Two maneuver load conditions, 2.5g and -1.0g with a factor of safety of 1.5, and a 2.0g taxi bump are considered as the critical load conditions for determining the wing bending material weight. From preliminary analyses, buckling of the strut under the -1.0g load condition proved to be the critical structural challenge. To address this issue, an innovative design strategy introduces a telescoping sleeve mechanism that allows the strut to be inactive during negative-g maneuvers and active during positive-g maneuvers. Further wing weight reduction is obtained by optimizing the strut force, the strut offset length, and the wing-strut junction location. The best configuration shows a 9.2% savings in takeoff gross weight, an 18.2% savings in wing weight, and a 15.4% savings in fuel weight compared to a cantilever wing counterpart. / Master of Science
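The strut buckling check that makes the -1.0g condition critical can be illustrated with the classical Euler column formula. The geometry and material numbers below are illustrative assumptions, not the thesis's strut design:

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical axial load (Euler) for an ideal column.
    E: modulus [Pa], I: area moment of inertia [m^4],
    L: length [m], K: effective-length factor (1.0 = pinned-pinned)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative numbers (assumed, not from the thesis): aluminum tubular strut.
E = 71e9                       # Pa
d_out, d_in = 0.20, 0.18       # m, tube outer/inner diameters
I = math.pi / 64 * (d_out ** 4 - d_in ** 4)
P_cr = euler_buckling_load(E, I, L=10.0)   # ~1.9e5 N for these numbers
```

Because the critical load falls off as 1/L², a long slender strut buckles at a modest compressive load, which is why a telescoping sleeve that lets the strut go slack under negative g (rather than carry compression) is attractive.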
204

Metamodel-based collaborative optimization framework

Zadeh, Parviz M., Toropov, V.V., Wood, Alastair S. January 2009 (has links)
This paper focuses on metamodel-based collaborative optimization (CO). The objective is to improve the computational efficiency of CO in order to handle multidisciplinary design optimization problems utilising high-fidelity models. To this end, metamodel building techniques are proposed at two levels: metamodels in the disciplinary optimization are based on multi-fidelity modelling (the interaction of low- and high-fidelity models), while for the system-level optimization a combination of a global metamodel based on the moving least squares method and a trust region strategy is introduced. The proposed method is demonstrated on a continuous fiber-reinforced composite beam test problem. Results show that the methods introduced in this paper provide an effective way of improving the computational efficiency of CO based on high-fidelity simulation models.
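The moving least squares idea behind the system-level metamodel can be sketched as a locally weighted linear fit. This is a generic MLS illustration under assumed Gaussian weights, not the authors' implementation (their version is additionally coupled to a trust-region strategy):

```python
import numpy as np

def mls_predict(x_query, X, y, radius=0.4):
    """Moving least squares metamodel: fit a linear model with Gaussian
    weights centred on the query point, then evaluate it there."""
    x_query = np.atleast_1d(np.asarray(x_query, dtype=float))
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-((d / radius) ** 2))          # closer samples weigh more
    A = np.hstack([np.ones((len(X), 1)), X])  # linear basis [1, x]
    sw = np.sqrt(w)[:, None]                  # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return float(np.concatenate(([1.0], x_query)) @ coef)

# Demo on exactly linear data: the local fit reproduces the line.
X = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0
pred = mls_predict([0.3], X, y)   # the underlying line gives 1.6 here
```

Unlike a single global polynomial fit, the coefficients here are recomputed for every query point, so the metamodel adapts locally to the sampled responses.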
205

Application of analytical target cascading for engine calibration optimization problem

Kianifar, Mohammed R., Campean, Felician 08 1900 (has links)
This paper presents the development of an Analytical Target Cascading (ATC) Multidisciplinary Design Optimization (MDO) framework for a steady-state engine calibration optimization problem. The implementation novelty of this research is the use of the ATC framework to formulate the complex multi-objective engine calibration problem, delivering a considerable enhancement compared to the conventional 2-stage calibration optimization approach [1]. A case study of steady-state calibration optimization of a Gasoline Direct Injection (GDI) engine was used to analyse the calibration problem as ATC. The case study results provided useful insight into the efficiency of the ATC approach in delivering superior calibration solutions in terms of “global” system-level objectives (e.g. improved fuel economy and reduced particulate emissions), while meeting “local” subsystem-level requirements (such as combustion stability and exhaust gas temperature constraints). The ATC structure facilitated the articulation of engineering preference for smooth calibration maps via the ATC linking variables, with the potential to deliver important time savings for the overall calibration development process.
206

Exploring The Feasibility Of The Resonance Corridor Method For Post Mission Disposal Of High-LEO Constellations

Porter, Payton G 01 June 2024 (has links) (PDF)
In the upcoming decade, the proliferation of high-LEO constellations is expected to exceed 20,000 objects, yet comprehensive Post Mission Disposal (PMD) strategies for these constellations are currently lacking. Given the inherent challenges of efficiently deorbiting satellites from high-LEO orbits, there is an urgent need to explore innovative approaches. Building upon insights garnered from the ReDSHIFT project and anticipating the proliferation of high-LEO constellations such as OneWeb, TeleSat, and GuoWang, this thesis investigates the potential viability of the Resonance Corridor Method for PMD. The investigation encompasses key metrics, including deorbit timelines and Δv requirements to meet regulatory standards or recommendations, with comparisons drawn against alternative methods such as Perigee Decrease and Graveyard Orbit solutions. Through this analysis, scenarios emerge where the Resonance Corridor Method demonstrates advantages, offering feasible Δv values while ensuring compliance with regulatory standards and recommendations. The findings categorize high-LEO constellation shells into specific disposal feasibility groups, providing valuable insight into how space sustainability practices can be incorporated into spacecraft design to align with evolving space debris mitigation standards. Additionally, certain altitude-inclination combinations are found to naturally align with the Resonance Corridor Method, while others necessitate minor architectural adjustments to optimize effectiveness.
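For context on the Perigee Decrease comparison, the single-burn Δv to drop the perigee of a circular orbit follows directly from the vis-viva equation. The altitudes below are illustrative, not a particular constellation shell studied in the thesis:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_E = 6_378_137.0      # Earth's equatorial radius, m

def perigee_decrease_dv(alt_circ_km, alt_perigee_km):
    """Single retrograde-burn delta-v to lower the perigee of a circular
    orbit: a rough screen for post-mission disposal cost, not the
    thesis's resonance corridor method."""
    r1 = R_E + alt_circ_km * 1e3
    rp = R_E + alt_perigee_km * 1e3
    v_circ = math.sqrt(MU / r1)                # circular speed before the burn
    a = (r1 + rp) / 2.0                        # transfer-ellipse semi-major axis
    v_apo = math.sqrt(MU * (2.0 / r1 - 1.0 / a))   # vis-viva at apogee
    return v_circ - v_apo

# Example: dropping perigee from a 1200 km circular orbit to 50 km
# costs roughly 300 m/s, and the cost grows with starting altitude.
dv = perigee_decrease_dv(1200.0, 50.0)
```

This rapid growth of direct-deorbit Δv with altitude is what motivates lower-cost high-LEO options such as resonance corridors or graveyard orbits.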
207

Reliability-Based Design Optimization of Nonlinear Beam-Columns

Li, Zhongwei 30 April 2018 (has links)
This dissertation addresses the ultimate strength analysis of nonlinear beam-columns under axial compression, the sensitivity of the ultimate strength, structural optimization and reliability analysis using ultimate strength analysis, and Reliability-Based Design Optimization (RBDO) of nonlinear beam-columns. The ultimate strength analysis is based on nonlinear beam theory with material and geometric nonlinearities. A nonlinear constitutive law is developed for an elastic-perfectly-plastic beam cross-section consisting of a base plate and a T-bar stiffener. The analysis method is validated using commercial nonlinear finite element analysis. A new direct solving method is developed, which combines the original governing equations with their derivatives with respect to the deformation metric and solves for the ultimate strength directly. Structural optimization and reliability analysis use a gradient-based algorithm and need accurate sensitivities of the ultimate strength to the design variables. The semi-analytic sensitivity of the ultimate strength is calculated from a linear set of analytical sensitivity equations which use the Jacobian matrix of the direct solving method. The derivatives of the structural residual equations in the sensitivity equation set are calculated using the complex-step method. The semi-analytic sensitivity is more robust and efficient than finite difference sensitivity. The design variables are the cross-sectional geometric parameters. Random variables include material properties, geometric parameters, initial deflection, and nondeterministic load. Failure probabilities calculated by the ultimate strength reliability analysis are validated by Monte Carlo simulation. Double-loop RBDO minimizes structural weight subject to a reliability index constraint. The sensitivity of the reliability index with respect to the design variables is calculated from the gradient of the limit state function at the solution of the reliability analysis.
By using the ultimate strength direct solving method, semi-analytic sensitivity and gradient-based optimization algorithm, the RBDO method is found to be robust and efficient for nonlinear beam-columns. The ultimate strength direct solving method, semi-analytic sensitivity, structural optimization, reliability analysis, and RBDO method can be applied to more complicated engineering structures including stiffened panels and aerospace/ocean structures. / Ph. D. / This dissertation presents a Reliability-Based Design Optimization (RBDO) procedure for nonlinear beam-columns. The beam-column cross-section has asymmetric I shape and the nonlinear material model allows plastic deformation. Structural optimization minimizes the structural weight while maintaining an ultimate strength level, i.e. the maximum load it can carry. In reality, the geometric parameters and material properties of the beam-column vary from the design value. These uncertain variations will affect the strength of the structure. Structural reliability analysis accounts for the uncertainties in structural design. Reliability index is a measurement of the structure’s probability of failure by considering these uncertainties. RBDO minimizes the structural weight while maintaining the reliability level of the beam-column. A novel numerical method is presented which solves an explicit set of equations to obtain the maximum strength of the beam-column directly. By using this method, the RBDO procedure is found to be efficient and robust.
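The complex-step trick used above for the residual derivatives can be shown in a few lines. The function here is a toy stand-in for the structural residual equations, chosen only because its derivative is known analytically:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: df/dx ≈ Im(f(x + ih)) / h.
    There is no subtraction of nearly equal values, so h can be made
    tiny and the result is accurate to machine precision, unlike
    finite differences."""
    return np.imag(f(x + 1j * h)) / h

# Toy residual-like function with a known analytic derivative to check against.
f = lambda x: np.exp(x) * np.sin(x)
dfdx_exact = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))

x0 = 0.7
cs = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8    # central difference, for comparison
```

The central difference is limited by the step-size trade-off between truncation and round-off error, while the complex step is not, which is what makes it attractive inside sensitivity equations where derivative accuracy drives the optimizer.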
208

Methods for parameterizing and exploring Pareto frontiers using barycentric coordinates

Daskilewicz, Matthew John 08 April 2013 (has links)
The research objective of this dissertation is to create and demonstrate methods for parameterizing the Pareto frontiers of continuous multi-attribute design problems using barycentric coordinates, and in doing so, to enable intuitive exploration of optimal trade spaces. This work is enabled by two observations about Pareto frontiers that have not been previously addressed in the engineering design literature. First, the observation that the mapping between non-dominated designs and Pareto efficient response vectors is a bijection almost everywhere suggests that points on the Pareto frontier can be inverted to find their corresponding design variable vectors. Second, the observation that certain common classes of Pareto frontiers are topologically equivalent to simplices suggests that a barycentric coordinate system will be more useful for parameterizing the frontier than the Cartesian coordinate systems typically used to parameterize the design and objective spaces. By defining such a coordinate system, the design problem may be reformulated from y = f(x) to (y,x) = g(p) where x is a vector of design variables, y is a vector of attributes and p is a vector of barycentric coordinates. Exploration of the design problem using p as the independent variables has the following desirable properties: 1) Every vector p corresponds to a particular Pareto efficient design, and every Pareto efficient design corresponds to a particular vector p. 2) The number of p-coordinates is equal to the number of attributes regardless of the number of design variables. 3) Each attribute y_i has a corresponding coordinate p_i such that increasing the value of p_i corresponds to a motion along the Pareto frontier that improves y_i monotonically. The primary contribution of this work is the development of three methods for forming a barycentric coordinate system on the Pareto frontier, two of which are entirely original. 
The first method, named "non-domination level coordinates," constructs a coordinate system based on the (k-1)-attribute non-domination levels of a discretely sampled Pareto frontier. The second method is based on a modification to an existing "normal boundary intersection" multi-objective optimizer that adaptively redistributes its search basepoints in order to sample the entire frontier uniformly; the weights associated with each basepoint can then serve as a coordinate system on the frontier. The third method, named "Pareto simplex self-organizing maps," uses a modified self-organizing map training algorithm with a barycentric-grid node topology to iteratively conform a coordinate grid to the sampled Pareto frontier.
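Properties 1 and 3 of the barycentric parameterization can be demonstrated on a small analytic biobjective problem. The problem and the coordinate construction below are illustrative only, far simpler than the dissertation's three methods:

```python
import numpy as np

# Illustrative biobjective problem (not from the dissertation):
# minimize f1(x) = x^2 and f2(x) = (x - 1)^2 for x in [0, 1].
# Every x in [0, 1] is Pareto efficient, so the frontier is a 1-simplex.
x = np.linspace(0.0, 1.0, 101)
F = np.column_stack([x ** 2, (x - 1) ** 2])   # sampled Pareto frontier

# Two-attribute barycentric coordinates (p1, p2) with p1 + p2 = 1, built so
# that increasing p1 improves attribute 1 monotonically (here: decreases f1).
p1 = (F[:, 0].max() - F[:, 0]) / (F[:, 0].max() - F[:, 0].min())
P = np.column_stack([p1, 1.0 - p1])

def frontier_point(p):
    """Map a barycentric vector p (with p1 + p2 = 1) back to (y, x),
    i.e. the reformulation (y, x) = g(p) from the abstract."""
    i = np.argmin(np.abs(P[:, 0] - p[0]))     # nearest sampled coordinate
    return F[i], x[i]
```

Note that p has as many components as there are attributes (two), even though in a larger problem the design vector x could have any number of variables, which is property 2 in the abstract.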
209

Development And Design Optimization Of Laminated Composite Structures Using Failure Mechanism Based Failure Criterion

Naik, G Narayana 12 1900 (has links)
In recent years, the use of composites has been increasing in most fields of engineering, such as aerospace, automotive, civil construction, marine, and prosthetics, because of their light weight, very high specific strength and stiffness, corrosion resistance, and high thermal resistance. The specific strength of fibers is orders of magnitude greater than that of metals. Thus, laminated fiber-reinforced plastics have emerged as attractive materials for many engineering applications. Though the uses of composites are enormous, there is always an element of fuzziness in the design of composites. Composite structures are required to be designed to resist high stresses, and for this one requires a reliable failure criterion. The anisotropic behaviour of composites makes it very difficult to formulate failure criteria and verify them experimentally, which requires one to perform the necessary biaxial tests and plot the failure envelopes. Failure criteria are usually based on certain assumptions, which are sometimes questionable, because the failure process in composites is quite complex. Failure in a composite is normally based on initiating failure mechanisms such as fiber breaks, fiber compressive failure, matrix cracks, matrix crushing, delamination, disbonds, or a combination of these. The initiating failure mechanism is the one responsible for initiating failure in a laminated composite. Initiating failure mechanisms generally depend on the type of loading, geometry, material properties, conditions of manufacture, boundary conditions, weather conditions, etc. Since composite materials exhibit directional properties, their applications and failure conditions should be properly examined, and in addition, robust computational tools have to be exploited for the design of structural components for efficient utilisation of these materials.
Design of structural components requires reliable failure criteria for the safe design of the components. Several failure criteria are available for the design of composite laminates, but none of the available anisotropic strength criteria represents observed results accurately enough to be employed confidently by itself in design. Most failure criteria are validated against available uniaxial test data, whereas in practical situations laminates are subjected to at least biaxial states of stress. Since biaxial test data are very difficult and time-consuming to generate, it is a necessity to develop computational tools for modelling the biaxial behaviour of composite laminates. Understanding of the initiating failure mechanisms and the development of reliable failure criteria are essential prerequisites for effective utilization of composite materials. Most failure criteria consider uniaxial test data with constant shear stress to develop failure envelopes, but in reality structures are subjected to biaxial normal stresses as well as shear stresses. Hence, one can develop different failure envelopes depending upon the percentage of shear stress content. As mentioned earlier, safe design of composite structural components requires a reliable failure criterion. Currently two broad approaches, namely (1) Damage Tolerance Based Design and (2) Failure Criteria Based Design, are in use for the design of laminated structures in the aerospace industry. Both approaches have limitations. Damage tolerance based design suffers from a lack of proper definition of damage and the inability of analytical tools to handle realistic damage. Failure criteria based design, although relatively more attractive in view of its simplicity, forces the designer to use unverified design points in stress space, resulting in unpredictable failure conditions.
Generally, failure envelopes are constructed using 4 or 5 experimental constants. In this approach, small experimental errors in these constants lead to large shifts in the failure boundaries, raising doubts about the reliability of the boundary in some segments. Further, the envelopes contain segments which have no experimental support and so can lead to either conservative or nonconservative designs. A conservative design leads to extra weight, a situation not acceptable in the aerospace industry, whereas a nonconservative design is obviously prohibitive, as it implies failure. Hence, both damage tolerance based design and failure criteria based design have limitations, and a new method which combines the advantages of both approaches is desirable. This issue has been thoroughly debated at many international conferences on composites, and several pioneers in the composite industry have indicated the need for further research on the development of reliable failure criteria. This motivated the present research on a new failure criterion for the design of composite structures. Several expert meetings have been held worldwide to assess existing failure theories and computer codes for the design of composite structures. One such meeting, on 'Failure of Polymeric Composites and Structures: Mechanisms and Criteria for the Prediction of Performance', was held at St. Albans (UK) in 1991 by the UK Science & Engineering Council and the UK Institution of Mechanical Engineers. After thorough deliberations it was concluded that 1. There is no universal definition of failure of composites. 2. There is little or no faith in the failure criteria in current use, and 3.
There is a need to carry out a World Wide Failure Exercise (WWFE). Based on the experts' suggestions, Hinton and Soden initiated the WWFE in consultation with Prof. Bryan Harris (Editor, Composites Science and Technology), as a program to obtain a comparative assessment of existing failure criteria and codes, with the following aims: 1. Establish the current level of maturity of theories for predicting the failure response of fiber-reinforced plastic (FRP) laminates. 2. Close the knowledge gap between theoreticians and design practitioners in this field. 3. Stimulate the composites community into providing design engineers with more robust and accurate failure prediction methods, and the confidence to use them. The organisers invited pioneers in the composite industry to take part in the WWFE. Among them, Professor Hashin declined to participate and wrote to the organisers saying: 'My only work in this subject relates to failure criteria of unidirectional fiber composites, not to laminates. I do not believe that even the most complete information about failure of single plies is sufficient to predict the failure of a laminate consisting of such plies. A laminate is a structure which undergoes a complex damage process (mostly of cracking) until it finally fails. The analysis of such a process is a prerequisite for failure analysis. While significant advances have been made in this direction, we have not yet arrived at the practical goal of failure prediction.' Another important conference, Composites for the Next Millennium (Proceedings of the Symposium in honor of S.W. Tsai on his 70th birthday, Tours, France, July 2-3, 1999, p. 19), reached conclusions similar to those of the 1991 UK meeting. Paul A. Lagace and S.
Mark Spearing, referring to the article 'Predicting Failure in Composite Laminates: the Background to the Exercise' by M.J. Hinton and P.D. Soden (Composites Science and Technology, Vol. 58, No. 7, 1998, p. 1005), pointed out that 'after over thirty years of work, "the" composite failure criterion is still an elusive entity'. Numerous researchers have produced dozens of approaches; hundreds of papers, manuscripts, and reports have been written, and presentations made, to address the latest thoughts, add data to accumulated knowledge bases, and continue the scholarly debate. The outcome of these expert meetings is that there is a need to develop new failure theories, and, due to the complexities associated with experimentation, especially obtaining biaxial data, computational methods are the only viable alternative. Currently, biaxial data on composites are very limited, as biaxial testing of laminates is very difficult and standardization of biaxial data is yet to be done. All these comments and suggestions motivated research towards the development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', based on initiating failure mechanisms. The objectives of the thesis are 1. Identification of the failure-mechanism-based failure criteria for the specific initiating failure mechanisms, and assignment of a specific failure criterion to each initiating failure mechanism, 2. Use of the 'failure mechanism based design' method for composite pressurant tanks, and its evaluation by comparison with some of the standard 'failure criteria' based designs from the point of view of overall weight of the pressurant tank, 3. Development of a new failure criterion called the 'Failure Mechanism Based Failure Criterion' without shear stress content, and the corresponding failure envelope, 4. Development of different failure envelopes including the effect of shear stress, depending upon the percentage of shear stress content, and 5.
Design of composite laminates with the Failure Mechanism Based Failure Criterion using optimization techniques such as Genetic Algorithms (GA) and Vector Evaluated Particle Swarm Optimization (VEPSO), and comparison of the designs with other failure criteria such as the Tsai-Wu and Maximum Stress failure criteria. The following paragraphs describe the achievement of these objectives. In chapter 2, a rectangular panel subjected to boundary displacements is used as an example to illustrate the concept of failure mechanism based design. Composite laminates are generally designed using a failure criterion based on a set of standard experimental strength values. Failure of composite laminates involves different failure mechanisms depending upon the stress state, and so different failure mechanisms become dominant at different points on the failure envelope. Use of a single failure criterion, as is normally done in designing laminates, is unlikely to be satisfactory for all combinations of stresses. As an alternative, the use of a simple failure criterion to identify the dominant failure mechanism, followed by design of the laminate using an appropriate failure-mechanism-based criterion, is suggested in this thesis. A complete 3-D stress analysis has been carried out using the general-purpose NISA finite element software. Comparison of results using standard failure criteria such as Maximum Stress, Maximum Strain, Tsai-Wu, Yamada-Sun, Maximum Fiber Strain, Grumman, O'Brien, and Lagace indicates substantial differences in predicting first-ply failure. Results for failure load factors based on the failure mechanism based approach are included. Identification of the failure mechanism at highly stressed regions, and design of the component to withstand an artificial defect representative of this failure mechanism, provides a realistic approach to achieving the necessary strength without adding unnecessary weight to the structure.
It is indicated that the failure mechanism based design approach offers a reliable way of assessing critically stressed regions and eliminates the uncertainties associated with the choice of failure criterion. In chapter 3, the failure mechanism based design approach is applied to composite pressurant tanks for the upper stages of launch vehicles and the propulsion systems of spacecraft. The problem is studied by introducing an artificial matrix crack, representative of the initiating failure mechanism, in the highly stressed regions, and the strain energy release rates (SERR) are calculated. The total SERR value is obtained as 3330.23 J/m², which is very high compared to the critical value Gc (135 J/m²), meaning the crack will grow further. The failure load fraction at which the crack has a tendency to grow is estimated to be 0.04054. The results indicate significant differences in the failure load fraction between the different failure criteria. Comparison with the Failure Mechanism Based Criterion (FMBC) clearly indicates that matrix cracks occur at loads much below the design load, yet the fibers are able to carry the design load. In chapter 4, a Failure Mechanism Based Failure Criterion (FMBFC) is proposed for the development of failure envelopes for unidirectional composite plies. A representative volume element of the laminate under local loading is modelled micromechanically to predict the experimentally determined strengths, and this model is then used to predict points on the failure envelope in the neighborhood of the experimental points. The NISA finite element software is used to determine the stresses in the representative volume element, and from these micro-stresses the strength of the lamina is predicted.
A correction factor is used to match the prediction of the present model with the experimentally determined strength, so that the model can be expected to provide accurate predictions of strength in the neighborhood of the experimental points. A procedure for the construction of the failure envelope in stress space is outlined, and the results are compared with some of the standard failure criteria widely used in the composite industry. Comparison with the Tsai-Wu failure criterion shows significant differences, particularly in the third quadrant, where the ply is under bi-axial compressive loading, while comparison with the Maximum Stress criterion indicates better correlation. The present failure mechanism based approach thus opens a new possibility of constructing reliable failure envelopes for bi-axial loading applications using standard uniaxial test data. In chapter 5, the new failure criterion developed in chapter 4 for the zero-shear-stress condition is extended to obtain failure envelopes in the presence of shear stress. The approach is based on a micromechanical analysis of the composite, wherein a representative volume consisting of a fiber surrounded by matrix in the appropriate volume fraction is modeled using 3-D finite elements to predict the strengths. In this chapter, different failure envelopes are developed by varying the shear stress from 0% to 100% of the shear strength in steps of 25%. Results obtained from this approach are compared with the Tsai-Wu and Maximum Stress failure criteria, and the predicted strengths match more closely with the Maximum Stress criterion. Hence, it can be concluded that the influence of shear stress on the failure of the lamina is of little consequence as far as the prediction of laminate strength is concerned.
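The contrast between the two reference criteria discussed above can be illustrated numerically. The sketch below is not the thesis's FMBFC implementation; the strength values are typical carbon/epoxy numbers assumed purely for illustration. It evaluates the Maximum Stress and Tsai-Wu indices for a ply under biaxial compression, the third-quadrant regime where the largest disagreement is reported:

```python
# Illustrative comparison of two standard ply failure criteria.
# Strength values are typical carbon/epoxy numbers chosen for
# illustration only; they are NOT taken from the thesis.
Xt, Xc = 1500.0, 1200.0   # longitudinal tensile / compressive strength (MPa)
Yt, Yc = 50.0, 250.0      # transverse tensile / compressive strength (MPa)
S = 70.0                  # in-plane shear strength (MPa)

def max_stress_index(s1, s2, t12):
    """Maximum Stress criterion: failure when any index reaches 1."""
    i1 = s1 / Xt if s1 >= 0 else -s1 / Xc
    i2 = s2 / Yt if s2 >= 0 else -s2 / Yc
    i3 = abs(t12) / S
    return max(i1, i2, i3)

def tsai_wu_index(s1, s2, t12):
    """Tsai-Wu polynomial criterion: failure when the index reaches 1."""
    F1 = 1 / Xt - 1 / Xc
    F2 = 1 / Yt - 1 / Yc
    F11 = 1 / (Xt * Xc)
    F22 = 1 / (Yt * Yc)
    F66 = 1 / S**2
    F12 = -0.5 * (F11 * F22) ** 0.5   # common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Biaxial compression (third quadrant of the failure envelope):
s1, s2, t12 = -600.0, -100.0, 0.0
print(max_stress_index(s1, s2, t12))  # 0.5
print(tsai_wu_index(s1, s2, t12))     # approx. -0.9
```

For this stress state the Maximum Stress index is 0.5, while the Tsai-Wu polynomial comes out strongly negative because its linear terms dominate under compression, a concrete example of how differently the two criteria rate the same third-quadrant load.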
In chapter 6, the failure mechanism based failure criterion developed by the authors is used for the design optimization of laminates, and the percentage savings in total laminate weight are presented. The design optimization is performed using genetic algorithms, one of the most robust tools available for the optimum design of composite laminates. Genetic algorithms employ techniques originating in biology and depend on the application of Darwin's principle of survival of the fittest. When a population of biological creatures is permitted to evolve over generations, individual characteristics that are beneficial for survival tend to be passed on to future generations, since individuals carrying them get more chances to breed. In biological populations, these characteristics are stored in chromosomal strings. The mechanics of natural genetics derives from operations that produce an arranged yet randomized exchange of genetic information between the chromosomal strings of the reproducing parents, and consists of reproduction, crossover, mutation, and inversion of the chromosomal strings. Here, minimization of the weight of composite laminates for given loading and material properties is considered. The genetic algorithm selects the ply orientations, the thickness of a single ply, the number of plies and the stacking sequence of the layers. In this chapter, minimum-weight designs of composite laminates are presented using the Failure Mechanism Based (FMB), Maximum Stress and Tsai-Wu failure criteria. The objective is to demonstrate the effectiveness of the newly proposed FMB Failure Criterion (FMBFC) in composite design. The FMBFC considers different failure mechanisms, such as fiber breaks, matrix cracks, fiber compressive failure and matrix crushing, which are relevant under different loading conditions.
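The GA mechanics summarized above (selection, crossover, mutation acting on a ply-stacking "chromosome") can be sketched as follows. The fitness model is a deliberately simplified toy, in which each ply is assumed to carry load only in its own orientation, and the capacities and applied loads are invented for illustration; it is not the thesis's NISA-based analysis or any of its failure criteria:

```python
import random

random.seed(0)

# Toy fitness model (illustrative only): each ply carries load only in its
# primary direction; the laminate must carry axial, transverse and shear
# line loads Nx, Ny, Nxy. All numbers below are assumptions.
PLY_CAPACITY = 100.0            # load carried per ply in its own direction
Nx, Ny, Nxy = 650.0, 280.0, 400.0
ANGLES = [0, 45, 90]            # candidate ply orientations
MAX_PLIES = 24

def fitness(layup):
    """Lower is better: ply count plus penalties for unmet load demands."""
    c0 = layup.count(0) * PLY_CAPACITY
    c90 = layup.count(90) * PLY_CAPACITY
    c45 = layup.count(45) * PLY_CAPACITY
    penalty = sum(max(0.0, need - have)
                  for need, have in ((Nx, c0), (Ny, c90), (Nxy, c45)))
    return len(layup) + 10.0 * penalty

def crossover(a, b):
    """One-point crossover of two stacking sequences."""
    if min(len(a), len(b)) < 2:
        return a[:]
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(layup):
    """Drop a ply, re-orient a ply, or add a ply."""
    layup = layup[:]
    op = random.random()
    if op < 0.4 and len(layup) > 1:
        layup.pop(random.randrange(len(layup)))      # saves weight
    elif op < 0.8:
        layup[random.randrange(len(layup))] = random.choice(ANGLES)
    elif len(layup) < MAX_PLIES:
        layup.append(random.choice(ANGLES))
    return layup

def run_ga(pop_size=40, generations=200):
    pop = [[random.choice(ANGLES) for _ in range(MAX_PLIES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]              # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = run_ga()
print(len(best), sorted(best))
```

Under this toy model the ideal design needs 7 plies at 0°, 3 at 90° and 4 at 45° (14 plies total), and the GA converges to or near that count; swapping in a real failure criterion would only change the `fitness` function.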
The FMB and Maximum Stress failure criteria predict up to 43 percent savings in laminate weight compared to the Tsai-Wu failure criterion in some quadrants of the failure envelope. Conversely, the Tsai-Wu criterion over-predicts the weight of the laminate by up to 86 percent in the third quadrant of the failure envelope, when the laminate is subjected to biaxial compressive loading. The FMB and Maximum Stress failure criteria are found to give comparable weight estimates, and the FMBFC can be considered for use in the strength design of composite structures. In chapter 7, particle swarm optimization is used for the design optimization of composite laminates. Particle swarm optimization (PSO) is a meta-heuristic inspired by the flocking behaviour of birds, and its application to composite design optimization problems has not yet been extensively explored. Composite laminate optimization typically consists of determining the number of layers, the stacking sequence and the ply thickness that give the desired properties. This chapter details the use of the Vector Evaluated Particle Swarm Optimization (VEPSO) algorithm, a multi-objective variant of PSO, for composite laminate design optimization. VEPSO is a modern coevolutionary algorithm which employs multiple swarms to handle the multiple objectives, with information migration between the swarms helping to ensure that a globally optimal solution is reached. The problem is formulated as a classical multi-objective optimization problem, with the objectives of minimizing the weight of the component for a required strength and minimizing the total cost incurred, such that the component does not fail. An optimum configuration for a multi-layered unidirectional carbon/epoxy laminate is determined using VEPSO, and results are presented for various loading configurations of the composite structures.
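The multi-swarm information migration that defines VEPSO can be sketched on a textbook two-objective problem rather than the thesis's weight/cost model: here Schaffer's f1 = x² and f2 = (x − 2)², whose Pareto-optimal set is the interval [0, 2]. Each swarm minimizes one objective but uses the other swarm's best particle as its social guide, which is the migration mechanism described above:

```python
import random

random.seed(1)

# Two illustrative objectives (Schaffer's test problem, NOT the thesis's
# weight/cost model): f1 = x^2 and f2 = (x - 2)^2.
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

class Swarm:
    def __init__(self, objective, n=20, lo=-10.0, hi=10.0):
        self.objective = objective
        self.x = [random.uniform(lo, hi) for _ in range(n)]
        self.v = [0.0] * n
        self.pbest = self.x[:]                     # personal bests
        self.gbest = min(self.x, key=objective)    # swarm best

    def step(self, other_gbest, w=0.7, c1=1.5, c2=1.5):
        # VEPSO's key idea: the social term uses the OTHER swarm's best,
        # so each swarm is also pulled toward the other objective's optimum.
        for i in range(len(self.x)):
            r1, r2 = random.random(), random.random()
            self.v[i] = (w * self.v[i]
                         + c1 * r1 * (self.pbest[i] - self.x[i])
                         + c2 * r2 * (other_gbest - self.x[i]))
            self.x[i] += self.v[i]
            if self.objective(self.x[i]) < self.objective(self.pbest[i]):
                self.pbest[i] = self.x[i]
        self.gbest = min(self.pbest, key=self.objective)

s1, s2 = Swarm(f1), Swarm(f2)
for _ in range(100):
    g1, g2 = s1.gbest, s2.gbest
    s1.step(other_gbest=g2)   # swarm 1 guided by swarm 2's best
    s2.step(other_gbest=g1)   # and vice versa
print(s1.gbest, s2.gbest)     # both should land in or near [0, 2]
```

In a laminate design setting, the two objectives would be replaced by the weight and cost evaluations, and the particle coordinates by the encoded layup variables; the cross-swarm guidance is what remains the same.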
VEPSO predicts the same minimum weight and the same percentage savings in laminate weight as the GA for all loading conditions. There are small differences between the results predicted by VEPSO and GA for some loading and stacking sequence configurations, mainly due to the random selection of swarm particles and the random generation of populations, respectively; these differences can be mitigated by running the same programme repeatedly. The thesis concludes by highlighting the future scope of several potential applications based on the developments reported here.

Modelling and autoresonant control design of ultrasonically assisted drilling applications

Li, Xuan January 2014 (has links)
The aim of the research is to employ the autoresonant control technique to maintain the nonlinear oscillation mode at resonance (i.e. ultrasonic vibration at the tip of a drill bit at a constant level) during the vibro-impact process. Numerical simulations and experiments have been carried out. A simplified Matlab-Simulink model of the ultrasonically assisted machining process consists of two parts: the first represents an ultrasonic transducer comprising a piezoelectric transducer and a 2-step concentrator (waveguide); the second reflects the load applied to the ultrasonic transducer by the vibro-impact process. Parameters of the numerical models were established from experimental measurements, and the validity of the model was confirmed through experiments performed on an electromechanical ultrasonic transducer. The model of the ultrasonic transducer together with the model of the applied load was supplemented with a model of the autoresonant control system. Autoresonant control provides a self-tuning and self-adaptation mechanism that allows an ultrasonic transducer to maintain its resonant regime of oscillation automatically by means of positive feedback, achieved through transformation and amplification of a controlled signal (see Figure 7.2 and Figure 7.3). In order to examine the effectiveness and efficiency of the autoresonant control system, three control strategies were employed, depending on the attributes of the signal to be controlled. Mechanical feedback control uses the displacement signal at the end of the 2nd step of the ultrasonic transducer; the other two strategies are current feedback control and power feedback control.
Current feedback control employs the electrical current flowing through the piezoceramic rings (piezoelectric transducer) as the controlled signal, while power feedback control takes into account both the electrical current and the power of the ultrasonic transducer. A comparison of the results of exciting the ultrasonic vibrating system with the different control strategies is presented. It should be noted that the tool effect is not considered in the numerical simulation, owing to the complexity a drill bit introduces during the Ultrasonically Assisted Drilling (UAD) process. An effective autoresonant control system was developed and manufactured for the machining experiments. UAD experiments were performed to validate and compare with the numerical results: drill bits of two diameters, 3 mm and 6 mm, were used in combination with the three autoresonant control strategies while drilling aluminium alloys at one fixed rotational speed with several different feed rates. Vibration levels, control effort and feed force reduction were monitored during the experiments, and examinations of hole quality and surface finish supplement the analysis of the autoresonant control results. In addition, the universal matchbox (transformer) was investigated: introducing a varying air gap between two ferrite cores allows optimization of the ultrasonic vibrating system in terms of vibration level, effective matchbox inductance, voltage and current levels, phase difference between voltage and current, supplied active power, etc. (for more details, refer to Appendix I).
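The autoresonant principle described above, maintaining resonance by feeding a transformed, amplified version of a measured signal back as the drive, can be illustrated with a toy single-degree-of-freedom model. This is not the thesis's transducer model; all parameter values are invented for illustration. A relay drive held in phase with velocity pumps energy at the system's current natural frequency, so the oscillation frequency tracks resonance even when the stiffness drifts mid-run (as a machining load would detune a real transducer):

```python
# Toy autoresonance demo (parameters are illustrative assumptions):
# damped oscillator m*x'' + c*x' + k*x = F0*sign(x'), where the drive is
# positive feedback of the velocity sign. The limit-cycle frequency is
# (approximately) the natural frequency sqrt(k/m)/(2*pi), so when k jumps
# at t = 30 s the oscillation re-locks to the new resonance automatically.
m, c, F0 = 1.0, 0.2, 1.0
dt = 1e-4
x, v, t = 1e-3, 0.0, 0.0        # small initial perturbation; self-starts
up_crossings = []                # times when x crosses zero going upward
while t < 60.0:
    k = 100.0 if t < 30.0 else 150.0     # stiffness drifts at t = 30 s
    drive = F0 * (1.0 if v >= 0 else -1.0)
    a = (drive - c * v - k * x) / m      # Newton's second law
    v += a * dt                           # semi-implicit Euler step
    x_new = x + v * dt
    if x <= 0.0 < x_new:
        up_crossings.append(t)
    x = x_new
    t += dt

def freq_in(t0, t1):
    """Oscillation frequency estimated from zero-crossing count."""
    n = sum(1 for tc in up_crossings if t0 <= tc < t1)
    return n / (t1 - t0)

f_before = freq_in(20.0, 30.0)   # expect ~sqrt(100)/(2*pi) = 1.59 Hz
f_after = freq_in(50.0, 60.0)    # expect ~sqrt(150)/(2*pi) = 1.95 Hz
print(f_before, f_after)
```

Because the feedback drive always acts in phase with velocity, it behaves as negative damping at whatever frequency the structure currently prefers; this is the same reason the real autoresonant loop needs the phase-shifting and amplification stages rather than a fixed-frequency generator.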
