  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Metodologia para coordenação otimizada entre relés de distância e direcionais de sobrecorrente em sistemas de transmissão de energia elétrica / Methodology for optimized coordination of distance and directional overcurrent relays in electrical transmission systems

Vinícius de Cillo Moro 05 September 2014 (has links)
A proteção de sistemas de energia elétrica possui papel extremamente importante no aspecto de garantir o fornecimento de energia de maneira segura e confiável. Assim, a ação indevida ou a não atuação deste sistema de proteção pode causar danos materiais ou econômicos tanto para as concessionárias quanto para os consumidores de energia elétrica. Dessa forma, o sistema de proteção deve estar bem ajustado para que possa garantir suas funções, sendo sensível, seletivo, confiável e rápido. Para tanto, uma boa coordenação entre os relés de proteção deve ser estabelecida. No caso de um sistema de transmissão, o qual costuma ser um sistema malhado, a proteção é comumente realizada por relés de distância aliados a relés de sobrecorrente com unidade direcional, sendo que estes funcionam como elemento de retaguarda daqueles. O processo de ajuste desses relés é um trabalho muito difícil e demorado, que pode ainda estar sujeito a erros do engenheiro de proteção responsável pelo estudo. Neste contexto, este trabalho tem como objetivo desenvolver uma metodologia baseada na otimização por enxame de partículas que obtenha automaticamente os ajustes desses relés de forma a garantir a coordenação e seletividade entre eles, tornando assim o processo de ajuste mais rápido e preciso. Dessa forma, essa metodologia pode constituir uma ferramenta de auxílio muito favorável ao engenheiro de proteção. Além disso, como em todo problema de otimização, a função objetivo e as restrições foram definidas de maneira a retratar o problema de coordenação envolvendo tanto os relés de distância quanto os direcionais de sobrecorrente. A metodologia foi aplicada a dois sistemas, um fictício com 16 relés e um sistema de transmissão real com 44 relés, sendo que em ambos os casos ela apresentou resultados bastante satisfatórios proporcionando ajustes bem coordenados. / Electric power system protection plays an extremely important role in ensuring a safe and reliable energy supply.
Thus, improper operation or failure to operate of this protection system can cause material and/or economic damage to electricity utilities as well as to ordinary energy consumers. Therefore, the protection system must be well adjusted so that it can fulfill its functions, being sensitive, selective, reliable and fast. To achieve these characteristics, the protective relays must be well coordinated. In meshed transmission systems, protection is generally performed by distance relays as primary protection, backed up by directional overcurrent relays. The process of setting these relays is difficult and slow, and can even be subject to mistakes by the protection engineer. In this context, this work aims to develop a particle swarm optimization based methodology that automatically obtains the settings of these relays so as to ensure their coordination and selectivity, making the setting process faster and more precise. Thus, this methodology may provide a very valuable tool to aid the protection engineer. Moreover, as in any optimization problem, the objective function and the constraints were defined to represent the coordination problem involving both distance and directional overcurrent relays. The methodology was applied to two systems, a fictitious one with 16 relays and a real transmission system with 44 relays; in both cases it produced satisfactory, well-coordinated settings.
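As a minimal illustration of the metaheuristic these theses build on, here is a generic continuous particle swarm optimizer in Python. The sphere objective, parameter values and function names are illustrative assumptions, not the authors' relay-coordination formulation:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over a box-bounded continuous search space."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, whose minimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

In the relay-coordination setting, each particle position would encode the relay settings and the objective would penalize coordination and selectivity violations; the sphere function merely stands in for such an objective here.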
332

Algoritmo enxame de partículas discreto para coordenação de relés direcionais de sobrecorrente em sistemas elétricos de potência / Discrete particle swarm algorithm for directional overcurrent relays coordination in electric power system

Bernardes, Wellington Maycon Santos 26 March 2013 (has links)
Este trabalho propõe uma metodologia baseada em técnicas inteligentes capaz de fornecer uma coordenação otimizada de relés direcionais de sobrecorrente instalados em sistemas de energia elétrica. O problema é modelado como um caso de programação não linear inteira mista, em que os relés permitem ajustes discretizados de múltiplos de tempo e/ou múltiplos de corrente. A solução do problema de otimização correspondente é obtida através de uma metaheurística nomeada como Discrete Particle Swarm Optimization. Na literatura técnico-científica esse problema geralmente é linearizado e aplicam-se arredondamentos das variáveis discretas. Na metodologia proposta, as variáveis discretas são tratadas adequadamente para utilização na metaheurística e são apresentados os resultados que foram comparados com os obtidos pelo modelo clássico de otimização implementado no General Algebraic Modeling System (GAMS). Tendo em vista os aspectos observados, o método permite ao engenheiro de proteção ter um subsídio adicional na tarefa da coordenação dos relés direcionais de sobrecorrente, disponibilizando uma técnica eficaz e de fácil aplicabilidade ao sistema elétrico a ser protegido, independentemente da topologia e condição operacional. / This work proposes a methodology based on intelligent techniques to obtain an optimized coordination of directional overcurrent relays in electric power systems. The problem is modeled as a mixed integer nonlinear program, because the relays allow discrete settings of time and/or current multipliers. The solution of the corresponding optimization problem is obtained with a metaheuristic named Discrete Particle Swarm Optimization. In the scientific and technical literature this problem is usually linearized and the discrete variables are rounded off.
In the proposed method, the discrete variables are modeled adequately within the metaheuristic, and the results are compared with those of the classical optimization solvers implemented in the General Algebraic Modeling System (GAMS). The method gives protection engineers valuable additional support in the task of coordinating directional overcurrent relays, providing an effective technique that is easy to apply to the electrical system to be protected, regardless of topology and operating condition.
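The key difference from the rounding approach can be sketched as a projection step applied inside each particle move, so particles always live on the discrete setting grid. The time-multiplier grid and helper names below are hypothetical, since actual relay setting ranges vary by device:

```python
def snap(value, allowed):
    """Map a continuous position to the nearest allowed discrete setting."""
    return min(allowed, key=lambda a: abs(a - value))

# Hypothetical discretized time-multiplier settings: 0.05 to 1.00 in steps of 0.05.
tms_grid = [round(0.05 * k, 2) for k in range(1, 21)]

def discrete_update(position, velocity, allowed):
    """One discrete PSO move: apply the velocity, then project onto the grid."""
    return [snap(p + v, allowed) for p, v in zip(position, velocity)]

new_pos = discrete_update([0.10, 0.50], [0.02, -0.13], tms_grid)  # → [0.1, 0.35]
```

Projecting at every step keeps each candidate feasible throughout the search, instead of rounding a continuous optimum afterwards and possibly landing on an infeasible or poorly coordinated setting.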
333

Applications of Artificial Intelligence in Power Systems

Rastgoufard, Samin 18 May 2018 (has links)
Artificial intelligence tools, which are fast, robust and adaptive, can overcome the drawbacks of traditional solutions to several power system problems. In this work, applications of AI techniques have been studied for solving two important problems in power systems. The first problem is static security evaluation (SSE). The objective of SSE is to identify contingencies in the planning and operation of power systems. Conventional numerical solutions are time-consuming, computationally expensive, and unsuitable for online applications. SSE may be posed as a binary classification, multi-classification or regression problem. In this work, the multi-class support vector machine is combined with several evolutionary computation algorithms, including particle swarm optimization (PSO), differential evolution, ant colony optimization for the continuous domain, and harmony search, to solve the SSE. Moreover, support vector regression is combined with a modified PSO, using a proposed modification of the inertia weight, to solve the SSE. The classification accuracy, the training speed, and the final cost of using power equipment also depend heavily on the selected input features; in this dissertation, multi-objective PSO has been used to solve this feature selection problem. Furthermore, a multi-classifier voting scheme is proposed to obtain the final test output. The classifiers participating in the voting scheme include multi-class SVMs with different types of kernels and random forests with an adaptive number of trees. In short, the development and performance of different machine learning tools combined with evolutionary computation techniques have been studied to solve the online SSE. The performance of the proposed techniques is tested on several benchmark systems, namely the IEEE 9-bus, 14-bus, 39-bus, 57-bus, 118-bus, and 300-bus power systems. The second problem is the non-convex, nonlinear, and non-differentiable economic dispatch (ED) problem.
The purpose of solving the ED is to improve the cost-effectiveness of power generation. To solve the ED with multi-fuel options, prohibited operating zones, valve point effects, and transmission line losses, genetic algorithm (GA) variant-based methods, such as breeder GA, fast navigating GA, twin removal GA, kite GA, and United GA, are used. The IEEE systems with 6, 10, and 15 units are used to study the efficiency of the algorithms.
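The multi-classifier voting scheme described above can be reduced to a hard-vote combiner over the per-classifier labels. This is a generic sketch, not the dissertation's implementation; the label values and classifier count are invented for illustration:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine one sample's labels from several classifiers.

    Returns the most frequent label; on a tie, the label seen first wins
    (Counter.most_common preserves insertion order for equal counts).
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels from five classifiers (e.g. SVMs with different kernels
# and random forests) for one operating point: 1 = secure, 0 = insecure.
votes = [1, 0, 1, 1, 0]
label = majority_vote(votes)   # → 1
```

A soft-vote variant would average class probabilities instead of counting labels, which typically helps when the classifiers are well calibrated.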
334

Optimisation par essaims particulaires pour la logistique urbaine / Particle Swarm Optimization for urban logistics

Peng, Zhihao 18 July 2019 (has links)
Dans cette thèse, nous nous intéressons à la gestion des flux de marchandises en zone urbaine aussi appelée logistique du dernier kilomètre, et associée à divers enjeux d’actualité : économique, environnemental, et sociétal. Quatre principaux acteurs sont concernés par ces enjeux : chargeurs, clients, transporteurs et collectivités, ayant chacun des priorités différentes (amélioration de la qualité de service, minimisation de la distance parcourue, réduction des émissions de gaz à effet de serre, …). Face à ces défis dans la ville, un levier d’action possible consiste à optimiser les tournées effectuées pour la livraison et/ou la collecte des marchandises. Trois types de flux urbains sont considérés : en provenance ou à destination de la ville, et intra-urbains. Pour les flux sortants et entrants dans la ville, les marchandises sont d’abord regroupées dans un entrepôt situé en périphérie urbaine. S’il existe plusieurs entrepôts, le problème de planification associé est de type Location Routing Problem (LRP). Nous en étudions une de ses variantes appelée Capacitated Location Routing Problem (CLRP). Dans cette dernière, en respectant la contrainte de capacité imposée sur les véhicules et les dépôts, la localisation des dépôts et la planification des tournées sont considérées en même temps. L’objectif est de minimiser le coût total qui est constitué du coût d’ouverture des dépôts, du coût d’utilisation des véhicules, et du coût de la distance parcourue. Pour tous les flux, nous cherchons également à résoudre un problème de tournées de type Pickup and Delivery Problem (PDP), dans lequel une flotte de véhicules effectue simultanément des opérations de collecte et de livraison. 
Nous nous sommes focalisés sur deux de ses variantes : la variante sélective où toutes les demandes ne sont pas toujours satisfaites, dans un contexte de demandes appairées et de sites contraints par des horaires d’ouverture et fermeture (Selective Pickup and Delivery Problem with Time Windows and Paired Demands, ou SPDPTWPD). La seconde variante étudiée est l’extension de la première en ajoutant la possibilité d’effectuer les transports en plusieurs étapes par l’introduction d’opérations d’échanges des marchandises entre véhicules en des sites de transfert (Selective Pickup and Delivery with Transfers ou SPDPT). Les objectifs considérés pour ces deux variantes de PDP sont de maximiser le profit et de minimiser la distance. Chaque problème étudié fait l’objet d’une description formelle, d’une modélisation mathématique sous forme de programme linéaire, puis d’une résolution par des méthodes exactes, heuristiques et/ou métaheuristiques. En particulier nous avons développé des algorithmes basés sur une métaheuristique appelée Particle Swarm Optimization, que nous avons hybridée avec de la recherche locale. Les approches sont validées sur des instances de différentes tailles issues de la littérature et/ou que nous avons générées. Les résultats sont analysés de façon critique pour mettre en évidence les avantages et inconvénients de chaque méthode. / In this thesis, we are interested in the management of goods flows in urban areas, also called last mile logistics, and associated with various current issues: economic, environmental, and societal. Four main stakeholders are involved by these challenges: shippers, customers, carriers and local authorities, each with different priorities (improving service quality, minimizing the travelling distance, reducing greenhouse gas emissions, etc.). Faced with these challenges in the city, one possible action lever is to optimize the routes for the pickup and/or delivery of goods. 
Three types of urban flows are considered: from or to the city, and intra-urban. For outgoing and incoming flows, the goods are first grouped in a warehouse located in the suburban area of the city. If there are several warehouses, the associated planning problem is the Location Routing Problem (LRP). We study one of its variants, called the Capacitated Location Routing Problem (CLRP). In this problem, while respecting the capacity constraints on vehicles and depots, the location of depots and route planning are considered at the same time. The objective is to minimize the total cost, which consists of the cost of opening depots, the cost of using vehicles, and the cost of the travelled distance. For all flows, we also solve a Pickup and Delivery Problem (PDP), in which a fleet of vehicles simultaneously carries out pickup and delivery operations. We focus on two of its variants: the selective variant, where not all requests are satisfied, in a context of paired demands and time windows on sites (Selective Pickup and Delivery Problem with Time Windows and Paired Demands, or SPDPTWPD); and the extension of the first variant, which adds the possibility of carrying out transport in several stages by introducing operations for the exchange of goods between vehicles at transfer sites (Selective Pickup and Delivery with Transfers, or SPDPT). The objectives considered for these two variants of the PDP are to maximize profit and to minimize distance. Each studied problem is formally described, mathematically modelled as a linear program, and then solved by exact, heuristic and/or metaheuristic methods. In particular, we have developed algorithms based on a metaheuristic called Particle Swarm Optimization, which we have hybridized with local search operators. The approaches are validated on instances of different sizes, either taken from the literature or generated by us.
The results are critically analyzed to highlight the advantages and drawbacks of each method.
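The CLRP objective described above (depot opening cost + vehicle usage cost + travelled distance) can be sketched as a simple evaluation function. All identifiers and numbers below are illustrative, not the thesis's instances:

```python
def clrp_cost(open_depots, routes, depot_opening_cost, vehicle_cost, dist):
    """Total CLRP cost: depot opening + vehicle usage + travelled distance.

    `routes` maps each open depot to its list of routes; each route is a node
    sequence starting and ending at that depot. Names here are illustrative.
    """
    cost = sum(depot_opening_cost[d] for d in open_depots)
    for depot, depot_routes in routes.items():
        cost += vehicle_cost * len(depot_routes)          # one vehicle per route
        for route in depot_routes:
            cost += sum(dist[a][b] for a, b in zip(route, route[1:]))
    return cost

# Toy instance: one depot "D", two customers "a" and "b", one vehicle.
dist = {"D": {"a": 2, "b": 3}, "a": {"b": 1, "D": 2}, "b": {"D": 3, "a": 1}}
total = clrp_cost({"D"}, {"D": [["D", "a", "b", "D"]]},
                  depot_opening_cost={"D": 10}, vehicle_cost=5, dist=dist)  # → 21
```

In the metaheuristic, this function (plus capacity-feasibility checks) would serve as the fitness evaluated for every candidate assignment of depots and routes.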
335

Caractérisation, identification et optimisation des systèmes mécaniques complexes par mise en oeuvre de simulateurs hybrides matériels/logiciels / Characterization, identification and optimization of complex mechanical systems by implementing hybrid hardware/software simulators

Salmon, Sébastien 21 May 2012 (has links)
La conception de systèmes complexes, et plus particulièrement de micro-systèmes complexes embarqués, pose des problèmes tels que l'intégration des composants, la consommation d'énergie, la fiabilité, les délais de mise sur le marché, etc. La conception mécatronique apparaît comme étant particulièrement adaptée à ces systèmes car elle intègre intimement simulations, expérimentations, interactions entre sous-systèmes et cycles de reconception à tous les niveaux. Le produit obtenu est plus optimisé, plus performant et les délais de mise sur le marché sont réduits. Cette thèse a permis de trouver des méthodes de caractérisation, d'identification de paramètres ainsi que d'optimisation de systèmes mécatroniques actifs par la constitution de modèles numériques, de bancs d'expériences numériques, physiques et hybrides. Le cadre est bien précis : c'est celui d'un actionneur piézoélectrique amplifié, de sa commande ainsi que de la constitution générale de la boucle fermée d'un système mécatronique l'intégrant, les conclusions étant généralisables. Au cours de cette thèse ont été introduits, avec succès, différents concepts :
– Le « Signal Libre ». Un nouveau signal de commande des actionneurs piézoélectriques, basé sur les splines, maximise la vitesse de déplacement de l'actionneur et minimise sa consommation énergétique.
– Deux améliorations de l'algorithme d'optimisation par essaims particulaires. La première introduit un arrêt de l'algorithme par la mesure du rayon de l'essaim ; le rayon limite est défini par la limite de mesurabilité des paramètres à optimiser (« Radius »). La seconde ajoute la possibilité pour l'essaim de se transférer à une meilleure position tout en gardant sa géométrie, ce qui permet d'accélérer la convergence (« BSG-Starcraft »).
– L'optimisation expérimentale. Le modèle numérique étant très incertain, il est remplacé directement par le système réel dans le processus d'optimisation.
Les résultats sont de qualité supérieure à ceux obtenus à partir de la simulation numérique. / The design of complex systems, especially of embedded complex micro-systems, raises problems such as component integration, power consumption, reliability, and time-to-market. Mechatronic design appears to be particularly suitable for these systems because it closely integrates simulations, experiments, interactions between subsystems and redesign cycles at all levels. The resulting product is more optimized and more efficient, and time-to-market is reduced. This thesis led to methods of characterization and parameter identification, as well as to methods for optimizing active mechatronic systems, through numerical model building and different bench types, i.e. digital, physical, and hybrid. The framework is specifically that of an amplified piezoelectric actuator, its control, and the general constitution of the closed loop of the related mechatronic system; the conclusions are generalizable. In this thesis, different concepts have been successfully introduced:
– The "Free Signal". A new control signal for piezoelectric actuators, based on splines, that maximizes the speed of the actuator movement and minimizes its energy consumption.
– Two improvements of the particle swarm optimization algorithm. The first introduces a stopping criterion based on the swarm radius; the limit radius is defined by the measurability limit of the parameters to be optimized ("Radius"). The second adds a swarm ability: it can jump to a better location while keeping its geometry, which allows a faster convergence rate ("BSG-Starcraft").
– Experimental optimization. The numerical model being very uncertain, it is directly replaced by the real system in the optimization process. This leads to better results than those obtained using numerical simulation.
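The "Radius" stopping criterion admits a compact sketch: stop the swarm once its radius (the largest particle distance from the swarm centroid) falls below the measurability limit of the parameters being identified. The positions and the limit value below are invented for illustration:

```python
import math

def swarm_radius(positions):
    """Largest particle distance from the swarm centroid."""
    dim = len(positions[0])
    centroid = [sum(p[d] for p in positions) / len(positions) for d in range(dim)]
    return max(math.dist(p, centroid) for p in positions)

# Hypothetical measurability limit of the parameters being identified:
# once the swarm is tighter than this, further iterations cannot be
# distinguished experimentally, so the search may stop.
measurability_limit = 1e-3

positions = [[0.5000, 1.0000], [0.5004, 1.0003], [0.4998, 0.9999]]
converged = swarm_radius(positions) < measurability_limit   # → True
```

Tying the stopping threshold to what the physical bench can actually resolve avoids wasting hardware-in-the-loop evaluations on refinements below measurement noise.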
336

Prediction of properties and optimal design of microstructure of multi-phase and multi-layer C/SiC composites / La prédiction des propriétés et l'optimisation de la microstructure des composites multi-phases et multi-couches C/SiC

Xu, Yingjie 08 July 2011 (has links)
Les matériaux composites à matrice de carbure de silicium renforcée par des fibres de carbone (C/SiC) sont des composites à matrice céramique (CMC), très prometteurs pour des applications à haute température, comme le secteur aéronautique. Dans cette thèse sont menées des études particulières concernant les propriétés de ces matériaux : prédiction des propriétés mécaniques (élastiques), analyses thermiques (optimisation des contraintes thermiques), simulation de l'oxydation à haute température. Une méthode basée sur l'énergie de déformation est proposée pour la prédiction des constantes élastiques et des coefficients de dilatation thermique de matériaux composites orthotropes 3D. Dans cette méthode, les constantes élastiques et les coefficients de dilatation thermique sont obtenus en analysant la relation entre l'énergie de déformation de la microstructure et celle du modèle homogénéisé équivalent sous certaines conditions aux limites thermiques et élastiques. Différents types de matériaux composites sont testés pour valider le modèle. Différentes configurations géométriques du volume élémentaire représentatif des composites C/SiC (2D tissés et 3D tressés) sont analysées en détail. Pour ce faire, la méthode énergétique a été couplée à une analyse éléments finis. Des modèles EF des composites C/SiC ont été développés et liés à cette méthode énergétique pour évaluer les constantes élastiques et les coefficients de dilatation thermique. Pour valider la modélisation proposée, les résultats numériques sont ensuite comparés à des résultats expérimentaux. Pour poursuivre cette analyse, une nouvelle stratégie d'analyse « globale/locale » (multi-échelle) est développée pour la détermination détaillée des contraintes dans les structures composites 2D tissés C/SiC. Sur la base de l'analyse par éléments finis, la procédure effectue un passage de la structure composite homogénéisée (échelle macro : modèle global) au modèle détaillé de la fibre (échelle micro : modèle local).
Ce passage entre les deux échelles est réalisé à partir des résultats de l'analyse globale et des conditions aux limites du modèle local. Les contraintes obtenues via cette approche sont ensuite comparées à celles obtenues à l'aide d'une analyse EF classique. La prise en compte des contraintes résiduelles thermiques (contraintes d'origine thermique dans les fibres et la matrice) joue un rôle majeur dans le comportement des composites à matrices céramiques. Leurs valeurs influencent fortement la contrainte de microfissuration de la matrice. Dans cette thèse, on cherche donc à minimiser cette contrainte résiduelle thermique (TRS) par une méthode d'optimisation de type métaheuristique : Particle Swarm Optimization (PSO), optimisation par essaims particulaires. Des modèles éléments finis du volume élémentaire représentatif de composites 1D unidirectionnels C/SiC avec des interfaces multi-couches sont générés et une analyse par éléments finis est réalisée afin de déterminer les contraintes résiduelles thermiques. Un schéma d'optimisation couple l'algorithme PSO avec la MEF pour réduire les contraintes résiduelles thermiques dans les composites C/SiC en optimisant les épaisseurs des interfaces multi-couches. Un modèle numérique est développé pour étudier le processus d'oxydation de la microstructure et la dégradation des propriétés élastiques de composites 2D tissés C/SiC oxydés à température intermédiaire (T < 900 °C). La microstructure du volume élémentaire représentatif de composite oxydé est modélisée sur la base de la cinétique d'oxydation. La méthode de l'énergie de déformation est ensuite appliquée au modèle éléments finis de la microstructure oxydée pour prédire les propriétés élastiques des composites. Les paramètres d'environnement, à savoir la température et la pression, sont étudiés pour voir leurs influences sur le comportement d'oxydation des composites C/SiC.
/ Carbon fiber-reinforced silicon carbide matrix (C/SiC) composite is a ceramic matrix composite (CMC) that has considerable promise for use in high-temperature structural applications. In this thesis, systematic numerical studies, including the prediction of elastic and thermal properties, the analysis and optimization of stresses, and the simulation of high-temperature oxidation, are presented for the investigation of C/SiC composites. A strain energy method is first proposed for the prediction of the effective elastic constants and coefficients of thermal expansion (CTEs) of 3D orthotropic composite materials. This method derives the effective elastic tensors and CTEs by analyzing the relationship between the strain energy of the microstructure and that of the homogenized equivalent model under specific thermo-elastic boundary conditions. Different kinds of composites are tested to validate the model. Geometrical configurations of the representative volume cell (RVC) of 2-D woven and 3-D braided C/SiC composites are analyzed in detail. The finite element models of 2-D woven and 3-D braided C/SiC composites are then established and combined with the strain energy method to evaluate the effective elastic constants and CTEs of these composites. Numerical results obtained by the proposed model are then compared with the results measured experimentally. A global/local analysis strategy is developed for the determination of the detailed stresses in 2-D woven C/SiC composite structures. On the basis of the finite element analysis, the procedure is carried out sequentially from the homogenized composite structure at the macro-scale (global model) to the parameterized detailed fiber tow model at the micro-scale (local model). The bridge between the two scales is realized by mapping the global analysis result as the boundary conditions of the local tow model.
The stress results obtained by the global/local method are finally compared to those obtained by conventional finite element analyses. Optimal design for minimizing the thermal residual stress (TRS) in 1-D unidirectional C/SiC composites is studied. The finite element models of the RVC of 1-D unidirectional C/SiC composites with multi-layer interfaces are generated, and finite element analysis is carried out to determine the TRS distributions. An optimization scheme which combines a modified Particle Swarm Optimization (PSO) algorithm with the finite element analysis is used to reduce the TRS in the C/SiC composites by controlling the thicknesses of the multi-layer interfaces. A numerical model is finally developed to study the microstructure oxidation process and the degradation of elastic properties of 2-D woven C/SiC composites exposed to air oxidizing environments at intermediate temperature (T < 900°C). The oxidized RVC microstructure is modeled based on the oxidation kinetics analysis. The strain energy method is then combined with the finite element model of the oxidized RVC to predict the elastic properties of the composites. The environmental parameters, i.e., temperature and pressure, are studied to show their influences upon the oxidation behavior of C/SiC composites.
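The energy-equivalence idea behind the strain energy method can be illustrated in a drastically simplified 1-D iso-strain setting: impose a uniform strain, sum the strain energy of the phases, and back out the modulus of the homogeneous bar that stores the same energy. This is only a sketch of the principle, not the thesis's 3-D FEM homogenization, and the moduli values are indicative assumptions:

```python
def effective_modulus(phases, strain=1e-3):
    """Strain-energy equivalence in 1-D (iso-strain): impose a uniform strain,
    sum each phase's strain energy, and recover the modulus of the equivalent
    homogeneous bar of unit total volume. phases = [(modulus, volume_fraction)].
    """
    energy = sum(0.5 * E * strain**2 * v for E, v in phases)   # microstructure energy
    return 2.0 * energy / strain**2                            # homogenized model

# Illustrative stiffness values only (GPa): fiber phase and matrix phase.
e_eff = effective_modulus([(230.0, 0.4), (350.0, 0.6)])   # → 302.0
```

In 1-D with iso-strain this reduces to the rule of mixtures; the thesis's contribution lies in applying the same equivalence to full 3-D finite element models of woven and braided RVCs under several boundary-condition cases.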
337

Maintenance optimization for power distribution systems

Hilber, Patrik January 2008 (has links)
Maximum asset performance is one of the major goals for electric power distribution system operators (DSOs). To reach this goal, minimal life cycle cost and maintenance optimization become crucial while meeting demands from customers and regulators. One of the fundamental objectives is therefore to relate maintenance and reliability in an efficient and effective way. Furthermore, this necessitates the determination of the optimal balance between preventive and corrective maintenance, which is the main problem addressed in the thesis. The balance between preventive and corrective maintenance is approached as a multiobjective optimization problem, with the customer interruption costs on one hand and the maintenance budget of the DSO on the other. Solutions are obtained with meta-heuristics, developed for the specific problem, as well as with an Evolutionary Particle Swarm Optimization algorithm. The methods deliver a Pareto border, a set of several solutions, which the operator can choose between, depending on preferences. The optimization is built on component reliability importance indices, developed specifically for power systems. One vital aspect of the indices is that they work with several supply and load points simultaneously, addressing the multi-state reliability of power systems. For the computation of the indices both analytical and simulation based techniques are used. The indices constitute the connection between component reliability performance and system performance and so enable the maintenance optimization. The developed methods have been tested and improved in two case studies, based on real systems and data, proving the methods' usefulness and showing that they are ready to be applied to power distribution systems. It is in addition noted that the methods could, with some modifications, be applied to other types of infrastructures.
However, in order to perform the optimization, a reliability model of the studied power system is required, as well as estimates of the effects of maintenance actions (changes in failure rate) and their related costs. Given this, a generally decreased level of total maintenance cost and a better system reliability performance can be delivered to the DSO and customers respectively. This is achieved by focusing the preventive maintenance on components with a high potential for improvement from a system perspective.
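A Pareto border over (customer interruption cost, maintenance budget) pairs, with both objectives minimized, can be extracted with a simple dominance check. The candidate values below are invented for illustration:

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (both minimized here) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_border(solutions):
    """Keep only the non-dominated solutions, preserving input order."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (customer interruption cost, DSO maintenance budget) pairs.
candidates = [(10, 5), (8, 7), (12, 6), (9, 9), (7, 8)]
front = pareto_border(candidates)   # → [(10, 5), (8, 7), (7, 8)]
```

The operator then picks one point from the border according to preference, e.g. the cheapest budget that keeps interruption costs acceptable.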
338

Ultimate Load Capacity Of Optimally Designed Cellular Beams

Erdal, Ferhat 01 February 2011 (has links) (PDF)
Cellular beams have become increasingly popular as an efficient structural form in steel construction since their introduction. Their sophisticated design and profiling process provides greater flexibility in beam proportioning for strength, depth, and the size and location of circular holes. The purpose of manufacturing these beams is to increase the overall beam depth, the moment of inertia and the section modulus, which results in greater strength and rigidity. Cellular beams are used as primary or secondary floor beams in order to achieve long spans and service integration. They are also used as roof beams beyond the range of portal-frame construction, and are the perfect solution for curved roof applications, combining weight savings with a low-cost manufacturing process. The purpose of the current research is to study the optimum design, the ultimate load capacity under applied load, and the finite element analysis of non-composite cellular beams. The first part of the research program focuses on the optimum design of steel cellular beams using one of the stochastic search methods, the "harmony search algorithm". The minimum weight is taken as the design objective while the design constraints are implemented from the Steel Construction Institute. Design constraints include the displacement limitations, overall beam flexural capacity, beam shear capacity, overall beam buckling strength, web post flexure and buckling, Vierendeel bending of upper and lower tees, and local buckling of the compression flange. The design methods adopted in this publication are consistent with BS 5950. In the second part of the research, which is the experimental work, twelve non-composite cellular beams are tested to determine their ultimate load carrying capacities, using a hydraulic plug to apply a point load. The tested cellular beam specimens were designed using the harmony search algorithm.
Finally, a finite element analysis program is used to perform elastic buckling analysis and predict the critical loads of all the steel cellular beams. The finite element analysis results are then compared with the experimental test results for each tested cellular beam.
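The harmony search metaheuristic named above can be sketched generically: improvise new candidate solutions from a memory of good ones, with occasional pitch adjustment and random notes, replacing the worst memory entry when the improvisation is better. The toy objective and parameter values are illustrative assumptions, not the thesis's cellular-beam design problem:

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000):
    """Minimize `objective` with a basic harmony search.

    hms: harmony memory size; hmcr: memory considering rate;
    par: pitch adjusting rate; bw: pitch-adjustment bandwidth.
    """
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                  # reuse a memorized value
                x = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                       # random improvisation
                x = random.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        val = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if val < scores[worst]:                         # replace worst harmony
            memory[worst], scores[worst] = new, val
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy objective: 2-D sphere function, minimum 0 at the origin.
best, best_val = harmony_search(lambda x: sum(v * v for v in x), dim=2,
                                bounds=(-10.0, 10.0))
```

In the thesis's setting, the decision vector would encode discrete section and hole-geometry choices, and the objective would be the beam weight with penalty terms for the BS 5950 / SCI constraints.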
339

Development And Design Optimization Of Laminated Composite Structures Using Failure Mechanism Based Failure Criterion

Naik, G Narayana 12 1900 (has links)
In recent years, the use of composites has been increasing in most fields of engineering, such as aerospace, automotive, civil construction, marine and prosthetics, because of their light weight, very high specific strength and stiffness, corrosion resistance, high thermal resistance, etc. The specific strength of fibers is many orders of magnitude higher than that of metals. Thus, laminated fiber reinforced plastics have emerged as attractive materials for many engineering applications. Though the uses of composites are enormous, there is always an element of fuzziness in the design of composites. Composite structures are required to be designed to resist high stresses. For this, one requires a reliable failure criterion. The anisotropic behaviour of composites makes it very difficult to formulate failure criteria and verify them experimentally, which requires one to perform the necessary biaxial tests and plot the failure envelopes. Failure criteria are usually based on certain assumptions, which are sometimes questionable. This is because the failure process in composites is quite complex. The failure in a composite is normally based on initiating failure mechanisms such as fiber breaks, fiber compressive failure, matrix cracks, matrix crushing, delamination, disbonds, or a combination of these. The initiating failure mechanisms are the ones responsible for initiating failure in a laminated composite. Initiating failure mechanisms generally depend on the type of loading, geometry, material properties, conditions of manufacture, boundary conditions, weather conditions, etc. Since composite materials exhibit directional properties, their applications and failure conditions should be properly examined, and in addition, robust computational tools have to be exploited for the design of structural components for efficient utilisation of these materials.
Design of structural components requires reliable failure criteria for the safe design of the components. Several failure criteria are available for the design of composite laminates, but none of the available anisotropic strength criteria represents observed results accurately enough to be employed confidently by itself in design. Most failure criteria are validated against available uniaxial test data, whereas in practical situations laminates are subjected to at least biaxial states of stress. Since biaxial test data are very difficult and time consuming to obtain, it is a necessity to develop computational tools for modelling the biaxial behavior of composite laminates. Understanding the initiating failure mechanisms and developing reliable failure criteria are essential prerequisites for effective utilization of composite materials. Most failure criteria use uniaxial test data with constant shear stress to develop failure envelopes, but in reality structures are subjected to biaxial normal stresses as well as shear stresses. Hence, one can develop different failure envelopes depending upon the percentage of shear stress content. As mentioned earlier, safe design of composite structural components requires a reliable failure criterion. Currently two broad approaches, namely (1) Damage Tolerance Based Design and (2) Failure Criteria Based Design, are in use for the design of laminated structures in the aerospace industry. Both approaches have limitations. Damage tolerance based design suffers from a lack of a proper definition of damage and the inability of analytical tools to handle realistic damage. Failure criteria based design, although relatively more attractive in view of its simplicity, forces the designer to use unverified design points in stress space, resulting in unpredictable failure conditions. 
Generally, failure envelopes are constructed using 4 or 5 experimental constants. In this type of approach, small experimental errors in these constants lead to large shifts in the failure boundaries, raising doubts about the reliability of the boundary in some segments. Further, the envelopes contain segments which have no experimental support and so can lead to either conservative or nonconservative designs. A conservative design carries extra weight, a situation not acceptable in the aerospace industry, whereas a nonconservative design is obviously prohibitive, as it implies failure. Hence, both damage tolerance based design and failure criteria based design have limitations, and a new method which combines the advantages of both approaches is desirable. This issue has been thoroughly debated at many international conferences on composites, and several pioneers in the composite industry have indicated the need for further research towards the development of reliable failure criteria; this motivated the research work reported here on a new failure criterion for the design of composite structures. Several expert meetings have been held worldwide to assess existing failure theories and computer codes for the design of composite structures. One such meeting, on 'Failure of Polymeric Composites and Structures: Mechanisms and Criteria for the Prediction of Performance', was held at St. Albans (UK) in 1991 by the UK Science & Engineering Council and the Institution of Mechanical Engineers. After thorough deliberations it was concluded that: 1. There is no universal definition of failure of composites. 2. There is little or no faith in the failure criteria that are in current use, and 3. 
There is a need to carry out a World Wide Failure Exercise (WWFE). Based on these suggestions, Hinton and Soden initiated the WWFE, in consultation with Prof. Bryan Harris (Editor, Composites Science and Technology), as a program for comparative assessment of existing failure criteria and codes, with the following aims: 1. Establish the current level of maturity of theories for predicting the failure response of fiber reinforced plastic (FRP) laminates. 2. Close the knowledge gap between theoreticians and design practitioners in this field. 3. Stimulate the composites community into providing design engineers with more robust and accurate failure prediction methods, and the confidence to use them. The organisers invited pioneers in the composite industry to the WWFE program. Among them, Professor Hashin declined to participate and wrote to the organisers saying: 'My only work in this subject relates to failure criteria of unidirectional fiber composites, not to laminates. I do not believe that even the most complete information about failure of single plies is sufficient to predict the failure of a laminate consisting of such plies. A laminate is a structure which undergoes a complex damage process (mostly of cracking) until it finally fails. The analysis of such a process is a prerequisite for failure analysis. While significant advances have been made in this direction, we have not yet arrived at the practical goal of failure prediction.' Another important conference, Composites for the Next Millennium (Proceedings of a Symposium in Honor of S.W. Tsai on his 70th Birthday, Tours, France, July 2-3, 1999, p. 19), held in France in 1999, reached conclusions similar to those of the 1991 UK meeting. Paul A. Lagace and S. 
Mark Spearing, referring to the article 'Predicting Failure in Composite Laminates: the background to the exercise' by M.J. Hinton & P.D. Soden (Composites Science and Technology, Vol. 58, No. 7 (1998), p. 1005), have pointed out that after over thirty years of work, 'the' composite failure criterion is still an elusive entity. Numerous researchers have produced dozens of approaches; hundreds of papers, manuscripts and reports have been written and presentations made to address the latest thoughts, add data to accumulated knowledge bases and continue the scholarly debate. Thus, the outcome of these expert meetings is that there is a need to develop new failure theories and that, due to the complexities associated with experimentation, especially obtaining bi-axial data, computational methods are the only viable alternative. Currently, biaxial data on composites are very limited, as biaxial testing of laminates is very difficult and standardization of biaxial data is yet to be done. All these expert comments and suggestions motivated the research work reported here towards the development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', based on initiating failure mechanisms. The objectives of the thesis are: 1. Identification of failure criteria for specific initiating failure mechanisms, and assignment of a specific criterion to each initiating failure mechanism; 2. Use of the 'failure mechanism based design' method for composite pressurant tanks and its evaluation against some of the standard 'failure criteria' based designs from the point of view of the overall weight of the pressurant tank; 3. Development of the new 'Failure Mechanism Based Failure Criterion' without shear stress content and the corresponding failure envelope; 4. Development of different failure envelopes including the effect of shear stress, depending upon the percentage of shear stress content; and 5. 
Design of composite laminates with the Failure Mechanism Based Failure Criterion using optimization techniques such as Genetic Algorithms (GA) and Vector Evaluated Particle Swarm Optimization (VEPSO), and comparison of the designs with those based on other failure criteria such as the Tsai-Wu and Maximum Stress failure criteria. The following paragraphs describe the achievement of these objectives. In chapter 2, a rectangular panel subjected to boundary displacements is used as an example to illustrate the concept of failure mechanism based design. Composite laminates are generally designed using a failure criterion based on a set of standard experimental strength values. Failure of composite laminates involves different failure mechanisms depending upon the stress state, and so different failure mechanisms become dominant at different points on the failure envelope. Use of a single failure criterion, as is normally done in designing laminates, is unlikely to be satisfactory for all combinations of stresses. As an alternative, this thesis suggests the use of a simple failure criterion to identify the dominant failure mechanism, followed by design of the laminate using the appropriate failure mechanism based criterion. A complete 3-D stress analysis has been carried out using the general purpose NISA finite element software. Comparison of results using standard failure criteria such as Maximum Stress, Maximum Strain, Tsai-Wu, Yamada-Sun, Maximum Fiber Strain, Grumman, O'Brien and Lagace indicates substantial differences in predicting first ply failure. Results for failure load factors based on the failure mechanism based approach are included. Identification of the failure mechanism at highly stressed regions, and design of the component to withstand an artificial defect representative of this failure mechanism, provides a realistic approach to achieving the necessary strength without adding unnecessary weight to the structure. 
It is indicated that the failure mechanism based design approach offers a reliable way of assessing critically stressed regions and eliminates the uncertainties associated with the failure criteria. In chapter 3, the failure mechanism based design approach is applied to composite pressurant tanks of upper stages of launch vehicles and propulsion systems of spacecraft. The problem is studied by introducing an artificial matrix crack, representative of the initiating failure mechanism, in the highly stressed regions and calculating the strain energy release rate (SERR). The total SERR value is obtained as 3330.23 J/m2, which is very high compared to the Gc value (135 J/m2), meaning that the crack will grow further. The failure load fraction at which the crack has a tendency to grow is estimated to be 0.04054. Results indicate significant differences in the failure load fraction for different failure criteria. Comparison with the Failure Mechanism Based Criterion (FMBC) clearly indicates that matrix cracks occur at loads much below the design load, yet the fibers are able to carry the design load. In chapter 4, a Failure Mechanism Based Failure Criterion (FMBFC) is proposed for the development of failure envelopes for unidirectional composite plies. A representative volume element of the laminate under local loading is micromechanically modelled to predict the experimentally determined strengths, and this model is then used to predict points on the failure envelope in the neighborhood of the experimental points. The NISA finite element software is used to determine the stresses in the representative volume element, and from these micro-stresses the strength of the lamina is predicted. 
A correction factor is used to match the prediction of the present model with the experimentally determined strength, so that the model can be expected to provide accurate predictions of strength in the neighborhood of the experimental points. A procedure for the construction of the failure envelope in stress space is outlined, and the results are compared with some of the standard failure criteria widely used in the composite industry. Comparison with the Tsai-Wu failure criterion shows significant differences, particularly in the third quadrant, when the ply is under bi-axial compressive loading; comparison with the Maximum Stress criterion indicates better correlation. The present failure mechanism based approach opens a new possibility of constructing reliable failure envelopes for bi-axial loading applications using standard uniaxial test data. In chapter 5, the new failure criterion developed in chapter 4 for the no-shear-stress condition is extended to obtain failure envelopes with shear stress. The approach is based on micromechanical analysis of composites, wherein a representative volume consisting of a fiber surrounded by matrix in the appropriate volume fraction is modeled using 3-D finite elements to predict the strengths. Different failure envelopes are developed by varying the shear stress from 0% to 100% of the shear strength in steps of 25%. Results obtained from this approach are compared with the Tsai-Wu and Maximum Stress failure criteria; the predicted strengths match more closely with the Maximum Stress criterion. Hence, it can be concluded that the influence of shear stress on failure of the lamina is of little consequence as far as the prediction of laminate strengths is concerned. 
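For reference, the two standard plane-stress criteria the thesis compares against can be sketched as below. This is a generic textbook formulation, not the thesis's model; the ply strength values are illustrative placeholders (not the thesis's experimental data), and the common default value for the Tsai-Wu interaction term F12 is assumed.

```python
import math

# Illustrative (hypothetical) unidirectional ply strengths in MPa -- placeholders only
XT, XC = 1500.0, 1200.0   # longitudinal tension / compression
YT, YC = 50.0, 250.0      # transverse tension / compression
S = 70.0                  # in-plane shear

def tsai_wu_index(s1, s2, t12):
    """Tsai-Wu failure index for plane stress; failure is predicted when index >= 1."""
    f1, f2 = 1 / XT - 1 / XC, 1 / YT - 1 / YC
    f11, f22, f66 = 1 / (XT * XC), 1 / (YT * YC), 1 / S ** 2
    f12 = -0.5 * math.sqrt(f11 * f22)   # common default interaction term (assumed)
    return (f1 * s1 + f2 * s2 + f11 * s1 ** 2 + f22 * s2 ** 2
            + f66 * t12 ** 2 + 2 * f12 * s1 * s2)

def max_stress_index(s1, s2, t12):
    """Maximum Stress failure index: the largest stress-to-strength ratio."""
    return max(s1 / XT, -s1 / XC, s2 / YT, -s2 / YC, abs(t12) / S)
```

Note that the Tsai-Wu index is quadratic in the stresses and couples the normal stresses through F12, whereas the Maximum Stress criterion treats each stress component independently; this is precisely why the two criteria diverge most in the compression-compression (third) quadrant discussed above.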
In chapter 6, the failure mechanism based failure criterion is used for design optimization of laminates, and the percentage savings in total laminate weight are presented. The design optimization is performed using Genetic Algorithms (GA), one of the robust tools available for the optimum design of composite laminates. Genetic algorithms employ techniques originating in biology and depend on the application of Darwin's principle of survival of the fittest: when a population of biological creatures is permitted to evolve over generations, individual characteristics that are beneficial for survival tend to be passed on to future generations, since individuals carrying them get more chances to breed. In biological populations these characteristics are stored in chromosomal strings, and the mechanics of natural genetics derives from operations that result in an arranged yet randomized exchange of genetic information between the chromosomal strings of the reproducing parents, consisting of reproduction, crossover, mutation and inversion of the chromosomal strings. Here, optimization of the weight of composite laminates for given loading and material properties is considered; the genetic algorithm selects the ply orientations, the thickness of a single ply, the number of plies and the stacking sequence of the layers. In this chapter, minimum weight designs of composite laminates are presented using the Failure Mechanism Based (FMB), Maximum Stress and Tsai-Wu failure criteria. The objective is to demonstrate the effectiveness of the newly proposed FMB Failure Criterion (FMBFC) in composite design. The FMBFC considers different failure mechanisms, such as fiber breaks, matrix cracks, fiber compressive failure and matrix crushing, which are relevant for different loading conditions. 
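The GA machinery described above (chromosomes encoding ply choices, selection, crossover, mutation, survival of the fittest) can be illustrated with a deliberately simplified minimum-weight problem. The "strength check" here is a toy ply-count requirement standing in for a real failure-criterion evaluation, and every parameter (angle set, required counts, penalty weight, GA settings) is a hypothetical choice for illustration, not the thesis's formulation.

```python
import random

ANGLES = [0, 45, -45, 90]
MAX_PLIES = 16
# Toy constraint: minimum ply counts per direction, a stand-in for a real
# failure-criterion check (hypothetical numbers; key 45 counts +45 and -45).
REQUIRED = {0: 3, 90: 2, 45: 2}

def ply_counts(chrom):
    c = {0: 0, 90: 0, 45: 0}
    for g in chrom:
        if g is not None:
            c[abs(g)] += 1
    return c

def shortfall(chrom):
    c = ply_counts(chrom)
    return sum(max(0, need - c[k]) for k, need in REQUIRED.items())

def fitness(chrom):
    # Lower is better: ply count (a weight proxy) plus a heavy infeasibility penalty.
    weight = sum(g is not None for g in chrom)
    return weight + 100 * shortfall(chrom)

def random_chrom():
    # Genes: a ply angle, or None meaning "no ply in this slot".
    return [random.choice(ANGLES + [None, None]) for _ in range(MAX_PLIES)]

def crossover(a, b):
    cut = random.randrange(1, MAX_PLIES)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.choice(ANGLES + [None, None]) if random.random() < rate else g
            for g in chrom]

def ga(pop_size=60, generations=150, seed=1):
    random.seed(seed)
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 4]               # elitist survival
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)        # parents drawn from the elite
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return min(pop, key=fitness)
```

A run of `ga()` returns a stacking pattern that satisfies the toy constraint with few plies; a real laminate optimizer would replace `shortfall` with a first-ply-failure check under the chosen failure criterion.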
The FMB and Maximum Stress failure criteria predict up to 43 percent savings in laminate weight compared to the Tsai-Wu failure criterion in some quadrants of the failure envelope. The Tsai-Wu failure criterion over-predicts the weight of the laminate by up to 86 percent in the third quadrant of the failure envelope compared to the FMB and Maximum Stress failure criteria, when the laminate is subjected to biaxial compressive loading. The FMB and Maximum Stress failure criteria are found to give comparable weight estimates, and the FMBFC can be considered for use in the strength design of composite structures. In chapter 7, particle swarm optimization is used for design optimization of composite laminates. Particle swarm optimization (PSO) is a meta-heuristic inspired by the flocking behaviour of birds, and its application to composite design optimization problems has not yet been extensively explored. Composite laminate optimization typically consists of determining the number of layers, the stacking sequence and the ply thickness that give the desired properties. This chapter details the use of the Vector Evaluated Particle Swarm Optimization (VEPSO) algorithm, a multi-objective variant of PSO, for composite laminate design optimization. VEPSO is a modern coevolutionary algorithm which employs multiple swarms to handle the multiple objectives; information migration between these swarms helps ensure that a global optimum solution is reached. The problem is formulated as a classical multi-objective optimization problem, with the objectives of minimizing the weight of the component for a required strength and minimizing the total cost incurred, such that the component does not fail. An optimum configuration for a multi-layered unidirectional carbon/epoxy laminate is determined using VEPSO, and results are presented for various loading configurations of the composite structures. 
VEPSO predicts the same minimum weight and the same percentage weight savings as GA for all loading conditions. There are small differences between the results predicted by VEPSO and GA for some loading and stacking sequence configurations, due mainly to the random selection of swarm particles and the random generation of populations, respectively; these differences can be reduced by repeated runs of the same program. The thesis concludes by highlighting the future scope of several potential applications based on the developments reported in it.
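A minimal sketch of the VEPSO scheme described above — one swarm per objective, with each swarm's particles steered by the other swarm's global best so that information migrates between swarms — is given below on a toy two-variable surrogate (thickness and fiber volume fraction). The weight, cost and strength models, the penalty handling and all coefficients are invented for illustration and are not the thesis's formulation.

```python
import random

BOUNDS = [(0.5, 10.0), (0.0, 0.7)]   # design vars: thickness, fiber volume fraction
REQUIRED_STRENGTH = 1000.0           # hypothetical strength constraint

# Invented linear surrogates (illustrative only):
def strength(t, vf): return t * (100.0 + 900.0 * vf)
def weight(t, vf):   return t * (1.2 + 0.4 * vf)    # objective 1: minimise weight
def cost(t, vf):     return t * (1.0 + 20.0 * vf)   # objective 2: minimise cost

def penalised(obj, x):
    # Penalty method: understrength designs are heavily penalised.
    t, vf = x
    return obj(t, vf) + 1000.0 * max(0.0, REQUIRED_STRENGTH - strength(t, vf))

def clamp(x):
    return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, BOUNDS)]

def vepso(n=30, iters=150, seed=3):
    random.seed(seed)
    objs = [lambda x: penalised(weight, x), lambda x: penalised(cost, x)]
    swarms = []
    for obj in objs:
        xs = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
        swarms.append({"x": xs, "v": [[0.0, 0.0] for _ in range(n)],
                       "pb": [list(p) for p in xs],
                       "gb": min(xs, key=obj), "obj": obj})
    for _ in range(iters):
        for i, s in enumerate(swarms):
            guide = swarms[1 - i]["gb"]   # VEPSO migration: neighbour swarm's best
            for k in range(n):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    s["v"][k][d] = (0.7 * s["v"][k][d]
                                    + 1.5 * r1 * (s["pb"][k][d] - s["x"][k][d])
                                    + 1.5 * r2 * (guide[d] - s["x"][k][d]))
                s["x"][k] = clamp([p + v for p, v in zip(s["x"][k], s["v"][k])])
                if s["obj"](s["x"][k]) < s["obj"](s["pb"][k]):
                    s["pb"][k] = list(s["x"][k])
            s["gb"] = min(s["pb"], key=s["obj"])
    return swarms[0]["gb"], swarms[1]["gb"]
```

Each swarm keeps its own personal bests on its own objective, while the social term pulls it towards the other swarm's best solution; the two returned designs approximate the two ends of the weight-cost trade-off for this toy model.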
340

Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique

Oteniya, Lloyd January 2008 (has links)
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches, namely the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful; however, issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, selected a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter construction approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To address this, approximate, heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, thereby reducing the search space. Traditionally, however, this approach relies on an expert providing the order among the variables --- an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to ''learn'' a good order and then use the order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures --- we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and in the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a ''real-life'' medical domain, dementia diagnosis. 
We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
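The order-based construction idea the abstract refers to — given an ordering of the variables, choose each node's parents only from its predecessors, so the search space shrinks dramatically — can be sketched as a K2-style greedy search with a BIC score on binary data. This is a generic illustration under stated assumptions (binary variables, a simple BIC penalty, a synthetic chain A, B, C used as demo data), not the thesis's novel algorithm.

```python
import math
import random

def bic_family_score(data, child, parents):
    """Log-likelihood of `child` given `parents`, minus a BIC complexity penalty.
    `data` is a list of dicts mapping variable name -> 0/1."""
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / (c0 + c1))
    n_params = 2 ** len(parents)       # one Bernoulli parameter per parent config
    return ll - 0.5 * n_params * math.log(n)

def k2_greedy(data, order, max_parents=2):
    """For each node, greedily add the predecessor (in `order`) that most
    improves the family score, stopping when no addition helps."""
    parents = {v: [] for v in order}
    for i, child in enumerate(order):
        best = bic_family_score(data, child, parents[child])
        while len(parents[child]) < max_parents:
            options = [(bic_family_score(data, child, parents[child] + [c]), c)
                       for c in order[:i] if c not in parents[child]]
            if not options:
                break
            score, cand = max(options)
            if score <= best:
                break
            best = score
            parents[child].append(cand)
    return parents

def make_chain_data(n=2000, seed=0):
    """Synthetic demo data from the chain A -> B -> C with 10% flip noise."""
    random.seed(seed)
    rows = []
    for _ in range(n):
        a = int(random.random() < 0.5)
        b = a ^ int(random.random() < 0.1)
        c = b ^ int(random.random() < 0.1)
        rows.append({"A": a, "B": b, "C": c})
    return rows
```

Given the order A, B, C, the greedy search recovers the chain: B's only candidate parent is A, and for C the score of parent set {B} dominates {A} or {A, B} because C is independent of A given B, so the BIC penalty rejects the extra parent.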
