231

Maintenance optimization for power distribution systems

Hilber, Patrik January 2008
Maximum asset performance is one of the major goals for electric power distribution system operators (DSOs). To reach this goal, minimal life cycle cost and maintenance optimization become crucial while meeting demands from customers and regulators. One of the fundamental objectives is therefore to relate maintenance and reliability in an efficient and effective way. This necessitates determining the optimal balance between preventive and corrective maintenance, which is the main problem addressed in the thesis. The balance between preventive and corrective maintenance is approached as a multiobjective optimization problem, with the customer interruption costs on one hand and the maintenance budget of the DSO on the other. Solutions are obtained with meta-heuristics developed for the specific problem, as well as with an Evolutionary Particle Swarm Optimization algorithm. The methods deliver a Pareto border, a set of several solutions, from which the operator can choose depending on preferences. The optimization is built on component reliability importance indices, developed specifically for power systems. One vital aspect of the indices is that they work with several supply and load points simultaneously, addressing the multistate reliability of power systems. For the computation of the indices, both analytical and simulation-based techniques are used. The indices constitute the connection between component reliability performance and system performance and so enable the maintenance optimization. The developed methods have been tested and improved in two case studies based on real systems and data, proving the methods' usefulness and showing that they are ready to be applied to power distribution systems. It is in addition noted that the methods could, with some modifications, be applied to other types of infrastructure. However, in order to perform the optimization, a reliability model of the studied power system is required, as well as estimates of the effects of maintenance actions (changes in failure rate) and their related costs. Given this, a generally decreased level of total maintenance cost and better system reliability performance can be delivered to the DSO and customers respectively. This is achieved by focusing preventive maintenance on components with a high potential for improvement from a system perspective.
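To make the bi-objective formulation above concrete, the following minimal Python sketch samples candidate preventive-maintenance allocations and extracts a Pareto border between maintenance budget and customer interruption cost. The component failure rates, cost coefficients and the effort-to-failure-rate model are invented placeholders, not the thesis's reliability model or its meta-heuristics.

```python
# Hedged sketch: Pareto border of (maintenance budget, interruption cost)
# over randomly sampled preventive-maintenance allocations. All numbers
# and the effort -> failure-rate model are invented for illustration.
import math
import random

random.seed(1)

N_COMPONENTS = 5
BASE_RATE = [0.8, 0.5, 1.2, 0.3, 0.9]    # failures/year per component (assumed)
IMPORTANCE = [4.0, 1.5, 6.0, 0.8, 3.2]   # customer cost per failure (assumed)

def objectives(effort):
    """Return (maintenance cost, expected interruption cost) for an
    allocation of preventive effort in [0, 1] per component."""
    maint = sum(10.0 * e for e in effort)
    interruption = sum(c * r * math.exp(-3.0 * e)
                       for c, r, e in zip(IMPORTANCE, BASE_RATE, effort))
    return maint, interruption

def pareto_border(points):
    """Keep points not dominated in both objectives (minimization)."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

candidates = [tuple(random.random() for _ in range(N_COMPONENTS))
              for _ in range(2000)]
for cost, ic in pareto_border([objectives(c) for c in candidates])[:10]:
    print(f"budget {cost:6.2f}  interruption cost {ic:6.2f}")
```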
232

Development And Design Optimization Of Laminated Composite Structures Using Failure Mechanism Based Failure Criterion

Naik, G Narayana 12 1900
In recent years, the use of composites has been increasing in most fields of engineering, such as aerospace, automotive, civil construction, marine and prosthetics, because of their light weight, very high specific strength and stiffness, corrosion resistance and high thermal resistance. The specific strength of fibers is orders of magnitude higher than that of metals. Thus, laminated fiber reinforced plastics have emerged as attractive materials for many engineering applications. Though the uses of composites are enormous, there is always an element of fuzziness in the design of composites. Composite structures are required to be designed to resist high stresses, for which one requires a reliable failure criterion. The anisotropic behaviour of composites makes it very difficult to formulate failure criteria and verify them experimentally, which requires one to perform the necessary bi-axial tests and plot the failure envelopes. Failure criteria are usually based on certain assumptions, which are sometimes questionable, because the failure process in composites is quite complex. Failure in a composite is normally based on initiating failure mechanisms such as fiber breaks, fiber compressive failure, matrix cracks, matrix crushing, delamination, disbonds or a combination of these. The initiating failure mechanism is the one responsible for initiating failure in a laminated composite. Initiating failure mechanisms generally depend on the type of loading, geometry, material properties, conditions of manufacture, boundary conditions, weather conditions, etc. Since composite materials exhibit directional properties, their applications and failure conditions should be properly examined, and robust computational tools have to be exploited in the design of structural components for efficient utilisation of these materials. Design of structural components requires reliable failure criteria for safe design. Several failure criteria are available for the design of composite laminates, but none of the available anisotropic strength criteria represents observed results sufficiently accurately to be employed confidently by itself in design. Most failure criteria are validated against available uniaxial test data, whereas in practical situations laminates are subjected to at least biaxial states of stress. Since biaxial test data are very difficult and time consuming to generate, it is indeed necessary to develop computational tools for modelling the biaxial behavior of composite laminates. Understanding the initiating failure mechanisms and developing reliable failure criteria is an essential prerequisite for effective utilization of composite materials. Most failure criteria consider uniaxial test data with constant shear stress to develop failure envelopes, but in reality structures are subjected to biaxial normal stresses as well as shear stresses. Hence, one can develop different failure envelopes depending on the percentage of shear stress content. As mentioned earlier, safe design of composite structural components requires a reliable failure criterion. Currently two broad approaches, namely (1) Damage Tolerance Based Design and (2) Failure Criteria Based Design, are in use for the design of laminated structures in the aerospace industry. Both approaches have limitations.
The damage tolerance based design suffers from a lack of a proper definition of damage and the inability of analytical tools to handle realistic damage. The failure criteria based design, although relatively more attractive in view of its simplicity, forces the designer to use unverified design points in stress space, resulting in unpredictable failure conditions. Generally, failure envelopes are constructed using 4 or 5 experimental constants. In this type of approach, small experimental errors in these constants lead to large shifts in the failure boundaries, raising doubts about the reliability of the boundary in some segments. Further, the envelopes contain segments which have no experimental support and so can lead to either conservative or nonconservative designs. Conservative design leads to extra weight, a situation not acceptable in the aerospace industry, whereas a nonconservative design is obviously prohibitive, as it implies failure. Hence, both the damage tolerance based design and the failure criteria based design have limitations, and a new method which combines the advantages of both approaches is desirable. This issue has been thoroughly debated in many international conferences on composites, and several pioneers in the composite industry have indicated the need for further research on the development of reliable failure criteria. This motivated the present research towards the development of a new failure criterion for the design of composite structures. Several expert meetings have been held worldwide to assess existing failure theories and computer codes for the design of composite structures. One such meeting, on 'Failure of Polymeric Composites and Structures: Mechanisms and Criteria for the Prediction of Performance', was held at St. Albans (UK) in 1991 by the UK Science & Engineering Council and the UK Institution of Mechanical Engineers. After thorough deliberations it was concluded that: 1. there is no universal definition of failure of composites; 2. there is little or no faith in the failure criteria in current use; and 3. there is a need to carry out a World Wide Failure Exercise (WWFE). Based on these suggestions, Hinton and Soden initiated the WWFE, in consultation with Prof. Bryan Harris (Editor, Composites Science and Technology), as a program for the comparative assessment of existing failure criteria and codes, with the following aims: 1. establish the current level of maturity of theories for predicting the failure response of fiber reinforced plastic (FRP) laminates; 2. close the knowledge gap between theoreticians and design practitioners in this field; and 3. stimulate the composites community into providing design engineers with more robust and accurate failure prediction methods, and the confidence to use them. The organisers invited pioneers in the composite industry to the WWFE program. Among them, Professor Hashin declined to participate and wrote to the organisers saying: 'My only work in this subject relates to failure criteria of unidirectional fiber composites, not to laminates. I do not believe that even the most complete information about failure of single plies is sufficient to predict the failure of a laminate, consisting of such plies. A laminate is a structure which undergoes a complex damage process (mostly of cracking) until it finally fails.
The analysis of such a process is a prerequisite for failure analysis. While significant advances have been made in this direction, we have not yet arrived at the practical goal of failure prediction.' Another important conference, Composites for the Next Millennium (Proceedings of the Symposium in honor of S.W. Tsai on his 70th birthday, Tours, France, July 2-3, 1999, p. 19), reached conclusions similar to those of the 1991 UK meeting. Paul A. Lagace and S. Mark Spearing, referring to the article 'Predicting Failure in Composite Laminates: the background to the exercise' by M.J. Hinton and P.D. Soden (Composites Science and Technology, Vol. 58, No. 7, 1998, p. 1005), pointed out that after over thirty years of work, 'the' composite failure criterion is still an elusive entity: numerous researchers have produced dozens of approaches, and hundreds of papers, manuscripts and reports have been written to address the latest thoughts, add data to accumulated knowledge bases and continue the scholarly debate. The outcome of these expert meetings is that there is a need to develop new failure theories and that, due to the complexities associated with experimentation, especially obtaining bi-axial data, computational methods are the only viable alternative. Currently, biaxial data on composites are very limited, as biaxial testing of laminates is very difficult and standardization of biaxial data is yet to be done. These expert comments and suggestions motivated the research reported here towards the development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', based on initiating failure mechanisms. The objectives of the thesis are: 1. identification of failure criteria specific to each initiating failure mechanism, and assignment of the specific criterion to the corresponding initiating failure mechanism; 2. use of the 'failure mechanism based design' method for composite pressurant tanks, and its evaluation by comparison with some of the standard 'failure criteria' based designs from the point of view of overall tank weight; 3. development of the new 'Failure Mechanism Based Failure Criterion' without shear stress content, and the corresponding failure envelope; 4. development of different failure envelopes including the effect of shear stress, depending on the percentage of shear stress content; and 5. design of composite laminates with the Failure Mechanism Based Failure Criterion using optimization techniques such as Genetic Algorithms (GA) and Vector Evaluated Particle Swarm Optimization (VEPSO), and comparison of the designs with those obtained using other failure criteria such as the Tsai-Wu and Maximum Stress criteria. The following paragraphs describe the achievement of these objectives. In chapter 2, a rectangular panel subjected to boundary displacements is used as an example to illustrate the concept of failure mechanism based design. Composite laminates are generally designed using a failure criterion based on a set of standard experimental strength values. Failure of composite laminates involves different failure mechanisms depending upon the stress state, and so different failure mechanisms become dominant at different points on the failure envelope. Use of a single failure criterion, as is normally done in designing laminates, is unlikely to be satisfactory for all combinations of stresses.
As an alternative, the use of a simple failure criterion to identify the dominant failure mechanism, followed by design of the laminate using the appropriate failure mechanism based criterion, is suggested in this thesis. A complete 3-D stress analysis has been carried out using the general-purpose NISA finite element software. Comparison of results using standard failure criteria such as Maximum Stress, Maximum Strain, Tsai-Wu, Yamada-Sun, Maximum Fiber Strain, Grumman, O'Brien and Lagace indicates substantial differences in predicting the first ply failure. Results for failure load factors based on the failure mechanism based approach are included. Identification of the failure mechanism at highly stressed regions, and design of the component to withstand an artificial defect representative of this failure mechanism, provides a realistic approach to achieving the necessary strength without adding unnecessary weight to the structure. It is indicated that the failure mechanism based design approach offers a reliable way of assessing critically stressed regions and eliminates the uncertainties associated with the failure criteria. In chapter 3, the failure mechanism based design approach is applied to composite pressurant tanks of upper stages of launch vehicles and propulsion systems of spacecraft. The problem is studied by introducing an artificial matrix crack, representative of the initiating failure mechanism, in the highly stressed regions, and the strain energy release rate (SERR) is calculated. The total SERR value is obtained as 3330.23 J/m2, which is very high compared to the Gc value (135 J/m2), meaning the crack will grow further. The failure load fraction at which the crack has a tendency to grow is estimated to be 0.04054. Results indicate that there are significant differences in the failure load fraction for different failure criteria. Comparison with the Failure Mechanism Based Criterion (FMBC) clearly indicates that matrix cracks occur at loads much below the design load, yet the fibers are able to carry the design load. In chapter 4, a Failure Mechanism Based Failure Criterion (FMBFC) is proposed for the development of failure envelopes for unidirectional composite plies. A representative volume element of the laminate under local loading is micromechanically modelled to predict the experimentally determined strengths, and this model is then used to predict points on the failure envelope in the neighborhood of the experimental points. The NISA finite element software is used to determine the stresses in the representative volume element, and from these micro-stresses the strength of the lamina is predicted. A correction factor is used to match the prediction of the present model with the experimentally determined strength, so that the model can be expected to provide accurate predictions of strength in the neighborhood of the experimental points. A procedure for the construction of the failure envelope in stress space is outlined, and the results are compared with some of the standard failure criteria widely used in the composite industry. Comparison with the Tsai-Wu failure criterion shows significant differences, particularly in the third quadrant, when the ply is under bi-axial compressive loading; comparison with the maximum stress criterion indicates better correlation.
The present failure mechanism based approach opens a new possibility of constructing reliable failure envelopes for bi-axial loading applications using standard uniaxial test data. In chapter 5, the new failure criterion developed in chapter 4 for the no-shear-stress condition is extended to obtain failure envelopes including shear stress. The approach is based on micromechanical analysis of composites, wherein a representative volume consisting of a fiber surrounded by matrix in the appropriate volume fraction is modeled using 3-D finite elements to predict the strengths. Different failure envelopes are developed by varying the shear stress from 0% to 100% of the shear strength in steps of 25%. Results obtained from this approach are compared with the Tsai-Wu and Maximum Stress failure criteria, and the predicted strengths match more closely with the maximum stress criterion. Hence, it can be concluded that the influence of shear stress on lamina failure is of little consequence as far as the prediction of laminate strength is concerned. In chapter 6, the failure mechanism based failure criterion is used for design optimization of laminates, and the percentage savings in total laminate weight is presented. The design optimization is performed using Genetic Algorithms (GA), one of the robust tools available for the optimum design of composite laminates. Genetic algorithms employ techniques originating in biology and depend on the application of Darwin's principle of survival of the fittest: when a population of biological creatures is permitted to evolve over generations, individual characteristics that are beneficial for survival tend to be passed on to future generations, since individuals carrying them get more chances to breed. In biological populations, these characteristics are stored in chromosomal strings; the mechanics of natural genetics is derived from operations that result in an arranged yet randomized exchange of genetic information between the chromosomal strings of the reproducing parents, and consists of reproduction, crossover, mutation and inversion of the chromosomal strings. Here, minimization of the weight of composite laminates for given loading and material properties is considered, with the genetic algorithm selecting the ply orientations, single-ply thickness, number of plies and stacking sequence. Minimum weight designs are obtained using the Failure Mechanism Based (FMB), Maximum Stress and Tsai-Wu failure criteria, the objective being to demonstrate the effectiveness of the newly proposed FMBFC in composite design. The FMBFC considers different failure mechanisms, such as fiber breaks, matrix cracks, fiber compressive failure and matrix crushing, which are relevant for different loading conditions. The FMB and Maximum Stress failure criteria predict up to 43 percent savings in laminate weight compared to the Tsai-Wu failure criterion in some quadrants of the failure envelope.
The Tsai-Wu failure criterion overpredicts the weight of the laminate by up to 86 percent in the third quadrant of the failure envelope compared to the FMB and Maximum Stress failure criteria, when the laminate is subjected to biaxial compressive loading. The FMB and Maximum Stress failure criteria are found to give comparable weight estimates, and the FMBFC can be considered for use in the strength design of composite structures. In chapter 7, particle swarm optimization is used for design optimization of composite laminates. Particle swarm optimization (PSO) is a metaheuristic inspired by the flocking behaviour of birds, whose application to composite design optimization problems has not yet been extensively explored. Composite laminate optimization typically consists in determining the number of layers, stacking sequence and ply thickness that give the desired properties. This chapter details the use of the Vector Evaluated Particle Swarm Optimization (VEPSO) algorithm, a multi-objective variant of PSO, for composite laminate design optimization. VEPSO is a modern coevolutionary algorithm which employs multiple swarms to handle the multiple objectives, with information migration between the swarms ensuring that a global optimum solution is reached. The problem is formulated as a classical multi-objective optimization problem, with the objectives of minimizing the weight of the component for a required strength and minimizing the total cost incurred, such that the component does not fail. An optimum configuration for a multi-layered unidirectional carbon/epoxy laminate is determined using VEPSO, and results are presented for various loading configurations. VEPSO predicts the same minimum weight and percentage weight savings as GA for all loading conditions; the small differences between VEPSO and GA for some loading and stacking sequence configurations are mainly due to the random selection of swarm particles and generation of populations, respectively, and can be avoided by running the same programme repeatedly. The thesis concludes by highlighting the future scope of several potential applications based on the developments reported in the thesis.
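As a rough illustration of the GA-based laminate design described for chapter 6, the sketch below evolves a stacking sequence toward minimum weight under a placeholder strength check. The `strength_ok` rule, material numbers and GA settings are assumptions standing in for the FMBFC/Tsai-Wu evaluations, not the thesis's actual criteria.

```python
# Hedged GA sketch for minimum-weight stacking-sequence design.
# The failure check is a toy placeholder, not FMBFC or Tsai-Wu.
import random

random.seed(0)

ANGLES = [0, 45, -45, 90]   # candidate ply orientations (assumed)
PLY_WEIGHT = 0.125          # kg per ply (invented)
MAX_PLIES = 32

def strength_ok(stack):
    # Placeholder criterion: enough plies overall, and enough 0-degree
    # plies to carry an assumed axial load.
    return len(stack) >= 8 and sum(1 for a in stack if a == 0) >= len(stack) // 4

def fitness(stack):
    w = PLY_WEIGHT * len(stack)
    return w if strength_ok(stack) else w + 100.0  # penalize failed designs

def random_stack():
    return [random.choice(ANGLES) for _ in range(random.randint(8, MAX_PLIES))]

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(stack):
    s = stack[:]
    if random.random() < 0.3:
        s[random.randrange(len(s))] = random.choice(ANGLES)
    if random.random() < 0.2 and len(s) > 8:
        s.pop()  # try removing a ply to save weight
    return s

pop = [random_stack() for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
print(len(best), "plies,", round(PLY_WEIGHT * len(best), 3), "kg:", best)
```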
233

運用曲面擬合提升幾何法大地起伏值精度之研究 / The Study of Applying Surface Fitting to Improve Geometric Geoidal Undulation

蔡名曜 Unknown Date
The geoidal undulation is the difference between the ellipsoidal height and the orthometric height. Given high-accuracy geoidal undulations, a high-accuracy orthometric height can be obtained from the ellipsoidal height measured by GPS; because of its low cost, this approach is expected to replace traditional leveling surveys. Geoidal undulations can be determined by geometric or gravimetric methods; the geometric method is simple to compute and highly accurate, and the undulation can be obtained by surface fitting. However, the geometric undulation is affected by terrain relief, so fitting a surface over a large area degrades its accuracy, and Taiwan's rugged terrain makes large-area fitting difficult. This study therefore uses a buffer method to search for the leveling benchmarks around the object point, attempting to find the proper range for fitting the geoidal undulation to a curved surface. Experimental results show that both the prediction error and the internal precision can be kept below 5 cm in the flat regions of Taiwan by fitting the geoidal undulation to a second-order surface model with buffer ranges from 10 km to 30 km. Since ellipsoidal heights from satellite positioning carry larger errors, data quality assessment and outlier detection are also required. A new outlier detection method is proposed: the quantum-behaved particle swarm optimization algorithm is used to compute the weight matrix of the least-squares adjustment, with the aim of down-weighting suspicious outliers and thereby detecting them. Experimental results show that the optimal weight matrix algorithm can minimize the influence of outliers on the adjustment. This study establishes a procedure for fitting geoidal undulations in Taiwan: using buffer analysis to search for adjacent leveling benchmarks, selecting the proper buffer range and surface equation, and detecting outliers in the data.
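The second-order surface fit described above can be illustrated with a short least-squares sketch. The benchmark coordinates and undulations below are synthetic, and the quantum-behaved PSO re-weighting stage is not reproduced.

```python
# Minimal sketch: geoidal undulation N modeled as a quadratic surface in
# plane coordinates (E, n), solved by ordinary least squares over nearby
# benchmarks. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
E = rng.uniform(0, 30_000, 40)    # east coordinates of benchmarks [m]
n = rng.uniform(0, 30_000, 40)    # north coordinates [m]
true = lambda e, y: (22.0 + 1e-5 * e - 2e-5 * y + 1e-10 * e * y
                     + 3e-10 * e**2 - 1e-10 * y**2)
undulation = true(E, n) + rng.normal(0, 0.02, 40)  # observations, 2 cm noise

# Design matrix for N = a0 + a1 E + a2 n + a3 E n + a4 E^2 + a5 n^2
A = np.column_stack([np.ones_like(E), E, n, E * n, E**2, n**2])
coef, *_ = np.linalg.lstsq(A, undulation, rcond=None)

e0, y0 = 15_000.0, 12_000.0       # object point inside the buffer range
pred = np.array([1, e0, y0, e0 * y0, e0**2, y0**2]) @ coef
print(f"predicted undulation {pred:.3f} m (truth {true(e0, y0):.3f} m)")
```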
234

Pattern Discovery in Protein Structures and Interaction Networks

Ahmed, Hazem Radwan A. 21 April 2014
Pattern discovery in protein structures is a fundamental task in computational biology, with important applications in protein structure prediction, profiling and alignment. We propose a novel approach for pattern discovery in protein structures using Particle Swarm-based flying windows over potentially promising regions of the search space. A heuristic search based on Particle Swarm Optimization (PSO) is, however, easily trapped in local optima due to the sparse nature of the problem search space. Thus, we introduce a novel fitness-based stagnation detection technique that effectively and efficiently restarts the search process to escape potential local optima. The proposed fitness-based method significantly outperforms the commonly-used distance-based method when tested on eight classical and advanced (shifted/rotated) benchmark functions, as well as on two other applications for proteomic pattern matching and discovery. The main idea is to use the already-calculated fitness values of swarm particles, instead of their pairwise distance values, to predict an imminent stagnation situation; the proposed fitness-based method therefore does not incur the computational overhead of repeatedly calculating pairwise distances between all particles at each iteration. Moreover, the fitness-based method is less dependent on the problem search space than the distance-based method. The proposed pattern discovery algorithms are first applied to protein contact maps, the 2D compact representation of protein structures. They are then extended to work on actual protein 3D structures and interaction networks, offering a novel and low-cost approach to protein structure classification and interaction prediction. Concerning protein structure classification, the proposed PSO-based approach correctly distinguishes between the positive and negative examples in two protein datasets over 50 trials. As for protein interaction prediction, the proposed approach works effectively on complex, mostly sparse protein interaction networks, and predicts high-confidence protein-protein interactions (validated by more than one computational and experimental source) through knowledge transfer between topologically-similar interaction patterns of close proximity. Such encouraging results demonstrate that pattern discovery in protein structures and interaction networks are promising new applications of the fast-growing and far-reaching PSO algorithms, which is the main argument of this thesis. / Thesis (Ph.D., Computing), Queen's University, 2014.
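A minimal sketch of the fitness-based stagnation idea described above, on a toy sphere function: the swarm restarts when the spread of current fitness values collapses, avoiding any pairwise-distance computation. The threshold and PSO coefficients are assumptions, not the thesis's settings.

```python
# Hedged sketch: PSO with a fitness-based stagnation test and restart.
import random

random.seed(3)

DIM, SWARM, EPS = 5, 20, 1e-6

def sphere(x):
    return sum(v * v for v in x)

def new_particle():
    pos = [random.uniform(-5, 5) for _ in range(DIM)]
    return {"x": pos, "v": [0.0] * DIM, "best": pos[:], "bf": sphere(pos)}

swarm = [new_particle() for _ in range(SWARM)]
gbest, gbf = None, float("inf")
for it in range(300):
    fits = []
    for p in swarm:
        f = sphere(p["x"])
        fits.append(f)
        if f < p["bf"]:
            p["best"], p["bf"] = p["x"][:], f
        if f < gbf:
            gbest, gbf = p["x"][:], f
    # Fitness-based stagnation: if all current fitness values are nearly
    # identical, the swarm has likely collapsed; restart positions while
    # keeping the global best. No pairwise distances are ever computed.
    if max(fits) - min(fits) < EPS:
        swarm = [new_particle() for _ in range(SWARM)]
        continue
    for p in swarm:
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            p["v"][d] = (0.7 * p["v"][d]
                         + 1.5 * r1 * (p["best"][d] - p["x"][d])
                         + 1.5 * r2 * (gbest[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
print("best fitness:", gbf)
```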
235

Integrated control of wind farms, FACTS devices and the power network using neural networks and adaptive critic designs

Qiao, Wei 08 July 2008
Worldwide concern about environmental problems and a possible energy crisis has led to increasing interest in clean and renewable energy generation. Among the various renewable energy sources, wind power is the most rapidly growing one. Therefore, how to provide efficient, reliable, high-performance wind power generation and distribution has become an important and practical issue in the power industry. In addition, because of the new constraints placed by environmental and economic factors, the trend in power system planning and operation is toward maximum utilization of the existing infrastructure with tight operating and stability margins. This trend, together with the increased penetration of renewable energy sources, brings new challenges to power system operation, control, stability and reliability which require innovative solutions. Flexible AC transmission system (FACTS) devices, through their fast, flexible and effective control capability, provide one possible solution to these challenges. To fully utilize the capability of individual power system components, e.g., wind turbine generators (WTGs) and FACTS devices, their control systems must be suitably designed with high reliability. Moreover, in order to optimize local as well as system-wide performance and stability of the power system, real-time local and wide-area coordinated control is becoming an important issue. Power systems containing conventional synchronous generators, WTGs and FACTS devices are large-scale, nonlinear, nonstationary, stochastic and complex systems distributed over large geographic areas. Traditional mathematical tools and system control techniques are limited in their ability to control such complex systems to achieve optimal performance. Intelligent and bio-inspired techniques, such as swarm intelligence, neural networks and adaptive critic designs, are emerging as promising alternative technologies for power system control and performance optimization. This work focuses on the development of advanced optimization and intelligent control algorithms to improve the stability, reliability and dynamic performance of WTGs, FACTS devices and the associated power networks. The proposed optimization and control algorithms are validated by simulation studies in PSCAD/EMTDC, experimental studies, and real-time implementations using the Real Time Digital Simulator (RTDS) and a TMS320C6701 Digital Signal Processor (DSP) platform. Results show that they significantly improve electrical energy security, reliability and sustainability.
236

Análise híbrida numérico-experimental da troca de calor por convecção forçada em aletas planas / Numerical-experimental hybrid analysis of forced-convection heat transfer in plane fins

Silva, Maico Jeremia da 28 July 2015
This work analyses the behaviour of the convection heat transfer coefficient on plane fins as the air flow velocity and the fin spacing are varied. The differential equation that governs heat transfer in fins is discretized by the finite volume method, which enables computation of the fin's thermal characteristics, such as the temperature distribution, heat transfer rate and fin efficiency. The convection coefficient, an essential parameter for thermal analysis, is determined by applying a heuristic optimization method known as Particle Swarm Optimization, which combines data measured in wind-tunnel experiments with the aforementioned numerical approximation of the heat transfer in the fins.
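The inverse estimation described above can be sketched as follows: a one-dimensional PSO recovers the convection coefficient h by matching an adiabatic-tip fin model to (here synthetic) wind-tunnel temperatures. The fin dimensions, the analytical model standing in for the finite-volume solver, and the PSO settings are all assumptions.

```python
# Hedged sketch: estimate h by minimizing the model/measurement mismatch.
import math
import random

random.seed(7)

k, L, t = 200.0, 0.05, 0.002      # conductivity, fin length, thickness (SI)
Tb, Tinf = 80.0, 25.0             # base and ambient temperatures [C]
xs = [0.01, 0.02, 0.03, 0.04]     # measurement positions along the fin [m]

def fin_temp(h, x):
    # Adiabatic-tip straight fin: theta/theta_b = cosh(m(L-x)) / cosh(mL)
    m = math.sqrt(2.0 * h / (k * t))
    return Tinf + (Tb - Tinf) * math.cosh(m * (L - x)) / math.cosh(m * L)

h_true = 45.0
measured = [fin_temp(h_true, x) + random.gauss(0, 0.1) for x in xs]

def error(h):
    return sum((fin_temp(h, x) - Tm) ** 2 for x, Tm in zip(xs, measured))

# Tiny one-dimensional PSO over h in [5, 200] W/m2K
parts = [{"h": random.uniform(5, 200), "v": 0.0} for _ in range(15)]
for p in parts:
    p["bh"], p["bf"] = p["h"], error(p["h"])
gh = min(parts, key=lambda p: p["bf"])["bh"]
for _ in range(100):
    for p in parts:
        r1, r2 = random.random(), random.random()
        p["v"] = (0.7 * p["v"] + 1.5 * r1 * (p["bh"] - p["h"])
                  + 1.5 * r2 * (gh - p["h"]))
        p["h"] = min(200.0, max(5.0, p["h"] + p["v"]))
        f = error(p["h"])
        if f < p["bf"]:
            p["bh"], p["bf"] = p["h"], f
    gh = min(parts, key=lambda p: p["bf"])["bh"]
print(f"estimated h = {gh:.1f} W/m2K (true value {h_true})")
```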
237

Contribution à la synthèse et l’optimisation multi-objectif par essaims particulaires de lois de commande robuste RST de systèmes dynamiques / Contribution to the synthesis and multi-objective particle swarm optimization for robust RST control laws of dynamic systems

Madiouni, Riadh 20 June 2016
This research focuses on the systematic synthesis and optimization of digital RST-structure controllers using global metaheuristic approaches. The classical and hard problems of closed-loop pole placement and sensitivity function shaping for RST control are formulated as constrained multi-objective optimization problems, to be solved with the proposed metaheuristic algorithms NSGA-II, MODE, MOPSO and especially epsilon-MOPSO. Two formulations of the metaheuristics-tuned RST problem are proposed. The first one, given in the time domain, deals with the minimization of several performance criteria such as the Integral Square Error (ISE) and the Maximum Overshoot (MO) indices. These optimal criteria, related primarily to the step response of the controlled plant, are optimized under non-analytical constraints defined by temporal templates on the closed-loop dynamics. In the second approach, a formulation in the frequency domain is retained: the proposed strategy optimizes a desired output sensitivity function satisfying H∞ robustness constraints. The use of a suitable fixed part in the optimized output sensitivity function provides partial pole placement of the closed-loop dynamics of the digital RST controller, and the inverse of the desired sensitivity function defines the associated H∞ weighting filter. The Multi-Objective Particle Swarm Optimization (MOPSO) technique is particularly retained for the resolution of the formulated multi-objective RST control problems. An adaptive-grid MOPSO algorithm is first proposed and then improved based on epsilon-dominance concepts. The resulting epsilon-MOPSO algorithm, with good diversity of the provided Pareto solutions and fast convergence, shows remarkable superiority over the standard MOPSO, NSGA-II and MODE algorithms. Performance metrics such as generational distance, error rate and spacing are used for the statistical analysis of the multi-objective optimization results. Applications to the variable-speed RST control of a DC electrical drive and to the RST position control of a flexible transmission plant with varying loads are carried out, and demonstrative simulations and comparisons show the validity and effectiveness of the proposed metaheuristics-tuned RST control approach.
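A hedged sketch of the epsilon-dominance test underlying the epsilon-MOPSO archive mentioned above; the box size and the archive-update rule below follow one common formulation of epsilon-dominance and may differ in detail from the thesis.

```python
# Sketch of an epsilon-dominance archive update (minimization objectives).
from math import floor

EPS = 0.05  # epsilon box size per objective (assumed)

def eps_box(point):
    return tuple(floor(v / EPS) for v in point)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_insert(archive, p):
    """Insert objective vector p into the archive if it is not
    epsilon-dominated; evict any archived points it epsilon-dominates."""
    bp = eps_box(p)
    for q in list(archive):
        bq = eps_box(q)
        if dominates(bq, bp) or (bq == bp and dominates(q, p)):
            return archive            # p is epsilon-dominated: reject
        if dominates(bp, bq) or (bq == bp and dominates(p, q)):
            archive.remove(q)         # p epsilon-dominates q: evict q
    archive.append(p)
    return archive

archive = []
for p in [(0.30, 0.70), (0.31, 0.69), (0.10, 0.90), (0.05, 0.95), (0.5, 0.5)]:
    eps_insert(archive, p)
print(archive)
```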
238

Localização colaborativa em robótica de enxame. / Collaborative localization in swarm robotics.

Alan Oliveira de Sá 26 May 2015
Many applications of Swarm Robotic Systems (SRSs) require that a robot be able to discover its position. The location information is required, for example, to allow the robots to be correctly positioned within a predefined swarm formation. Similarly, when the robots act as mobile sensors, the position information is needed to identify the location of the measured events. Due to the size, cost and energy restrictions of these devices, or limitations imposed by the operating environment, the straightforward solution, i.e. the use of a Global Positioning System (GPS), is often not feasible. The method proposed in this work allows the estimation of the absolute positions of a set of unknown nodes, based on the coordinates of a set of reference nodes and the distances measured between nodes. The solution is achieved by means of a distributed processing strategy, where each unknown node estimates its own position and helps its neighbors to compute their respective coordinates. The solution makes use of a new method proposed herein, called Multi-hop Collaborative Min-Max Localization (MCMM), which aims to improve the quality of the initial positions estimated by the unknown nodes in case of failure during the recognition of the reference nodes. The refinement of the positions is achieved with the Backtracking Search Optimization Algorithm (BSA) and Particle Swarm Optimization (PSO), whose performances are compared. To compose the objective function, a new method to compute the confidence factor of the network nodes is introduced, the Min-max Area Confidence Factor (MMA-CF), which is compared with the existing Hops to Anchor Confidence Factor (HTA-CF). Based on the proposed localization method, four algorithms were developed and evaluated through a set of simulations in MATLAB and experiments on swarms of Kilobot robots. The performance of the algorithms is evaluated on problems with different topologies, numbers of nodes and proportions of reference nodes, and is also compared with that of other localization algorithms, showing improvements of 40% to 51%. The simulation and experiment outcomes demonstrate the effectiveness of the proposed method.
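The min-max idea behind MCMM can be sketched in a few lines: each range measurement to a reference node bounds the unknown node inside a square, and the initial estimate is the center of the intersection of those squares. The coordinates and ranges below are synthetic, and the multi-hop collaborative part of the thesis is not reproduced.

```python
# Minimal sketch of min-max (bounding-box) localization.
import math

refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # reference node positions
true = (4.0, 3.0)                               # unknown node (for the demo)
dists = [math.dist(true, r) for r in refs]      # ideal range measurements

# Intersect the squares [x-d, x+d] x [y-d, y+d] around each reference.
xmin = max(r[0] - d for r, d in zip(refs, dists))
xmax = min(r[0] + d for r, d in zip(refs, dists))
ymin = max(r[1] - d for r, d in zip(refs, dists))
ymax = min(r[1] + d for r, d in zip(refs, dists))
estimate = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)
print("min-max estimate:", estimate, "true position:", true)
```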
239

Otimização por Nuvem de Partículas e Busca Tabu para Problema da Diversidade Máxima / Particle Swarm Optimization and Tabu Search for the Maximum Diversity Problem

Bonotto, Edison Luiz 31 January 2017
The Maximum Diversity Problem (MDP) is a combinatorial optimization problem in which a pre-set number of elements must be selected from a given set so that the sum of the pairwise diversities between the selected elements is as large as possible. The MDP belongs to the class of NP-hard problems, i.e., no algorithm is known that solves it exactly in polynomial time. Because exact approaches have exponential complexity, efficient heuristics are required to provide satisfactory results in acceptable time; heuristics, however, do not guarantee the optimality of the solution found. This work proposes a new hybrid approach for the resolution of the Maximum Diversity Problem, based on the Particle Swarm Optimization (PSO) and Tabu Search (TS) metaheuristics; the algorithm is called PSO_TS. PSO_TS achieves the best known results for benchmark instances from the literature, demonstrating that it is competitive with the best algorithms in terms of solution quality.
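For concreteness, here is a bare-bones sketch of the MDP objective and a tabu-search swap step in the spirit of PSO_TS; the PSO seeding stage is omitted, the distance matrix is random, and the tenure and iteration counts are assumptions.

```python
# Hedged sketch: Maximum Diversity Problem with a simple tabu search.
import random

random.seed(5)

N, M, TENURE, ITERS = 30, 8, 5, 300
d = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        d[i][j] = d[j][i] = random.random()   # random pairwise diversities

def diversity(sel):
    s = sorted(sel)
    return sum(d[a][b] for i, a in enumerate(s) for b in s[i + 1:])

current = set(random.sample(range(N), M))
best, best_val = set(current), diversity(current)
tabu = {}   # element -> iteration until which re-adding it is forbidden
for it in range(ITERS):
    # Best non-tabu swap: drop one selected element, add one from outside.
    move, move_val = None, -1.0
    for out in current:
        for inn in set(range(N)) - current:
            if tabu.get(inn, -1) > it:
                continue
            v = diversity((current - {out}) | {inn})
            if v > move_val:
                move, move_val = (out, inn), v
    out, inn = move
    current = (current - {out}) | {inn}
    tabu[out] = it + TENURE   # forbid re-adding 'out' for a while
    if move_val > best_val:
        best, best_val = set(current), move_val
print("best diversity:", round(best_val, 3), "selection:", sorted(best))
```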
240

Desenvolvimento de uma metodologia experimental para obtenção e caracterização de formulações de compostos de borracha EPDM / Development of an experimental methodology for obtaining and characterizing EPDM rubber compound formulations

Palaoro, Denilso 24 February 2015
The rubber industry plays a significant role in many areas of the economy, such as the automotive, civil construction, footwear and hospital industries. Rubber products are produced from complex mixtures of different raw materials, of both natural and synthetic origin. From an industrial point of view, a major difficulty is developing a formulation that meets the requirements of a particular product. This work develops an experimental methodology for obtaining EPDM rubber compounds, using design-of-experiments techniques coupled with computational numerical optimization. A 3^(3-1) fractional factorial design (three levels, three factors) was used to design and analyse the experiments, with the contents of calcium carbonate, paraffinic oil and vulcanization accelerator as factors. Twelve properties were measured in total: six on original samples, three on heat-aged samples and three processing properties. Statistical analysis yielded regression models for the properties and for the cost of the formulation, and a computer program was developed to minimise the cost function subject to constraints on the properties. The results showed that optimized EPDM rubber compound formulations can be obtained at low cost, for example from US$ 2.02/kg to US$ 2.43/kg, for use in hoses and pads and in different manufacturing processes such as compression, transfer or injection moulding. Selected compositions were analysed by FTIR, SEM and TGA with regard to their chemical and structural characteristics. Compositions with low vulcanization accelerator content tend to form cross-links with many sulphur atoms (about 4 to 7) between the carbon chains, which impairs the mechanical properties of both the original and the heat-aged vulcanizates. With a higher accelerator content, better properties are achieved, probably because the cross-links formed between the polymer chains contain fewer sulphur atoms. The EPDM compounds studied may be used in cushion and hose products that must withstand hot-air environments. The study thus provides an experimental methodology that allows rubber compounds to be developed with increased efficiency and reliability in research and development, taking into account the cost of the material and the constraints on its properties.
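The final optimization step described above (minimizing a regression cost model subject to property constraints) can be sketched with a simple grid search over the three factors; the quadratic models, factor ranges and constraint limits below are invented stand-ins for the fitted regression models.

```python
# Hedged sketch: minimize an assumed cost model over three mixture factors
# subject to assumed property constraints. All models/limits are invented.
import itertools

def cost(caco3, oil, accel):       # US$/kg, placeholder regression model
    return 2.6 - 0.004 * caco3 - 0.003 * oil + 0.05 * accel

def tensile(caco3, oil, accel):    # MPa, placeholder regression model
    return 12.0 - 0.03 * caco3 - 0.04 * oil + 1.5 * accel

def hardness(caco3, oil, accel):   # Shore A, placeholder regression model
    return 55.0 + 0.10 * caco3 - 0.20 * oil + 2.0 * accel

best = None
for caco3, oil, accel in itertools.product(range(40, 121, 5),       # phr
                                           range(20, 81, 5),        # phr
                                           [x / 10 for x in range(5, 31)]):
    if tensile(caco3, oil, accel) < 9.0:
        continue                   # violates assumed tensile constraint
    if not 60 <= hardness(caco3, oil, accel) <= 75:
        continue                   # violates assumed hardness window
    c = cost(caco3, oil, accel)
    if best is None or c < best[0]:
        best = (c, caco3, oil, accel)
print("min cost US$%.2f/kg at CaCO3=%d phr, oil=%d phr, accel=%.1f phr" % best)
```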
