  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Στατιστική και υπολογιστική νοημοσύνη / Statistics and computational intelligence

Γεωργίου, Βασίλειος 12 April 2010 (has links)
Η παρούσα διατριβή ασχολείται με τη μελέτη και την ανάπτυξη μοντέλων ταξινόμησης τα οποία βασίζονται στα Πιθανοτικά Νευρωνικά Δίκτυα (ΠΝΔ). Τα προτεινόμενα μοντέλα αναπτύχθηκαν ενσωματώνοντας στατιστικές μεθόδους αλλά και μεθόδους από διάφορα πεδία της Υπολογιστικής Νοημοσύνης (ΥΝ). Συγκεκριμένα, χρησιμοποιήθηκαν οι Διαφοροεξελικτικοί αλγόριθμοι βελτιστοποίησης και η Βελτιστοποίηση με Σμήνος Σωματιδίων (ΒΣΣ) για την αναζήτηση βέλτιστων τιμών των παραμέτρων των ΠΝΔ. Επιπλέον, ενσωματώθηκε η τεχνική bagging για την ανάπτυξη συστάδας μοντέλων ταξινόμησης. Μια άλλη προσέγγιση ήταν η ανάπτυξη ενός Μπεϋζιανού μοντέλου για την εκτίμηση των παραμέτρων του ΠΝΔ χρησιμοποιώντας τον δειγματολήπτη Gibbs. Επίσης, ενσωματώθηκε μια Ασαφής Συνάρτηση Συμμετοχής για την καλύτερη στάθμιση των τεχνητών νευρώνων του ΠΝΔ καθώς και ένα νέο σχήμα διάσπασης του συνόλου εκπαίδευσης σε προβλήματα ταξινόμησης πολλαπλών κλάσεων όταν ο ταξινομητής μπορεί να επιτύχει ταξινόμηση δύο κλάσεων. Τα προτεινόμενα μοντέλα ταξινόμησης εφαρμόστηκαν σε μια σειρά από πραγματικά προβλήματα από διάφορες επιστημονικές περιοχές με ενθαρρυντικά αποτελέσματα. / The present thesis deals with the study and development of classification models based on Probabilistic Neural Networks (PNN). The proposed models were developed by incorporating statistical methods, as well as methods from several fields of Computational Intelligence (CI), into PNNs. In particular, Differential Evolution optimization algorithms and Particle Swarm Optimization algorithms were employed to search for promising values of the PNNs' parameters. Moreover, the bagging technique was incorporated for the development of an ensemble of classification models. Another approach was the construction of a Bayesian model for the estimation of the PNN's parameters utilizing the Gibbs sampler. Furthermore, a Fuzzy Membership Function was incorporated to achieve an improved weighting of the PNN's neurons.
A new decomposition scheme is proposed for multi-class classification problems when a two-class classifier is employed. The proposed classification models were applied to a series of real-world problems from several scientific areas with encouraging results.
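The parameter search that recurs throughout these abstracts is the standard global-best PSO loop. A minimal Python sketch of that loop follows — not the thesis's actual implementation: the swarm size, inertia and acceleration coefficients, and the toy objective standing in for a PNN cross-validation error are all illustrative assumptions.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Global-best PSO: each particle tracks a personal best; the swarm shares a global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal-best positions
    pbest = [objective(x) for x in X]          # personal-best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))   # clamp to the search box
            f = objective(X[i])
            if f < pbest[i]:
                P[i], pbest[i] = X[i][:], f
                if f < gbest:
                    G, gbest = X[i][:], f
    return G, gbest

# Hypothetical stand-in for a PNN cross-validation error as a function of the kernel spread:
err = lambda x: (x[0] - 0.35) ** 2 + 0.01
sigma, best = pso(err, dim=1, bounds=(0.01, 2.0))
```

In the thesis's setting the objective would instead be the classifier's validation error as a function of the PNN spread parameters; the swarm loop itself is unchanged.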
192

Prediction of properties and optimal design of microstructure of multi-phase and multi-layer C/SiC composites

Xu, Yingjie 08 July 2011 (has links) (PDF)
Carbon fiber-reinforced silicon carbide matrix (C/SiC) composite is a ceramic matrix composite (CMC) that has considerable promise for use in high-temperature structural applications. In this thesis, systematic numerical studies, including the prediction of elastic and thermal properties, analysis and optimization of stresses, and simulation of high-temperature oxidation, are presented for the investigation of C/SiC composites. A strain energy method is first proposed for the prediction of the effective elastic constants and coefficients of thermal expansion (CTEs) of 3D orthotropic composite materials. This method derives the effective elastic tensors and CTEs by analyzing the relationship between the strain energy of the microstructure and that of the homogenized equivalent model under specific thermo-elastic boundary conditions. Different kinds of composites are tested to validate the model. Geometrical configurations of the representative volume cell (RVC) of 2-D woven and 3-D braided C/SiC composites are analyzed in detail. The finite element models of 2-D woven and 3-D braided C/SiC composites are then established and combined with the strain energy method to evaluate the effective elastic constants and CTEs of these composites. Numerical results obtained by the proposed model are then compared with experimentally measured results. A global/local analysis strategy is developed for the determination of the detailed stresses in 2-D woven C/SiC composite structures. On the basis of the finite element analysis, the procedure is carried out sequentially from the homogenized composite structure at the macro-scale (global model) to the parameterized detailed fiber tow model at the micro-scale (local model). The bridge between the two scales is realized by mapping the global analysis results as the boundary conditions of the local tow model.
The stress results of the global/local method are finally compared to those of conventional finite element analyses. Optimal design for minimizing thermal residual stress (TRS) in 1-D unidirectional C/SiC composites is studied. The finite element models of the RVC of 1-D unidirectional C/SiC composites with multi-layer interfaces are generated, and finite element analysis is used to determine the TRS distributions. An optimization scheme which combines a modified Particle Swarm Optimization (PSO) algorithm with finite element analysis is used to reduce the TRS in the C/SiC composites by controlling the multi-layer interface thicknesses. A numerical model is finally developed to study the microstructure oxidation process and the degradation of elastic properties of 2-D woven C/SiC composites exposed to air oxidizing environments at intermediate temperature (T < 900°C). The oxidized RVC microstructure is modeled based on the oxidation kinetics analysis. The strain energy method is then combined with the finite element model of the oxidized RVC to predict the elastic properties of the composites. The environmental parameters, i.e., temperature and pressure, are studied to show their influence on the oxidation behavior of C/SiC composites.
193

Optimization of power system performance using facts devices

del Valle, Yamille E. 02 July 2009 (has links)
The objective of this research is to optimize overall power system performance using FACTS devices. In particular, it is intended to improve the reliability and the performance of the power system considering steady-state operating conditions as well as the system subjected to small and large disturbances. The methodology proposed to achieve this goal corresponds to an enhanced particle swarm optimizer (Enhanced PSO) that is proven in this work to have several advantages, in terms of accuracy and computational effort, compared with other existing methods. Once the performance of the Enhanced PSO is verified, a multi-stage PSO-based optimization framework is proposed for optimizing power system reliability (N-1 contingency criterion). The algorithm finds optimal settings for the present infrastructure (generator outputs, transformer tap ratios and capacitor bank settings) as well as optimal control references for distributed static series compensators (DSSC) and optimal locations, sizes and control settings for static compensator (STATCOM) units. Finally, a two-stage optimization algorithm is proposed to improve power system performance in steady-state conditions and when small and large perturbations are applied to the system. In this case, the algorithm provides optimal control references for DSSC modules, optimal locations and sizes for capacitor banks, and optimal locations, sizes and control parameters for STATCOM units (internal and external controllers), so that the loadability and the damping of the system are maximized at minimum cost. Simulation results throughout this research show a significant improvement in power system reliability and performance after the system is optimized.
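Security and voltage constraints of the kind this abstract optimizes against are commonly folded into the fitness function as penalty terms before any swarm search runs. A hedged sketch of that penalty idea — plain random sampling stands in for the Enhanced PSO, and the cost and voltage proxies below are entirely hypothetical, not the thesis's power-flow models.

```python
import random

def cost(x):            # hypothetical investment-cost proxy for a compensation setting x
    return (x - 3.0) ** 2

def voltage(x):         # hypothetical bus-voltage proxy that must stay >= 0.95 p.u.
    return 0.90 + 0.02 * x

def penalized(x, rho=1e3):
    # Static penalty: any voltage-limit violation is added to the cost, scaled by rho,
    # so an unconstrained minimizer is steered toward the feasible region.
    violation = max(0.0, 0.95 - voltage(x))
    return cost(x) + rho * violation ** 2

rng = random.Random(0)
best_x = min((rng.uniform(0.0, 10.0) for _ in range(5000)), key=penalized)
```

The unconstrained cost minimum at x = 3.0 happens to be feasible here; with a binding constraint, the penalty term pushes the optimum to the constraint boundary instead.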
194

Advanced Computational Methods for Power System Data Analysis in an Electricity Market

Ke Meng Unknown Date (has links)
The power industry has undergone significant restructuring throughout the world since the 1990s. In particular, its traditional, vertically monopolistic structures have been reformed into competitive markets in pursuit of increased efficiency in electricity production and utilization. However, along with market deregulation, power systems presently face severe challenges. One is power system stability, a problem that has attracted widespread concern because of severe blackouts experienced in the USA, the UK, Italy, and other countries. Another is that electricity market operation warrants more effective planning, management, and direction techniques due to the ever expanding large-scale interconnection of power grids. Moreover, many exterior constraints, such as environmental protection influences and associated government regulations, now need to be taken into consideration. All these have made existing challenges even more complex. One consequence is that more advanced power system data analysis methods are required in the deregulated, market-oriented environment. At the same time, the computational power of modern computers and the application of databases have facilitated the effective employment of new data analysis techniques. In this thesis, the reported research is directed at developing computational intelligence based techniques to solve several power system problems that emerge in deregulated electricity markets. Four major contributions are included in the thesis: a newly proposed quantum-inspired particle swarm optimization and self-adaptive learning scheme for radial basis function neural networks; online wavelet denoising techniques; electricity regional reference price forecasting methods in the electricity market; and power system security assessment approaches for deregulated markets, including fault analysis, voltage profile prediction under contingencies, and machine learning based load shedding scheme for voltage stability enhancement. 
Evolutionary algorithms (EAs) inspired by biological evolution mechanisms have had great success in power system stability analysis and operation planning. Here, a new quantum-inspired particle swarm optimization (QPSO) is proposed. Its inspiration stems from quantum computation theory, whose mechanism is totally different from those of the original EAs. Benchmark data sets and economic load dispatch research results show that the QPSO improves on other versions of evolutionary algorithms in terms of both speed and accuracy. Compared to the original PSO, it greatly enhances the searching ability and efficiently manages system constraints. Then, fuzzy C-means (FCM) and QPSO are applied to train radial basis function (RBF) neural networks with the capacity to auto-configure the network structures and obtain the model parameters. Benchmark test results suggest that the proposed training algorithms ensure good performance on data clustering and also improve the training and generalization capabilities of RBF neural networks. Wavelet analysis has been widely used in signal estimation, classification, and compression. Denoising with traditional wavelet transforms always exhibits visual artefacts because the transform is translation-variant. Furthermore, in most cases, wavelet denoising of real-time signals is carried out via offline processing, which limits its efficacy in real-time applications. In the present context, an online wavelet denoising method using a moving window technique is proposed. Problems that may occur in real-time wavelet denoising, such as border distortion and pseudo-Gibbs phenomena, are effectively solved by using window extension and window cycle-spinning methods. This provides an effective data pre-processing technique for the online application of other data analysis approaches. In a competitive electricity market, price forecasting is one of the essential functions required of a generation company and the system operator.
It provides critical information for building up effective risk management plans by market participants, especially those companies that generate and retail electrical power. Here, an RBF neural network is adopted as a predictor of the electricity market regional reference price in the Australian national electricity market (NEM). Furthermore, the wavelet denoising technique is adopted to pre-process the historical price data. The promising network prediction performance with respect to price data demonstrates the efficiency of the proposed method, with real-time wavelet denoising making feasible the online application of the proposed price prediction method. Along with market deregulation, power system security assessment has attracted great concern from both academic and industry analysts, especially after several devastating blackouts in the USA, the UK, and Russia. This thesis goes on to propose an efficient composite method for cascading failure prevention comprising three major stages. Firstly, a hybrid method based on principal component analysis (PCA) and specific statistic measures is used to detect system faults. Secondly, the RBF neural network is then used for power network bus voltage profile prediction. Tests are carried out by means of the “N-1” and “N-1-1” methods applied in the New England power system through PSS/E dynamic simulations. Results show that system faults can be reliably detected and voltage profiles can be correctly predicted. In contrast to traditional methods involving phase calculation, this technique uses raw data from time domains and is computationally inexpensive in terms of both memory and speed for practical applications. This establishes a connection between power system fault analysis and cascading analysis. Finally, a multi-stage model predictive control (MPC) based load shedding scheme for ensuring power system voltage stability is proposed. 
It has been demonstrated that optimal action in the process of load shedding for voltage stability during emergencies can be achieved as a consequence. Based on the above discussions, a framework for analysing power system voltage stability and ensuring its enhancement is proposed, and such a framework can be used as an effective means of cascading failure analysis. In summary, the research reported in this thesis provides a composite framework for power system data analysis in a market environment. It covers advanced techniques of computational intelligence and machine learning, and proposes effective solutions for both the market operation and system stability problems facing today's power industry.
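The border-distortion fix mentioned for the moving-window denoiser can be illustrated without the wavelet machinery. In this sketch a plain moving average stands in for the wavelet shrinkage step, and symmetric mirroring at the window edges plays the role of the window extension described above — an assumed analogue for illustration, not the thesis's algorithm.

```python
def denoise_online(samples, win=8, pad=3):
    """Sliding-window smoothing with symmetric border extension.
    Each incoming sample updates a fixed-length window; mirroring 'pad'
    samples at both ends of the window is what suppresses distortion
    at the window borders."""
    out, buf = [], []
    for s in samples:                  # samples arrive one at a time (online)
        buf.append(s)
        if len(buf) > win:
            buf.pop(0)
        # symmetric extension: mirror up to 'pad' samples at each end of the window
        ext = buf[pad:0:-1] + buf + buf[-2:-pad - 2:-1]
        out.append(sum(ext) / len(ext))
    return out

smoothed = denoise_online([1.0] * 10)  # a constant signal passes through unchanged
```

With an actual wavelet shrinkage step in place of the average, the same window-extension trick addresses the border distortion and pseudo-Gibbs effects the abstract names.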
195

Otimização por enxame de partículas em arquiteturas paralelas de alto desempenho. / Particle swarm optimization in high-performance parallel architectures.

Rogério de Moraes Calazan 21 February 2013 (has links)
A Otimização por Enxame de Partículas (PSO, Particle Swarm Optimization) é uma técnica de otimização que vem sendo utilizada na solução de diversos problemas, em diferentes áreas do conhecimento. Porém, a maioria das implementações é realizada de modo sequencial. O processo de otimização necessita de um grande número de avaliações da função objetivo, principalmente em problemas complexos que envolvam uma grande quantidade de partículas e dimensões. Consequentemente, o algoritmo pode se tornar ineficiente em termos do desempenho obtido, tempo de resposta e até na qualidade do resultado esperado. Para superar tais dificuldades, pode-se utilizar a computação de alto desempenho e paralelizar o algoritmo, de acordo com as características da arquitetura, visando o aumento de desempenho, a minimização do tempo de resposta e melhoria da qualidade do resultado final. Nesta dissertação, o algoritmo PSO é paralelizado utilizando três estratégias que abordarão diferentes granularidades do problema, assim como dividir o trabalho de otimização entre vários subenxames cooperativos. Um dos algoritmos paralelos desenvolvidos, chamado PPSO, é implementado diretamente em hardware, utilizando uma FPGA. Todas as estratégias propostas, PPSO (Parallel PSO), PDPSO (Parallel Dimension PSO) e CPPSO (Cooperative Parallel PSO), são implementadas visando às arquiteturas paralelas baseadas em multiprocessadores, multicomputadores e GPU. Os diferentes testes realizados mostram que, nos problemas com um maior número de partículas e dimensões e utilizando uma estratégia com granularidade mais fina (PDPSO e CPPSO), a GPU obteve os melhores resultados. Enquanto, utilizando uma estratégia com uma granularidade mais grossa (PPSO), a implementação em multicomputador obteve os melhores resultados. / Particle Swarm Optimization (PSO) is an optimization technique that is used to solve many problems in different applications. However, most implementations are sequential. 
The optimization process requires a large number of evaluations of the objective function, especially in complex problems involving a large number of particles and dimensions. As a result, the algorithm may become inefficient in terms of performance, execution time and even the quality of the expected result. To overcome these difficulties, high-performance computing and parallel algorithms can be used, taking into account the characteristics of the architecture. This should increase performance, minimize response time and may even improve the quality of the final result. In this dissertation, the PSO algorithm is parallelized using three different strategies that consider different granularities of the problem, as well as the division of the optimization work among several cooperative sub-swarms. One of the developed parallel algorithms, namely PPSO, is implemented directly in hardware, using an FPGA. All the proposed strategies, namely PPSO (Parallel PSO), PDPSO (Parallel Dimension PSO) and CPPSO (Cooperative Parallel PSO), are implemented on multiprocessor-, multicomputer- and GPU-based parallel architectures. The different assessments performed show that the GPU achieved the best results for problems with a high number of particles and dimensions when a strategy with finer granularity is used, namely PDPSO and CPPSO. In contrast, when using a strategy with coarser granularity, namely PPSO, the multicomputer-based implementation achieved the best results.
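The cooperative strategy (CPPSO) divides the work among sub-swarms that each own a block of dimensions and improve it inside a shared context vector. A sketch of that decomposition follows; plain stochastic hill-climbing stands in for the per-sub-swarm PSO to keep it short, and all numeric settings are illustrative assumptions.

```python
import random

def cooperative_optimize(f, dim, n_groups=2, iters=500, seed=2):
    """Cooperative decomposition in the spirit of CPPSO: each group owns a block
    of dimensions and proposes changes only to its block; improvements are
    published back to the shared context vector that everyone evaluates against."""
    rng = random.Random(seed)
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    groups = [list(range(g, dim, n_groups)) for g in range(n_groups)]
    best = f(context)
    for _ in range(iters):
        for dims in groups:                      # each sub-swarm takes a turn
            trial = context[:]
            for d in dims:
                trial[d] += rng.gauss(0.0, 0.5)  # perturb only this group's block
            v = f(trial)
            if v < best:                         # publish improvement to the context
                context, best = trial, v
    return context, best

sphere = lambda x: sum(v * v for v in x)         # toy separable objective
x, fx = cooperative_optimize(sphere, dim=4)
```

In a parallel implementation the groups run concurrently and synchronize on the context vector; that synchronization cost is exactly what makes the granularity trade-off discussed above matter.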
196

Proposição e análise de modelos híbridos para o problema de escalonamento de produção em oficina de máquinas / Presentation and analysis of hybridization models for the jobshop scheduling problem

Tatiana Balbi Fraga 26 March 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Nas últimas décadas, o problema de escalonamento da produção em oficina de máquinas, na literatura referido como JSSP (do inglês Job Shop Scheduling Problem), tem recebido grande destaque por parte de pesquisadores do mundo inteiro. Uma das razões que justificam tamanho interesse está em sua alta complexidade. O JSSP é um problema de análise combinatória classificado como NP-Difícil e, apesar de existir uma grande variedade de métodos e heurísticas que são capazes de resolvê-lo, ainda não existe hoje nenhum método ou heurística capaz de encontrar soluções ótimas para todos os problemas testes apresentados na literatura. A outra razão baseia-se no fato de que esse problema encontra-se presente no dia-a-dia das indústrias de transformação de vários segmentos e, uma vez que a otimização do escalonamento pode gerar uma redução significativa no tempo de produção e, consequentemente, um melhor aproveitamento dos recursos de produção, ele pode gerar um forte impacto no lucro dessas indústrias, principalmente nos casos em que o setor de produção é responsável por grande parte dos seus custos totais. Entre as heurísticas que podem ser aplicadas à solução deste problema, a Busca Tabu e a Multidão de Partículas apresentam uma boa performance para a maioria dos problemas testes encontrados na literatura. Geralmente, a heurística Busca Tabu apresenta uma boa e rápida convergência para pontos ótimos ou subótimos, contudo esta convergência é frequentemente interrompida por processos cíclicos e a performance do método depende fortemente da solução inicial e do ajuste de seus parâmetros. A heurística Multidão de Partículas tende a convergir para pontos ótimos, ao custo de um grande esforço computacional, sendo que sua performance também apresenta uma grande sensibilidade ao ajuste de seus parâmetros.
Como as diferentes heurísticas aplicadas ao problema apresentam pontos positivos e negativos, atualmente alguns pesquisadores começam a concentrar seus esforços na hibridização das heurísticas existentes no intuito de gerar novas heurísticas híbridas que reúnam as qualidades de suas heurísticas de base, buscando desta forma diminuir ou mesmo eliminar seus aspectos negativos. Neste trabalho, em um primeiro momento, são apresentados três modelos de hibridização baseados no esquema geral das Heurísticas de Busca Local, os quais são testados com as heurísticas Busca Tabu e Multidão de Partículas. Posteriormente é apresentada uma adaptação do método Colisão de Partículas, originalmente desenvolvido para problemas contínuos, onde o método Busca Tabu é utilizado como operador de exploração local e operadores de mutação são utilizados para perturbação da solução. Como resultado, este trabalho mostra que, no caso dos modelos híbridos, a natureza complementar e diferente dos métodos Busca Tabu e Multidão de Partículas, na forma como são aqui apresentados, dá origem a algoritmos robustos capazes de gerar soluções ótimas ou muito boas e muito menos sensíveis ao ajuste dos parâmetros de cada um dos métodos de origem. No caso do método Colisão de Partículas, o novo algoritmo é capaz de atenuar a sensibilidade ao ajuste dos parâmetros e de evitar os processos cíclicos do método Busca Tabu, produzindo assim melhores resultados. / In recent decades, the Job Shop Scheduling Problem (JSSP) has received great attention from researchers worldwide. One of the reasons for such interest is its high complexity. The JSSP is a combinatorial optimization problem classified as NP-Hard and, although there is a variety of methods and heuristics able to solve it, even today no method or heuristic is able to find optimal solutions for all benchmarks presented in the literature.
The other reason is based on the fact that this problem is present in the day-to-day operation of industries in various segments and, since optimal scheduling may cause a significant reduction in production time and thus a better utilization of manufacturing resources, it can have a strong impact on the profit of these industries, especially in cases where the production sector is responsible for most of their total costs. Among the heuristics that can be applied to the solution of this problem, Tabu Search and Particle Swarm Optimization show good performance on most benchmarks found in the literature. Usually, the Tabu Search heuristic presents good and fast convergence to optimal or sub-optimal points, but this convergence is frequently interrupted by cyclical processes. In contrast, the Particle Swarm Optimization heuristic tends to converge at the cost of a large amount of computational time, and the performance of both heuristics strongly depends on the adjustment of their parameters. This thesis presents four different hybridization models to solve the classical Job Shop Scheduling Problem, three of which are based on the general schema of Local Search Heuristics and the fourth on the Particle Collision method. These models are analyzed with the two heuristics, Tabu Search and Particle Swarm Optimization, and their elements, showing which aspects must be considered in order to achieve solutions better than those obtained by the original heuristics in reasonable computational time. As a result, this thesis demonstrates that the four models are able to improve the robustness of the original heuristics and the results found by Tabu Search.
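The cyclic behaviour attributed to Tabu Search above is exactly what a tabu list addresses: recently applied moves are forbidden for a fixed tenure, so the search cannot immediately undo itself. A self-contained sketch on a toy permutation "schedule" — the swap neighbourhood and cost function are illustrative stand-ins, not the JSSP representation used in the dissertation.

```python
import itertools
import random

def tabu_search(cost, n, iters=100, tenure=5, seed=0):
    """Minimal tabu search over permutations: take the best non-tabu pairwise
    swap each step (even if it worsens the cost), and forbid repeating that
    swap for 'tenure' iterations to break cycling."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_cost = perm[:], cost(perm)
    tabu = {}                                    # move -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            if tabu.get((i, j), -1) >= it:
                continue
            perm[i], perm[j] = perm[j], perm[i]  # try the swap
            candidates.append((cost(perm), i, j))
            perm[i], perm[j] = perm[j], perm[i]  # undo it
        c, i, j = min(candidates)
        perm[i], perm[j] = perm[j], perm[i]      # accept best neighbour, even if worse
        tabu[(i, j)] = it + tenure
        if c < best_cost:
            best, best_cost = perm[:], c
    return best, best_cost

# Toy "schedule" cost: number of adjacent inversions; the sorted order is optimal.
inv = lambda p: sum(1 for a, b in zip(p, p[1:]) if a > b)
order, c = tabu_search(inv, n=6)
```

Using Tabu Search as the local-exploration operator inside a swarm or Particle Collision framework, as the abstract describes, amounts to calling a routine like this from each candidate solution.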
197

Uma meta-heurística para uma classe de problemas de otimização de carteiras de investimentos

Silva, Yuri Laio Teixeira Veras 16 February 2017 (has links)
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq / The investment portfolio selection problem consists of the allocation of resources to a finite number of assets, aiming, in its classic approach, to resolve a trade-off between the risk and the expected return of the portfolio. This problem is one of the most important topics in present-day finance and economics. Since the pioneering works of Markowitz, the issue has been treated as an optimisation problem with the two aforementioned objectives. However, in recent years, various restrictions and additional risk measures have been introduced in the literature, such as, for example, cardinality restrictions, minimum transaction lots and asset pre-selection. This practice aims to bring the problem closer to the reality encountered in financial markets. In that regard, this work proposes a Particle Swarm-based metaheuristic for the optimisation of several PSPs, in such a way that the problem can be solved considering a set of restrictions chosen by the investor. / O problema de seleção de carteiras de investimentos (PSP) consiste na alocação de recursos a um número finito de ativos, objetivando, em sua abordagem clássica, superar um trade-off entre o retorno esperado e o risco da carteira. Tal problema é uma das temáticas mais importantes voltadas a questões financeiras e econômicas da atualidade. Desde os pioneiros trabalhos de Markowitz, o assunto é tratado como um problema de otimização com esses dois objetivos citados.
Entretanto, nos últimos anos, diversas restrições e mensurações de riscos adicionais foram consideradas na literatura, como, por exemplo, restrições de cardinalidade, de lote mínimo de transação e de pré-seleção de ativos. Tal prática visa aproximar o problema da realidade encontrada nos mercados financeiros. Neste contexto, o presente trabalho propõe uma meta-heurística denominada Adaptive Non-dominated Sorting Multiobjective Particle Swarm Optimization para a otimização de vários problemas envolvendo PSP, de modo que permita a resolução do problema considerando um conjunto de restrições escolhidas pelo investidor.
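The cardinality restriction mentioned in this abstract can be made concrete with a toy mean-variance objective. All figures below (returns, covariances, the risk-aversion and penalty weights) are invented for illustration, and plain random sampling stands in for the proposed multiobjective PSO.

```python
import random

mu = [0.10, 0.12, 0.07, 0.15]                  # hypothetical expected returns
cov = [[0.05, 0.01, 0.00, 0.02],               # hypothetical covariance matrix
       [0.01, 0.06, 0.01, 0.03],
       [0.00, 0.01, 0.03, 0.00],
       [0.02, 0.03, 0.00, 0.09]]

def objective(w, risk_aversion=3.0, max_assets=2, rho=10.0):
    # Scalarized mean-variance trade-off plus a cardinality penalty term.
    ret = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * cov[i][j] * w[j] for i in range(4) for j in range(4))
    held = sum(1 for wi in w if wi > 1e-9)
    penalty = rho * max(0, held - max_assets)   # fires only above the asset limit
    return risk_aversion * var - ret + penalty

def random_portfolio(rng):
    # Respect the cardinality limit by construction: pick two assets, split weight.
    picks = rng.sample(range(4), 2)
    a = rng.random()
    w = [0.0] * 4
    w[picks[0]], w[picks[1]] = a, 1.0 - a
    return w

rng = random.Random(7)
best = min((random_portfolio(rng) for _ in range(3000)), key=objective)
```

Enforcing cardinality both by construction (sampling two-asset portfolios) and by penalty shows the two usual options; a real solver typically commits to one of them.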
199

Uma eficiente metodologia para reconfiguração de redes de distribuição de energia elétrica usando otimização por enxame de partículas

Prieto, Laura Paulina Velez January 2015 (has links)
Orientador: Prof. Dr. Edmarcio Antonio Belati / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia Elétrica, 2015. / Esta pesquisa apresenta uma metodologia para reconfiguração de sistemas elétricos de distribuição baseada na metaheurística "otimização por enxame de partículas" do inglês Particle Swarm Optimization, denominada por PSO. Na metodologia proposta, inicialmente são estabelecidos os subconjuntos de chaves candidatas, calculados com base no número de chaves abertas necessárias para manter a radialidade do sistema. Assim, o espaço de busca diminui consideravelmente. O algoritmo de solução foi desenvolvido para minimizar as perdas de potência nas linhas da rede de distribuição, sujeita às seguintes restrições: a) limite de tensão; b) ilhamento de carga; c) radialidade do sistema e d) balanço das potências ativa e reativa nos nós da rede. Alterações na formulação clássica do PSO foram realizadas de modo a tornar o processo de busca mais eficiente. O processo de busca que compõem a metodologia foi detalhado em um sistema de 5 barras e 7 linhas. A técnica foi validada em quatro sistemas: 16 barras e 21 chaves; 33 barras e 37 chaves; 70 barras e 74 chaves; e 136 barras e 156 chaves. Comparando os resultados para os quatro sistemas testados com os resultados existentes na literatura, em todos os casos foi encontrada a topologia com o menor número de perdas já encontrada na literatura consultada até o momento. / This research presents a method for network reconfiguration in distribution systems based on the metaheuristics "Particle Swarm Optimization". In this method, the candidate switch subsets are calculated based on the number of open switches necessary to maintain the radial configuration. Then, the search space reduces substantially. 
The algorithm was developed to minimize the power losses in the lines of the distribution system subject to the following constraints: a) voltage limits; b) load connectivity; c) radial configuration; and d) active and reactive power balance at the network nodes. The original PSO formulation was modified to make the search process more efficient. The search process that composes the methodology is detailed on a system of 5 nodes and 7 switches. The technique was validated on four systems: 16 nodes and 21 switches; 33 nodes and 37 switches; 70 nodes and 74 switches; and 136 nodes and 156 switches. Comparing the results for the four tested systems with those in the literature, in all cases the method found the topology with the lowest losses reported in the literature consulted to date.
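The candidate-switch step described above rests on a simple graph fact: a radial network is a spanning tree, so the number of switches that must remain open is fixed by the system size. A minimal sketch, assuming each system is connected (the helper name is ours, not the dissertation's):

```python
def n_open_switches(n_nodes, n_switches):
    # A radial configuration is a spanning tree using n_nodes - 1 closed
    # switches, so all remaining switches must stay open.
    return n_switches - (n_nodes - 1)

# The four validation systems cited in the abstract, as (nodes, switches).
systems = [(16, 21), (33, 37), (70, 74), (136, 156)]
opens = [n_open_switches(n, s) for n, s in systems]
```

Fixing this count in advance is what lets the methodology restrict PSO to candidate switch subsets instead of searching over all switch states.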
200

Metodologia de estimação dos parâmetros de um módulo termoelétrico baseada na implementação do algoritmo PSO

Giratá, Daniel Ricardo Ojeda January 2016 (has links)
Orientador: Prof. Dr. Luiz A. Luz de Almeida / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia Elétrica, 2016. / Módulos termoelétricos (TEM - Thermoelectric Modules) são utilizados na geração de energia elétrica e na construção de câmaras térmicas para caracterização de materiais como ligas de memória de forma (SMA - Shape Memory Alloy), dentre outros. Para ter uma correta representação do TEM é necessária a criação de um modelo matemático que consiga representar o seu funcionamento, tanto em corrente contínua como nas demais frequências relevantes. No presente trabalho é proposto um modelo para a representação de uma câmara térmica construída a partir de dois TEM, considerando-se as não-linearidades destes. Métodos clássicos de estimação para modelos lineares nos parâmetros não se aplicam ao modelo proposto. Para a obtenção dos valores dos parâmetros do TEM, este é excitado com um sinal pseudoaleatório binário (PRBS - Pseudo Random Binary Sequence) e a resposta é utilizada pelo método não determinístico de otimização por enxame de partículas (PSO - Particle Swarm Optimization) para fazer a estimação. O modelo escolhido para a caracterização da câmara térmica é não-linear. Este contém os parâmetros térmicos dinâmicos, tais como: a camada superior, a placa superior, a camada central, a placa inferior e o dissipador de calor de cada um dos TEM, sendo no total 21 parâmetros calculados pelo algoritmo PSO. O sinal de excitação consiste em um ruído branco que é previamente filtrado, resultando em um sinal dinamicamente persistente, de tal forma que o TEM seja bem caracterizado. Resultados de simulações mostram a efetividade do algoritmo PSO na estimação dos parâmetros do modelo. / Thermoelectric Modules (TEM) are used in power generation and in the construction of thermal chambers for material characterization, such as Shape Memory Alloys (SMA), among others.
In order to obtain a correct TEM representation, a proper model identification procedure is required to capture the TEM operation, both at DC and at the other relevant frequencies. In this work, a model is proposed for the representation of a thermal chamber built from two TEMs, taking their nonlinear characteristics into account. Classical estimation methods for models that are linear in the parameters do not apply to the proposed model. To obtain the TEM parameters, the module is excited with a filtered white-noise signal, and the temperature response is used by the Particle Swarm Optimization (PSO) algorithm to perform the estimation. The chosen model is nonlinear, with 21 parameters representing the TEM: the top layer, the hot side, the middle layer, the cold side and the heatsink. For numerical stability, the white-noise excitation is filtered beforehand, yielding a dynamically persistent signal so that the TEM is properly characterized. Simulation results show the effectiveness of PSO in estimating the model parameters.
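The estimation loop described above can be sketched in miniature: excite a model with filtered white noise, then let PSO search for the parameters that reproduce the measured response. The sketch below is a toy version under stated assumptions — a first-order model with two parameters stands in for the dissertation's 21-parameter nonlinear TEM model, and all names and tuning values are ours.

```python
import random

def simulate(theta, u):
    # First-order thermal response T[k+1] = a*T[k] + b*u[k]; a stand-in for
    # the full 21-parameter TEM model described in the abstract.
    a, b = theta
    T, out = 0.0, []
    for uk in u:
        T = a * T + b * uk
        out.append(T)
    return out

def pso_estimate(u, y, iters=150, n=25, seed=1):
    """Minimal PSO minimizing the squared output error over (a, b) in [0, 1]^2."""
    rng = random.Random(seed)
    cost = lambda th: sum((ym - ys) ** 2 for ym, ys in zip(y, simulate(th, u)))
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [cost(p) for p in pos]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pval[i]:
                pbest[i], pval[i] = pos[i][:], c
                if c < gval:
                    gbest, gval = pos[i][:], c
    return gbest

rng = random.Random(0)
# Persistently exciting input: white noise passed through a short filter,
# mirroring the filtered-white-noise excitation in the abstract.
raw = [rng.uniform(-1, 1) for _ in range(200)]
u = [0.5 * raw[k] + 0.5 * raw[k - 1] if k else raw[0] for k in range(200)]
true_theta = (0.9, 0.3)
y = simulate(true_theta, u)      # "measured" response
a_hat, b_hat = pso_estimate(u, y)
```

Because PSO only needs objective-function evaluations, the same loop applies unchanged when `simulate` is replaced by a nonlinear multi-layer model that classical linear-in-the-parameters estimators cannot handle.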
