51

Comparison of parametric and non-parametric models of magnetorheological fluid actuators

Teixeira, Philippe César Fernandes 26 June 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico / FAPEMIG - Fundação de Amparo a Pesquisa do Estado de Minas Gerais / Since their appearance in the early 1970s, semi-active systems have gained an increasing role in engineering projects. In the specific case of semi-active systems based on magnetorheological fluid, the first commercial products using this smart material were successfully marketed in 1998 by LORD® Corporation, the manufacturer of the actuator used in this work. Since then, applications have grown steadily: from intelligent suspension systems for bridges and buildings, aimed at human safety and structural health, to vehicle seats, aimed at passenger comfort and safety. The objective of this work was to present a methodology for validating mathematical models of magnetorheological fluid dampers, both parametric and non-parametric. For validation, inverse-problem techniques were used to optimize the studied model with respect to the experimental data by minimizing a relative error: the norm of the difference between the forces obtained from the experimental test and from the numerical model implemented in the MATLAB® environment, divided by the norm of the experimental force. The advantage of parametric models is that they converge quickly; the disadvantage is that, since they follow a well-defined mathematical law, the shapes of their characteristic curves are "rigid", with little freedom to change configuration, and typically follow a characteristic trend. The non-parametric model applied in this work is based on fuzzy logic, which gives greater freedom to fit every point of the experimental curve. However, finding the fuzzy parameters is difficult enough to compromise the validation result. Finally, it is concluded that the parametric hysteretic model gave the best results, the lowest computational cost, and the easiest implementation. / Master's dissertation
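The relative-error objective described in this abstract (norm of the force difference divided by the norm of the experimental force) can be sketched as follows. This is an illustrative reconstruction, not the dissertation's MATLAB® code, and the sample force values are invented for demonstration:

```python
import numpy as np

def relative_error(f_exp, f_model):
    """Relative error as described in the abstract: norm of the difference
    between experimental and model forces, divided by the norm of the
    experimental force."""
    f_exp = np.asarray(f_exp, dtype=float)
    f_model = np.asarray(f_model, dtype=float)
    return np.linalg.norm(f_exp - f_model) / np.linalg.norm(f_exp)

# Hypothetical force samples (N), purely for illustration:
f_exp = np.array([100.0, 120.0, 90.0])
f_model = np.array([98.0, 121.0, 93.0])
err = relative_error(f_exp, f_model)
```

An inverse-problem solver would minimize `err` over the damper model's parameters.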
52

Risk-based modeling, simulation and optimization for the integration of renewable distributed generation into electric power networks

Mena, Rodrigo 30 June 2015 (has links)
Renewable distributed generation (DG) is expected to continue playing a fundamental role in the development and operation of sustainable, efficient and reliable electric power systems, by virtue of offering a practical alternative to diversify and decentralize overall power generation while benefiting from cleaner and safer energy sources. The integration of renewable DG into existing electric power networks poses socio-techno-economic challenges that have attracted substantial research and advancement. In this context, the focus of the present thesis is the design and development of a modeling, simulation and optimization framework for the integration of renewable DG into electric power networks. The specific problem considered is that of selecting the technology, size and location of renewable generation units under technical, operational and economic constraints. Within this problem, the key research questions are: (i) the representation and treatment of the uncertain physical variables (such as the availability of diverse primary renewable energy sources, bulk-power supply, power demands and the occurrence of component failures) that dynamically determine the DG-integrated network operation; (ii) the propagation of these uncertainties onto the system's operational response and the control of the associated risk; and (iii) the intensive computational effort resulting from the complex combinatorial optimization problem of renewable DG integration. For the evaluation of the system under a given renewable DG plan, a non-sequential Monte Carlo simulation and optimal power flow (MCS-OPF) computational model has been designed and implemented that emulates the DG-integrated network operation. 
Random realizations of operational scenarios are generated by sampling from the distributions of the uncertain variables, and for each scenario the system performance is evaluated in terms of economics and reliability of power supply, represented by the global cost (CG) and the energy not supplied (ENS), respectively. To measure and control the risk relative to system performance, two indicators are introduced: the conditional value-at-risk (CVaR) and the CVaR deviation (DCVaR). For the optimal selection of technology, size and location of the renewable DG units, two distinct multi-objective optimization (MOO) approaches have been implemented with heuristic optimization (HO) search engines. The first approach is based on the fast non-dominated sorting genetic algorithm (NSGA-II) and aims at the concurrent minimization of the expected values of CG and ENS, denoted ECG and EENS, respectively, combined with their corresponding CVaR(CG) and CVaR(ENS) values; the second approach carries out a MOO differential evolution (DE) search to minimize simultaneously ECG and its associated deviation DCVaR(CG). Both optimization approaches embed the MCS-OPF computational model to evaluate the performance of each DG-integrated network proposed by the HO search engine. The challenge posed by the large computational effort required by the proposed simulation and optimization frameworks has been addressed by introducing an original technique that nests hierarchical clustering analysis (HCA) within a DE search engine. Examples of application of the proposed frameworks have been worked out, regarding an adaptation of the IEEE 13-bus distribution test feeder and a realistic setting of the IEEE 30-bus sub-transmission and distribution test system. The results show that these frameworks are effective in finding optimal DG-integrated network solutions while controlling risk from two distinct perspectives: directly through the use of CVaR and indirectly by targeting uncertainty in the form of DCVaR. Moreover, CVaR acts as an enabler of trade-offs between optimal expected performance and risk, and DCVaR also integrates uncertainty into the analysis, providing a wider spectrum of information for well-supported and confident decision making.
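The two risk indicators used in this thesis, CVaR and DCVaR, have simple empirical estimators on a set of Monte Carlo cost samples. The sketch below is a generic textbook-style estimator, assumed here for illustration; the thesis's own formulation may differ in detail (e.g., the tail interpolation used):

```python
import numpy as np

def cvar(samples, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) tail of the cost samples."""
    s = np.sort(np.asarray(samples, dtype=float))
    var = np.quantile(s, alpha)   # value-at-risk threshold
    return s[s >= var].mean()

def dcvar(samples, alpha=0.95):
    """CVaR deviation: CVaR of the deviations of the samples
    from their mean."""
    s = np.asarray(samples, dtype=float)
    return cvar(s - s.mean(), alpha)
```

In the MOO loop, `samples` would be the global costs CG (or ENS values) produced by the MCS-OPF model for one candidate DG-integrated network.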
53

Interference modelling of heterofermentative and homofermentative bacteria in the ethanol fermentation process

Jean Mimar Santa Cruz Yabarrena 04 May 2012 (has links)
The fermentation process for obtaining ethanol is a complex system. During fermentation, chronic infections develop, and outbreaks of acute infection appear under conditions that, because of the nonlinear dynamics of the process, which involves a network of microbial metabolic reactions, the strong dependence on boundary conditions, and the synergistic and antagonistic interactions in this ecosystem, still represent a relevant open research topic. This work proposes to contribute to the interpretation of such effects through a model that includes the interference of bacteria. A controlled, laboratory-scale scenario is proposed, with the isolated Saccharomyces cerevisiae strain PE-2 in co-culture with bacteria of heterofermentative and homofermentative metabolism. The interference is explained by means of a fixed-effect model based on the unstructured model proposed by Lee (1983) and modified from the studies of Andrietta (2003). Assays with four treatments are conducted: a control fermentation and fermentations contaminated with each bacterial type separately and with both together, in order to provide experimental data for fitting the model under discussion. The parameters of the differential equations describing the fermentation kinetics are then estimated using a genetic algorithm based on the differential evolution of Storn and Price (1997). The evaluation of the parameters is completed by a parametric sensitivity analysis. With these results, the control-treatment model is used as a base, and vectors of categorical variables corresponding to fixed effects are inserted; these variables make it possible to model the interference of the contamination in the proposed mathematical model. Descriptive statistics, Bayesian inference analysis and the biochemical interpretation of the results complement the inferences drawn about the model. The sensitivity and correlation analysis of the parameters showed that the well-known kinetic models studied are not suitable for modeling the process, because of the high correlation among their parameters. The control treatment had an ethanol yield of 46.97%; the treatment with heterofermentative bacteria showed a reduction of 2.35%, and the mixture of both bacteria a reduction of 1.58%. One of the main contributions of this study concerns glycerol production: the results show no significant impact from the presence of homofermentative bacteria, but rather a clear tendency to inhibit its production. Evidence is also presented of synergy between the bacteria and of mannitol consumption by the homofermentative bacterium.
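The parameter-estimation step cited above uses differential evolution (Storn and Price, 1997). A minimal DE/rand/1/bin sketch is given below; it is a generic implementation fitted to a toy objective, not the thesis's actual fermentation-kinetics code, and all parameter values (population size, F, CR) are illustrative defaults:

```python
import numpy as np

def de_fit(objective, bounds, pop_size=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: evolve a population of parameter vectors,
    keeping a trial vector whenever it does not worsen the objective."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target vector i.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            tc = objective(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

In the thesis, `objective` would integrate the kinetic ODEs for a candidate parameter vector and return the misfit against the experimental fermentation data.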
54

Electrical impedance tomography image reconstruction using differential evolution

RIBEIRO, Reiga Ramalho 23 February 2016 (has links)
CAPES / Electrical impedance tomography (EIT) is a technique that aims to reconstruct images of the interior of a body in a non-invasive and non-destructive way. Based on the application of electrical current and the measurement of the body's boundary potentials through electrodes, an EIT image reconstruction algorithm generates the electrical conductivity map of the body's interior. Several methods are applied to generate EIT images, but they still produce smooth-contour images. This is due to the mathematical nature of the EIT reconstruction problem, which is ill-posed and ill-conditioned: there is no exact internal conductivity distribution for a given distribution of boundary potentials. EIT is governed mathematically by Poisson's equation, and image generation involves the iterative resolution of a direct (forward) problem, which obtains the boundary potentials from an internal conductivity distribution. In this work, the direct problem was solved with the finite element method. It is thus possible to apply search and optimization techniques that minimize the relative mean squared error (objective function) between the boundary potentials measured on the body (gold image) and the potentials generated by solving the direct problem for a candidate solution. 
The goal of this work was therefore to build a computational tool based on hybrid search and optimization algorithms, with emphasis on differential evolution, to reconstruct EIT images. For comparison, genetic algorithms, particle swarm optimization and simulated annealing were also used to generate EIT images. The simulations were carried out in EIDORS, an open-source MATLAB/GNU Octave tool aimed at the EIT community. The experiments used three different configurations of gold images (phantoms). The analyses were done in two ways: qualitatively, in terms of how similar the images generated by each optimization technique are to their respective phantom; and quantitatively, in terms of computational time, through the evolution of the relative error of the best candidate solution over the reconstruction time, and computational cost, through the evolution of the relative error over the number of objective-function evaluations performed by the algorithm. Results were generated for genetic algorithms, five classical versions of differential evolution, a modified version of differential evolution, particle swarm optimization, simulated annealing and three new hybrid techniques based on differential evolution proposed in this work. The results show that all the hybrid techniques were effective in solving the EIT problem, obtaining good qualitative and quantitative results from 50 iterations of these algorithms onward. The algorithm obtained by hybridizing differential evolution and simulated annealing deserves particular mention as the most promising technique proposed here for EIT image reconstruction, proving faster and computationally cheaper than the other proposed techniques. 
The results of this research generated several contributions in the form of papers published at national and international conferences.
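The objective function driving the search described in this abstract compares measured boundary potentials (the gold image) with those predicted by the forward problem for a candidate conductivity map. The sketch below uses a toy linear map as a stand-in for the FEM forward solver; the matrix, dimensions and conductivity values are invented for illustration:

```python
import numpy as np

def eit_objective(conductivity, forward_solve, v_measured):
    """Relative error between measured boundary potentials and those
    predicted by the forward (direct) problem for a candidate
    conductivity distribution."""
    v_model = forward_solve(conductivity)
    return np.linalg.norm(v_measured - v_model) / np.linalg.norm(v_measured)

# Toy stand-in for the FEM forward problem: a fixed linear map from
# 5 element conductivities to 8 electrode readings (hypothetical sizes).
rng = np.random.default_rng(1)
A = rng.random((8, 5))
forward_solve = lambda sigma: A @ sigma

sigma_true = np.array([1.0, 1.0, 2.0, 1.0, 1.5])   # "gold" phantom
v_measured = forward_solve(sigma_true)
```

A DE, GA, PSO or SA search would then minimize `eit_objective` over candidate conductivity vectors.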
55

Reconstrução de imagens de tomografia por impedância elétrica usando evolução diferencial

RIBEIRO, Reiga Ramalho 23 February 2016 (has links)
Submitted by Fabio Sobreira Campos da Costa (fabio.sobreira@ufpe.br) on 2016-09-20T13:21:46Z No. of bitstreams: 2 license_rdf: 1232 bytes, checksum: 66e71c371cc565284e70f40736c94386 (MD5) Dissertação_Versão_Digital_REIGA.pdf: 3705889 bytes, checksum: 551e1d47969ce5d1aa92cdb311f41304 (MD5) / Made available in DSpace on 2016-09-20T13:21:46Z (GMT). No. of bitstreams: 2 license_rdf: 1232 bytes, checksum: 66e71c371cc565284e70f40736c94386 (MD5) Dissertação_Versão_Digital_REIGA.pdf: 3705889 bytes, checksum: 551e1d47969ce5d1aa92cdb311f41304 (MD5) Previous issue date: 2016-02-23 / CAPES / A Tomografia por Impedância Elétrica (TIE) é uma técnica que visa reconstruir imagens do interior de um corpo de forma não-invasiva e não-destrutiva. Com base na aplicação de corrente elétrica e na medição dos potenciais de borda do corpo, feita através de eletrodos, um algoritmo de reconstrução de imagens de TIE gera o mapa de condutividade elétrica do interior deste corpo. Diversos métodos são aplicados para gerar imagens de TIE, porém ainda são geradas imagens de contorno suave. Isto acontece devido à natureza matemática do problema de reconstrução da TIE como um problema mal-posto e mal-condicionado. Isto significa que não existe uma distribuição de condutividade interna exata para uma determinada distribuição de potenciais de borda. A TIE é governada matematicamente pela equação de Poisson e a geração da imagem envolve a resolução iterativa de um problema direto, que trata da obtenção dos potenciais de borda a partir de uma distribuição interna de condutividade. O problema direto, neste trabalho, foi aplicado através do Método dos Elementos Finitos. Desta forma, é possível aplicar técnicas de busca e otimização que objetivam minimizar o erro médio quadrático relativo (função objetivo) entre os potenciais de borda mensurados no corpo (imagem ouro) e os potencias gerados pela resolução do problema direto de um candidato à solução. 
Electrical Impedance Tomography (EIT) is a technique that aims to reconstruct images of the interior of a body in a non-invasive and non-destructive way. Based on the application of an electrical current and the measurement of electrical potentials on the body's boundary, made through electrodes, an EIT image-reconstruction algorithm generates the conductivity-distribution map of the body's interior. Several methods are applied to generate EIT images; however, they still produce smooth-contour images. This is due to the mathematical nature of the EIT reconstruction problem, which is ill-posed and ill-conditioned: there is no unique internal conductivity distribution for a given boundary potential distribution. EIT is governed mathematically by Poisson's equation, and image generation involves the iterative resolution of a direct problem, which consists of obtaining the boundary potentials from an internal conductivity distribution. In this dissertation, the direct problem was solved with the Finite Element Method. It is thereby possible to apply search and optimization techniques that minimize the relative mean-square error (objective function) between the boundary potentials measured on the body (gold image) and the potentials generated by solving the direct problem for a candidate solution. Thus, the goal of this work was to build a computational tool based on hybrid search and optimization algorithms, with emphasis on Differential Evolution, in order to reconstruct EIT images. For comparison, Genetic Algorithms, Particle Swarm Optimization and Simulated Annealing were also used to generate EIT images. The simulations were run in EIDORS, an open-source MATLAB/GNU Octave tool aimed at the EIT community. The experiments used three different configurations of gold images (phantoms). The analyses were performed in two ways: qualitative, in terms of how similar the images generated by each optimization technique are to their respective phantom; and quantitative, in terms of computational time (the evolution of the relative error of the best candidate solution, as calculated by the objective function, over the reconstruction time) and computational cost (the evolution of the relative error over the number of objective-function evaluations performed by the algorithm). Results were generated for Genetic Algorithms, five classical versions of Differential Evolution, a modified version of Differential Evolution, Particle Swarm Optimization, Simulated Annealing, and three new hybrid techniques based on Differential Evolution proposed in this work. According to the results, all hybrid techniques were efficient in solving the EIT problem, achieving good qualitative and quantitative results from as few as 50 iterations. The performance of the algorithm obtained by hybridizing Differential Evolution and Simulated Annealing deserves particular mention, as it was the most promising technique proposed here for EIT image reconstruction, proving faster and computationally cheaper than the other proposed techniques. The results of this research generated several contributions in the form of papers published in national and international events.
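The search loop and error measure described above can be illustrated with a minimal sketch: a classic DE/rand/1/bin minimiser driving a relative-error norm of the form ||F_model − F_measured|| / ||F_measured||. This is a generic illustration, not the thesis's EIDORS-based tool; the toy forward model and all parameter values are assumptions made for the example.

```python
import numpy as np

def relative_error(simulated, measured):
    # ||simulated - measured|| / ||measured||, a relative-error objective
    return np.linalg.norm(simulated - measured) / np.linalg.norm(measured)

def de_rand_1_bin(objective, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=0):
    """Classic DE/rand/1/bin minimiser (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct population members.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, guaranteeing at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            f_trial = objective(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy "forward model": fit amplitude and frequency of a sine to measurements.
t = np.linspace(0, 2 * np.pi, 50)
measured = np.sin(t)
model = lambda p: p[0] * np.sin(p[1] * t)
x, err = de_rand_1_bin(lambda p: relative_error(model(p), measured),
                       bounds=[(0.1, 3.0), (0.1, 3.0)])
```

In the actual EIT setting the forward model would be the Finite Element solution of the direct problem and the decision vector the internal conductivity distribution, which makes each objective evaluation far more expensive — hence the interest in counting function evaluations as a cost measure.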
56

An efficient entropy estimation approach

Paavola, M. (Marko) 01 November 2011 (has links)
Abstract Advances in miniaturisation have led to the development of new wireless measurement technologies such as wireless sensor networks (WSNs). A WSN consists of low-cost, battery-operated nodes capable of sensing the environment, transmitting and receiving, and computing. While a WSN has several advantages, including cost-effectiveness and easy installation, the nodes suffer from small memory, low computing power, small bandwidth and a limited energy supply. In order to cope with these resource restrictions, data-processing methods should be as efficient as possible. As a result, high-quality approximations are preferred over exact answers. The aim of this thesis was to propose an efficient entropy approximation method for resource-constrained environments. Specifically, the algorithm should use a small, constant amount of memory, achieve a certain accuracy and make low computational demands. The performance of the proposed algorithm was evaluated experimentally with three case studies. The first study focused on the online monitoring of WSN communications performance in an industrial environment. The monitoring approach was based on the observation that entropy can be applied to assess the impact of interference on the time-delay variation of periodic tasks. The main purpose of the two additional cases, depth-of-anaesthesia (DOA) monitoring and benchmarking with simulated data sets, was to provide additional evidence of the general applicability of the proposed method. Moreover, in the case of DOA monitoring, an efficient entropy approximation could assist in the development of handheld devices or in processing large amounts of online data from different channels simultaneously. The initial results from the communication and DOA monitoring applications, as well as from the simulations, were encouraging. Based on the case studies, the proposed method was therefore able to meet the stated requirements. Since entropy is a widely used quantity, the method is also expected to have a variety of applications in measurement systems with similar requirements.
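The thesis's own algorithm is not reproduced here, but the stated requirements — constant memory, bounded accuracy, low computational demand — can be illustrated with a generic histogram-based sketch that approximates Shannon entropy over a data stream using a fixed number of counters. The class name, bin count and value range are assumptions for the example.

```python
import math

class StreamingEntropy:
    """Constant-memory entropy approximation over a fixed-range data stream.

    A generic histogram sketch, NOT the thesis's algorithm: it keeps `bins`
    integer counters regardless of stream length, matching the constant-memory
    requirement described above.
    """

    def __init__(self, lo, hi, bins=32):
        self.lo, self.hi, self.bins = lo, hi, bins
        self.counts = [0] * bins
        self.n = 0

    def update(self, x):
        # Clamp into [lo, hi], then increment the matching bin counter: O(1) work.
        x = min(max(x, self.lo), self.hi)
        idx = min(int((x - self.lo) / (self.hi - self.lo) * self.bins), self.bins - 1)
        self.counts[idx] += 1
        self.n += 1

    def entropy(self):
        # Shannon entropy (in bits) of the empirical bin distribution.
        return -sum((c / self.n) * math.log2(c / self.n) for c in self.counts if c)

est = StreamingEntropy(0.0, 1.0, bins=16)
for i in range(1000):
    est.update((i % 16) / 16.0)   # near-uniform stream over 16 levels
h = est.entropy()                 # close to log2(16) = 4 bits
```

Memory use is O(bins) regardless of stream length and each update is O(1), which is the kind of budget a WSN node or a handheld DOA monitor can afford; the accuracy/memory trade-off is set by the bin count.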
57

Angle modulated population based algorithms to solve binary problems

Pampara, Gary 24 February 2012 (has links)
Recently, continuous-valued optimization problems have received a great amount of focus, resulting in optimization algorithms that are very efficient within the continuous-valued space. Many optimization problems are, however, defined within the binary-valued problem space. These continuous-valued optimization algorithms cannot operate directly on a binary-valued problem representation without algorithm adaptations, because the mathematics used within these algorithms generally fails within a binary problem space. Unfortunately, such adaptations may alter the behavior of the algorithm, potentially degrading the performance of the original continuous-valued optimization algorithm. Additionally, binary representations present complications with respect to increasing problem dimensionality, interdependencies between dimensions, and a loss of precision. This research investigates the possibility of applying continuous-valued optimization algorithms to solve binary-valued problems without requiring algorithm adaptation. This is achieved through the application of a mapping technique known as angle modulation. Angle modulation effectively addresses most of the problems associated with the use of a binary representation by abstracting a binary problem into a four-dimensional continuous-valued space, from which a binary solution is then obtained. The abstraction takes the form of a bit-generating function produced by a continuous-valued algorithm; a binary solution is then obtained by sampling the bit-generating function. This thesis proposes a number of population-based angle-modulated continuous-valued algorithms to solve binary-valued problems. These algorithms are then compared to their binary-algorithm counterparts using a suite of benchmark functions. Empirical analysis shows that the angle-modulated continuous-valued algorithms are viable alternatives to binary optimization algorithms. Copyright 2012, University of Pretoria. All rights reserved.
The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria. Please cite as follows: Pamparà, G 2012, Angle modulated population based algorithms to solve binary problems, MSc dissertation, University of Pretoria, Pretoria, viewed yymmdd <http://upetd.up.ac.za/thesis/available/etd-02242012-090312/> / Dissertation (MSc)--University of Pretoria, 2012. / Computer Science / unrestricted
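A sketch of the angle-modulation mapping may help. The generating function below is the commonly cited four-coefficient form g(x) = sin(2π(x − a)·b·cos(2π(x − a)·c)) + d; the sampling points and coefficient values are illustrative assumptions, and the thesis's exact formulation should be checked against the original.

```python
import math

def bit_generating_function(a, b, c, d):
    """Angle-modulation generating function in its commonly cited form:
    g(x) = sin(2*pi*(x - a) * b * cos(2*pi*(x - a) * c)) + d.
    The four coefficients (a, b, c, d) form the continuous-valued search space,
    regardless of how many bits the binary problem needs."""
    return lambda x: math.sin(2 * math.pi * (x - a) * b
                              * math.cos(2 * math.pi * (x - a) * c)) + d

def sample_bits(coeffs, n_bits):
    # A binary solution is read off by sampling g at evenly spaced points:
    # a positive sample maps to 1, anything else to 0.
    g = bit_generating_function(*coeffs)
    return [1 if g(x) > 0 else 0 for x in range(n_bits)]

# Two coefficient vectors (chosen arbitrarily here) decode to bit strings.
bits_a = sample_bits((0.0, 1.0, 1.0, 0.0), 8)
bits_b = sample_bits((0.0, 0.5, 0.5, 0.5), 8)   # positive offset d -> all ones
```

The continuous-valued optimizer only ever searches the 4-dimensional (a, b, c, d) space, so the dimensionality of the search is decoupled from the length of the binary string — which is precisely how the technique sidesteps the scaling problems of direct binary representations.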
58

Amélioration des métaheuristiques d'optimisation à l'aide de l'analyse de sensibilité / Improvement of optimization metaheuristics with sensitivity analysis

Loubiere, Peio 21 November 2016 (has links)
Hard optimization denotes a class of problems whose solutions cannot be found by an exact method in polynomial time. Finding a solution in an acceptable time requires a compromise on its accuracy. Metaheuristics are high-level algorithms that solve such problems generically and efficiently (i.e. they find an acceptable solution according to defined criteria such as time, error, etc.). The first chapter of this thesis is partly dedicated to the state of the art of these issues, especially the study of two families of population-based metaheuristics: evolutionary algorithms and swarm-intelligence algorithms. In order to propose an innovative approach in the metaheuristics research field, this first chapter also presents the notion of sensitivity analysis. Sensitivity analysis aims at evaluating the influence of a function's parameters on its response. It globally characterises the behaviour of the function to be optimized (linearity, influence, correlation, etc.) over its search space. Including a sensitivity-analysis method in a metaheuristic steers its search along the most promising dimensions. Two algorithms combining these notions are proposed in the second and third chapters. In the first algorithm, ABC-Morris, the Morris method is introduced into the artificial bee colony (ABC) metaheuristic. This inclusion is dedicated, as the two methods rely on two similar equations. With the aim of generalizing the approach, a new method, NN-LCC, is then developed and its generic integration is illustrated on two metaheuristics, ABC with modification rate and Differential Evolution. The efficiency of the proposed approaches is tested on the CEC 2013 conference benchmark. The study comprises two parts: a usual performance analysis of the method on this benchmark against several state-of-the-art algorithms, and a comparison with the original algorithm when a subset of dimensions is deactivated, causing a strong disparity of influences.
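The Morris screening idea that ABC-Morris builds on can be sketched independently of the bee-colony coupling: compute one-at-a-time elementary effects at several random base points and rank dimensions by their mean absolute effect (often written μ*). The objective function, sample counts and step size below are illustrative assumptions, not values from the thesis.

```python
import random
import statistics

def morris_mu_star(f, dim, r=20, delta=0.1, seed=1):
    """Simplified one-at-a-time Morris screening (a sketch, not ABC-Morris):
    for r random base points in [0, 1]^dim, perturb each dimension by delta
    and record the elementary effect EE_i = (f(x + delta*e_i) - f(x)) / delta.
    Returns mu* (mean absolute elementary effect) per dimension."""
    rng = random.Random(seed)
    effects = [[] for _ in range(dim)]
    for _ in range(r):
        # Base point kept in [0, 1 - delta] so the perturbed point stays in bounds.
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        fx = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            effects[i].append((f(xp) - fx) / delta)
    # mu* ranks dimensions by influence on the response.
    return [statistics.mean(abs(e) for e in ee) for ee in effects]

# Toy objective: dimension 0 dominates, dimension 2 is inert.
f = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu_star = morris_mu_star(f, dim=3)
```

A metaheuristic armed with such a ranking can then spend more of its mutation budget on the dimensions with large μ*, which is the intuition behind steering the search along the most promising dimensions.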
59

Um algoritmo de evolução diferencial com penalização adaptativa para otimização estrutural multiobjetivo / A differential evolution algorithm with adaptive penalization for multiobjective structural optimization

Vargas, Dênis Emanuel da Costa 05 November 2015 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Multiobjective Optimization Problems (MOPs) with constraints are common in many areas of science and engineering, such as Structural Optimization (SO). In spite of Differential Evolution (DE) being a very attractive metaheuristic for real-world problems, the literature lacks discussion of its performance on SO MOPs. Most SO problems have constraints. This thesis uses the constraint-handling technique called the Adaptive Penalty Method (APM), which has a history of good results when applied to single-objective SO problems. Given the potential of DE for solving real-world problems and of APM in SO, together with the scarcity of work involving these elements in SO MOPs, this thesis presents a study of a well-known DE algorithm coupled with the APM technique on these problems. Computational experiments considering scenarios with and without the inclusion of user-preference information were performed on problems with continuous and discrete variables. The results were compared with those found in the literature, in addition to those obtained by the algorithm that represents the state of the art. They were also compared with the results obtained by the same DE algorithm without the APM technique, in order to investigate the influence of APM on the performance of the proposed combination. The advantages and disadvantages of the proposed algorithm in each scenario are presented in this thesis, together with suggestions for future work.
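For reference, the APM penalty scheme can be sketched as it is commonly stated in the literature: each constraint j gets a coefficient k_j proportional to its mean violation across the population, scaled by the mean objective value, and infeasible candidates are penalised upward from max(f, ⟨f⟩). This is a generic sketch under those assumptions; the thesis's exact variant may differ.

```python
import numpy as np

def apm_fitness(obj_values, violations):
    """Adaptive Penalty Method fitness, as commonly stated (sketch only).

    obj_values: shape (n,), objective value of each candidate (minimisation).
    violations: shape (n, m), constraint violations, >= 0, 0 = satisfied.
    """
    f_avg = obj_values.mean()
    v_avg = violations.mean(axis=0)              # mean violation per constraint
    denom = np.sum(v_avg ** 2)
    # k_j = |<f>| * <v_j> / sum_l <v_l>^2 : constraints violated more often
    # by the current population are penalised more heavily.
    k = np.abs(f_avg) * v_avg / denom if denom > 0 else np.zeros_like(v_avg)
    base = np.maximum(obj_values, f_avg)         # infeasible start from max(f, <f>)
    penalty = violations @ k
    infeasible = violations.sum(axis=1) > 0
    return np.where(infeasible, base + penalty, obj_values)

# Toy population: candidate 1 has a better objective than candidate 0
# but violates a constraint, so its penalised fitness ends up worse.
obj = np.array([1.0, 0.9, 2.0])
viol = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.0]])
fit = apm_fitness(obj, viol)
```

Because the coefficients are recomputed from the current population each generation, the penalty adapts automatically as the search progresses, with no user-tuned penalty parameters — the property that makes APM attractive for coupling with DE.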
60

Srážko-odtokový proces v podmínkách klimatické změny / Rainfall runoff process in time of climate change

Benáčková, Kateřina January 2018 (has links)
The aim of this Diploma Thesis was to build a conceptual rainfall-runoff model capable of simulating discharge under conditions of climate change. After thorough verification of possible variants, the user program Runoff Prophet, which can simulate discharge at the closing profile of any river basin, was developed within this work. Runoff Prophet is a deterministic lumped model with a monthly computation time step; among hydrologic phenomena, it takes soil moisture, evapotranspiration, groundwater flow and watercourse flow into account. Its calibration is based on the differential-evolution principle, with the Nash–Sutcliffe model efficiency coefficient as the calibration criterion. The developed software was tested on the Vír I catchment, and the results were evaluated in terms of air temperature, precipitation and discharge characteristics at the Dalečín measurement cross-section in the distant future according to the A1B SRES climate scenario, implemented in the LARS-WG weather generator.
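The calibration criterion named above, the Nash–Sutcliffe model efficiency, has a compact closed form and can be sketched directly (the sample discharge series below are made up for illustration):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 means a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean; negative values mean it is worse."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Illustrative monthly discharge values (not data from the thesis).
obs = [3.0, 5.0, 4.0, 6.0, 7.0]
nse_perfect = nash_sutcliffe(obs, obs)        # perfect model
nse_mean = nash_sutcliffe(obs, [5.0] * 5)     # mean predictor
```

A calibration routine such as differential evolution would maximise this quantity (or minimise its negative) over the model's parameters.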
