  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Towards Structural Health Monitoring of Gossamer Structures Using Conductive Polymer Nanocomposite Sensors

Sunny, Mohammed Rabius 14 September 2010 (has links)
The aim of this research is to calibrate conductive polymer nanocomposite materials for large-strain sensing and to develop a structural health monitoring algorithm for gossamer structures that uses nanocomposites as strain sensors. Any health monitoring system works on the principle of sensing the response (strain, acceleration, etc.) of the structure to an external excitation and analyzing that response to determine the location and extent of damage. A sensor network, a mathematical model of the structure, and a damage detection algorithm are the necessary components of a structural health monitoring system. In normal operating conditions, a gossamer structure can experience normal strain as high as 50%, but presently available sensors can measure strain only up to about 10%, because traditional strain sensor materials do not combine low elastic modulus with high electrical conductivity. Conductive polymer nanocomposites, which can be stretched like rubber (up to 200%) and have high electrical conductivity (sheet resistance on the order of 100 Ohm/sq.), are a possible large-strain sensor material. However, these materials show hysteresis and relaxation in the variation of their electrical properties with mechanical strain, which makes their calibration difficult. We have carried out experiments on conductive polymer nanocomposite sensors to study the variation of electrical resistance with time-dependent strain. Two mathematical models, based on a modified fractional calculus approach and the Preisach approach, have been developed to describe the variation of electrical resistance with strain in a conductive polymer. A compensator based on a modified Preisach model has then been developed; it removes the effects of hysteresis and relaxation from the output (electrical resistance) of the conductive polymer nanocomposite sensor, which helps in calibrating the material for large-strain sensing.
The efficiency of both mathematical models and of the compensator has been demonstrated by comparing their results with experimental data. A prestressed square membrane has been considered as an example structure for structural health monitoring. Finite element analysis in ABAQUS has been carried out to determine the response of the membrane to a uniform transverse dynamic pressure under different damage conditions. A neuro-fuzzy system has been designed to solve the inverse problem of detecting damage in the structure from the strain history sensed at different points by a sensor that may have significant hysteresis. Damage feature index vectors, determined by wavelet analysis of the strain history at different points of the structure, are taken as input by the neuro-fuzzy system, which detects the location and extent of the damage using fuzzy rules. The rules of the fuzzy system are determined by a neural network training algorithm from a training dataset containing known input-output pairs (damage feature index vectors with the location and extent of damage for different damage conditions). The model is validated using input-output sets other than those used to train the network. / Ph. D.
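The Preisach approach named in this abstract can be illustrated with a minimal discrete sketch: the hysteretic output is modeled as a weighted sum of two-state relay hysterons on a triangular threshold grid. The grid resolution, the uniform weights, and the strain ramp below are illustrative assumptions, not the thesis's calibrated resistance-strain model.

```python
# Minimal discrete Preisach hysteresis operator: a weighted sum of
# two-state relay hysterons on a triangular (alpha >= beta) grid.
# Thresholds and uniform weights are illustrative placeholders.

class PreisachModel:
    def __init__(self, levels, lo=0.0, hi=1.0):
        step = (hi - lo) / levels
        # (beta, alpha) switching thresholds with alpha > beta
        self.relays = [(lo + i * step, lo + j * step)
                       for j in range(1, levels + 1)
                       for i in range(j)]
        self.state = [-1.0] * len(self.relays)   # all relays start "down"
        self.weight = 1.0 / len(self.relays)     # uniform Preisach density

    def apply(self, u):
        """Feed one input sample; return the hysteretic output."""
        for k, (beta, alpha) in enumerate(self.relays):
            if u >= alpha:
                self.state[k] = 1.0
            elif u <= beta:
                self.state[k] = -1.0
            # between beta and alpha the relay keeps its previous state
        return self.weight * sum(self.state)

model = PreisachModel(levels=20)
up = [model.apply(u / 100) for u in range(0, 101, 5)]      # loading ramp
down = [model.apply(u / 100) for u in range(100, -1, -5)]  # unloading ramp
# Hysteresis: at the same intermediate input the two branches differ.
```

Because the relays remember their last switching, the loading and unloading branches give different outputs for the same input, which is exactly the effect a compensator must invert before the sensor can be calibrated.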
102

Implementations of Fuzzy Adaptive Dynamic Programming Controls on DC to DC Converters

Chotikorn, Nattapong 05 1900 (has links)
DC to DC converters stabilize the voltage obtained from sources such as solar power systems, wind energy sources, wave energy sources, rectified voltage from alternators, and so forth. Hence, the need to improve their control algorithms is inevitable. Many algorithms have been applied to DC to DC converters. This thesis designs a fuzzy adaptive dynamic programming (Fuzzy ADP) algorithm, and implements both adaptive dynamic programming (ADP) and Fuzzy ADP on DC to DC converters to observe the performance of the output voltage trajectories.
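As a rough illustration of the fuzzy half of such a controller (the ADP part is not shown), the sketch below maps a converter's output-voltage error to a duty-cycle correction using triangular membership functions. The fuzzy sets, the correction gains, and the sign convention (error = measured minus reference voltage) are all invented for illustration.

```python
# Minimal fuzzy inference step for a DC-DC converter: three triangular
# fuzzy sets over the voltage error drive singleton duty-cycle
# corrections, defuzzified by a weighted average (centroid of singletons).

def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def duty_correction(error):
    # rule strengths for "negative", "zero", "positive" voltage error,
    # where error = measured voltage minus reference voltage
    neg = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos = tri(error, 0.0, 1.0, 2.0)
    # undervoltage -> raise the duty cycle; overvoltage -> lower it
    num = neg * 0.05 + zero * 0.0 + pos * -0.05
    den = neg + zero + pos
    return num / den if den else 0.0
```

A real Fuzzy ADP controller would adapt these rule consequents online from a cost function; the static table above only shows the inference mechanics.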
103

Three-dimensional hydrodynamic models coupled with GIS-based neuro-fuzzy classification for assessing environmental vulnerability of marine cage aquaculture

Navas, Juan Moreno January 2010 (has links)
There is considerable opportunity to develop new modelling techniques within a Geographic Information Systems (GIS) framework for the development of sustainable marine cage culture. However, the spatial data sets are often uncertain and incomplete, so new spatial models employing "soft computing" methods such as fuzzy logic may be more suitable. The aim of this study is to develop a model using Neuro-fuzzy techniques in a 3D GIS (ArcView 3.2) to predict coastal environmental vulnerability for Atlantic salmon cage aquaculture. A 3D hydrodynamic model (3DMOHID) coupled to a particle-tracking model is applied to study the circulation patterns, dispersion processes, and residence time in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system): an area of restricted exchange, geometrically complicated, with important aquaculture activities. The hydrodynamic model was calibrated and validated by comparison with sea surface and water flow measurements. The model provided spatial and temporal information on circulation and renewal time, helping to determine the influence of winds on circulation patterns and, in particular, supporting the assessment of the hydrographic conditions that strongly influence the management of fish cage culture. The particle-tracking model was used to study transport and flushing processes. Instantaneous massive releases of particles from key boxes are modelled to analyse the ocean-fjord exchange characteristics and, by emulating discharge from finfish cages, to show the behaviour of waste in terms of water circulation and water exchange. In this study the results from the hydrodynamic model have been incorporated into GIS to provide an easy-to-use graphical user interface for 2D (maps), 3D, and temporal visualization (animations), and for interrogation of results.
Data on the physical environment and aquaculture suitability were derived from the 3-dimensional hydrodynamic model and GIS for incorporation into the final model framework, and included mean and maximum current velocities, current flow quiescence time, water column stratification, sediment granulometry, particulate waste dispersion distance, oxygen depletion, water depth, coastal protection zones, and slope. The Neuro-fuzzy classification model NEFCLASS-J was used to develop learning algorithms to create the structure (rule base) and the parameters (fuzzy sets) of a fuzzy classifier from a set of classified training data. A total of 42 training sites were sampled using stratified random sampling from the GIS raster data layers, and the vulnerability categories for each were manually classified into four categories based on the opinions of experts with field experience and specific knowledge of the environmental problems investigated. The final products, GIS-based Neuro-fuzzy maps, were achieved by combining modelled and real environmental parameters relevant to marine finfish aquaculture. Environmental vulnerability models based on Neuro-fuzzy techniques showed sensitivity to the membership shapes of the fuzzy sets, the nature of the weightings applied to the model rules, and the validation techniques used during the learning and validation process. The accuracy of the final classifier selected was R = 85.71% (estimated error of ±16.5% from cross-validation, N = 10), with a Kappa coefficient of agreement of 81%. Unclassified cells in the whole spatial domain (of 1623 GIS cells) ranged from 0% to 24.18%. A statistical comparison between vulnerability scores and a significant product of aquaculture waste (nitrogen concentrations in sediment under the salmon cages) showed that the final model gave a good correlation between predicted environmental vulnerability and sediment nitrogen levels, highlighting a number of areas with variable sensitivity to aquaculture.
Further evaluation and analysis of the quality of the classification was carried out, and the applicability of separability indexes was studied. The inter-class separability estimations were performed on two different training data sets to assess the difficulty of the class separation problem under investigation. The Neuro-fuzzy classifier for supervised, hard classification of coastal environmental vulnerability demonstrated an ability to derive an accurate and reliable classification into areas of different levels of environmental vulnerability using a minimal number of training sets. The output is an environmental spatial model for application in coastal areas, intended to facilitate policy decisions and to allow input into wider-ranging spatial modelling projects, such as coastal zone management systems and effective environmental management of fish cage aquaculture.
104

Processamento Inteligente de Sinais de Pressão e Temperatura Adquiridos Através de Sensores Permanentes em Poços de Petróleo

Pires, Paulo Roberto da Motta 06 February 2012 (has links)
Made available in DSpace on 2014-12-17T14:08:50Z (GMT). No. of bitstreams: 1 PauloRMP_capa_ate_pag32.pdf: 5057325 bytes, checksum: bf8da0b02ad06ee116c93344fb67e976 (MD5) Previous issue date: 2012-02-06 / Originally aimed at operational objectives, the continuous measurement of well bottomhole pressure and temperature, recorded by permanent downhole gauges (PDGs), finds vast applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, data from PDGs are characterized by a large noise content; the presence of outliers among valid measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps, and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The results obtained were considered quite satisfactory for offshore wells and met real utilization requirements. / Originalmente voltadas ao monitoramento da operação, as medições contínuas de pressão e temperatura no fundo de poço, realizadas através de PDGs (do inglês, Permanent Downhole Gauges), encontram vasta aplicabilidade no gerenciamento de reservatórios. Para tanto, permitem o monitoramento do desempenho de poços e a estimativa de parâmetros de reservatórios no longo prazo. Contudo, a despeito de sua inquestionável utilidade, os dados adquiridos de PDG apresentam grande conteúdo de ruído. Outro aspecto igualmente desfavorável reside na ocorrência de valores espúrios (outliers) imersos entre as medidas registradas pelo PDG. O presente trabalho aborda o tratamento inicial de sinais de pressão e temperatura, mediante técnicas de suavização, mapas auto-organizáveis e transformada wavelet discreta.
Ademais, propõe-se um sistema de detecção de transientes relevantes para análise no longo histórico de registros, baseado no acoplamento entre clusterização fuzzy e redes neurais feed-forward. Os resultados alcançados mostraram-se de todo satisfatórios para poços marinhos, atendendo a requisitos reais de utilização dos sinais registrados por PDGs.
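The discrete-wavelet step described in this record can be sketched with a single-level Haar transform plus soft thresholding of the detail coefficients, a common way to suppress noise and spikes while keeping the trend; the toy "pressure" record and threshold value below are illustrative, not PDG data.

```python
# One level of a Haar discrete wavelet transform with soft thresholding
# of the detail coefficients, a minimal stand-in for wavelet-based
# denoising of a noisy sensor record (even-length input assumed).
import math

def haar_dwt(x):
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft_threshold(coeffs, t):
    # shrink each coefficient toward zero by t
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_dwt(signal)
    return haar_idwt(approx, soft_threshold(detail, t))

noisy = [10.0, 10.4, 9.8, 10.2, 30.0, 10.1, 9.9, 10.3]  # spike = outlier
clean = denoise(noisy)
```

With the threshold set to zero the transform reconstructs the input exactly; with a positive threshold the spike is attenuated while the baseline is preserved, which is the behaviour wanted for outlier-laden gauge records.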
105

Software quality studies using analytical metric analysis

Rodríguez Martínez, Cecilia January 2013 (has links)
Today engineering companies expend a large amount of resources on the detection and correction of bugs (defects) in their software. These bugs are usually due to errors and mistakes made by programmers while writing the code or the specifications. No tool is able to detect all of these bugs, and some remain undetected despite testing of the code. For these reasons, many researchers have tried to find indicators in a program's source code that can be used to predict the presence of bugs. Every bug in the source code is a potential failure of the program to perform as expected. Therefore, programs are tested with many different cases in an attempt to cover all possible paths through the program and detect all of these bugs. Early prediction of bugs informs programmers about the likely location of bugs in the code, so they can test the more error-prone files more carefully and save time by not testing error-free files. This thesis project created a tool that is able to predict error-prone source code written in C++. To achieve this, we utilized one predictor that has been extremely well studied: software metrics. Many studies have demonstrated that there is a relationship between software metrics and the presence of bugs. In this project a Neuro-Fuzzy hybrid model based on fuzzy c-means and a radial basis function neural network has been used. The efficiency of the model has been tested on a software project at Ericsson. Testing showed that the program does not achieve high accuracy, due to the lack of independent samples in the data set; however, the experiments did show that classification models provide better predictions than regression models. The thesis concludes by suggesting future work that could improve the performance of this program. / Idag spenderar ingenjörsföretag en stor mängd resurser på att upptäcka och korrigera buggar (fel) i sin mjukvara.
Det är oftast programmerare som inför dessa buggar på grund av fel och misstag som uppkommer när de skriver koden eller specifikationerna. Inget verktyg kan detektera alla dessa buggar. Några av buggarna förblir oupptäckta trots testning av koden. Av dessa skäl har många forskare försökt hitta indikatorer i programvarans källkod som kan användas för att förutsäga förekomsten av buggar. Varje fel i källkoden är ett potentiellt misslyckande som gör att applikationen inte fungerar som förväntat. För att hitta buggarna testas koden med många olika testfall för att försöka täcka alla möjliga kombinationer och fall. Förutsägelse av buggar informerar programmerarna om var i koden buggarna finns. Således kan programmerarna mer noggrant testa felbenägna filer och därmed spara mycket tid genom att inte behöva testa felfria filer. Detta examensarbete har skapat ett verktyg som kan förutsäga felbenägen källkod skriven i C++. För att uppnå detta har vi utnyttjat en välkänd metod som heter Software Metrics. Många studier har visat att det finns ett samband mellan Software Metrics och förekomsten av buggar. I detta projekt har en Neuro-Fuzzy-hybridmodell baserad på Fuzzy c-means och Radial Basis Neural Network använts. Effektiviteten av modellen har testats i ett mjukvaruprojekt på Ericsson. Testning av denna modell visade att programmet inte uppnår hög noggrannhet på grund av bristen på oberoende urval i datauppsättningen. Men gjorda experiment visade att klassificeringsmodeller ger bättre förutsägelser än regressionsmodeller. Exjobbet avslutas med förslag på framtida arbete som skulle kunna förbättra detta program. / Actualmente las empresas de ingeniería destinan una gran cantidad de recursos a la detección y corrección de errores en su software. Estos errores se deben generalmente a los fallos cometidos por los desarrolladores cuando escriben el código o sus especificaciones.
No hay ninguna herramienta capaz de detectar todos estos errores y algunos de ellos pasan desapercibidos tras el proceso de pruebas. Por esta razón, numerosas investigaciones han intentado encontrar indicadores en los códigos fuente del software que puedan ser utilizados para detectar la presencia de errores. Cada error en un código fuente es un error potencial en el funcionamiento del programa; por ello los programas son sometidos a exhaustivas pruebas que cubren (o intentan cubrir) todos los posibles caminos del programa para detectar todos sus errores. La temprana localización de errores informa a los programadores dedicados a la realización de estas pruebas sobre la ubicación de estos errores en el código. Así, los programadores pueden probar con más cuidado los archivos más propensos a tener errores, dejando a un lado los archivos libres de error. En este proyecto se ha creado una herramienta capaz de predecir código software propenso a errores escrito en C++. Para ello se ha utilizado un indicador que ha sido cuidadosamente estudiado y ha demostrado su relación con la presencia de errores: las métricas del software. En este proyecto se ha utilizado un modelo híbrido neuro-difuso basado en Fuzzy c-means y en redes neuronales de función de base radial. La eficacia de este modelo ha sido probada en un proyecto software de Ericsson. Como resultado se ha comprobado que el modelo no alcanza una alta precisión debido a la falta de muestras independientes en el conjunto de datos, y los experimentos han mostrado que los modelos de clasificación proporcionan mejores predicciones que los modelos de regresión. El proyecto concluye sugiriendo trabajo futuro que mejoraría el funcionamiento del programa.
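The fuzzy c-means half of the hybrid model described in this record can be sketched in a few lines. The one-dimensional toy "metric" values, the number of clusters, and the fuzzifier m = 2 are illustrative assumptions; the thesis clusters multi-dimensional software metrics.

```python
# Bare-bones fuzzy c-means in one dimension: alternate between
# computing fuzzy memberships and updating cluster centers as
# membership-weighted means.

def fuzzy_c_means(data, c=2, m=2.0, iters=100):
    centers = data[:c]  # naive initialization from the first points
    for _ in range(iters):
        # membership of each point in each cluster (rows sum to 1)
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # update centers as membership-weighted means
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data)))
                   / sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, u

metrics = [1.0, 1.2, 0.9, 8.0, 8.5, 7.9]   # e.g. a complexity metric
centers, memberships = fuzzy_c_means(metrics)
```

In the hybrid model, the resulting cluster centers would typically seed the centers of the radial basis functions in the subsequent network; that stage is not shown here.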
106

Evaluation of seasonal impacts on nitrifiers and nitrification performance of a full-scale activated sludge system

Awolusi, Oluyemi Olatunji January 2016 (has links)
Submitted in complete fulfillment for the degree of Doctor of Philosophy (Biotechnology), Durban University of Technology, Durban, South Africa, 2016. / Seasonal nitrification breakdown is a major problem in wastewater treatment plants and makes it difficult for plant operators to meet discharge limits. The present study focused on understanding the seasonal impact of environmental and operational parameters on nitrifiers and nitrification in a biological nutrient removal wastewater treatment works situated in the midlands of KwaZulu-Natal. Composite sludge samples (from the aeration tank) and influent and effluent water samples were collected twice a month for 237 days. A combination of fluorescent in-situ hybridization, polymerase chain reaction (PCR) clone libraries, and quantitative PCR (qPCR) was employed for characterizing and quantifying the dominant nitrifiers in the plant. To gain more insight into the activated sludge community structure, pyrosequencing was used to profile the amoA locus of the ammonia oxidizing bacteria (AOB) community, whilst Illumina sequencing was used to characterize the plant's total bacterial community. The nonlinear effect of operating parameters and environmental conditions on nitrification was also investigated using an adaptive neuro-fuzzy inference system (ANFIS), Pearson's correlation coefficient, and quadratic models. The plant operated with a higher MLSS of 6157±783 mg/L during the first phase (winter), whereas it was 4728±1282 mg/L in summer. The temperature recorded in the aeration tanks ranged from 14.2°C to 25.1°C during the period. Average ammonia removal was 60.0±18% during winter and 83±13% during summer, and removal was found to correlate with temperature (r = 0.7671; P = 0.0008). A significant correlation was also found between the AOB (amoA gene) copy numbers and temperature in the reactors (α = 0.05; P = 0.05), with the lowest AOB abundance recorded during winter.
Sanger sequencing analysis indicated that the dominant nitrifiers were Nitrosomonas spp., Nitrobacter spp., and Nitrospira spp. Pyrosequencing revealed significant differences in the AOB population, which was 6 times larger during summer than during winter. The AOB sequences related to uncultured bacterium and uncultured AOB also showed increases of 133% and 360%, respectively, when the season changed from winter to summer. This study suggests that a vast population of novel, ecologically significant AOB species, which remain unexploited, still inhabits complex activated sludge communities. Based on the ANFIS model, AOB increased during the summer season, when temperature was 1.4-fold higher than in winter (r = 0.517, p = 0.048), and HRT decreased by 31% as a result of rainfall (r = -0.741, p = 0.002). Food-to-microorganism ratio (F/M) and HRT formed the optimal combination of two inputs affecting the plant's specific nitrification (qN), and their quadratic equation showed an r² value of 0.50. This study has significantly contributed towards understanding the complex relationship between microbial population dynamics, wastewater composition, and nitrification performance in a full-scale treatment plant situated in the subtropical region. This is the first study applying the ANFIS technique to describe the nitrification performance of a full-scale WWTP subjected to dynamic operational parameters. The study also demonstrated the successful application of ANFIS for determining and ranking the impact of various operating parameters on the plant's nitrification performance, which could not be achieved by conventional Spearman correlation due to the non-linearity of the interactions during wastewater treatment. Moreover, this study also represents the first time amoA gene-targeted pyrosequencing of AOB has been performed on full-scale activated sludge. / D
107

Return on Investment of the CFTP Framework With and Without Risk Assessment

Lee, Anne Lim 01 January 2017 (has links)
In recent years, numerous high tech companies have developed and used technology roadmaps when making their investment decisions. Jay Paap has proposed the Customer Focused Technology Planning (CFTP) framework to draw future technology roadmaps. However, the CFTP framework does not include risk assessment as a critical factor in decision making. The problem addressed in this quantitative study was that high tech companies are either losing money or getting a much smaller than expected return on investment when making technology investment decisions. The purpose of this research was to determine the relationship between returns on investment before and after adding risk assessment to the CFTP framework. Paap's CFTP framework and process to improve technology investments thus served as the theoretical framework for this study. Data were obtained from cloud computing companies using the companies' market risk data and actual returns on investment data. The results and findings of paired sample two-tailed t tests for means and equal variances showed that return on investment was positively related to adding a traditional risk assessment model to Paap's CFTP framework. These findings regarding the addition of risk assessment to the technology investment framework may be used by investors to (a) make better and more expeditious decisions, and (b) obtain a high return on technology investment by selecting the highest return value and lowest risk value.
108

Modeling and Diagnosis of Excimer Laser Ablation

Setia, Ronald 23 November 2005 (has links)
Recent advances in the miniaturization, functionality, and integration of integrated circuits and packages, such as the system-on-package (SOP) methodology, require increasing use of microvias, which provide vertical signal paths in a high-density multilayer substrate. A scanning projection excimer laser system has been utilized to fabricate the microvias. In this thesis, a novel technique implementing statistical experimental design and neural networks (NNs) is used to characterize and model the excimer laser ablation process for microvia formation. Vias with diameters from 10 to 50 micrometers have been ablated in DuPont Kapton® E polyimide using an Anvik HexScan™ 2150 SXE pulsed excimer laser operating at 308 nm. Accurate NN models, developed from experimental data, are obtained for microvia responses, including ablated thickness, via diameter, wall angle, and resistance. Subsequent to modeling, NNs and genetic algorithms (GAs) are utilized to generate optimal process recipes for the laser tool. Such recipes can be used to produce desired microvia responses, including open vias, specific diameters, steep wall angles, and low resistance. With the continuing advancement in the use of excimer laser systems in microsystems packaging has come an increasing need to offset capital equipment investment and lower equipment downtime. In this thesis, an automated in-line failure diagnosis system using NNs and Dempster-Shafer (D-S) theory is implemented. For the sake of comparison, an adaptive neuro-fuzzy approach is applied to achieve the same objective. Both D-S theory and neuro-fuzzy logic are used to develop an automated inference system to specifically identify failures. Successful results in failure detection and diagnosis are obtained from both approaches. The results of this investigation will benefit both engineering and management. Engineers will benefit from high yield, reliable production, and low equipment downtime.
Business people, on the other hand, will benefit from cost-savings resulting from more production-worthy (i.e., lower maintenance) laser ablation equipment.
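Dempster's rule of combination, the core of the D-S diagnosis approach mentioned above, can be sketched as follows. The two "sensors", the failure hypotheses, and the mass values are invented for illustration; the sketch assumes the sources are not totally conflicting.

```python
# Dempster's rule of combination for two independent evidence sources
# over the same frame of discernment. Mass functions are dicts mapping
# frozenset hypotheses to belief mass (masses sum to 1 per source).

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    k = 1.0 - conflict               # normalization constant
    return {h: v / k for h, v in combined.items()}

OPTICS, STAGE = frozenset({"optics"}), frozenset({"stage"})
EITHER = OPTICS | STAGE
sensor1 = {OPTICS: 0.6, EITHER: 0.4}              # leans toward optics fault
sensor2 = {OPTICS: 0.5, STAGE: 0.3, EITHER: 0.2}
belief = combine(sensor1, sensor2)
```

Combining the two sources sharpens the evidence: mass on the ambiguous set shrinks and the optics-fault hypothesis dominates, which is how such a system narrows a failure down to a specific cause.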
109

Fan And Pitch Angle Selection For Efficient Mine Ventilation Using Analytical Hierarchy Process And Neuro Fuzzy Approach

Taghizadeh Vahed, Amir 01 May 2012 (has links) (PDF)
Ventilation is a critical task in underground mining operations. Lack of a good ventilation system causes accumulation of harmful gases, explosions, and even fatalities. A proper ventilation system provides adequate fresh air to miners for a safe and comfortable working environment. Fans, which provide air flow to the different faces of a mine, have great impact on ventilation systems; thus, selection of appropriate fans for a mine is a critical task. Unsuitable selection of a fan decreases safety and production rate, and increases capital and operational costs. Moreover, the pitch angle of fans' blades plays an important role in fan efficiency. Therefore, selection of a fan and its pitch angle that together yield maximum efficiency is a key issue for efficient mine ventilation. The main objective of this research is to provide a decision-making methodology for the selection of a main fan and its appropriate pitch angle. Analytical hierarchy process, a multi-criteria decision-making method, yields outputs based on pairwise comparison. Fuzzy Logic, as a soft computing method, was combined with analytical hierarchy process, but the combined model did not yield appropriate results, because Fuzzy AHP increased the uncertainty ratio in this study; fuzzy analytical hierarchy process may be inapplicable when faced with vague and complex data sets. Soft computing methods can be utilized for such complicated situations; one of them is the Neuro-Fuzzy algorithm, used in classification and decision-making problems. This study has two phases: i) selection of an appropriate fan using Analytical Hierarchy Process (AHP) and Fuzzy Analytical Hierarchy Process (Fuzzy AHP), and ii) selection of an appropriate pitch angle using a Neuro-Fuzzy algorithm and the Fuzzy AHP method. This study showed that AHP can be effectively utilized for main fan selection.
It performs better than Fuzzy AHP because FAHP requires more expert input and makes the evaluation problem more complex. When FAHP and Neuro-Fuzzy were compared for pitch angle selection, both methodologies yielded the same results. Therefore, Neuro-Fuzzy is applicable in situations with complicated and vague data.
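The AHP step can be sketched with the common geometric-mean approximation of the priority vector from a pairwise-comparison matrix. The three fan-selection criteria and the pairwise judgments below are invented for illustration, not the study's actual criteria.

```python
# AHP priority weights via the geometric-mean approximation:
# take the geometric mean of each row of the pairwise-comparison
# matrix, then normalize to obtain the priority vector.
import math

criteria = ["pressure head", "efficiency", "cost"]
# pairwise[i][j]: how much more important criterion i is than j
# (reciprocal matrix on Saaty's 1-9 scale; judgments are invented)
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

geo = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo)
weights = [g / total for g in geo]  # normalized priority vector
ranking = sorted(zip(criteria, weights), key=lambda t: -t[1])
```

A full AHP study would also compute the consistency ratio of the judgment matrix before trusting the weights; that check is omitted here for brevity.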
110

Previsão de carga de curto prazo usando ensembles de previsores selecionados e evoluídos por algoritmos genéticos / Short-term load forecasting using ensembles of predictors selected and evolved by genetic algorithms

Leone Filho, Marcos de Almeida 31 January 2006 (has links)
Orientador: Takaaki Ohishi / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-08T10:06:35Z (GMT). No. of bitstreams: 1 LeoneFilho_MarcosdeAlmeida_M.pdf: 1557959 bytes, checksum: 92dc63d4e3140cc61ba7900961c0e9fb (MD5) Previous issue date: 2006 / Resumo: Neste trabalho é proposta uma metodologia para previsão de séries temporais de carga de energia elétrica de curto prazo. Esta metodologia vem sendo muito utilizada no contexto da previsão de séries temporais e do reconhecimento de padrões. Os autores que a propuseram a chamaram de "Ensembles". Este nome tenta explicar o que é este modelo: uma combinação de partes que juntas formam um só modelo. Neste sentido, este nome expressa com relativa clareza qual é o principal aspecto desta metodologia, que, no caso específico deste trabalho, é o de fazer várias previsões de uma mesma série temporal utilizando diferentes ferramentas que sozinhas são suficientemente competentes para prever a série temporal em questão, e em seguida combinar as soluções para tentar obter uma solução melhor do que quando é usada somente uma ferramenta. As ferramentas usadas para compor a previsão dos "Ensembles" finais são Redes Neurais Artificiais (RNAs) e Redes Neurais Nebulosas. Atualmente, estas redes são largamente utilizadas em problemas de previsão de séries temporais, principalmente quando o fator gerador destas séries é um sistema não-linear. Desta forma, isto as tornou candidatas potenciais para prever valores de uma série de cargas de energia elétrica, pois este tipo de série tem características essencialmente não-lineares. Sendo assim, foram utilizados quatro tipos de redes: RNAs MLPs, RNAs Recorrentes, RNAs de Base Radial e Redes Neurais Nebulosas tipo ANFIS.
Com os modelos básicos de redes, foram utilizados Algoritmos Genéticos para evoluir os parâmetros destas redes e, assim, chegar a uma população de redes suficientemente competentes para fazer as previsões da série de cargas. Na etapa seguinte, com os resultados das previsões da população de redes evoluídas, foi feita a seleção dos melhores agrupamentos destas redes e, como este processo requer a avaliação de diferentes configurações de modelos, esta seleção é baseada em Algoritmos Genéticos. Os resultados obtidos ao se utilizar "ensembles" mostraram que este modelo foi capaz de alcançar uma grande robustez na previsão, reduzindo os erros de previsão, suavizando os resultados e deixando o modelo menos suscetível a grandes erros quando surgem "outliers" no conjunto de dados / Abstract: This work proposes a methodology for short-term electric power load forecasting. This methodology is being widely used in the context of time series prediction and pattern recognition. It was named "ensembles" by the authors who developed it. This name carries the meaning of an assemblage of parts considered as forming a whole, and it expresses rather clearly the main characteristic of the methodology, which in this study is to make several predictions of the same time series using different tools, each of which alone is sufficiently competent to predict the time series in question, and then to combine the predictions in order to achieve a better one than is obtained with a single predictor. The tools implemented to form the final "ensembles" prediction are Artificial Neural Networks (ANNs) and Neuro-fuzzy Networks. Nowadays, these networks are widely used in time series prediction problems, mainly when the factor that generates the series is a non-linear system.
Hence, they are potential candidates to predict future values of an electric power load series, because such series have essentially non-linear characteristics. As a result, four types of networks were utilized in this work: MLP ANNs, Recurrent ANNs, Radial Basis ANNs, and ANFIS-type Neuro-fuzzy networks. With these basic network models, Genetic Algorithms were applied to evolve the parameters of the networks and, as a consequence, to build a population of networks sufficiently capable of predicting future values of the load time series. In the next step, with the results obtained from the evolved population of networks, a selection of the most suitable combinations of the individual networks was made and, since this process implies the evaluation of multiple different combinations of models, the selection was based on Genetic Algorithms. These selected networks were then combined. The results when using "ensembles" revealed that this model was able to reach great robustness in prediction tasks: it was possible to reduce the level of prediction error, to smooth the resulting predictions, and to make the model more stable, reducing the possibility of high errors when the data set contains "outliers". / Mestrado / Energia Elétrica / Mestre em Engenharia Elétrica
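The ensemble idea described in this record, several individually competent predictors whose combined forecast beats any single one, with the member subset chosen by search, can be sketched as follows. The toy "predictors" and the exhaustive subset search (a tiny stand-in for the genetic-algorithm selection) are illustrative assumptions.

```python
# Combine a pool of predictors by averaging their forecasts, and pick
# the member subset that minimizes error on a validation sample via
# exhaustive search (a stand-in for GA-based subset selection).
from itertools import combinations

def make_pool():
    # each "predictor" maps an input to a forecast; biases are invented
    return [lambda x: x + 1.0, lambda x: x - 1.0,
            lambda x: x + 0.1, lambda x: 1.2 * x]

def ensemble_error(members, samples):
    """Mean absolute error of the averaged (ensemble) forecast."""
    err = 0.0
    for x, target in samples:
        pred = sum(f(x) for f in members) / len(members)
        err += abs(pred - target)
    return err / len(samples)

pool = make_pool()
samples = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]  # ideal predictor: identity
best = min((c for r in range(1, len(pool) + 1)
            for c in combinations(pool, r)),
           key=lambda c: ensemble_error(c, samples))
```

Here the two oppositely biased predictors cancel each other when averaged, so their pair beats every single member, which is the robustness effect the abstract attributes to ensembles.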
