  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

[en] REGISTRATION OF 3D SEISMIC TO WELL DATA / [pt] REGISTRO DE SÍSMICA 3D A DADOS DE POÇOS

RODRIGO COSTA FERNANDES 08 March 2010 (has links)
[pt] A confiabilidade dos dados coletados diretamente ao longo do caminho de poços de petróleo é maior que a confiabilidade de dados sísmicos e, por isto, os primeiros podem ser utilizados para ajustar o volume de aquisição sísmica. Este trabalho propõe um ajuste dos volumes de amplitudes sísmicas através de um algoritmo de três passos. O primeiro passo é a identificação de feições comuns através de um algoritmo de reconhecimento de padrões. O segundo passo consiste em gerar e otimizar uma malha alinhada às feições de interesse do dado sísmico volumétrico através de um novo algoritmo baseado em processamento de imagens e inteligência computacional. E o terceiro e último passo é a realização de uma deformação volumétrica ponto-a-ponto usando interpolação por funções de base radial para registrar o volume sísmico aos poços. A dissertação apresenta ainda resultados de implementações 2D e 3D dos algoritmos propostos de forma a permitir algumas conclusões e sugestões para trabalhos futuros. / [en] Data acquired directly along a borehole are more reliable than seismic data, so the former can be used to adjust the latter. This work proposes the correction of a volume of seismic amplitudes through a three-step algorithm. The first step is the identification of common features in both sets using a pattern recognition algorithm. The second step consists of generating and optimizing a mesh aligned with the features in the volumetric data using a new algorithm based on image processing and computational intelligence. The last step is the seismic-to-well registration itself, a point-to-point volumetric deformation achieved by radial basis function interpolation. The dissertation also presents results from 2D and 3D implementations, allowing conclusions and suggestions for future work.
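The third step above — a point-to-point deformation driven by radial basis function interpolation — can be sketched in miniature. The following is a generic 1D illustration with invented control points; `rbf_deform`, the Gaussian kernel, and `eps` are assumptions for the sketch, not the author's implementation:

```python
import numpy as np

def rbf_deform(control_src, control_dst, points, eps=1.0):
    """Warp `points` so each control source lands exactly on its target,
    interpolating the displacement elsewhere with Gaussian RBFs."""
    src = np.asarray(control_src, float)
    dst = np.asarray(control_dst, float)
    pts = np.asarray(points, float)
    # Solve for RBF weights from the pairwise control-point distances
    phi = np.exp(-(eps * np.abs(src[:, None] - src[None, :])) ** 2)
    weights = np.linalg.solve(phi, dst - src)
    # Evaluate the interpolated displacement field at the query points
    phi_q = np.exp(-(eps * np.abs(pts[:, None] - src[None, :])) ** 2)
    return pts + phi_q @ weights
```

At the control points the warp is exact (each well marker lands on its target); elsewhere the displacement is interpolated smoothly, which is what makes RBF interpolation attractive for this kind of registration.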
92

Multiple sequence alignment using particle swarm optimization

Zablocki, Fabien Bernard Roman 16 January 2009 (has links)
The recent advent of bioinformatics has given rise to the central and recurrent problem of optimally aligning biological sequences. Many techniques have been proposed in an attempt to solve this complex problem, with varying degrees of success. This thesis investigates the application of a computational intelligence technique known as particle swarm optimization (PSO) to the multiple sequence alignment (MSA) problem. Firstly, the performance and characteristics of the standard PSO (S-PSO) are fully analyzed. Secondly, a scalability study is conducted that aims at expanding the S-PSO's application to complex MSAs, as well as studying the behaviour of three other kinds of PSOs on the same problems. Experimental results show that the PSO is efficient in solving the MSA problem and compares favourably with the well-known CLUSTAL X and T-COFFEE tools. / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / Unrestricted
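The standard (global-best) PSO that such studies build on can be sketched on a continuous test function. The MSA work itself uses a problem-specific alignment encoding, so this is only the generic velocity/position update rule, with parameter values chosen for illustration:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: each particle is pulled toward its personal best
    and the swarm-wide best position found so far."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

For an alignment problem, `x` would encode gap positions rather than real-valued coordinates; the update rule and personal/global-best bookkeeping stay the same.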
93

Using SetPSO to determine RNA secondary structure

Neethling, Charles Marais 16 February 2009 (has links)
RNA secondary structure prediction is an important field in Bioinformatics. A number of different approaches have been developed to simplify the determination of RNA molecule structures. RNA is a nucleic acid found in living organisms which fulfils a number of important roles in living cells. Knowledge of its structure is crucial in the understanding of its function. Determining RNA secondary structure computationally, rather than by physical means, has the advantage of being a quicker and cheaper method. This dissertation introduces a new Set-based Particle Swarm Optimisation algorithm, SetPSO, to optimise the structure of an RNA molecule using an advanced thermodynamic model. Structure prediction is modelled as an energy minimisation problem. Particle swarm optimisation is a simple but effective stochastic optimisation technique developed by Kennedy and Eberhart. This simple technique was adapted to work with variable-length particles which consist of a set of elements rather than a vector of real numbers. The effectiveness of this structure prediction approach was compared to that of a dynamic programming algorithm called mfold. It was found that SetPSO can be used as a combinatorial optimisation technique which can be applied to the problem of RNA secondary structure prediction. This research also included an investigation into the behaviour of the new SetPSO optimisation algorithm. Further study needs to be conducted to evaluate the performance of SetPSO on different combinatorial and set-based optimisation problems. / Dissertation (MS)--University of Pretoria, 2009. / Computer Science / unrestricted
94

Towards Fault Reactiveness in Wireless Sensor Networks with Mobile Carrier Robots

Falcon Martinez, Rafael Jesus January 2012 (has links)
Wireless sensor networks (WSN) increasingly permeate modern societies. Yet in spite of their plethora of successful applications, WSN are often unable to surmount many operational challenges that unexpectedly arise during their lifetime. Fortunately, robotic agents can now assist a WSN in various ways. This thesis illustrates how mobile robots that can carry a limited number of sensors can help the network react to sensor faults, either during or after its deployment in the monitoring region. Two scenarios are envisioned. In the first, carrier robots surround a point of interest with multiple sensor layers (focused coverage formation). We put forward the first known algorithm of its kind in the literature. It is energy-efficient, fault-reactive and aware of the bounded robot cargo capacity. The second is that of replacing damaged sensing units with spare, functional ones (coverage repair), which gives rise to the formulation of two novel combinatorial optimization problems. Three nature-inspired metaheuristic approaches that run at a centralized location are proposed; they are able to find good-quality solutions in a short time. Two frameworks for the identification of the damaged nodes are considered. The first leans upon diagnosable systems, i.e. existing distributed detection models in which individual units perform tests upon each other. Two swarm intelligence algorithms are designed to quickly and reliably spot faulty sensors in this context. The second is an evolving risk management framework for WSNs that is formulated in its entirety in this thesis.
95

Empirical evaluation of optimization techniques for classification and prediction tasks

Leke, Collins Achepsah 27 March 2014 (has links)
M.Ing. (Electrical and Electronic Engineering) / Missing data is an issue which leads to a variety of problems in the analysis and processing of datasets in almost every aspect of day-to-day life. For this reason, missing data and ways of handling it have been an area of research in a variety of disciplines in recent times. This thesis presents a method aimed at finding approximations to missing values in a dataset by making use of Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Random Forest (RF), and Negative Selection (NS) algorithms in combination with auto-associative neural networks, and also provides a comparative analysis of these algorithms. The methods suggested use the optimization algorithms to minimize an error function derived from training an auto-associative neural network, during which the interrelationships between the inputs and the outputs are obtained and stored in the weights connecting the different layers of the network. The error function is expressed as the square of the difference between the actual observations and the values predicted by the auto-associative neural network. In the event of missing data, not all of the actual observations are known; hence the error function is decomposed into terms that depend on the known and unknown variable values. A Multi-Layer Perceptron (MLP) neural network is employed, trained with the Scaled Conjugate Gradient (SCG) method. The research primarily focuses on predicting missing data entries from two datasets, the Manufacturing dataset and the Forest Fire dataset. Prediction is a representation of how things will occur in the future based on past occurrences and experiences.
The research also investigates the use of the proposed technique in approximating and classifying missing data with high accuracy on five classification datasets: the Australian Credit, German Credit, Japanese Credit, Heart Disease and Car Evaluation datasets. It further investigates the impact of different neural network architectures on training and on the approximations found for the missing values, using the best-performing architecture for evaluation purposes. This research reveals that the values approximated by the proposed models are accurate, with the correlation between the actual missing values and the corresponding approximations on the Manufacturing dataset ranging between 94.7% and 95.2%, with the exception of the Negative Selection algorithm, which resulted in a correlation coefficient of 49.6%. On the Forest Fire dataset, a low correlation between the actual missing values and the corresponding approximations was observed, in the range 0.95% to 4.49%, due to the nature of the values of the variables in the dataset. On this dataset the Negative Selection algorithm revealed a fully negative correlation (100% in magnitude) between the actual and approximated values. The approximations found for missing data are also observed to depend on the particular neural network architecture employed in training. Further analysis revealed that the Random Forest algorithm on average performed better than the GA, SA, PSO, and NS algorithms, yielding the lowest Mean Square Error, Root Mean Square Error, and Mean Absolute Error values. At the other end of the scale was the NS algorithm, which produced the highest values for the three error metrics; for these metrics, lower values indicate better performance.
The evaluation of the algorithms on the classification datasets revealed that the Random Forest algorithm was the most accurate at assigning new observations to categories on the basis of the training data, yielding the highest AUC values on all five classification datasets. The differences between its AUC values and those of the GA, SA, PSO, and NS algorithms were statistically significant, with the most significant differences observed when the AUC values of the Random Forest algorithm were compared to those of the Negative Selection algorithm on all five classification datasets. The GA, SA, and PSO algorithms produced AUC values which, when compared against each other on all five classification datasets, were not very different. Overall, the analysis revealed that the algorithm which performed best in solving both the prediction and classification problems was the Random Forest algorithm. At the other end of the scale was the Negative Selection algorithm, which produced the highest error-metric values for the prediction problems and the lowest AUC values for the classification problems.
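The core of the imputation scheme described above — minimising a reconstruction error over only the unknown entries — can be sketched with a stand-in reconstruction function in place of the trained auto-associative network, and a plain accept-if-better random search in place of the GA/SA/PSO/RF/NS optimisers; every name here is an assumption for the sketch:

```python
import numpy as np

def impute_missing(g, x_obs, missing, iters=2000, step=0.5, seed=0):
    """Fill the entries flagged in `missing` by minimising the reconstruction
    error e(x) = ||x - g(x)||^2, perturbing only the unknown entries.
    A simple random search stands in for the GA/SA/PSO/RF/NS optimisers."""
    rng = np.random.default_rng(seed)
    x = x_obs.astype(float).copy()
    x[missing] = 0.0                                  # initial guess for unknowns
    error = lambda z: float(np.sum((z - g(z)) ** 2))
    best = error(x)
    for _ in range(iters):
        cand = x.copy()
        cand[missing] += rng.normal(0.0, step, int(missing.sum()))
        e = error(cand)
        if e < best:                                  # keep improving moves only
            x, best = cand, e
    return x
```

The known entries never change; only the unknowns are searched, exactly as in the decomposed error function described in the abstract.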
96

A Computational Intelligence Approach to Clustering of Temporal Data

Georgieva, Kristina Slavomirova January 2015 (has links)
Temporal data is common in real-world datasets. Analysis of such data, for example by means of clustering algorithms, can be difficult due to its dynamic behaviour. There are various types of changes that may occur to clusters in a dataset. Firstly, data patterns can migrate between clusters, shrinking or expanding the clusters. Additionally, entire clusters may move around the search space. Lastly, clusters can split and merge. Data clustering, which is the process of grouping similar objects, is one approach to determine relationships among data patterns, but data clustering approaches can face limitations when applied to temporal data, such as difficulty tracking the moving clusters. This research aims to analyse the ability of particle swarm optimisation (PSO) and differential evolution (DE) algorithms to cluster temporal data. These algorithms experience two weaknesses when applied to temporal data. The first weakness is the loss of diversity, which refers to the fact that the population of the algorithm converges, becoming less diverse and, therefore, limiting the algorithm’s exploration capabilities. The second weakness, outdated memory, is only experienced by the PSO and refers to the previous personal best solutions found by the particles becoming obsolete as the environment changes. A data clustering algorithm that addresses these two weaknesses is necessary to cluster temporal data. This research describes various adaptations of PSO and DE algorithms for the purpose of clustering temporal data. The algorithms proposed aim to address the loss of diversity and outdated memory problems experienced by PSO and DE algorithms. These problems are addressed by combining approaches previously used for the purpose of dealing with temporal or dynamic data, such as repulsion and anti-convergence, with PSO and DE approaches used to cluster data. 
Six PSO algorithms are introduced in this research, namely the data clustering particle swarm optimisation (DCPSO), reinitialising data clustering particle swarm optimisation (RDCPSO), cooperative data clustering particle swarm optimisation (CDCPSO), multi-swarm data clustering particle swarm optimisation (MDCPSO), cooperative multi-swarm data clustering particle swarm optimisation (CMDCPSO), and elitist cooperative multi-swarm data clustering particle swarm optimisation (eCMDCPSO). Additionally, four DE algorithms are introduced, namely the data clustering differential evolution (DCDE), re-initialising data clustering differential evolution (RDCDE), dynamic data clustering differential evolution (DCDynDE), and cooperative dynamic data clustering differential evolution (CDCDynDE). The PSO and DE algorithms introduced require prior knowledge of the total number of clusters in the dataset. The total number of clusters in a real-world dataset, however, is not always known. For this reason, the best performing PSO and best performing DE are compared. The CDCDynDE is selected as the winning algorithm, which is then adapted to determine the optimal number of clusters dynamically. The resulting algorithm is the k-independent cooperative data clustering differential evolution (KCDCDynDE) algorithm, which was compared against the local network neighbourhood artificial immune system (LNNAIS) algorithm, which is an artificial immune system (AIS) designed to cluster temporal data and determine the total number of clusters dynamically. It was determined that the KCDCDynDE performed the clustering task well for problems with frequently changing data, high-dimensions, and pattern and cluster data migration types. / Dissertation (MSc)--University of Pretoria, 2015. / Computer Science / Unrestricted
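A minimal data-clustering DE in the spirit of the DCDE family can be sketched as follows: each individual encodes k centroids and fitness is the quantization error. This is the generic DE/rand/1/bin scheme on static data with invented parameter values, not any of the dynamic variants above:

```python
import numpy as np

def de_cluster(data, k, pop=20, iters=150, F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin over candidate solutions that each encode k centroids;
    fitness is the quantization error (mean distance to the closest centroid)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    lo, hi = data.min(axis=0), data.max(axis=0)
    popx = rng.uniform(lo, hi, (pop, k, d))

    def fitness(c):
        dist = np.linalg.norm(data[:, None, :] - c[None, :, :], axis=2)
        return float(dist.min(axis=1).mean())

    fit = np.array([fitness(c) for c in popx])
    for _ in range(iters):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = popx[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                  # differential mutation
            cross = rng.random((k, d)) < CR           # binomial crossover
            trial = np.where(cross, mutant, popx[i])
            tf = fitness(trial)
            if tf <= fit[i]:                          # greedy selection
                popx[i], fit[i] = trial, tf
    best = fit.argmin()
    return popx[best], float(fit[best])
```

The temporal variants discussed in the thesis add mechanisms such as re-initialisation and anti-convergence on top of this basic loop to cope with moving clusters.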
97

Modelo de avaliação de risco para predição de preços de carne bovina utilizando inteligência computacional / Model of evaluation of risk for prediction of prices in the beef chain by using computational intelligence

Luciene Rose Lemes 08 August 2014 (has links)
A relação entre o preço futuro e o preço a vista é um fator que requer muita atenção e planejamento das atividades de comercialização agropecuária. As previsões de preços permitem fornecer a redução das incertezas dentro do mercado de carne bovina auxiliando na determinação da quantidade a ser produzida bem como no estabelecimento de políticas governamentais apropriadas e sustentáveis. Este trabalho tem como objetivo definir um modelo matemático capaz de predizer os preços de carne bovina usando inteligência computacional a partir da análise do ARIMA, Análise de Risco e Redes Neurais Artificiais, identificando aspectos quantitativos relacionados à lógica da decisão na formação do preço de venda, utilizando-se de séries temporais, a fim de explorar as correlações que impactam com maior frequência no preço de venda de carne bovina, por entender que este conhecimento pode aperfeiçoar os instrumentos de avaliação no processo de tomada de decisão. A pesquisa caracteriza-se como descritiva, explicativa e quantitativa pois busca-se identificar fatores determinantes para a ocorrência dos fenômenos observados nas séries temporais de preço do boi gordo, traduzindo-se em números as informações classificadas e analisadas com o uso de técnicas estatísticas. Neste estudo foi utilizada a metodologia de séries temporais, aplicada à série histórica de preços do Boi Gordo no período de 23 de julho de 1997 a 18 de fevereiro de 2013, obtida junto ao Centro de Estudos Avançados em Economia Aplicada (CEPEA) da ESALQ/USP/Piracicaba. Estes dados representam o indicador de preço do Boi Gordo ESALQ/BM&F/BOVESPA utilizado como referencial para as negociações de compra e venda de contratos futuros.
Os resultados demonstram a eficiência dos modelos propostos para a simulação e instrumentalização, ou seja, permitem avaliar o comportamento linear e não linear do modelo como ferramenta para a geração de informações e redução dos riscos que contribuirá para reduzir a subjetividade no processo de tomada de decisão. / The relationship between the future price and the spot price of beef is a factor that requires much attention and planning in agricultural market activities. Price forecasts help to reduce uncertainties in the beef market and assist in determining the quantity of beef to be produced, as well as in establishing proper and sustainable governmental policies. This work aims to establish a mathematical model capable of predicting the price of beef using computational intelligence based on ARIMA analysis, Risk Analysis and Artificial Neural Networks, by identifying the quantitative aspects related to the logic of decision in the formation of selling prices. This is done by using time series in order to explore the correlations that most frequently impact the selling price of beef, on the understanding that this knowledge could improve the assessment instruments in the process of decision making. This research is characterized as descriptive, explanatory and quantitative because it intends to identify the determining factors for the occurrence of the phenomena observed in the time series of livestock prices, with the classified and analysed information expressed numerically through statistical techniques. The time series methodology was applied to the historical series of livestock (Boi Gordo) prices in the period from July 23rd 1997 to February 18th 2013, obtained from the Center for Advanced Studies in Applied Economics (CEPEA) at ESALQ/USP/Piracicaba. These data represent the livestock price indicator ESALQ/BM&F/BOVESPA, which is used as a reference for the negotiation of purchase and sale of future contracts.
The results show the effectiveness of the proposed models for simulation and orchestration, that is, they allow assessment of linear and non-linear behavior of the model as a tool for the generation of data and risk reductions that will contribute to less subjectivity in the decision-making process.
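The linear backbone of ARIMA-style price forecasting — an autoregression fitted by least squares and iterated forward — can be sketched as follows. This is a plain AR(p) without the integration and moving-average terms, with invented data; `fit_ar` and `forecast_ar` are names for the sketch only:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of x_t = c + a_1*x_{t-1} + ... + a_p*x_{t-p},
    the autoregressive backbone of an ARIMA(p, 0, 0) model."""
    x = np.asarray(series, float)
    lags = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(lags)), lags])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef                                        # [c, a_1, ..., a_p]

def forecast_ar(coef, history, steps):
    """Iterate the fitted recursion forward for multi-step forecasts."""
    h = list(history)
    p = len(coef) - 1
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[i + 1] * h[-i - 1] for i in range(p))
        h.append(nxt)
        out.append(nxt)
    return out
```

A full ARIMA treatment would difference the series first and add moving-average terms; the thesis further combines such linear models with risk analysis and neural networks.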
98

HVAC system study: a data-driven approach

Xu, Guanglin 01 May 2012 (has links)
The energy consumed by heating, ventilating, and air conditioning (HVAC) systems has increased in the past two decades, so improving the efficiency of HVAC systems has gained more and more attention. This concern poses challenges for modeling and optimizing HVAC systems. Traditional methods, such as analytical and statistical approaches, are usually computationally complex and involve assumptions that may not hold in practice, since an HVAC system is a complex, nonlinear, and dynamic system. Data mining aims at extracting system characteristics, identifying models, and recognizing patterns from large data sets; it has proved its power in modeling complex and nonlinear systems through effective and successful applications in industrial, business, and medical areas. Classical data-mining techniques, such as neural networks and boosted trees, have been widely applied to modeling HVAC systems in the literature. Evolutionary computation, including swarm intelligence, has developed rapidly in the past decades and has been applied to improving the performance of HVAC systems. This research focuses on modeling, optimizing, and controlling an HVAC system. Data-mining algorithms are first utilized to extract predictive models from an experimental data set collected at the Energy Resource Station in Ankeny. Evolutionary algorithms are then employed to solve the optimization models converted from the above data-driven models. In the optimization process, two set points of the HVAC system, the supply air duct static pressure set point and the supply air temperature set point, are controlled with the aim of improving energy efficiency while maintaining thermal comfort. The methodology presented in this research is applicable to various industrial processes other than HVAC systems.
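The optimisation stage — searching over the two set points against a data-driven cost model — can be sketched with a synthetic surrogate in place of the mined models; `optimize_setpoints`, the penalty weight, and the toy models in the usage below are all assumptions for the sketch:

```python
import numpy as np

def optimize_setpoints(energy_model, comfort_model, bounds,
                       pop=30, iters=100, seed=0):
    """Evolutionary search over the two set points (duct static pressure,
    supply air temperature): mutate the incumbent, keep improvements, and
    penalise any comfort violation reported by `comfort_model`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    cost = lambda sp: energy_model(sp) + 1e3 * max(0.0, comfort_model(sp))
    best = rng.uniform(lo, hi)
    best_c = cost(best)
    for _ in range(iters):
        cands = np.clip(best + rng.normal(0.0, 0.1 * (hi - lo), (pop, 2)),
                        lo, hi)
        for sp in cands:
            c = cost(sp)
            if c < best_c:
                best, best_c = sp.copy(), c
    return best, best_c
```

In the research itself, `energy_model` and `comfort_model` would be the predictive models mined from the experimental data rather than closed-form functions.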
99

Spamming mobile botnet detection using computational intelligence

Vural, Ickin January 2013 (has links)
This dissertation explores a new challenge to digital systems posed by the adoption of mobile devices and proposes a countermeasure to secure systems against threats to this new digital ecosystem. The study provides the reader with background on the topics of spam, botnets and machine learning before tackling the issue of mobile spam. It presents a three-tier model that uses machine learning techniques to combat spamming mobile botnets. The three-tier model is then developed into a prototype and demonstrated using test scenarios. Finally, the dissertation critically discusses the advantages of using the three-tier model to combat spamming botnets. / Dissertation (MSc)--University of Pretoria, 2013. / gm2014 / Computer Science / unrestricted
100

Ein Simulator für das Immunsystem

Seifert, Christin 01 March 2004 (has links)
In this thesis a Simulator of the Immune System (IS) is developed. The implemented models refine and extend those of existing Artificial Immune Systems and allow the influence of various parameters to be investigated. / In der Arbeit wird der Prototyp eines Immunsystem-Simulators erstellt, der die vorhandenen Modelle Künstlicher Immunsysteme verfeinert und die Untersuchung der Einflüsse verschiedener Parameter erlaubt.
