31 |
Niching strategies for particle swarm optimization / Brits, Riaan, 19 February 2004
Evolutionary algorithms and swarm intelligence techniques have been shown to successfully solve optimization problems where the goal is to find a single optimal solution. In multimodal domains, where the goal is to locate multiple solutions in a single search space, these techniques fail. Niching algorithms extend existing global optimization algorithms to locate and maintain multiple solutions concurrently. In this thesis, strategies are developed that utilize the unique characteristics of the particle swarm optimization algorithm to perform niching. Shrinking topological neighborhoods and optimization with multiple subswarms are used to identify and stably maintain niches. Systems of equations and multimodal functions are used to demonstrate the effectiveness of the new algorithms. / Dissertation (MS)--University of Pretoria, 2005. / Computer Science / unrestricted
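As a rough illustration of the subswarm idea described in this abstract, the sketch below runs several independent PSO subswarms, each seeded in its own region of a 1-D multimodal function so that each converges on a different optimum. The test function, coefficients, and region-based seeding are illustrative assumptions; the thesis's algorithms derive and merge subswarms adaptively rather than by fixed partitioning.

```python
# Minimal sketch of niching via multiple PSO subswarms (illustrative only).
import numpy as np

def f(x):
    # Multimodal test function with five equal maxima in [0, 1].
    return np.sin(5 * np.pi * x) ** 6

rng = np.random.default_rng(0)
n_subswarms, particles_per_swarm, iters = 5, 10, 200
w, c1, c2 = 0.72, 1.49, 1.49  # common PSO coefficients

# Each subswarm is seeded in its own sub-interval so it can settle on a
# different niche; the thesis forms subswarms adaptively instead.
solutions = []
for s in range(n_subswarms):
    lo, hi = s / n_subswarms, (s + 1) / n_subswarms
    x = rng.uniform(lo, hi, particles_per_swarm)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    for _ in range(iters):
        gbest = pbest[np.argmax(pbest_f)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        fx = f(x)
        better = fx > pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    solutions.append(pbest[np.argmax(pbest_f)])

print(np.round(sorted(solutions), 3))  # roughly one optimum per subswarm
```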
|
32 |
Intelligent pre-processing for data mining / De Bruin, Ludwig, 26 June 2014
M.Sc. (Information Technology) / Data is generated at an ever-increasing rate, and it has become difficult to process or analyse it in its raw form. Most data is generated by processes or measuring equipment, resulting in very large volumes of data per unit of time. Companies and corporations rely on their Management and Information Systems (MIS) teams to perform daily Extract, Transform and Load (ETL) operations into data warehouses in order to provide them with reports. Data mining is a Business Intelligence (BI) tool and can be defined as the process of discovering hidden information in existing data repositories. The successful operation of data mining algorithms requires data to be pre-processed so that the algorithms can derive IF-THEN rules. This dissertation presents a data pre-processing model to transform data in an intelligent manner and enhance its suitability for data mining operations. The Extract, Pre-Process and Save for Data Mining (EPS4DM) model is proposed. This model performs the pre-processing tasks required on a chosen dataset and transforms it into the formats required, which data mining algorithms can access from a data mining mart when needed. The proof-of-concept prototype features agent-based Computational Intelligence (CI) algorithms that allow the pre-processing tasks of classification and clustering, as means of dimensionality reduction, to be performed. The task of clustering requires the denormalisation of relational structures and is automated using a feature-vector approach. A Particle Swarm Optimisation (PSO) algorithm is run on the patterns to find cluster centres based on Euclidean distances. The task of classification takes a feature vector as input and makes use of a Genetic Algorithm (GA) to produce a transformation matrix that reduces the number of significant features in the dataset. The results of both the classification and clustering processes are stored in the data mart.
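A minimal sketch of the PSO clustering step described above, as a hypothetical reconstruction rather than the EPS4DM implementation: each particle encodes K candidate cluster centres, and fitness is the mean Euclidean distance from each pattern to its nearest centre. The parameter values and synthetic data are assumptions.

```python
# Illustrative PSO-based clustering: particles encode K cluster centres.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic feature vectors drawn around three true centres.
patterns = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 2.0, 4.0)])
K, n_particles, iters = 3, 20, 100
w, c1, c2 = 0.72, 1.49, 1.49

def fitness(centres):
    # Quantisation error: mean Euclidean distance to the nearest centre.
    d = np.linalg.norm(patterns[:, None, :] - centres[None, :, :], axis=2)
    return d.min(axis=1).mean()  # lower is better

x = rng.uniform(patterns.min(), patterns.max(), (n_particles, K, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([fitness(c) for c in x])
for _ in range(iters):
    g = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f_x = np.array([fitness(c) for c in x])
    improved = f_x < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f_x[improved]

print(np.round(pbest[np.argmin(pbest_f)], 2))  # final cluster centres
```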
|
33 |
The feature detection rule and its application within the negative selection algorithm / Poggiolini, Mario, 26 June 2009
The negative selection algorithm developed by Forrest et al. was inspired by the manner in which T-cell lymphocytes mature within the thymus before being released into the blood system. The resultant T-cell lymphocytes, which are then released into the blood, exhibit an interesting characteristic: they are only activated by non-self cells that invade the human body. The work presented in this thesis examines the current body of research on negative selection theory and introduces a new affinity threshold function, called the feature-detection rule. The feature-detection rule utilises the inter-relationship between both adjacent and non-adjacent features within a particular problem domain to determine whether an artificial lymphocyte is activated by a particular antigen. The performance of the feature-detection rule is contrasted with traditional affinity-matching functions currently employed within negative selection theory, most notably the r-chunks rule (which subsumes the r-contiguous-bits rule) and the Hamming-distance rule. The performance is characterised by considering the detection rate, false-alarm rate, degree of generalisation and degree of overfitting. The thesis shows that the feature-detection rule is superior to the r-chunks rule and the Hamming-distance rule, in that it requires a much smaller number of detectors to achieve higher detection rates and lower false-alarm rates. The thesis additionally argues that the way in which permutation masks are currently applied within negative selection theory is incorrect and counterproductive, and places the feature-detection rule within the spectrum of affinity-matching functions currently employed by artificial immune system (AIS) researchers. / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / Unrestricted
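For context, here is a minimal sketch of the r-chunks affinity rule that the feature-detection rule is benchmarked against (the feature-detection rule itself is defined in the thesis): a detector is a window of r bits anchored at a position, and it matches any antigen whose bits at that position are identical.

```python
# Minimal sketch of the r-chunks matching rule from negative selection.
def r_chunks_match(detector: str, position: int, antigen: str) -> bool:
    """Return True if the r-bit detector matches the antigen at `position`."""
    r = len(detector)
    return antigen[position:position + r] == detector

# Detectors are trained so they never match self strings; at run time,
# a match therefore signals a non-self (anomalous) antigen.
print(r_chunks_match("101", 2, "0010111"))  # True: bits 2..4 are "101"
print(r_chunks_match("101", 2, "0011111"))  # False
```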
|
34 |
Electronic warfare asset allocation with human-swarm interaction / Boler, William M., 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Finding the optimal placement of receiving assets among transmitting targets in a three-dimensional (3D) space is a complex and dynamic problem that is solved in this work. Placing assets in R^6 to optimize coverage of transmitting targets requires placement in 3D space, center frequency assignment, and antenna azimuth and elevation orientation, with respect to power coverage at the receiver, without overloading the feed-horn, while maintaining sufficient power sensitivity levels and honoring terrain constraints. Further complexity arises because the human user holds necessary, time-constrained knowledge of real-world conditions unknown to the problem space, such as enemy positions or special targets, which requires the user to interact with the solution convergence in some fashion. Particle Swarm Optimization (PSO) approaches this problem with accurate and rapid approximation to the electronic warfare asset allocation problem (EWAAP), with near-real-time solution convergence, using a linear combination of weighted components for fitness comparison and particles representing asset configurations. Finally, optimizing the weights of the fitness function requires unsupervised machine learning techniques, via a Meta-PSO, to reduce the complexity of assigning a fitness function. The result of this work implements a more realistic asset allocation problem, with directional antennas and complex terrain constraints, that converges on a solution in 488.7167 ± 15.6580 ms on average and has a standard deviation of 15.3901 for asset positions across solutions.
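A hedged sketch of the fitness formulation described above: a particle is a flat vector of per-asset parameters (x, y, z, frequency, azimuth, elevation), and fitness is a linear combination of weighted components. The component functions and weights below are illustrative placeholders, not the EWAAP terms from the thesis.

```python
# Illustrative weighted-component fitness for asset-configuration particles.
import numpy as np

N_ASSETS, PARAMS = 3, 6  # each asset configuration lives in R^6
weights = np.array([1.0, 0.5])  # assumed weights, e.g. coverage vs. terrain

def coverage(conf, targets):
    # Toy coverage term: negative mean distance from each target to the
    # nearest asset position (first three parameters of each asset).
    pos = conf.reshape(N_ASSETS, PARAMS)[:, :3]
    d = np.linalg.norm(targets[:, None, :] - pos[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def terrain_penalty(conf):
    # Toy constraint term: penalise assets placed below ground (z < 0).
    z = conf.reshape(N_ASSETS, PARAMS)[:, 2]
    return -np.clip(-z, 0.0, None).sum()

def fitness(conf, targets):
    # Linear combination of weighted components, as in the abstract.
    return weights @ np.array([coverage(conf, targets), terrain_penalty(conf)])

rng = np.random.default_rng(2)
targets = rng.uniform(0, 10, (5, 3))
particle = rng.uniform(0, 10, N_ASSETS * PARAMS)
print(fitness(particle, targets))  # higher is better for the PSO
```

In this scheme the Meta-PSO mentioned in the abstract would search over the `weights` vector itself, with the inner PSO searching over particles like `particle`.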
|
35 |
Fusion for Object Detection / Wei, Pan, 10 August 2018
In a three-dimensional world, to perceive the objects around us we wish not only to classify them but also to know where they are. The task of object detection combines both classification and localization: in addition to predicting the object category, we also predict where the object is from sensor data. Since it is not known ahead of time how many objects of interest appear in the sensor data, or where they are, the output size of object detection may vary, which makes the problem difficult. In this dissertation, I focus on the task of object detection and use fusion to improve detection accuracy and robustness. More specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses information from the sources themselves to help assess their credibility. I apply the proposed measure of conflict to fuse independent sources of tracking information from various stereo cameras. In addition, I propose a computational intelligence system for more accurate object detection in real time. The proposed system applies online image augmentation before the detection stage during testing and fuses the detection results afterwards. The fusion method is computationally intelligent, based on a dynamic analysis of the agreement among its inputs. Compared with other fusion operations such as averaging, the median, and non-maxima suppression, the proposed method produces more accurate results in real time. I also propose a multi-sensor fusion system that incorporates the advantages and mitigates the disadvantages of each type of sensor (LiDAR and camera). Generally, a camera can provide richer texture and color information but cannot work in low visibility; LiDAR, on the other hand, can provide accurate point positions and works at night or in moderate fog or rain. The proposed system exploits the advantages of both camera and LiDAR while mitigating their disadvantages. The results show that, compared with LiDAR or camera detection alone, the fused result can extend the detection range up to 40 meters with increased detection accuracy and robustness.
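As a generic illustration of fusing detections from multiple augmented runs (a comparison-style scheme, not the dissertation's exact operator): boxes that agree, measured by IoU above a threshold, are merged by score-weighted averaging. Box format and threshold are assumptions.

```python
# Illustrative fusion of detections (x1, y1, x2, y2, score) by agreement.
import numpy as np

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(detections, thr=0.5):
    fused, used = [], [False] * len(detections)
    for i, d in enumerate(detections):
        if used[i]:
            continue
        group = []
        for j in range(i, len(detections)):
            if not used[j] and iou(d[:4], detections[j][:4]) >= thr:
                used[j] = True
                group.append(detections[j])
        w = np.array([g[4] for g in group])
        boxes = np.array([g[:4] for g in group])
        # Score-weighted average of agreeing boxes; mean score as confidence.
        fused.append(np.append((w @ boxes) / w.sum(), w.mean()))
    return fused

dets = [(10, 10, 50, 50, 0.9), (12, 11, 52, 49, 0.8), (80, 80, 120, 120, 0.7)]
print(fuse([np.array(d, float) for d in dets]))  # two fused detections
```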
|
36 |
Computational intelligence for safety assurance of cooperative systems of systems / Kabir, Sohag; Papadopoulos, Y., 29 March 2021
Cooperative Systems of Systems (CSoS), including autonomous systems (AS) such as autonomous cars and related smart traffic infrastructures, form a new technological frontier with enormous economic and societal potential in various domains. CSoS are often safety-critical systems and are therefore expected to have a high level of dependability. Due to the open and adaptive nature of CSoS, the conventional methods used to provide safety assurance for traditional systems cannot be applied directly to them. Potential configurations and scenarios during the evolving operation are infinite and cannot be exhaustively analysed to provide guarantees a priori. This paper presents a novel framework for dynamic safety assurance of CSoS, which integrates design-time models and runtime techniques to provide continuous assurance for a CSoS and its systems during operation. / Dependability Engineering Innovation for Cyber Physical Systems (DEIS) H2020 Project under Grant 732242.
|
37 |
Development of a fuzzy system design strategy using evolutionary computation / Bush, Brian O., January 1996
No description available.
|
38 |
Object recognition and automatic selection in a Robotic Sorting Cell / Janse van Rensburg, Frederick Johannes, 12 1900
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2006. / This thesis relates to the development of an automated sorting cell, as part of a flexible manufacturing line, using object recognition. Algorithms for each of the individual subsections making up the cell (recognition, position calculation, and robot integration) were developed and tested.
The Fourier descriptors object recognition technique is investigated and used; it provides invariance to scale, rotation, and translation of an object's boundary. Stereoscopy with basic trigonometry is used to calculate the positions of recognised objects, after which they are handled by a robot. Integration of the robot into the project environment is done with trigonometry as well as Euler angles.
It is shown that a successful, automated sorting cell can be constructed with object recognition. The results show that reliable sorting can be done with the available hardware and the algorithms developed.
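A minimal sketch of boundary-based Fourier descriptors as discussed above (an illustration of the technique, not the thesis implementation): the boundary is treated as a complex signal, and normalising its FFT yields the scale, rotation, and translation invariance the abstract refers to.

```python
# Illustrative Fourier descriptors of a closed 2-D boundary.
import numpy as np

def fourier_descriptors(boundary_xy, n_descriptors=8):
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]  # boundary as complex signal
    spectrum = np.fft.fft(z)
    spectrum[0] = 0.0           # drop DC term   -> translation invariance
    mags = np.abs(spectrum)     # drop phase     -> rotation invariance
    mags /= mags[1]             # normalise      -> scale invariance
    return mags[1:1 + n_descriptors]

# A circle and its scaled, shifted, rotated copy give the same descriptors.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
moved = 3.0 * np.c_[np.cos(t + 0.7), np.sin(t + 0.7)] + 5.0
print(np.allclose(fourier_descriptors(circle), fourier_descriptors(moved)))
```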
|
39 |
Utilising provenance to enhance social computation / Markovic, Milan, January 2016
Many online platforms employ networks of human workers to perform computational tasks that can be difficult for a machine to perform (e.g. recognising an object in an image). This approach can be referred to as social computation. However, systems that utilise social computation often suffer from a lack of transparency, which results in difficulties in the decision-making process (e.g. assessing the reliability of outputs). This thesis investigates how the lack of transparency can be addressed by recording provenance, which includes descriptions of social computation workflows and their executions. In addition, it investigates the role of Semantic Web technologies in modelling and querying such provenance in order to support decision-making. Following an analysis of several use-case scenarios, requirements for describing the provenance of a social computation are identified to provide the basis of the Social Computation Provenance model, SC-PROV. This model extends the W3C recommendation for modelling provenance on the Web (PROV) and the P-PLAN model for describing the provenance of abstract workflows. To satisfy the identified provenance requirements, SC-PROV extends PROV and P-PLAN with a vocabulary for capturing social computation features such as social actors (e.g. workers and requesters), incentives (e.g. promises of monetary rewards received upon completion of a task), and conditions (e.g. constraints defining when an incentive should be awarded). The SC-PROV model is realised in an OWL ontology and used in a semantic annotation framework to capture the provenance of a simulated case study, which includes 46,665 diverse workflows. During the evaluation process, the SC-PROV vocabulary is used to construct provenance queries that support an example workflow selection metric based on trust assessments of various aspects of social computation workflows. The performance of the workflows selected by this metric is then evaluated against the performance of two control groups: one containing randomly selected workflows and the other containing workflows selected by a metric informed by provenance that lacks SC-PROV descriptions. The examples described in this thesis establish the benefits of examining provenance as part of decision-making in the social computation domain, and illustrate the inability of current provenance models to fully support these processes. The evaluation of SC-PROV demonstrates its capability to produce provenance descriptions that extend to the social computation domain. The empirical evidence provided by the evaluation supports the conclusion that using SC-PROV enhances support for trust-based decision-making.
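A hedged sketch of recording a social-computation task's provenance with rdflib, using the W3C PROV vocabulary that SC-PROV extends. The `sc:` namespace, its class and property names, and the task details are illustrative assumptions; the actual SC-PROV terms are defined in the thesis's OWL ontology.

```python
# Illustrative PROV-style provenance graph for a crowdsourced labelling task.
from rdflib import Graph, Namespace, Literal, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
SC = Namespace("http://example.org/sc-prov#")  # placeholder, not the real ontology
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("prov", PROV)

# A worker (agent) executes an image-labelling task (activity) that
# generates a label (entity), with a promised monetary reward recorded
# via assumed SC-PROV-style incentive terms.
g.add((EX.worker42, RDF.type, PROV.Agent))
g.add((EX.labelTask, RDF.type, PROV.Activity))
g.add((EX.label, RDF.type, PROV.Entity))
g.add((EX.label, PROV.wasGeneratedBy, EX.labelTask))
g.add((EX.labelTask, PROV.wasAssociatedWith, EX.worker42))
g.add((EX.reward, RDF.type, SC.Incentive))       # assumed SC-PROV class
g.add((EX.reward, SC.offeredFor, EX.labelTask))  # assumed SC-PROV property
g.add((EX.reward, SC.amount, Literal("0.10")))

# Decision-support queries then traverse the provenance graph, e.g. to
# find which worker produced each output.
q = """SELECT ?who WHERE {
  ?out <http://www.w3.org/ns/prov#wasGeneratedBy> ?task .
  ?task <http://www.w3.org/ns/prov#wasAssociatedWith> ?who .
}"""
for row in g.query(q):
    print(row.who)
```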
|
40 |
[en] QUANTUM-INSPIRED LINEAR GENETIC PROGRAMMING / [pt] PROGRAMAÇÃO GENÉTICA LINEAR COM INSPIRAÇÃO QUÂNTICA / DOUGLAS MOTA DIAS, 26 May 2011
[en] The superior performance of quantum algorithms on some specific problems lies in the direct use of quantum-mechanical phenomena to perform operations on data in quantum computers. This feature has originated a new approach, named Quantum-Inspired Computing, whose goal is to create classical algorithms (running on classical computers) that take advantage of quantum mechanics principles to improve their performance. In this sense, some quantum-inspired evolutionary algorithms have been proposed and successfully applied to combinatorial and numerical optimization problems, presenting performance superior to that of conventional evolutionary algorithms by improving the quality of solutions and reducing the number of evaluations needed to achieve them. To date, however, this new paradigm of quantum inspiration had not yet been applied to Genetic Programming (GP), a class of evolutionary algorithms that aims at the automatic synthesis of computer programs. This thesis proposes, develops, and tests a novel quantum-inspired evolutionary algorithm, named Quantum-Inspired Linear Genetic Programming (QILGP), for the evolution of machine-code programs. Linear Genetic Programming is so named because each of its individuals is represented by a list of instructions (linear structures), which are executed sequentially. The contributions of this work are the study and novel formulation of the use of the quantum-inspiration paradigm in the evolutionary synthesis of computer programs. One of the motivations for choosing the evolution of machine-code programs is that this is the GP approach which, by offering the highest execution speed, makes large-scale experiments feasible. The proposed model is inspired by multi-level quantum systems and uses the qudit as the basic unit of quantum information, representing the superposition of the states of such a system. The model's operation is based on quantum individuals, which represent a superposition of all programs in the search space, and whose observation yields classical individuals and programs (solutions). The tests use symbolic regression and binary classification problems to evaluate the performance of QILGP and compare it with the AIMGP model (Automatic Induction of Machine Code by Genetic Programming), currently considered the most efficient GP model for evolving machine code, as cited in numerous references in this field. The results show that Quantum-Inspired Linear Genetic Programming (QILGP) presents superior overall performance in these classes of problems, achieving better solutions (smaller errors) from a smaller number of evaluations, with the additional advantage of using fewer parameters and operators than the reference model. In comparative tests, the model shows average performance superior to that of the reference model for all case studies, achieving errors 3-31% lower in the symbolic regression problems and 36-39% lower in the binary classification problems. This research concludes that the quantum-inspiration paradigm can be a competitive approach to evolving programs efficiently, encouraging the improvement and extension of the model presented here, as well as the creation of other models of quantum-inspired genetic programming.
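An illustrative sketch of the quantum-inspired representation described above: each gene is a "qudit", here modelled as a probability vector over an instruction set, and observing the quantum individual samples a classical linear program. The instruction set, update rule, and sizes are assumptions, not the QILGP operators from the thesis.

```python
# Illustrative qudit-based quantum individual for linear GP.
import numpy as np

INSTRUCTIONS = ["add", "sub", "mul", "div", "load", "store"]
PROGRAM_LEN = 8
rng = np.random.default_rng(3)

# Quantum individual: one probability vector (qudit state) per program
# position, initially a uniform superposition over all instructions.
quantum = np.full((PROGRAM_LEN, len(INSTRUCTIONS)), 1.0 / len(INSTRUCTIONS))

def observe(quantum_individual):
    """Collapse each qudit to one instruction, yielding a classical program."""
    return [rng.choice(INSTRUCTIONS, p=q) for q in quantum_individual]

def reinforce(quantum_individual, program, rate=0.05):
    """Shift probability mass toward the instructions of a good program."""
    for pos, instr in enumerate(program):
        q = quantum_individual[pos]
        q *= (1.0 - rate)                      # scale all probabilities down
        q[INSTRUCTIONS.index(instr)] += rate   # each row still sums to 1

program = observe(quantum)   # classical individual sampled from superposition
reinforce(quantum, program)  # bias future observations toward this program
print(program)
print(np.round(quantum[0], 3))
```

In the full algorithm the `reinforce` step would be driven by fitness on the symbolic-regression or classification task, so that the superposition gradually concentrates on high-quality programs.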
|