41 |
A distributed, multi-agent model for general purpose crowd simulation
Ekron, Kieron Charles 06 November 2012 (has links)
M.Sc. (Computer Science) / The purpose of the research presented in this dissertation is to explore the use of a distributed multi-agent system in a general purpose crowd simulation model. Crowd simulation is becoming an increasingly important tool for analysing new construction projects, as it enables safety and performance evaluations to be performed on architectural plans before the buildings have been constructed. Crowd simulation is a challenging problem, as it requires the simulation of complex interactions of people within a crowd. The dissertation investigates existing models of crowd simulation and identifies three primary sub-tasks of crowd simulation: deliberation, path planning and collision-avoiding movement. Deliberation is the process of determining which goal an agent will attempt to satisfy next. Path planning is the process of finding a collision-free path from an agent's current location towards its goal. Collision-avoiding movement deals with moving an agent along its calculated path while avoiding collisions with other agents. A multi-agent crowd simulation model, DiMACS, is proposed as a means of addressing the problem of crowd simulation. Multi-agent technology provides an effective solution for representing individuals within a crowd; each member of a crowd can be represented as an intelligent agent. Intelligent agents are capable of maintaining their own internal state and deciding on a course of action based on that internal state. DiMACS is capable of producing realistic simulations while making use of distributed and parallel processing to improve its performance. In addition, the model is highly customisable. The dissertation also presents a user-friendly method for configuring agents within a simulation that abstracts the complexity of agent behaviour away from a user so as to increase the accessibility of configuring the proposed model. In addition, an application programming interface is provided that enables developers to extend the model to simulate additional agent behaviours. The research shows how distributed and parallel processing may be used to improve the performance of an agent-based crowd simulation without compromising the accuracy of the simulation.
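To make the decomposition concrete, the sketch below shows how the three sub-tasks might fit together in a single agent's update loop. It is an illustrative assumption only; the class, the navgraph helper and its shortest_path method are hypothetical and are not the DiMACS API.

```python
import math

class Agent:
    """Illustrative crowd-simulation agent combining the three sub-tasks."""

    def __init__(self, position, goals):
        self.position = position      # (x, y) tuple
        self.goals = list(goals)      # ordered candidate goals
        self.path = []                # waypoints produced by the path planner

    def deliberate(self):
        # Deliberation: decide which goal to attempt to satisfy next
        # (here simply the first remaining goal).
        return self.goals[0] if self.goals else None

    def plan_path(self, goal, navgraph):
        # Path planning: ask a navigation structure (hypothetical helper)
        # for a collision-free route from the current position to the goal.
        self.path = navgraph.shortest_path(self.position, goal)

    def step(self, neighbours, speed=1.4):
        # Collision-avoiding movement: advance toward the next waypoint,
        # but hold position if another agent already occupies that space.
        if not self.path:
            return
        tx, ty = self.path[0]
        dx, dy = tx - self.position[0], ty - self.position[1]
        dist = math.hypot(dx, dy) or 1e-9
        nxt = (self.position[0] + speed * dx / dist,
               self.position[1] + speed * dy / dist)
        if all(math.hypot(nxt[0] - a.position[0], nxt[1] - a.position[1]) > 0.5
               for a in neighbours):
            self.position = nxt
        if dist <= speed:
            self.path.pop(0)
```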
|
42 |
Computational intelligence technology for the generation of building layouts combined with multi-agent furniture placement
Bijker, Jacobus Jan 02 November 2012 (has links)
M.Sc. (Computer Science) / This dissertation presents a method for learning from existing building designs and generating new building layouts. Generating fully furnished building layouts could be very useful for video games or for assisting architects when designing new buildings. The core concern is to drastically reduce the workload required to design building layouts. The implemented prototype features a Computer Aided Design system, named CABuilD, that allows users to design fully furnished multi-storey building layouts. Building layouts designed using CABuilD can be taught to an Artificial Immune System. The Artificial Immune System tracks information such as building layouts, room sizes and furniture layouts. Once building layouts have been taught to the artificial immune system, a generation algorithm can utilise the information in order to generate fully furnished building layouts. The generation algorithm that is presented allows fully furnished buildings to be generated from high-level information such as the number of rooms to include and a building perimeter. The presented algorithm differs from existing building generation methods in the following ways: Firstly, existing methods either ignore building perimeters or assume a building's perimeter is a rectangle. The presented method allows the user to specify a closed polygon as a building perimeter which will guide the generation of the building layout. Secondly, existing generation methods tend to run from a set of rules. The implemented system learns from existing building layouts, effectively allowing it to generate different building types based on the building layouts that were taught to the system. Thirdly, the system generates both the building layout and the furniture within rooms. Existing systems only generate the building layout or the furniture, but not both. The prototype that was implemented as a proof of concept uses a number of biologically inspired techniques such as Ant algorithms, Particle Swarm Optimisation and Artificial Immune Systems. The system also employs multiple intelligent agents in order to furnish rooms. The prototype is capable of generating furnished building layouts in merely a few seconds, much faster than a human could design such a layout. Possible improvements and future work are presented at the end of the dissertation.
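As one illustration of how a user-supplied perimeter polygon might constrain generation, the sketch below rejects candidate room placements whose corners fall outside the perimeter, using a standard ray-casting point-in-polygon test. This is an assumed constraint check for illustration only, not the generation algorithm described in the dissertation.

```python
def point_in_polygon(pt, polygon):
    # Ray-casting test: count how many polygon edges a horizontal ray from pt crosses.
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def room_fits(room_corners, perimeter):
    # A candidate room is acceptable only if all of its corners lie inside the perimeter.
    return all(point_in_polygon(c, perimeter) for c in room_corners)

# L-shaped building perimeter; the second candidate room pokes outside it.
perimeter = [(0, 0), (10, 0), (10, 6), (6, 6), (6, 10), (0, 10)]
print(room_fits([(1, 1), (4, 1), (4, 4), (1, 4)], perimeter))   # True
print(room_fits([(7, 5), (9, 5), (9, 9), (7, 9)], perimeter))   # False
```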
|
43 |
A Serendipitous Software Framework for Facilitating Collaboration in Computational Intelligence
Peer, Edwin S. 10 June 2005 (has links)
A major flaw in the academic system, particularly pertaining to computer science, is that it rewards specialisation. The highly competitive quest for new scientific developments, or rather the quest for a better reputation and more funding, forces researchers to specialise in their own fields, leaving them little time to properly explore what others are doing, sometimes even within their own field of interest. Even the peer review process, which should provide the necessary balance, fails to achieve much diversity, since reviews are typically performed by persons who are again specialists in the particular field of the work. Further, software implementations are rarely reviewed, with the consequence that untenable results are published. Unfortunately, these factors contribute to an environment which is not conducive to collaboration, a cornerstone of academia: building on the work of others. This work takes a step back and examines the general landscape of computational intelligence from a broad perspective, drawing on multiple disciplines to formulate a collaborative software platform which is flexible enough to support the needs of this diverse research community. Interestingly, this project did not set out with these goals in mind; rather, it evolved, over time, from something more specialised into the general framework described in this dissertation. Design patterns are studied as a means to manage the complexity of the computational intelligence paradigm in a flexible software implementation. Further, this dissertation demonstrates that releasing research software under an open source license eliminates some of the deficiencies of the academic process, while preserving, and even improving, the ability to build a reputation and pursue funding. Two software packages have been produced as products of this research: i) CILib, an open source library of computational intelligence algorithms; and ii) CiClops, which is a virtual laboratory for performing experiments that scale over multiple workstations. Together, these software packages are intended to improve the quality of research output and facilitate collaboration by sharing a repository of simulation data, statistical analysis tools and a single software implementation. / Dissertation (MSc)--University of Pretoria, 2006. / Computer Science / Unrestricted
|
44 |
Niching strategies for particle swarm optimization
Brits, Riaan 19 February 2004 (has links)
Evolutionary algorithms and swarm intelligence techniques have been shown to successfully solve optimization problems where the goal is to find a single optimal solution. In multimodal domains, where the goal is to locate multiple solutions in a single search space, these techniques fail. Niching algorithms extend existing global optimization algorithms to locate and maintain multiple solutions concurrently. In this thesis, strategies are developed that utilize the unique characteristics of the particle swarm optimization algorithm to perform niching. Shrinking topological neighborhoods and optimization with multiple subswarms are used to identify and stably maintain niches. Systems of equations and multimodal functions are used to demonstrate the effectiveness of the new algorithms. / Dissertation (MS)--University of Pretoria, 2005. / Computer Science / unrestricted
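A minimal sketch of the subswarm idea follows: several small swarms, each seeded in a different region of the search space, refine separate optima of a multimodal function instead of collapsing onto a single global best. The test function, seeds and parameters are illustrative assumptions and do not reproduce the niching algorithms developed in the thesis.

```python
import math
import random

def f(x):
    # Multimodal test function with five equal maxima in [0, 1].
    return math.sin(5 * math.pi * x) ** 6

def subswarm_pso(seed, radius=0.05, particles=10, iters=200):
    # One subswarm seeded near a suspected niche; it refines that region
    # with a standard velocity update instead of chasing a single global best.
    xs = [min(1.0, max(0.0, seed + random.uniform(-radius, radius))) for _ in range(particles)]
    vs = [0.0] * particles
    pbest = list(xs)
    sbest = max(xs, key=f)   # best position found by this subswarm only
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            vs[i] = 0.72 * vs[i] + 1.49 * r1 * (pbest[i] - xs[i]) + 1.49 * r2 * (sbest - xs[i])
            xs[i] = min(1.0, max(0.0, xs[i] + vs[i]))
            if f(xs[i]) > f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) > f(sbest):
                sbest = xs[i]
    return sbest

# Each subswarm typically settles on a different optimum (near 0.1, 0.3, 0.5, 0.7, 0.9).
niches = [subswarm_pso(seed) for seed in (0.08, 0.32, 0.52, 0.68, 0.88)]
print([round(n, 3) for n in niches])
```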
|
45 |
Intelligent pre-processing for data mining
De Bruin, Ludwig 26 June 2014 (has links)
M.Sc. (Information Technology) / Data is generated at an ever-increasing rate and it has become difficult to process or analyse it in its raw form. Most data is generated by processes or measuring equipment, resulting in very large volumes of data per time unit. Companies and corporations rely on their Management and Information Systems (MIS) teams to perform Extract, Transform and Load (ETL) operations to data warehouses on a daily basis in order to provide them with reports. Data mining is a Business Intelligence (BI) tool and can be defined as the process of discovering hidden information from existing data repositories. The successful operation of data mining algorithms requires data to be pre-processed so that algorithms can derive IF-THEN rules. This dissertation presents a data pre-processing model to transform data in an intelligent manner to enhance its suitability for data mining operations. The Extract, Pre-Process and Save for Data Mining (EPS4DM) model is proposed. This model performs the pre-processing tasks required on a chosen dataset and transforms the dataset into the formats required, which can be accessed by data mining algorithms from a data mining mart when needed. The proof-of-concept prototype features agent-based Computational Intelligence (CI) algorithms, which allow the pre-processing tasks of classification and clustering, as means of dimensionality reduction, to be performed. The task of clustering requires the denormalisation of relational structures and is automated using a feature vector approach. A Particle Swarm Optimisation (PSO) algorithm is run on the patterns to find cluster centres based on Euclidean distances. The task of classification requires a feature vector as input and makes use of a Genetic Algorithm (GA) to produce a transformation matrix that reduces the number of significant features in the dataset. The results of both the classification and clustering processes are stored in the data mart.
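The sketch below illustrates the clustering step in its simplest form: each PSO particle encodes a set of candidate cluster centres and is scored by the summed Euclidean distance of the patterns to their nearest centre. It is a toy illustration under those assumptions, not the EPS4DM implementation.

```python
import math
import random

def pso_cluster(patterns, k=2, n_particles=15, iters=100, w=0.72, c1=1.49, c2=1.49):
    """Toy PSO clustering: each particle encodes k candidate cluster centres and
    is scored by the summed Euclidean distance of every pattern to its nearest
    centre (lower is better)."""
    dim = len(patterns[0])

    def fitness(centres):
        return sum(min(math.dist(p, c) for c in centres) for p in patterns)

    def copy(centres):
        return [list(c) for c in centres]

    swarm = [[[random.random() for _ in range(dim)] for _ in range(k)]
             for _ in range(n_particles)]
    vel = [[[0.0] * dim for _ in range(k)] for _ in range(n_particles)]
    pbest = [copy(s) for s in swarm]
    gbest = copy(min(swarm, key=fitness))

    for _ in range(iters):
        for i, s in enumerate(swarm):
            for j in range(k):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][j][d] = (w * vel[i][j][d]
                                    + c1 * r1 * (pbest[i][j][d] - s[j][d])
                                    + c2 * r2 * (gbest[j][d] - s[j][d]))
                    s[j][d] += vel[i][j][d]
            if fitness(s) < fitness(pbest[i]):
                pbest[i] = copy(s)
                if fitness(s) < fitness(gbest):
                    gbest = copy(s)
    return gbest

# Two obvious groups of 2-D feature vectors; the swarm should place one centre near each.
data = [(0.1, 0.2), (0.15, 0.1), (0.2, 0.25), (0.8, 0.9), (0.85, 0.8), (0.9, 0.95)]
print(pso_cluster(data))
```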
|
46 |
The feature detection rule and its application within the negative selection algorithm
Poggiolini, Mario 26 June 2009 (has links)
The negative selection algorithm developed by Forrest et al. was inspired by the manner in which T-cell lymphocytes mature within the thymus before being released into the blood system. The resultant T-cell lymphocytes, which are then released into the blood, exhibit an interesting characteristic: they are only activated by non-self cells that invade the human body. The work presented in this thesis examines the current body of research on the negative selection theory and introduces a new affinity threshold function, called the feature-detection rule. The feature-detection rule utilises the inter-relationship between both adjacent and non-adjacent features within a particular problem domain to determine whether an artificial lymphocyte is activated by a particular antigen. The performance of the feature-detection rule is contrasted with traditional affinity-matching functions currently employed within negative selection theory, most notably the r-chunks rule (which subsumes the r-contiguous bits rule) and the hamming-distance rule. The performance is characterised by considering the detection rate, false-alarm rate, degree of generalisation and degree of overfitting. The thesis shows that the feature-detection rule is superior to the r-chunks rule and the hamming-distance rule, in that the feature-detection rule requires a much smaller number of detectors to achieve higher detection rates and lower false-alarm rates. The thesis additionally argues that the way in which permutation masks are currently applied within negative selection theory is incorrect and counterproductive, while placing the feature-detection rule within the spectrum of affinity-matching functions currently employed by artificial immune-system (AIS) researchers. / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / Unrestricted
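For reference, the baseline affinity-matching rules that the feature-detection rule is contrasted with can be written compactly over binary strings, as in the sketch below; the feature-detection rule itself is specific to the thesis and is not reproduced here.

```python
def hamming_match(detector: str, antigen: str, threshold: int) -> bool:
    # Hamming-distance rule: the detector is activated when the number of
    # positions at which it agrees with the antigen reaches the threshold.
    agree = sum(1 for d, a in zip(detector, antigen) if d == a)
    return agree >= threshold

def r_chunks_match(chunk: str, position: int, antigen: str) -> bool:
    # r-chunks rule: a detector is a window (position, substring of length r)
    # and is activated when the antigen carries that exact substring there.
    return antigen[position:position + len(chunk)] == chunk

def r_contiguous_match(detector: str, antigen: str, r: int) -> bool:
    # r-contiguous-bits rule (subsumed by r-chunks): activation requires
    # agreement over at least r adjacent positions.
    run = best = 0
    for d, a in zip(detector, antigen):
        run = run + 1 if d == a else 0
        best = max(best, run)
    return best >= r

print(hamming_match("10110", "10011", threshold=3))   # True: agrees in 3 positions
print(r_chunks_match("011", 1, "10110"))              # True: "011" occurs at index 1
print(r_contiguous_match("10110", "10011", r=3))      # False: longest agreeing run is 2
```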
|
47 |
Electronic warfare asset allocation with human-swarm interaction
Boler, William M. 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Finding the optimal placement of receiving assets among transmitting targets in a three-dimensional (3D) space is a complex and dynamic problem that is solved in this work. The placement of assets in R^6 to optimize coverage of transmitting targets requires placement in 3D-spatiality, center frequency assignment, and antenna azimuth and elevation orientation, with respect to power coverage at the receiver without overloading the feed-horn, maintaining sufficient power sensitivity levels, and maintaining terrain constraints. Further complexities result from the human user having necessary and time-constrained knowledge of real-world conditions unknown to the problem space, such as enemy positions or special targets, resulting in the requirement for the user to interact with the solution convergence in some fashion. Particle Swarm Optimization (PSO) approaches this problem with accurate and rapid approximation to the electronic warfare asset allocation problem (EWAAP), with near-real-time solution convergence, using a linear combination of weighted components for fitness comparison and particles representative of asset configurations. Finally, optimizing the weights for the fitness function requires the use of unsupervised machine learning techniques to reduce the complexity of assigning a fitness function, using a Meta-PSO. The result of this work implements a more realistic asset allocation problem with directional antenna and complex terrain constraints that is able to converge on a solution on average in 488.7167 ± 15.6580 ms and has a standard deviation of 15.3901 for asset positions across solutions.
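A hedged sketch of the weighted linear-combination fitness idea is given below; the component functions, weights and configuration fields are invented for illustration and are not the EWAAP formulation used in the thesis.

```python
def ewaap_fitness(config, weights, components):
    """Illustrative linear-combination fitness for an asset configuration:
    each component scores one concern (coverage, sensitivity, overload,
    terrain) in [0, 1] and the weights set their relative importance."""
    return sum(w * comp(config) for w, comp in zip(weights, components))

# Hypothetical component functions; a real system would compute these from
# antenna models, link budgets and terrain data.
def coverage(config):    return config.get("covered_targets", 0) / config.get("targets", 1)
def sensitivity(config): return 1.0 if config.get("rx_power_dbm", -120) > -90 else 0.0
def no_overload(config): return 0.0 if config.get("feedhorn_overloaded", False) else 1.0
def terrain_ok(config):  return 1.0 if config.get("line_of_sight", True) else 0.0

candidate = {"covered_targets": 7, "targets": 10, "rx_power_dbm": -85,
             "feedhorn_overloaded": False, "line_of_sight": True}
score = ewaap_fitness(candidate, weights=(0.4, 0.3, 0.2, 0.1),
                      components=(coverage, sensitivity, no_overload, terrain_ok))
print(score)  # higher is better under this toy scoring
```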
|
48 |
Fusion for Object Detection
Wei, Pan 10 August 2018 (has links)
In a three-dimensional world, for perception of the objects around us, we not only wish to classify them, but also to know where these objects are. The task of object detection combines both classification and localization: in addition to predicting the object category, we also predict where the object is from sensor data. As it is not known ahead of time how many objects of interest are in the sensor data, or where they are, the output size of object detection may change, which makes the object detection problem difficult. In this dissertation, I focus on the task of object detection and use fusion to improve detection accuracy and robustness. More specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses the information from the sources themselves to help assess the credibility of each source. I apply the proposed measure of conflict to fuse independent sources of tracking information from various stereo cameras. In addition, I propose a computational intelligence system for more accurate object detection in real time. The proposed system uses online image augmentation before the detection stage during testing and fuses the detection results afterwards. The fusion method is computationally intelligent, based on dynamic analysis of agreement among inputs. Compared with other fusion operations such as average, median and non-maxima suppression, the proposed method produces more accurate results in real time. I also propose a multi-sensor fusion system, which incorporates the advantages and mitigates the disadvantages of each type of sensor (LiDAR and camera). Generally, a camera can provide more texture and color information, but it cannot work in low visibility; LiDAR, on the other hand, can provide accurate point positions and work at night or in moderate fog or rain. The proposed system uses the advantages of both camera and LiDAR and mitigates their disadvantages. The results show that, compared with LiDAR or camera detection alone, the fused result can extend the detection range up to 40 meters with increased detection accuracy and robustness.
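As a simple illustration of fusing detections from multiple sources, the sketch below groups boxes by overlap and replaces each group with a confidence-weighted average box; it stands in for the general idea only and is not the computationally intelligent agreement analysis proposed in the dissertation.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(detections, iou_thresh=0.5):
    """Group detections from different sources by overlap and replace each
    group with a confidence-weighted average box (a simple alternative to
    non-maxima suppression, which keeps only the highest-scoring box)."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    fused, used = [], [False] * len(detections)
    for i, (box, score) in enumerate(detections):
        if used[i]:
            continue
        group = [(box, score)]
        used[i] = True
        for j in range(i + 1, len(detections)):
            if not used[j] and iou(box, detections[j][0]) >= iou_thresh:
                group.append(detections[j])
                used[j] = True
        total = sum(s for _, s in group)
        avg = tuple(sum(b[k] * s for b, s in group) / total for k in range(4))
        fused.append((avg, max(s for _, s in group)))
    return fused

# Two sources report nearly the same object, plus one distant low-score box.
dets = [((10, 10, 50, 60), 0.9), ((12, 11, 52, 58), 0.8), ((200, 200, 220, 230), 0.3)]
print(fuse_detections(dets))
```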
|
49 |
Computational intelligence for safety assurance of cooperative systems of systems
Kabir, Sohag, Papadopoulos, Y. 29 March 2021 (has links)
Yes / Cooperative Systems of Systems (CSoS), including autonomous systems (AS) such as autonomous cars and related smart traffic infrastructures, form a new technological frontier owing to their enormous economic and societal potential in various domains. CSoS are often safety-critical systems and are therefore expected to have a high level of dependability. Due to the open and adaptive nature of CSoS, the conventional methods used to provide safety assurance for traditional systems cannot be applied directly to these systems. Potential configurations and scenarios during the evolving operation are infinite and cannot be exhaustively analysed to provide guarantees a priori. This paper presents a novel framework for dynamic safety assurance of CSoS, which integrates design-time models and runtime techniques to provide continuous assurance for a CSoS and its systems during operation. / Dependability Engineering Innovation for Cyber Physical Systems (DEIS) H2020 Project under Grant 732242.
|
50 |
Development of a fuzzy system design strategy using evolutionary computation
Bush, Brian O. January 1996 (has links)
No description available.
|