  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

A particle swarm optimization approach for tuning of SISO PID control loops

Pillay, Nelendran January 2008 (has links)
Thesis submitted in compliance with the requirements for the Master's Degree in Technology: Electrical Engineering - Light Current, Durban University of Technology, Department of Electronic Engineering, 2008. / Linear control systems can be easily tuned using classical tuning techniques such as the Ziegler-Nichols and Cohen-Coon tuning formulae. Empirical studies have found that these conventional tuning methods result in an unsatisfactory control performance when they are used for processes experiencing the negative destabilizing effects of strong nonlinearities. It is for this reason that control practitioners often prefer to tune most nonlinear systems using trial and error tuning, or intuitive tuning. A need therefore exists for the development of a suitable tuning technique that is applicable for a wide range of control loops that do not respond satisfactorily to conventional tuning. Emerging technologies such as Swarm Intelligence (SI) have been utilized to solve many non-linear engineering problems. Particle Swarm Optimization (PSO), developed by Eberhart and Kennedy (1995), is a sub-field of SI and was inspired by swarming patterns occurring in nature such as flocking birds. Each individual exchanges its previous experience with the rest of the swarm, so knowledge of the "best position" attained by any individual becomes globally known. In the study, the problem of identifying the PID controller parameters is considered an optimization problem, and an attempt has been made to determine the PID parameters employing the PSO technique. A wide range of typical process models commonly encountered in industry is used to assess the efficacy of the PSO methodology. Comparisons are made between the PSO technique and other conventional methods using simulations and real-time control. / National Research Foundation
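The optimization loop the abstract describes can be sketched as follows. This is a minimal global-best PSO searching over (Kp, Ki, Kd) against a toy first-order plant; the plant model, gain bounds, and PSO coefficients are illustrative assumptions, not the benchmark processes or settings from the thesis.

```python
import random

def simulate_pid(gains, setpoint=1.0, dt=0.05, steps=200):
    """Integral-of-squared-error cost of a PID loop on a toy first-order
    plant dy/dt = (-y + u) / tau (a stand-in process, not one of the
    thesis's benchmark models)."""
    kp, ki, kd = gains
    tau = 1.0
    y, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u) / tau
        cost += err * err * dt
    if cost != cost or cost > 1e9:   # diverged (nan/inf): heavily penalise
        return 1e9
    return cost

def pso_tune(n_particles=20, iters=40, seed=1):
    """Minimal global-best PSO over (Kp, Ki, Kd) in [0, 5]^3."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 5) for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [simulate_pid(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(5.0, max(0.0, pos[i][d] + vel[i][d]))
            c = simulate_pid(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

The divergence guard matters: large derivative gains destabilise the discretised loop, and penalising those runs lets the swarm steer back toward feasible gains.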
102

Nature inspired computational intelligence for financial contagion modelling

Liu, Fang January 2014 (has links)
Financial contagion refers to a scenario in which small shocks, which initially affect only a few financial institutions or a particular region of the economy, spread to the rest of the financial sector and other countries whose economies were previously healthy. This resembles the "transmission" of a medical disease. Financial contagion happens at both the domestic and the international level. At the domestic level, the failure of a domestic bank or financial intermediary typically triggers transmission by defaulting on inter-bank liabilities, selling assets in a fire sale, and undermining confidence in similar banks. An example of this phenomenon is the failure of Lehman Brothers and the subsequent turmoil in the US financial markets. International financial contagion, the transmission of financial crises across financial markets, happens in both advanced and developing economies. Within the current globalised financial system, with large volumes of cash flow and cross-regional operations of large banks and hedge funds, financial contagion usually happens simultaneously among domestic institutions and across countries. There is no conclusive definition of financial contagion; most research papers study contagion by analyzing the change in the variance-covariance matrix during the period of market turmoil. King and Wadhwani (1990) first test the correlations between the US, UK and Japan during the US stock market crash of 1987. Boyer (1997) finds significant increases in correlation during financial crises, reinforcing a definition of financial contagion as correlation changing during the crash period. Forbes and Rigobon (2002) give a definition of financial contagion. In their work, the term interdependence is used as the alternative to contagion. They claim that for the period they study, there is no contagion but only interdependence. Interdependence leads to common price movements during periods both of stability and turmoil.
In the past two decades, many studies (e.g. Kaminsky et al., 1998; Kaminsky, 1999) have developed early warning systems focused on the origins of financial crises rather than on financial contagion. Further authors (e.g. Forbes and Rigobon, 2002; Caporale et al., 2005), on the other hand, have focused on studying contagion or interdependence. In this thesis, an overall mechanism is proposed that simulates the characteristics of a crisis propagating through contagion. Within that scope, a new co-evolutionary market model is developed, where some of the technical traders change their behaviour during crisis to transform into herd traders, making their decisions based on market sentiment rather than underlying strategies or factors. The thesis focuses on the transformation of market interdependence into contagion and on the contagion effects. The author first builds a multi-national platform that allows different types of players to trade, implementing their own rules and considering information from the domestic and a foreign market. Traders' strategies and the performance of the simulated domestic market are trained using historical prices on both markets, optimizing the artificial market's parameters through immune-particle swarm optimization (I-PSO) techniques. The author also introduces a mechanism contributing to the transformation of technical into herd traders. A generalized auto-regressive conditional heteroscedasticity-copula (GARCH-copula) model is further applied to calculate the tail dependence between the affected market and the origin of the crisis. That parameter is used in the fitness function for selecting the best solutions within the evolving population of possible model parameters, and therefore in the optimization criteria for contagion simulation.
The overall model is also applied in predictive mode, where the author optimizes the model in the pre-crisis period using data from the domestic market and the crisis-origin foreign market, and then predicts the affected domestic market in the crisis period using data from the foreign market.
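The tail-dependence quantity fed into the fitness function can be illustrated with a crude empirical estimator: the probability that one market sits below its lower q-quantile given that the other does. This is a non-parametric stand-in for illustration only, not the GARCH-copula estimator the thesis actually uses.

```python
def lower_tail_dependence(x, y, q=0.1):
    """Empirical estimate of P(X <= its q-quantile | Y <= its q-quantile),
    a crude stand-in for the copula-based lower-tail-dependence
    coefficient used in the thesis's fitness function."""
    n = len(x)
    qx = sorted(x)[int(q * n)]          # empirical q-quantile of X
    qy = sorted(y)[int(q * n)]          # empirical q-quantile of Y
    hits_y = [i for i in range(n) if y[i] <= qy]
    if not hits_y:
        return 0.0
    joint = sum(1 for i in hits_y if x[i] <= qx)
    return joint / len(hits_y)
```

Two markets that crash together score near 1, while markets whose extremes never coincide score near 0, which is exactly the signal a contagion-calibrated fitness function needs.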
103

Musical abstractions for multi-robot coordination

Santos Fernandez, Maria Teresa 27 May 2016 (has links)
This work presents a new approach to human-swarm interactions, a discipline which addresses the problem of how a human operator can influence the behavior of large groups of robots, providing high-level information understandable by the team. While there are potential advantages to introducing a human into the control loop of a robot swarm, how the human should be incorporated is not a simple problem. For the intervention of a human operator to be favorable to the performance of the team, the means and form of the information between the human and the robot swarm must be adequately defined: we need to decide which device will be provided to the operator to interact with the swarm and how the information will be shaped so that both the human and the robot team understand it. Coordination of multi-robot systems involves the generation of intricate motion patterns for the individual agents that result in an overall organized movement. We introduce in this thesis a new human-swarm interaction modality based on music theory, a discipline studied for centuries and capable of creating complex sound structures. In particular, we have focused on understanding how we can apply rules and structures from music theory to an operator's input so that each command specifies both the goal location to be visited and the geometry to be adopted by the swarm. We interpret the sequence of locations to be visited by the swarm as a musical melody, identifying each note with a certain location in the robots' workspace. Once the objective path is defined in the form of a melody, we can apply rules from harmony, a discipline of music theory, to create chords that harmonize the input melody. The interest in using these chords lies fundamentally in the fact that they are structured combinations of pitches heard simultaneously. These inherent structures will be used to determine the geometry that should be displayed by the team.
The developed multi-robot control is applied to a team of differential drive mobile robots through an electronic piano.
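The note-to-location and chord-to-geometry idea can be sketched concretely. The note grid, the use of a fixed triad, and the reading of chord intervals as angular offsets below are all invented conventions for illustration; the thesis's actual harmonization rules are richer.

```python
import math

# Hypothetical mapping from note names to workspace waypoints: each note
# of the input melody names a location the swarm should visit.
NOTE_TO_WAYPOINT = {
    "C": (0.0, 0.0), "D": (1.0, 0.0), "E": (2.0, 0.0), "F": (0.0, 1.0),
    "G": (1.0, 1.0), "A": (2.0, 1.0), "B": (1.0, 2.0),
}

# A triad harmonizing the melody note fixes the formation: its three
# pitches are read as angular offsets of three robots around the waypoint
# (an illustrative convention, not the thesis's exact rule).
TRIAD_ANGLES = [0, 120, 240]  # degrees: three robots, evenly spread

def melody_to_plan(melody, radius=0.5):
    """Turn a melody (list of note names) into (waypoint, formation) pairs."""
    plan = []
    for note in melody:
        cx, cy = NOTE_TO_WAYPOINT[note]
        formation = [(cx + radius * math.cos(math.radians(a)),
                      cy + radius * math.sin(math.radians(a)))
                     for a in TRIAD_ANGLES]
        plan.append(((cx, cy), formation))
    return plan
```

Playing the melody C-G on the piano would then send the team to (0, 0) and (1, 1) in turn, adopting a triangular formation at each stop.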
104

Modeling and Optimization of IP Networks Using Computational Intelligence Tools / Modelación y Optimización de Redes IP Usando Herramientas de Inteligencia Computacional

Urrutia Arestizábal, Patricio Alejandro January 2007 (has links)
No description available.
105

Protein-ligand docking and virtual screening based on chaos-embedded particle swarm optimization algorithm

Tai, Hio Kuan January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
106

A Decentralized Strategy for Swarm Robots to Manage Spatially Distributed Tasks

Sheth, Rohit S 27 April 2017 (has links)
Large-scale scenarios such as search-and-rescue operations, agriculture, warehousing, surveillance, and construction consist of multiple tasks to be performed at the same time. These tasks have non-trivial spatial distributions. Robot swarms are envisioned to be efficient, robust, and flexible for such applications. We model this system such that each robot can service a single task at a time; each task requires a specific number of robots, which we refer to as 'quota'; task allocation is instantaneous; and tasks do not have inter-dependencies. This work focuses on distributing robots to spatially distributed tasks of known quotas in an efficient manner. Centralized solutions which guarantee optimality in terms of distance travelled by the swarm exist. Although potentially scalable, they require non-trivial coordination, can be computationally expensive, and may have poor response times as the number of robots, tasks and task quotas increases. For a swarm to efficiently complete tasks with a short response time, a decentralized approach provides better parallelism and scalability than a centralized one. In this work, we study the performance of a weight-based approach which is enhanced to include spatial aspects. In our approach, the robots share a common table that reports the task locations and quotas. Each robot, according to its relative position with respect to task locations, modifies weights for each task and randomly chooses a task to serve. Weights increase for tasks that are closer and have high quota as opposed to tasks which are far away and have low quota. Tasks with higher weights have a higher probability of being selected. This results in each robot having its own set of weights for all tasks. We introduce a distance-bias parameter, which determines how sensitive the system is to relative robot-task locations over task quotas.
We focus on evaluating the distance covered by the swarm, the number of inter-task switches, and the time required to completely allocate all tasks, and study the performance of our approach in several sets of simulated experiments.
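The weighted random selection described above can be sketched as follows. The particular weight formula (quota divided by a distance-biased denominator) is an illustrative choice consistent with the abstract's description, not the exact rule from the thesis.

```python
import random

def task_weights(robot_pos, tasks, beta=1.0):
    """Per-robot task selection probabilities: closer tasks and larger
    quotas weigh more.  `beta` is the distance-bias parameter: beta = 0
    ignores distance entirely, while large beta makes nearby tasks
    dominate.  The weight formula itself is an illustrative assumption."""
    weights = []
    for (tx, ty), quota in tasks:
        dist = ((robot_pos[0] - tx) ** 2 + (robot_pos[1] - ty) ** 2) ** 0.5
        weights.append(quota / (1.0 + beta * dist))
    total = sum(weights)
    return [w / total for w in weights]

def choose_task(robot_pos, tasks, beta=1.0, rng=random):
    """Sample a task index in proportion to its weight."""
    probs = task_weights(robot_pos, tasks, beta)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Because each robot computes its own distances, every robot ends up with its own probability distribution over the shared task table, which is exactly the decentralization the approach relies on.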
107

Dynamic Task Allocation in Robot Swarms with Limited Buffer and Energy Constraints

Mohan, Janani 26 April 2018 (has links)
Area exploration and information gathering are fundamental problems in mobile robotics. Much of the current research in swarm robotics is aimed at developing practical solutions to this problem. Exploring large environments poses three main challenges. Firstly, there is the problem of limited connectivity among the robots. Secondly, each of the robots has a limited battery life, which requires the robots to be recharged whenever they are running low on charge. Lastly, the robots have limited memory to store data. In this work, we mainly focus on the memory and energy constraints of the robot swarm. The memory constraint forces the robots to travel to a centralized data collection center, called the sink, to deposit data each time their memory is full. The energy constraint forces the robots to travel to the charging station, called the dock, to recharge when their battery level is low. However, this navigation plan is inefficient in terms of energy and time. There is additional energy dissipation in depositing data at the centralized sink. Moreover, a substantial amount of time is spent traveling from one end of the arena to the sink owing to the memory constraint. The goal is that the robots perform data gathering in the least time possible with the optimal use of energy. Both the energy and time spent while depositing data at the sink act as an additional overhead cost to this goal. In this work, we propose to study an algorithm to tackle this scenario in a decentralized manner. We implement a dynamic task allocation algorithm which accomplishes the goal of exploration with data gathering by assigning roles to robots based on their memory buffer and energy levels. The algorithm assigns two sets of roles to the entire group of robots, namely: Role A is the data gatherer, a robot which does the task of workspace exploration and data gathering, and Role B is the data relayer, a robot which does the task of data transportation from data gatherers to the sink.
By this division of labor, the robots dynamically decide which role to choose given the contradicting goals of maximizing data gathering and minimizing energy loss. The choice of a robot to perform the task of data gathering or data relaying is the key problem tackled in this work. We study the performance of the algorithm in terms of task distribution, time spent by the robots on each task and data throughput. We analyze the behavior of the robot swarm by varying the energy constraints, timeout parameter as well as strategies for relayer choice. We also test whether the algorithm is scalable.
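The threshold-driven role choice can be sketched as a small state machine. This is a deliberately simplified single-robot view (the thesis's relayers are separate robots that ferry data for gatherers); the thresholds and the recharge-on-relay shortcut are assumptions for illustration.

```python
class SwarmRobot:
    """Toy role switcher: gather data until the buffer fills or the
    battery runs low, then switch to relaying/recharging.  Role names
    follow the abstract's Role A / Role B split; the switching rule is a
    simplified illustration, not the thesis's algorithm."""

    def __init__(self, buffer_cap=100, battery_cap=100, low_battery=20):
        self.buffer_cap = buffer_cap
        self.battery_cap = battery_cap
        self.low_battery = low_battery
        self.buffer = 0
        self.battery = battery_cap
        self.role = "A"          # A: data gatherer, B: data relayer

    def step(self, data_gained=5, energy_cost=2):
        if self.role == "A":
            self.buffer = min(self.buffer_cap, self.buffer + data_gained)
            self.battery -= energy_cost
            # Full buffer or low battery: hand data off / head to the dock.
            if self.buffer >= self.buffer_cap or self.battery <= self.low_battery:
                self.role = "B"
        else:
            self.buffer = 0                  # data delivered toward the sink
            self.battery = self.battery_cap  # assume the dock recharges fully
            self.role = "A"
```

The interesting trade-off the thesis studies lives in exactly this decision point: switching to Role B too eagerly wastes travel energy, while switching too late risks a full buffer or a dead battery far from the dock.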
108

Cognitive smart agents for optimising OpenFlow rules in software defined networks

Sabih, Ann Faik January 2017 (has links)
This research provides a robust solution based on artificial intelligence (AI) techniques to overcome the challenges in Software Defined Networks (SDNs) that can jeopardise the overall performance of the network. The proposed approach, presented in the form of an intelligent agent appended to the SDN network, comprises a new hybrid intelligent mechanism that optimises the performance of SDN based on heuristic optimisation methods under an Artificial Neural Network (ANN) paradigm. Evolutionary optimisation techniques, including Particle Swarm Optimisation (PSO) and Genetic Algorithms (GAs), are deployed to find the set of inputs that gives the maximum performance of an SDN-based network. The ANN model is trained and applied as a predictor of SDN behaviour according to effective traffic parameters. The parameters used in this study include round-trip time and throughput, which were obtained from the flow table rules of each switch. A POX controller and OpenFlow switches, which characterise the behaviour of an SDN, have been modelled with three different topologies. Generalisation of the prediction model has been tested with new raw data that were unseen in the training stage. The simulation results show a reasonably good performance of the network in terms of obtaining a Mean Square Error (MSE) of less than 10⁻⁶. Once the ANN prediction model had been obtained, it was used with the PSO and GA optimisers to achieve the best performance of the SDN-based network. The PSO approach combined with the predicted SDN model was identified as being comparatively better than the GA approach in terms of performance indices and computational efficiency. Overall, this research demonstrates that building an intelligent agent will enhance the overall performance of the SDN network.
Three different SDN topologies have been implemented to study the impact of the proposed approach, with the findings demonstrating a reduction in the packets dropped ratio (PDR) by 28-31%. Moreover, the packets sent to the SDN controller were also reduced by 35-36%, depending on the generated traffic. The developed approach minimised the round-trip time (RTT) by 23% and enhanced the throughput by 10%. Finally, in the event that the SDN controller fails, the optimised intelligent agent can immediately take over control of the entire network.
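The "optimise inputs against a frozen predictor" loop that both the PSO and GA variants run can be sketched with a minimal real-valued GA. The quadratic surrogate below is a toy stand-in for the trained ANN (two normalised traffic parameters, lower predicted RTT is better); population size, operators, and bounds are illustrative assumptions.

```python
import random

def surrogate_rtt(x):
    """Toy stand-in for the trained ANN predictor: predicted round-trip
    time as a function of two normalised traffic parameters (lower is
    better).  The true predictor in the thesis is a trained ANN."""
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def ga_minimise(fitness, pop_size=30, gens=50, seed=2):
    """Minimal real-valued GA: keep the better half as elites, breed the
    rest by blend crossover plus Gaussian mutation, clamp to [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, 0.05)
                     for ai, bi in zip(a, b)]
            children.append([min(1.0, max(0.0, c)) for c in child])
        pop = elite + children
    return min(pop, key=fitness)
```

Swapping `ga_minimise` for a PSO over the same surrogate is the comparison the thesis performs; here the GA converges toward the surrogate's optimum at (0.3, 0.7).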
109

Comparison of Auto-Scaling Policies Using Docker Swarm / Jämförelse av autoskalningspolicies med hjälp av Docker Swarm

Adolfsson, Henrik January 2019 (has links)
When deploying software engineering applications in the cloud there are two similar software components used. These are Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VMs) have also seen an increase in usage as more companies move to solutions in the cloud with services like Amazon Web Services, Google Compute Engine, Microsoft Azure and DigitalOcean. There are also some solutions using auto-scaling, a technique where VMs are commissioned and deployed as load increases in order to increase application performance. As the application load decreases, VMs are decommissioned to reduce costs. In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define a policy where we use a single Container for every Virtual Machine and a policy where we use several Containers per Virtual Machine. To compare the policies we deploy an image serving application and run workloads to test them. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
110

Haptic Shape-Based Management of Robot Teams in Cordon and Patrol

McDonald, Samuel Jacob 01 September 2016 (has links)
There are many current and future scenarios that require teams of air, ground or humanoid robots to gather information in complex and often dangerous environments, where it would be unreasonable or impossible for humans to be physically present [1-6]. The current state of the art involves a single robot being monitored by one or many human operators [7], but a single operator managing a team of autonomous robots is preferred as long as effective and time-efficient management of the team is maintained [8-9]. This is limited by the operator's ability to command actions of multiple robots, be aware of robot states, and respond to less important tasks, while accomplishing a primary objective defined by the application scenario. The operator's ability to multi-task could be improved with the use of a multimodal interface, using both visual and haptic feedback. This thesis investigates the use of haptic feedback in developing intuitive, shape-based interaction to maintain heads-up control and increase an operator's situation awareness (SA) while managing a robot team. In this work, the autonomous behavior of the team is modeled after a patrol and cordon scenario, where the team travels to and surrounds buildings of interest. A novel approach that involves treating the team as a moldable volume is presented, where deformations of this volume correspond to changes in team shape. During surround mode, the operator may explore or manipulate the team shape to create custom formations around a building. A spacing interaction method also allows the operator to adjust how robots are spaced within the current shape. Separate haptic feedback is developed for each method to allow the operator to "feel" the shape or spacing manipulation. During travel mode, the operator chooses desired travel locations and receives feedback to help identify how and where the team travels.
RoTHSim, an experimental robot team haptic simulator, was developed and used as a test bed for single-operator management of a robot team in a multitasking reconnaissance and surveillance scenario. Using RoTHSim, a human subject experiment was conducted with 19 subjects to determine the effects of haptic feedback and task demand difficulty on levels of performance, SA and workload. Results from the experiment suggest that haptic feedback significantly improves operator performance in a reconnaissance task when task demand is higher, but may slightly increase operator workload. Given the experimental setup, these results suggest that haptic feedback may make it easier for the operator to maintain heads-up control of a team of autonomous robots. There were no significant differences in SA scores due to haptic feedback in this study.
