DESIGNING WATERSHED-SCALE STRUCTURAL BEST MANAGEMENT PRACTICES USING EVOLUTIONARY ALGORITHMS TO ACHIEVE WATER QUALITY GOALS
Kaini, Prakash D.
01 December 2010
Water quality has been a major concern in the United States and elsewhere because of its impact on people's daily lives and on the environment. There are two main sources of water pollution: point sources and non-point sources, which are differentiated by their mode of generation. Pollution from point sources has been effectively controlled by the implementation of the National Pollutant Discharge Elimination System (NPDES) program, under the auspices of the 1972 Clean Water Act (CWA). However, a large portion of the nation's water remains polluted, mainly due to non-point sources of pollution. The Total Maximum Daily Load (TMDL) program within the CWA regulates water pollution by controlling both point and non-point sources. Structural and non-structural Best Management Practices (BMPs) have been recognized as effective measures for controlling non-point sources of pollution, but in most cases these practices are designed on a site-by-site basis. The objective of this research is to develop methodologies that can be used to design structural BMPs as measures for controlling non-point sources of pollution (i.e. sediment and nutrients) at a larger spatial scale, that of a watershed. The Soil and Water Assessment Tool (SWAT), a semi-distributed model that simulates hydrological processes, has been selected for this study. The model parameters most sensitive with respect to discharge and sediment yield are identified by a parameter sensitivity analysis; Latin Hypercube Sampling One-at-a-Time (LH-OAT), a global sensitivity analysis method, has been adopted for this purpose. SWAT has then been calibrated using these parameters to accurately simulate runoff and sediment yields from the watershed, with an automatic calibration model that uses a genetic algorithm to optimize the parameter values. In addition, an uncertainty analysis of the selected parameters has been conducted to assess the robustness of the model's predictions.
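The LH-OAT procedure combines Latin hypercube stratification with one-at-a-time perturbations around each sampled point. A minimal sketch of that idea follows, assuming a scalar model output (a toy stand-in for a SWAT run) and a relative-partial-effect ranking; the function names, the perturbation fraction, and the exact effect formula are illustrative assumptions, not code from the thesis.

```python
import random

def lh_oat_sensitivity(model, bounds, n_points=10, frac=0.05, seed=1):
    """Rank parameters by an LH-OAT-style sensitivity measure (sketch).

    model:  callable mapping a parameter list to a scalar output
            (here a stand-in for a SWAT run producing e.g. sediment yield).
    bounds: list of (low, high) ranges, one per parameter.
    """
    rng = random.Random(seed)
    p = len(bounds)
    # Latin hypercube: stratify each parameter range into n_points bins,
    # draw one value per bin, then shuffle bins independently per parameter.
    strata = []
    for lo, hi in bounds:
        cells = [lo + (hi - lo) * (i + rng.random()) / n_points
                 for i in range(n_points)]
        rng.shuffle(cells)
        strata.append(cells)
    effects = [0.0] * p
    for k in range(n_points):
        base = [strata[j][k] for j in range(p)]
        y0 = model(base)
        # One-at-a-time: perturb each parameter by a small fraction and
        # accumulate the normalized partial effect on the output.
        for j in range(p):
            pert = list(base)
            pert[j] *= (1.0 + frac)
            y1 = model(pert)
            denom = (abs(y0) + abs(y1)) / 2.0 or 1.0
            effects[j] += abs(100.0 * (y1 - y0) / denom) / frac
    return [e / n_points for e in effects]
```

For a toy model y = 10a + b, the first parameter's average effect dominates, as expected.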
Both single-objective and multi-objective Optimal Control Models (OCMs) have been developed by coupling SWAT with evolutionary algorithms, optimizing the types, sizes, and locations of structural BMPs to achieve the desired treatment goals (the reduction of sediment and nutrient yields) at the watershed outlet. The single-objective OCM optimizes BMPs for a user-defined level of the treatment goals, while the multi-objective OCM simultaneously optimizes BMPs for various degrees of treatment goals. The state-of-the-art multi-objective evolutionary algorithm used in the study is the Non-dominated Sorting Genetic Algorithm II (NSGA-II). In addition, the single-objective OCM is applied to control the increased sediment yield projected under future climate scenarios. In conclusion, this research has developed methodologies that can cost-effectively achieve water quality goals in agricultural watersheds by integrating a contemporary hydrological model with evolutionary algorithms.
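In outline, a single-objective OCM of this kind can be sketched as a genetic algorithm over per-subbasin BMP assignments, with the treatment goal enforced through a penalty. The BMP catalogue, costs, trapping fractions, penalty weight, and the `sediment_at_outlet` placeholder (which a real OCM would replace with a SWAT simulation) are all illustrative assumptions:

```python
import random

# Hypothetical BMP catalogue: (name, cost, sediment-trapping fraction).
BMPS = [("none", 0.0, 0.00), ("grassed_waterway", 1.0, 0.30),
        ("detention_pond", 3.0, 0.60), ("sediment_basin", 5.0, 0.80)]

def sediment_at_outlet(choice, loads):
    """Placeholder for a SWAT run: each subbasin's load is reduced by
    the trapping fraction of its assigned BMP, then summed."""
    return sum(load * (1.0 - BMPS[c][2]) for c, load in zip(choice, loads))

def fitness(choice, loads, target):
    cost = sum(BMPS[c][1] for c in choice)
    shortfall = max(0.0, sediment_at_outlet(choice, loads) - target)
    return cost + 100.0 * shortfall      # heavy penalty for missing the goal

def evolve_bmps(loads, target, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(loads)
    popn = [[rng.randrange(len(BMPS)) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, loads, target))
        nxt = popn[: pop // 2]                       # elitist truncation
        while len(nxt) < pop:
            a, b = rng.sample(popn[: pop // 2], 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(n)] = rng.randrange(len(BMPS))
            nxt.append(child)
        popn = nxt
    return min(popn, key=lambda c: fitness(c, loads, target))
```

With a few subbasins and a sediment target, the GA settles on a cheap combination of practices that meets the goal; swapping the penalized cost for a (cost, sediment) pair and NSGA-II ranking gives the multi-objective variant.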
Filomeno Coelho, Rajan
01 April 2004
Although many numerical methods have been proposed in the literature to optimize mechanical structures at the final stage of the design process, few designers use these tools from the first stage. Yet a minor modification at the first step can significantly change the global performance of the structure. Usually, during the initial stage, models are based on theoretical and empirical equations, which are often characterized by mixed variables: continuous (e.g. geometrical dimensions), discrete (e.g. the cross-section of a beam available in a catalogue) and/or integer (e.g. the number of layers in a composite material). Furthermore, the functions involved may be non-differentiable, or even discontinuous, so classical algorithms based on the computation of sensitivities are no longer applicable. Consequently, the most widespread meta-heuristic methods for solving these problems are evolutionary algorithms (EAs), which work as follows: the best individuals among an initial population of randomly generated potential solutions are favoured and combined (by specific operators such as crossover and mutation) to create potentially better individuals at the next generation. The creation of new generations is repeated until convergence is reached. The ability of EAs to explore the design space widely is useful for solving single-objective unconstrained optimization problems, because it generally prevents the search from getting trapped in a local optimum, but it is also well known that EAs do not perform very efficiently in the presence of constraints. Furthermore, in many industrial applications, multiple objectives are pursued together. Therefore, to take the constrained and multicriteria aspects of optimization problems into account in EAs, a new method called PAMUC (Preferences Applied to MUltiobjectivity and Constraints) is proposed in this dissertation. First, the user assigns weights to the m objectives.
Then, an additional objective function is built by linearly aggregating the normalized constraints. Finally, a multicriteria decision aid method, PROMETHEE II, is used to rank the individuals of the population according to the m+1 objectives. PAMUC has been validated on standard multiobjective test cases, as well as on the parametric optimization of the purge valve and the feed valve of the Vinci engine, both designed by Techspace Aero for the Ariane 5 launcher. The second step of the thesis consists of incorporating an inference engine into the optimization scheme in order to take expert rules into account. First, information about conception and design is collected from engineers with expertise in a specific domain. In the case of the valves designed by Techspace Aero, the expert rules are rules of thumb based on experience, related to leakage, the choice of materials for the different parts of the structure, etc. Then, each potential design generated by the EA is tested and repaired (with a given probability) according to the user-defined rules. This approach appears very effective in reducing the size of the search space and guiding the EA towards the global feasible optimum.
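The ranking step can be illustrated as follows: the summed normalized constraint violations are appended as an (m+1)-th objective, and PROMETHEE II net flows order the population. This sketch uses the simple "usual" preference function and assumes the violation objective receives its own weight; the preference functions and normalization used in the actual method may differ.

```python
def pamuc_rank(objectives, violations, weights):
    """Rank candidate designs in the spirit of PAMUC (hedged sketch).

    objectives: list of m-vectors, one per candidate (all minimized).
    violations: per-candidate summed normalized constraint violations,
                appended as an (m+1)-th minimized objective.
    weights:    m+1 positive weights summing to 1.
    Returns candidate indices, best first, by PROMETHEE II net flow.
    """
    table = [list(o) + [v] for o, v in zip(objectives, violations)]
    n = len(table)

    def pref(a, b):
        # "Usual" preference function: full preference when strictly better.
        return sum(w for w, x, y in zip(weights, a, b) if x < y)

    flows = []
    for i in range(n):
        # Net flow = average of (how much i beats j) - (how much j beats i).
        phi = sum(pref(table[i], table[j]) - pref(table[j], table[i])
                  for j in range(n) if j != i) / (n - 1)
        flows.append(phi)
    return sorted(range(n), key=lambda i: -flows[i])
```

A dominated, constraint-violating candidate receives the lowest net flow and is ranked last.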
Carter, Richard G.
Poker has become the subject of an increasing amount of study in the computational intelligence community. The element of imperfect information presents new and greater challenges than those previously posed by games such as checkers and chess. Advances in computer poker have great potential, since reasoning under conditions of uncertainty is typical of many real-world problems. To date, the focus of computer poker research has centred on the development of ring-game players for limit Texas hold’em. For a computer to compete in the most prestigious poker events, however, it will be required to play in a tournament setting with a no-limit betting structure. This thesis is the first academic attempt to investigate the underlying dynamics of successful no-limit tournament poker play. Professional players have proffered advice in the non-academic poker literature on correct strategies for tournament play, and this study seeks to empirically validate their suggestions in a simplified no-limit Texas hold’em tournament framework. Using exhaustive simulations, we first assess the hypothesis that a strategy incorporating information about game-specific factors performs better than one founded on hand strength knowledge alone. Specifically, we demonstrate that the use of information pertaining to one’s seating position, the opponents’ prior actions, the stage of the tournament, and one’s chip stack size all contribute towards a statistically significant improvement in the number of tournaments won. In extending the research to combine all factors, we explain the limitations of the exhaustive simulation approach and introduce evolutionary algorithms as a method of searching the strategy space. We then test the hypothesis that a strategy combining information from all the aforementioned factors performs better than one employing only a single factor.
We show that an evolutionary algorithm is successfully able to resolve conflicting signals from the specified factors, and that the resulting strategies are statistically stronger than those previously discovered. Our research continues with an analysis of the results, as we interpret them in the context of poker strategy. We compare our findings to poker authors’ recommendations, and conclude with a discussion on the many possible extensions to this work.
Thomas, Peter James
28 July 2003
Robot soccer provides a fertile environment for the development of artificial intelligence techniques. Robot control requires high-speed lower-level reactive layers as well as higher-level deliberative functions. This thesis focuses on a number of aspects of the robot soccer arena. Topics covered include boundary avoidance strategies, vision detection and the application of evolutionary learning to find fuzzy controllers for the control of mobile robots. A three-input, two-output controller, using two angles and a distance as inputs and producing two wheel-velocity outputs, was developed using evolutionary learning. Current wheel velocities were excluded from the input. The controller produced was a coarse control permitting only forward- or reverse-facing impact with the ball. A five-input controller was then developed which expanded upon the three-input model by including the current wheel velocities as inputs; it allowed both forward- and reverse-facing impacts with the ball. A five-input hierarchical three-layer model was developed to reduce the number of rules to be learnt by an evolutionary algorithm; its performance was the same as that of the five-input model. Fuzzy clustering of evolved paths was limited by the information available from the paths: the information was sparse in many areas and did not produce a controller that could be used to control the robots. Research was also conducted on the derivation of simple obstacle avoidance strategies for robot soccer, and a new decision-region method for colour detection in the UV colour map was developed to enable better detection of the robots by an overhead vision system. Experimental observations are given.
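For readers unfamiliar with fuzzy control, a toy controller of the same general shape (an angle and a distance in, two wheel velocities out) might look like the following zero-order Sugeno sketch. The membership functions and hand-written rules are purely illustrative assumptions; in the thesis the rule bases are found by evolutionary learning, not written by hand.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp(x, a, b):
    """Linear ramp from 0 at a to 1 at b, clamped to [0, 1]."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def fuzzy_wheel_speeds(angle, distance):
    """Toy zero-order Sugeno controller: angle in degrees (negative =
    ball to the left), distance in cm; speeds normalised to [0, 1]."""
    left, ahead, right = (tri(angle, -180, -90, 0),
                          tri(angle, -90, 0, 90),
                          tri(angle, 0, 90, 180))
    far = ramp(distance, 10, 60)
    near = 1.0 - far
    # rule -> (firing strength, (left wheel speed, right wheel speed))
    rules = [(min(ahead, far),  (1.0, 1.0)),   # charge the ball
             (min(ahead, near), (0.3, 0.3)),   # creep up on it
             (left,             (0.2, 1.0)),   # turn toward the left
             (right,            (1.0, 0.2))]   # turn toward the right
    total = sum(w for w, _ in rules) or 1.0
    vl = sum(w * c[0] for w, c in rules) / total
    vr = sum(w * c[1] for w, c in rules) / total
    return vl, vr
```

An evolutionary learner would tune the membership breakpoints and the rule consequents rather than keeping these hand-picked constants.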
Adaptive Techniques for Enhancing the Robustness and Performance of Speciated PSOs in Multimodal Environments
Bird, Stefan Charles
January 2008
This thesis proposes several new techniques to improve the performance of speciated particle swarms in multimodal environments. We investigate how these algorithms can become more robust and adaptive, easier to use and able to solve a wider variety of optimisation problems. We then develop a technique that uses regression to vastly improve an algorithm's convergence speed without requiring extra evaluations. Speciation techniques play an important role in particle swarms. They allow an algorithm to locate multiple optima, providing the user with a choice of solutions. Speciation also provides diversity preservation, which can be critical for dynamic optimisation. By increasing diversity and tracking multiple peaks simultaneously, speciated algorithms are better able to handle the changes inherent in dynamic environments. Speciation algorithms often require the user to specify a parameter that controls how species form. This is a major drawback, since this knowledge may not be available a priori, and if the parameter is incorrectly set the algorithm's performance is likely to be severely degraded. We propose using a time-based measure to control speciation, allowing the algorithm to define species far more adaptively, using the population's characteristics and behaviour to control membership. Two new techniques presented in this thesis, ANPSO and ESPSO, use time-based convergence measures to define species. These methods are shown to be robust while still providing highly competitive performance; both algorithms effectively optimised all of our test functions without requiring any tuning. Speciated algorithms are ideally suited to optimising dynamic environments; however, the complexity of these environments makes them far more difficult to design algorithms for. To increase an algorithm's performance it is necessary to determine in what ways it should be improved.
While all performance metrics allow optimisation techniques to be compared, they cannot show how to improve an algorithm. Until now this has been done largely by trial and error, which is extremely inefficient, just as it is inefficient to try to improve a program's speed without first profiling it. This thesis proposes a new metric that exclusively measures convergence speed. We show that an algorithm can be profiled by correlating its performance as measured by multiple metrics. By combining these two techniques, we can obtain far better insight into how best to improve an algorithm. Using this information, we then propose a local convergence enhancement that greatly increases performance by actively estimating the location of an optimum. The enhancement uses regression to fit a surface to the peak, guiding the search by estimating the peak's true location. By incorporating this technique, the algorithm is able to use the information contained within the fitness landscape far more effectively. We show that by combining the regression with an existing speciated algorithm, we are able to vastly improve the algorithm's performance. This technique will greatly enhance the utility of PSO on problems where fitness evaluations are expensive, or that require fast reaction to change.
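The regression idea can be illustrated in one dimension: fit a parabola through samples near the best-known point and jump to its vertex. The thesis fits a surface to the peak in higher dimensions; this exact three-point fit is only a 1-D analogue, offered as a sketch.

```python
def parabola_vertex(p1, p2, p3):
    """Estimate a peak location by fitting a parabola through three
    sampled (x, fitness) points and returning the x of its vertex.

    Returns None when the points are collinear (no curvature to exploit).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Lagrange-style closed form for a*x^2 + b*x + c through the points.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1)
         + x1 * x1 * (y2 - y3)) / denom
    if a == 0:
        return None
    return -b / (2.0 * a)          # vertex of the fitted parabola
```

For samples drawn from y = -(x - 2)^2 + 5, the vertex estimate recovers x = 2 exactly; in a speciated PSO this estimate would be evaluated as a candidate position for the species seed.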
Imitation learning has been studied from a large range of disciplines, including adaptive robotics, where the focus is often on how robots can learn tasks by imitating experts. In order to build robots able to imitate, a number of problems must be solved, including: How does the robot know when and what to imitate? How does the robot link the recognition of observed actions to the execution of the same actions? This thesis presents an approach using unsupervised imitation, where artificial evolution is used to find solutions to these problems. The approach is tested in a number of experiments where robots are evolved to solve navigation tasks of varying difficulty. Two sets of experiments are run for each task. In the first set the robots are trained without any demonstrator present. The second set is identical to the first except for the presence of a demonstrator, who is present at the beginning of training and thereafter removed. The robots are not programmed to imitate the demonstrator but are only instructed to solve the navigation tasks. By comparing the performance of the robots in the two sets, the impact of the demonstrator is investigated. The results show that the robots evolved with a demonstrator need less training time than the robots evolved without one, except when the task is easy to solve, in which case the demonstrator seems to have no effect on the robots' performance. It is concluded that evolved robots are able to imitate demonstrators even when not explicitly programmed to follow them.
Little in-depth attention has been paid to the relationship between fitness evaluation in evolutionary algorithms and reputation mechanisms in multi-agent systems, yet relating the two would open the way to implementing distributed evolutionary systems via multi-agent architectures. Our investigation concentrates on the effectiveness with which social selection, in the form of reputation, can replace direct fitness observation as the selection bias in an evolutionary multi-agent system. We do this in two stages. In the first, we implement a peer-to-peer, adaptive Genetic Algorithm (GA) in which agents act as individual GAs that, in turn, evolve dynamically themselves in real time, using the traditional evolutionary operators of fitness-based selection, crossover and mutation. In the second stage, we replace the fitness-based selection operator with a reputation-based one, in which agents choose their mates based on the collective past experiences of themselves and their peers. Our investigation shows that this simple model of distributed reputation can be successful as the evolutionary drive in such a system, exhibiting practically identical performance and scalability to direct fitness observation. Further, we discuss the effect of noise (in the form of "defective" agents) in both models, and show that the reputation-based model is significantly better at identifying the defective agents, demonstrating an increased level of resistance to noise.
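A hedged sketch of what reputation-based mate selection might look like: each agent's view of a candidate blends its own past experiences with its peers' pooled reports, and mates are drawn roulette-wheel style in proportion to that score. The data layout, the 0.5 prior for unknown candidates, and the equal pooling of own and peer reports are illustrative assumptions, not details from the system described above.

```python
import random

def reputation(agent, candidate, reports):
    """Pool the agent's own experiences of a candidate with its peers'
    reports; unknown candidates fall back to a neutral 0.5 prior."""
    own = reports.get((agent, candidate), [])
    peer = [r for (who, whom), rs in reports.items()
            if whom == candidate and who != agent for r in rs]
    pool = own + peer
    return sum(pool) / len(pool) if pool else 0.5

def choose_mate(agent, candidates, reports, rng=random):
    """Roulette-wheel mate choice biased by reputation rather than by
    directly observed fitness."""
    scores = [max(reputation(agent, c, reports), 1e-6) for c in candidates]
    pick = rng.random() * sum(scores)
    for c, s in zip(candidates, scores):
        pick -= s
        if pick <= 0:
            return c
    return candidates[-1]
```

Replacing `reputation` with a direct fitness lookup recovers the first-stage, fitness-based selection operator, which is what makes the two models directly comparable.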
Patton, Robert, Schuman, Catherine, Kulkarni, Shruti, Parsa, Maryam, Mitchell, J. P., Haas, N. Q., Stahl, Christopher, Paulissen, Spencer, Date, Prasanna, Potok, Thomas, Sneider, Shay
27 July 2021
Neuromorphic computing has many opportunities in future autonomous systems, especially those that will operate at the edge. However, there are relatively few demonstrations of neuromorphic implementations on real-world applications, partly because of the limited availability of neuromorphic hardware and software, but also because of the lack of an accessible demonstration platform. In this work, we propose utilizing the F1Tenth platform as an evaluation task for neuromorphic computing. F1Tenth is a competition in which one-tenth-scale cars compete in an autonomous racing task; there are significant open-source resources in both software and hardware for realizing this task. We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for deployment on neuromorphic hardware to perform autonomous racing, and we present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
01 January 2006
In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the simultaneous optimization of multiple, often conflicting, objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches for solving multiobjective optimization problems typically scalarize the multiple objectives into a single objective, transforming the original multiobjective problem formulation into a single-objective optimization problem with a single solution. The drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, as is the case in real-world environments. In such environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values, which makes the development of effective solution techniques that generate Pareto optimal solutions even more challenging than in the deterministic case. Furthermore, many real-world problems involve complicated, "black-box" objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial.
This research proposes two new multiobjective evolutionary algorithms (MOEAs), the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both algorithms outperform the improved nondominated sorting genetic algorithm (NSGA-II), widely considered the benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
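As an illustration of why new dominance concepts are needed under noise, the following sketch treats candidate a as dominating candidate b only when a's sample mean is not significantly worse on any minimized objective and is significantly better on at least one, using a pooled standard-error margin. This is an illustrative significance-based test, not SPGA's actual dominance definition.

```python
from statistics import mean, stdev

def stochastically_dominates(a_samples, b_samples, z=1.0):
    """Noise-aware Pareto dominance (hedged sketch).

    a_samples, b_samples: per-objective lists of repeated noisy
    evaluations (all objectives minimized).  z scales the pooled
    standard-error margin used to call a difference significant.
    """
    strictly_better = False
    for a, b in zip(a_samples, b_samples):
        ma, mb = mean(a), mean(b)
        se = ((stdev(a) ** 2 / len(a)) + (stdev(b) ** 2 / len(b))) ** 0.5
        if ma > mb + z * se:       # significantly worse on this objective
            return False
        if ma < mb - z * se:       # significantly better on this one
            strictly_better = True
    return strictly_better
```

With crisp single evaluations this collapses to ordinary Pareto dominance; the margin simply keeps sampling noise from flipping the comparison.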