51

Scheduling and Resource Efficiency Balancing: Discrete Species Conserving Cuckoo Search for Scheduling in an Uncertain Execution Environment

Bibiks, Kirils January 2017 (has links)
The main goal of a scheduling process is to decide when and how to execute each of the project's activities. Despite the large variety of scheduling problems studied, the majority of them can be described as generalisations of the resource-constrained project scheduling problem (RCPSP). Because of its wide applicability and challenging difficulty, the RCPSP has attracted a vast amount of attention in the research community, and a great variety of heuristics have been adapted for solving it. Even though these heuristics are structurally different and operate according to diverse principles, they are designed to obtain only one solution at a time. Recent research on RCPSPs has shown that these kinds of problems have complex multimodal fitness landscapes, characterised by wide solution search spaces and the presence of multiple local and global optima. The main goal of this thesis is twofold. Firstly, it presents a variation of the RCPSP that considers optimisation of projects in an uncertain environment, where resources are modelled to adapt to their environment and, as a result, improve their efficiency. Secondly, a modification of a novel evolutionary computation method, Cuckoo Search (CS), is proposed, adapted for solving combinatorial optimisation problems and modified to obtain multiple solutions. To test the proposed methodology, two sets of experiments are carried out. First, the developed algorithm is applied to a real-life software development project. Second, the performance of the algorithm is tested on universal benchmark instances for scheduling problems, modified to take into account the specifics of the proposed optimisation model. The results of both experiments demonstrate that the proposed methodology achieves a competitive level of performance, is capable of finding multiple global solutions, and is applicable to real-life projects.
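As a hedged illustration of the kind of discrete Cuckoo Search the abstract refers to (placeholder fitness and parameters; the thesis's species-conserving niching for multiple solutions is omitted), random swaps can stand in for Lévy flights on activity permutations:

```python
# A minimal, illustrative discrete Cuckoo Search over activity permutations.
# All problem details (fitness, instance size) are placeholders, not from the thesis.
import random

def fitness(perm):
    # Placeholder objective: prefer orderings close to the identity permutation.
    return -sum(abs(a - i) for i, a in enumerate(perm))

def levy_step(perm, strength=2):
    # Discrete analogue of a Levy flight: a few random swaps.
    new = perm[:]
    for _ in range(strength):
        i, j = random.sample(range(len(new)), 2)
        new[i], new[j] = new[j], new[i]
    return new

def cuckoo_search(n_acts=10, n_nests=15, iters=200, p_abandon=0.25):
    nests = [random.sample(range(n_acts), n_acts) for _ in range(n_nests)]
    for _ in range(iters):
        # Lay a cuckoo egg: mutate a random nest, replace a random rival if better.
        egg = levy_step(random.choice(nests))
        j = random.randrange(n_nests)
        if fitness(egg) > fitness(nests[j]):
            nests[j] = egg
        # Abandon the worst fraction of nests and rebuild them randomly.
        nests.sort(key=fitness, reverse=True)
        k = int(p_abandon * n_nests)
        nests[-k:] = [random.sample(range(n_acts), n_acts) for _ in range(k)]
    return nests[0]

print(cuckoo_search())
```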
52

A Feasibility Study of BBP for predicting shear capacity of FRP reinforced concrete beams without stirrups.

Golafshani, E.M., Ashour, Ashraf 18 February 2016 (has links)
Shear failure of concrete elements reinforced with Fiber Reinforced Polymer (FRP) bars is generally brittle, requiring accurate predictions to avoid it. In the last decade, a variety of artificial intelligence-based approaches have been successfully applied to predict the shear capacity of FRP Reinforced Concrete (FRP-RC) beams. In this paper, a new approach, namely biogeography-based programming (BBP), is introduced for predicting the shear capacity of FRP-RC beams based on test results available in the literature. The performance of the BBP model is compared with several shear design equations, two previously developed artificial intelligence models and experimental results. It was found that the proposed model provides the most accurate results in calculating the shear capacity of FRP-RC beams among the considered shear capacity models. The proposed BBP model can also correctly predict the trend of different influencing variables on the shear capacity of FRP-RC beams.
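For context, a hedged sketch of the migration step of biogeography-based optimization (BBO), the search mechanism that biogeography-based programming builds on; this generic real-vector version is an assumption for illustration, not the paper's symbolic model:

```python
# Illustrative BBO migration pass: poor habitats (low-fitness candidates)
# immigrate more features, and better habitats are likelier to export theirs.
import random

def bbo_migrate(population, fitnesses):
    """One migration pass; population is a list of real-valued feature vectors."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    rank = {idx: r for r, idx in enumerate(order)}       # 0 = best habitat
    new_pop = [vec[:] for vec in population]
    for i in range(n):
        lam = (rank[i] + 1) / n          # immigration rate grows with poor rank
        for d in range(len(population[i])):
            if random.random() < lam:
                # Emigration: weight source habitats by how good they are.
                weights = [n - rank[j] for j in range(n)]
                src = random.choices(range(n), weights=weights)[0]
                new_pop[i][d] = population[src][d]
    return new_pop

pop = [[random.uniform(0, 1) for _ in range(4)] for _ in range(6)]
fit = [sum(v) for v in pop]              # placeholder fitness
print(bbo_migrate(pop, fit)[0])
```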
53

Incorporating Design Knowledge into Genetic Algorithm-based White-Box Software Test Case Generators

Makai, Matthew Charles 14 May 2008 (has links)
This thesis shows how to incorporate Unified Modeling Language (UML) sequence diagrams into genetic algorithm-based automated test case generators to increase the code coverage of their resulting test cases. Automated generation of test data through evolutionary testing was proven feasible in prior research studies. In those previous investigations, the metrics used to determine the effectiveness of a test generation method were the percentages of statement and branch coverage achieved. However, the code coverage realized in those preceding studies often converged at suboptimal percentages due to a lack of guidance in conditional statements. This study compares the coverage percentages of 16 different Java programs when test cases are automatically generated with and without incorporating associated UML sequence diagrams. It introduces a tool known as the Evolutionary Test Case Generator, or ETCG, an automatic test case generator based on genetic algorithms that can incorporate sequence diagrams to direct the heuristic search process and facilitate evolutionary testing. When the generator used sequence diagrams, the resulting test cases showed an average improvement of 21% in branch coverage and 8% in statement coverage over test cases produced without using sequence diagrams. / Master of Science
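The fitness signal here is coverage; as a toy, hedged sketch of coverage-driven evolutionary testing (not the ETCG tool, and without the sequence-diagram guidance), a GA can evolve inputs that exercise more branches of a function under test:

```python
# Toy coverage-driven evolutionary testing: evolve an input that hits more
# branches of a function under test. Fitness = number of branches covered.
import random

def under_test(x):
    covered = set()
    if x > 0: covered.add("b1")
    if x % 7 == 0: covered.add("b2")
    if 100 < x < 200: covered.add("b3")
    return covered

def evolve(pop_size=20, gens=50):
    pop = [random.randint(-500, 500) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: len(under_test(x)), reverse=True)
        parents = pop[:pop_size // 2]
        # Mutation-only reproduction keeps the sketch short.
        pop = parents + [p + random.randint(-20, 20) for p in parents]
    return pop[0]

best = evolve()
print(best, under_test(best))
```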
54

Parallelizing a nondeterministic optimization algorithm

D'Souza, Sammy Raymond 01 January 2007 (has links)
This research explores the idea that for certain optimization problems there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to a metaheuristic, Ant Colony Optimization (ACO), to solve the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of both serial and parallel execution are compared using test datasets from TSPLIB.
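A hedged sketch of the parallel pattern the abstract describes, using mpi4py (assumed available) with a simplified random-tour search standing in for the full ACO: each rank searches independently and periodically shares the global best, which is what makes shortcutting, and hence superlinear efficiency, possible:

```python
# Run with: mpiexec -n 4 python tsp_mpi.py  (filename hypothetical)
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
random.seed(rank)                        # independent stream per process

# Rank 0 builds the instance and broadcasts it so every rank solves the same TSP.
cities = comm.bcast([(random.random(), random.random()) for _ in range(30)]
                    if rank == 0 else None, root=0)

def tour_length(tour):
    return sum(((cities[a][0] - cities[b][0]) ** 2 +
                (cities[a][1] - cities[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

best = float("inf")
for it in range(1, 1001):
    best = min(best, tour_length(random.sample(range(30), 30)))
    if it % 100 == 0:
        # Periodically share the global best; a lagging rank could cut work short.
        best = comm.allreduce(best, op=MPI.MIN)

if rank == 0:
    print("best tour length found:", best)
```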
55

Novelty-assisted Interactive Evolution Of Control Behaviors

Woolley, Brian G 01 January 2012 (has links)
The field of evolutionary computation is inspired by the achievements of natural evolution, in which there is no final objective. Yet the pursuit of objectives is ubiquitous in simulated evolution because evolutionary algorithms that can consistently achieve established benchmarks are lauded as successful, thus reinforcing this paradigm. A significant problem is that such objective approaches assume that intermediate stepping stones will increasingly resemble the final objective when in fact they often do not. The consequence is that while solutions may exist, searching for such objectives may not discover them. This problem with objectives is demonstrated through an experiment in this dissertation showing that images discovered serendipitously during interactive evolution in an online system called Picbreeder cannot be rediscovered when they become the final objective of the very same algorithm that originally evolved them. This negative result demonstrates that pursuing an objective limits evolution by selecting offspring only based on the final objective. Furthermore, even when high fitness is achieved, the experimental results suggest that the resulting solutions are typically brittle, piecewise representations that only perform well by exploiting idiosyncratic features in the target. In response to this problem, the dissertation next highlights the importance of leveraging human insight during search as an alternative to articulating explicit objectives. In particular, a new approach called novelty-assisted interactive evolutionary computation (NA-IEC) combines human intuition with a method called novelty search for the first time to facilitate the serendipitous discovery of agent behaviors. In this approach, the human user directs evolution by selecting what is interesting from the on-screen population of behaviors. However, unlike in typical IEC, the user can then request that the next generation be filled with novel descendants, as opposed to only the direct descendants of typical IEC. The result of such an approach, unconstrained by a priori objectives, is that it traverses key stepping stones that ultimately accumulate meaningful domain knowledge. To establish this new evolutionary approach based on the serendipitous discovery of key stepping stones during evolution, this dissertation makes four key contributions: (1) the first establishes the deleterious effects of a priori objectives on evolution; (2) the second introduces the NA-IEC approach as an alternative to traditional objective-based approaches; (3) the third is a proof-of-concept demonstrating that combining human insight with novelty search finds solutions significantly faster and at lower genomic complexities than fully-automated processes, including pure novelty search, suggesting an important role for human users in the search for solutions; and (4) the NA-IEC approach is applied in a challenge domain wherein leveraging human intuition and domain knowledge accelerates the evolution of solutions for the nontrivial octopus-arm control task. The culmination of these contributions demonstrates the importance of incorporating human insights into simulated evolution as a means of discovering better solutions more rapidly than traditional approaches.
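The key mechanism named in the abstract, novelty search, scores behaviors by how different they are from what has already been seen; as a hedged illustration (not Picbreeder or NA-IEC code), the usual sparseness metric averages the distance to the k nearest neighbours in behavior space:

```python
# Minimal novelty (sparseness) metric: mean distance to the k nearest
# neighbours among the current population plus an archive of past behaviors.
def novelty(behaviour, others, k=5):
    dists = sorted(sum((a - b) ** 2 for a, b in zip(behaviour, o)) ** 0.5
                   for o in others)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

population = [(0.1, 0.2), (0.9, 0.8), (0.5, 0.5), (0.4, 0.6), (0.2, 0.1)]
archive = [(0.0, 0.0)]
for b in population:
    print(b, round(novelty(b, [o for o in population if o != b] + archive), 3))
```

Selection then favours high-novelty behaviors instead of high-fitness ones, which is what lets the search traverse stepping stones that an objective would discard.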
56

Analysing electricity markets with evolutionary computation

Nguyen, Duy Huu Manh January 2002 (has links)
The use of electricity in 21st century living has been firmly established throughout most of the world, and correspondingly the infrastructure for production and delivery of electricity to consumers has matured and stabilised. However, due to recent technical and environmental–political developments, the electricity infrastructure worldwide is undergoing major restructuring. The forces driving this reorganisation are a complex interplay of technical, environmental, economic and political factors. The general trend of the reorganisation is a dis–aggregation of the previously integrated functions of generation, transmission and distribution, together with the establishment of competitive markets, primarily in generation, to replace the previous regulated monopolistic utilities. To ensure reliable and cost-effective electricity supply to consumers, it is necessary to have an accurate picture of the expected generation in terms of the spatial and temporal distribution of prices and volumes. Previously this information was obtained by the regulated utility using technical studies such as centrally planned unit–commitment and economic–dispatch. However, in the new deregulated market environment such studies have diminished applicability and limited accuracy, since generation assets are generally autonomous and subject to market forces. With generation outcomes governed by market mechanisms, an accurate picture of expected generation in the new electricity supply industry requires that traditional studies be complemented with new studies of market equilibrium and stability. Models and solution methods have been developed and refined for many markets; however, they cannot be directly applied to the generation market due to the unique nature of electricity: highly inelastic demand, low storage capability and distinct transportation requirements. Intensive effort is underway to formulate solutions and models that specifically reflect the unique characteristics of the generation market. Various models have been proposed, including game theory, stochastic and agent–based systems. Similarly, there is a diverse range of solution methods, including Monte–Carlo simulations, linear–complementarity and quadratic programming. These approaches have varying degrees of generality, robustness and accuracy, some being better in certain aspects but weaker in others. This thesis formulates a new general model for the generation market based on the Cournot game; it makes no conjectures about producers' behaviour and assumes that all electricity produced is immediately consumed. The new formulation characterises producers purely by their cost curves, which are only required to be piecewise differentiable, and allows consumers' characteristics to remain unspecified. The formulation can determine dynamic equilibrium and multiple equilibria of markets with single and multiple consumers and producers. Additionally, stability concepts for the new market equilibrium are developed to provide discrimination for dynamic equilibria and to enable the structural stability of the market to be assessed. Solutions of the new formulation are evaluated by the use of evolutionary computation, a guided stochastic search paradigm that mimics the operation of biological evolution to iteratively produce a population of solutions. Evolutionary computation is employed as it is adept at finding multiple solutions for underconstrained systems, such as that of the new market formulation.
Various enhancements are developed to significantly improve the performance of the algorithms and simplify their application. The concept of the convergence potential of a population is introduced, together with a system for the controlled extraction of such potential to accelerate the algorithm's convergence and improve its accuracy and robustness. A new constraint handling technique for linear constraints that preserves solution diversity is also presented, together with a coevolutionary solution method for the multiple consumers and producers market. To illustrate the new electricity market formulation and its evolutionary computation solution methods, the equilibrium and stability of a test market with one consumer and thirteen thermal generators with valve point losses is examined. The case of a multiple consumer market is not simulated, though the formulation and solution methods for this case are included. The market solutions obtained not only confirm previous findings, thus validating the new approach, but also include new results yet to be verified by future studies. Techniques by which market designers, regulators and other system planners can utilise the new market solutions are also given. In summary, the market formulation and solution method developed show great promise in determining expected generation in a deregulated environment.
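To make the market model concrete, here is a hedged toy instance of the Cournot setting (hypothetical linear demand and quadratic cost curves, with a random better-response walk standing in for the thesis's evolutionary solver):

```python
# Two generators with quadratic costs face linear inverse demand P(Q) = a - b*Q.
# Each producer repeatedly tries small random changes to its own quantity and
# keeps them only if its own profit improves, a crude better-response dynamic.
import random

a, b = 100.0, 1.0
cost = [lambda q: 10 * q + 0.5 * q * q,      # producer 0's cost curve
        lambda q: 20 * q + 0.25 * q * q]     # producer 1's cost curve

def profit(i, q):
    price = a - b * sum(q)
    return price * q[i] - cost[i](q[i])

q = [1.0, 1.0]
for _ in range(20000):
    i = random.randrange(2)
    trial = q[:]
    trial[i] = max(0.0, q[i] + random.gauss(0, 0.5))
    if profit(i, trial) > profit(i, q):      # selfish unilateral improvement
        q = trial

print("approximate Cournot quantities:", [round(x, 2) for x in q])
```

With these made-up numbers the analytic Cournot equilibrium is roughly q0 ≈ 22.3 and q1 ≈ 23.1, which the better-response walk should approach; a population-based evolutionary method like the thesis's can additionally recover multiple equilibria when they exist.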
57

Optimising evolutionary strategies for problems with varying noise strength

Di Pietro, Anthony January 2007 (has links)
For many real-world applications of evolutionary computation, the fitness function is obscured by random noise. This interferes with the evaluation and selection processes and adversely affects the performance of the algorithm. Noise can be effectively eliminated by averaging a large number of fitness samples for each candidate, but the number of samples per candidate (the resampling rate) required to achieve this is usually prohibitively large, making the approach too time-consuming. Hence there is a practical need for algorithms that handle noise without eliminating it. Moreover, the amount of noise (noise strength and distribution) may vary throughout the search space, further complicating matters. We study noisy problems for which the noise strength varies throughout the search space. Such problems have largely been ignored by previous work, which has instead focussed on the specific case where the noise strength is the same at all points in the search domain. However, this need not be the case, and indeed this assumption is false for many applications. For example, in games of chance such as Poker, some strategies may be more conservative than others and therefore less affected by the inherent noise of the game. This thesis makes three significant contributions in the field of noisy fitness functions. First, we present the concept of dynamic resampling, a technique that varies the resampling rate based on the noise strength and fitness of each candidate individually, designed to exploit the variation in noise strength and fitness to yield a more efficient algorithm. We present several dynamic resampling algorithms and give results showing that dynamic resampling can perform significantly better than the standard resampling technique usually used by the optimisation community, and that dynamic resampling algorithms that vary their resampling rates based on both noise strength and fitness can perform better than algorithms that vary their resampling rate based on only one of the two. Second, we study a specific class of noisy fitness functions for which we counterintuitively find that it is better to use a higher resampling rate in regions of lower noise strength, and vice versa. We investigate how the evolutionary search operates on such problems, explain why this is the case, and present a hypothesis (with supporting evidence) for classifying such problems. Third, we present an adaptive engine that automatically tunes the noise compensation parameters of the search during the run, thereby eliminating the need for the user to choose these parameters ahead of time. This means that our techniques can be readily applied to real-world problems without requiring the user to have specialised domain knowledge of the problem that they wish to solve. These three major contributions present a significant addition to the body of knowledge on noisy fitness functions. Indeed, this thesis is the first work specifically to examine the implications of noise strength that varies throughout the search domain for a variety of noise landscapes, and thus starts to fill a large void in the literature on noisy fitness functions.
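A minimal sketch of the dynamic-resampling idea (names and thresholds hypothetical, not the thesis's algorithms): keep resampling a candidate until the standard error of its mean fitness drops below a tolerance, so candidates in noisier regions automatically receive more samples:

```python
# Dynamic resampling rule: sample until the standard error of the mean
# fitness estimate falls below `tol`, capped at `max_n` evaluations.
import random
import statistics

def noisy_fitness(x):
    # Toy landscape where noise strength varies across the domain.
    return -(x - 3.0) ** 2 + random.gauss(0, 0.2 + abs(x))

def dynamic_mean(x, tol=0.05, min_n=3, max_n=500):
    samples = [noisy_fitness(x) for _ in range(min_n)]
    while len(samples) < max_n:
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr < tol:
            break
        samples.append(noisy_fitness(x))
    return sum(samples) / len(samples), len(samples)

for x in (0.5, 3.0, 6.0):
    mean, n = dynamic_mean(x)
    print(f"x={x}: mean fitness {mean:.2f} from {n} samples")
```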
58

Link discovery in very large graphs by constructive induction using genetic programming

Weninger, Timothy Edwards January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / This thesis discusses the background and methodologies necessary for constructing features in order to discover hidden links in relational data. Specifically, we consider the problems of predicting, classifying and annotating friends relations in friends networks, based upon features constructed from network structure and user profile data. I first document a data model for the blog service LiveJournal, and define a set of machine learning problems such as predicting existing links and estimating inter-pair distance. Next, I explain how the problem of classifying a user pair in a social network as directly connected or not poses the problem of selecting and constructing relevant features. In order to construct these features, a genetic programming approach is used to construct multiple symbol trees with base features as their leaves; in this manner, the genetic program selects and constructs features that may not have been considered, but possess better predictive properties than the base features. In order to extract certain graph features from the relatively large social network, a new shortest path search algorithm is presented which computes and operates on a Euclidean embedding of the network. Finally, I present classification results and discuss the properties of the frequently constructed features in order to gain insight into hidden relations that exist in this domain.
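As a hedged, self-contained illustration of constructive induction by genetic programming (the LiveJournal features and the thesis's GP engine are not reproduced), random symbol trees with base features as leaves can be scored by correlation with the label:

```python
# Random expression trees combine base features into candidate features;
# the score is the absolute correlation between the feature and the label.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "min": min, "max": max}

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.randrange(3)                 # index of a base feature
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, row):
    if isinstance(tree, int):
        return row[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, row), evaluate(right, row))

def score(tree, rows, labels):
    vals = [evaluate(tree, r) for r in rows]
    mv, ml = sum(vals) / len(vals), sum(labels) / len(labels)
    cov = sum((v - mv) * (l - ml) for v, l in zip(vals, labels))
    var = (sum((v - mv) ** 2 for v in vals) * sum((l - ml) ** 2 for l in labels)) ** 0.5
    return abs(cov / var) if var else 0.0

rows = [[random.random() for _ in range(3)] for _ in range(50)]
labels = [1 if r[0] * r[1] > 0.25 else 0 for r in rows]   # hidden interaction
best = max((random_tree() for _ in range(300)), key=lambda t: score(t, rows, labels))
print(best, round(score(best, rows, labels), 3))
```

Here random generation stands in for the full GP loop (selection, crossover, mutation), but the payoff is the same: constructed features such as a product of two base features can capture interactions no single base feature exposes.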
59

ENAMS : energy optimization algorithm for mobile wireless sensor networks using evolutionary computation and swarm intelligence

Al-Obaidi, Mohanad January 2010 (has links)
Although Wireless Sensor Networks (WSNs) have traditionally been regarded as static sensor arrays used mainly for environmental monitoring, their applications have recently undergone a paradigm shift from static to more dynamic environments, where nodes are attached to moving objects, people or animals. Applications that use WSNs in motion are broad, ranging from transport and logistics to animal monitoring, health care and military. These application domains have a number of characteristics that challenge the algorithmic design of WSNs. Firstly, mobility has a negative effect on the quality of the wireless communication and the performance of networking protocols. Nevertheless, it has been shown that mobility can enhance the functionality of the network by exploiting the movement patterns of mobile objects. Secondly, the heterogeneity of devices in a WSN has to be taken into account to increase the network performance and lifetime. Thirdly, the WSN services should ideally assist the user in an unobtrusive and transparent way. Fourthly, energy-efficiency and scalability are of primary importance to prevent network performance degradation. This thesis contributes toward the design of a new hybrid optimization algorithm, ENAMS (Energy optimizatioN Algorithm for Mobile Sensor networks), which is based on evolutionary computation and swarm intelligence and increases the lifetime of mobile wireless sensor networks. The presented algorithm is suitable for large-scale mobile sensor networks and provides a robust and energy-efficient communication mechanism by dividing the sensor nodes into clusters, where the number of clusters is not predefined and the sensors within each cluster need not be distributed at the same density. The presented algorithm enables the sensor nodes to move as swarms within the search space while keeping optimum distances between the sensors. To verify the objectives of the proposed algorithm, LEGO MINDSTORMS NXT robots are used to act as particles in a moving swarm, keeping the optimum distances while tracking each other within the permitted distance range in the search space.
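A toy sketch of the swarm-spacing behaviour the abstract describes (2-D points and hypothetical constants; the clustering and energy model are omitted): nodes drift toward a target while a pairwise term repels neighbours closer than the optimum distance and attracts those farther away:

```python
# Nodes move as a swarm toward a target while holding an optimum separation.
import math
import random

OPT = 1.0                                # desired inter-node distance
nodes = [[random.uniform(0, 3), random.uniform(0, 3)] for _ in range(6)]
target = [10.0, 10.0]

for step in range(200):
    for i, p in enumerate(nodes):
        fx = 0.05 * (target[0] - p[0])   # drift toward the target
        fy = 0.05 * (target[1] - p[1])
        for j, q in enumerate(nodes):
            if i == j:
                continue
            dx, dy = p[0] - q[0], p[1] - q[1]
            d = math.hypot(dx, dy) or 1e-9
            push = 0.1 * (OPT - d) / d   # repel if closer than OPT, attract if farther
            fx += push * dx
            fy += push * dy
        p[0] += fx
        p[1] += fy

print([(round(x, 1), round(y, 1)) for x, y in nodes])
```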
60

A New Evolutionary Algorithm For Mining Noisy, Epistatic, Geospatial Survey Data Associated With Chagas Disease

Hanley, John P. 01 January 2017 (has links)
The scientific community is just beginning to understand some of the profound effects that feature interactions and heterogeneity have on natural systems. Despite the belief that these nonlinear and heterogeneous interactions exist across numerous real-world systems (e.g., from the development of personalized drug therapies to market predictions of consumer behaviors), the tools for analysis have not kept pace. This research was motivated by the desire to mine data from large socioeconomic surveys aimed at identifying the drivers of household infestation by a triatomine insect that transmits the life-threatening Chagas disease. To decrease the risk of transmission, our colleagues at the laboratory of applied entomology and parasitology have implemented mitigation strategies (known as Ecohealth interventions); however, limited resources necessitate the search for better risk models. Mining these complex Chagas survey data for potential predictive features is challenging due to imbalanced class outcomes, missing data, heterogeneity, and the non-independence of some features. We develop an evolutionary algorithm (EA) to identify feature interactions in "Big Datasets" with desired categorical outcomes (e.g., disease or infestation). The method is non-parametric and uses the hypergeometric PMF as a fitness function to tackle challenges associated with using p-values in Big Data (e.g., p-values decrease inversely with the size of the dataset). To demonstrate the EA's effectiveness, we first test the algorithm on three benchmark datasets: two classic Boolean classifier problems, (1) the "majority-on" problem and (2) the multiplexer problem, as well as (3) a simulated single nucleotide polymorphism (SNP) disease dataset. Next, we apply the EA to real-world Chagas disease survey data and successfully identify numerous high-order feature interactions associated with infestation that would not have been discovered using traditional statistics. These feature interactions are also explored using network analysis. The spatial autocorrelation of the genetic data (SNPs of Triatoma dimidiata) was captured using geostatistics. Specifically, a modified semivariogram analysis was performed to characterize the SNP data and help elucidate the movement of the vector within two villages. For both villages, the SNP information showed strong spatial autocorrelation, albeit with different geostatistical characteristics (sills, ranges, and nuggets). These metrics were leveraged to create risk maps suggesting that the more forested village had a sylvatic source of infestation, while the other village had a domestic/peridomestic source. This initial exploration into using Big Data to analyze disease risk shows that novel and modified existing statistical tools can improve the assessment of risk on a fine scale.
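As a hedged illustration of the fitness idea named in the abstract (synthetic data, scipy assumed available; not the author's implementation), scipy.stats.hypergeom.pmf can score how surprising the overlap between a feature combination and the infested households is:

```python
# How unlikely is the observed overlap between households matching a feature
# combination and infested households, given the marginal counts alone?
from scipy.stats import hypergeom
import random

random.seed(1)
n_households = 200
infested = set(random.sample(range(n_households), 40))
# Synthetic feature combination, deliberately enriched among infested households.
has_combo = set(random.sample(range(n_households), 50)) | set(list(infested)[:20])

k = len(infested & has_combo)            # infested households with the combo
# pmf(k; M=total, n=#infested, N=#with combo): small values = strong association.
score = hypergeom.pmf(k, n_households, len(infested), len(has_combo))
print(f"overlap={k}, hypergeometric PMF={score:.3e}")
```

Unlike a raw p-value, the PMF of the exact observed overlap does not shrink automatically as the dataset grows, which is the property the abstract highlights.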
