41

Incorporating Design Knowledge into Genetic Algorithm-based White-Box Software Test Case Generators

Makai, Matthew Charles 14 May 2008 (has links)
This thesis shows how to incorporate Unified Modeling Language (UML) sequence diagrams into genetic algorithm-based automated test case generators to increase the code coverage of the resulting test cases. Prior research showed that automated generation of test data through evolutionary testing is feasible. In those investigations, the effectiveness of a test generation method was measured by the percentages of statement and branch coverage achieved. However, the coverage realized in those studies often converged at suboptimal percentages because the search received no guidance at conditional statements. This study compares the coverage percentages of 16 Java programs when test cases are generated automatically with and without their associated UML sequence diagrams. It introduces the Evolutionary Test Case Generator (ETCG), an automatic test case generator based on genetic algorithms that can incorporate sequence diagrams to direct the heuristic search and facilitate evolutionary testing. When the generator used sequence diagrams, the resulting test cases showed an average improvement of 21% in branch coverage and 8% in statement coverage over test cases produced without them. / Master of Science
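A minimal sketch of the core idea described above, not the ETCG tool itself: a genetic algorithm evolves test inputs for a toy program, and the fitness adds a guidance bonus for executing branches in the order suggested by a hypothetical sequence-diagram-derived path. All names and the program under test are illustrative assumptions.

```python
import random

TRACE = []  # branch IDs recorded during one execution

def program_under_test(a, b):
    """Toy function whose branches the GA tries to cover."""
    TRACE.clear()
    if a > 10:
        TRACE.append("B1")
        if b < a:
            TRACE.append("B2")
    else:
        TRACE.append("B3")

TARGET_SEQUENCE = ["B1", "B2"]   # path assumed to come from a sequence diagram
ALL_BRANCHES = {"B1", "B2", "B3"}

def fitness(individual):
    a, b = individual
    program_under_test(a, b)
    coverage = len(set(TRACE)) / len(ALL_BRANCHES)
    # Guidance term: length of the matched prefix of the diagram-derived path.
    prefix = 0
    for got, want in zip(TRACE, TARGET_SEQUENCE):
        if got != want:
            break
        prefix += 1
    return coverage + prefix / len(TARGET_SEQUENCE)

def evolve(pop_size=30, generations=40):
    pop = [(random.randint(-50, 50), random.randint(-50, 50))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [(random.choice(parents)[0] + random.randint(-3, 3),
                          random.choice(parents)[1] + random.randint(-3, 3))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(evolve())  # e.g. an (a, b) pair that drives the B1 -> B2 path
```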
42

Dance evolution : interactively evolving neural networks to control dancing three-dimensional models

Dubbin, Greg A. 01 January 2009 (has links)
The impulse shared by all humans to express themselves through dance represents a unique opportunity to artificially capture human creative expression. This ambition aligns with the aim of artificial intelligence (AI) to study and emulate those aspects of human intelligence that are not readily reproduced in existing computer algorithms. As a first step toward addressing this challenge, this thesis describes Dance Evolution, which focuses on movements that are tied to a specific beat of music. Furthermore, Dance Evolution harnesses the user's own taste to explore new and interesting dances, allowing a novel form of self-expression mediated by the computer, following the trend started by music and rhythm games. By implementing an algorithm that identifies the most prominent sounds within a song, Dance Evolution in effect allows artificial neural networks (ANNs) to listen to any song and exploit its rhythmic structure. Interactive evolution provides a tool for users to search for increasingly intricate movement sequences by breeding their ANN controllers, in the same way that a gardener might explore interesting plants by breeding hybrids. The underlying idea in Dance Evolution is thus to create a novel mapping between sound and movement that evokes the spirit of casually dancing to the beat of a song.
43

Enhancing Surgical Gesture Recognition Using Bidirectional LSTM and Evolutionary Computation: A Machine Learning Approach to Improving Robotic-Assisted Surgery / BiLSTM and Evolutionary Computation for Surgical Gesture Recognition

Zhang, Yifei January 2024 (has links)
The integration of artificial intelligence (AI) and machine learning in the medical field has led to significant advancements in surgical robotics, particularly in enhancing the precision and efficiency of surgical procedures. This thesis investigates the application of a single-layer bidirectional Long Short-Term Memory (BiLSTM) model to the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset, aiming to improve the recognition and classification of surgical gestures. The BiLSTM model, with its capability to process data in both forward and backward directions, offers a comprehensive analysis of temporal sequences, capturing intricate patterns within surgical motion data. This research explores the potential of BiLSTM models to outperform traditional unidirectional models in the context of robotic surgery. In addition to the core model development, this study employs evolutionary computation techniques for hyperparameter tuning, systematically searching for optimal configurations to enhance model performance. The evaluation metrics include training and validation loss, accuracy, confusion matrices, prediction time, and model size. The results demonstrate that the BiLSTM model with evolutionary hyperparameter tuning achieves superior performance in recognizing surgical gestures compared to standard LSTM models. The findings of this thesis contribute to the broader field of surgical robotics and human-AI partnership by providing a robust method for accurate gesture recognition, which is crucial for assessing and training surgeons and advancing automated and assistive technologies in surgical procedures. The improved model performance underscores the importance of sophisticated hyperparameter optimization in developing high-performing deep learning models for complex sequential data analysis. / Thesis / Master of Applied Science (MASc) / Advancements in artificial intelligence (AI) are transforming medicine, particularly in robotic surgery. This thesis focuses on improving how robots recognize and classify surgeons' movements during operations. Using a special AI model called a bidirectional Long Short-Term Memory (BiLSTM) network, which looks at data both forwards and backwards, the study aims to better understand and predict surgical gestures. By applying this model to a dataset of surgical tasks, specifically suturing, and optimizing its settings with advanced techniques, the research shows significant improvements in accuracy and efficiency over traditional methods. The enhanced model is not only more accurate but also smaller and faster. These improvements can help train surgeons more effectively and advance robotic assistance in surgeries, leading to safer and more precise operations, ultimately benefiting both surgeons and patients.
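A hedged sketch of the two ingredients named in the abstract (a single-layer BiLSTM classifier and an evolutionary search over its hyperparameters), using PyTorch on synthetic stand-in data; the JIGSAWS loading, gesture classes, search ranges, and training budget are placeholder assumptions, not the thesis's configuration.

```python
import random
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the final time step

def evaluate(genome, x, y, n_classes=3):
    """Train briefly from scratch and return accuracy on the toy data."""
    model = BiLSTMClassifier(x.shape[-1], genome["hidden"], n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=genome["lr"])
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(20):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def mutate(genome):
    return {"hidden": max(4, genome["hidden"] + random.choice([-8, 8])),
            "lr": genome["lr"] * random.choice([0.5, 2.0])}

# Synthetic stand-in for JIGSAWS kinematics: 64 clips, 30 steps, 6 channels.
x = torch.randn(64, 30, 6)
y = torch.randint(0, 3, (64,))

pop = [{"hidden": 32, "lr": 1e-2} for _ in range(4)]
for _ in range(5):                       # tiny (mu + lambda) evolutionary loop
    pop.sort(key=lambda g: evaluate(g, x, y), reverse=True)
    pop = pop[:2] + [mutate(g) for g in pop[:2]]
print("best genome:", pop[0])
```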
44

Parallelizing a nondeterministic optimization algorithm

D'Souza, Sammy Raymond 01 January 2007 (has links)
This research explores the idea that for certain optimization problems there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to the Ant Colony Optimization (ACO) metaheuristic, solving the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of serial and parallel execution are compared using test datasets from TSPLIB.
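One hedged interpretation of how parallel efficiency can exceed one hundred percent for a nondeterministic search, sketched with mpi4py: each rank follows an independent stochastic trajectory, and all ranks stop ("shortcut") as soon as any rank reaches an assumed quality bound. The stand-in sampler below is not a real ACO implementation, and the thesis's actual shortcutting techniques may differ.

```python
# Run with: mpiexec -n 4 python aco_shortcut.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
random.seed(comm.Get_rank())       # a different stochastic trajectory per rank

def sampled_tour_length(coords):
    """Stand-in for one pheromone-guided ACO construction step."""
    tour = list(range(len(coords)))
    random.shuffle(tour)
    return sum(abs(coords[tour[i]] - coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

coords = comm.bcast([random.random() * 100 for _ in range(50)], root=0)
TARGET = 1200.0                    # assumed quality bound for shortcutting

best = float("inf")
for it in range(10_000):
    best = min(best, sampled_tour_length(coords))
    if it % 100 == 99:
        # Collective exchange keeps all ranks in sync, so every rank
        # shortcuts on the same iteration once any rank beats the bound.
        best = comm.allreduce(best, op=MPI.MIN)
        if best <= TARGET:
            break

best = comm.allreduce(best, op=MPI.MIN)
if comm.Get_rank() == 0:
    print("global best tour length:", best)
```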
45

Analysing electricity markets with evolutionary computation

Nguyen, Duy Huu Manh January 2002 (has links)
The use of electricity in 21st-century living has been firmly established throughout most of the world, and correspondingly the infrastructure for production and delivery of electricity to consumers has matured and stabilised. However, due to recent technical and environmental-political developments, the electricity infrastructure worldwide is undergoing major restructuring. The forces driving this reorganisation are a complex interplay of technical, environmental, economic and political factors. The general trend of the reorganisation is a disaggregation of the previously integrated functions of generation, transmission and distribution, together with the establishment of competitive markets, primarily in generation, to replace previously regulated monopolistic utilities. To ensure reliable and cost-effective electricity supply to consumers it is necessary to have an accurate picture of the expected generation in terms of the spatial and temporal distribution of prices and volumes. Previously this information was obtained by the regulated utility using technical studies such as centrally planned unit commitment and economic dispatch. However, in the new deregulated market environment such studies have diminished applicability and limited accuracy, since generation assets are generally autonomous and subject to market forces. With generation outcomes governed by market mechanisms, an accurate picture of expected generation in the new electricity supply industry requires complementing traditional studies with new studies of market equilibrium and stability. Models and solution methods have been developed and refined for many markets; however, they cannot be directly applied to the generation market due to the unique nature of electricity, with its highly inelastic demand, low storage capability and distinct transportation requirements. Intensive effort is underway to formulate solutions and models that specifically reflect the unique characteristics of the generation market. Various models have been proposed, including game-theoretic, stochastic and agent-based systems. Similarly, there is a diverse range of solution methods, including Monte Carlo simulation, linear complementarity and quadratic programming. These approaches have varying degrees of generality, robustness and accuracy, some being better in certain aspects but weaker in others. This thesis formulates a new general model for the generation market based on the Cournot game; it makes no conjectures about producers' behaviour and assumes that all electricity produced is immediately consumed. The new formulation characterises producers purely by their cost curves, which are only required to be piecewise differentiable, and allows consumers' characteristics to remain unspecified. The formulation can determine dynamic equilibrium and multiple equilibria of markets with single and multiple consumers and producers. Additionally, stability concepts for the new market equilibrium are developed to discriminate among dynamic equilibria and to enable the structural stability of the market to be assessed. Solutions of the new formulation are evaluated by the use of evolutionary computation, a guided stochastic search paradigm that mimics the operation of biological evolution to iteratively produce a population of solutions. Evolutionary computation is employed as it is adept at finding multiple solutions for underconstrained systems, such as the new market formulation.
Various enhancements are developed to significantly improve the performance of the algorithms and simplify their application. The concept of the convergence potential of a population is introduced, together with a system for the controlled extraction of that potential to accelerate the algorithm's convergence and improve its accuracy and robustness. A new constraint-handling technique for linear constraints that preserves the solutions' diversity is also presented, together with a coevolutionary solution method for the market with multiple consumers and producers. To illustrate the new electricity market formulation and its evolutionary computation solution methods, the equilibrium and stability of a test market with one consumer and thirteen thermal generators with valve-point losses are examined. The case of a market with multiple consumers is not simulated, though the formulation and solution methods for this case are included. The market solutions obtained not only confirm previous findings, thus validating the new approach, but also include new results yet to be verified by future studies. Guidance is also given for market designers, regulators and other system planners in utilising the new market solutions. In summary, the market formulation and solution method developed show great promise for determining expected generation in a deregulated environment.
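A minimal sketch of evolving a Cournot market equilibrium in the spirit of the formulation above: each producer keeps a small population of output quantities and evolves it against the other producers' current best responses. The demand curve, cost coefficients, and operator settings are illustrative assumptions, not the thirteen-generator test market.

```python
import random

A, B = 100.0, 1.0                                  # inverse demand p(Q) = A - B*Q
COSTS = [(2.0, 0.10), (1.5, 0.15), (1.0, 0.20)]    # c_i(q) = a_i*q + b_i*q^2

def profit(i, q_i, q_others):
    price = A - B * (q_i + sum(q_others))
    a, b = COSTS[i]
    return price * q_i - (a * q_i + b * q_i * q_i)

pops = [[random.uniform(0.0, 40.0) for _ in range(20)] for _ in COSTS]
best = [pop[0] for pop in pops]

for _ in range(300):
    for i in range(len(COSTS)):
        others = best[:i] + best[i + 1:]
        pops[i].sort(key=lambda q: profit(i, q, others), reverse=True)
        survivors = pops[i][:10]
        pops[i] = survivors + [max(0.0, q + random.gauss(0.0, 0.5))
                               for q in survivors]
        best[i] = pops[i][0]

print("evolved quantities:", [round(q, 2) for q in best])
# For these parameters the quantities should settle near the analytic Cournot
# equilibrium, where each q_i solves A - B*Q - B*q_i = a_i + 2*b_i*q_i.
```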
46

Optimising evolutionary strategies for problems with varying noise strength

Di Pietro, Anthony January 2007 (has links)
For many real-world applications of evolutionary computation, the fitness function is obscured by random noise. This interferes with the evaluation and selection processes and adversely affects the performance of the algorithm. Noise can be effectively eliminated by averaging a large number of fitness samples for each candidate, but the number of samples used per candidate (the resampling rate) required to achieve this is usually prohibitively large and time-consuming. Hence there is a practical need for algorithms that handle noise without eliminating it. Moreover, the amount of noise (noise strength and distribution) may vary throughout the search space, further complicating matters. We study noisy problems for which the noise strength varies throughout the search space. Such problems have generally been ignored by previous work, which has instead focussed on the specific case where the noise strength is the same at all points in the search domain. However, this need not be the case, and indeed this assumption is false for many applications. For example, in games of chance such as poker, some strategies may be more conservative than others and therefore less affected by the inherent noise of the game. This thesis makes three significant contributions to the field of noisy fitness functions. First, we present the concept of dynamic resampling, a technique that varies the resampling rate based on the noise strength and fitness of each candidate individually, designed to exploit the variation in noise strength and fitness to yield a more efficient algorithm. We present several dynamic resampling algorithms and give results showing that dynamic resampling can perform significantly better than the standard resampling technique usually used by the optimisation community, and that algorithms which vary their resampling rates based on both noise strength and fitness can outperform those that vary their rate based on only one of the two. Second, we study a specific class of noisy fitness functions for which we counterintuitively find that it is better to use a higher resampling rate in regions of lower noise strength, and vice versa. We investigate how the evolutionary search operates on such problems, explain why this is the case, and present a hypothesis (with supporting evidence) for classifying such problems. Third, we present an adaptive engine that automatically tunes the noise-compensation parameters of the search during the run, thereby eliminating the need for the user to choose these parameters ahead of time. This means that our techniques can be readily applied to real-world problems without requiring the user to have specialised domain knowledge of the problem that they wish to solve. These three major contributions present a significant addition to the body of knowledge on noisy fitness functions. Indeed, this thesis is the first work specifically to examine the implications of noise strength that varies throughout the search domain for a variety of noise landscapes, and thus starts to fill a large void in the literature on noisy fitness functions.
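A hedged sketch of per-candidate dynamic resampling: samples are drawn sequentially until the standard error of the fitness estimate falls below a tolerance that tightens for apparently promising candidates, so the resampling rate adapts to both noise strength and fitness. The noisy objective and schedules are illustrative, not the thesis's algorithms.

```python
import random
import statistics

def noisy_fitness(x):
    # Noise strength varies across the search space: louder noise for larger |x|.
    return -(x ** 2) + random.gauss(0.0, 0.1 + 0.5 * abs(x))

def dynamic_mean(x, base_tol=0.5, min_n=3, max_n=200):
    """Resample until the standard error is below a fitness-dependent tolerance."""
    samples = [noisy_fitness(x) for _ in range(min_n)]
    while len(samples) < max_n:
        mean = statistics.fmean(samples)
        sem = statistics.stdev(samples) / len(samples) ** 0.5
        # Spend more samples on apparently promising candidates.
        tol = base_tol / 4 if mean > -5.0 else base_tol
        if sem < tol:
            break
        samples.append(noisy_fitness(x))
    return statistics.fmean(samples), len(samples)

for x in (0.2, 1.0, 4.0):
    mean, n = dynamic_mean(x)
    print(f"x={x}: estimated fitness {mean:.2f} from {n} samples")
```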
47

Link discovery in very large graphs by constructive induction using genetic programming

Weninger, Timothy Edwards January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / This thesis discusses the background and methodologies necessary for constructing features in order to discover hidden links in relational data. Specifically, we consider the problems of predicting, classifying and annotating friend relations in friends networks, based upon features constructed from network structure and user profile data. I first document a data model for the blog service LiveJournal and define a set of machine learning problems, such as predicting existing links and estimating inter-pair distance. Next, I explain how the problem of classifying a user pair in a social network as directly connected or not poses the problem of selecting and constructing relevant features. In order to construct these features, a genetic programming approach is used to construct multiple symbol trees with base features as their leaves; in this manner, the genetic program selects and constructs features that may not have been considered, but possess better predictive properties than the base features. In order to extract certain graph features from the relatively large social network, a new shortest-path search algorithm is presented which computes and operates on a Euclidean embedding of the network. Finally, I present classification results and discuss the properties of the frequently constructed features in order to gain insight on hidden relations that exist in this domain.
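A minimal sketch of constructive induction by genetic programming: random Boolean symbol trees over base pair-features are scored by agreement with a toy friend/not-friend label. Selection plus fresh random trees stands in for full crossover and mutation, and the feature names and data are invented.

```python
import random

FEATURES = ["same_school", "mutual_friend", "same_community", "age_close"]

def random_tree(depth=3):
    """Grow a random Boolean symbol tree with base features as leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES)
    op = random.choice(["AND", "OR", "NOT"])
    if op == "NOT":
        return (op, random_tree(depth - 1))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, row):
    if isinstance(tree, str):
        return row[tree]
    if tree[0] == "NOT":
        return not evaluate(tree[1], row)
    left, right = evaluate(tree[1], row), evaluate(tree[2], row)
    return (left and right) if tree[0] == "AND" else (left or right)

def accuracy(tree, data):
    return sum(evaluate(tree, row) == label for row, label in data) / len(data)

# Toy labels: linked iff same_school AND (mutual_friend OR same_community).
data = []
for _ in range(200):
    row = {f: random.random() < 0.5 for f in FEATURES}
    data.append((row, row["same_school"] and
                 (row["mutual_friend"] or row["same_community"])))

pop = [random_tree() for _ in range(60)]
for _ in range(30):
    pop.sort(key=lambda t: accuracy(t, data), reverse=True)
    pop = pop[:30] + [random_tree() for _ in range(30)]  # crude variation step
print("best constructed feature:", pop[0], "accuracy:", accuracy(pop[0], data))
```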
48

ENAMS : energy optimization algorithm for mobile wireless sensor networks using evolutionary computation and swarm intelligence

Al-Obaidi, Mohanad January 2010 (has links)
Although Wireless Sensor Networks (WSNs) have traditionally been regarded as static sensor arrays used mainly for environmental monitoring, recently their applications have undergone a paradigm shift from static to more dynamic environments, where nodes are attached to moving objects, people or animals. Applications that use WSNs in motion are broad, ranging from transport and logistics to animal monitoring, health care and the military. These application domains have a number of characteristics that challenge the algorithmic design of WSNs. Firstly, mobility has a negative effect on the quality of the wireless communication and the performance of networking protocols. Nevertheless, it has been shown that mobility can enhance the functionality of the network by exploiting the movement patterns of mobile objects. Secondly, the heterogeneity of devices in a WSN has to be taken into account for increasing the network performance and lifetime. Thirdly, the WSN services should ideally assist the user in an unobtrusive and transparent way. Fourthly, energy-efficiency and scalability are of primary importance to prevent network performance degradation. This thesis contributes toward the design of a new hybrid optimization algorithm, ENAMS (Energy optimizatioN Algorithm for Mobile Sensor networks), based on evolutionary computation and swarm intelligence, to increase the lifetime of mobile wireless sensor networks. The presented algorithm is suitable for large-scale mobile sensor networks and provides a robust and energy-efficient communication mechanism by dividing the sensor nodes into clusters, where the number of clusters is not predefined and the sensors within each cluster need not be distributed with the same density. The presented algorithm enables the sensor nodes to move as swarms within the search space while keeping optimum distances between the sensors. To verify the objectives of the proposed algorithm, LEGO MINDSTORMS NXT robots are used to act as particles in a moving swarm, keeping optimum distances while tracking each other within the permitted distance range in the search space.
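A hedged sketch of the "move as a swarm while keeping optimum distances" behaviour: each node nudges itself toward neighbours that are too far and away from those that are too close, so pairwise distances settle near a target communication range. The target distance, step size, and node count are illustrative assumptions.

```python
import math
import random

TARGET = 10.0     # assumed optimum inter-sensor distance
STEP = 0.02

nodes = [[random.uniform(0.0, 30.0), random.uniform(0.0, 30.0)]
         for _ in range(12)]

for _ in range(1000):
    for i, p in enumerate(nodes):
        fx = fy = 0.0
        for j, q in enumerate(nodes):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            d = math.hypot(dx, dy) or 1e-9
            # Positive pull attracts (too far apart); negative repels (too close).
            pull = (d - TARGET) / d
            fx += pull * dx
            fy += pull * dy
        p[0] += STEP * fx
        p[1] += STEP * fy

dists = [math.hypot(a[0] - b[0], a[1] - b[1])
         for i, a in enumerate(nodes) for b in nodes[i + 1:]]
print(f"inter-node distances now span {min(dists):.1f} .. {max(dists):.1f}")
```

With an all-pairs pull the distances settle around a compromise near the target; a real deployment would restrict the interaction to communication-range neighbours.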
49

A New Evolutionary Algorithm For Mining Noisy, Epistatic, Geospatial Survey Data Associated With Chagas Disease

Hanley, John P. 01 January 2017 (has links)
The scientific community is just beginning to understand some of the profound effects that feature interactions and heterogeneity have on natural systems. Despite the belief that these nonlinear and heterogeneous interactions exist across numerous real-world systems (e.g., from the development of personalized drug therapies to market predictions of consumer behaviors), the tools for analysis have not kept pace. This research was motivated by the desire to mine data from large socioeconomic surveys aimed at identifying the drivers of household infestation by a triatomine insect that transmits the life-threatening Chagas disease. To decrease the risk of transmission, our colleagues at the laboratory of applied entomology and parasitology have implemented mitigation strategies (known as Ecohealth interventions); however, limited resources necessitate the search for better risk models. Mining these complex Chagas survey data for potential predictive features is challenging due to imbalanced class outcomes, missing data, heterogeneity, and the non-independence of some features. We develop an evolutionary algorithm (EA) to identify feature interactions in "Big Datasets" with desired categorical outcomes (e.g., disease or infestation). The method is non-parametric and uses the hypergeometric PMF as a fitness function to tackle challenges associated with using p-values in Big Data (e.g., p-values shrink as the size of the dataset grows). To demonstrate the EA's effectiveness, we first test the algorithm on three benchmark datasets: two classic Boolean classifier problems, (1) the "majority-on" problem and (2) the multiplexer problem, as well as (3) a simulated single nucleotide polymorphism (SNP) disease dataset. Next, we apply the EA to real-world Chagas disease survey data and successfully identify numerous high-order feature interactions associated with infestation that would not have been discovered using traditional statistics. These feature interactions are also explored using network analysis. The spatial autocorrelation of the genetic data (SNPs of Triatoma dimidiata) was captured using geostatistics. Specifically, a modified semivariogram analysis was performed to characterize the SNP data and help elucidate the movement of the vector within two villages. For both villages, the SNP information showed strong spatial autocorrelation, albeit with different geostatistical characteristics (sills, ranges, and nuggets). These metrics were leveraged to create risk maps suggesting that the more forested village had a sylvatic source of infestation, while the other village had a domestic/peridomestic source. This initial exploration into using Big Data to analyze disease risk shows that novel statistical tools, and modifications of existing ones, can improve the assessment of risk at a fine scale.
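A hedged sketch of the hypergeometric-PMF fitness idea: a candidate feature combination is scored by how surprising the overlap between its carriers and the infested class is under the hypergeometric PMF, using scipy. The data are synthetic, and a full EA would evolve the feature subsets that are scored by hand here.

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
N_HOUSEHOLDS, N_FEATURES = 500, 12
X = rng.integers(0, 2, size=(N_HOUSEHOLDS, N_FEATURES))   # binary survey answers
# Planted interaction: features 0 AND 3 drive infestation, plus background noise.
infested = ((X[:, 0] & X[:, 3]) | (rng.random(N_HOUSEHOLDS) < 0.05)).astype(bool)

def fitness(feature_subset):
    carriers = X[:, feature_subset].all(axis=1)    # households with all features on
    M = N_HOUSEHOLDS                               # population size
    n = int(infested.sum())                        # infested households overall
    N = int(carriers.sum())                        # carriers of the combination
    k = int((carriers & infested).sum())           # infested carriers
    if N == 0 or k <= n * N / M:                   # no carriers or no enrichment
        return 0.0
    # The rarer the overlap under the hypergeometric PMF, the fitter the combo.
    return -hypergeom.logpmf(k, M, n, N)

print(fitness([0, 3]), fitness([5, 7]))   # the planted pair scores far higher
```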
50

The Role of Uncertainty in Categorical Perception Utilizing Statistical Learning in Robots

Powell, Nathaniel V. 01 January 2016 (has links)
At the heart of statistical learning lies the concept of uncertainty. Embodied agents such as robots and animals must likewise address uncertainty, as sensation is always only a partial reflection of reality. This thesis addresses the role that uncertainty can play in a central building block of intelligence: categorization. Cognitive agents are able to perform tasks like categorical perception through physical interaction (active categorical perception; ACP), or passively at a distance (distal categorical perception; DCP). It is possible that the former scaffolds the learning of the latter. However, it is unclear whether ACP indeed scaffolds DCP in humans and animals, or how a robot could be trained to likewise learn DCP from ACP. Here we demonstrate a method for doing so which involves uncertainty: robots perform ACP when uncertain and DCP when certain. Furthermore, we demonstrate that robots trained in this manner are more competent at categorizing novel objects than robots trained to categorize in other ways. This suggests that such a mechanism would also be useful for humans and animals, who may be employing some version of it.
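A minimal sketch of uncertainty-gated categorization: the robot categorizes at a distance (DCP) when its distal estimate is confident, and falls back to physical interaction (ACP) when the estimate's entropy is high. The sensor models and threshold are illustrative assumptions.

```python
import math
import random

def distal_estimate(obj):
    """Noisy vision-like estimate of P(category A), available at a distance."""
    return min(1.0, max(0.0, obj["p_true"] + random.gauss(0.0, 0.25)))

def active_estimate(obj):
    """Slower but more reliable estimate obtained by physical interaction."""
    return min(1.0, max(0.0, obj["p_true"] + random.gauss(0.0, 0.05)))

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def categorize(obj, threshold=0.9):
    p = distal_estimate(obj)
    if entropy(p) > threshold:        # too uncertain: fall back to interaction
        return ("A" if active_estimate(obj) >= 0.5 else "B"), "ACP"
    return ("A" if p >= 0.5 else "B"), "DCP"

for p_true in (0.05, 0.5, 0.95):
    print(p_true, categorize({"p_true": p_true}))
```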
