231

Flexible cross layer design for improved quality of service in MANETs

Kiourktsidis, Ilias January 2011
Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics. Several delay-sensitive applications are starting to appear in these kinds of networks, so a key concern is guaranteeing Quality of Service (QoS) in such a constantly changing communication environment. The classical QoS-aware solutions used to date in wired and infrastructure wireless networks cannot achieve the necessary performance in MANETs, and the specialized protocols designed for multihop ad hoc networks offer only basic connectivity with limited delay awareness; the mobility factor in MANETs makes them even less suitable. Protocols and solutions have emerged at almost every layer of the protocol stack. Most research efforts agree that, to optimize protocol performance in such a dynamic environment, additional information about the status of the network must be available; hence, many cross-layer design approaches have appeared. Cross-layer design has major advantages and its use is clearly necessary. However, it also conceals risks such as architectural instability and design inflexibility, and its aggressive use excessively increases the cost of deployment and complicates both maintenance and upgrade of the network. Autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to the unavailability of cross-layer information, can reduce this dependence on cross-layer design; properties such as predicting dynamic conditions and adapting to them are likewise important. A routing decision algorithm based on Bayesian inference for predicting path quality is proposed here, and its accurate prediction capabilities and efficient use of the available cross-layer information are presented. Furthermore, an adaptive mechanism based on the Genetic Algorithm (GA) is used to control the flow of data at the transport layer. This flow control mechanism inherits the GA's optimization capabilities without needing any details of the network conditions, thus reducing the dependence on cross-layer information. Finally, it is illustrated how Bayesian inference can be used to suggest configuration parameter values to protocols at other layers in order to improve their performance.
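As an illustration of the kind of Bayesian path-quality prediction the abstract describes, the following Python sketch maintains a Beta-Bernoulli posterior over each link's packet-delivery probability and scores candidate paths by the product of posterior means. The model, priors, and numbers are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch: Bayesian inference over link quality with a
# conjugate Beta-Bernoulli update. Each link keeps a Beta(alpha, beta)
# posterior over its packet-delivery probability; a path's predicted
# quality is the product of its links' posterior means.

class LinkEstimator:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-count of successes
        self.beta = beta    # prior pseudo-count of failures

    def observe(self, delivered: bool):
        # Conjugate update: success increments alpha, failure increments beta
        if delivered:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Posterior mean of the delivery probability
        return self.alpha / (self.alpha + self.beta)

def path_quality(links):
    q = 1.0
    for link in links:
        q *= link.mean()
    return q

# Usage: route over the candidate path with the highest predicted quality
a, b, c = LinkEstimator(), LinkEstimator(), LinkEstimator()
for ok in [True, True, False, True]:
    a.observe(ok)
best_path = max([[a, b], [a, c]], key=path_quality)
```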
232

Evolution of grasping behaviour in anthropomorphic robotic arms with embodied neural controllers

Massera, Gianluca January 2012
The work reported in this thesis focuses on synthesising neural controllers, for anthropomorphic robots that are able to manipulate objects, through an automatic design process based on artificial evolution. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, and the robot's skills evolve as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensor readings (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios significantly more complex than those to which it is typically applied (in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task). The results obtained indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This also supports the hypothesis that the human central nervous system (CNS) does not necessarily maintain internal models of the limbs (without excluding the possibility that it possesses such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills are less complex, and more complex behaviours become easier to design because the underlying controller of the arm/hand structure is simpler. Moreover, the results also show how evolved robots exploit sensory-motor coordination to accomplish their tasks.
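A minimal Python sketch of the evolutionary loop described above: a population of neural-controller weight vectors is evaluated, the fittest are kept, and mutated copies refill the population. The fitness function here is a stand-in rewarding a target posture; in the actual experiments, fitness comes from simulating the arm/hand interacting with objects.

```python
import numpy as np

# Sketch of evolving a small feedforward controller's weights with a
# (mu + lambda)-style scheme: truncation selection plus Gaussian mutation.
# Network sizes and the fitness function are illustrative assumptions.

def controller(weights, sensors, n_hidden=8, n_motor=4):
    n_in = sensors.size
    W1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = weights[n_in * n_hidden:].reshape(n_hidden, n_motor)
    return np.tanh(np.tanh(sensors @ W1) @ W2)  # motor commands in [-1, 1]

def fitness(weights):
    # Placeholder: reward motor output close to a target posture.
    sensors = np.ones(6)
    target = np.array([0.5, -0.2, 0.1, 0.3])
    return -np.sum((controller(weights, sensors) - target) ** 2)

rng = np.random.default_rng(0)
n_genes = 6 * 8 + 8 * 4
pop = rng.normal(0, 1, (50, n_genes))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]        # keep the 10 best
    children = elite[rng.integers(0, 10, 40)]    # clone elites at random
    pop = np.vstack([elite, children + rng.normal(0, 0.1, (40, n_genes))])
```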
233

Bio-inspired optimization algorithms for smart antennas

Zuniga, Virgilio January 2011
This thesis studies the effectiveness of bio-inspired optimization algorithms in controlling adaptive antenna arrays. Smart antennas are able to automatically extract the desired signal from interfering signals and external noise. The angular pattern depends on the number of antenna elements, their geometrical arrangement, and their relative amplitudes and phases. In the present work, different antenna geometries are tested and compared when their array weights are optimized by different techniques. First, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms are used to find the best set of phases between antenna elements to obtain a desired antenna pattern. This pattern must meet several constraints, for example maximizing the power of the main lobe in a desired direction while keeping nulls towards interferers. A series of experiments shows that the PSO achieves better and more consistent radiation patterns than the GA in terms of the total area of the antenna pattern. A second set of experiments uses the Signal-to-Interference-plus-Noise Ratio as the fitness function of the optimization algorithms to find the array weights that configure a rectangular array. The results suggest a performance advantage for the PSO, which takes fewer iterations and thus has a lower computational cost. During the development of this thesis, it was found that the initial states and particular parameters of the optimization algorithms affected their overall outcome. The third part of this work therefore deals with the meta-optimization of these parameters, to achieve the best results independently of the particular initial parameters. Four algorithms were studied: Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, and Hill Climbing. It was found that the meta-optimization algorithms Local Unimodal Sampling and Pattern Search performed best at setting the initial parameters to obtain the best performance from the bio-inspired methods studied.
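The phase-only optimization from the first set of experiments can be sketched as follows, assuming a uniform linear array with half-wavelength spacing and a plain PSO. The geometry, fitness function, and parameter values are illustrative, not the thesis's setup.

```python
import numpy as np

# Sketch: PSO searching for element phases that maximise array gain toward
# a desired user while penalising gain toward an interferer.

N = 8                       # antenna elements
d = 0.5                     # element spacing in wavelengths
theta_user, theta_intf = np.radians(60), np.radians(120)

def array_gain(phases, theta):
    n = np.arange(N)
    # Array factor of a uniform linear array with per-element phase shifts
    af = np.sum(np.exp(1j * (2 * np.pi * d * n * np.cos(theta) + phases)))
    return np.abs(af) ** 2

def fitness(phases):
    return array_gain(phases, theta_user) - array_gain(phases, theta_intf)

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, (30, N))   # particle positions (phases)
v = np.zeros_like(x)                      # particle velocities
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
for it in range(200):
    g = pbest[np.argmax(pbest_f)]         # global best particle
    r1, r2 = rng.random((2, 30, N))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
```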
234

Energy system analysis

Soundararajan, Ranjith January 2017
The purpose of this thesis is to use a model to optimize the heat exchanger network for a process industry, estimating the minimum cost of the network without compromising the energy demand of each stream as far as possible, with the help of MATLAB. The optimization is done without considering stream splitting or stream combining. The first phase involves deriving a simple heat exchanger network consisting of four streams (two hot and two cold) using the traditional Pinch Analysis method. The second phase deals with randomly placing heat exchangers between the hot and cold streams and calculating the minimum cost of the network using a genetic algorithm, in which thousands of randomly created candidate networks are evolved over a series of populations.
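A hedged Python sketch (the thesis itself uses MATLAB) of the genetic-algorithm phase: a chromosome assigns a heat duty to each possible hot-cold match, residual duties are met by utilities, and the population evolves toward lower total cost. The stream data and cost coefficients are invented placeholders.

```python
import numpy as np

# Two hot and two cold streams; a chromosome holds 4 duties (kW), one per
# (hot, cold) match. Utilities cover unmet duties; infeasible over-transfer
# is penalised. All numbers are made-up placeholders.
hot_demand = np.array([800.0, 500.0])    # heat to remove from each hot stream
cold_demand = np.array([600.0, 700.0])   # heat to supply to each cold stream

def cost(genes):
    duties = np.clip(np.asarray(genes).reshape(2, 2), 0, None)
    over = (np.clip(duties.sum(axis=1) - hot_demand, 0, None).sum()
            + np.clip(duties.sum(axis=0) - cold_demand, 0, None).sum())
    hot_res = np.clip(hot_demand - duties.sum(axis=1), 0, None)
    cold_res = np.clip(cold_demand - duties.sum(axis=0), 0, None)
    n_units = np.count_nonzero(duties > 1e-6)
    # Fixed cost per exchanger + utility costs + infeasibility penalty
    return (1000.0 * n_units + 80.0 * hot_res.sum()
            + 120.0 * cold_res.sum() + 1e3 * over)

rng = np.random.default_rng(2)
pop = rng.uniform(0, 800, (60, 4))
for gen in range(300):
    order = np.argsort([cost(ind) for ind in pop])
    parents = pop[order[:20]]                        # truncation selection
    cut = rng.integers(1, 4, 40)                     # one-point crossover
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    children = np.where(np.arange(4) < cut[:, None], a, b)
    children += rng.normal(0, 20, children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])
best = min(pop, key=cost)
print(cost(best))
```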
235

Optimisation of the VARTM process

Struzziero, Giacomo January 2014
This study focuses on the development of a multi-objective optimisation methodology for the vacuum assisted resin transfer moulding (VARTM) composite processing route. Simulations of the cure and filling stages of the process have been implemented, and the corresponding heat transfer and flow-through-porous-media problems solved by means of finite element analysis. The simulations involve material sub-models describing thermal properties, cure kinetics, and viscosity evolution. A genetic algorithm, which constitutes the foundation of the optimisation, has been adapted, implemented, and tested for effectiveness on four benchmark problems. Two methodologies suitable for multi-objective optimisation of the cure and filling stages have been specified and successfully implemented. In the case of the curing stage, the optimisation aims at finding a cure profile that minimises both process time and temperature overshoot within the part. In the case of the filling stage, the thermal profile during filling, the gate locations, and the initial resin temperature are optimised to minimise the filling time and the degree of cure at the end of filling. Investigations of the design landscape for both stages have indicated the complex nature of the problems under investigation, justifying the choice of a genetic algorithm. Application of the two methodologies showed that they are highly efficient at identifying appropriate process designs, and that significant improvements over standard conditions are feasible. In the cure process, a temperature overshoot reduction of up to 75% can be achieved for a thick component, whilst for a thin part a 60% reduction in process time can be accomplished. In the filling process, a 42% reduction in filling time and a 14% reduction in the degree of cure at the end of filling can be achieved using the optimisation methodology. Stability analysis of the set of solutions for the curing stage has shown that different degrees of robustness are present among the individuals in the Pareto front. The optimisation methodology has also been integrated with an existing cost model, which allowed process cost to be considered in the optimisation of the cure stage; the optimisation resulted in process designs involving a €500 reduction in process cost. Finally, an inverse scheme has been developed based on the optimisation methodology, combining simulation and monitoring of the filling stage for on-line identification of permeability during an infusion. The methodology was tested using artificial data, and it was demonstrated that it can handle measurement noise of up to 5 s per sensor without affecting the quality of the outcome.
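The multi-objective selection at the heart of such a GA can be sketched as the extraction of the non-dominated (Pareto) front over the two cure objectives, process time and temperature overshoot. The evaluation function below is a stand-in; in the thesis, candidates are evaluated by finite element cure simulations.

```python
# Sketch of Pareto-front extraction for two minimisation objectives.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, evaluate):
    scored = [(ind, evaluate(ind)) for ind in population]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

# Stand-in evaluation: dwell temperature trades time against overshoot
def evaluate(dwell_temp):
    process_time = 2000.0 / dwell_temp      # hotter cures faster...
    overshoot = 0.05 * dwell_temp ** 1.5    # ...but overshoots more
    return (process_time, overshoot)

front = pareto_front(range(120, 200, 5), evaluate)
```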
236

Improvement of production line scheduling by the Analytic Hierarchy Process

Ohayon, Karen 19 December 2011
The industrial world is continually faced with problems of choice, and a multitude of criteria must be taken into account in resolving them. In such situations, decision-support tools come into their own. We use the Analytic Hierarchy Process (AHP) in the production field, specifically for scheduling a cardiovascular prosthesis production line. The initial parameterization of this method relies on the judgment of an expert. Although the expert has the knowledge required for a suitable setup, he remains human and will, even unintentionally, introduce a subjective element into his decisions; the resulting schedule will therefore not necessarily be optimal. This subjectivity can be reduced by using a metaheuristic, of the genetic algorithm type, to improve the parameterization by exploring solutions neighbouring those proposed by the expert.
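As a sketch of the two ingredients named in the abstract: deriving AHP priorities from an expert's pairwise comparison matrix via power iteration, and generating a "neighbouring" parameterization the way a genetic-algorithm mutation might, by perturbing judgments while preserving reciprocal symmetry. The 3x3 matrix is an invented example, not the thesis's data.

```python
import numpy as np

# Invented 3-criterion pairwise comparison matrix on Saaty's 1-9 scale;
# entry (i, j) is how much criterion i is preferred over criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def priorities(M, iters=50):
    # Power iteration toward the principal eigenvector (AHP weights)
    w = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w = M @ w
        w /= w.sum()
    return w

def mutate(M, rng, scale=0.2):
    # GA-style mutation: perturb the upper triangle within [1/9, 9],
    # keeping the reciprocal symmetry M[j, i] = 1 / M[i, j]
    N = M.copy()
    for i in range(len(M)):
        for j in range(i + 1, len(M)):
            N[i, j] = max(1/9, min(9.0, M[i, j] * np.exp(rng.normal(0, scale))))
            N[j, i] = 1.0 / N[i, j]
    return N

rng = np.random.default_rng(3)
print(priorities(A), priorities(mutate(A, rng)))
```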
237

Enhancing recall and precision of web search using genetic algorithm

Al-Dallal, Ammar Sami January 2012
Due to the rapid growth in the number of Web pages, web users encounter two main problems: many retrieved documents are not related to the user query (low precision), and many relevant documents are not retrieved (low recall). Information Retrieval (IR) is an essential technique for Web search, and different approaches and techniques have been developed. Because of its parallel mechanism in high-dimensional spaces, the Genetic Algorithm (GA) has been adopted to solve many optimization problems, IR among them. This thesis proposes a GA-based search model for retrieving HTML documents, called IR Using GA (IRUGA). It is composed of two main units. The first is the document indexing unit, which indexes the HTML documents. The second is the GA mechanism, which applies selection, crossover, and mutation operators to produce the final result, while a specially designed fitness function evaluates the documents. The performance of IRUGA is investigated using the speed of convergence of the retrieval process, precision at rank N, recall at rank N, and precision at recall N. In addition, the proposed fitness function is compared experimentally with the Okapi-BM25 function and the Bayesian inference network model function. Moreover, IRUGA is compared with traditional IR using the same fitness function to examine the time each technique requires to retrieve the documents. The new techniques developed for document representation, the GA operators, and the fitness function achieved an improvement of over 90% on the recall and precision measures, and the relevance of the retrieved documents is much higher than that of documents retrieved by the other models. Moreover, a thorough comparison of techniques applied to GA operators is performed, highlighting the strengths and weaknesses of each existing technique. Overall, IRUGA is a promising technique in the Web search domain that provides high-quality search results in terms of recall and precision.
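An illustrative Python sketch, not IRUGA's actual design: a GA whose chromosome is a vector of index-term weights and whose fitness is the mean cosine similarity between the weighted query and documents judged relevant, with selection, uniform crossover, and mutation as described above.

```python
import numpy as np

# Placeholder term-document data: 20 documents over 10 index terms, with
# the first 5 assumed relevant. Real input would come from the indexing unit.
docs = np.random.default_rng(4).random((20, 10))
relevant = docs[:5]

def fitness(weights):
    # Mean cosine similarity between weighted query and relevant documents
    q = weights / (np.linalg.norm(weights) + 1e-12)
    sims = relevant @ q / (np.linalg.norm(relevant, axis=1) + 1e-12)
    return sims.mean()

rng = np.random.default_rng(5)
pop = rng.random((40, 10))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # selection
    a = parents[rng.integers(0, 10, 30)]
    b = parents[rng.integers(0, 10, 30)]
    mask = rng.random((30, 10)) < 0.5                # uniform crossover
    children = np.where(mask, a, b)
    mutate = rng.random((30, 10)) < 0.1              # sparse mutation
    children = children + mutate * rng.normal(0, 0.05, children.shape)
    pop = np.vstack([parents, np.clip(children, 0, None)])
```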
238

Evolving Credible Facial Expressions with Interactive GAs

Smith, Nancy T. 01 January 2012
A major focus of research in computer graphics is the modeling and animation of realistic human faces. Modeling and animation of facial expressions is a very difficult task, requiring extensive manual manipulation by computer artists. Our primary hypothesis was that the use of machine learning techniques could reduce this manual labor by providing some automation to the process. The goal of this dissertation was to determine the effectiveness of using an interactive genetic algorithm (IGA) to generate realistic variations in facial expressions. An IGA's effectiveness is measured by satisfaction with the end results, including acceptable levels of user fatigue. User fatigue was measured by the rate of successful convergence, defined as achieving a sufficient fitness level as determined by the user. Upon convergence, the solution with the highest fitness value was saved for later evaluation by participants with questionnaires. The participants also rated animations that were manually created by the user for comparison. Animation in our IGA is performed by interpolating between successive face models, also known as blendshapes. The position of each blendshape's vertices is determined by a set of blendshape controls. Chromosomes map to animation sequences, where genes correspond to blendshapes. The manually created animations were also produced by manipulating the blendshape control values of successive blendshapes. Due to user fatigue, IGAs typically use a small population with the user evaluating each individual. This is a serious limitation, since there must be a sufficient number of building blocks in the initial population to converge to a good solution. One method that has been used to address this problem in the music domain is a surrogate fitness function, which serves as a filter to present a small subpopulation to the user for subjective evaluation. Our secondary hypothesis was that an IGA for the high-dimensional problem of facial animation would benefit from a large population made possible by using a neural network (NN) as a surrogate fitness function. The NN assigns a fitness value to every individual in the population, and the phenotypes of the highest rated individuals are presented to receive subjective fitness values from the user. This is a unique approach to the problem of automatic generation of facial animation. Experiments were conducted for each of the six emotions, using the optimal parameters that had been discovered. The average convergence rate was 85%. The quality of the NNs showed evidence of a correlation with convergence rates, as measured by the true positive and false positive rates. The animations with the highest subjective fitness from the final set of experiments were saved for participant evaluation. The participants gave the IGA animations an average credibility rating of 69% and the manual animations an average credibility rating of 65%, and they preferred the IGA animations an average of 54% of the time to the manual animations. The results of these experiments indicated that an IGA is effective at generating realistic variations in facial expressions that are comparable to manually created ones. Moreover, experiments that varied population size indicated that a larger population results in a higher convergence rate.
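The blendshape mechanics assumed in the text lend themselves to a compact sketch. The following Python model, with made-up mesh sizes and control values, shows how frame vertex positions can be computed as a weighted sum of blendshape offsets and how an animation interpolates the control weights between key poses; it is not the dissertation's actual pipeline.

```python
import numpy as np

# Placeholder geometry: a neutral mesh of 100 vertices plus 5 blendshape
# targets stored as per-vertex offsets from the neutral pose.
neutral = np.zeros((100, 3))
rng = np.random.default_rng(6)
deltas = rng.normal(0, 0.01, (5, 100, 3))

def pose(controls):
    # controls: one weight per blendshape, typically in [0, 1]
    return neutral + np.tensordot(controls, deltas, axes=1)

def animate(key_controls, frames_per_key=10):
    # Linearly interpolate control weights between successive key poses
    frames = []
    for a, b in zip(key_controls, key_controls[1:]):
        for t in np.linspace(0, 1, frames_per_key, endpoint=False):
            frames.append(pose((1 - t) * a + t * b))
    return frames

frames = animate([np.zeros(5), np.array([1, 0, 0.5, 0, 0]), np.zeros(5)])
```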
239

A Balanced Secondary Structure Predictor

Islam, Md Nasrul 15 May 2015
Secondary structure (SS) refers to the local spatial organization of the polypeptide backbone atoms of a protein, and its accurate prediction is a vital clue for resolving the 3D structure of a protein. SS has three components: helix (H), beta strand (E), and coil (C). Most SS predictors are imbalanced: their accuracy in predicting helices and coils is high but significantly lower for beta strands. The objective of this thesis is to develop a balanced SS predictor that achieves good accuracy on all three SS components. We propose a novel approach that combines a genetic algorithm (GA) with a support vector machine (SVM). We prepared two test datasets (CB471 and N295) to compare the performance of our predictor with SPINE X. The overall accuracy of our predictor was 76.4% and 77.2% on the CB471 and N295 datasets respectively, while SPINE X gave 76.5% overall accuracy on both.
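One plausible way to couple a GA with an SVM for balance, assuming scikit-learn is available (the thesis's exact scheme may differ): the GA tunes per-class weights so that the worst per-class accuracy, typically beta (E), is maximised. The features and labels below are random placeholders standing in for real sequence profiles.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

# Placeholder data: 300 residues, 20 features each, labels H/E/C.
rng = np.random.default_rng(7)
X = rng.normal(0, 1, (300, 20))
y = rng.choice(["H", "E", "C"], 300, p=[0.35, 0.2, 0.45])

def fitness(genes):
    # Genes are per-class SVM weights; fitness is the worst per-class
    # accuracy, so maximising it pushes toward balanced prediction.
    weights = {"H": genes[0], "E": genes[1], "C": genes[2]}
    pred = cross_val_predict(SVC(class_weight=weights), X, y, cv=3)
    accs = [np.mean(pred[y == c] == c) for c in "HEC"]
    return min(accs)

pop = rng.uniform(0.5, 5.0, (10, 3))
for gen in range(10):
    scores = [fitness(ind) for ind in pop]
    elite = pop[np.argsort(scores)[-3:]]             # keep the 3 best
    children = elite[rng.integers(0, 3, 7)] + rng.normal(0, 0.3, (7, 3))
    pop = np.clip(np.vstack([elite, children]), 0.1, None)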
240

Heuristic and metaheuristic methods for the travelling salesman problem

Burdová, Jana January 2010
This diploma thesis studies the travelling salesman problem: the salesman must visit each place exactly once and then return to the starting place, travelling the minimum total distance. The problem can be modelled in graph-theoretic terms, with places as vertices, roads as edges, and road distances as edge lengths; the optimal tour is then the shortest Hamiltonian cycle in the graph. It is a classical NP-complete problem, and no polynomial-time algorithm for it is known. The problem can instead be solved by various approximation and heuristic algorithms, which trade some solution quality for much lower computation time. This thesis is dedicated to such algorithms, for example the nearest neighbour method, the minimum spanning tree method, Christofides' algorithm, 2-opt, and genetic algorithms.
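Two of the listed heuristics are small enough to sketch directly: nearest-neighbour tour construction followed by 2-opt improvement, shown here in Python on random placeholder cities.

```python
import numpy as np

# Random placeholder instance: 30 cities in the unit square, with a
# precomputed Euclidean distance matrix.
rng = np.random.default_rng(8)
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def nearest_neighbour(start=0):
    # Greedily visit the closest unvisited city until none remain
    tour, left = [start], set(range(len(pts))) - {start}
    while left:
        nxt = min(left, key=lambda j: dist[tour[-1], j])
        tour.append(nxt)
        left.remove(nxt)
    return tour

def two_opt(tour):
    # Repeatedly reverse a segment whenever doing so shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour

tour = two_opt(nearest_neighbour())
```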
