41

Utilizing Swarm Intelligence Algorithms for Pathfinding in Games

Kelman, Alexander January 2017 (has links)
Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are two Swarm Intelligence algorithms commonly used for optimization. Swarm Intelligence relies on agents that possess only fragmented knowledge, a concept not often exploited in games. The aim of this study is to investigate whether these Swarm Intelligence algorithms offer any benefits over standard pathfinding algorithms such as A* in a game setting. Because games often involve dynamic environments with mobile agents, all experiments were conducted with dynamic destinations. The algorithms were measured on the length of the path found and the time taken to calculate it, and were implemented with minor modifications to function better in a grid-based environment. Ant Colony Optimization was modified in how pheromone is distributed in the dynamic environment, so that the algorithm can path towards a mobile target, while Particle Swarm Optimization was given fixed start positions and velocities to enlarge the initial search space, together with modifications to increase particle diversity. The experimental results showed that the Swarm Intelligence algorithms performed very well in terms of calculation speed, but they were not able to match the path optimality of A*. Their implementations can be improved further, yet they show potential to be useful in games.
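As a companion to the abstract above, the following is a minimal sketch (not the thesis' implementation) of how a grid-based ACO pathfinder can bias each ant's step towards a moving target by re-evaluating the heuristic term against the target's current cell. The function names, the Manhattan-distance heuristic and the parameter values ALPHA and BETA are assumptions made for illustration only.

```python
import random

ALPHA, BETA = 1.0, 2.0  # assumed pheromone / heuristic weights

def neighbors(cell, grid):
    """4-connected free neighbors of a grid cell (0 = free, 1 = blocked)."""
    x, y = cell
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in cand
            if 0 <= cx < len(grid) and 0 <= cy < len(grid[0]) and grid[cx][cy] == 0]

def choose_next(cell, target, grid, pheromone, visited):
    """Pick the next cell with probability proportional to tau^ALPHA * eta^BETA,
    where eta is recomputed against the target's *current* position."""
    options = [n for n in neighbors(cell, grid) if n not in visited] or neighbors(cell, grid)
    weights = []
    for n in options:
        eta = 1.0 / (1.0 + abs(n[0] - target[0]) + abs(n[1] - target[1]))  # Manhattan bias
        weights.append((pheromone.get((cell, n), 1.0) ** ALPHA) * (eta ** BETA))
    return random.choices(options, weights=weights, k=1)[0]

# Example: one step on a 5x5 empty grid toward a target that may move each tick.
grid = [[0] * 5 for _ in range(5)]
print(choose_next((0, 0), target=(4, 3), grid=grid, pheromone={}, visited={(0, 0)}))
```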
42

Global supply chain optimization : a machine learning perspective to improve Caterpillar's logistics operations

Veluscek, Marco January 2016 (has links)
Supply chain optimization is one of the key components of the effective management of a company with a complex manufacturing process and distribution network. Companies with a global presence in particular are motivated to optimize their distribution plans in order to keep their operating costs low and competitive. Changing conditions in the global market and volatile energy prices increase the need for an automatic decision and optimization tool. In recent years, many techniques and applications have been proposed to address the problem of supply chain optimization. However, such techniques are often too problem-specific or too knowledge-intensive to be implemented as an inexpensive, easy-to-use computer system. The effort required to implement an optimization system for a new instance of the problem appears to be quite significant: the development process requires the involvement of expert personnel and the level of automation is low. The aim of this project is to develop a set of strategies capable of increasing the level of automation when developing a new optimization system. An increased level of automation is achieved by focusing on three areas: multi-objective optimization, optimization algorithm usability, and optimization model design. A literature review highlighted the strong interest of the research community in the problem of multi-objective optimization. However, the review also emphasized a lack of standardization in the area and an insufficient understanding of the relationship between multi-objective strategies and problems. Experts in optimization and artificial intelligence are interested in improving the usability of the most recent optimization algorithms; they voiced the concern that the large number of variants and parameters characterizing such algorithms limits their applicability in real-world environments, and see these characteristics as the root cause of the low adoption of the most recent optimization algorithms in industrial applications. A crucial task in the development of an optimization system is the design of the optimization model. It is one of the most complex tasks in the development process, yet it is still performed mostly manually; its importance and complexity strongly suggest the development of tools to aid the design of optimization models. To address these challenges, the problem of multi-objective optimization is considered first and the most widely adopted techniques to solve it are identified. These techniques are analyzed and described in detail to increase the level of standardization in the area, and empirical evidence is highlighted to suggest what type of relationship exists between strategies and problem instances. Regarding the optimization algorithm, a classification method is proposed to improve its usability and computational requirements by automatically tuning one of its key parameters, the termination condition: the method assesses the problem complexity and automatically assigns the termination condition that minimizes runtime. The runtime of the optimization system has been reduced by more than 60%, and arguably the usability of the algorithm has been improved as well, since one of the key configuration tasks can now be completed automatically. Finally, a system is presented to aid the definition of the optimization model through regression analysis. The purpose of the method is to gather as much knowledge about the problem as possible so that the task of defining the optimization model requires less user involvement. It is estimated that the application of the proposed algorithm could have saved almost 1,000 man-weeks over the course of the project. The developed strategies have been applied to the problem of Caterpillar's global supply chain optimization. This thesis also describes the process of developing an optimization system for Caterpillar, highlights the challenges and research opportunities identified while undertaking this work, and presents the optimization model designed for Caterpillar's supply chain together with the implementation details of the Ant Colony System, the algorithm selected to optimize the supply chain. The system is now used to design the distribution plans of more than 7,000 products and has improved Caterpillar's marginal profit on these products by 4.6% on average.
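The abstract names the Ant Colony System (ACS) as the selected algorithm and describes an automatically tuned termination condition. The sketch below is not Caterpillar's system; it only illustrates the two rules that distinguish ACS from basic ant algorithms (the pseudo-random proportional choice and the local pheromone decay), plus a toy stand-in for choosing an iteration budget from a measure of problem complexity. All names and constants (Q0, ALPHA, BETA, RHO, TAU0, termination_budget) are assumptions.

```python
import random

Q0, ALPHA, BETA, RHO, TAU0 = 0.9, 1.0, 2.0, 0.1, 0.01  # assumed ACS parameters

def acs_choose(current, candidates, tau, eta):
    """Pseudo-random proportional rule: exploit the best arc with probability Q0,
    otherwise sample proportionally to tau^ALPHA * eta^BETA."""
    scored = [(j, (tau[(current, j)] ** ALPHA) * (eta[(current, j)] ** BETA)) for j in candidates]
    if random.random() < Q0:
        return max(scored, key=lambda s: s[1])[0]
    total = sum(s for _, s in scored)
    return random.choices([j for j, _ in scored], weights=[s / total for _, s in scored])[0]

def local_update(tau, arc):
    """ACS local rule: decay the arc just used so later ants diversify."""
    tau[arc] = (1 - RHO) * tau[arc] + RHO * TAU0

def termination_budget(n_variables):
    """Toy stand-in for a learned termination condition: larger problems
    get a larger, but sub-linear, iteration budget."""
    return int(50 * n_variables ** 0.5)

# Toy usage on a two-arc decision plus a budget for a 7,000-product plan.
tau = {("depot", "plant-1"): 0.02, ("depot", "plant-2"): 0.05}
eta = {("depot", "plant-1"): 1.0, ("depot", "plant-2"): 0.6}
print(acs_choose("depot", ["plant-1", "plant-2"], tau, eta))
print(termination_budget(7000))
```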
43

Uma abordagem distribuída e bio-inspirada para mapeamento de ambientes internos utilizando múltiplos robôs móveis / A distributed and bioinspired approach for mapping of indoor environments using multiple mobile robots

Janderson Rodrigo de Oliveira 31 March 2014 (has links)
Map-building strategies using multiple mobile robots have several advantages over strategies based on a single robot: flexibility, information gain, and a reduction in the time needed to build the environment map. In this work, a local map integration method is proposed based on inter-robot observations, considering a recent approach to environment exploration known as the Inverse Ant System-Based Surveillance System (IAS-SS). The IAS-SS strategy is inspired by biological mechanisms that define the social organization of swarm systems; specifically, it is based on a modified version of the traditional ant colony optimization algorithm. The main contribution of this work is the adaptation of an information-sharing model used in mobile sensor networks to mapping tasks. Another important contribution is the collaboration between the proposed map integration method and the multi-robot coordination strategy based on ant colony theory. This collaboration enables an exploration approach that employs a non-physical mechanism for depositing and detecting pheromone in real environments, through the concept of integrated virtual pheromones. Simulation results show that the map integration method is efficient; the experiments were performed with a variable number of mobile robots exploring indoor environments with different shapes and structures. The results obtained across the various experiments confirm that the integration process is effective and suitable for mapping the environment during exploration and surveillance tasks.
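To make the "virtual pheromone" coordination idea concrete, here is a minimal sketch (not the IAS-SS implementation) of the inverse-pheromone behaviour typical of ant-based surveillance: robots mark visited cells in a shared virtual map and prefer the neighbouring cell with the least pheromone, so the team spreads out and revisits stale areas. The grid layout, decay rate and tie-breaking rule are assumptions for illustration.

```python
import random

DECAY = 0.01  # assumed evaporation per time step

def step(robot_cell, pheromone, width, height):
    """Move one robot to its least-marked neighbour and mark the cell it leaves."""
    x, y = robot_cell
    pheromone[robot_cell] = pheromone.get(robot_cell, 0.0) + 1.0   # virtual deposit
    options = [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if 0 <= nx < width and 0 <= ny < height]
    least = min(pheromone.get(o, 0.0) for o in options)
    return random.choice([o for o in options if pheromone.get(o, 0.0) == least])

def evaporate(pheromone):
    """Global decay so old marks fade and cells become attractive again."""
    for cell in pheromone:
        pheromone[cell] = max(0.0, pheromone[cell] - DECAY)

# Two robots sharing one virtual pheromone map (the shared dictionary stands in
# for the inter-robot information sharing discussed in the abstract).
pher, robots = {}, [(0, 0), (4, 4)]
for _ in range(20):
    robots = [step(r, pher, 5, 5) for r in robots]
    evaporate(pher)
print(sorted(pher, key=pher.get, reverse=True)[:5])  # most-visited cells
```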
44

Algoritmos híbridos para la resolución del F.L.P. (Facility Layout Problem) basados en colonias de hormigas / Hybrid algorithms based on ant colonies for solving the F.L.P. (Facility Layout Problem)

Jaén Gómez, Pedro Ildefonso 07 January 2016 (has links)
[EN] The facility layout problem (FLP) for industrial plants seeks the optimal arrangement of the elements of a production system (referred to in this work as activities, meaning those elements of the production system that require space) and takes into account, among others, geometric and economic aspects. The economic aspect concerns the installation of the plant and its operation, while the geometric aspect relates to the architecture of the system. From these considerations, different formulations of the problem are derived, depending on the geometric model adopted to represent the solution and on the function to be optimized, which may include quantitative terms such as installation and operating (material-handling) costs and qualitative terms derived from the activity relationship chart of the SLP methodology. There is a certain tradition in the Teaching Unit of Construction and Industrial Architecture (currently the Teaching Unit of Industrial Constructions) of addressing this layout problem from various perspectives, which has meant that since the 1990s the author of this thesis, together with other colleagues, has implemented several computer applications of various kinds for its resolution, based, for example, on genetic algorithms or on fuzzy logic; the most recent case is the application using ACO (Ant Colony Optimization) presented in this work. In any case, these applications, often used in other research or even for teaching purposes, have provided satisfactory results both in research and in academia. In the early 2000s the regulation on fire protection in industrial buildings came into force, making compliance mandatory for the great majority of newly built facilities. The approach followed in real projects was to elaborate the layout in a first phase and, in a second phase, to apply the mandatory fire-protection regulation to the previously obtained layout, both in the industrial areas and in the non-industrial subsidiary uses other than the main one. Any layout that does not meet the regulatory criteria in all areas, whether industrial or not, lacks legal validity and is therefore not viable. In a third phase, the obtained solution is provided with the appropriate thermal, hygroscopic, acoustic and lighting conditions. Faced with this reality, all the more relevant since the entry into force of the Technical Building Code, which promotes performance-based rather than prescriptive design and discourages decoupling the design phases, we have begun to include the fire-compartmentalization criterion in the design as one more measurable objective of the quality of the final solution, and therefore one that can be optimized like any other. Accordingly, this work proposes a compartmentalization algorithm that works from the information and criteria used by fire regulations, and also defines a proposed objective function, as well as a set of parameters that make it possible to assess how this compartmentalization affects the flow of materials between the different activities. / Jaén Gómez, PI. (2015). Algoritmos híbridos para la resolución del F.L.P. (Facility Layout Problem) basados en colonias de hormigas [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59447 / TESIS
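As an illustration of the kind of layout objective discussed above (not the thesis' actual objective function), the sketch below combines a classical material-handling cost (flow times rectilinear distance) with a penalty whenever two activities with mutual flow end up in different fire compartments. The weights, the penalty form and the data layout are assumptions for illustration only.

```python
def layout_cost(positions, compartment, flows, penalty_weight=0.5):
    """positions: activity -> (x, y) centroid; compartment: activity -> sector id;
    flows: (activity_a, activity_b) -> material flow between the two activities."""
    handling, crossing = 0.0, 0.0
    for (a, b), flow in flows.items():
        (xa, ya), (xb, yb) = positions[a], positions[b]
        dist = abs(xa - xb) + abs(ya - yb)          # rectilinear distance
        handling += flow * dist
        if compartment[a] != compartment[b]:        # flow crosses a fire sector boundary
            crossing += flow
    return handling + penalty_weight * crossing

# Toy plant with three activities and two fire compartments.
positions = {"press": (0, 0), "paint": (10, 0), "store": (0, 8)}
compartment = {"press": 1, "paint": 1, "store": 2}
flows = {("press", "paint"): 30.0, ("press", "store"): 12.0}
print(layout_cost(positions, compartment, flows))
```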
45

Ant Clustering with Consensus

Gu, Yuhua 01 April 2009 (has links)
Clustering is actively used in several research fields, such as pattern recognition, machine learning and data mining. This dissertation focuses on clustering algorithms in the data mining area. Clustering algorithms can be applied to the unsupervised learning problem, which deals with finding clusters in unlabeled data. Most clustering algorithms require the number of cluster centers to be known in advance; however, this is often not realistic for real-world applications, since in most cases this information is not available. A second question is, once clusters are found by an algorithm, do we believe they are exactly the right ones, or do better ones exist? In this dissertation, we present two new Swarm Intelligence based approaches to data clustering that address these issues. Swarm-based approaches to clustering have been shown to be able to escape local extrema by performing a form of global search, and our two newly proposed ant clustering algorithms take advantage of this. The first algorithm is a kernel-based fuzzy ant clustering algorithm using the Xie-Beni partition validity metric. It is a two-stage algorithm: in the first stage, ants move the cluster centers in feature space, and the cluster centers found by the ants are evaluated using a reformulated kernel-based Xie-Beni cluster validity metric. We found that, when provided with more clusters than exist in the data, our new ant-based approach produces a partition with empty and/or very lightly populated clusters; the second stage of the algorithm then automatically detects the number of clusters in a data set by using threshold solutions. The second ant clustering algorithm, using chemical recognition of nestmates, is a combination of an ant-based algorithm and a consensus clustering algorithm. It is a two-stage algorithm that requires no initial knowledge of the number of clusters. The main contributions of this work are to use the ability of an ant-based clustering algorithm to determine the number of cluster centers and refine them, and then to apply a consensus clustering algorithm to obtain a higher-quality final solution. We also introduce an ensemble ant clustering algorithm that finds a consistent number of clusters given appropriate parameters, and we propose a modified online ant clustering algorithm to handle clustering of large data sets. To our knowledge, we are the first to use consensus to combine multiple ant partitions to obtain robust clustering solutions. Experiments were done with twelve data sets, including benchmark data sets, two artificially generated data sets and two magnetic resonance image brain volumes. The results show how the ant clustering algorithms play an important role in finding the number of clusters and in providing useful information for consensus clustering to locate the optimal clustering solutions. We also conducted a wide range of comparative experiments that demonstrate the effectiveness of the new approaches.
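For readers unfamiliar with consensus clustering, here is a minimal, generic sketch (not the dissertation's method) of one common way to combine several partitions: a co-association matrix that counts how often each pair of points lands in the same cluster across runs, followed by a simple grouping of pairs that co-occur more than a threshold. The threshold value and the union-find grouping are assumptions for illustration.

```python
import numpy as np

def co_association(partitions, n_points):
    """partitions: list of label arrays, one per clustering run (e.g. ant partitions)."""
    co = np.zeros((n_points, n_points))
    for labels in partitions:
        labels = np.asarray(labels)
        co += (labels[:, None] == labels[None, :]).astype(float)
    return co / len(partitions)

def consensus_clusters(co, threshold=0.5):
    """Group points whose co-association exceeds the threshold (union-find)."""
    n = co.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if co[i, j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    return [roots.index(r) for r in roots]  # relabel each group by its first point

# Three toy partitions of 6 points; the runs mostly agree on {0,1,2} vs {3,4,5}.
runs = [[0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0], [0, 0, 1, 1, 1, 1]]
print(consensus_clusters(co_association(runs, 6)))
```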
46

ACODV : Ant Colony Optimisation Distance Vector routing in ad hoc networks

Du Plessis, Johan 11 April 2007 (has links)
A mobile ad hoc network is a collection of wireless mobile devices which dynamically form a temporary network without using any existing network infrastructure or centralised administration. Each node in the network effectively becomes a router and forwards packets towards the packet's destination node. Ad hoc networks are characterised by frequently changing network topology, multi-hop wireless connections and the need for dynamic, efficient routing protocols. This work considers the routing problem in a network of uniquely addressable sensors. These networks are encountered in many industrial applications, where the aim is to relay information from a collection of data-gathering devices deployed over an area to central points. The routing problem in such networks is characterised by: the overarching requirement for low power consumption, as battery-powered sensors may be required to operate for years without battery replacement; an emphasis on reliable rather than real-time communication, since it is more important for packets to arrive reliably than to arrive quickly; and very scarce processing and memory resources, as these sensors are often implemented on small low-power microprocessors. This work provides overviews of routing protocols in ad hoc networks, swarm intelligence, and swarm intelligence applied to ad hoc routing. Various mechanisms that are commonly encountered in ad hoc routing are experimentally evaluated under conditions as close to real life as possible; where possible, enhancements to the mechanisms are suggested and evaluated. Finally, a routing protocol suitable for such low-power sensor networks is defined and benchmarked in various scenarios against the Ad hoc On-Demand Distance Vector (AODV) algorithm. / Dissertation (MSc)--University of Pretoria, 2005. / Computer Science / Unrestricted
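The sketch below is not the ACODV protocol itself; it only illustrates the routing-table shape typical of ant-based distance-vector schemes: per destination, a pheromone value for each neighbour, with probabilistic next-hop selection and reinforcement by returning (backward) ants in proportion to path length. The class and field names, the reinforcement rule and the evaporation constant are assumptions.

```python
import random

EVAPORATION = 0.05  # assumed per-update decay

class AntRoutingTable:
    def __init__(self):
        # table[destination][neighbour] -> pheromone
        self.table = {}

    def next_hop(self, destination, neighbours):
        """Choose a neighbour with probability proportional to its pheromone."""
        row = self.table.setdefault(destination, {})
        weights = [row.get(n, 1.0) for n in neighbours]
        return random.choices(neighbours, weights=weights)[0]

    def reinforce(self, destination, neighbour, hops):
        """Backward ant: reward the neighbour it returned through, more for
        shorter paths, and let the other entries evaporate slightly."""
        row = self.table.setdefault(destination, {})
        for n in row:
            row[n] *= (1.0 - EVAPORATION)
        row[neighbour] = row.get(neighbour, 1.0) + 1.0 / hops

# A node with neighbours A and B learns that B leads to the sink in 3 hops.
rt = AntRoutingTable()
rt.reinforce("sink", "B", hops=3)
print(rt.next_hop("sink", ["A", "B"]))   # now biased toward B
```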
47

Statistical methods for imaging data, imaging genetics and sparse estimation in linear mixed models

Opoku, Eugene A. 21 October 2021 (has links)
This thesis presents research focused on developing statistical methods, with emphasis on techniques that can be used for the analysis of data in imaging studies and on sparse estimation for applications to high-dimensional data. The first contribution addresses the pixel/voxel-labeling problem for spatial hidden Markov models in image analysis. We formulate a Gaussian spatial mixture model with a Potts model used as a prior for the mixture allocations of the latent states. Jointly estimating the model parameters, the discrete state variables and the number of states (number of mixture components) is recognized as a difficult combinatorial optimization problem. To overcome the drawbacks associated with local algorithms, we implement and compare iterated conditional modes (ICM), simulated annealing (SA) and a hybrid of ICM with ant colony system optimization (ACS-ICM) for pixel labelling, parameter estimation and mixture component estimation. In the second contribution, we develop the ACS-ICM algorithm for spatiotemporal modeling of combined MEG/EEG data to compute estimates of the neural source activity. We consider a Bayesian finite spatial mixture model with a Potts model as a spatial prior and implement ACS-ICM for simultaneous point estimation and model selection for the number of mixture components. Our approach is evaluated using simulation studies and an application examining the visual response to scrambled faces. In addition, we develop a nonparametric bootstrap for interval estimation to account for uncertainty in the point estimates. In the third contribution, we present sparse estimation strategies in the linear mixed model (LMM) for longitudinal data. We address the problem of estimating the fixed-effects parameters of the LMM when the model is sparse and the predictors are correlated. We propose pretest and shrinkage estimation strategies and derive their asymptotic properties. Simulation studies are performed to compare the numerical performance of the Lasso and adaptive Lasso estimators with the pretest and shrinkage ridge estimators. The methodology is evaluated through an application to high-dimensional data examining effective brain connectivity and genetics. In the fourth and final contribution, we conduct an imaging genetics study to explore how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer's disease. We analyze longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) and genetic data obtained from a sample of 111 subjects, with a total of 319 rs-fMRI scans, from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A Dynamic Causal Model (DCM) is fit to the rs-fMRI scans to estimate effective brain connectivity within the DMN, which is then related to a set of single nucleotide polymorphisms (SNPs) contained in an empirical disease-constrained set. We relate the longitudinal effective brain connectivity estimated using spectral DCM to the SNPs using both linear mixed effect (LME) models and function-on-scalar regression (FSR). / Graduate
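To illustrate the building block that the ACS-ICM hybrid above starts from, here is a minimal, generic sketch of a single iterated-conditional-modes (ICM) sweep under a Gaussian likelihood and a Potts prior: each pixel takes the label minimising (y - mu_k)^2 / (2 sigma_k^2) - beta * (number of neighbours with the same label). This is not the thesis' hybrid algorithm; beta, the fixed class parameters and the crude initialisation are assumptions for illustration.

```python
import numpy as np

def icm_sweep(image, labels, means, variances, beta=1.0):
    """One in-place ICM pass over a 2-D image with K Gaussian classes."""
    h, w = image.shape
    K = len(means)
    for i in range(h):
        for j in range(w):
            costs = np.zeros(K)
            for k in range(K):
                data_term = (image[i, j] - means[k]) ** 2 / (2.0 * variances[k])
                same = sum(labels[a, b] == k
                           for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                           if 0 <= a < h and 0 <= b < w)
                costs[k] = data_term - beta * same   # Potts prior favours agreement
            labels[i, j] = int(np.argmin(costs))
    return labels

# Tiny example: a noisy two-level image, two classes with assumed parameters.
rng = np.random.default_rng(0)
truth = np.zeros((8, 8)); truth[:, 4:] = 1.0
noisy = truth + rng.normal(0.0, 0.3, truth.shape)
labels = (noisy > 0.5).astype(int)                   # crude initialisation
print(icm_sweep(noisy, labels, means=[0.0, 1.0], variances=[0.09, 0.09]))
```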
48

Heuristické řešení plánovacích problémů / Heuristic Solving of Planning Problems

Novotná, Kateřina January 2013 (has links)
This thesis deals with the implementation of metaheuristic algorithms in the Drools Planner, an open-source tool for solving optimization problems. The work describes the design and implementation of the Ant Colony Optimization metaheuristic in the Drools Planner. The algorithm is evaluated using the Drools Planner benchmark on different kinds of optimization problems.
49

Ant Colony Algorithms for the Resolution of Semantic Searches in P2P Networks

Krynicki, Kamil Krzysztof 01 March 2016 (has links)
Tesis por compendio / [EN] The long-lasting trend towards distributing load and resources in the field of computation has found its way into computer networks via the concept of peer-to-peer (P2P) connectivity. P2P is a symmetrical model, in which each network node is given a comparable range of capacities and resources. It stands in stark contrast to the classical, strongly asymmetrical client-server approach. P2P, originally considered only a complementary, server-side structure to the straightforward client-server model, has been shown to have substantial potential of its own, with multiple widely known benefits: good fault tolerance and recovery, satisfactory scalability and intrinsic load distribution. However, contrary to client-server, P2P networks require sophisticated solutions on all levels, ranging from network organization to resource location and management. In this thesis we address one of the key issues of P2P networks: performing efficient resource searches of a semantic nature under realistic, dynamic conditions. There have been numerous solutions to this matter, with evolutionary, stigmergy-based and simple computational foci, but few attempt to resolve the full range of challenges this problem entails. To name a few: real-life P2P networks are rarely static; nodes disconnect, reconnect and change their content. In addition, a trivial incorporation of semantic searches into well-known algorithms causes a significant decrease in search efficiency. In our research we build a solution incrementally, starting with the classic Ant Colony System (ACS) within the Ant Colony Optimization (ACO) metaheuristic. ACO is an algorithmic framework for solving combinatorial optimization problems that fits the conditions of the problem very well, although it does not provide an immediate solution to any of the aforementioned issues. First, we propose an efficient ACS variant for structured (hypercube-structured) P2P networks by adding a path post-processing algorithm, which we called Tabu Route Optimization (TRO). Next, we resolve the issue of network dynamism with an ACO-compatible information diffusion approach. We then incorporate the semantic component of the searches: this initial approximation to the problem was achieved by allowing ACS to differentiate between search types through the pheromone-per-concept idea. We called the outcome of this merger Routing Concept ACS (RC-ACS). RC-ACS is a robust, static multi-pheromone implementation of ACS; however, it allowed us to conclude that the pheromone-per-concept approach offers only limited scalability and cannot be considered a global solution. Further progress was made in this respect when we introduced to RC-ACS our novel idea: dynamic pheromone creation, which replaces the static one-to-one assignment. We called the resulting algorithm Angry Ant Framework (AAF). In AAF, new pheromone levels are created as needed and during the search, rather than prior to it. The final step was to enable AAF not only to create pheromone levels, but also to reassign them in order to optimize pheromone usage. The resulting algorithm is called EntropicAAF, and it has been evaluated as one of the top-performing algorithms for P2P semantic searches under all conditions. / Krynicki, KK. (2016). Ant Colony Algorithms for the Resolution of Semantic Searches in P2P Networks [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61293 / TESIS / Premios Extraordinarios de tesis doctorales / Compendio
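The following sketch is not the AAF/EntropicAAF implementation; it only contrasts the "pheromone per concept" routing table described above with the dynamic-creation idea: instead of allocating a pheromone column for every semantic concept up front, a column is created lazily the first time a query about that concept passes through the node. The class structure, the reward rule and the evaporation constant are assumptions for illustration.

```python
import random

class NodeRoutingTable:
    def __init__(self, neighbours):
        self.neighbours = list(neighbours)
        self.pheromone = {}          # concept -> {neighbour: tau}, created on demand

    def _column(self, concept):
        """Lazily create the pheromone level for a newly seen query concept."""
        return self.pheromone.setdefault(concept, {n: 1.0 for n in self.neighbours})

    def forward(self, concept):
        """Route a semantic query using the pheromone column of its concept."""
        col = self._column(concept)
        return random.choices(self.neighbours, weights=[col[n] for n in self.neighbours])[0]

    def reward(self, concept, neighbour, resources_found):
        """Reinforce the neighbour that led to hits for this concept."""
        col = self._column(concept)
        col[neighbour] += resources_found
        for n in col:                # mild evaporation keeps the table adaptive
            col[n] *= 0.95

node = NodeRoutingTable(["peer-a", "peer-b", "peer-c"])
node.reward("music", "peer-b", resources_found=4)
print(node.forward("music"), len(node.pheromone), "concept column(s) created")
```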
50

Circuit Design Methods with Emerging Nanotechnologies

Zheng, Yexin 28 December 2009 (has links)
As complementary metal-oxide-semiconductor (CMOS) technology faces more and more severe physical barriers on the path of continuous feature-size scaling, innovative nano-scale devices and other post-CMOS technologies have been developed to enhance future circuit design and computation. These nanotechnologies have shown promising potential to achieve order-of-magnitude improvements in performance and integration density. The substitution of CMOS transistors with nano-devices is expected not only to continue the exponential projection of Moore's Law, but also to raise significant challenges and opportunities, especially in the field of electronic design automation. The major obstacles that designers experience with emerging nanotechnology design include: i) the existing computer-aided design (CAD) approaches developed in the context of conventional CMOS Boolean design cannot be directly employed in the nanoelectronic design process, because the intrinsic electrical characteristics of many nano-devices are not best suited to Boolean implementations but demonstrate strong capability for implementing non-conventional logic such as threshold logic and reversible logic; ii) due to the density and size factors of nano-devices, the defect rate of nanoelectronic systems is much higher than that of conventional CMOS systems, so existing design paradigms cannot guarantee design quality and lead to even worse results, with high failure ratios. Motivated by the compelling potential and design challenges of emerging post-CMOS technologies, this dissertation focuses on fundamental design methodologies to effectively and efficiently achieve high-quality nanoscale design. A novel programmable logic element (PLE) is first proposed to explore the versatile functionalities of threshold gates (TGs) and multi-threshold threshold gates (MTTGs). This PLE structure can realize all three- or four-variable logic functions by configuring binary control bits, and is the first single threshold-logic structure that provides complete Boolean logic implementation. Based on the PLEs, a reconfigurable architecture is constructed to offer dynamic reconfigurability with little or no reconfiguration overhead, thanks to the intrinsic self-latching property of nanopipelining; our reconfiguration data generation algorithm can further reduce the reconfiguration cost. To take full advantage of such threshold logic design using emerging nanotechnologies, we also developed a combinational equivalence checking (CEC) framework for threshold logic design. Based on the features of threshold logic gates and circuits, different techniques for formulating a given threshold logic circuit in conjunctive normal form (CNF) are introduced to facilitate efficient SAT-based verification. Evaluated on mainstream benchmarks, our hybrid algorithm, which takes into account both the input symmetry and the input weight order of threshold gates, can efficiently generate CNF formulas in terms of both SAT solving time and CNF generation time. The reversible logic synthesis problem is then considered, with a focus on efficient synthesis heuristics that provide high-quality results within a reasonable computation time. We have developed a weighted directed graph model for function representation and complexity measurement, and constructed an atomic transformation to associate the variation in function complexity with reversible gates. The efficiency of our heuristic lies in maximally decreasing the function complexity during the synthesis steps as well as in its capability to climb out of local optima. Thereafter, swarm intelligence, a machine learning technique, is employed in the search space exploration for reversible logic synthesis, which achieves further performance improvements. To tackle the high defect rate of the emerging nanotechnology manufacturing process, we have developed a novel defect-aware logic mapping framework for nanowire-based PLA architectures via Boolean satisfiability (SAT). PLA defects of various types are formulated as covering and closure constraints, and the defect-aware logic mapping is then solved efficiently using available SAT solvers. This approach can generate valid logic mappings at defect rates as high as 20%, and is universally suitable for various nanoscale PLAs, including AND/OR and NOR/NOR structures. In summary, this work provides some initial attempts to address two major problems confronting future nanoelectronic system designs: the development of electronic design automation tools and reliability issues. Many challenging open questions remain in this emerging and promising area; we hope our work can lay stepping stones for nano-scale circuit design optimization by exploiting the distinctive characteristics of emerging nanotechnologies. / Ph. D.
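As background to the abstract above (not the dissertation's PLE design), the sketch below shows what a linear threshold gate computes, output 1 iff the weighted sum of inputs reaches the threshold, and how different weight/threshold configurations of the same structure realise different Boolean functions, which is the reconfigurability idea behind a programmable threshold-logic element. The specific configurations shown are textbook examples, not the thesis' encoding.

```python
def threshold_gate(inputs, weights, threshold):
    """Classic linear threshold gate: 1 if sum(w_i * x_i) >= T, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def truth_table(weights, threshold, n=3):
    """Enumerate all n-bit input vectors and the gate's output for each."""
    rows = []
    for m in range(2 ** n):
        x = [(m >> k) & 1 for k in range(n)]
        rows.append((tuple(x), threshold_gate(x, weights, threshold)))
    return rows

# One physical structure, three configurations (assumed weight/threshold pairs):
configs = {
    "AND3":      ([1, 1, 1], 3),   # all inputs must be 1
    "OR3":       ([1, 1, 1], 1),   # any single input suffices
    "MAJORITY3": ([1, 1, 1], 2),   # at least two inputs
}
for name, (w, t) in configs.items():
    ones = [x for x, y in truth_table(w, t) if y == 1]
    print(name, "->", ones)
```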
