  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hibridinis genetinis algoritmas ir jo modifikacijos kvadratinio pasiskirstymo uždaviniui spręsti / Hybrid Genetic Algorithm and its modifications for the Quadratic Assignment Problem

Milinis, Andrius 22 May 2005 (has links)
Genetic algorithms (GAs), which are based on the biological process of natural selection, are widely used in many areas of computer science, including optimization. Many simulations have demonstrated the efficiency of GAs on different optimization problems, among them bin packing, the quadratic assignment problem, graph partitioning, job-shop scheduling, set covering, the traveling salesman problem, and vehicle routing. The quadratic assignment problem (QAP) belongs to the class of NP-hard combinatorial optimization problems. One of the main operators in a GA is crossover (i.e., solution recombination); this operator plays a very important role in constructing competitive GAs. In this work, we investigate several crossover operators for the QAP, among them ULX (uniform-like crossover), SPX (swap path crossover), OPX (one-point crossover), COHX (cohesive crossover), MPX (multiple-parent crossover), and others. A comparison of these crossover operators was performed; the results show the high efficiency of the cohesive crossover.
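As a reference point for the abstract above, the QAP objective and one reading of the ULX (uniform-like) crossover can be sketched in a few lines. The repair strategy for duplicate genes is an assumption for illustration, not necessarily the thesis's exact operator.

```python
import random

def qap_cost(perm, flow, dist):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]] over all
    facility pairs, where perm maps facilities to locations."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def ulx_crossover(p1, p2, rng=random):
    """Uniform-like crossover: keep positions where the parents agree,
    fill the rest with a random choice among the parents' genes, and
    repair remaining slots with the unused facilities (assumed repair)."""
    n = len(p1)
    child = [None] * n
    used = set()
    for i in range(n):                     # 1) copy agreements
        if p1[i] == p2[i]:
            child[i] = p1[i]
            used.add(p1[i])
    for i in range(n):                     # 2) choose among parents' genes
        if child[i] is None:
            candidates = [g for g in (p1[i], p2[i]) if g not in used]
            if candidates:
                child[i] = rng.choice(candidates)
                used.add(child[i])
    unused = [g for g in range(n) if g not in used]
    rng.shuffle(unused)
    for i in range(n):                     # 3) repair duplicates
        if child[i] is None:
            child[i] = unused.pop()
    return child
```

The child is always a valid permutation, which is the invariant any QAP crossover must preserve.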
2

La mémoire dans les algorithmes à colonie de fourmis : applications à l'optimisation combinatoire et à la programmation automatique

Roux, Olivier 13 December 2001 (has links) (PDF)
In this thesis, we present the metaheuristics inspired by the foraging behavior of ants, the OCF algorithms. We compare these methods against the main known metaheuristics. To do so, we adopt the viewpoint of memory usage and present a taxonomy that extends that of the AMP (adaptive memory programming) framework. We propose two new adaptations of the ant model. The first is the ANTabu algorithm, a hybrid method for solving the quadratic assignment problem (PAQ). It combines artificial ants with a robust local search method: tabu search. The intrinsic parallelism of ant systems led us to develop a parallel model for ANTabu. This method also integrates a powerful diversification function and the use of bounds, which allow it to avoid being trapped in local optima. The second application developed is AP, an adaptation of the ant cooperation model to automatic programming. Its working mechanism is simple: at each iteration, a new population is created using the information stored in the pheromone. The advantage of this information management is that it does not rely on complex mechanisms. We compare this method against the base algorithm as defined by Koza.
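A minimal sketch of the pheromone mechanism described above. The function names and the quality-scaled deposit rule are illustrative assumptions, not the actual ANTabu update rules.

```python
import random

def construct_assignment(pher, rng=random):
    """Build a facility-to-location permutation by sampling each location
    among the still-free ones, proportionally to its pheromone value."""
    n = len(pher)
    free = list(range(n))
    perm = []
    for fac in range(n):
        weights = [pher[fac][loc] for loc in free]
        r = rng.random() * sum(weights)
        acc = 0.0
        for loc, w in zip(free, weights):
            acc += w
            if acc >= r:                  # roulette-wheel selection
                perm.append(loc)
                free.remove(loc)
                break
    return perm

def update_pheromone(pher, perm, cost, best_cost, rho=0.1):
    """Evaporate all trails, then reinforce the (facility, location) pairs
    used by `perm`, scaled by quality relative to the best known cost."""
    n = len(pher)
    for i in range(n):
        for j in range(n):
            pher[i][j] *= (1.0 - rho)
    deposit = best_cost / cost            # in (0, 1]; larger for better solutions
    for fac, loc in enumerate(perm):
        pher[fac][loc] += deposit
```

In a hybrid like ANTabu, each constructed assignment would additionally be improved by tabu search before the pheromone update.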
3

Algoritmos evolutivos aplicados aos problemas de leiaute de facilidades com áreas diferentes e escalonamento de tarefas sem espera

Paes, Frederico Galaxe 27 July 2017 (has links)
Este trabalho aborda os seguintes problemas: Problema Quadrático de Alocação (PQA), Problema de Leiaute de Facilidades com Áreas Diferentes (PLFAD) e o Problema Job Shop Sem Espera (PJSSE). O PQA é um clássico problema de otimização combinatória, cujo objetivo é minimizar a soma das distâncias entre pares de locais distintos, ponderadas pelos fluxos entre as facilidades neles alocadas. O objetivo desta parte do trabalho é investigar técnicas heurísticas da literatura com base num conjunto de instâncias de referência do PQA. Os experimentos relatados envolveram Algoritmos Meméticos (AM), técnicas de diversidade adaptativa, algoritmos ILS (Iterated Local Search), buscas locais 2-exchange e cadeia de ejeção (Ejection Chain). Doze algoritmos foram testados em 37 instâncias de referência obtidas da QAPLIB, levando à escolha da combinação de técnicas mais adequada ao problema. A partir das observações obtidas do estudo anterior, decidiu-se abordar o PLFAD, de natureza semelhante ao PQA. No PLFAD, o objetivo é dimensionar e localizar facilidades retangulares em um espaço ilimitado e contínuo, sem sobreposição, de modo a minimizar a soma das distâncias entre facilidades ponderada pelos fluxos de manuseio de material. Porém, a pesquisa mostrou que, devido à estrutura amarrada apresentada pelas soluções do PLFAD, métodos tradicionais de busca local tornam o problema caro computacionalmente, principalmente pelo tratamento da inviabilidade devido à sobreposição. 
Duas abordagens algorítmicas são então introduzidas para tratar o problema: um Algoritmo Genético (GA) básico e um GA combinado com uma estratégia de decomposição via desconstrução e reconstrução parcial da solução. Para decompor eficientemente o problema, uma estrutura especial é imposta às soluções, impedindo que as facilidades cruzem os eixos X ou Y. Embora esta restrição possa deteriorar o valor da melhor solução encontrada, ela também aumenta muito a capacidade de busca do método em problemas de médio e grande porte. Como mostrado pelos experimentos, o algoritmo resultante produz soluções de alta qualidade para dois grupos de instâncias clássicas da literatura, melhorando 6 das 8 melhores soluções conhecidas do primeiro grupo e todas as instâncias de médio e grande porte do segundo grupo. Para algumas das maiores instâncias do segundo grupo, com 90 ou 100 facilidades, a melhora média das soluções ficou em torno de 6% ou 7% quando comparado aos algoritmos anteriores, com menor tempo de CPU. Para tais instâncias, métodos exatos atuais são impraticáveis. Finalmente é apresentado o PJSSE, escolhido devido às suas soluções apresentarem uma natureza semelhante àquelas do PLFAD. Um algoritmo baseado em GA, cuja construção da solução é efetuada por um algoritmo guloso eficiente, é proposto para resolver instâncias de referência da literatura, obtendo resultados promissores e com menor tempo computacional comparado com abordagens anteriores, principalmente em instâncias de grande porte. / This work addresses the following problems: the Quadratic Assignment Problem (QAP), the Unequal Area Facility Layout Problem (UA-FLP), and the No-Wait Job Shop Problem (JSPNW). The QAP is a classic combinatorial optimization problem which aims to minimize the sum of distances between pairs of different locations, weighted by the flows between the facilities allocated to them. The objective of this part of the work is to investigate heuristic techniques from the literature on a set of QAP benchmark instances. 
We performed experiments with Memetic Algorithms (MA), adaptive diversity techniques, Iterated Local Search (ILS) algorithms, 2-exchange local search, and Ejection Chains. Twelve algorithms were tested on 37 benchmark instances obtained from QAPLIB, making it possible to identify the combination of techniques most suitable for the problem. Based on the observations of the previous study, we decided to address the UA-FLP, which is similar in nature to the QAP. The UA-FLP aims to dimension and locate rectangular facilities in an unlimited floor space, without overlap, while minimizing the sum of distances among facilities weighted by material-handling flows. However, the research has shown that, due to the tight structure of good UA-FLP solutions, traditional local search methods make the problem computationally expensive, mainly because of the infeasibility treatment required by overlaps. We introduce two algorithmic approaches to address this problem: a simple Genetic Algorithm (GA), and a GA combined with a decomposition strategy via partial solution deconstructions and reconstructions. To efficiently decompose the problem, we impose a solution structure in which no facility may cross the X or Y axis. Although this restriction can deteriorate the value of the best achievable solution, it also greatly enhances the search capabilities of the method on medium and large problems. As highlighted by our experiments, the resulting algorithm produces high-quality solutions for the two classic datasets of the literature, improving 6 out of the 8 best known solutions from the first set and all medium- and large-scale instances from the second set. For some of the largest instances of the second set, with 90 or 100 facilities, the average solution improvement goes as high as 6% or 7% when compared to previous algorithms, in less CPU time. For such instances, current exact methods are impracticable. Finally, the JSPNW is presented, chosen because its solutions are similar in nature to those of the UA-FLP. 
A GA-based algorithm, in which solutions are constructed by an efficient greedy algorithm, is proposed to solve benchmark instances from the literature. Promising results have been achieved in less CPU time than previous approaches, especially for large-scale instances.
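The 2-exchange local search that several of these abstracts mention can be sketched as a best-improvement swap loop over a QAP permutation. For clarity this version recomputes the full objective at each trial swap, whereas practical implementations use incremental delta evaluation.

```python
def qap_cost(perm, flow, dist):
    """QAP objective: flow-weighted sum of distances under assignment perm."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def two_exchange(perm, flow, dist):
    """Best-improvement 2-exchange: repeatedly apply the swap of two
    positions that most reduces the QAP cost, until no swap improves."""
    perm = list(perm)
    n = len(perm)
    improved = True
    while improved:
        improved = False
        base = qap_cost(perm, flow, dist)
        best_delta, best_pair = 0, None
        for r in range(n - 1):
            for s in range(r + 1, n):
                perm[r], perm[s] = perm[s], perm[r]   # try the swap
                delta = qap_cost(perm, flow, dist) - base
                perm[r], perm[s] = perm[s], perm[r]   # undo it
                if delta < best_delta:
                    best_delta, best_pair = delta, (r, s)
        if best_pair is not None:
            r, s = best_pair
            perm[r], perm[s] = perm[s], perm[r]       # commit the best swap
            improved = True
    return perm
```

The O(n^2) neighborhood scan per iteration is why incremental delta formulas matter so much on larger QAPLIB instances.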
4

Otimização do processo de inserção automática de componentes eletrônicos empregando a técnica de times assíncronos. / Using A-Teams to optimize automatic insertion of electronic components.

Rabak, Cesar Scarpini 22 June 1999 (has links)
Máquinas insersoras de componentes são utilizadas na indústria eletrônica moderna para a montagem automática de placas de circuito impresso. Com a competição acirrada, há necessidade de se buscar todas as oportunidades para diminuir custos e aumentar a produtividade na exploração desses equipamentos. Neste trabalho, foi proposto um procedimento de otimização do processo de inserção da máquina insersora AVK da Panasonic, implementado em um sistema baseado na técnica de times assíncronos (A-Teams). Foram realizados testes com exemplos de placas de circuito impresso empregadas por uma indústria do ramo e problemas sintéticos para avaliar o desempenho do sistema. / Component-inserting machines are employed in the modern electronics industry for the automatic assembly of printed circuit boards. Due to the fierce competition, there is a need to pursue every opportunity to reduce costs and increase productivity in the use of this equipment. In this work we propose an optimization procedure for the insertion process of the Panasonic AVK inserting machine, implemented in a system based on asynchronous teams (A-Teams). Tests were conducted using both printed circuit boards employed by an industry in the field and synthetic problems to evaluate the performance of the system.
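The A-Teams idea, autonomous agents cooperating through a shared memory of candidate solutions, can be caricatured sequentially as follows. This is a toy sketch: real A-Teams run agents asynchronously, and `perturb` is a hypothetical improver invented for the example.

```python
import random

def a_team(initial, improvers, memory_size=10, iterations=200, rng=None):
    """Toy sequential A-Team: improver agents pick solutions from a shared
    memory and add modified copies; a destroyer agent evicts the worst
    solution whenever the memory overflows. Solutions are (cost, data)."""
    rng = rng or random.Random()
    memory = [initial]
    for _ in range(iterations):
        agent = rng.choice(improvers)          # agents act independently
        sol = rng.choice(memory)               # ...on shared memory
        memory.append(agent(sol, rng))
        if len(memory) > memory_size:
            memory.remove(max(memory))         # destroyer: drop worst cost
    return min(memory)                         # best solution found

def perturb(sol, rng):
    """Hypothetical improver: random step on a 1-D placement, cost = x^2."""
    _, x = sol
    x2 = x + rng.uniform(-1.0, 1.0)
    return (x2 * x2, x2)
```

In an insertion-sequencing application, the solutions would be insertion orders and the improvers would be heuristics such as swap or reinsertion moves.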
6

Drawing DNA Sequence Networks

Olivieri, Julia 12 August 2016 (has links)
No description available.
7

The Allure of Departed Colleagues : An Examination of Career Mobility in Competitive Labor Markets

Gopakumar, M G January 2015 (has links) (PDF)
In global corporations, work is increasingly organized around projects, and individuals are constantly working with new constellations of partners across locational and temporal boundaries. To be successful in such settings, individuals have to form and maintain relationships with those they need to learn from and coordinate with. Recent studies suggest that these social ties provide resources and support as well as create normative pressures that strengthen employees' attachment to the firm and lead them to stay. In contrast, the strength of an individual's attachment to the organization after the departure of connected colleagues remains largely under-theorized, and consequently its implications have not been adequately studied. We address these gaps by examining whether ties to colleagues who leave the firm activate mechanisms that can weaken the remaining employees' bonds with the organization. This study assumes significance in the context of contemporary free-agent labor markets, where career trajectories are proposed to unfold in a series of short stints at multiple firms rather than a lifelong career in a single firm. We develop theoretical arguments predicting the effect of workplace relationships on career mobility decisions by building on prior research into distributed work, the changing nature of careers, social comparison, homophily, and structural equivalence. The main contention of this study is that the departure of one or more coworkers serves as a powerful signal that unsettles the sense of belonging the focal employee enjoys with teammates who choose to stay with the firm. Further, we propose that the influence of departed employees is higher when they were collocated with, and occupied professional roles similar to, the focal employee. 
To test these arguments, we analyze five years of project co-assignment data linking 728 geographically distributed employees engaged in software development and delivery activities at a multinational high-technology firm. Our findings suggest that instead of seeking belonging and viability with coworkers, employees actively seek cues from their network of colleagues and continuously make subjective assessments of career success. In distributed work settings, such cues circulate more among physically proximate than distant employees, and the formal roles of coworkers serve as reference points for those signals. These mechanisms collectively influence voluntary turnover decisions. Using a classification model, we further demonstrate how insights from this study can help human resource management practitioners assess and contain the flight risk of their valuable talent.
8

A new framework considering uncertainty for facility layout problem

Oheba, Jamal Bashir January 2012 (has links)
In today's dynamic environment, where product demands are highly volatile and unstable, the ability to design and operate manufacturing facilities that are robust with respect to uncertainty and variability is increasingly important to the success of any manufacturing firm. Manufacturing facilities must therefore exhibit high levels of robustness and stability in order to cope with changing market demands. In general, the Facility Layout Problem (FLP) is concerned with allocating the departments or machines of a facility so as to minimize the total material handling cost (MHC) of moving the required materials between pairs of departments. Most FLP approaches assume the flow between departments is deterministic, certain, and constant over the entire planning horizon. Changes in product demand and product mix in a dynamic environment invalidate these assumptions, so there is a need for stochastic FLP approaches that assess the impact of uncertainty and accommodate possible changes in future product demands. This research focuses on stochastic FLP, with the objective of presenting a methodology, in the form of a framework, that allows the layout designer to incorporate uncertainty in product demands into the design of a facility. Accomplishing this objective requires a measure of the impact of this uncertainty. Two solution methods for single- and multi-period stochastic FLPs are presented to quantify the impact of product demand uncertainty on facility layout designs in terms of robustness (MHC) and variability (standard deviation). In the first method, a hybrid (simulation) approach is presented, which develops a simulation model and integrates it with the VIPPLANOPT 2006 algorithm. 
In the second method, mathematical formulations of analytic robustness and stability indices are developed, with VIPPLANOPT used in the solution procedure. Several case studies are developed, and numerical examples and case studies from the literature are used to demonstrate the proposed methodology and the application of the two methods to different aspects of stochastic FLP, both analytically and via simulation. Through experimentation, the proposed framework and its solution approaches have proven effective in evaluating the robustness and stability of facility layout designs under practical assumptions, such as the deletion and expansion of departments in a stochastic environment, and in applying the analysis results of the analytic and simulation indices to reduce the impact of errors and make better decisions.
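The robustness (expected MHC) and variability (standard deviation) measures referred to above can be illustrated by scoring one fixed layout against several demand scenarios. The thesis's exact index formulations may differ from this sketch.

```python
import statistics

def mhc(dist, flows):
    """Material-handling cost of a layout: flow-weighted sum of distances
    between department pairs. `flows` maps (i, j) -> flow volume."""
    return sum(f * dist[i][j] for (i, j), f in flows.items())

def layout_indices(dist, scenarios):
    """Evaluate one layout against several demand scenarios:
    robustness = mean MHC, variability = population std deviation."""
    costs = [mhc(dist, flows) for flows in scenarios]
    return statistics.mean(costs), statistics.pstdev(costs)
```

A layout with a slightly higher mean but much lower spread may be preferred in a volatile market, which is exactly the trade-off the framework is meant to expose.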
9

THE EVACUATION PROBLEM IN MULTI-STORY BUILDINGS

Cung, Quang Hong 19 March 2019 (has links)
The pressure of high population density leads to the creation of high-rise structures within urban areas. Consequently, designing facilities that confront the challenges of emergency evacuation from high-rise buildings becomes a complex concern. This thesis proposes an embedded program, GMAF_MGCC, which combines a deterministic model (GMAFLAD) and a stochastic model (the M/G/C/C state-dependent queueing model) into one program to solve the evacuation problem. The evacuation problem belongs to the Quadratic Assignment Problem (QAP) class and is formulated as a Quadratic Set Packing (QSP) model that includes the random flow out of the building and the random pairwise traffic flow among activities. The procedure starts by solving the QSP model to find all potentially optimal layouts for the problem. Then, the stochastic model calculates the evacuation time of each solution, which is the primary decision variable for determining the best design for the building. We also discuss topics relevant to the new program, including computational accuracy and the correlation between the success rate of solving and problem scale. This thesis examines the relationship of the independent variables, including arrival rate, population, and number of stories, with the dependent variable, evacuation time. Finally, the study analyzes the probability distribution of the evacuation time for a wide range of problem scales.
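The state-dependent flavor of the M/G/C/C model, where service slows as a corridor fills, can be caricatured with a toy deterministic drain. The real model uses occupancy-dependent walking speeds and stochastic service times, not this assumed linear rule.

```python
def evacuation_steps(population, capacity, base_rate):
    """Toy state-dependent drain: the per-step exit flow shrinks linearly
    with corridor occupancy (halved at full capacity). Returns the number
    of time steps until the building is empty."""
    steps, inside = 0, population
    while inside > 0:
        occupancy = min(inside, capacity) / capacity   # congestion in [0, 1]
        rate = max(1, int(base_rate * (1.0 - 0.5 * occupancy)))
        inside -= min(rate, inside)
        steps += 1
    return steps
```

Even this crude rule reproduces the qualitative effect the thesis studies: evacuation time grows faster than linearly once the population approaches corridor capacity.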
10

Approche efficace pour la conception des architectures multiprocesseurs sur puce électronique

Elie, Etienne 12 1900 (has links)
Les systèmes multiprocesseurs sur puce électronique (On-Chip Multiprocessor [OCM]) sont considérés comme les meilleures structures pour occuper l'espace disponible sur les circuits intégrés actuels. Dans nos travaux, nous nous intéressons à un modèle architectural, appelé architecture isométrique de systèmes multiprocesseurs sur puce, qui permet d'évaluer, de prédire et d'optimiser les systèmes OCM en misant sur une organisation efficace des nœuds (processeurs et mémoires), et à des méthodologies qui permettent d'utiliser efficacement ces architectures. Dans la première partie de la thèse, nous nous intéressons à la topologie du modèle et nous proposons une architecture qui permet d'utiliser efficacement et massivement les mémoires sur la puce. Les processeurs et les mémoires sont organisés selon une approche isométrique qui consiste à rapprocher les données des processus plutôt que d'optimiser les transferts entre les processeurs et les mémoires disposés de manière conventionnelle. L'architecture est un modèle maillé en trois dimensions. La disposition des unités sur ce modèle est inspirée de la structure cristalline du chlorure de sodium (NaCl), où chaque processeur peut accéder à six mémoires à la fois et où chaque mémoire peut communiquer avec autant de processeurs à la fois. Dans la deuxième partie de notre travail, nous nous intéressons à une méthodologie de décomposition où le nombre de nœuds du modèle est idéal et peut être déterminé à partir d'une spécification matricielle de l'application qui est traitée par le modèle proposé. Sachant que la performance d'un modèle dépend de la quantité de flot de données échangées entre ses unités, en l'occurrence leur nombre, et notre but étant de garantir une bonne performance de calcul en fonction de l'application traitée, nous proposons de trouver le nombre idéal de processeurs et de mémoires du système à construire. 
Aussi, considérons-nous la décomposition de la spécification du modèle à construire ou de l'application à traiter en fonction de l'équilibre de charge des unités. Nous proposons ainsi une approche de décomposition sur trois points : la transformation de la spécification ou de l'application en une matrice d'incidence dont les éléments sont les flots de données entre les processus et les données, une nouvelle méthodologie basée sur le problème de la formation des cellules (Cell Formation Problem [CFP]), et un équilibre de charge de processus dans les processeurs et de données dans les mémoires. Dans la troisième partie, toujours dans le souci de concevoir un système efficace et performant, nous nous intéressons à l'affectation des processeurs et des mémoires par une méthodologie en deux étapes. Dans un premier temps, nous affectons des unités aux nœuds du système, considéré ici comme un graphe non orienté, et dans un deuxième temps, nous affectons des valeurs aux arcs de ce graphe. Pour l'affectation, nous proposons une modélisation des applications décomposées en utilisant une approche matricielle et l'utilisation du problème d'affectation quadratique (Quadratic Assignment Problem [QAP]). Pour l'affectation de valeurs aux arcs, nous proposons une approche de perturbation graduelle, afin de chercher la meilleure combinaison du coût de l'affectation, ceci en respectant certains paramètres comme la température, la dissipation de chaleur, la consommation d'énergie et la surface occupée par la puce. Le but ultime de ce travail est de proposer aux architectes de systèmes multiprocesseurs sur puce une méthodologie non traditionnelle et un outil systématique et efficace d'aide à la conception dès la phase de la spécification fonctionnelle du système. / On-Chip Multiprocessor (OCM) systems are considered to be the best structures to occupy the abundant space available on today's integrated circuits (ICs). 
In our thesis, we are interested in an architectural model, called the Isometric on-Chip Multiprocessor Architecture (ICMA), which optimizes OCM systems by focusing on an effective organization of cores (processors and memories), and in methodologies that optimize the use of these architectures. In the first part of this work, we study the topology of the ICMA and propose an architecture that enables efficient and massive use of on-chip memories. The ICMA organizes processors and memories in an isometric structure, with the objective of keeping processed data close to the processors that use them, rather than optimizing transfers between processors and memories arranged in a conventional manner. The ICMA is a three-dimensional mesh model whose organization is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six different memories and each memory can communicate with six processors at once. In the second part of our work, we focus on a decomposition methodology used to find the optimal number of nodes for a given application or specification. Our approach is to transform an application or specification into an incidence matrix whose entries are the interactions between processors and memories. In other words, knowing that the performance of a model depends on the intensity of the data flow exchanged between its units, namely their number, we aim to guarantee good computing performance by finding the optimal number of processors and memories suitable for the application's computation. We also consider the load balancing of the ICMA's units during the specification phase of the design. Our proposed decomposition rests on three points: the transformation of the specification or application into an incidence matrix, a new methodology based on the Cell Formation Problem (CFP), and the load balancing of processes in the processors and of data in the memories. 
In the third part, we focus on the allocation of processors and memories via a two-step methodology. First, we allocate units to the nodes of the system structure, considered here as an undirected graph; subsequently, we assign values to the arcs of this graph. For the allocation, we propose modeling the decomposed application using a matrix approach and the Quadratic Assignment Problem (QAP). For the assignment of values to the arcs, we propose an approach of gradual changes to these values in order to seek the best combination of allocation costs, under certain metric constraints such as temperature, heat dissipation, power consumption, and the surface occupied by the chip. The ultimate goal of this work is to offer multiprocessor system architects a non-traditional methodology and a systematic, effective design-support tool, available from the functional specification phase onward.
