61 |
Estudo de casos de complexidade de colorações gulosa de vértices e de arestas / Case studies of complexity of greedy colorings of vertices and edges
Oliveira, Ana Karolinna Maia de. January 2011.
OLIVEIRA, Ana Karolinna Maia de. Estudo de casos de complexidade de colorações gulosa de vértices e de arestas. 2011. 58 f. Dissertação (Mestrado em Ciência da Computação), Universidade Federal do Ceará, Fortaleza-CE, 2011.
The vertex- and edge-coloring problems, which consist in determining the smallest number of colors needed to color the vertices or edges of a graph so that adjacent vertices (respectively, adjacent edges) receive distinct colors, are computationally hard and are a recurring research subject in graph theory because of the many practical problems they model. In this work, we study the worst-case performance of greedy algorithms for vertex and edge coloring. The greedy algorithm follows a simple general principle: it receives the vertices (respectively, edges) of the graph one by one, always assigning the smallest possible color to the vertex (respectively, edge) being colored. We observe that greedily coloring the edges of a graph is equivalent to greedily coloring its line graph, and this equivalence has been the main source of interest in greedy edge coloring. The worst-case performance of these algorithms is measured by the largest number of colors they can use. For greedy vertex coloring, this is the Grundy number (or greedy chromatic number) of the graph; for greedy edge coloring, it is the Grundy index (or greedy chromatic index). Determining the Grundy number of an arbitrary graph is known to be NP-hard, whereas the complexity of determining the Grundy index of an arbitrary graph was an open problem. In this dissertation, we prove two complexity results. First, we prove that the Grundy number of a (q, q−4)-graph can be determined in polynomial time. This class strictly contains the cographs and the P4-sparse graphs, for which the same result had already been established, so our result generalizes those earlier ones. The algorithm uses the primeval decomposition of these graphs and determines the parameter in linear time. Second, concerning greedy edge coloring, we prove that determining the Grundy index is NP-complete for general graphs and polynomial for caterpillar graphs, which implies that the Grundy number can be computed in polynomial time for line graphs of caterpillars. More specifically, we prove that the Grundy index of a caterpillar is Δ or Δ+1, where Δ is the maximum degree, and we present a polynomial-time algorithm that determines it exactly.
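To make the greedy principle concrete, here is a minimal sketch (not taken from the dissertation; the graph representation and the chosen ordering are illustrative assumptions) that colors vertices in a given order with the smallest color absent from the already-colored neighborhood:

```python
def greedy_coloring(adj, order):
    """Color vertices in the given order, always using the smallest
    color not already used by a colored neighbor."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# P4 (path a-b-c-d): the ordering a, d, c, b forces three colors,
# so the Grundy number of P4 is 3 even though its chromatic number is 2.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(greedy_coloring(path, ['a', 'd', 'c', 'b']))
# -> {'a': 0, 'd': 0, 'c': 1, 'b': 2}
```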
|
62 |
A study on shop scheduling problems / Um estudo sobre escalonamento de processos
Zubaran, Tadeu Knewitz. January 2018.
Shop scheduling is a type of combinatorial optimization problem in which we must allocate machines to jobs for specific periods of time. A set of constraints defines which schedules are valid, and we must select one that minimizes or maximizes an objective function; in this work we use the makespan, the time at which the last job finishes. The literature contains several studies proposing techniques to solve shop problems such as the job shop and the open shop. These problems require the steps of the production process to be either fully ordered or not ordered at all. With the increasing complexity and size of industrial applications, we find, more recently, several works that propose more general shop problems to model production processes more accurately; the mixed shop, group shop, and partial shop are examples of such problems. In this work we propose an iterated tabu search for the partial shop, a general problem that includes several more restrictive shop problems. The most important novel components of the solver are the initial-solution generator, the neighbourhood, and the lower bound for the neighbourhood. In computational experiments we show that the general partial shop solver is able to compete with, and sometimes surpass, the state-of-the-art solvers developed specifically for the partial, open, mixed, and group shops. Sometimes a machine is a bottleneck in the production process and is therefore replicated. In the literature, the parallel-machines case has been included in several extensions of shop problems. In this thesis we also propose a technique to schedule the parallel machines heuristically, without including them explicitly in the representation of the problem. We reuse general techniques for the non-parallel case to produce a fast, state-of-the-art tabu search heuristic for the job shop with parallel machines.
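The thesis's solver itself is not reproduced here; purely as a hedged illustration of the overall search pattern (the neighborhood, cost function, and tenure below are toy assumptions, not the partial-shop components described above), a generic tabu-search loop looks like this:

```python
def tabu_search(initial, neighbors, cost, max_iters=500, tenure=10):
    """Generic tabu-search skeleton: always move to the best non-tabu
    neighbor, keeping a short memory of recent solutions so the search
    can climb out of local optima."""
    current = best = initial
    best_cost = cost(initial)
    tabu = [initial]                      # FIFO memory of recent solutions
    for _ in range(max_iters):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy usage: minimize (x - 7)^2 over the integers, moving by +/-1 steps.
print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2))
# -> (7, 0)
```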
|
63 |
Effective techniques for generating Delaunay mesh models of single- and multi-component images
Luo, Jun. 19 December 2018.
In this thesis, we propose a general computational framework for generating mesh models of single-component (e.g., grayscale) and multi-component (e.g., RGB color) images. The framework builds on ideas from the previously proposed GPRFSED method for single-component images and extends them to images with an arbitrary number of components. The key ideas embodied in our framework are Floyd-Steinberg error diffusion and greedy-point removal. The framework has several free parameters, and the effect of the choices of these parameters is studied. Based on experimentation, we recommend two specific sets of parameter choices, yielding two highly effective single/multi-component mesh-generation methods, known as MED and MGPRFS. These two methods make different trade-offs between mesh quality and computational cost: the MGPRFS method produces high-quality meshes at a reasonable computational cost, while the MED method trades off some mesh quality for a reduction in computational cost relative to MGPRFS.
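As a point of reference for the first of those two key ideas, the sketch below shows standard Floyd-Steinberg error diffusion applied to a normalized detail map; treating the "on" pixels as candidate mesh sample points is an illustrative assumption here, not the thesis's exact sample-selection rule:

```python
import numpy as np

def floyd_steinberg(detail):
    """Binarize a detail map in [0, 1] with Floyd-Steinberg error
    diffusion; quantization error is pushed onto unvisited neighbors so
    the density of 'on' pixels tracks the local detail level."""
    img = detail.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            out[y, x] = bool(new)
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```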
To evaluate the performance of our proposed methods, we compared them to three highly effective previously proposed single-component mesh generators, for both grayscale and color images. In particular, our evaluation considered the following previously proposed methods: the error-diffusion (ED) method of Yang et al., the greedy-point-removal-from-subset (GPRFSED) method of Adams, and the greedy-point-removal (GPR) method of Demaret and Iske. Since these methods cannot directly handle color images, color images were handled by conversion to grayscale as a preprocessing step; as a postprocessing step after mesh generation, the grayscale sample values in the generated mesh were replaced by their corresponding color values. These color-capable versions of ED, GPRFSED, and GPR are henceforth referred to as CED, CGPRFSED, and CGPR, respectively.
Experimental results show that our MGPRFS method yields meshes of higher quality than the CGPRFSED and GPRFSED methods by up to 7.05 dB and 2.88 dB, respectively, at nearly the same computational cost. Moreover, the MGPRFS method outperforms the CGPR and GPR methods in mesh quality by up to 7.08 dB and 0.42 dB, respectively, with about 5 to 40 times less computational cost. Lastly, our MED method yields meshes of higher quality than the CED and ED methods by up to 7.08 dB and 4.72 dB, respectively, with all three of these methods having a similar computational cost.
|
66 |
Greedy algorithms for multi-channel sparse recovery
Determe, Jean-François. 16 January 2018.
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals from a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources. This thesis addresses several challenges. The first is to understand theoretically how some parameters, such as the noise variance, affect the performance of simultaneous orthogonal matching pursuit (SOMP), a greedy support recovery algorithm tailored to multiple-measurement-vector signal models. Chapters 4 and 5 detail novel improvements in understanding the performance of SOMP: Chapter 4 presents analyses of SOMP for noiseless measurements, and, building on those analyses, Chapter 5 extensively studies the performance of SOMP in the noisy case. A second challenge consists in optimally weighting the impact of each measurement vector on the decisions of SOMP. If the measurement vectors feature unequal signal-to-noise ratios, properly weighting their impact improves the performance of SOMP. Chapter 6 introduces a novel weighting strategy from which SOMP benefits: it describes the strategy, derives theoretically optimal weights for it, and presents both theoretical and numerical evidence that the strategy improves the performance of SOMP. Finally, Chapter 7 deals with the tendency of support recovery algorithms to pick support indices that merely fit a particular noise realization. To ensure that such algorithms pick all the correct support indices, researchers often make them pick more support indices than strictly required. Chapter 7 presents a support reduction technique, that is, a technique that removes from a support the supernumerary indices that only fit noise. The advantage of this technique, which relies on cross-validation, is its universality: it makes no assumption about the support recovery algorithm that generated the support. Theoretical results demonstrate that the technique is reliable, and numerical evidence shows that it performs similarly to orthogonal matching pursuit with cross-validation (OMP-CV), a state-of-the-art algorithm for support reduction.
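For readers unfamiliar with SOMP, here is a minimal textbook-style sketch (an illustrative rendering, not the exact variant analyzed in the thesis; the dictionary shape, sparsity level, and least-squares update are standard assumptions):

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous Orthogonal Matching Pursuit (sketch).
    Phi: (m, n) dictionary; Y: (m, K) measurement vectors assumed to
    share one sparse support of size k. Each iteration picks the atom
    whose total correlation with the residual is largest, then projects
    Y onto the atoms chosen so far."""
    support = []
    R = Y.copy()
    for _ in range(k):
        scores = np.abs(Phi.T @ R).sum(axis=1)  # summed over the K channels
        scores[support] = -np.inf               # never pick an atom twice
        support.append(int(np.argmax(scores)))
        A = Phi[:, support]
        X, *_ = np.linalg.lstsq(A, Y, rcond=None)
        R = Y - A @ X                           # residual for the next pass
    return support, X
```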
|
67 |
An optimization model using the Assignment Problem to manage the location of parts: Master's thesis at the engine assembly at Scania CV AB
Lundquist, Josefin; O'Hara, Linnéa. January 2017.
A key challenge for manufacturing companies is to store parts in an efficient way at the lowest cost possible. As the demand for differentiated products increases, together with the fact that old products are not phased out at the same pace, the need to use storage space as dynamically as possible becomes vital. Scania's engine assembly manufactures engines for various automotive vehicles and marine & industry applications. The variation in engine range in Scania's offering leads to the need to hold a vast, and increasing, assortment of parts in the production. As a consequence, this puts more pressure on the logistics and furnishing within the engine assembly. This master's thesis aims to facilitate the process of assigning parts' storage locations in the most profitable manner through an optimization model, the Location Model, in Excel VBA. Together with the model, suggestions of work methods are presented. By implementing the Location Model at Scania's engine assembly, 4.98% of all kept parts are recommended location changes that result in cost savings for the chosen 30-day period. These location changes yield a cost saving of 6.73% of the total logistics costs for the same time period.
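The thesis's Location Model is implemented in Excel VBA; purely as an illustration of the underlying Assignment Problem (the cost numbers and the SciPy-based formulation below are assumptions, not the thesis's implementation), an optimal parts-to-locations assignment can be computed like this:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical handling costs: cost[i, j] = cost of storing part i at
# location j over the planning period (e.g., picks x travel distance).
cost = np.array([
    [4.0, 2.5, 6.0],
    [3.0, 5.0, 2.0],
    [5.5, 3.5, 4.5],
])

parts, locations = linear_sum_assignment(cost)  # Hungarian-style solver
for p, l in zip(parts, locations):
    print(f"part {p} -> location {l}")
print("total cost:", cost[parts, locations].sum())  # 4.0 + 2.0 + 3.5 = 9.5
```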
|
68 |
Effective Resource Allocation for Non-cooperative Spectrum Sharing
Jacob-David, Dany D. January 2011.
Spectrum access protocols have been proposed recently to provide flexible and efficient use of the available bandwidth. Game theory has been applied to the analysis of the problem to determine the most effective allocation of the users' power over the bandwidth. However, prior analysis has focussed on Shannon capacity as the utility function, even though it is known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated under non-Gaussian signalling, first in a frequency non-selective environment, to isolate the resource allocation problem and its outcomes, and then in a frequency-selective environment, to examine the proposed algorithm under more realistic wireless conditions. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes into account the actual signal characteristics. Further, it is demonstrated that higher rates can be achieved with lower transmitted power, resulting in a smaller spectral footprint, which allows more efficient use of the spectrum overall. Finally, future spectrum management is discussed, where waveform adaptation is examined as an additional option to the well-known spectrum agility, rate, and transmit power adaptation when performing spectrum sharing.
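The water-filling baseline mentioned above has a standard closed-form structure; a small sketch follows (illustrative assumptions: normalized per-channel noise levels and a bisection search for the water level, not the thesis's algorithm):

```python
import numpy as np

def waterfill(noise, total_power, iters=60):
    """Classic water-filling: choose a water level mu and give each
    channel max(mu - noise_i, 0) power, so low-noise channels get more."""
    lo, hi = float(noise.min()), float(noise.max()) + total_power
    for _ in range(iters):              # bisection on the water level
        mu = (lo + hi) / 2
        if np.maximum(mu - noise, 0.0).sum() > total_power:
            hi = mu                     # level too high, power overspent
        else:
            lo = mu
    return np.maximum(mu - noise, 0.0)

p = waterfill(np.array([0.5, 1.0, 2.0]), total_power=3.0)
print(p.round(3))  # -> [1.667 1.167 0.167]: the cleanest channel gets most
```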
|
69 |
Poincaré Embeddings for Visualizing Eigenvector Centrality
January 2020.
Hyperbolic geometry, the geometry of hyperbolic space, has caught the eye of certain circles in the machine learning community as of late. Lauded for its ability to capture strong clustering as well as latent hierarchies in complex and social networks, hyperbolic geometry has proven to be an enduring presence in the network science community throughout the 2010s, with no signs of fading into obscurity anytime soon. Hyperbolic embeddings, which map a given graph to hyperbolic space, have proven to be a particularly powerful and dynamic tool for studying complex networks. This thesis exploits hyperbolic embeddings to illustrate centrality in a graph. In network science, centrality quantifies the influence of individual nodes in a graph. Eigenvector centrality is one such measure; it assigns an influence weight to each node in a graph by solving an eigenvector equation. A procedure is defined to embed a given network in a model of hyperbolic space, known as the Poincaré disk, according to the influence weights computed by three eigenvector centrality measures: the PageRank algorithm, the Hyperlink-Induced Topic Search (HITS) algorithm, and the Pinski-Narin algorithm. The resulting embeddings are shown to accurately and meaningfully reflect each node's influence and proximity to influential nodes.
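As one concrete instance of the eigenvector-centrality family named above, a hedged sketch of PageRank by power iteration follows (the damping factor, dangling-node handling, and convergence test are conventional assumptions, not the thesis's specific procedure):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """PageRank via power iteration on a row-stochastic transition
    matrix built from adjacency matrix A (dangling rows -> uniform)."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.maximum(out, 1e-12), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * (P.T @ r)   # the eigenvector equation
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Toy 3-node graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])
print(pagerank(A).round(3))  # node 1 ranks highest: it has two in-links
```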
|
70 |
Lokalizace a její vliv na další procesy v bezdrátové síti / Localization and its influence on other processes in a wireless network
Zemánek, Karel. January 2010.
This master's thesis concerns localization and its influence on other processes in a wireless network. The first part of the thesis is devoted to the study of localization algorithms in wireless sensor networks. The second and third parts are devoted to the description of hierarchical aggregation and Greedy Perimeter Stateless Routing (GPSR). The fourth part presents the implementation of the GPSR protocol in the MATLAB simulation tool and describes the specific m-files used in the simulation. The fifth part deals with the simulation itself, and the final part presents the simulation results.
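GPSR's greedy mode is simple enough to sketch in a few lines (an illustrative rendering with assumed position and neighbor tables; real GPSR falls back to perimeter routing at the local minimum this sketch reports):

```python
import math

def greedy_next_hop(pos, neighbors, node, dest):
    """GPSR greedy forwarding (sketch): pass the packet to the neighbor
    geographically closest to the destination; return None at a local
    minimum, where GPSR would switch to perimeter (face) routing."""
    if not neighbors[node]:
        return None
    best = min(neighbors[node], key=lambda n: math.dist(pos[n], pos[dest]))
    if math.dist(pos[best], pos[dest]) >= math.dist(pos[node], pos[dest]):
        return None                     # no neighbor makes progress
    return best

# Toy topology: a forwards toward d through whichever neighbor is closer.
pos = {'a': (0, 0), 'b': (1, 0), 'c': (0, 1), 'd': (3, 0)}
neighbors = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a'], 'd': ['b']}
print(greedy_next_hop(pos, neighbors, 'a', 'd'))  # -> 'b'
```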
|