241

Comparing Quantum Annealing and Simulated Annealing when Solving the Graph Coloring Problem / Jämförelse mellan kvantglödgning och simulerad härdning vid lösning av graffärgningsproblemet

Odelius, Nora, Reinholdsson, Isak January 2023 (has links)
Quantum annealing (QA) is an optimization process in quantum computing similar to the probabilistic metaheuristic simulated annealing (SA). The QA process encodes an optimization problem into an energy landscape, which it then traverses in search of the point of minimal energy representing the globally optimal state. In this thesis, two different implementations of QA are examined, one run on a binary quadratic model (BQM) and one on a discrete quadratic model (DQM). These are compared with their traditional counterpart, SA, in terms of performance and accuracy when solving the graph coloring problem (GCP). Regarding performance, the results show that SA outperforms both QA implementations. However, the slower execution times are mostly due to overhead costs arising from current hardware limitations: looking only at the quantum annealing part of the process, it is about a hundred times faster than the SA process. Regarding accuracy, both the DQM implementation of QA and SA produced high-quality results, whereas the BQM implementation performed notably worse, often failing to find optimal values and sometimes returning invalid results.
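As an illustration of the SA baseline discussed in this abstract, the sketch below minimizes the number of conflicting edges for a fixed number of colors k. The conflict-count energy function and geometric cooling schedule are assumptions for illustration, not details taken from the thesis.

    import math
    import random

    def sa_graph_coloring(edges, n_vertices, k, t0=1.0, cooling=0.995, steps=20000):
        """Seek a k-coloring with few conflicting (monochromatic) edges."""
        coloring = [random.randrange(k) for _ in range(n_vertices)]

        def conflicts(col):
            return sum(1 for u, v in edges if col[u] == col[v])

        energy = conflicts(coloring)
        t = t0
        for _ in range(steps):
            v = random.randrange(n_vertices)
            old = coloring[v]
            coloring[v] = random.randrange(k)
            new_energy = conflicts(coloring)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if new_energy > energy and random.random() >= math.exp((energy - new_energy) / t):
                coloring[v] = old  # reject the move
            else:
                energy = new_energy
            t *= cooling
            if energy == 0:
                break  # proper k-coloring found
        return coloring, energy

    # A 4-cycle is 2-colorable.
    print(sa_graph_coloring([(0, 1), (1, 2), (2, 3), (3, 0)], 4, 2))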
242

Vlastnosti grafů velkého obvodu / Properties of Graphs with Large Girth

Volec, Jan January 2011 (has links)
In this work we study two random procedures in cubic graphs with large girth. The first procedure finds a probability distribution on edge-cuts such that each edge is in a randomly chosen cut with probability at least 0.88672. As corollaries, we derive lower bounds on the size of a maximum cut in cubic graphs with large girth and in random cubic graphs, and also an upper bound on the fractional cut covering number in cubic graphs with large girth. The second procedure finds a probability distribution on independent sets such that each vertex is in an independent set with probability at least 0.4352. This implies lower bounds on the size of a maximum independent set in cubic graphs with large girth and in random cubic graphs, as well as an upper bound on the fractional chromatic number in cubic graphs with large girth.
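For intuition about such randomized procedures, a much simpler one in the same spirit builds an independent set greedily over a random vertex order. This is an illustration only; the thesis's procedure achieving the 0.4352 bound is more refined.

    import random

    def random_greedy_independent_set(adj):
        """adj: dict mapping vertex -> set of neighbors."""
        order = list(adj)
        random.shuffle(order)
        independent = set()
        for v in order:
            if adj[v].isdisjoint(independent):
                independent.add(v)
        return independent

    # K4 (cubic on 4 vertices): every independent set has size 1.
    adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
    print(random_greedy_independent_set(adj))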
243

Distribution et stockage dans les réseaux / Distribution and storage in networks

Modrzejewski, Remigiusz 24 October 2013 (has links)
In this thesis we study multiple approaches to efficiently accommodating the future growth of the Internet. The exponential growth of Internet traffic, reported to be as high as 41% in peak throughput in 2012 alone, continues to pose challenges to all interested parties, so smart management and communication protocols are needed. The basic protocols of the Internet are point-to-point in nature, yet traffic is largely broadcast-like, with projections stating that as much as 80-90% of it will be video by 2016. This discrepancy leads to inefficiency, where multiple copies of essentially the same messages travel in parallel through the same links. We study several approaches to mitigating this inefficiency, organized by the layers and phases of the network's life. We look into optimal cache provisioning during network design. Next, for managing an existing network, we look into putting devices into sleep mode, using caching and cooperation with Content Distribution Networks. In the application layer, we look into maintaining balanced trees for media broadcasting. Finally, we analyze data survivability in a distributed backup system, which can reduce network traffic by putting the backups closer to the client than a data center would. Our work is based on theoretical methods, such as Markov chains and linear programming, as well as empirical tools, such as simulation and experimentation.
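The data-survivability analysis mentioned above lends itself to a toy Markov-chain illustration, where the state is the number of live replicas. The failure/repair model below is a hypothetical simplification, not the thesis's model.

    def loss_probability(n_replicas, p, r, horizon):
        """Probability that all replicas are lost within `horizon` steps.
        Each step one replica fails with prob p and, if any replica is
        missing, one is repaired with prob r (requires p + r <= 1).
        State 0 (data lost) is absorbing."""
        dist = [0.0] * (n_replicas + 1)
        dist[n_replicas] = 1.0
        for _ in range(horizon):
            nxt = [0.0] * (n_replicas + 1)
            nxt[0] = dist[0]  # absorbing state
            for s in range(1, n_replicas + 1):
                repair = r if s < n_replicas else 0.0
                stay = 1.0 - p - repair
                nxt[s - 1] += dist[s] * p
                nxt[s] += dist[s] * stay
                if s < n_replicas:
                    nxt[s + 1] += dist[s] * repair
            dist = nxt
        return dist[0]

    print(loss_probability(n_replicas=3, p=0.01, r=0.1, horizon=1000))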
244

Heuristic Algorithms for Graph Coloring Problems / Algorithmes heuristiques pour des problèmes de coloration de graphes

Sun, Wen 29 November 2018 (has links)
This thesis concerns four NP-hard graph coloring problems, namely graph coloring (GCP), equitable coloring (ECP), weighted vertex coloring (WVCP) and k-vertex-critical subgraphs (k-VCS). These problems are extensively studied in the literature, not only for their theoretical intractability but also for their real-world applications in many domains. Since they belong to the class of NP-hard problems, it is computationally difficult to solve them exactly in the general case. For this reason, this thesis is devoted to developing effective heuristic approaches to tackle these challenging problems. We develop a reduction memetic algorithm (RMA) for the graph coloring problem, a feasible and infeasible search algorithm (FISA) for the equitable coloring problem, an adaptive feasible and infeasible search algorithm (AFISA) for the weighted vertex coloring problem, and an iterated backtrack-based removal (IBR) algorithm for the k-VCS problem. All these algorithms were experimentally evaluated and compared with state-of-the-art methods.
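For reference, a classical greedy baseline against which such heuristics are typically measured is DSATUR-style coloring. This is a sketch of the baseline, not of the thesis's RMA, FISA, AFISA or IBR algorithms.

    def dsatur(adj):
        """adj: dict vertex -> set of neighbors. Returns a proper coloring."""
        color = {}
        uncolored = set(adj)
        while uncolored:
            # Pick the uncolored vertex with the most distinctly colored
            # neighbors (its "saturation"), breaking ties by degree.
            v = max(uncolored,
                    key=lambda u: (len({color[w] for w in adj[u] if w in color}),
                                   len(adj[u])))
            used = {color[w] for w in adj[v] if w in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
            uncolored.remove(v)
        return color

    # A triangle plus a pendant vertex needs 3 colors.
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
    print(dsatur(adj))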
245

Vertex partition of sparse graphs / Partition des sommets de graphes peu denses

Dross, François 27 June 2018 (has links)
The study of vertex partitions of planar graphs was initiated by the Four Colour Theorem, conjectured in 1852 and proven in 1976. According to that theorem, one can colour the regions of any planar map using only four colours, in such a way that any two regions sharing a border receive distinct colours. In terms of graph theory, it can be reformulated this way: the vertex set of every planar graph, i.e. every graph that can be drawn in the plane without crossing edges, can be partitioned into four sets such that no edge has both endpoints in the same set. Such a partition is called a proper colouring of the graph. In this thesis, we look into the structure of sparse graphs, according to several notions of sparsity. On the one hand, we consider planar graphs with no small cycles, and on the other hand, we consider graphs where every subgraph has bounded average degree. For these classes of graphs, we first look for the smallest number of vertices that can be removed so that the remaining graph is a forest, that is, a graph with no cycles. This can be seen as a partition of the vertices of the graph into a set inducing a forest and a set containing at most a bounded fraction of the vertices. The main motivation for this study is the Albertson and Berman conjecture (1976), which states that every planar graph admits an induced forest containing at least half of its vertices. We also look into vertex partitions of sparse graphs into two sets, each inducing a subgraph with specific prescribed properties. Examples of such properties are having no edges, no cycles, bounded degree, or bounded components. These vertex partitions generalise the notion of proper colouring. We show, for different classes of sparse graphs, that every graph in those classes admits such a partition. We also look into algorithmic aspects of constructing these partitions.
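The central object here, a partition of the vertex set into a forest-inducing part and a small remainder, is easy to verify mechanically. The union-find sketch below checks the forest condition; it illustrates the definition only and is not an algorithm from the thesis.

    def induces_forest(vertices, edges):
        """True iff the subgraph induced by `vertices` is acyclic."""
        parent = {v: v for v in vertices}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for u, v in edges:
            if u in parent and v in parent:
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False  # edge closes a cycle in the induced subgraph
                parent[ru] = rv
        return True

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
    print(induces_forest({0, 1, 3}, edges))  # True: removing vertex 2 breaks the cycle
    print(induces_forest({0, 1, 2}, edges))  # False: the triangle survives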
246

Cromoscopia óptica com tecnologia de banda estreita versus cromoscopia com solução de Lugol no diagnóstico do carcinoma superficial de esôfago em pacientes com câncer de cabeça e pescoço / Narrow band imaging versus chromoendoscopy with Lugol's solution for esophageal squamous cell carcinoma detection

Ide, Edson 22 July 2010 (has links)
Background and study aims: The aim of this study was to compare narrow band imaging (NBI) without magnification and chromoendoscopy with Lugol's solution for detecting superficial esophageal squamous cell carcinoma in patients with head and neck cancer. Patients and methods: This is a prospective observational study of 129 patients with primary head and neck tumors consecutively referred to the Gastrointestinal Endoscopy Unit of Hospital das Clínicas, São Paulo University Medical School (FMUSP), Brazil, between August 2006 and February 2007. Conventional examination, NBI and Lugol chromoendoscopy were performed consecutively, and the detected lesions were mapped, recorded and biopsied. The three methods were compared with respect to sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and positive and negative likelihood ratios. Results: Of the 129 patients, nine (7%) were diagnosed with carcinomas, five in situ and four intramucosal. All carcinomas were detected by both NBI and Lugol chromoendoscopy. Only six lesions were diagnosed by conventional examination, all of which were larger than 10 mm. Conclusions: Narrow band imaging without magnification has high sensitivity and high negative predictive value for detecting superficial esophageal squamous cell carcinomas, producing results comparable to those obtained with 2.0% Lugol chromoendoscopy in patients with head and neck cancer.
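All of the reported comparison metrics derive from a 2x2 contingency table. The sketch below applies the standard definitions to hypothetical counts; the numbers are not the study's data.

    def diagnostic_metrics(tp, fp, fn, tn):
        sens = tp / (tp + fn)                  # sensitivity (recall)
        spec = tn / (tn + fp)                  # specificity
        acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
        ppv = tp / (tp + fp)                   # positive predictive value
        npv = tn / (tn + fn)                   # negative predictive value
        lr_pos = sens / (1 - spec)             # positive likelihood ratio
        lr_neg = (1 - sens) / spec             # negative likelihood ratio
        return dict(sensitivity=sens, specificity=spec, accuracy=acc,
                    ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

    # Hypothetical counts, for illustration only.
    print(diagnostic_metrics(tp=9, fp=12, fn=0, tn=108))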
247

AVALIAÇÃO CLÍNICA DA INFLUÊNCIA DO CAFÉ NA EFETIVIDADE DO CLAREAMENTO DENTAL / Clinical evaluation of the influence of coffee on the effectiveness of dental bleaching

Siqueira, Márcia Fernanda de Rezende 29 February 2012 (has links)
The objective of this clinical study was to evaluate whether exposure to coffee during bleaching treatment with 16% carbamide peroxide (Whiteness Perfect, FGM, Joinville, Santa Catarina, Brazil) affects the degree of whitening and tooth sensitivity. Forty patients with central incisors darker than shade A2 were selected and divided into two groups (n = 20): GC, the control group, and GE, the experimental group. For GC, foods with dyes were restricted; the GE patients, in addition to their usual dietary coffee intake, rinsed with black instant coffee (Nescafé Tradição, Nestlé) four times daily for 30 seconds. Both groups used the 16% carbamide peroxide for 3 hours daily over 3 weeks. Color was assessed with the Vita Classical shade guide and the Vita Easyshade spectrophotometer at baseline, during bleaching (1st, 2nd and 3rd weeks) and after bleaching (1 week and 1 month). Patients recorded their perception of sensitivity on the NRS (0-4) and VAS scales. Color was analyzed by two-way repeated-measures ANOVA (treatment group vs. time, with time as the repeated measure; α = 0.05), followed by Tukey's test for contrast of means (α = 0.05). The absolute risk of tooth sensitivity was evaluated by Fisher's exact test and its intensity by the Mann-Whitney test (α = 0.05) for both scales. Bleaching was effective over time (p < 0.001). No statistical difference in the absolute risk of sensitivity was found between groups (p = 1.0). Most patients had mild tooth sensitivity, and no statistically significant difference was detected between groups on either the NRS (p = 0.529) or the VAS (p = 0.258). It was concluded that dental bleaching was effective in both groups, which makes the consumption of coffee during bleaching treatment possible.
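The two nonparametric tests used for sensitivity are available in scipy.stats. The sketch below shows the calls on made-up data, not on the study's data.

    from scipy.stats import fisher_exact, mannwhitneyu

    # 2x2 table: patients with/without sensitivity in each group.
    table = [[12, 8],   # control: sensitive, not sensitive
             [12, 8]]   # coffee:  sensitive, not sensitive
    odds_ratio, p_fisher = fisher_exact(table)

    # NRS sensitivity scores (0-4) per patient in each group (invented).
    control_scores = [0, 1, 1, 2, 0, 1, 2, 1, 0, 1]
    coffee_scores = [1, 1, 0, 2, 1, 0, 1, 2, 1, 0]
    stat, p_mw = mannwhitneyu(control_scores, coffee_scores,
                              alternative="two-sided")

    print(f"Fisher's exact p={p_fisher:.3f}, Mann-Whitney p={p_mw:.3f}")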
248

Environnements pour l'analyse expérimentale d'applications de calcul haute performance / Environments for the experimental analysis of HPC applications.

Perarnau, Swann 01 December 2011 (has links)
High performance computing systems are increasingly complex. Nowadays, each compute node can contain several sockets or several cores sharing multiple memory caches hierarchically. To understand an application's performance on such systems, or to develop new algorithms and validate their behavior, an experimental study is often required. In this thesis, we consider two types of experimental analysis: execution on real systems and simulation using randomly generated inputs. In both cases, scientists can improve the quality of their performance analysis by controlling the environment (hardware or input data) used. We therefore discuss two methods to control hardware resource allocation inside a system: one for the processing time given to an application, the other for the amount of cache memory available to it. Both methods allow us to study how an application's behavior changes according to the amount of resources allocated. Based on modifications to the operating system, we implemented these methods for Linux and demonstrated their use in the analysis of several parallel applications. Regarding simulation, we studied the random generation of directed acyclic graphs (DAGs) for scheduler simulations. While numerous algorithms exist for this problem, most papers in the field rely on ad-hoc implementations and provide little validation of their generators. To tackle this issue, we propose a complete environment providing most of the classical generation methods. We validated this environment through large-scale analysis campaigns on Grid'5000, verifying the known statistical properties of most algorithms. We also demonstrated that the performance of a scheduler can be affected by the generation method used, identifying a reversal phenomenon: changing the generating algorithm can reverse the outcome of a comparison between two schedulers.
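One classical DAG-generation method of the kind such an environment catalogues is the Erdős-Rényi-style construction: fix a topological order, then keep each forward edge independently with probability p. The sketch below is an illustration, not the thesis's framework.

    import random

    def random_dag(n, p, seed=None):
        """Random DAG on vertices 0..n-1 with edge probability p."""
        rng = random.Random(seed)
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if rng.random() < p]
        return edges  # acyclic by construction: all edges go "forward"

    print(random_dag(6, 0.3, seed=42))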
249

[en] RECONSTRUCTION OF SCENES FROM IMAGES BY COARSE-TO-FINE SPACE CARVING / [pt] RECONSTRUÇÃO DE CENAS A PARTIR DE IMAGENS ATRAVÉS DE ESCULTURA DO ESPAÇO POR REFINAMENTO ADAPTATIVO

ANSELMO ANTUNES MONTENEGRO 03 March 2004 (has links)
[en] The reconstruction of scenes from images has received special attention from researchers in computer vision, computer graphics and geometric modeling. As examples of application we can mention image-based scene reconstruction, modeling of complex as-built objects, construction of virtual environments and telepresence. Among the most successful methods used for the reconstruction of scenes from images are those based on space carving algorithms. These techniques reconstruct the shape of the objects of interest in a scene by determining, in a volumetric representation of the scene space, those elements that satisfy a set of photometric constraints imposed by the input images. Once determined, each photo-consistent element is colorized according to the photometric information in the input images, so that it reproduces that information within a pre-specified error tolerance. In this work, we investigate the use of rendering techniques in space carving methods. As a result, we propose a method based on an adaptive refinement process that works on reconstruction spaces represented by spatial subdivisions. We claim that such a method can reconstruct the objects of interest more efficiently, using resources proportional to the local characteristics of the scene, which are discovered as the reconstruction takes place. Finally, we evaluate the quality and efficiency of the method based on results obtained with a reconstruction device that works with images captured by webcams.
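The photo-consistency test at the core of space carving can be caricatured with synthetic per-view color samples standing in for actual camera projections; this is a toy sketch, not the thesis's method.

    import statistics

    def carve(voxels, observations, tol=10.0):
        """observations: dict voxel -> list of sampled colors, one per view
        in which the voxel is visible. Keep voxels whose colors agree."""
        kept = set()
        for v in voxels:
            if statistics.pstdev(observations[v]) <= tol:  # photo-consistent
                kept.add(v)
        return kept

    voxels = {"a", "b"}
    observations = {
        "a": [200, 198, 203],  # consistent across 3 views -> on the surface
        "b": [40, 220, 130],   # inconsistent -> carved away
    }
    print(carve(voxels, observations))  # {'a'}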
250

Recoloração convexa de grafos: algoritmos e poliedros / Convex recoloring of graphs: algorithms and polyhedra

Moura, Phablo Fernando Soares 07 August 2013 (has links)
In this work we study the convex recoloring problem of graphs, denoted CR. We say that a vertex coloring of a graph G is convex if, for each assigned color d, the vertices of G with color d induce a connected subgraph. In the CR problem, given a graph G and a coloring of its vertices, we want to find a convex recoloring that minimizes the number of recolored vertices. The motivation for investigating this problem has its roots in the study of phylogenetic trees. The problem is known to be NP-hard even when G is a path. We show that CR parameterized by the number of color changes is W[2]-hard even if the initial coloring uses only two colors. Moreover, we prove some inapproximability results for this problem. We also give an integer programming formulation for the weighted version of this problem on arbitrary graphs, and then specialize it for trees. We study the facial structure of the polytope defined as the convex hull of the integer points satisfying the restrictions of the proposed ILP formulation, present several classes of facet-defining inequalities, and describe the corresponding separation algorithms. We also present a branch-and-cut algorithm implemented for the special case of trees, and report computational results obtained on a large number of instances representing real phylogenetic trees. The experiments show that this approach can solve instances with up to 1500 vertices in 40 minutes, comparing favorably with other approaches proposed in the literature.
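The convexity condition itself is straightforward to check with one BFS per color class. The sketch below implements the definition, not the branch-and-cut algorithm.

    from collections import deque

    def is_convex(adj, color):
        """adj: dict vertex -> set of neighbors; color: dict vertex -> color.
        True iff every color class induces a connected subgraph."""
        classes = {}
        for v, c in color.items():
            classes.setdefault(c, set()).add(v)
        for members in classes.values():
            start = next(iter(members))
            seen = {start}
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w in members and w not in seen:
                        seen.add(w)
                        queue.append(w)
            if seen != members:
                return False  # this color class is disconnected
        return True

    # A path 0-1-2 colored red, blue, red is not convex.
    adj = {0: {1}, 1: {0, 2}, 2: {1}}
    print(is_convex(adj, {0: "r", 1: "b", 2: "r"}))  # False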
