21

K-set Polygons and Centroid Triangulations / K-set Polygones et Triangulations Centroïdes

El Oraiby, Wael 09 October 2009
This thesis is a contribution to a classical problem in computational and combinatorial geometry: the study of the k-sets of a set V of n points in the plane. First, we introduce the notion of a convex inclusion chain, an ordering of the points of V such that no point lies inside the convex hull of the points that precede it. Every k-set of an initial subsequence of the chain is called a k-set of the chain. We prove that the number of these k-sets is an invariant of V, equal to the number of regions of the order-k Voronoi diagram of V. We then deduce an online algorithm for constructing the k-sets of the vertices of a simple polygonal line in which every vertex lies outside the convex hull of the vertices preceding it on the line. If c is the total number of k-sets built by this algorithm, its complexity is O(n log n + c log^2 k), which matches, per constructed k-set, the complexity of the best known algorithm. Afterward, we prove that the classical divide-and-conquer method can be adapted to the construction of the k-sets of V; the resulting algorithm has complexity O(n log n + c log^2 k log(n/k)), where c is the maximum number of k-sets of a set of n points. Finally, we prove that the centers of gravity of the k-sets of a convex inclusion chain are the vertices of a triangulation belonging to the family of so-called centroid triangulations, a family that notably contains the dual of the order-k Voronoi diagram. We give an algorithm that builds particular centroid triangulations in O(n log n + k(n-k) log^2 k) time, which is more efficient than all previously known algorithms.
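The central object here, the k-sets of a planar point set, can be made concrete with a small brute-force sketch (Python, assuming numpy is available): it enumerates every subset of k points that can be cut off by a line, by sorting the points along directions slightly perturbed from the critical directions defined by point pairs. This only illustrates what is being counted, under a general-position assumption; it is unrelated to the thesis's convex-inclusion-chain algorithm and its O(n log n + c log^2 k) bound, and the function name and perturbation constant are arbitrary choices.

```python
import numpy as np

def k_sets(points, k, eps=1e-9):
    """Brute-force enumeration of the k-sets of a planar point set.

    A k-set is a subset of k points cut off by a line.  Every k-set is
    the set of k extreme points in some direction, and the ordering by
    projection only changes at directions perpendicular to lines through
    point pairs, so sampling slight perturbations of those critical
    directions suffices (general position assumed).  Illustration only.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    directions = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pts[j] - pts[i]
            normal = np.array([-d[1], d[0]])          # critical direction
            nrm = np.linalg.norm(normal)
            if nrm == 0:
                continue
            normal /= nrm
            for s in (-eps, eps):                     # perturb both ways
                rot = np.array([[np.cos(s), -np.sin(s)],
                                [np.sin(s),  np.cos(s)]])
                directions.append(rot @ normal)
    found = set()
    for u in directions:
        order = np.argsort(pts @ u)                   # sort by projection on u
        found.add(frozenset(order[:k].tolist()))      # k extreme points = a k-set
    return found

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.random((8, 2))
    print(len(k_sets(V, 3)), "distinct 3-sets")
```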
22

Αλγοριθμικές τεχνικές εντοπισμού και παρακολούθησης πολλαπλών πηγών από ασύρματα δίκτυα αισθητήρων / Algorithmic techniques for the localization and tracking of multiple sources with wireless sensor networks

Αμπελιώτης, Δημήτριος 12 April 2010
Technology advances in microelectronics and wireless communications have enabled the development of small-scale devices that integrate sensing, processing and short-range radio capabilities. The deployment of a large number of such devices, referred to as sensor nodes, over a territory of interest defines the so-called wireless sensor network. Wireless sensor networks have attracted considerable attention in recent years and have motivated many new challenges, most of which require the synergy of several disciplines, including signal processing, networking and distributed algorithms. Among many other applications, source localization and tracking has been widely viewed as a canonical problem of wireless sensor networks. Furthermore, it constitutes an easily perceived problem that can be used as a vehicle to study more involved information processing and organization problems. Most of the source localization methods that have appeared in the literature can be classified into two broad categories, according to the physical variable they utilize: algorithms of the first category use time difference of arrival (TDOA) measurements, and algorithms of the second category use direction of arrival (DOA) measurements. DOA estimates are particularly useful for locating sources emitting narrowband signals, while TDOA measurements offer the added capability of localizing sources emitting broadband signals. However, the methods of both categories impose two major requirements that render them inappropriate for wireless sensor networks: (a) the analog signals at the outputs of the spatially distributed sensors must be sampled in a synchronized fashion, and (b) the sampling rate must be high enough to capture the features of interest. These requirements, in turn, imply that accurate distributed synchronization methods are needed to keep the remote sensor nodes synchronized, and that high-frequency electronics as well as increased bandwidth are needed to transmit the acquired measurements. Due to these limitations, source localization methods that rely upon received signal strength (RSS) measurements, originally explored for locating electromagnetic sources, have recently received revived attention.
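As a rough illustration of the RSS-based approach motivated above, the following sketch solves the single-source case under an assumed inverse-square energy decay, the model used in the work summarized in the next paragraph. Multiplying the measurement model y_i ≈ g / ||s_i − x||^2 by the squared distance and introducing R = ||x||^2 makes the problem linear in (x, R, g). This is one standard linearization, not necessarily the thesis's exact (weighted) estimator, and all names below are illustrative.

```python
import numpy as np

def localize_inverse_square(sensors, energies):
    """Single-source localization from energy readings via linear least squares.

    Assumed model: y_i = g / ||s_i - x||^2 + noise.  Multiplying by the
    squared distance and introducing R = ||x||^2 gives, per sensor,
        -2*y_i*<s_i, x> + y_i*R - g = -y_i*||s_i||^2,
    which is linear in (x, R, g) and solvable in closed form.
    """
    S = np.asarray(sensors, dtype=float)       # shape (m, 2)
    y = np.asarray(energies, dtype=float)      # shape (m,)
    A = np.column_stack([-2.0 * y[:, None] * S,    # coefficients of x
                         y,                        # coefficient of R = ||x||^2
                         -np.ones_like(y)])        # coefficient of g
    b = -y * np.sum(S * S, axis=1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_hat, g_hat = theta[:2], theta[3]
    return x_hat, g_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sensors = rng.uniform(0, 10, size=(30, 2))
    source, power = np.array([4.0, 6.0]), 100.0
    d2 = np.sum((sensors - source) ** 2, axis=1)
    energies = power / d2 + rng.normal(0, 0.01, size=len(d2))
    print(localize_inverse_square(sensors, energies))
```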
In this Thesis, we begin our study by considering the localization of an isotropic acoustic source using energy measurements from distributed sensors, in the case where the energy decays according to an inverse square law with respect to the distance. While most acoustic source localization algorithms require that distance estimates between the sensors and the source of interest are available, we propose a linear least squares criterion that does not make such an assumption. The new criterion can yield the location of the source and its transmit power in closed form. A weighted least squares cost function is also considered, and distributed implementation of the proposed estimators is studied. Numerical results indicate significant performance improvement as compared to a linear least squares based approach that utilizes energy ratios, and comparable performance to other estimators of higher computational complexity. In the sequel, we turn our attention to the case where the energy decay model is not known. For solving the localization problem in this case, we first make the assumption that the locations of the nodes near the source can be well described by a uniform distribution. Using this assumption, we derive distance estimates that are independent of both the energy decay model and the transmit power of the source. Numerical results show that these estimates lead to improved localization accuracy as compared to other model-independent approaches. In the sequel, we consider the more general case where the assumption about the uniform deployment of the sensors is not required. For this case, an optimization problem that does not require knowledge of the underlying energy decay model is proposed, and a condition under which the optimal solution can be computed is given. This condition employs a new geometric construct, called the sorted order-K Voronoi diagram. We give centralized and distributed algorithms for source localization in this setting. Finally, analytical results and simulations are used to verify the performance of the developed algorithms. The next problem we consider is the estimation of the locations of multiple acoustic sources by a network of distributed energy measuring sensors. The maximum likelihood (ML) solution to this problem is related to the optimization of a non-convex function of, usually, many variables. Thus, search-based methods of high complexity are required in order to yield an accurate solution. In order to reduce the computational complexity of the multiple source localization problem, we propose two methods. The first method proposes a sequential estimation algorithm, in which each source is localized, its contribution is cancelled, and the next source is considered. The second method makes use of an alternating projection (AP) algorithm that decomposes the original problem into a number of simpler, yet also non-convex, optimization steps. The particular form of the derived cost functions of each such optimization step indicates that, in some cases, an approximate form of these cost functions can be used. These approximate cost functions can be evaluated using considerably lower computational complexity. Thus, a low-complexity version of the AP algorithm is proposed. Extensive simulation results demonstrate that the proposed algorithm offers a performance close to that of the exact AP implementation, and in some cases, similar performance to that of the ML estimator.
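The sequential estimate-and-cancel idea for multiple sources mentioned above can be sketched in a few lines by reusing the same linearized single-source fit and subtracting each fitted source's predicted energy before localizing the next. This is a deliberately crude illustration of the principle; the thesis's algorithm additionally selects, for each source, a subset of sensors with low interference before refining its position.

```python
import numpy as np

def sequential_localization(sensors, energies, num_sources):
    """Greedy multi-source localization by successive cancellation.

    Each round fits a single inverse-square source to the residual energy
    field with the linear least-squares trick from the previous sketch,
    then subtracts that source's predicted contribution.
    """
    S = np.asarray(sensors, dtype=float)
    residual = np.asarray(energies, dtype=float).copy()
    estimates = []
    for _ in range(num_sources):
        A = np.column_stack([-2.0 * residual[:, None] * S,
                             residual, -np.ones_like(residual)])
        b = -residual * np.sum(S * S, axis=1)
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        x_hat, g_hat = theta[:2], theta[3]
        estimates.append((x_hat, g_hat))
        d2 = np.sum((S - x_hat) ** 2, axis=1)
        residual = np.maximum(residual - g_hat / d2, 1e-12)  # cancel, keep > 0
    return estimates
```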
23

Development of advanced methods for super-resolution microscopy data analysis and segmentation / Développement de méthodes avancées pour l'analyse et la segmentation de données de microscopie à super-résolution

Andronov, Leonid 09 January 2018
Among super-resolution methods, single-molecule localization microscopy (SMLM) is remarkable not only for the best practically achievable resolution but also for its direct access to the properties of individual molecules. The primary data of SMLM are the coordinates of individual fluorophores, a relatively rare data type in fluorescence microscopy, so specially adapted processing methods have to be developed. I developed the software SharpViSu and ClusterViSu, which cover the most important processing steps, namely correction of drift and chromatic aberrations, selection of localization events, reconstruction of the data into 2D images or 3D volumes using different visualization techniques, estimation of resolution with Fourier ring correlation, and segmentation using Ripley's K- and L-functions. Additionally, I developed a method for segmentation of 2D and 3D localization data based on Voronoi diagrams, which allows automatic and unambiguous cluster analysis thanks to noise modeling with Monte-Carlo simulations. Using these advanced data-processing methods, I demonstrated clustering of CENP-A in the centromeric regions of the cell nucleus and structural transitions of these clusters upon CENP-A deposition in the early G1 phase of the cell cycle.
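A minimal sketch of the Voronoi-based segmentation idea described above (assuming scipy is available): compute the Voronoi tessellation of the localizations, derive an area threshold from Monte-Carlo simulations of the same number of spatially uniform points, and keep the localizations whose cells fall below that threshold. The function names, number of simulations and choice of percentile are assumptions; the full ClusterViSu pipeline also groups the retained cells into clusters and handles 3D data.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def cell_areas(points):
    """Area of each bounded Voronoi cell; np.inf for unbounded cells."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) < 3:            # unbounded or degenerate
            continue
        areas[i] = ConvexHull(vor.vertices[region]).volume   # 2-D: volume == area
    return areas

def monte_carlo_threshold(points, bbox, n_sims=20, percentile=50):
    """Area threshold from simulations of the same number of uniform points."""
    rng = np.random.default_rng(0)
    (xmin, ymin), (xmax, ymax) = bbox
    sims = []
    for _ in range(n_sims):
        u = np.column_stack([rng.uniform(xmin, xmax, len(points)),
                             rng.uniform(ymin, ymax, len(points))])
        a = cell_areas(u)
        sims.append(a[np.isfinite(a)])
    return np.percentile(np.concatenate(sims), percentile)

def clustered_points(points, bbox):
    """Localizations whose Voronoi cells are smaller than the noise threshold."""
    pts = np.asarray(points, dtype=float)
    areas = cell_areas(pts)
    thr = monte_carlo_threshold(pts, bbox)
    return pts[areas < thr]
```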
24

[pt] OTIMIZAÇÃO TOPOLÓGICA USANDO MALHAS POLIÉDRICAS / [en] TOPOLOGY OPTIMIZATION USING POLYHEDRAL MESHES

22 February 2019
[en] Topology optimization has had an impact in various fields and has the potential to revolutionize several areas of engineering. The method can be implemented on top of the finite element method, and several approaches are available. In an element-based approach, every finite element is a potential void or actual material, and every element in the domain is assigned a constant design variable, namely a density. In an Eulerian setting, the obtained topology consists of a subset of the initial elements. This approach, however, is subject to numerical instabilities such as one-node connections and rapid oscillations between solid and void material (the so-called checkerboard pattern). Undesirable designs may be obtained when standard low-order elements are used and no further regularization and/or restriction methods are employed. Unstructured polyhedral meshes naturally address these issues and offer flexibility in discretizing non-Cartesian domains. In this work we investigate topology optimization on polyhedral meshes through a mesh-staggering approach. First, polyhedral meshes are generated based on the concept of centroidal Voronoi diagrams and further optimized for finite element computations. We show that the condition number of the associated system of equations can be improved by minimizing an energy function related to the element geometry. Given the mesh quality and problem size, different types of solvers perform differently, so both direct and iterative solvers are addressed. Second, polyhedra are decomposed into tetrahedra by a tailored embedding algorithm: the polyhedral discretization carries the design variables, while a tetrahedral sub-discretization nested within the polyhedra is used for finite element analysis. The modular framework decouples analysis and optimization routines and variables, which is promising both for software enhancement and for achieving high-fidelity solutions. Fields such as displacements and design variables are linked through a mapping. The proposed mapping-based framework provides a general approach to solving three-dimensional topology optimization problems using polyhedra, with the potential to be explored in applications beyond the scope of the present work. Finally, the capabilities of the framework are evaluated through several examples, which demonstrate the features and potential of the proposed approach.
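The mesh-generation step above relies on centroidal Voronoi tessellations; the sketch below shows a basic Lloyd-type iteration in 2D, where each cell centroid is approximated by the mean of densely sampled domain points assigned to their nearest seed. This only illustrates the CVT concept, not the 3D polyhedral mesher or the quality optimization used in the work; sample and iteration counts are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(seeds, bbox, iters=50, samples=200_000, rng=None):
    """Approximate a centroidal Voronoi tessellation with Lloyd iterations.

    Each iteration assigns dense uniform samples of the domain to their
    nearest seed and moves every seed to the mean (approximate centroid)
    of its assigned samples.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    (xmin, ymin), (xmax, ymax) = bbox
    seeds = np.asarray(seeds, dtype=float).copy()
    for _ in range(iters):
        pts = np.column_stack([rng.uniform(xmin, xmax, samples),
                               rng.uniform(ymin, ymax, samples)])
        _, owner = cKDTree(seeds).query(pts)           # nearest-seed assignment
        for k in range(len(seeds)):
            mine = pts[owner == k]
            if len(mine):
                seeds[k] = mine.mean(axis=0)           # move seed to its centroid
    return seeds

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    init = rng.uniform(0, 1, size=(32, 2))
    print(lloyd_cvt(init, ((0.0, 0.0), (1.0, 1.0)), iters=20, samples=50_000))
```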
25

Multispektrální analýza obrazových dat / Multispectral Analysis of Image Data

Novotný, Jan January 2009
Airborne hyperspectral remote sensing is used to monitor the actual state of environmental components. This thesis focuses primarily on the analysis of hyperspectral data with the aim of delineating tree crowns. A specific algorithm combining adaptive equalization and Voronoi diagrams is designed to subdivide a forest area into individual trees. A computer program implements the algorithm and allows it to be tested on real data and the results to be checked and analyzed.
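A rough sketch of this kind of pipeline: detect treetop candidates as local maxima of a canopy height model and assign every canopy pixel to its nearest treetop, i.e., a discrete Voronoi partition of the forest area. This is an illustration only; the thesis additionally applies adaptive equalization to the hyperspectral data, and the window size and height threshold below are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.spatial import cKDTree

def delineate_crowns(chm, min_height=2.0, window=5):
    """Label tree crowns in a canopy height model (CHM).

    Treetops are local maxima of the CHM above ``min_height``; every
    remaining canopy pixel is assigned to its nearest treetop, which
    yields the (discrete) Voronoi cells of the treetop set.
    Returns an integer label image (0 = background, no tree).
    """
    chm = np.asarray(chm, dtype=float)
    is_peak = (chm == maximum_filter(chm, size=window)) & (chm >= min_height)
    tops = np.argwhere(is_peak)                        # (row, col) of treetops
    labels = np.zeros(chm.shape, dtype=int)
    if len(tops) == 0:
        return labels
    canopy = np.argwhere(chm >= min_height)            # pixels belonging to trees
    _, nearest = cKDTree(tops).query(canopy)
    labels[canopy[:, 0], canopy[:, 1]] = nearest + 1   # crown IDs start at 1
    return labels
```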
26

[en] AUTOMATED SYNTHESIS OF OPTIMAL DECISION TREES FOR SMALL COMBINATORIAL OPTIMIZATION PROBLEMS / [pt] SÍNTESE AUTOMATIZADA DE ÁRVORES DE DECISÃO ÓTIMAS PARA PEQUENOS PROBLEMAS DE OTIMIZAÇÃO COMBINATÓRIA

CLEBER OLIVEIRA DAMASCENO 24 August 2021
[en] Classical complexity analysis for NP-hard problems is usually oriented toward worst-case scenarios, considering only asymptotic behavior. However, practical algorithms run in reasonable time for many classic problems. Furthermore, there is evidence pointing toward polynomial algorithms in the linear decision tree model for these problems, although this direction has not been explored much. In this work, we build on those earlier theoretical results. We show that the optimal solution of 0-1 combinatorial problems can be found by reducing them to a nearest neighbor search over the set of corresponding Voronoi vertices. We use the hyperplanes delimiting these regions to systematically generate a decision tree that repeatedly splits the space until it can separate all solutions, guaranteeing an optimal answer. We run experiments to test the size limits up to which we can build these trees for the 0-1 knapsack, weighted minimum cut, and symmetric traveling salesman problems, and manage to build the trees for sizes up to 10, 5, and 6, respectively. We also obtain the complete adjacency relations for the skeletons of the knapsack and traveling salesman polytopes up to sizes 10 and 7. Our approach consistently outperforms the enumeration method and the baseline methods for the weighted minimum cut and symmetric traveling salesman problems, providing optimal solutions within microseconds.
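To make the reduction concrete, the sketch below enumerates the feasible 0-1 vectors of a tiny knapsack instance and finds the optimum for a given profit vector by a brute-force scan. The map from profit vectors to optimal vectors is piecewise constant, changing only across the hyperplanes c·(x_i − x_j) = 0 between candidate solutions; those are exactly the tests that the precomputed decision tree described above would replace the scan with. The instance and helper names are illustrative, not taken from the thesis.

```python
import numpy as np
from itertools import product

def feasible_solutions(weights, capacity):
    """All 0-1 vectors whose total weight fits in the knapsack."""
    w = np.asarray(weights)
    return [np.array(x) for x in product((0, 1), repeat=len(w))
            if np.dot(x, w) <= capacity]

def optimal_solution(profits, candidates):
    """Brute-force stand-in for a nearest-neighbor / decision-tree query:
    return the candidate maximizing the profit vector."""
    scores = [float(np.dot(profits, x)) for x in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    weights, capacity = [3, 4, 5, 6], 10
    cands = feasible_solutions(weights, capacity)
    # The map  profits -> optimal candidate  only changes when profits
    # crosses a hyperplane  profits . (x_i - x_j) = 0.
    for profits in ([5, 4, 3, 1], [1, 1, 8, 8], [2, 2, 2, 2]):
        print(profits, "->", optimal_solution(np.array(profits), cands))
```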
