51.
Solving Influence Diagrams using Branch and Bound Search. Khaled, Arindam. 11 December 2015.
Influence diagrams (IDs) are graphical frameworks for decision making in stochastic situations, with mathematical models embedded in them. The goal of an optimal algorithm for an ID is to find a strategy that maximizes the expected utility. This thesis explains several algorithms for influence diagrams. There is an explicit temporal ordering among the decisions in an ID, and any information used in the past remains available in the future; these two properties are called the “regularity” and “no-forgetting” assumptions, respectively. A limited memory influence diagram (LIMID) does not satisfy these two properties. The existing state-of-the-art depth-first branch-and-bound (DFBnB) algorithm for solving influence diagrams does not scale well, because the number of search nodes grows exponentially with the depth of the search (that is, with the total number of stages in the ID). In this thesis, we propose and implement an algorithm that combines two widely used methods, depth-first branch-and-bound (DFBnB) search and value iteration with incremental pruning, used for solving IDs and POMDPs, respectively. We also describe an algorithm to convert the strategy tree into a strategy graph. Experiments show the effectiveness of these approaches. Algorithms for solving traditional influence diagrams are not easily generalized to solve LIMIDs, however, and only recently have exact algorithms for solving LIMIDs been developed. In this thesis, we provide an exact algorithm for solving LIMIDs that is based on branch-and-bound search. Our approach is related to solving an influence diagram by converting it to an equivalent decision tree, with the difference that the LIMID is converted to a much smaller decision graph that can be searched more efficiently.
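To make the depth-first branch-and-bound idea concrete, the following Python sketch searches a small decision tree for the maximum expected utility and prunes alternatives whose optimistic bound cannot beat the best value found so far. The node encoding and the `upper_bound` function are illustrative assumptions, not the representation or bounds used in the thesis.

```python
# Minimal depth-first branch-and-bound sketch for maximizing expected utility.
# The tree encoding and the upper-bound function are illustrative assumptions only.

def dfbnb(node, best):
    """Return the best expected utility reachable from `node`, pruning with `best`."""
    if node["type"] == "utility":
        return node["value"]
    if node["type"] == "chance":
        # Expected utility over outcomes (no pruning inside an expectation here).
        return sum(p * dfbnb(child, float("-inf"))
                   for p, child in node["outcomes"])
    # Decision node: try each alternative, pruning with an optimistic upper bound.
    for child in node["choices"]:
        if upper_bound(child) <= best:   # bound cannot improve the incumbent
            continue                     # prune this branch
        best = max(best, dfbnb(child, best))
    return best

def upper_bound(node):
    """Optimistic estimate; any admissible (never-underestimating) bound works."""
    if node["type"] == "utility":
        return node["value"]
    children = (node["outcomes"] if node["type"] == "chance"
                else [(1.0, c) for c in node["choices"]])
    return max(upper_bound(c) for _, c in children)

# Tiny example: one decision whose first alternative leads to a chance node.
tree = {"type": "decision", "choices": [
    {"type": "chance", "outcomes": [(0.6, {"type": "utility", "value": 10}),
                                    (0.4, {"type": "utility", "value": 0})]},
    {"type": "utility", "value": 5},
]}
print(dfbnb(tree, float("-inf")))  # 6.0
```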
52.
HyunJuOhDissertation.pdf. Hyun-Ju Oh (14228162). 09 December 2022.
In this thesis, we devise a new finite branch-and-bound algorithm for disjoint bilinear programs. In these problems, there are two vectors of variables, $\mathbf{x}$ and $\mathbf{y}$, and two polytopes $P_{\mathbf{x}}$ and $P_{\mathbf{y}}$ such that $\mathbf{x}$ (resp. $\mathbf{y}$) is chosen from $P_{\mathbf{x}}$ (resp. $P_{\mathbf{y}}$) so that a bilinear objective function is minimized. By a bilinear objective, we mean that the objective becomes linear when either one of $\mathbf{x}$ or $\mathbf{y}$ is fixed.

This branch-and-bound scheme uses a relaxation that is derived using the reformulation-linearization technique (RLT). The RLT relaxation is constructed by taking products of constraints and linearizing the bilinear terms using introduced variables. The quality of this relaxation improves as higher-order products and the corresponding monomial terms are linearized. It is known that, as RLT relaxations are constructed with increasingly higher-order linearizations, the relaxation value eventually converges to the true optimal value; however, no finite convergence properties are known. In contrast, we show that the first level of the RLT hierarchy suffices to convexify the problem when one of the polytopes is a simplex. Then, using this insight, we devise a new class of relaxations by combining RLT with a variant of the double description procedure, where the latter lifts a polytope into a simplex in a finite number of steps. This leads to a finite hierarchy of relaxations that converges to the optimal value. Although this hierarchy is finite, from a computational perspective we find that the relaxations grow rapidly in size. Instead, we use this insight to derive a simplicial branch-and-bound scheme that expresses each polytope as a union of simplices, allowing us to converge finitely to the optimal solution of the problem. Preliminary numerical experiments show that this approach holds promise and competes favorably with state-of-the-art methods on larger instances.
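For reference, a disjoint bilinear program and the flavour of first-level RLT relaxation discussed above can be sketched as follows; the notation ($Q$, $A$, $B$, $\mathbf{a}$, $\mathbf{b}$, $W$) is ours, and the relaxation actually used in the thesis may differ in detail.

```latex
% Disjoint bilinear program: x and y are chosen from separate polytopes.
\begin{align*}
\min_{\mathbf{x},\,\mathbf{y}}\quad & \mathbf{c}^{\top}\mathbf{x} + \mathbf{d}^{\top}\mathbf{y}
   + \mathbf{x}^{\top} Q\,\mathbf{y}\\
\text{s.t.}\quad & \mathbf{x}\in P_{\mathbf{x}} = \{\mathbf{x}\ge\mathbf{0} : A\mathbf{x}\le\mathbf{a}\},\qquad
 \mathbf{y}\in P_{\mathbf{y}} = \{\mathbf{y}\ge\mathbf{0} : B\mathbf{y}\le\mathbf{b}\}.
\end{align*}
% A first-level RLT relaxation multiplies pairs of constraints, one from each
% polytope, and replaces every product x_i y_j by a new variable W_{ij}:
\begin{align*}
\min_{\mathbf{x},\,\mathbf{y},\,W}\quad & \mathbf{c}^{\top}\mathbf{x} + \mathbf{d}^{\top}\mathbf{y}
   + \textstyle\sum_{i,j} Q_{ij} W_{ij}\\
\text{s.t.}\quad & A\mathbf{x}\le\mathbf{a},\quad B\mathbf{y}\le\mathbf{b},\\
 & \bigl(a_k - A_{k\cdot}\mathbf{x}\bigr)\bigl(b_\ell - B_{\ell\cdot}\mathbf{y}\bigr)\ \ge\ 0
   \quad\text{for all } k,\ell,\ \text{with each product } x_i y_j \text{ replaced by } W_{ij}.
\end{align*}
```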
53.
Semantic Decomposition By Covering. Sripadham, Shankar B. 10 August 2000.
This thesis describes the implementation of a covering algorithm for the semantic decomposition of sentences in technical patents. The research complements the ASPIN project, whose long-term goal is an automated system for digital system synthesis from patents.
In order to develop a prototype of the system explained in a patent, a natural language processor (a sentence interpreter) is required. Such systems typically attempt to interpret a sentence by syntactic analysis (parsing) followed by semantic analysis. Quite often, however, the technical narrative contains grammatical errors, incomplete sentences, anaphoric references, and typographical errors that can cause the grammatical parse to fail. In such situations, an alternative method that uses a repository of pre-compiled simple sentences (called frames) to analyze the sentences of the patent can be a useful backup. By semantically decomposing the sentences of a patent into a set of frames whose meanings are fully understood, the meaning of the patent sentences can be interpreted.
This thesis deals with the semantic decomposition of sentences using a branch-and-bound covering algorithm, implemented in C++, and a number of experiments were conducted to evaluate its performance. The covering algorithm is based on a standard branch-and-bound search; it is fast and flexible and can provide good coverage results (100% coverage for some sentences). The system covered 67.68% of the sentence tokens using 3,459 frames in the repository, and 54.25% of the frames identified by the system in sentence covers were found to be semantically correct. The experiments suggest that the performance of the system can be improved by increasing the number of frames in the repository. / Master of Science
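As a toy illustration of branch-and-bound covering (not the thesis's C++ system), the sketch below treats frames as token sets and searches for the selection of frames that leaves the fewest sentence tokens uncovered, pruning partial selections that cannot improve on the incumbent. The frame repository, frame scoring, and semantic checks of the real system are omitted.

```python
# Toy branch-and-bound cover: choose frames (token sets) so that as many
# sentence tokens as possible are covered. Illustration only.

def best_cover(tokens, frames):
    tokens = frozenset(tokens)
    best = {"uncovered": len(tokens), "cover": []}

    def search(remaining, chosen, candidates):
        if len(remaining) < best["uncovered"]:
            best["uncovered"], best["cover"] = len(remaining), list(chosen)
        if not remaining or not candidates:
            return
        # Bound: even using every remaining candidate, how much could still be covered?
        reachable = set().union(*candidates) & remaining
        if len(remaining) - len(reachable) >= best["uncovered"]:
            return  # prune: cannot improve on the incumbent cover
        frame, rest = candidates[0], candidates[1:]
        if frame & remaining:
            search(remaining - frame, chosen + [frame], rest)  # include this frame
        search(remaining, chosen, rest)                        # skip this frame

    search(tokens, [], [frozenset(f) for f in frames])
    return best

sentence = ["the", "clock", "signal", "drives", "the", "counter"]
frames = [{"clock", "signal"}, {"drives", "counter"}, {"the"}]
print(best_cover(sentence, frames))
```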
54.
Avaliação de métodos ótimos e subótimos de seleção de características de texturas em imagens / Evaluation of optimal and suboptimal feature selection methods applied to image textures. Roncatti, Marco Aurelio. 10 July 2008.
Características de texturas atuam como bons descritores de imagens e podem ser empregadas em diversos problemas, como classificação e segmentação. Porém, quando o número de características é muito elevado, o reconhecimento de padrões pode ser prejudicado. A seleção de características contribui para a solução desse problema, podendo ser empregada tanto para redução da dimensionalidade como também para descobrir quais as melhores características de texturas para o tipo de imagem analisada. O objetivo deste trabalho é avaliar métodos ótimos e subótimos de seleção de características em problemas que envolvem texturas de imagens. Os algoritmos de seleção avaliados foram o branch and bound, a busca exaustiva e o sequential floating forward selection (SFFS). As funções critério empregadas na seleção foram a distância de Jeffries-Matusita e a taxa de acerto do classificador de distância mínima (CDM). As características de texturas empregadas nos experimentos foram obtidas com estatísticas de primeira ordem, matrizes de co-ocorrência e filtros de Gabor. Os experimentos realizados foram a classificação de regiões de uma foto aérea de plantação de eucalipto, a segmentação não-supervisionada de mosaicos de texturas de Brodatz e a segmentação supervisionada de imagens médicas (MRI do cérebro). O branch and bound é um algoritmo ótimo e mais eficiente do que a busca exaustiva na maioria dos casos. Porém, continua sendo um algoritmo lento. Este trabalho apresenta uma nova estratégia para o branch and bound, nomeada floresta, que melhorou significativamente a eficiência do algoritmo. A avaliação dos métodos de seleção de características mostrou que os melhores subconjuntos foram aqueles obtidos com o uso da taxa de acerto do CDM. A busca exaustiva e o branch and bound, mesmo com a estratégia floresta, foram considerados inviáveis devido ao alto tempo de processamento nos casos em que o número de características é muito grande. O SFFS apresentou os melhores resultados, pois, além de mais rápido, encontrou as soluções ótimas ou próximas das ótimas. Pôde-se concluir também que a precisão no reconhecimento de padrões aumenta com a redução do número de características e que os melhores subconjuntos freqüentemente são formados por características de texturas obtidas com técnicas diferentes. / Texture features are efficient image descriptors and can be employed in a wide range of applications, such as classification and segmentation. However, when the number of features is considerably high, pattern recognition tasks may be compromised. Feature selection helps prevent this problem, as it can be used to reduce data dimensionality and reveal features which best characterise the images under investigation. This work aims to evaluate optimal and suboptimal feature selection algorithms in the context of textural features extracted from images. Branch and bound, exhaustive search and sequential floating forward selection (SFFS) were the algorithms investigated. The criterion functions employed during selection were the Jeffries-Matusita (JM) distance and the minimum distance classifier (MDC) accuracy rate. Texture features were computed from first-order statistics, co-occurrence matrices and Gabor filters. Three different experiments have been conducted: classification of regions of an aerial photograph of a eucalyptus plantation, unsupervised segmentation of mosaics of Brodatz texture samples, and supervised segmentation of MRI images of the brain. The branch and bound is an optimal algorithm and, in most cases, many times more efficient than exhaustive search, but it is still time-consuming. This work proposes a novel strategy for the branch and bound algorithm, named forest, which considerably improves its performance. The evaluation of the feature selection methods has revealed that the best feature subsets were those computed by the MDC accuracy rate criterion function. The exhaustive search and branch and bound approaches have been considered unfeasible due to their high processing times, especially for high-dimensional data; this holds even for the branch and bound with the forest strategy. The SFFS approach yielded the best results: not only was it faster, it was also capable of finding optimal or nearly optimal solutions. Finally, it has been observed that the precision of pattern recognition tasks increases as the number of features decreases, and that the best feature subsets are frequently those formed by features computed with distinct texture extraction methods.
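A compact sketch of sequential floating forward selection (SFFS), the suboptimal method that performed best above; `criterion` is a placeholder standing in for the Jeffries-Matusita distance or the MDC accuracy rate, and the toy criterion at the end exists only to make the example runnable.

```python
# Sequential floating forward selection (SFFS) sketch. `criterion` maps a feature
# subset to a score to be maximized (e.g. JM distance or classifier accuracy).

def sffs(features, criterion, target_size):
    selected = []
    while len(selected) < target_size:
        # Forward step: add the single feature that most improves the criterion.
        candidates = [f for f in features if f not in selected]
        best_add = max(candidates, key=lambda f: criterion(selected + [f]))
        selected.append(best_add)
        # Floating (backward) steps: drop a feature if doing so improves the score.
        # The just-added feature is excluded; a full implementation also guards
        # against cycling between the same subsets.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            current = criterion(selected)
            for f in [g for g in selected if g != best_add]:
                reduced = [g for g in selected if g != f]
                if criterion(reduced) > current:
                    selected = reduced
                    improved = True
                    break
    return selected

# Toy criterion, purely illustrative: prefer larger subsets with larger indices.
features = list(range(8))
toy_criterion = lambda subset: len(subset) + 0.01 * sum(subset)
print(sffs(features, toy_criterion, target_size=3))
```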
55.
Algorithmen im Wirkstoffdesign. Thimm, Martin. 31 January 2006.
Die Bestimmung der Ähnlichkeit von molekularen Strukturen und das Clustern solcher Strukturen gemäß Ähnlichkeit sind zwei zentrale Fragen im Wirkstoffdesign. Die Arbeit beschreibt im ersten Teil zwei neue Verfahren zum Vergleich von Molekülen auf 3-dimensionale Ähnlichkeit. Der erste Algorithmus benutzt als Eingabe nur die Koordinaten der Atome der zu vergleichenden Moleküle. Wir können zeigen, daß eine rein geometrische Zielfunktion in der Lage ist, Wirkungsähnlichkeit von Substanzen vorherzusagen, und daß der Algorithmus geeignet ist, Ähnlichkeiten zu finden, die mit bisherigen, einfacheren Methoden nicht gefunden werden konnten. Das zweite Verfahren nutzt zusätzlich noch die Bindungsstruktur der Eingabemoleküle. Es ist flexibel, d.h. alle Konformere der Moleküle werden simultan behandelt. Wir erhalten ein sehr schnelles Verfahren, das bei geeigneter Parametereinstellung auch beweisbar optimale Lösungen liefert. Für praktisch relevante Anwendungen erreichen wir erstmals Laufzeiten, die selbst das Durchsuchen großer Datenbanken ermöglichen. Im zweiten Teil beschreiben wir zwei Methoden, eine Menge von molekularen Strukturen so zu organisieren, daß die Suche nach geometrisch ähnlichen deutlich schneller durchgeführt werden kann als durch lineare Suche. Nach Analyse der Daten mit graphentheoretischen Methoden finden hierarchische Verfahren und repräsentantenbasierte Ansätze ihre Anwendung. Schließlich geben wir einen neuen Algorithmus zum Biclustern von Daten an, einem Problem, das bei der Analyse von Genexpressionsdaten eine wichtige Rolle spielt. Mit graphentheoretischen Methoden konstruieren wir zunächst deterministisch Obermengen von Lösungen, die danach heuristisch ausgedünnt werden. Wir können zeigen, daß dieser neue Ansatz bisherige, vergleichbare z.T. deutlich überbietet. Seine prinzipielle Einfachheit läßt anwendungsbezogene Modifikationen leicht zu. / Two important questions in drug design are the following: "How to compute the similarity of two molecules?" and "How to cluster molecules by similarity?" In the first part we describe two different approaches to compare molecules for 3D similarity. The first algorithm uses only the 3D coordinates of the atoms as input. We show that this algorithm is able to detect similar activity or similar adverse reactions, even with a simple, purely geometry-based scoring function. Compared to previous, simpler approaches, more interesting hits are found. The connectivity structures of the molecular graphs are used by the second algorithm as additional input. This fully flexible approach -- conformers of the molecules are treated simultaneously -- may even find provably optimal solutions. Parameter settings for practically relevant instances allow running times that make it possible to search even large databases. The second part describes two methods to search a database of molecular structures. After analyzing the data with graph-theoretical methods, two algorithms for two different ranges of similarity are designed. Scanning the database for structures similar to a given query can be accelerated considerably. We use hierarchical methods and dominating-set techniques. Finally, we propose a new biclustering algorithm. Biclustering problems have recently appeared mainly in the context of analysing gene expression data. Again, graph-theoretical methods are our main tools. In our model, biclusters correspond to dense subgraphs of certain bipartite graphs. In a first phase, the algorithm deterministically finds supersets of solution candidates. Thinning out these sets by heuristic methods leads to solutions. This new algorithm outperforms former comparable methods, and its simple structure makes it easy to customize for practical applications.
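As a loose illustration of the bicluster-as-dense-bipartite-subgraph view mentioned above (not the thesis's actual two-phase algorithm), the sketch below takes a small thresholded expression matrix and greedily grows a dense gene-condition block around a seed entry; the density threshold and the growth rule are assumptions made for the example.

```python
# Illustrative only: treat a bicluster as a dense block in the bipartite graph
# whose edges connect genes to the conditions in which they are expressed.

import itertools

def grow_bicluster(matrix, seed_gene, seed_cond, min_density=0.8):
    """Greedily grow a (genes, conditions) block around a seed edge."""
    genes, conds = {seed_gene}, {seed_cond}

    def density(gs, cs):
        ones = sum(matrix[g][c] for g, c in itertools.product(gs, cs))
        return ones / (len(gs) * len(cs))

    improved = True
    while improved:
        improved = False
        for g in set(range(len(matrix))) - genes:
            if density(genes | {g}, conds) >= min_density:
                genes.add(g); improved = True
        for c in set(range(len(matrix[0]))) - conds:
            if density(genes, conds | {c}) >= min_density:
                conds.add(c); improved = True
    return sorted(genes), sorted(conds)

expr = [  # rows: genes, columns: conditions (1 = expressed above threshold)
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
]
print(grow_bicluster(expr, seed_gene=0, seed_cond=0))
```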
57.
Problems, Models and Algorithms in One- and Two-Dimensional Cutting / Probleme, Modelle und Algorithmen in ein- und zweidimensionalem Zuschnitt. Belov, Gleb. 20 January 2004.
Within such disciplines as Management Science, Information and Computer Science, Engineering, Mathematics and Operations Research, problems of cutting and packing (C&P) of concrete and abstract objects appear under various specifications (cutting problems, knapsack problems, container and vehicle loading problems, pallet loading, bin packing, assembly line balancing, capital budgeting, changing coins, etc.), although they all have essentially the same logical structure. In cutting problems, a large object must be divided into smaller pieces; in packing problems, small items must be combined into large objects. Most of these problems are NP-hard. Since the pioneering work of L.V. Kantorovich in 1939, which first appeared in the West in 1960, there has been a steadily growing number of contributions in this research area. In 1961, P. Gilmore and R. Gomory presented a linear programming relaxation of the one-dimensional cutting stock problem. The best-performing algorithms today are based on their relaxation. It was, however, more than three decades before the first 'optimum' algorithms appeared in the literature, and they even proved to perform better than heuristics. They were of two main kinds: enumerative algorithms working by separation of the feasible set, and cutting plane algorithms which cut off infeasible solutions. For many other combinatorial problems, these two approaches have been successfully combined. In this thesis we do this for one-dimensional stock cutting and two-dimensional two-stage constrained cutting. For the two-dimensional problem, the combined scheme mostly provides better solutions than other methods, especially on large-scale instances, in little time. For the one-dimensional problem, the integration of cuts into the enumerative scheme improves the results of the latter only in exceptional cases. While the main optimization goal is to minimize material input or trim loss (waste), in a real-life cutting process there are further criteria, e.g., the number of different cutting patterns (setups) and of open stacks. Some new methods and models are proposed. Then, an approach combining both objectives is presented, to our knowledge for the first time. We believe this approach will be highly relevant for industry.
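The Gilmore-Gomory relaxation referred to above is, in its standard statement (our notation), the following pattern-based linear program:

```latex
% One-dimensional cutting stock: m item widths w_i with demands b_i must be cut
% from stock rolls of width L. Column j is a cutting pattern a_j = (a_{1j},...,a_{mj}),
% feasible if \sum_i a_{ij} w_i <= L, and x_j counts how often pattern j is used.
\begin{align*}
z_{\mathrm{LP}} \;=\; \min\;& \sum_{j\in J} x_j && \text{(number of stock rolls used)}\\
\text{s.t.}\;& \sum_{j\in J} a_{ij}\,x_j \;\ge\; b_i, && i = 1,\dots,m,\\
 & x_j \;\ge\; 0, && j \in J,
\end{align*}
% where J indexes all feasible patterns. The relaxation is solved by column
% generation: the pricing problem that produces a new pattern is the knapsack
%   max { \sum_i \pi_i a_i : \sum_i w_i a_i <= L, a_i integer, a_i >= 0 }
% with \pi the current dual prices of the demand constraints.
```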
58.
Modelo de roteamento de veículos aplicado ao planejamento do Inventário Florestal / Vehicle routing problem applied to forest inventory planning. Meneguzzi, Cristiane Coutinho. 04 October 2011.
In forestry, most research effort is still devoted to forest harvesting and transport, since these stages are directly responsible for the final cost of wood. However, several other stages, such as the forest inventory, have great potential for study. The information provided by the forest inventory is important for the planning of the whole forest enterprise, since it supports any decision making involving forest resources. In this research, a vehicle routing model was used to plan this activity. The vehicle routing problem (VRP) and its variants have been widely studied in recent years, mainly because of their applicability and their efficiency in generating solutions that reduce costs and/or distances. The general objective was to optimize the planning of the forest inventory activity using a VRP model and to evaluate the importance of this technique for the productivity of the activity. Among the factors that influence this productivity, spatial dispersion, a basic feature of forest stands, is one that can be controlled through techniques that link it to planning. Studies show that this association yields significant results. / Na área florestal, ainda é dada maior ênfase ao desenvolvimento de estudos envolvendo as etapas de colheita e transporte florestal, por serem diretamente responsáveis pelo custo final da madeira. Entretanto, diversas outras etapas possuem grande potencial para estudos, como é o caso do inventário florestal. Informações fornecidas pelo inventário florestal são importantes no planejamento de todo empreendimento florestal, pois subsidiam qualquer tomada de decisão envolvendo recursos florestais. Nesta pesquisa, utilizou-se o modelo de roteamento de veículos (PRV) no planejamento dessa atividade. O PRV e suas variantes vêm sendo amplamente estudados nos últimos anos, principalmente pela sua aplicabilidade e eficiência em gerar soluções apresentando redução de custo e/ou distâncias. O objetivo geral foi otimizar o planejamento da atividade de inventário florestal a partir de um modelo PRV e avaliar a importância do uso desta técnica no rendimento das atividades. Dentre os fatores que influenciam neste rendimento, a dispersão espacial, característica básica dos povoamentos florestais, é um fator controlável a partir do uso de técnicas que possibilitem associá-lo ao planejamento. Estudos mostram que essa associação traz resultados significativos.
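As a very rough sketch of the routing idea, the Python fragment below groups inventory plots into daily routes with a nearest-neighbour rule, capped by the number of plots a crew can measure per day; the coordinates, the daily cap, and the heuristic itself are illustrative assumptions rather than the model evaluated in the thesis.

```python
# Rough illustration only: build daily measurement routes for inventory plots
# with a nearest-neighbour rule, limited by how many plots a crew measures per day.

import math

def nearest_neighbour_routes(depot, plots, plots_per_day):
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    remaining = list(plots)
    routes = []
    while remaining:
        route, pos = [], depot
        while remaining and len(route) < plots_per_day:
            nxt = min(remaining, key=lambda p: dist(pos, p))  # closest unvisited plot
            remaining.remove(nxt)
            route.append(nxt)
            pos = nxt
        routes.append(route)
    return routes

depot = (0.0, 0.0)
plots = [(1, 2), (2, 1), (8, 8), (9, 7), (1, 9), (2, 8)]
for day, route in enumerate(nearest_neighbour_routes(depot, plots, plots_per_day=2), 1):
    print(f"day {day}: {route}")
```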
59.
Um algoritmo exato para a otimização de carteiras de investimento com restrições de cardinalidade / An exact algorithm for portfolio optimization with cardinality constraints. Villela, Pedro Ferraz (1982-). 12 August 2018.
Advisor: Francisco de Assis Magalhães Gomes Neto / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Previous issue date: 2008 / Resumo: Neste trabalho, propomos um método exato para a resolução de problemas de programação quadrática que envolvem restrições de cardinalidade. Como aplicação, empregamos o método para a obtenção da fronteira eficiente de um problema (bi-objetivo) de otimização de carteiras de investimento. Nosso algoritmo é baseado no método Branch-and-Bound. A chave de seu sucesso, entretanto, reside no uso do método de Lemke, que é aplicado para a resolução dos subproblemas associados aos nós da árvore gerada pelo Branch-and-Bound. Ao longo do texto, algumas heurísticas também são introduzidas, com o propósito de acelerar a convergência do método. Os resultados computacionais obtidos comprovam que o algoritmo proposto é eficiente. / Abstract: In this work, we propose an exact method for solving quadratic programming problems with cardinality constraints. As an application, the algorithm is used to generate the efficient (Pareto) frontier of a bi-objective portfolio optimization problem. This algorithm is based on the branch-and-bound method. The key to its success, however, resides in the application of Lemke's method to the resolution of the subproblems associated with the nodes of the tree generated by the branch-and-bound algorithm. Throughout the text, some heuristics are also introduced in order to accelerate the convergence of the method. The computational results obtained show that the proposed algorithm is efficient. / Mestrado / Otimização / Mestre em Matemática Aplicada
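In a standard form (our notation, not necessarily the thesis's), the cardinality-constrained portfolio problem whose efficient frontier is traced can be written as:

```latex
% Mean-variance portfolio selection with a cardinality constraint: at most K of
% the n assets may receive a nonzero weight. Sigma is the covariance matrix,
% mu the expected returns, and R a required expected return; the frontier is
% traced by varying R.
\begin{align*}
\min_{\mathbf{w},\,\mathbf{z}}\quad & \mathbf{w}^{\top}\Sigma\,\mathbf{w}\\
\text{s.t.}\quad & \boldsymbol{\mu}^{\top}\mathbf{w} \ \ge\ R,\qquad
  \mathbf{1}^{\top}\mathbf{w} = 1,\\
 & 0 \ \le\ w_i \ \le\ z_i,\qquad z_i \in \{0,1\},\quad i = 1,\dots,n,\\
 & \sum_{i=1}^{n} z_i \ \le\ K.
\end{align*}
% Branching on the binary variables z_i leaves continuous quadratic subproblems
% at the tree nodes, which is where a complementarity method such as Lemke's
% algorithm can be applied.
```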
60.
Symbolische Interpretation Technischer Zeichnungen. Bringmann, Oliver. 08 August 2002.
Scanned and vectorized technical drawings are automatically migrated into a high-quality data structure using a network of models. The models describe the contents of the drawings hierarchically and declaratively. Models for individual components of the drawings can be developed pairwise independently, which makes even very complex classes of drawings, such as electrical line networks or building plans, accessible. The models are used by the new, so-called Y-algorithm: hypotheses about the interpretation of local drawing contents are generated hierarchically. If conflicts arise from the use of competing models, they are recorded. By means of this notion of conflict, consistent interpretations of a complete drawing can be defined abstractly and determined during the analysis of a concrete drawing. A probability-based quality measure rates each of these alternative global interpretations. Finding an interpretation that is optimal with respect to this measure is an NP-hard problem; a branch-and-bound algorithm proves to be the adequate solution.
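One simplified way to cast the final search problem (an illustration, not the thesis's Y-algorithm): treat each hypothesis as a node weighted by its probability-based score, record conflicts between hypotheses as pairs, and use branch and bound to select a conflict-free subset of maximum total score.

```python
# Simplified model: hypotheses carry probability-based scores, conflicts mark pairs
# that cannot both belong to one interpretation; find the best conflict-free subset.

def best_interpretation(scores, conflicts):
    hyps = sorted(scores, key=scores.get, reverse=True)
    conflict = {h: set() for h in hyps}
    for a, b in conflicts:
        conflict[a].add(b)
        conflict[b].add(a)
    best = {"score": 0.0, "chosen": []}

    def search(i, chosen, total):
        if total > best["score"]:
            best["score"], best["chosen"] = total, list(chosen)
        if i == len(hyps):
            return
        # Bound: even accepting every remaining hypothesis cannot beat the incumbent.
        if total + sum(scores[h] for h in hyps[i:]) <= best["score"]:
            return
        h = hyps[i]
        if not conflict[h] & set(chosen):          # include h if consistent so far
            search(i + 1, chosen + [h], total + scores[h])
        search(i + 1, chosen, total)               # or leave h out

    search(0, [], 0.0)
    return best

scores = {"door": 0.9, "window": 0.7, "wall-segment": 0.4}
conflicts = [("door", "wall-segment")]
print(best_interpretation(scores, conflicts))
```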