11

The maximum feasible subsystem problem and vertex facet incidences of polyhedra

Pfetsch, Marc E. Unknown Date (has links) (PDF)
Berlin, Techn. Univ., Diss., 2002.
12

Vyhodnocení soustřeďování dříví koňmi v Národním parku / Evaluation of horse logging in a national park

Zbořilová, Šárka January 2010 (has links)
No description available.
13

Análise da distribuição do número de operações de resolvedores SAT / Analysis of the distribution of the number of operations of SAT solvers

Poliana Magalhães Reis 28 February 2012 (has links)
In the study of computational complexity, two classes stand out: P and NP. The question of whether P = NP is one of the greatest unsolved problems in theoretical computer science and contemporary mathematics. SAT was the first problem recognized as NP-complete; it asks whether a given formula of classical propositional logic is satisfiable. Implementations of algorithms that solve SAT instances are known as SAT solvers. Many applications in computer science can be handled with SAT solvers, as can other NP-complete problems that reduce to SAT, such as graph coloring, scheduling, and planning. Among the most efficient SAT solvers are Sato, Grasp, Chaff, MiniSat, and Berkmin. The Chaff algorithm is based on the DPLL algorithm, which has existed for more than 40 years and remains the most widely used strategy in SAT solvers. This dissertation presents a detailed study of the behavior of zChaff (a very efficient implementation of Chaff) to characterize what to expect from its executions in general.
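The DPLL procedure underlying Chaff can be sketched in a few lines. The following is an illustrative toy with a simple operation counter, not the zChaff implementation studied in the dissertation; real solvers add watched literals, clause learning, and branching heuristics such as VSIDS.

```python
# Minimal DPLL sketch with an operation counter (illustrative only).
# A formula is a list of clauses; a clause is a list of nonzero ints
# (positive = variable, negative = its negation), as in DIMACS format.

def dpll(clauses, assignment=None, stats=None):
    if assignment is None:
        assignment = {}
    if stats is None:
        stats = {"decisions": 0, "propagations": 0}
    # Unit propagation: repeatedly assign variables forced by unit clauses.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        stats["propagations"] += 1
        assignment[abs(unit)] = unit > 0
        new = []
        for c in clauses:
            if unit in c:
                continue                      # clause satisfied
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None, stats            # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment, stats              # all clauses satisfied
    # Branch on the first unassigned variable (real solvers use heuristics).
    var = abs(clauses[0][0])
    for value in (var, -var):
        stats["decisions"] += 1
        result, _ = dpll(clauses + [[value]], dict(assignment), stats)
        if result is not None:
            return result, stats
    return None, stats
```

Counting `decisions` and `propagations` per run is the kind of operation statistic whose distribution the dissertation analyzes for zChaff.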
14

Fixed parameter tractable algorithms for optimal covering tours with turns

Yu, Nuo, 1983- January 2008 (has links)
No description available.
15

Keliaujančių pirklių uždavinys / Multiple traveling salesman problem

Jurgo, Gžegož 30 June 2014 (has links)
The main goal of this master's thesis was to analyze the multiple traveling salesman problem with additional constraints; a limit on each salesman's carrying capacity was added. Possible solution methods were analyzed, and a genetic algorithm capable of handling the stated problem was implemented. The genetic operators needed to solve the problem were designed and implemented, along with local route optimization algorithms. Tests were carried out and feasible solutions obtained.
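A common genetic-algorithm encoding for the multiple traveling salesman problem is a permutation of the cities plus break points that split it into one route per salesman. The sketch below (an assumption for illustration, not the thesis implementation; the function names and the penalty weight are invented) shows a fitness function with a carrying-capacity penalty and a typical swap mutation operator.

```python
import random

def route_length(route, dist, depot=0):
    # Each salesman starts and ends at the depot.
    stops = [depot] + route + [depot]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def fitness(perm, breaks, dist, demand, capacity, penalty=1000.0):
    """Total tour length plus a penalty for exceeding carrying capacity."""
    routes, start = [], 0
    for b in list(breaks) + [len(perm)]:
        routes.append(perm[start:b])
        start = b
    cost = sum(route_length(r, dist) for r in routes)
    for r in routes:
        load = sum(demand[c] for c in r)
        if load > capacity:
            cost += penalty * (load - capacity)
    return cost

def swap_mutation(perm, rng=random):
    """Exchange two cities; a typical mutation operator for permutations."""
    i, j = rng.sample(range(len(perm)), 2)
    child = list(perm)
    child[i], child[j] = child[j], child[i]
    return child
```

Penalizing capacity violations (rather than forbidding them outright) keeps the search space connected, which is one standard way to handle such side constraints in a genetic algorithm.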
16

Minimum Crossing Problems on Graphs

Roh, Patrick January 2007 (has links)
This thesis addresses several problems in discrete optimization that are hard to solve exactly; good approximation algorithms for them can, however, help approximate problems arising in computational biology and computer science. Given an undirected graph G=(V,E) and a family S of subsets of vertices, the minimum crossing spanning tree is a spanning tree in which the maximum number of edges crossing any single set in S is minimized, where an edge crosses a set if it has exactly one endpoint in the set. This thesis presents two algorithms for special cases of minimum crossing spanning trees. The first algorithm handles the case where the sets of S are pairwise disjoint. It gives a spanning tree whose maximum crossing of a set is at most 2·OPT + 2, where OPT is the maximum crossing of a minimum crossing spanning tree. The second algorithm handles the case where the sets of S form a laminar family. Let b_i be a bound for each S_i in S. If there exists a spanning tree in which each set S_i is crossed at most b_i times, the algorithm finds a spanning tree in which each set S_i is crossed O(b_i log n) times. From this algorithm, one can obtain a spanning tree with maximum crossing O(OPT log n). Given an undirected graph G=(V,E) and a family S of subsets of vertices, the minimum crossing perfect matching is a perfect matching in which the maximum number of edges crossing any set in S is minimized. A proof is presented showing that finding a minimum crossing perfect matching is NP-hard, even when the graph is bipartite and the sets of S are pairwise disjoint.
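The crossing condition in the definition above is easy to evaluate directly. A minimal helper (an illustrative sketch, not code from the thesis) that computes the maximum crossing of a given spanning tree over a family of vertex sets:

```python
# An edge (u, v) crosses a set S when exactly one of its endpoints lies
# in S. The crossing number of a spanning tree with respect to a family
# of sets is the maximum, over the sets, of the number of tree edges
# crossing that set.

def max_crossing(tree_edges, family):
    """tree_edges: iterable of (u, v) pairs; family: iterable of vertex sets."""
    return max(
        sum(1 for (u, v) in tree_edges if (u in s) != (v in s))
        for s in family
    )
```

For the path tree 1-2-3-4 and the family {{2, 3}, {1}}, the set {2, 3} is crossed by the edges (1, 2) and (3, 4), so the crossing number is 2.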
17

Description de la variabilité en pharmacocinétique de population par l'approche non paramétrique des noyaux : adaptation de posologie à l'aide des nomogrammes cinétiques / Nonparametric kernel density estimation applied to population pharmacokinetics : dosage regimens individualization by kinetic nomograms

Marouani, Hafedh 10 July 2012 (has links)
Population pharmacokinetic approaches have become routine in both industry and clinical settings. Their purpose is to describe and statistically quantify interindividual variability. The reliability of the information obtained is fundamental because, when used in a Bayesian criterion, it allows the estimation of individual pharmacokinetic parameters, which can lead to appropriate individual dosage regimens. The implementation of nonparametric kernel density estimators and the development of a new tool assisting the individualization of dosage regimens are the main contributions of this thesis. A two-stage method was used in the multivariate context with normal heteroscedastic errors. To describe the variability of pharmacokinetic parameters, we implemented nonparametric "naive" and deconvolution kernel density estimators. Least-squares cross-validation was used to calculate the optimal smoothing parameter for the kernel estimators. Moreover, to assist the individualization of drug regimens, we developed kinetic nomograms. These involve a collection of "specific" concentration-time profiles obtained after repeated administrations of a fixed "identification protocol" targeting a given steady-state concentration. The profiles divide the concentration-time space into several areas, each corresponding to a given adjusted drug dose. All calculations were performed using Matlab® software. The least-squares cross-validation selector performed well, and the kernel estimators above were implemented successfully. Performance evaluation of dose adjustment of Sirolimus® by kinetic nomograms, in both a simulation study and a clinical study, showed the simplicity and reliability of the new procedure. It provides adequate dosage adjustment even for patients whose pharmacokinetic characteristics vary over time. The implemented nonparametric kernel estimators accurately described atypical shapes of the distribution of pharmacokinetic parameters, which is difficult to achieve with classical single-stage parametric approaches. In the clinical context, kinetic nomograms make individual adjustment of dosage regimens a simple bedside application and are an interesting alternative to cumbersome Bayesian procedures.
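A one-dimensional toy version of least-squares cross-validation for a Gaussian kernel (a sketch under simplifying assumptions, not the thesis's multivariate Matlab implementation) illustrates how the smoothing parameter is selected: the criterion estimates the integrated squared error of the density estimate and is minimized over candidate bandwidths.

```python
import math

def gauss(u, s):
    """Density of a normal distribution N(0, s^2) at u."""
    return math.exp(-u * u / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def lscv_score(data, h):
    """Least-squares cross-validation criterion for a Gaussian kernel:
    integral of fhat^2 minus twice the mean leave-one-out density.
    (For Gaussian kernels the integral has the closed form below.)"""
    n = len(data)
    integral = sum(gauss(x - y, h * math.sqrt(2))
                   for x in data for y in data) / (n * n)
    loo = sum(
        sum(gauss(x - y, h) for j, y in enumerate(data) if j != i) / (n - 1)
        for i, x in enumerate(data)
    ) / n
    return integral - 2 * loo

def best_bandwidth(data, candidates):
    """Pick the candidate bandwidth minimizing the LSCV criterion."""
    return min(candidates, key=lambda h: lscv_score(data, h))
```

Very small bandwidths are punished by the diagonal terms of the squared-density integral, and very large ones flatten the leave-one-out fit, so the selector favors an intermediate smoothing level.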
18

Automatic Solutions of Logic Puzzles

Sempolinski, Peter January 2009 (has links)
Thesis advisor: Howard Straubing / The use of computer programs to automatically solve logic puzzles is examined in this work. A typical example of this type of logic puzzle is one in which there are five people, with five different occupations and five different color houses. The task is to use various clues to determine which occupation and which color belongs to each person. The clues to this type of puzzle are often statements such as "John is not the barber" or "Joe lives in the blue house." These puzzles range widely in complexity, with varying numbers of objects to identify and varying numbers of characteristics to be identified for each object. On the theoretical side, this work proves that the problem of determining whether a given logic puzzle has a solution is NP-complete. This implies, provided that P is not equal to NP, that automated solvers for these puzzles will not be efficient on all large inputs. Having proved this, the work proceeds to seek methods that solve these puzzles efficiently in most cases. To that end, each logic puzzle can be encoded as an instance of Boolean satisfiability; two encodings are proposed, both translating logic puzzles into Boolean formulas in conjunctive normal form. Using a selection of test puzzles, a group of Boolean satisfiability solvers is used to solve these puzzles under both encodings. In most cases, these simple solvers produce solutions efficiently. / Thesis (BS) — Boston College, 2009. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: College Honors Program. / Discipline: Computer Science.
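A toy version of the encoding described above can be written down directly (this is an illustrative sketch, not one of the thesis's two encodings): variable v(p, j) is true when person p holds occupation j, and CNF clauses say every person has some occupation, no occupation is shared, plus one clue. A brute-force satisfiability check stands in for the SAT solvers used in the thesis.

```python
from itertools import product

people = ["John", "Joe"]
jobs = ["barber", "baker"]
# Map each (person, job) pair to a positive integer variable, DIMACS-style.
var = {(p, j): i for i, (p, j) in enumerate(product(people, jobs), start=1)}

clauses = []
for p in people:                       # each person has at least one job
    clauses.append([var[(p, j)] for j in jobs])
for j in jobs:                         # no job is held by two people
    clauses.append([-var[(people[0], j)], -var[(people[1], j)]])
clauses.append([-var[("John", "barber")]])   # clue: John is not the barber

def satisfiable(clauses, n_vars):
    """Try every assignment; return a model dict {var: bool} or None."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(l) for l in c) for c in clauses):
            return dict(enumerate(bits, start=1))
    return None

model = satisfiable(clauses, len(var))
```

Here the single clue already forces the solution: John is the baker and Joe is the barber. Real instances replace the exponential brute-force loop with a SAT solver.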
19

Algorithms for the satisfiability problem

Rolf, Daniel 22 November 2006 (has links)
This work deals with worst-case algorithms for the satisfiability problem of Boolean formulas in conjunctive normal form. The main part consists of analyzing the running time of three algorithms: two for 3-SAT and one for Unique-k-SAT. We establish a randomized algorithm that finds a satisfying assignment of a satisfiable 3-CNF formula G on n variables in O(1.32793^n) expected running time. The algorithm is based on the analysis of so-called strings: sequences of clauses of size three in which non-consecutive clauses share no variable and consecutive clauses share one or two variables. If there are few strings, we are likely to encounter a solution of G already while searching for strings; if there are many, we use them to improve the running time of Schöning's algorithm, which in 1999 achieved a bound of O(1.3334^n) for 3-SAT. Furthermore, we derandomize the PPSZ algorithm for Unique-k-SAT. The PPSZ algorithm, presented by Paturi, Pudlák, Saks, and Zane in 1998, finds the solution of a uniquely satisfiable 3-CNF formula in expected running time at most O(1.3071^n). Our derandomized version for Unique-k-SAT has essentially the same bound, which settles the currently best known deterministic worst-case bound for Unique-k-SAT; the derandomization applies the "method of small sample spaces". Finally, we improve the bound for the algorithm of Iwama and Tamaki, who in 2003 combined Schöning's algorithm with the PPSZ algorithm to obtain an O(1.3238^n) bound. We tweak the bound of the PPSZ algorithm so that it contributes slightly more to the combined algorithm, yielding the currently best known randomized worst-case bound for 3-SAT, O(1.32216^n).
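Schöning's random-walk procedure, which the thesis builds on, is short enough to sketch (an illustrative toy, not the analyzed variant): start from a uniform random assignment and, for 3n steps, pick a falsified clause and flip one of its variables at random; repeating this with fresh restarts O(1.3334^n) times finds a satisfying assignment of a satisfiable 3-CNF formula with high probability.

```python
import random

def schoening_try(clauses, n, rng):
    """One restart: random assignment, then up to 3n random-walk flips."""
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}
    for _ in range(3 * n):
        falsified = [c for c in clauses
                     if not any((l > 0) == assign[abs(l)] for l in c)]
        if not falsified:
            return assign                      # all clauses satisfied
        # Flip a random variable of a random falsified clause.
        flip = abs(rng.choice(rng.choice(falsified)))
        assign[flip] = not assign[flip]
    return None

def schoening(clauses, n, tries=1000, seed=0):
    """Repeat independent restarts; the analysis needs about 1.3334^n."""
    rng = random.Random(seed)
    for _ in range(tries):
        result = schoening_try(clauses, n, rng)
        if result is not None:
            return result
    return None
```

The key fact behind the bound is that each restart succeeds with probability roughly (3/4)^n, so about (4/3)^n ≈ 1.3334^n restarts suffice in expectation.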
20

Complexity analysis of task assignment problems and vehicle scheduling problems.

January 1994 (has links)
by Chi-lok Chan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Scheduling Problems of Chain-like Task System --- p.4
Chapter 2.1 --- Introduction --- p.4
Chapter 2.2 --- Problem Assumptions and Notations Definition --- p.7
Chapter 2.3 --- Related Works --- p.9
Chapter 2.3.1 --- Bokhari's Algorithm --- p.10
Chapter 2.3.2 --- Sheu and Chiang's Algorithm --- p.12
Chapter 2.3.3 --- Hsu's Algorithm --- p.12
Chapter 2.4 --- Decision Algorithms for Un-mergeable Task System --- p.18
Chapter 2.4.1 --- Feasible Length-K Schedule --- p.18
Chapter 2.4.2 --- Generalized Decision Test --- p.23
Chapter 2.5 --- Dominated and Non-dominated Task Systems --- p.26
Chapter 2.5.1 --- Algorithm for Dominated Task System --- p.26
Chapter 2.5.2 --- Property of Non-dominated Task System --- p.27
Chapter 2.6 --- A Searching-Based Algorithm for the Optimization Problem --- p.28
Chapter 2.6.1 --- Algorithm --- p.29
Chapter 2.6.2 --- Complexity Analysis --- p.31
Chapter 2.7 --- A Searching Algorithm Based on a Sorted Matrix --- p.33
Chapter 2.7.1 --- Sorted Matrix --- p.33
Chapter 2.7.2 --- Algorithm for the Optimization Problem --- p.35
Chapter 2.7.3 --- Complexity Analysis --- p.40
Chapter 2.8 --- A Constructive Algorithm for the Optimization Problem --- p.43
Chapter 2.9 --- A Modified Constructive Algorithm --- p.46
Chapter 2.9.1 --- Algorithm --- p.46
Chapter 2.9.2 --- Worst-Case Analysis --- p.50
Chapter 2.9.3 --- Sufficient Condition for Efficient Algorithm H --- p.58
Chapter 2.9.4 --- Average-Case Analysis --- p.62
Chapter 2.10 --- Performance Evaluation --- p.65
Chapter 2.10.1 --- Optimal Schedule --- p.65
Chapter 2.10.2 --- Space Complexity Analysis --- p.67
Chapter 2.10.3 --- Time Complexity Analysis --- p.68
Chapter 2.10.4 --- Simulation of Algorithm F and Algorithm H --- p.70
Chapter 2.11 --- Conclusion --- p.74
Chapter 3 --- Vehicle Scheduling Problems with Time Window Constraints --- p.77
Chapter 3.1 --- Introduction --- p.77
Chapter 3.2 --- Problem Formulation and Notations --- p.79
Chapter 3.3 --- NP-hardness of VSP-WINDOW-SLP --- p.83
Chapter 3.3.1 --- A Transformation from PARTITION --- p.83
Chapter 3.3.2 --- Intuitive Idea of the Reduction --- p.85
Chapter 3.3.3 --- NP-completeness Proof --- p.87
Chapter 3.4 --- Polynomial Time Algorithm for the VSP-WINDOW on a Straight Line with Common Ready Time --- p.98
Chapter 3.5 --- Strong NP-hardness of VSP-WINDOW-TREEP --- p.106
Chapter 3.5.1 --- A Transformation from 3-PARTITION --- p.107
Chapter 3.5.2 --- NP-completeness Proof --- p.107
Chapter 3.6 --- Conclusion --- p.111
Chapter 4 --- Conclusion --- p.115
Bibliography --- p.119
