About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
911

Playing and solving the game of Hex

Henderson, Philip Unknown Date
No description available.
912

Multi-tree algorithms for computational statistics and physics

March, William B. 20 September 2013 (has links)
The Fast Multipole Method of Greengard and Rokhlin does the seemingly impossible: it approximates the quadratic scaling N-body problem in linear time. The key is to avoid explicitly computing the interactions between all pairs of N points. Instead, by organizing the data in a space-partitioning tree, distant interactions are quickly and efficiently approximated. Similarly, dual-tree algorithms, which approximate or eliminate parts of a computation using distance bounds, are the fastest algorithms for several fundamental problems in statistics and machine learning -- including all nearest neighbors, kernel density estimation, and Euclidean minimum spanning tree construction. We show that this overarching principle -- that by organizing points spatially, we can solve a seemingly quadratic problem in linear time -- can be generalized to problems involving interactions between sets of three or more points and can provide orders-of-magnitude speedups and guarantee runtimes that are asymptotically better than existing algorithms. We describe a family of algorithms, multi-tree algorithms, which can be viewed as generalizations of dual-tree algorithms. We support this thesis by developing and implementing multi-tree algorithms for two fundamental scientific applications: n-point correlation function estimation and Hartree-Fock theory. First, we demonstrate multi-tree algorithms for n-point correlation function estimation. The n-point correlation functions are a family of fundamental spatial statistics and are widely used for understanding large-scale astronomical surveys, characterizing the properties of new materials at the microscopic level, and for segmenting and processing images. We present three new algorithms which will reduce the dependence of the computation on the size of the data, increase the resolution in the result without additional time, and allow probabilistic estimates independent of the problem size through sampling. We provide both empirical evidence to support our claim of massive speedups and a theoretical analysis showing linear scaling in the fundamental computational task. We demonstrate the impact of a carefully optimized base case on this computation and describe our distributed, scalable, open-source implementation of our algorithms. Second, we explore multi-tree algorithms as a framework for understanding the bottleneck computation in Hartree-Fock theory, a fundamental model in computational chemistry. We analyze existing fast algorithms for this problem, and show how they fit in our multi-tree framework. We also show new multi-tree methods, demonstrate that they are competitive with existing methods, and provide the first rigorous guarantees for the runtimes of all of these methods. Our algorithms will appear as part of the PSI4 computational chemistry library.
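To make the dual-tree pruning idea concrete, here is a minimal sketch (not from the thesis; the Node layout, leaf size, and bounding-box distance bounds are illustrative assumptions) that counts point pairs within a radius r -- essentially a two-point correlation count -- by accepting or discarding whole pairs of tree nodes at once:

```python
# Illustrative sketch of dual-tree pruning: count ordered pairs (p, q), with p
# from the first point set and q from the second, whose distance is at most r.
import numpy as np

class Node:
    def __init__(self, points):
        self.points = points                      # (n, d) array
        self.lo = points.min(axis=0)              # bounding-box corners
        self.hi = points.max(axis=0)
        self.left = self.right = None
        if len(points) > 16:                      # leaf-size threshold (assumption)
            dim = np.argmax(self.hi - self.lo)    # split the widest dimension
            order = np.argsort(points[:, dim])
            mid = len(points) // 2
            self.left = Node(points[order[:mid]])
            self.right = Node(points[order[mid:]])

def min_dist(a, b):
    # Smallest possible distance between any point in a and any point in b.
    gap = np.maximum(0.0, np.maximum(a.lo - b.hi, b.lo - a.hi))
    return np.sqrt((gap ** 2).sum())

def max_dist(a, b):
    # Largest possible distance between the two bounding boxes.
    span = np.maximum(np.abs(a.hi - b.lo), np.abs(b.hi - a.lo))
    return np.sqrt((span ** 2).sum())

def count_pairs(a, b, r):
    if min_dist(a, b) > r:                        # prune: no pair can be close
        return 0
    if max_dist(a, b) <= r:                       # prune: every pair is close
        return len(a.points) * len(b.points)
    if a.left is None and b.left is None:         # base case: brute force
        d = np.linalg.norm(a.points[:, None] - b.points[None, :], axis=-1)
        return int((d <= r).sum())
    if a.left is None:
        return count_pairs(a, b.left, r) + count_pairs(a, b.right, r)
    if b.left is None:
        return count_pairs(a.left, b, r) + count_pairs(a.right, b, r)
    return (count_pairs(a.left, b.left, r) + count_pairs(a.left, b.right, r)
            + count_pairs(a.right, b.left, r) + count_pairs(a.right, b.right, r))

rng = np.random.default_rng(0)
xs, ys = Node(rng.random((500, 2))), Node(rng.random((500, 2)))
print(count_pairs(xs, ys, 0.1))
```

Extending the same accept/prune/recurse pattern from pairs of nodes to triples or k-tuples of nodes is, roughly, what the multi-tree generalization described in the abstract refers to.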
913

Tractability and approximability for subclasses of the makespan problem on unrelated parallel machines

Page, Daniel 19 August 2014 (has links)
Let there be m parallel machines and n jobs to be scheduled non-preemptively. A job j scheduled on machine i takes p_{i,j} time units to complete, where 1 ≤ i ≤ m and 1 ≤ j ≤ n. For a given schedule, the makespan is the completion time of a machine that finishes last. The goal is to produce a schedule of all n jobs with minimum makespan. This is known as the makespan problem on unrelated parallel machines (UPMs), denoted as R||C_{max}. In this thesis, we focus on subclasses of R||C_{max}. Our research consists of two components. First, a survey of theoretic results for R||C_{max} with a focus on approximation algorithms is presented. Second, we present exact polynomial-time algorithms and approximation algorithms for some subclasses of R||C_{max}. For instance, we present k-approximation algorithms on par with or better than the best known for certain subclasses of R||C_{max}.
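A minimal sketch (illustrative only, not taken from the thesis) of the R||C_max objective, together with a simple greedy heuristic; the heuristic carries no approximation guarantee for general unrelated machines, unlike the algorithms surveyed in the thesis:

```python
# p[i][j] is the processing time of job j on machine i; a schedule assigns
# each job to exactly one machine.

def makespan(p, assignment):
    """Completion time of the machine that finishes last."""
    load = [0.0] * len(p)
    for j, i in enumerate(assignment):
        load[i] += p[i][j]
    return max(load)

def greedy_schedule(p):
    """Assign each job to the machine whose load grows the least (heuristic only)."""
    m, n = len(p), len(p[0])
    load = [0.0] * m
    assignment = []
    for j in range(n):
        i = min(range(m), key=lambda i: load[i] + p[i][j])
        load[i] += p[i][j]
        assignment.append(i)
    return assignment

p = [[3, 2, 7], [4, 1, 1]]          # 2 machines, 3 jobs (made-up instance)
a = greedy_schedule(p)
print(a, makespan(p, a))            # [0, 1, 1] 3.0
```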
914

Combinatorial synthesis of new GFP- and RFP-like chromophores and their photophysical properties

Fellows, William Brett 27 August 2014 (has links)
A new synthetic methodology for the combinatorial preparation of C-terminus-modified Green and Red Fluorescent Protein chromophores is described. This method involves the modification of the previously reported [2+3] cycloaddition reaction scheme to incorporate new R2 groups in the imidate used in the final step. This is achieved through two primary routes: (a) the imidation of nitriles using hydrochloric acid gas and (b) the O-alkylation of amides using a variant of Meerwein's Salt to provide conjugated imidates. The preparation of fluorescent microcrystals and nanofibers from Green Fluorescent Protein chromophore derivatives via the reprecipitation (RP) method is also demonstrated. The properties of these microcrystals and nanofibers, especially in relation to the powder obtained from organic solvents, are also explored. Additionally, it is demonstrated that the size and shape of the microcrystals and nanofibers can be modulated with varying experimental conditions for RP. A new class of AIE-active GFP chromophores is reported. These chromophores contain a benzoxazole group on the phenyl ring and varying lengths of alkyl chains on the imidazolidinone nitrogen. These benzoxazole-based chromophores exhibit unique properties in the solid state not previously observed for GFP chromophore derivatives, namely, a broadening of the excitation spectrum and red-shifting of the emission, likely caused by excimer formation. The crystal structure also reveals a unique "hot-dog" stacking motif. Additionally, some projects which require further work are discussed at the end of the thesis. These include a stress-responsive GFP-based polymer and DNA-binding fluorophores.
915

Asymptotic existence results on specific graph decompositions

Chan, Justin 23 July 2010 (has links)
This work examines various asymptotic edge-decomposition problems on graphs. A G-group divisible design (G-GDD) of type [g_1, ..., g_u] and index lambda is a decomposition of the edges of the complete lambda-fold multipartite graph H, with groups (maximal independent sets) G_1, ..., G_u, |G_i| = g_i, into graphs (blocks) isomorphic to G. We shall also examine special types of G-GDDs (such as G-frames) and prove that, given all parameters except u, these structures exist for all asymptotically large u satisfying the necessary conditions. Our primary technique is to invoke a useful theorem of Lamken and Wilson on edge-colored graph decompositions. The basic construction for k-RGDDs shall be outlined at the end of the thesis.
916

Covering Problems via Structural Approaches

Grant, Elyot January 2011 (has links)
The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized--logarithmic approximability has been established, and no sublogarithmic approximation exists unless P=NP. However, the gap between real-world instances and the theoretical worst case is often immense--many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible. In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide: - An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC. - A class of polynomial dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure. - A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity. - Applications of the above to various capacitated covering problems via linear programming strengthening and rounding. In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
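For reference, the logarithmic approximability mentioned above is achieved by the classic greedy algorithm (within a factor of H_n, roughly ln n, of optimal); a minimal sketch, illustrative only and not one of the algorithms developed in the thesis:

```python
# Greedy set cover: repeatedly pick the set covering the most uncovered elements.

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("infeasible instance: some element is uncoverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 8)
sets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}]
print(greedy_set_cover(universe, sets))   # [0, 2, 1]
```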
917

Intractability Results for some Computational Problems

Ponnuswami, Ashok Kumar 08 July 2008 (has links)
In this thesis, we show results for some well-studied problems from learning theory and combinatorial optimization.

Learning Parities under the Uniform Distribution: We study the learnability of parities in the agnostic learning framework of Haussler and Kearns et al. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al., this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities: learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and learning of k-juntas reduces to learning noisy parities of k variables.

Agnostic Learning of Halfspaces: We give an essentially optimal hardness result for agnostic learning of halfspaces over the rationals. We show that for any constant ε, finding a halfspace that agrees with an unknown function on a 1/2+ε fraction of examples is NP-hard even when there exists a halfspace that agrees with the unknown function on a 1-ε fraction of examples. This significantly improves on a number of previous hardness results for this problem. We extend the result to ε = 2^{-Ω(sqrt(log n))} assuming NP is not contained in DTIME(2^{(log n)^{O(1)}}).

Majorities of Halfspaces: We show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem. This also implies a hardness result for learning halfspaces with a high rate of adversarial noise, even if the learning algorithm can output any efficiently computable hypothesis.

Max-Clique, Chromatic Number and Min-3Lin-Deletion: We prove an improved hardness of approximation result for two problems: finding the size of the largest clique in a graph (the Max-Clique problem) and finding the chromatic number of a graph. We show that for any constant γ > 0, there is no polynomial-time algorithm that approximates these problems within factor n/2^{(log n)^{3/4+γ}} in an n-vertex graph, assuming NP is not contained in BPTIME(2^{(log n)^{O(1)}}). This improves the hardness factor of n/2^{(log n)^{1-γ'}} for some small (unspecified) constant γ' > 0 shown by Khot. Our main idea is to show an improved hardness result for the Min-3Lin-Deletion problem. An instance of Min-3Lin-Deletion is a system of linear equations modulo 2, where each equation is over three variables. The objective is to find the minimum number of equations that need to be deleted so that the remaining system has a satisfying assignment. We show a hardness factor of 2^{sqrt(log n)} for this problem, improving upon the hardness factor of (log n)^β shown by Hastad, for some small (unspecified) constant β > 0. The hardness results for Max-Clique and chromatic number are then obtained using the reduction from Min-3Lin-Deletion given by Khot.

Monotone Multilinear Boolean Circuits for Bipartite Perfect Matching: A monotone Boolean circuit is multilinear if, for any AND gate in the circuit, the minimal representations of the two input functions to the gate share no variable. We show that monotone multilinear Boolean circuits for computing bipartite perfect matching require exponential size. In fact, we prove a stronger result by characterizing the structure of the smallest monotone multilinear Boolean circuits for the problem.
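To make the Min-3Lin-Deletion objective concrete, here is a brute-force sketch (illustrative only, exponential in the number of variables, and unrelated to the reductions above); the equation indices and right-hand sides in the example are made up:

```python
# An instance is a list of equations x_a + x_b + x_c = rhs (mod 2); the cost of
# an assignment is the number of equations it violates, i.e. the number that
# would have to be deleted for the rest to be satisfied.
from itertools import product

def violated(equations, assignment):
    """Number of equations x_a + x_b + x_c = rhs (mod 2) that fail."""
    return sum((assignment[a] ^ assignment[b] ^ assignment[c]) != rhs
               for (a, b, c, rhs) in equations)

def min_3lin_deletion(n, equations):
    """Exact minimum over all 2^n assignments -- for tiny n only."""
    return min(violated(equations, x) for x in product((0, 1), repeat=n))

eqs = [(0, 1, 2, 1), (1, 2, 3, 0), (0, 2, 3, 1), (0, 1, 3, 0)]
print(min_3lin_deletion(4, eqs))   # 0: this toy system happens to be satisfiable
```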
918

Problems and results in partially ordered sets, graphs and geometry

Biro, Csaba 26 June 2008 (has links)
The thesis consists of three independent parts. In the first part, we investigate the height sequence of an element of a partially ordered set. Let $x$ be an element of the partially ordered set $P$. Then $h_i(x)$ is the number of linear extensions of $P$ in which $x$ is in the $i$th lowest position. The sequence $\{h_i(x)\}$ is called the height sequence of $x$ in $P$. Stanley proved in 1981 that the height sequence is log-concave, but no combinatorial proof has been found, and Stanley's proof does not reveal anything about the deeper structure of the height sequence. In this part of the thesis, we provide a combinatorial proof of a special case of Stanley's theorem. The proof of the inequality uses the Ahlswede--Daykin Four Functions Theorem. In the second part, we study two classes of segment orders introduced by Shahrokhi. Both classes are natural generalizations of interval containment orders and interval orders. We prove several properties of the classes, and inspired by the observation that the classes seem to be very similar, we attempt to find out if they actually contain the same partially ordered sets. We prove that the question is equivalent to a stretchability question involving certain sets of pseudoline arrangements. We also prove several facts about continuous universal functions that would transfer segment orders of the first kind into segment orders of the second kind. In the third part, we consider the lattice whose elements are the subsets of $\{1,2,\ldots,n\}$. Trotter and Felsner asked whether this subset lattice always contains a monotone Hamiltonian path. We make progress toward answering this question by constructing a path for all $n$ that satisfies the monotone properties and covers every set of size at most $3$. This portion of the thesis represents joint work with David M. Howard.
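A brute-force sketch (illustrative only; feasible just for tiny posets, and not the combinatorial argument of the thesis) of the height sequence defined above, computed by enumerating linear extensions directly:

```python
# h[i] = number of linear extensions of the poset placing x in the
# (i+1)-th lowest position.
from itertools import permutations

def height_sequence(elements, less_than, x):
    n = len(elements)
    h = [0] * n
    for perm in permutations(elements):
        pos = {e: k for k, e in enumerate(perm)}
        # keep only orderings that respect the partial order
        if all(pos[a] < pos[b] for (a, b) in less_than):
            h[pos[x]] += 1
    return h

# A small poset on {a, b, c, d}: a < b and a < c, with d incomparable.
elements = ['a', 'b', 'c', 'd']
relation = [('a', 'b'), ('a', 'c')]
print(height_sequence(elements, relation, 'a'))   # [6, 2, 0, 0], log-concave
```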
919

Optimization of paths and locations of water quality monitoring systems in surface water environments

Nam, Kijin 08 July 2008 (has links)
Even though the need for water quality monitoring systems is increasing, and mobile water quality monitoring systems that combine automatic measuring devices with autonomous vehicles are becoming available, the effective deployment of such systems has not been studied well. The locations or paths along which measurements are taken are among the most important design factors for the performance of a water quality monitoring system, and they need to be optimized to maximize the monitoring performance. To solve these optimization problems, multi-objective genetic algorithms were proposed and developed. The proposed optimization procedures were applied to hypothetical circular lakes and to Lake Pontchartrain in order to obtain optimal monitoring locations, straight monitoring paths, and higher-order monitoring paths under various conditions. The effects of various parameters, such as the speed of the monitoring vessel and the weights of possible scenarios, were also investigated. The optimization models found optimal solutions efficiently while reflecting the various effects of complex physical settings. The results show that the distribution of possible source locations is an important factor that greatly affects the optimal solutions. In a closed water body, wind is the major forcing that determines hydrodynamics and contaminant transport, and it affects the optimal solutions as well. Straight monitoring lines do not perform very well because they cannot cover the irregular boundaries of water bodies. Higher-order optimal monitoring paths overcome this difficulty and perform comparably to a few stationary monitoring locations, even under realistic and transient conditions.
920

Combinatorial optimization and application to DNA sequence analysis

Gupta, Kapil 25 August 2008 (has links)
With recent and continuing advances in bioinformatics, the volume of sequence data has increased tremendously. Along with this increase, there is a growing need to develop efficient algorithms to process such data in order to make useful and important discoveries. Careful analysis of genomic data will benefit science and society in numerous ways, including the understanding of protein sequence functions, early detection of diseases, and finding evolutionary relationships that exist among various organisms. Most sequence analysis problems arising from computational genomics and evolutionary biology fall into the class of NP-complete problems. Advances in exact and approximate algorithms to address these problems are critical. In this thesis, we investigate a novel graph theoretical model that deals with fundamental evolutionary problems. The model allows incorporation of the evolutionary operations "insertion", "deletion", and "substitution", and various parameters such as relative distances and weights. By varying appropriate parameters and weights within the model, several important combinatorial problems can be represented, including the weighted supersequence, weighted superstring, and weighted longest common subsequence problems. Consequently, our model provides a general computational framework for solving a wide variety of important and difficult biological sequencing problems, including the multiple sequence alignment problem and the problem of finding an evolutionary ancestor of multiple sequences. In this thesis, we develop large-scale combinatorial optimization techniques to solve our graph theoretical model. In particular, we formulate the problem as two distinct but related models: a constrained network flow problem and a weighted node packing problem. The integer programming models are solved in a branch-and-bound setting using simultaneous column and row generation. The methodology developed will also be useful for solving large-scale integer programming problems arising in other areas such as transportation and logistics.
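As a point of reference for one special case the model is said to capture, here is the textbook two-sequence longest common subsequence recurrence (unweighted and illustrative only; the thesis treats weighted, multi-sequence generalizations via integer programming, which this sketch does not attempt):

```python
# Classic O(mn) dynamic program for the length of a longest common subsequence.

def lcs_length(s, t):
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ACCGGTA", "ACTGTA"))   # 5 ("ACGTA")
```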
