1

Error Detection in Number-Theoretic and Algebraic Algorithms

Vasiga, Troy Michael John January 2008
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (e.g., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and algorithm A has range O', where O ⊆ O'. That is, the algorithm may produce results that are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories: 1. Correct and feasible -- the algorithm computes the correct result; 2. Incorrect and feasible -- the algorithm computes an incorrect result, and this output is in O; 3. Incorrect and infeasible -- the algorithm computes an incorrect result, and the output is in O'\O. Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (namely, the Lucas-Lehmer and Pépin tests), group order and quadratic residues. Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
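As a concrete illustration of the fault model (not code from the thesis), the following Python sketch injects random bit flips into the Lucas-Lehmer test and estimates how often the answer survives. The per-iteration fault model and the function names are assumptions of this sketch; since the output here is a single bit, only the error-threshold behavior is visible, not the feasible/infeasible distinction.

```python
import random

def lucas_lehmer(p, bit_flip_prob=0.0):
    """Lucas-Lehmer test for M_p = 2**p - 1 (p an odd prime), with
    optional fault injection: at each iteration, with probability
    bit_flip_prob, one random bit of the intermediate value is flipped."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
        if random.random() < bit_flip_prob:
            s ^= 1 << random.randrange(p)  # transient "CPU" error
    return s == 0  # True => M_p reported prime

def fraction_correct(p, expected, bit_flip_prob, trials=1000):
    """Estimate how often the faulty algorithm still answers correctly."""
    hits = sum(lucas_lehmer(p, bit_flip_prob) == expected
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # M_13 = 8191 is prime: correctness degrades as the fault rate grows,
    # illustrating the "error threshold" phenomenon.
    for prob in (0.0, 0.001, 0.01, 0.1):
        print(prob, fraction_correct(13, True, prob))
```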
2

Algorithms, measures and upper bounds for satisfiability and related problems

Wahlström, Magnus January 2007
The topic of exact, exponential-time algorithms for NP-hard problems has received a lot of attention, particularly with a focus on producing algorithms with stronger theoretical guarantees, e.g., upper bounds on the running time of the form O(c^n) for some c. Better methods of analysis may have an impact not only on these bounds, but on the nature of the algorithms as well. The most classic method of analysis of the running time of DPLL-style ("branching" or "backtracking") recursive algorithms consists of counting the number of variables that the algorithm removes at every step. Notable improvements include Kullmann's work on complexity measures and Eppstein's work on solving multivariate recurrences through quasiconvex analysis. Still, one limitation that remains in Eppstein's framework is that it is difficult to introduce (non-trivial) restrictions on the applicability of a possible recursion. We introduce two new kinds of complexity measures, representing two ways to add such restrictions on applicability to the analysis. In the first measure, the execution of the algorithm is viewed as moving between a finite set of states (such as the presence or absence of certain structures or properties), where the current state decides which branchings are applicable, and each branch of a branching contains information about the resultant state. In the second measure, it is instead the relative sizes of the modelled attributes (such as the average degree or other concepts of density) that control the applicability of branchings. We adapt both measures to Eppstein's framework, and use these tools to provide algorithms with stronger bounds for a number of problems. The problems we treat are satisfiability for sparse formulae, exact 3-satisfiability, 3-hitting set, and counting models for 2- and 3-satisfiability formulae, and in every case the bound we prove is stronger than previously known bounds.
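To make the classical analysis concrete: a branching rule is summarized by how many variables each branch removes, and the resulting O(c^n) bound comes from the largest root of the associated characteristic equation. The sketch below computes that root numerically; it is a standard textbook calculation, not code from the thesis.

```python
def branching_number(vector, tol=1e-9):
    """Root c > 1 of sum(c**(-a) for a in vector) == 1, by bisection.
    For a branching rule whose i-th branch removes a_i variables, the
    recurrence T(n) <= sum_i T(n - a_i) solves to O(c**n)."""
    lo, hi = 1.0, float(len(vector)) + 1.0  # the root lies in (1, k] for k branches
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** (-a) for a in vector) > 1:
            lo = mid   # sum still above 1: the root is larger
        else:
            hi = mid
    return hi

# Removing 1 variable in one branch and 2 in the other gives
# T(n) <= T(n-1) + T(n-2), i.e. c = (1 + sqrt(5))/2 ~ 1.6180.
print(branching_number([1, 2]))
```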
3

Algorithm-Based Meta-Analysis Reveals the Mechanistic Interaction of the Tumor Suppressor LIMD1 With Non-Small-Cell Lung Carcinoma

Wang, Ling, Sparks-Wallace, Ayrianna, Casteel, Jared L., Howell, Mary E.A., Ning, Shunbin 31 March 2021
Non-small-cell lung carcinoma (NSCLC) is the major type of lung cancer, which is among the leading causes of cancer-related deaths worldwide. LIMD1 was previously identified as a tumor suppressor in lung cancer, but its detailed interaction with NSCLC remains unclear. In this study, we have carried out multiple genome-wide bioinformatic analyses for a comprehensive understanding of LIMD1 in NSCLC, using various online algorithm platforms built for mega-databases derived from both clinical and cell-line samples. Our results indicate that LIMD1 expression is significantly downregulated at both the mRNA and protein levels in both lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), with a considerable contribution from its promoter methylation rather than from gene mutations; the LIMD1 gene is mutated only at a low rate in NSCLC (0.712%). We have further identified LIMD1-associated molecular signatures in NSCLC, including its natural antisense long non-coding RNA LIMD1-AS1 and a pool of membrane trafficking regulators. We have also identified a subgroup of tumor-infiltrating lymphocytes, especially neutrophils, whose tumor infiltration levels significantly correlate with the LIMD1 level in both LUAD and LUSC. However, a significant correlation of LIMD1 with a subset of immune regulatory molecules, such as IL6R and TAP1, was found only in LUAD. Regarding clinical outcomes, LIMD1 expression level significantly correlates with survival only in LUAD (p0.1) patients. These findings indicate that LIMD1 plays a survival role in LUAD patients, at least in part by acting as an immune regulatory protein. To further understand the mechanisms underlying the tumor-suppressing function of LIMD1 in NSCLC, we show that LIMD1 downregulation correlates strikingly with the deregulation of multiple pathways that play decisive roles in the oncogenesis of NSCLC, especially those mediated by EGFR, KRAS, PIK3CA, Keap1, and p63 in both LUAD and LUSC, and those mediated by p53 and CDKN2A only in LUAD. This study shows that LIMD1 can serve as a survival prognostic marker for LUAD patients and provides mechanistic insights into the interaction of LIMD1 with NSCLC, offering valuable information for clinical applications.
4

An Approximation Algorithm for Character Compatibility and Fast Quartet-based Phylogenetic Tree Comparison

Tsang, John January 2000
Phylogenetic analysis, or the inference of evolutionary history, is done routinely by biologists and is one of the most important problems in systematic biology. In this thesis, we study two computational problems in the area. First, we study the evolutionary tree reconstruction problem under the character compatibility (CC) paradigm and give a polynomial-time approximation scheme (PTAS) for a variation of the formulation called fractional character compatibility (FCC), which has been proven to be NP-hard. We also present a very simple algorithm called the Ordinal Split Method (OSM) to generate bipartitions given sequence data, which can serve as a front-end to the PTAS. The performance of the OSM and the validity of the FCC formulation are studied through simulation experiments. The second part of this thesis presents an efficient algorithm to compare evolutionary trees using the quartet metric. Different evolutionary hypotheses arise when different data sets are used or when different tree inference methods are applied to the same data set. Tree comparisons are routinely done by biologists to evaluate the quality of their tree inference experiments. The quartet metric has many desirable properties, but its use has been hindered by its relatively heavy computational requirements. We address this problem by giving the first O(n^2) time algorithm to compute the quartet distance between two evolutionary trees.
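For contrast with the O(n^2) algorithm, here is a brute-force sketch (an illustration, not the thesis's method) that computes the quartet distance directly from its definition, enumerating all O(n^4) quartets and using the four-point condition to read off the topology a tree induces on four leaves. It assumes both trees are fully resolved (binary), so no quartet is a star.

```python
from collections import deque
from itertools import combinations

def tree_distances(adj):
    """All-pairs path lengths in an unrooted tree given as an adjacency
    dict {node: [neighbors]}: one BFS per node, O(n^2) total."""
    dist = {}
    for src in adj:
        d = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[src] = d
    return dist

def quartet_topology(d, a, b, c, e):
    """Induced topology on {a, b, c, e}: the pairing whose within-pair
    path lengths sum smallest (the four-point condition for trees)."""
    pairings = [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]
    return min(pairings,
               key=lambda p: d[p[0][0]][p[0][1]] + d[p[1][0]][p[1][1]])

def quartet_distance(adj1, adj2, leaves):
    """Number of 4-leaf subsets on which the two trees disagree."""
    d1, d2 = tree_distances(adj1), tree_distances(adj2)
    return sum(quartet_topology(d1, *q) != quartet_topology(d2, *q)
               for q in combinations(leaves, 4))
```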
5

An Experimental Study of Distance Sensitivity Oracles

Williams, Vincent Troy 26 October 2010
The paper "A Nearly Optimal Oracle for Avoiding Failed Vertices and Edges" by Aaron Bernstein and David Karger lays out a nearly optimal algorithm for finding the shortest distances and paths between vertices under any given single failure, in constant query time and without reconstructing the oracle. Using their paper as a guideline, we have implemented their algorithm in C++ and recorded each step in this thesis. Each step has its own pseudo-code and its own analysis, proving that the entire oracle construction stays within the running time and total space bounds stated by the authors. The efficiency of the algorithm is compared against the brute-force method's total running time and total space. Using multiple test cases with an increasing number of vertices and edges, we have experimentally validated that their algorithm holds true to its stated space, running time, and query time.
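For reference, the brute-force baseline mentioned above can be sketched as follows (an illustrative reconstruction, not code from the thesis): one Dijkstra run per (source, failed-vertex) pair, whose time and space costs are exactly what the oracle is designed to avoid.

```python
import heapq

def dijkstra(adj, src, banned=None):
    """Shortest-path distances from src, skipping the vertex `banned`.
    adj maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            if v == banned:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def brute_force_tables(adj):
    """Distances avoiding each possible single failed vertex: O(V^2)
    Dijkstra runs and up to O(V^3) stored distances."""
    return {(s, f): dijkstra(adj, s, banned=f)
            for s in adj for f in adj if f != s}
```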
6

Optimal Alignment of Multiple Sequence Alignments

Starrett, Dean January 2008
An essential tool in biology is the alignment of multiple sequences. Biologists use multiple sequence alignments for tasks such as predicting protein structure and function, reconstructing phylogenetic trees, and finding motifs. Constructing high-quality multiple alignments is computationally hard, both in theory and in practice, and is typically done using heuristic methods. The majority of state-of-the-art multiple alignment programs employ a "form-and-polish" strategy: in the construction phase, an initial multiple alignment is formed by progressively merging smaller alignments, starting with single sequences; then, in a local-search phase, the resulting alignment is polished by repeatedly splitting it into smaller alignments and re-merging. This merging of alignments, the basic computational problem in the construction and local-search phases of the best multiple alignment heuristics, is called the Aligning Alignments problem. Under the sum-of-pairs objective for scoring multiple alignments, this problem may seem to be a simple extension of two-sequence alignment. It is proven here, however, that with affine gap costs (which are recognized as necessary to obtain biologically informative alignments) the problem is NP-complete when gaps are counted exactly. Interestingly, this form of multiple alignment is polynomial-time solvable when the exact count is relaxed, showing that exact gap counts are themselves inherently hard in multiple sequence alignment. Unlike general multiple alignment, however, we show that Aligning Alignments with affine gap costs and exact counts is tractable in practice, by demonstrating an effective algorithm and a fast implementation. Our software AlignAlign is both time- and space-efficient on biological data. Computational experiments show that instances derived from standard benchmark suites can be aligned optimally with surprising efficiency, and experiments on simulated data show that both time and space scale well.
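For readers unfamiliar with the objective, the following sketch scores a multiple alignment under sum-of-pairs with simple linear (per-character) gap costs. It is illustrative only: the affine, exactly-counted gap costs treated in the thesis, which cause the NP-completeness, require a more careful accounting of gap runs.

```python
from itertools import combinations

def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """Sum-of-pairs score of a multiple alignment given as a list of
    equal-length rows with '-' for gaps. Each column contributes the
    sum over all unordered pairs of rows; gap-gap pairs score zero."""
    score = 0
    for column in zip(*alignment):
        for a, b in combinations(column, 2):
            if a == '-' and b == '-':
                continue            # gap paired with gap: no charge
            elif a == '-' or b == '-':
                score += gap
            elif a == b:
                score += match
            else:
                score += mismatch
    return score

print(sum_of_pairs(["AC-GT", "A-CGT", "ACCGT"]))  # prints 3
```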
7

Complexities of Order-Related Formal Language Extensions / Komplexiteter hos ordnings-relaterade utökningar av formella språk

Berglund, Martin January 2014
The work presented in this thesis discusses various formal language formalisms that extend classical formalisms like regular expressions and context-free grammars with additional abilities, most relating to order. This is done while focusing on the impact these extensions have on the efficiency of parsing the languages generated. That is, rather than taking a step up on the Chomsky hierarchy to the context-sensitive languages, which makes parsing very difficult, a smaller step is taken, adding some mechanisms which permit interesting spatial (in)dependencies to be modeled. The most immediate example is shuffle formalisms, where existing language formalisms are extended by introducing operators which generate arbitrary interleavings of argument languages. For example, introducing a shuffle operator to the regular expressions does not make it possible to recognize context-free languages like a^n b^n, but it does capture some non-context-free languages, like the language of all strings containing the same number of a's, b's and c's. The impact these additions have on parsing has many facets. Besides shuffle operators, we also consider formalisms enforcing repeated substrings, formalisms moving substrings around, and formalisms that restrict which substrings may be concatenated. The formalisms studied here all have a number of properties in common. They are closely related to existing regular and context-free formalisms. They operate in a step-wise fashion, deriving strings by sequences of rule applications of individually limited power. Each step generates a constant number of symbols and does not modify parts that have already been generated. That is, strings are built in an additive fashion that does not explode in size (in contrast to, e.g., Lindenmayer systems), and all languages here have a semi-linear Parikh image. Each formalism features some interesting characteristic involving order or other spatial constraints; in the shuffle example, multiple derivations are interleaved in a way that each is, in a sense, unaware of. All of the formalisms are intended to be limited enough to make an efficient parsing algorithm, at least for some cases, a reasonable goal. This thesis will give intuitive explanations of a number of formalisms fulfilling these requirements, and will sketch some results relating to the parsing problem for them. This should all be viewed as preparation for the more complete results and explanations featured in the papers given in the appendices.
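A minimal illustration of the shuffle operator (not from the thesis): deciding whether a string is an interleaving of two given strings is a classic quadratic dynamic program over prefix pairs, sketched below.

```python
from functools import lru_cache

def is_interleaving(w, u, v):
    """Is w an arbitrary interleaving (shuffle) of u and v?
    O(|u|*|v|) via memoization over prefix positions (i, j)."""
    if len(w) != len(u) + len(v):
        return False

    @lru_cache(maxsize=None)
    def rec(i, j):
        # rec(i, j): can w[i+j:] be formed by interleaving u[i:] and v[j:]?
        if i == len(u) and j == len(v):
            return True
        if i < len(u) and w[i + j] == u[i] and rec(i + 1, j):
            return True
        return j < len(v) and w[i + j] == v[j] and rec(i, j + 1)

    return rec(0, 0)

print(is_interleaving("abacbc", "abc", "abc"))  # True
print(is_interleaving("ccaabb", "abc", "abc"))  # False
```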
8

Escolha otimizada de parâmetros em métodos de pontos interiores para programação linear / Optimized choice of parameters in interior point methods for linear programming

Santos, Luiz Rafael dos, 1981- 25 August 2018
Advisors: Aurelio Ribeiro Leite de Oliveira, Fernando da Rocha Villas-Bôas, Clóvis Perin Filho / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica / Abstract: In this work we propose a predictor-corrector interior point method for linear programming in a primal-dual context, where the next iterate is chosen by minimizing a polynomial merit function of three variables: the first is the step length, the second defines the central path, and the last models the weight that a corrector direction must have. The merit function is minimized subject to constraints defined by a neighborhood of the central path that allows wide steps. In this framework, we combine different directions, such as the predictor, the corrector and the centering directions, with the aim of producing a better direction. The proposed method generalizes most predictor-corrector interior point methods, depending on the choice of the variables described above. A convergence analysis of the method is carried out, considering an initial point that has good practical performance, and results in Q-linear convergence of the iterates with polynomial complexity. Numerical experiments using the Netlib test set show that this approach is competitive when compared to well-established interior point solvers such as PCx. / Doctorate in Applied Mathematics
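As a rough illustration of the ingredients named in the abstract, here is a minimal dense NumPy sketch of one Mehrotra-style predictor-corrector step. This is not the authors' method (which minimizes a three-variable merit function over a wide neighborhood); the `weight` parameter here is a hypothetical stand-in for the thesis's corrector-weight variable, and the step lengths and centering parameter are chosen by standard heuristics.

```python
import numpy as np

def max_step(v, dv, eta=0.9995):
    """Largest alpha <= eta keeping v + alpha*dv strictly positive."""
    neg = dv < 0
    if not neg.any():
        return eta
    return min(eta, eta * float(np.min(-v[neg] / dv[neg])))

def newton_direction(A, x, s, rb, rc, r3):
    """Solve the primal-dual Newton (KKT) system via dense normal
    equations: (A D A^T) dy = -rb + A((r3 - x*rc)/s), D = diag(x/s)."""
    d = x / s
    dy = np.linalg.solve((A * d) @ A.T, -rb + A @ ((r3 - x * rc) / s))
    ds = -rc - A.T @ dy
    dx = (-r3 - x * ds) / s
    return dx, dy, ds

def predictor_corrector_step(A, b, c, x, y, s, weight=1.0):
    """One step for min c'x s.t. Ax = b, x >= 0, combining a predictor
    and a weighted corrector direction."""
    n = x.size
    rb, rc = A @ x - b, A.T @ y + s - c
    mu = x @ s / n
    # Predictor (affine-scaling) direction: target complementarity zero.
    dxa, dya, dsa = newton_direction(A, x, s, rb, rc, x * s)
    ap, ad = max_step(x, dxa), max_step(s, dsa)
    mu_aff = (x + ap * dxa) @ (s + ad * dsa) / n
    sigma = (mu_aff / mu) ** 3          # adaptive centering parameter
    # Corrector: second-order term dxa*dsa scaled by the (assumed) weight.
    r3 = x * s + weight * dxa * dsa - sigma * mu
    dx, dy, ds = newton_direction(A, x, s, rb, rc, r3)
    ap, ad = max_step(x, dx), max_step(s, ds)
    return x + ap * dx, y + ad * dy, s + ad * ds
```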
