31 |
Quality of service routing using decentralized learning
Heidari, Fariba. January 2009
No description available.
|
32 |
Online Covering: Efficient and Learning-Augmented Algorithms
Young-san Lin (12868319), 14 June 2022
We start by slightly modifying the generic framework for solving online covering and packing linear programs (LP) proposed in the seminal work of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) to obtain efficient implementations in settings in which one has access to a separation oracle.

We then apply the generic framework to several online network connectivity problems with LP formulations, namely pairwise spanners and directed Steiner forests. Our results are comparable to the previous state-of-the-art results for these problems in the offline setting.

Further, we extend the generic framework to online optimization problems enhanced with machine-learning predictions. In particular, we present learning-augmented algorithms for online covering LPs and semidefinite programs (SDP), which outperform any optimal online algorithm when the prediction is accurate while maintaining reasonable guarantees when the prediction is misleading. Specifically, we obtain general online learning-augmented algorithms for covering LPs with fractional advice and general constraints, and initiate the study of learning-augmented algorithms for covering SDPs.
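As a rough illustration of the flavour of fractional online covering updates behind the Buchbinder-Naor framework mentioned above, here is a minimal Python sketch. It is not the thesis's implementation: it ignores the separation-oracle machinery, the packing dual, and the competitive-ratio constants, assumes strictly positive costs, and all names and parameters are illustrative.

def online_covering_lp(rows, c, eps=0.01):
    # Fractional online covering: minimize c.x subject to a.x >= 1 for each
    # arriving row a, with x >= 0. Variables only ever increase, as required
    # of an online covering algorithm. Simplified multiplicative-update
    # sketch; eps trades running time against how tightly constraints are met.
    n = len(c)
    x = [0.0] * n
    for a in rows:                               # constraints arrive one by one
        support = [j for j in range(n) if a[j] > 0]
        if not support:
            continue                             # constraint cannot be covered
        d = len(support)
        while sum(a[j] * x[j] for j in range(n)) < 1.0:
            for j in support:                    # multiplicative growth plus a small additive seed
                x[j] = x[j] * (1 + eps * a[j] / c[j]) + eps * a[j] / (c[j] * d)
    return x

# toy run: two covering constraints over three variables
costs = [1.0, 2.0, 1.0]
x = online_covering_lp(rows=[[1, 1, 0], [0, 1, 2]], c=costs)
print(x, "objective:", sum(ci * xi for ci, xi in zip(costs, x)))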
|
33 |
Física estatística de compressed sensing online / Statistical Physics of Online Compressed Sensing
Rossi, Paulo Victor Camargo. 02 March 2018
In this work, Compressed Sensing is introduced from a Statistical Physics point of view. Following a succinct introduction where the basic concepts of the framework are presented, including necessary measurement conditions and basic signal reconstruction methods, the typical performance of the Bayesian reconstruction scheme is analyzed through a replica calculation shown in pedagogical detail. Thereafter, the main original contribution of this work is introduced: the Bayesian Online Compressed Sensing algorithm makes use of a mean-field approximation to simplify calculations and reduce memory and computation requirements, while maintaining the asymptotic reconstruction accuracy of the offline scheme in the presence of additive noise. The last part of this work presents two extensions of the online algorithm that allow for optimized signal reconstruction in the more realistic scenario where perfect knowledge of the generating distribution is unavailable.
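For intuition about the online setting, the sketch below processes one linear measurement at a time with a Kaczmarz-style correction followed by soft thresholding. This is a generic online sparse-recovery heuristic written only for illustration: it is not the Bayesian mean-field algorithm developed in the thesis, and the toy revisits the measurement stream several times, whereas the thesis's algorithm is a single-pass scheme. All names and parameters are assumptions.

import numpy as np

def soft_threshold(v, t):
    # shrink every coordinate toward zero by t (promotes sparsity)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_sparse_recovery(stream, n, lam=0.05, sweeps=100):
    # One Kaczmarz-style projection per measurement (a, y), followed by a
    # soft-thresholding step. The stream is revisited `sweeps` times so the
    # toy converges; a strictly online scheme would use a single pass.
    x = np.zeros(n)
    for _ in range(sweeps):
        for a, y in stream:
            x = x + a * (y - a @ x) / (a @ a)   # fit the current measurement
            x = soft_threshold(x, lam)          # re-impose sparsity
    return x

# toy run: 2-sparse signal observed through 15 noisy linear measurements
rng = np.random.default_rng(0)
n, m = 20, 15
x0 = np.zeros(n)
x0[[4, 11]] = [1.5, -1.0]
A = rng.standard_normal((m, n)) / np.sqrt(n)
stream = [(A[i], A[i] @ x0 + 0.01 * rng.standard_normal()) for i in range(m)]
x_hat = online_sparse_recovery(stream, n)
print("two largest coordinates:", np.argsort(-np.abs(x_hat))[:2], "(true support: 4, 11)")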
|
34 |
Models for Parallel Computation in Multi-Core, Heterogeneous, and Ultra Wide-Word Architectures
Salinger, Alejandro. January 2013
Multi-core processors have become the dominant processor architecture with 2, 4, and 8 cores on a chip being widely available and an increasing number of cores predicted for the future. In addition, the decreasing costs and increasing programmability of Graphics Processing Units (GPUs) have made these an accessible source of parallel processing power in general purpose computing. Among the many research challenges that this scenario has raised are the fundamental problems related to theoretical modeling of computation in these architectures. In this thesis we study several aspects of computation in modern parallel architectures, from modeling of computation in multi-cores and heterogeneous platforms, to multi-core cache management strategies, through the proposal of an architecture that exploits bit-parallelism on thousands of bits.
Observing that in practice multi-cores have a small number of cores, we propose a model for low-degree parallelism for these architectures. We argue that assuming a small number of processors (logarithmic in a problem's input size) simplifies the design of parallel algorithms. We show that in this model a large class of divide-and-conquer and dynamic programming algorithms can be parallelized with simple modifications to sequential programs, while achieving optimal parallel speedups. We further explore low-degree parallelism in computation, providing evidence of fundamental differences in practice and theory between systems with a sublinear and linear number of processors, and suggesting a sharp theoretical gap between the classes of problems that are efficiently parallelizable in each case.
Efficient strategies to manage shared caches play a crucial role in multi-core performance. We propose a model for paging in multi-core shared caches, which extends classical paging to a setting in which several threads share the cache. We show that in this setting traditional cache management policies perform poorly, and that any effective strategy must partition the cache among threads, with a partition that adapts dynamically to the demands of each thread. Inspired by the shared cache setting,
we introduce the minimum cache usage problem, an extension to classical sequential paging in which algorithms must account for the amount of cache they use.
This cache-aware model seeks algorithms with good performance in terms of faults and the amount of cache used, and has applications in energy efficient caching and in shared cache scenarios.
The wide availability of GPUs has added to the parallel power of multi-cores; however, most applications underutilize the available resources. We propose a model for hybrid computation in heterogeneous systems with multi-cores and GPUs, and describe strategies for generic parallelization and efficient scheduling of a large class of divide-and-conquer algorithms.
Lastly, we introduce the Ultra-Wide Word architecture and model, an extension of the word-RAM model, that allows for constant time operations on thousands of bits in parallel. We show that a large class of existing algorithms can be
implemented in the Ultra-Wide Word model, achieving speedups comparable to those of multi-threaded computations, while avoiding the more difficult aspects of parallel programming.
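To give a concrete feel for the shared-cache paging questions raised above, the following toy counts LRU faults for two synthetic threads, once with a single shared cache and once with a simple demand-aware static partition. This is only an illustration under assumptions not made in the thesis (a fixed round-robin interleaving and hand-picked working sets); the thesis's model and its dynamic-partition results are not reproduced here.

from collections import OrderedDict

def lru_faults(requests, capacity):
    # Count page faults of LRU on a request sequence; pages are hashable ids.
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)              # refresh recency
        else:
            faults += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)        # evict the least recently used page
            cache[page] = True
    return faults

thread_a = [("a", i % 7) for i in range(60)]     # tight loop over 7 pages
thread_b = [("b", i) for i in range(60)]         # streaming scan, no reuse
interleaved = [r for pair in zip(thread_a, thread_b) for r in pair]

shared = lru_faults(interleaved, capacity=8)                     # one shared LRU
partitioned = lru_faults(thread_a, 7) + lru_faults(thread_b, 1)  # demand-aware split
print("shared LRU faults:", shared, "| demand-aware partition:", partitioned)

With these working sets the scanning thread pollutes the shared cache and every one of the 120 requests faults, while giving the looping thread its full working set and the scan a single page reduces the count to 67.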
|
35 |
Some results on linear discrepancy for partially ordered sets
Keller, Mitchel Todd. 24 November 2009
Tanenbaum, Trenk, and Fishburn introduced the concept of linear discrepancy in 2001, proposing it as a way to measure a partially ordered set's distance from being a linear order. In addition to proving a number of results about linear discrepancy, they posed eight challenges and questions for future work. This dissertation completely resolves one of those challenges and makes contributions on two others. This dissertation has three principal components: 3-discrepancy irreducible posets of width 3, degree bounds, and online algorithms for linear discrepancy. The first principal component of this dissertation provides a forbidden subposet characterization of the posets with linear discrepancy equal to 2 by completing the determination of the posets that are 3-irreducible with respect to linear discrepancy. The second principal component concerns degree bounds for linear discrepancy and weak discrepancy, a parameter similar to linear discrepancy. Specifically, if every point of a poset is incomparable to at most D other points of the poset, we prove three bounds: the linear discrepancy of an interval order is at most D, with equality if and only if it contains an antichain of size D; the linear discrepancy of a disconnected poset is at most the greatest integer less than or equal to (3D-1)/2; and the weak discrepancy of a poset is at most D. The third principal component of this dissertation incorporates another large area of research, that of online algorithms. We show that no online algorithm for linear discrepancy can be better than 3-competitive, even for the class of interval orders. We also give a 2-competitive online algorithm for linear discrepancy on semiorders and show that this algorithm is optimal.
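For readers new to the parameter, linear discrepancy can be checked directly from its definition on very small posets: minimize, over all linear extensions, the largest difference in position between two incomparable elements. The brute-force sketch below is purely illustrative (the less_than predicate and the tiny antichain example are assumptions, and the search is exponential in the number of elements); it is unrelated to the algorithms studied in the dissertation.

from itertools import permutations

def linear_discrepancy(elements, less_than):
    # Minimum over all linear extensions of the maximum position gap between
    # incomparable elements; less_than(x, y) is the strict order relation.
    elements = list(elements)
    incomparable = [(x, y) for x in elements for y in elements
                    if x != y and not less_than(x, y) and not less_than(y, x)]
    best = None
    for perm in permutations(elements):
        pos = {e: i for i, e in enumerate(perm)}
        # keep only linear extensions, i.e. total orders extending the poset
        if any(less_than(x, y) and pos[x] > pos[y]
               for x in elements for y in elements if x != y):
            continue
        worst = max((abs(pos[x] - pos[y]) for x, y in incomparable), default=0)
        best = worst if best is None else min(best, worst)
    return best

# a 4-element antichain: every pair is incomparable, so some pair must end up
# at distance 3 in any ordering, and the linear discrepancy is 3
print(linear_discrepancy(range(4), lambda x, y: False))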
|
38 |
Análise de estabilidade de Lyapunov de algoritmos adaptativos com contribuições ao estudo do critério de módulo constante / Lyapunov stability analysis of adaptive algorithms with contributions to the study of the constant modulus criterion
Sousa Júnior, Celso de. 08 December 2011
Advisors: João Marcos Travassos Romano, Romis Ribeiro de Faissol Attux / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The problem of adaptive equalization is related to the search for iterative solutions that allow the reduction or the elimination of the noxious effects of a communication channel on a transmitted signal of interest. Since adaptive systems are based on algorithms capable of adjusting the parameters of a filter, the combination of equalizer and learning algorithm can be considered a dynamical system, which ties the possibility of obtaining a satisfactory solution to the convergence issue. The analysis of the convergence of adaptive equalization algorithms has typically been developed under certain simplifying hypotheses that, however, are not always strictly valid; a classical example is the use of the so-called independence theory. In this work, we investigate a distinct approach to the study of the stability conditions of classical methods, based on Lyapunov theory. This theory is generally employed in the study of nonlinear systems and has a solid record of results in the field of adaptive control, which motivates its use in the context of signal processing. The first contribution of this work consists of determining, by means of Lyapunov theory, the range of step-size values that ensure stability of the equalization system for algorithms based on the Wiener criterion and for the constant modulus algorithm. Using the obtained stability results, the convergence region for the parameters estimated via LMS is also investigated, which establishes an interesting connection with the notion of misadjustment. In a second line of study, we present an analysis of a lower bound for the attainable constant modulus (CM) cost and an initialization heuristic capable of increasing the probability of convergence to the best optimum engendered by the constant modulus criterion. This strategy is based on a formulation of the unsupervised linear filtering criterion in terms of the application of the Wiener criterion to a polynomial structure. The obtained results reveal that the proposal leads to better performance than the classical center spike method and a random initialization strategy. / Doctorate / Telecommunications and Telematics / Doctor of Electrical Engineering
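To fix ideas, the sketch below shows the standard supervised LMS update that the Wiener-criterion analysis refers to, together with a commonly quoted rule-of-thumb step size based on the input power. It is illustrative only: the step-size stability ranges derived in the thesis via Lyapunov theory, the constant modulus analysis, and the proposed initialization are not reproduced here, and all names and parameter choices in the toy run are assumptions.

import numpy as np

def lms_equalizer(x, d, taps=8, mu=None):
    # Standard LMS: w <- w + mu * e * u, with u the most recent input window.
    # If mu is not given, a common heuristic 0 < mu < 2 / (taps * Px) is used
    # (backed off by a factor of 10); this rule of thumb is not the
    # Lyapunov-derived stability range from the thesis.
    Px = np.mean(x ** 2)                         # input power estimate
    if mu is None:
        mu = 0.1 * 2.0 / (taps * Px)
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for k in range(taps, len(x)):
        u = x[k - taps + 1:k + 1][::-1]          # most recent sample first
        y[k] = w @ u
        e = d[k] - y[k]                          # error against the training signal
        w = w + mu * e * u                       # stochastic-gradient update
    return w, y

# toy run: supervised equalization of a short FIR channel
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=5000)           # BPSK-like training symbols
x = np.convolve(s, [1.0, 0.4, 0.2])[:len(s)] + 0.01 * rng.standard_normal(len(s))
d = np.roll(s, 2)                                # desired signal: symbols delayed by 2
w, y = lms_equalizer(x, d)
print("symbol error rate after adaptation:", np.mean(np.sign(y[500:]) != d[500:]))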
|
39 |
Online algoritmy pro varianty bin packingu / Online algorithms for variants of bin packing
Veselý, Pavel. January 2014
An online algorithm must make decisions immediately and irrevocably based only on a part of the input without any knowledge of the future part of the input. We introduce the competitive analysis of online algorithms, a standard worst-case analysis, and present the main results of this analysis on the problem of online Bin Packing and on some of its variants. In Bin Packing, a sequence of items of size up to 1 arrives to be packed into the minimal number of unit capacity bins. Mainly, we focus on Colored Bin Packing in which items have also a color and we cannot pack two items of the same color adjacently in a bin. For Colored Bin Packing, we improve some previous results on the problem with two colors and present the first results for arbitrarily many colors. Most notably, in the important case when all items have size zero, we give an optimal 1.5-competitive algorithm. For items of arbitrary size we present a lower bound of 2.5 and a 3.5-competitive algorithm.
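To make the colored setting concrete, here is a naive First-Fit-style rule that also respects the color constraint (never place an item directly on top of an item of the same color). It only illustrates the problem statement; it is not the 3.5-competitive algorithm, the optimal 1.5-competitive zero-size algorithm, or any other algorithm analyzed in the thesis, and the item stream is an invented example.

def colored_first_fit(items, capacity=1.0):
    # Online Colored Bin Packing, naive rule: put each (size, color) item into
    # the first open bin that has room and whose top item has a different
    # color; otherwise open a new bin.
    bins = []                                    # each bin: [used, top_color, contents]
    for size, color in items:
        for b in bins:
            if b[0] + size <= capacity and b[1] != color:
                b[0] += size
                b[1] = color
                b[2].append((size, color))
                break
        else:                                    # no suitable bin: open a new one
            bins.append([size, color, [(size, color)]])
    return bins

stream = [(0.4, "red"), (0.4, "red"), (0.3, "blue"), (0.5, "blue"), (0.2, "red")]
for i, b in enumerate(colored_first_fit(stream)):
    print("bin", i, b[2])

On this stream the second red item opens a new bin even though the first bin has room, because two items of the same color may not be packed adjacently.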
|
40 |
Algorithms for large graphs
Das Sarma, Atish. 01 July 2010
No description available.
|