1 |
New incremental methods for bilevel nondifferentiable convex optimization with applications to image reconstruction in emission tomography
Simões, Lucas Eduardo Azevedo, 28 March 2013
We present two new incremental methods for solving bilevel convex optimization problems in which the functions are not necessarily differentiable; that is, we show that the sequences generated by both methods converge to the optimal set of a nonsmooth function subject to a constraint set that itself involves the minimization of a nondifferentiable function. Neither algorithm requires solving subproblems or performing line searches during its iterations. Finally, to demonstrate that the methods are viable, we solve a tomographic image reconstruction problem.
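As a rough illustration of the flavor of such incremental schemes (and not the thesis's actual update rules, which the abstract does not spell out), the sketch below interleaves incremental subgradient steps on an inner sum of nonsmooth components with a vanishing pull toward the outer objective; the step-size schedule and all names are illustrative assumptions.

```python
import numpy as np

def bilevel_incremental_subgradient(x0, inner_subgrads, outer_subgrad,
                                    n_passes=500):
    """Minimal sketch (illustrative assumptions throughout): minimize an
    outer nonsmooth function over the minimizers of an inner sum
    f(x) = sum_i f_i(x) by interleaving incremental subgradient steps on
    the f_i with a vanishing pull toward the outer objective. No
    subproblems are solved and no line search is performed."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_passes + 1):
        alpha = 1.0 / k          # diminishing step size (assumed schedule)
        eta = 1.0 / np.sqrt(k)   # vanishing weight on the outer objective
        for g_i in inner_subgrads:   # one incremental pass over components
            x = x - alpha * g_i(x)
        x = x - alpha * eta * outer_subgrad(x)
    return x

# Example: inner f(x) = sum_i |a_i.x - b_i| (nonsmooth), outer g(x) = ||x||_1.
A, b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]), np.array([1.0, 2.0, 3.0])
inner = [lambda x, a=a, bi=bi: a * np.sign(a @ x - bi) for a, bi in zip(A, b)]
x_star = bilevel_incremental_subgradient(np.zeros(2), inner, np.sign)
```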
|
2 |
The Incremental Constraint of k-Server
McAulay, Caelyn Burnham, January 2012
Online algorithms are characterized by operating on an input sequence revealed over time versus a single static input. Instead of generating a single solution, they produce a sequence of incremental solutions corresponding to the input seen so far. An online algorithm's ignorance of future inputs limits its ability to produce optimal solutions. The incremental nature of its solutions is also an obstacle. The two factors can be differentiated by examining the corresponding incremental algorithm, which has knowledge of future inputs, but must still provide a competitive solution at each step.
In this thesis, the lower bound of the incremental constraint of k-server is shown to be 2.
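To make the notion of a "sequence of incremental solutions" concrete, here is a minimal sketch of the online k-server setting on a line metric using the naive greedy rule; it is an illustration of the problem, not an algorithm from the thesis, and greedy is known to be non-competitive in general.

```python
def greedy_k_server(servers, requests):
    """Illustration of the online k-server setting: requests on a line
    metric arrive one at a time, and the greedy rule moves the nearest
    server. After each request it yields the incremental solution so far."""
    servers = list(servers)
    cost = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r               # incremental solution after this request
        yield list(servers), cost

# Two servers at 0 and 10; alternating nearby requests make greedy thrash.
for state, cost in greedy_k_server([0.0, 10.0], [4.0, 6.0, 4.0, 6.0]):
    print(state, cost)
```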
|
3 |
Dictionary learning methods for single-channel source separation
Lefèvre, Augustin, 03 October 2012
In this thesis we provide three main contributions to blind source separation methods based on NMF. Our first contribution is a group-sparsity-inducing penalty specifically tailored for Itakura-Saito NMF. In many music tracks there are whole intervals where only one source is active. The group-sparsity penalty we propose allows us to blindly identify these intervals and learn source-specific dictionaries. As a consequence, the learned dictionaries can be used to perform source separation in other parts of the track where several sources are active. These two tasks of identification and separation are performed simultaneously in one run of group-sparsity Itakura-Saito NMF. Our second contribution is an online algorithm for Itakura-Saito NMF that allows us to learn dictionaries on very large audio tracks. Indeed, the memory complexity of a batch implementation of NMF grows linearly with the length of the recordings and becomes prohibitive for signals longer than an hour. In contrast, our online algorithm is able to learn NMF on arbitrarily long signals with limited memory usage. Our third contribution deals with user-informed NMF. In short mixed signals, blind learning becomes very hard and sparsity does not retrieve interpretable dictionaries. Our contribution is very similar in spirit to inpainting. It relies on the empirical fact that, when observing the spectrogram of a mixture signal, an overwhelming proportion of it consists of regions where only one source is active. We describe an extension of NMF that takes into account time-frequency-localized information on the absence or presence of each source. We also investigate inferring this information with tools from machine learning.
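For context, here is a minimal sketch of plain Itakura-Saito NMF with the standard multiplicative updates, the building block the thesis extends; the group-sparsity penalty and the online variant described above add terms and bookkeeping not shown here.

```python
import numpy as np

def is_nmf(V, rank, n_iter=200, eps=1e-12, rng=None):
    """Sketch of plain Itakura-Saito NMF, V ~ W @ H, with standard
    multiplicative updates for the IS divergence. V is a nonnegative
    power spectrogram of shape (n_freqs, n_frames)."""
    rng = np.random.default_rng(rng)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH))   # update activations
        WH = W @ H + eps
        W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T)   # update dictionary
    return W, H
```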
|
4 |
Algorithms for adaptively restrained molecular dynamics
Singh, Krishna Kant, 08 November 2017
Molecular Dynamics (MD) is often used to simulate large and complex systems, but simulating such systems over experimentally relevant time scales remains computationally challenging. The most computationally expensive step in MD is the computation of forces between particles. Adaptively Restrained Molecular Dynamics (ARMD) is a recently introduced particle simulation method that switches positional degrees of freedom on and off during the simulation. Since force computations depend mainly on inter-atomic distances, the force computation between two restrained particles (particles whose positional degrees of freedom are off) can be avoided; only forces involving active particles (particles with positional degrees of freedom on) are computed. To take advantage of the adaptivity of ARMD, we designed novel algorithms to compute and update forces efficiently. We developed algorithms that not only construct neighbor lists but also update them incrementally. In particular, we designed a single-pass incremental force-update algorithm that is almost two times faster than the previously designed two-pass incremental algorithm.
The proposed algorithms are implemented and validated in the LAMMPS MD simulator; however, they can be applied to other MD simulators. We assessed our algorithms on diverse benchmarks in both the microcanonical (NVE) and canonical (NVT) ensembles. In the NVE ensemble, ARMD allows users to trade precision for speed, while in the NVT ensemble it makes it possible to compute statistical averages faster. Finally, we introduce parallel algorithms for single-pass incremental force computation that take advantage of adaptive restraints using the Message Passing Interface (MPI) standard.
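A minimal sketch of the caching idea behind such incremental force updates follows: forces are recomputed only for pairs involving at least one active particle, while pairs of restrained particles reuse their cached value. This is an illustration under stated assumptions, not the thesis's single-pass algorithm, whose neighbor-list maintenance is more involved.

```python
import numpy as np

def incremental_pair_forces(pos, active, pairs, pair_force, cache):
    """Sketch: recompute a pair's force only if at least one particle is
    active; pairs of restrained particles reuse the cached force.
    pos: (n, 3) positions; active: boolean array; pairs: neighbor pairs;
    cache: dict carrying forces over from the previous step."""
    forces = np.zeros_like(pos)
    for (i, j) in pairs:
        if active[i] or active[j] or (i, j) not in cache:
            cache[(i, j)] = pair_force(pos[i], pos[j])  # recompute
        f = cache[(i, j)]                               # else reuse
        forces[i] += f          # force on i due to j
        forces[j] -= f          # Newton's third law
    return forces
```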
|
5 |
Efficient Temporal Reasoning with Uncertainty
Nilsson, Mikael, January 2015
Automated Planning is an active area within Artificial Intelligence. With the help of computers we can quickly find good plans in complicated problem domains, such as planning for search and rescue after a natural disaster. When planning in realistic domains the exact duration of an action generally cannot be predicted in advance. Temporal planning therefore tends to use upper bounds on durations, with the explicit or implicit assumption that if an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false. If we finish cooking too early, the dinner will be cold before everyone is at home and can eat. Simple Temporal Networks with Uncertainty (STNUs) allow us to model such situations. An STNU-based planner must verify that the temporal problems it generates are executable, which is captured by the property of dynamic controllability (DC). If a plan is not dynamically controllable, adding actions cannot restore controllability. Therefore a planner should verify after each action addition whether the plan remains DC, and if not, backtrack. Verifying dynamic controllability of a full STNU is computationally intensive. Therefore, incremental DC verification algorithms are needed. We start by discussing two existing algorithms relevant to the thesis. These are the very first DC verification algorithm, called MMV (by Morris, Muscettola and Vidal), and the incremental DC verification algorithm called FastIDC, which is based on MMV. We then show that FastIDC is not sound, sometimes labeling networks as dynamically controllable when they are not. We analyze the algorithm to pinpoint the cause and show how the algorithm can be modified to correctly and efficiently detect uncontrollable networks. In the next part we use insights from this work to re-analyze the MMV algorithm. This algorithm is pseudo-polynomial and was later subsumed first by an O(n⁵) algorithm and then by an O(n⁴) algorithm. We show that the basic techniques used by MMV can in fact be used to create an O(n⁴) algorithm for verifying dynamic controllability, with a new termination criterion based on a deeper analysis of MMV. This means that there is now a comparatively easy way of implementing a highly efficient dynamic controllability verification algorithm. From a theoretical viewpoint, understanding MMV is important since it acts as a building block for all subsequent algorithms that verify dynamic controllability. In our analysis we also discuss a change in MMV which reduces the amount of regression needed in the network substantially. In the final part of the thesis we show that the FastIDC method can result in traversing part of a temporal network multiple times, with constraints slowly tightening towards their final values. As a result of our analysis we then present a new algorithm with an improved traversal strategy that avoids this behavior. The new algorithm, EfficientIDC, has a time complexity which is lower than that of FastIDC. We prove that it is sound and complete.
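As background, the sketch below checks consistency of a plain STN (no uncertainty) by negative-cycle detection on its distance graph; this is the classical starting point that DC-verification algorithms such as MMV and FastIDC extend with labeled edges and additional derivation rules, not itself a DC checker.

```python
def stn_consistent(n, edges):
    """Consistency of a Simple Temporal Network via Bellman-Ford
    negative-cycle detection. edges: list of (u, v, w) encoding the
    constraint t_v - t_u <= w over events 0..n-1."""
    dist = [0.0] * n             # all-zero init = implicit virtual source
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:          # fixed point reached: no negative cycle
            return True
    # Consistent iff no edge can still be tightened (no negative cycle).
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# Two events: B happens 5 to 10 time units after A -> consistent.
print(stn_consistent(2, [(0, 1, 10), (1, 0, -5)]))   # True
```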
|
6 |
Implementing a Real-time Avionic Application on a Many-core Processor
Lo, Moustapha, 22 February 2019
Traditional single-core processors are no longer sufficient to meet the growing performance needs of avionic functions. Multi-core and many-core processors have emerged in recent years, making it possible to integrate several functions and benefit from resource sharing. However, not all multi-core and many-core processors satisfy avionic constraints: we prefer determinism over raw computing power, because certifying such processors requires mastering determinism. The aim of this thesis is to evaluate Kalray's many-core processor (MPPA-256) in an aeronautic industrial context. We chose the maintenance function HMS (Health Monitoring System), which requires high bandwidth and a bounded response time. This function also exhibits parallelism: it processes vibration data from sensors that are functionally independent, so their processing can be parallelized across several cores.
This study focuses on deploying the existing sequential HMS on a many-core processor, from data acquisition to the computation of the health indicators, with a strong emphasis on the input data flow. Our research led to five main contributions:
• Transformation of the existing global algorithms into incremental ones that can process data as soon as they arrive from the sensors.
• Management of the input flow of vibration samples, from the sensors to the computation of the health indicators: the availability of raw vibration data in the internal cluster, the moment at which they are consumed, and the estimation of the workload.
• Lightweight timing measurements taken directly on the MPPA-256 by adding timestamps to the data flow.
• A software architecture, based on a three-stage pipeline, that respects real-time constraints even in the worst case (a sketch follows this list).
• Illustration of the limits of the existing function: our experiments showed that contextual parameters of the helicopter, such as rotor speed, must be correlated with the health indicators to reduce false alarms.
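Here is a minimal sketch of a three-stage pipeline of the kind described above, with bounded queues decoupling acquisition, processing, and indicator computation. Stage contents, queue sizes, and the use of Python threads are illustrative assumptions; the thesis targets the MPPA-256's clusters, not threads.

```python
import queue
import threading

def three_stage_pipeline(samples, transform, indicator):
    """Sketch: acquisition -> processing -> health indicators, with
    bounded queues so each stage runs at its own pace."""
    q1, q2, out = queue.Queue(maxsize=8), queue.Queue(maxsize=8), []

    def acquire():                       # stage 1: ingest vibration samples
        for s in samples:
            q1.put(s)
        q1.put(None)                     # end-of-stream sentinel

    def process():                       # stage 2: incremental processing
        while (s := q1.get()) is not None:
            q2.put(transform(s))
        q2.put(None)

    def report():                        # stage 3: health indicator output
        while (x := q2.get()) is not None:
            out.append(indicator(x))

    stages = [threading.Thread(target=f) for f in (acquire, process, report)]
    for t in stages: t.start()
    for t in stages: t.join()
    return out

# Example: squared samples, thresholded into a 0/1 health flag.
print(three_stage_pipeline(range(5), lambda s: s * s, lambda x: int(x > 4)))
```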
|
7 |
Efficient Graph Summarization of Large Networks
Hajiabadi, Mahdi, 24 June 2022
In this thesis, we study the notion of graph summarization,
which is a fundamental task of finding a compact representation of the original graph called the summary.
Graph summarization can be used for reducing the footprint of the input graph, better visualization, anonymizing the identity of users, and query answering.
There are two different frameworks of graph summarization we consider in this thesis, the utility-based framework and the correction set-based framework.
In the utility-based framework, the input graph is summarized as long as a utility threshold is not violated.
In the correction set-based framework, a set of correction edges is produced along with the summary graph.
In this thesis we propose two algorithms for the utility-based framework and one for the correction set-based framework. All three algorithms are for static graphs (i.e., graphs that do not change over time).
We then propose two more utility-based algorithms for fully dynamic graphs (i.e., graphs with edge insertions and deletions).
Algorithms for graph summarization can be lossless (summarizing the input graph without losing any information) or lossy (losing some information about the input graph in order to compress it further).
Some of our algorithms are lossless and some are lossy, but with controlled utility loss.
Our first utility-driven graph summarization algorithm, G-SCIS, is based on a clique and independent set decomposition that produces optimal compression with zero loss of utility. The compression provided is significantly better than the state of the art in lossless graph summarization, while the runtime is two orders of magnitude lower.
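A minimal sketch of the decomposition idea that lossless summaries of this kind rely on: nodes sharing an identical open neighborhood can merge into an independent-set supernode, and nodes sharing an identical closed neighborhood into a clique supernode, with no correction edges needed. G-SCIS's actual grouping and optimality argument are more refined than this.

```python
from collections import defaultdict

def lossless_supernodes(adj):
    """Group nodes that can be merged with zero information loss.
    adj: dict mapping node -> set of neighbors."""
    open_groups = defaultdict(list)      # same neighbors, excluding self
    closed_groups = defaultdict(list)    # same neighbors, including self
    for v, nbrs in adj.items():
        open_groups[frozenset(nbrs)].append(v)
        closed_groups[frozenset(nbrs | {v})].append(v)
    independent_sets = [g for g in open_groups.values() if len(g) > 1]
    cliques = [g for g in closed_groups.values() if len(g) > 1]
    return cliques, independent_sets

# A triangle 0-1-2: all three nodes share the closed neighborhood {0,1,2}.
print(lossless_supernodes({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))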
Our second algorithm is T-BUDS, a highly scalable, utility-driven algorithm for fully controlled lossy summarization. It achieves high scalability by combining memory reduction using a Maximum Spanning Tree with a novel binary search procedure. T-BUDS drastically outperforms the state of the art in terms of quality of summarization and is about two orders of magnitude faster. In contrast to the competition, we are able to handle web-scale graphs on a single machine without performance impediment as the utility threshold (and the size of the summary) decreases. We also show that our graph summaries can be used as-is to answer several important classes of queries, such as triangle enumeration, PageRank, and shortest paths.
We then propose LDME, a correction set-based graph summarization algorithm that produces compact output representations in a fast and scalable manner. To achieve this, we introduce (1) weighted locality-sensitive hashing to drastically reduce the number of comparisons required to find good node merges, (2) an efficient way to compute the best-quality merges, producing more compact outputs, and (3) a new sort-based encoding algorithm that is faster and more robust. More interestingly, our algorithm provides performance-tuning settings that allow trading compression for running time. On high-compression settings, LDME achieves compression equal to or better than the state of the art with up to 53x speedup in running time. On high-speed settings, LDME achieves up to two orders of magnitude speedup with only slightly lower compression.
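The following sketch shows the role locality-sensitive hashing plays in this kind of pipeline: nodes are bucketed by minhash-style signatures of their neighbor sets, so only nodes with similar neighborhoods are compared as merge candidates. The signature scheme here is an illustrative assumption; LDME's weighted hashing and merge scoring are not reproduced.

```python
import random
from collections import defaultdict

def lsh_merge_candidates(adj, n_hashes=4, seed=0):
    """Bucket nodes by minhash signatures of their neighbor sets.
    Identical neighbor sets get identical signatures; similar sets
    collide with high probability. adj: dict node -> set of neighbors."""
    rng = random.Random(seed)
    salts = [rng.random() for _ in range(n_hashes)]
    buckets = defaultdict(list)
    for v, nbrs in adj.items():
        if not nbrs:
            continue
        sig = tuple(min(hash((s, u)) for u in nbrs) for s in salts)
        buckets[sig].append(v)
    # Only pairs within a bucket need to be scored as merge candidates.
    return [group for group in buckets.values() if len(group) > 1]
```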
We also present two lossless summarization algorithms, Optimal and Scalable, for summarizing fully dynamic graphs.
More concretely, we follow the framework of G-SCIS, which produces summaries that can be used as-is in several graph analytics tasks. Different from G-SCIS, which is a batch algorithm, Optimal and Scalable are fully dynamic and can respond rapidly to each change in the graph.
Not only are Optimal and Scalable able to outperform G-SCIS and other batch algorithms by several orders of magnitude, but they also significantly outperform MoSSo, the state of the art in lossless dynamic graph summarization.
While Optimal always produces the optimal summary, Scalable is able to trade the amount of node reduction for extra scalability.
For reasonable values of the parameter K, Scalable outperforms Optimal by an order of magnitude in speed, while keeping its rate of node reduction close to that of Optimal.
An interesting fact that we observed experimentally is that even running a batch algorithm such as G-SCIS once for every large batch of changes would still be much slower than Scalable. For instance, if 1 million changes occur in a graph, Scalable is two orders of magnitude faster than running G-SCIS just once at the end of the 1-million-edge sequence.
|