11 |
Characterizing problems for realizing policies in self-adaptive and self-managing systems. Balasubramanian, Sowmya. 15 March 2013.
Self-adaptive and self-managing systems optimize their own behaviour according to high-level objectives and constraints. One way for human administrators to effectively specify goals for such optimization problems is using policies. Over the past decade, researchers have produced various approaches, models and techniques for policy specification in areas including distributed systems, communication networks, web services, autonomic computing, and cloud computing. Research challenges range from characterizing policies for ease of specification in particular application domains to categorizing policies for achieving good solution qualities with particular algorithmic techniques.
The contributions of this thesis are threefold. Firstly, we give a mathematical formulation for each of the three policy types introduced in the policy framework of Kephart and Walsh: action, goal and utility function policies. In particular, we introduce the first precise characterization of goal policies for optimization problems. Secondly, this thesis introduces a mathematical framework that adds structure to the underlying optimization problem for the different types of policies. Structure is added either to the objective function or to the constraints of the optimization problem. These mathematical structures, imposed on the underlying problem, progressively increase the quality of the solutions obtained with the greedy optimization technique. Thirdly, we show the applicability of our framework through case studies, analyzing several optimization problems encountered in self-adaptive and self-managing systems, such as resource allocation, quality of service management, and Service Level Agreement (SLA) profit optimization, and providing quality guarantees for their solutions.
Our approach combines the algorithmic results by Edmonds, Fisher et al., and Mestre, and the policy framework of Kephart and Walsh. Our characterization and approach will help designers of self-adaptive and self-managing systems formulate optimization problems, decide on algorithmic strategies based on policy requirements, and reason about solution qualities.
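The greedy technique behind these quality results is the textbook one. Below is a minimal sketch, assuming a linear objective and an abstract feasibility test; the `independent` predicate, the task/server example and all names are hypothetical illustrations, not the thesis's formulation.

```python
def greedy(elements, weight, independent):
    """Sort elements by weight and keep each one whose addition stays feasible.

    Solution quality tracks the structure of the feasible sets: greedy is
    optimal over matroids (Edmonds), a 1/k-approximation over k-extendible
    systems (Mestre), with analogous bounds for submodular objectives
    (Fisher et al.).
    """
    solution = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(solution + [e]):
            solution.append(e)
    return solution

# Hypothetical example: resource allocation as a partition matroid (at most
# one task per server), where greedy is exact for a linear objective of the
# kind a utility-function policy induces.
tasks = [("t1", "server1", 5.0), ("t2", "server1", 3.0), ("t3", "server2", 4.0)]
chosen = greedy(tasks, weight=lambda t: t[2],
                independent=lambda S: len({t[1] for t in S}) == len(S))
# chosen == [("t1", "server1", 5.0), ("t3", "server2", 4.0)]
```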
|
12 |
Effective Resource Allocation for Non-cooperative Spectrum Sharing. Jacob-David, Dany D. 13 October 2011.
Spectrum access protocols have been proposed recently to provide flexible and efficient use of the available bandwidth. Game theory has been applied to the analysis of the problem to determine the most effective allocation of the users' power over the bandwidth. However, prior analysis has focussed on Shannon capacity as the utility function, even though it is known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated when using non-Gaussian signalling, first in a frequency non-selective environment, to focus on the resource allocation problem and its outcomes, and then in a frequency selective environment, to examine the proposed algorithm in a more realistic wireless setting. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes into account the actual signal characteristics. Further, it is demonstrated that higher rates can be achieved with lower transmitted power, resulting in a smaller spectral footprint, which allows more efficient use of the spectrum overall. Finally, future spectrum management is discussed, where waveform adaptation is examined as an additional option to the well-known spectrum agility, rate and transmit power adaptation when performing spectrum sharing.
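For reference, the water-filling solution that this thesis evaluates as a baseline is the classical one. Here is a minimal sketch under standard assumptions (each entry of `noise` stands for a channel's noise-to-gain ratio); the function name and interface are illustrative, not the thesis's code.

```python
import numpy as np

def water_filling(noise, total_power):
    """Classical water-filling: maximize sum_i log(1 + p_i / noise_i)
    subject to sum_i p_i = total_power and p_i >= 0."""
    noise = np.asarray(noise, dtype=float)
    sorted_noise = np.sort(noise)
    n = len(noise)
    # Find the water level mu such that the k channels "under water" satisfy
    # sum_{i <= k} (mu - noise_i) = total_power.
    for k in range(n, 0, -1):
        mu = (total_power + sorted_noise[:k].sum()) / k
        if mu > sorted_noise[k - 1]:   # all k channels get positive power
            break
    return np.maximum(mu - noise, 0.0)

# Example: three sub-channels, unit total power -> allocation [0.7, 0.3, 0.0].
print(water_filling([0.1, 0.5, 1.0], 1.0))
```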
|
14 |
Theoretical and numerical studies on the graph partitioning problem. Althoby, Haeder Younis Ghawi. 06 November 2017.
Given G = (V, E) a connected undirected graph and a positive integer β(n), where n is the number of vertices, the vertex separator problem (VSP) is to find a partition of V into three classes A, B and C such that there is no edge between A and B, max{|A|, |B|} is less than or equal to β(n), and |C| is minimum. In this thesis, we consider an integer programming formulation for this problem. We describe some valid inequalities and use these results to develop algorithms based on a neighborhood scheme.
We also study the st-connected vertex separator problem. Let s and t be two non-adjacent vertices of V. An st-connected separator in the graph G is a subset S of V \ {s, t} such that the induced subgraph G[S] is connected and there are no more paths between s and t in G[V \ S]. The st-connected vertex separator problem consists in finding such a subset with minimum cardinality. We propose three formulations for this problem and give some valid inequalities for the polyhedron associated with it. We also develop an efficient heuristic to solve this problem.
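As an illustration only, here is one straightforward 0-1 programming model of the VSP, sketched with the PuLP library and its bundled CBC solver. It is a minimal formulation in the spirit of the abstract, not the thesis's model, and it omits the valid inequalities the thesis studies.

```python
import pulp

def vertex_separator(V, E, beta):
    """Partition V into A, B, C with no A-B edge, |A|, |B| <= beta and |C|
    minimum, by maximizing |A| + |B| (C is whatever remains)."""
    prob = pulp.LpProblem("VSP", pulp.LpMaximize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in V}  # v in A
    y = {v: pulp.LpVariable(f"y_{v}", cat="Binary") for v in V}  # v in B
    prob += pulp.lpSum(x[v] + y[v] for v in V)   # max |A|+|B| <=> min |C|
    for v in V:
        prob += x[v] + y[v] <= 1                 # A and B are disjoint
    for u, v in E:
        prob += x[u] + y[v] <= 1                 # no edge joins A and B
        prob += x[v] + y[u] <= 1
    for side in (x, y):                          # size bounds, nonemptiness
        prob += pulp.lpSum(side.values()) <= beta
        prob += pulp.lpSum(side.values()) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    A = {v for v in V if x[v].value() == 1}
    B = {v for v in V if y[v].value() == 1}
    return A, B, set(V) - A - B
```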
|
15 |
Sparse and hierarchical representations for archival and compression of audio scenes. Moussallam, Manuel. 18 December 2012.
The main goal of this work is the automated processing of large volumes of audio data. More specifically, we are interested in archiving, a process that encompasses at least two distinct problems: data compression and data indexing. Jointly addressing these problems is a difficult task since many of their objectives may be concurrent. Building a consistent framework for audio archival is therefore the matter of this thesis. Sparse representations of signals in redundant dictionaries have recently been found of interest for many sub-problems of the archival task, since sparsity is a desirable property both for compression and for indexing. Methods and algorithms to build such representations are the first topic of this thesis. Given the dimensionality of the considered data, greedy algorithms are particularly studied.
A first contribution of this thesis is a variant of the famous Matching Pursuit algorithm that exploits randomness and sub-sampling of very large time-frequency dictionaries. We show that audio compression (especially at low bit-rates) can be improved using this method. This new algorithm comes with an original modeling of asymptotic pursuit behaviors, using order statistics and tools from extreme value theory. Other contributions deal with the second member of the archival problem: indexing. The same framework is applied to different layers of signal structure. First, redundancy and musical repetition detection is addressed. At larger scale, we investigate audio fingerprinting schemes and apply them to on-line segmentation of radio broadcasts; performance was evaluated during an international campaign within the QUAERO project. Finally, the same framework is used to perform source separation informed by redundancy. All these elements validate the proposed framework for the audio archiving task: the layered structures of audio data are accessed hierarchically by greedy decomposition algorithms, allowing the different objectives of archival to be addressed at different steps within the same framework.
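To make the idea concrete, here is a minimal sketch of Matching Pursuit with random dictionary sub-sampling, assuming a dense NumPy dictionary with unit-norm atoms as columns. Parameter names and the fixed-iteration stopping rule are illustrative; the structured time-frequency dictionaries and the convergence analysis of the thesis are not reproduced.

```python
import numpy as np

def subsampled_matching_pursuit(x, D, n_iter=100, frac=0.1, seed=0):
    """Greedy sparse decomposition of x over a dictionary D (unit-norm atoms
    as columns), drawing a fresh random subset of atoms at each iteration."""
    rng = np.random.default_rng(seed)
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    n_sub = max(1, int(frac * D.shape[1]))
    for _ in range(n_iter):
        subset = rng.choice(D.shape[1], size=n_sub, replace=False)
        corr = D[:, subset].T @ residual          # correlations on the subset
        best = subset[np.argmax(np.abs(corr))]    # locally best atom
        c = D[:, best] @ residual
        coeffs[best] += c                         # standard MP update
        residual -= c * D[:, best]
    return coeffs, residual
```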
|
16 |
Algorithms for the selection of optimal spaced seed sets for transposable element identification. Li, Hui. 30 August 2010.
No description available.
|
17 |
Robotic Search Planning In Large Environments with Limited Computational Resources and Unreliable Communications. Biggs, Benjamin Adams. 24 February 2023.
This work is inspired by robotic search applications where a robot or team of robots is equipped with sensors and tasked to autonomously acquire as much information as possible from a region of interest. To accomplish this task, robots must plan paths through the region of interest that maximize the effectiveness of the sensors they carry. Receding horizon path planning is a popular approach to the computationally expensive task of planning long paths because it allows robotic agents with limited computational resources to iteratively construct a long path by solving for an optimal short path, traversing a portion of the short path, and repeating the process until a receding horizon path of the desired length has been constructed. However, receding horizon paths do not retain the optimality properties of the short paths from which they are constructed and may perform quite poorly in the context of achieving the robotic search objective. The primary contributions of this work address the worst-case performance of receding horizon paths by developing methods of using terminal rewards in the construction of such paths. We prove that the proposed methods of constructing receding horizon paths provide theoretical worst-case performance guarantees. Our result can be interpreted as ensuring that the receding horizon path performs no worse in expectation than a given sub-optimal search path. This result is especially practical for subsea applications where, due to the use of side-scan sonar, search paths typically consist of parallel straight lines. Thus, for subsea search applications, our approach ensures that expected performance is no worse than the usual subsea search path, and it might be much better.
The methods proposed in this work provide desirable lower-bound guarantees for a single robot as well as teams of robots. Significantly, we demonstrate that existing planning algorithms may be easily adapted to use our proposed methods. We present our theoretical guarantees in the context of subsea search applications and demonstrate the utility of our proposed methods through simulation experiments and field trials using real autonomous underwater vehicles (AUVs). We show that our worst-case guarantees may be achieved despite non-idealities such as sub-optimal short paths used to construct the longer receding horizon path and unreliable communication in multi-agent planning. In addition to theoretical guarantees, an important contribution of this work is to describe specific implementation solutions needed to integrate and implement these ideas for real-time operation on AUVs.
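The overall loop can be pictured as follows. This is a schematic sketch, not the dissertation's algorithm: `short_paths_from`, `reward` and `terminal_reward` are hypothetical placeholders. Choosing `terminal_reward(s)` as the expected reward of a baseline search path (e.g., a lawnmower pattern) started at s is the kind of terminal reward the abstract describes, and is what ties the long path's performance to that baseline.

```python
def receding_horizon_path(start, n_steps, short_paths_from, reward,
                          terminal_reward, commit=1):
    """Iteratively build a long search path from short-horizon solutions.

    Each candidate short path is scored by its accumulated reward PLUS a
    terminal reward at its endpoint; only the first `commit` steps of the
    winner are executed before replanning.
    """
    path = [start]
    while len(path) - 1 < n_steps:
        # Candidate short paths from the current endpoint (each a list of
        # successor states); assumed non-empty.
        candidates = short_paths_from(path[-1])
        best = max(candidates,
                   key=lambda p: sum(reward(s) for s in p)
                                 + terminal_reward(p[-1]))
        path.extend(best[:commit])   # commit a prefix, then replan
    return path[:n_steps + 1]
```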
|
18 |
Greedy Inference Algorithms for Structured and Neural Models. Sun, Qing. 18 January 2018.
A number of problems in Computer Vision, Natural Language Processing, and Machine Learning produce structured outputs in high-dimensional spaces, which makes searching for the globally optimal solution extremely expensive. Thus, greedy algorithms, which trade precision for efficiency, are widely used. Unfortunately, they generally lack theoretical guarantees.
In this thesis, we prove that greedy algorithms are effective and efficient for searching for multiple top-scoring hypotheses from structured (neural) models: 1) Entropy estimation. We aim to find deterministic samples that are representative of a Gibbs distribution via a greedy strategy. 2) Searching for a set of diverse and high-quality bounding boxes. We formulate this problem as the constrained maximization of a monotone submodular function, for which a greedy algorithm has a near-optimal guarantee. 3) Fill-in-the-blank. The goal is to generate missing words conditioned on the surrounding context given an image. We extend Beam Search, a greedy algorithm applicable to unidirectional expansion, to bidirectional neural models where both past and future information have to be considered.
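The near-optimal guarantee mentioned in point 2 is the classic one for the greedy rule of Nemhauser, Wolsey and Fisher, sketched below for a cardinality constraint; the objective `f` and the candidate set are placeholders (e.g., scored bounding boxes under a coverage-style diversity objective), not the thesis's implementation.

```python
def greedy_max_submodular(candidates, f, k):
    """Greedy maximization of f(S) subject to |S| <= k.

    For a non-negative monotone submodular f, the greedy solution is within
    a factor (1 - 1/e) of the optimum (Nemhauser, Wolsey and Fisher, 1978).
    """
    S = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in S]
        if not remaining:
            break
        best = max(remaining, key=lambda c: f(S + [c]) - f(S))
        if f(S + [best]) - f(S) <= 0:   # no positive marginal gain left
            break
        S.append(best)
    return S
```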
We test our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks and show that they are effective and efficient.
Rapid progress has been made in Computer Vision (e.g., detecting what and where objects are shown in an image), Natural Language Processing (e.g., translating a sentence in English to Chinese), and Machine Learning (e.g., inference over graphical models). However, a number of problems produce structured outputs in high-dimensional spaces; semantic segmentation, for example, requires predicting a label (e.g., dog, cat, or person) for every super-pixel, so the search space has size L^n, where L is the number of object labels and n is the number of super-pixels. Searching for the globally optimal solution is therefore often intractable. Instead, we aim to prove that greedy algorithms producing reasonable, e.g., near-optimal, solutions are effective and efficient. Three tasks are studied in the thesis: 1) Entropy estimation. We search for a finite number of semantic segmentations that are representative and diverse, so that applying the existing model to the image lets us approximate the entropy of the distribution over the output space. 2) Searching for a set of diverse bounding boxes that are most likely to contain an object. We formulate this as an optimization problem for which a greedy algorithm has a theoretical guarantee. 3) Fill-in-the-blank. We attempt to generate missing words in blanks around which context is available. We tested our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks (e.g., MS COCO, PASCAL VOC) and show that they are indeed effective and efficient.
|
19 |
Automatic Classification of Fish in Underwater Video; Pattern Matching - Affine Invariance and Beyond. Gundam, Madhuri. 15 May 2015.
Underwater video is used by marine biologists to observe, identify, and quantify living marine resources. Video sequences are typically analyzed manually, which is a time consuming and laborious process; automating it will significantly save time and cost. This work proposes a technique for automatic fish classification in underwater video. The steps involved are background subtraction, fish region tracking, and classification using shape features. Background processing is used to separate moving objects from their surrounding environment. Tracking associates multiple views of the same fish in consecutive frames; this step is especially important since recognizing and classifying one or a few of the views as a species of interest may allow labeling the sequence as that particular species. Shape features are extracted from each object using Fourier descriptors and presented to a nearest neighbor classifier. Finally, the nearest neighbor classifier results are combined using a probabilistic-like framework to classify an entire sequence.
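A minimal sketch of the Fourier-descriptor-plus-nearest-neighbor stage is shown below, assuming NumPy and a closed boundary contour; the normalization choices are one common convention, not necessarily the ones used in this work.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Shape signature of a closed contour (N x 2 array of (x, y) points):
    FFT of the complex boundary, normalized for translation, scale, rotation
    and starting point (assumes Z[1] != 0)."""
    z = contour[:, 0] + 1j * contour[:, 1]
    Z = np.fft.fft(z)
    Z[0] = 0.0                        # drop the DC term: translation invariance
    Z = Z / np.abs(Z[1])              # scale invariance
    return np.abs(Z[1:n_coeffs + 1])  # magnitudes: rotation/start invariance

def nearest_neighbor_label(desc, train_descs, train_labels):
    """1-NN classification by Euclidean distance between descriptor vectors."""
    dists = np.linalg.norm(train_descs - desc, axis=1)
    return train_labels[int(np.argmin(dists))]
```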
The majority of existing pattern matching techniques focus on affine invariance, mainly because rotation, scale, translation and shear are common image transformations. However, in some situations, other transformations may be modeled as a small deformation on top of an affine transformation. The proposed algorithm complements existing Fourier transform-based pattern matching methods in such situations. First, the spatial domain pattern is decomposed into non-overlapping concentric circular rings centered at the middle of the pattern. The Fourier transforms of the rings are computed and then mapped to the polar domain. The algorithm assumes that the individual rings are rotated with respect to each other; the variable angles of rotation provide information about the directional features of the pattern. The angle of rotation is determined starting from the Fourier transform of the outermost ring and moving inwards to the innermost ring. Two different approaches, one using a dynamic programming algorithm and the other using a greedy algorithm, are used to determine the directional features of the pattern.
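To illustrate just the ring decomposition, here is a rough sketch that samples each concentric ring on a polar grid and estimates a per-ring rotation by circular cross-correlation. It uses nearest-neighbour sampling, omits the dynamic-programming/greedy consistency step described above, and every name and sign convention in it is an assumption rather than the author's code.

```python
import numpy as np

def ring_rotation_angles(pattern, template, n_rings=8, n_theta=256):
    """Estimate a per-ring rotation of `pattern` relative to `template` by
    circular cross-correlation of concentric rings sampled in polar form."""
    h, w = template.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    angles = []
    for i in range(n_rings):
        r = r_max * (i + 0.5) / n_rings           # mid-radius of ring i
        ys = np.clip(np.rint(cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        xs = np.clip(np.rint(cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        a = template[ys, xs].astype(float)
        b = pattern[ys, xs].astype(float)
        # Circular cross-correlation via FFT; the peak location is the sample
        # shift that best aligns this ring of the two patterns.
        xcorr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
        shift = int(np.argmax(xcorr))
        angles.append(2.0 * np.pi * shift / n_theta)
    return angles
```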
|
20 |
Interactive visualization of financial data: Development of a visual data mining tool. Saltin, Joakim. January 2012.
In this project, a prototype visual data mining tool was developed, allowing users to interactively investigate large multi-dimensional datasets with 2D visualization techniques and so-called drill-down, roll-up and slicing operations. The project included all steps of the development, from writing specifications and designing the program to implementing and evaluating it. Using ideas from data warehousing, custom methods for storing pre-computed aggregations of data (commonly referred to as materialized views) and retrieving data from them were developed and implemented in order to achieve higher performance on large datasets. View materialization enables the program to fetch or calculate a view using other views, which can yield significant performance gains when view sizes are much smaller than the underlying raw dataset. The choice of which views to materialize was automated using a well-known algorithm - the greedy algorithm for view materialization - which selects the fraction of all possible views that is likely (but not guaranteed) to yield the best performance gain. The use of materialized views was shown to have good potential to increase performance for large datasets, with an average speedup (compared to on-the-fly queries) between 20 and 70 for a test dataset containing 500,000 rows. The end result was a program combining flexibility with good performance, which was also reflected in good scores in a user-acceptance test with participants from the company where this project was carried out.
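The greedy algorithm for view materialization referred to here is the one popularized by Harinarayan, Rajaraman and Ullman. A minimal sketch follows; the view names, sizes and lattice in the example are hypothetical, not the project's actual data.

```python
def greedy_view_selection(view_sizes, covers, k, base):
    """HRU-style greedy view selection: repeatedly materialize the view with
    the largest benefit, i.e. the total query-cost saving (in rows scanned)
    over all views whose queries it can answer."""
    selected = {base}
    cost = {v: view_sizes[base] for v in view_sizes}  # cheapest source per view

    def benefit(v):
        return sum(max(cost[w] - view_sizes[v], 0) for w in covers[v])

    for _ in range(k):
        candidates = [v for v in view_sizes if v not in selected]
        if not candidates:
            break
        best = max(candidates, key=benefit)
        if benefit(best) <= 0:
            break
        selected.add(best)
        for w in covers[best]:                        # update query costs
            cost[w] = min(cost[w], view_sizes[best])
    return selected

# Hypothetical lattice: raw data -> (date, account) -> (date), (account).
sizes = {"raw": 500_000, "by_date_account": 50_000,
         "by_date": 400, "by_account": 900}
covers = {"raw": set(sizes),
          "by_date_account": {"by_date_account", "by_date", "by_account"},
          "by_date": {"by_date"}, "by_account": {"by_account"}}
print(greedy_view_selection(sizes, covers, k=2, base="raw"))
```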
|