11 |
Effective Resource Allocation for Non-cooperative Spectrum Sharing. Jacob-David, Dany D., 13 October 2011
Spectrum access protocols have been proposed recently to provide flexible and efficient use
of the available bandwidth. Game theory has been applied to the analysis of the problem
to determine the most effective allocation of the users’ power over the bandwidth. However,
prior analysis has focussed on Shannon capacity as the utility function, even though it is
known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated when using non-Gaussian signalling first in a frequency non-selective environment to focus on the resource allocation problem and its outcomes. It is then extended to a frequency selective environment to examine the proposed algorithm in a more realistic wireless environment. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes into account the actual signal characteristics.
Further, it is demonstrated that higher rates can be achieved with lower transmitted power,
resulting in a smaller spectral footprint, which allows more efficient use of the spectrum
overall. Finally, future spectrum management is discussed where the waveform adaptation
is examined as an additional option to the well-known spectrum agility, rate and transmit
power adaptation when performing spectrum sharing.
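For concreteness, the water-filling allocation that arises from the Shannon-capacity utility discussed above can be sketched as follows. This is a generic illustration, not the algorithm proposed in the thesis: the noise-to-gain ratios, the power budget, and the bisection tolerance are hypothetical example values.

# Minimal sketch of classical water-filling power allocation over parallel sub-bands.
# All numeric values below are illustrative assumptions.
import numpy as np

def water_filling(noise_over_gain, total_power, tol=1e-9):
    """Allocate total_power over channels with given noise/gain ratios.

    Solves p_k = max(mu - n_k, 0) with sum(p_k) = total_power by bisection
    on the water level mu.
    """
    n = np.asarray(noise_over_gain, dtype=float)
    lo, hi = n.min(), n.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - n, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - n, 0.0)

if __name__ == "__main__":
    inv_snr = [0.2, 0.5, 1.0, 2.5]                       # hypothetical noise-to-gain ratios per sub-band
    power = water_filling(inv_snr, total_power=2.0)
    rates = np.log2(1.0 + power / np.array(inv_snr))     # Shannon rates under the Gaussian assumption
    print("power allocation:", np.round(power, 3))
    print("per-band rates  :", np.round(rates, 3))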
|
13 |
Theoretical and numerical studies on the graph partitioning problem / Études théoriques et numériques du problème de partitionnement dans un graphe. Althoby, Haeder Younis Ghawi, 06 November 2017
Given a connected undirected graph G = (V, E) and a positive integer β(n), where n is the number of vertices of G, the vertex separator problem (VSP) is to find a partition of V into three classes A, B and C such that there is no edge between A and B, max{|A|, |B|} is at most β(n), and |C| is minimum. In this thesis, we consider an integer programming formulation of this problem, describe some valid inequalities, and use these results to develop algorithms based on a neighbourhood scheme. We also study the st-connected vertex separator problem. Let s and t be two non-adjacent vertices of V. An st-connected separator in the graph G is a subset S of V \ {s, t} that induces a connected subgraph and whose removal disconnects s from t. The st-connected vertex separator problem consists in finding such a subset of minimum cardinality. We propose three formulations for this problem, give some valid inequalities for the associated polyhedron, and present an efficient heuristic to solve it.
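To make the problem statement concrete, here is a tiny brute-force sketch of the vertex separator problem exactly as defined above. The example graph and the balance bound beta are invented for illustration; the thesis itself works with an integer-programming formulation and neighbourhood-based algorithms rather than enumeration.

# Brute-force VSP on a toy graph: partition V into A, B, C with no A-B edges,
# max(|A|,|B|) <= beta, and |C| minimum. Exponential, for illustration only.
from itertools import product

def vsp_bruteforce(vertices, edges, beta):
    best = None
    for labels in product("ABC", repeat=len(vertices)):
        part = dict(zip(vertices, labels))
        A = [v for v in vertices if part[v] == "A"]
        B = [v for v in vertices if part[v] == "B"]
        C = [v for v in vertices if part[v] == "C"]
        if not A or not B:                      # standard convention: A and B non-empty
            continue
        if max(len(A), len(B)) > beta:
            continue
        if any({part[u], part[v]} == {"A", "B"} for u, v in edges):
            continue                            # an edge crosses between A and B
        if best is None or len(C) < len(best[2]):
            best = (A, B, C)
    return best

if __name__ == "__main__":
    V = list(range(6))
    E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]   # hypothetical small graph
    A, B, C = vsp_bruteforce(V, E, beta=4)
    print("A =", A, "B =", B, "separator C =", C)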
|
14 |
Représentations redondantes et hiérarchiques pour l'archivage et la compression de scènes sonores / Sparse and hierarchical representations for archival and compression of audio scenes. Moussallam, Manuel, 18 December 2012
This thesis is concerned with the automated processing of large volumes of audio data, and more specifically with archiving, a task that encompasses at least two distinct problems: data compression and content indexing. These two problems define objectives that are sometimes in conflict, so addressing them simultaneously is difficult; at the centre of this thesis is therefore the construction of a single coherent framework for both the compression and the indexing of audio archives. Sparse representations of signals in redundant dictionaries have recently shown their ability to fulfil such a role. Their properties, and the methods and algorithms used to obtain them, are studied in the first part of this thesis. Given the volume of the data considered, iterative, so-called greedy, algorithms are of particular interest.
A first contribution of this thesis is a set of variants of the well-known Matching Pursuit algorithm based on random, dynamic sub-sampling of the dictionary. Adapting them to structured time-frequency dictionaries (unions of local cosine bases) yields a significant improvement in the compression of audio scenes, especially at low bit rates. These new algorithms come with an original statistical model of their convergence behaviour, using order statistics and tools from extreme value theory. The other contributions address the second half of the archiving problem: indexing. The same framework is applied to expose the different levels of structure in the data, starting with the detection of redundancies and repetitions. At larger scale, a robust fingerprint-based system for detecting recurring patterns in a radio stream is proposed; its comparative performance in an evaluation campaign of the QUAERO project confirms the relevance of the approach. The exploitation of these structures beyond compression is also considered, in particular through redundancy-informed source separation, which illustrates the variety of processing the chosen framework allows. Bringing these elements together, the layered structure of audio data is accessed hierarchically by greedy decomposition algorithms, allowing the different objectives of archiving to be handled at different steps within the same framework.
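The flavour of a randomly sub-sampled Matching Pursuit can be illustrated as follows. This is a generic sketch, not the author's algorithm: the Gaussian dictionary, the subset size, and the stopping rule are toy assumptions, and the thesis uses structured unions of local cosine bases instead.

# Matching Pursuit that, at each iteration, searches only a random subset of the
# dictionary columns. Dictionary and signal are synthetic examples.
import numpy as np

def subsampled_matching_pursuit(signal, dictionary, n_iter=20, subset=64, seed=0):
    rng = np.random.default_rng(seed)
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        idx = rng.choice(dictionary.shape[1], size=subset, replace=False)
        corr = dictionary[:, idx].T @ residual          # correlations on the random subset only
        best = idx[np.argmax(np.abs(corr))]             # best atom within that subset
        amp = dictionary[:, best] @ residual
        coeffs[best] += amp
        residual -= amp * dictionary[:, best]
    return coeffs, residual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    D = rng.standard_normal((128, 512))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    true = np.zeros(512); true[[10, 200, 400]] = [2.0, -1.5, 1.0]
    x = D @ true
    c, r = subsampled_matching_pursuit(x, D, n_iter=30, subset=64)
    print("residual energy:", float(r @ r))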
|
15 |
Algorithms for the selection of optimal spaced seed sets for transposable element identification. Li, Hui, 30 August 2010
No description available.
|
16 |
Robotic Search Planning in Large Environments with Limited Computational Resources and Unreliable Communications. Biggs, Benjamin Adams, 24 February 2023
This work is inspired by robotic search applications where a robot or team of robots is equipped with sensors and tasked to autonomously acquire as much information as possible from a region of interest. To accomplish this task, robots must plan paths through the region of interest that maximize the effectiveness of the sensors they carry. Receding horizon path planning is a popular approach to addressing the computationally expensive task of planning long paths because it allows robotic agents with limited computational resources to iteratively construct a long path by solving for an optimal short path, traversing a portion of the short path, and repeating the process until a receding horizon path of the desired length has been constructed. However, receding horizon paths do not retain the optimality properties of the short paths from which they are constructed and may perform quite poorly in the context of achieving the robotic search objective. The primary contributions of this work address the worst-case performance of receding horizon paths by developing methods of using terminal rewards in the construction of receding horizon paths. We prove that the proposed methods of constructing receding horizon paths provide theoretical worst-case performance guarantees. Our result can be interpreted as ensuring that the receding horizon path performs no worse in expectation than a given sub-optimal search path. This result is especially practical for subsea applications where, due to use of side-scan sonar in search applications, search paths typically consist of parallel straight lines. Thus for subsea search applications, our approach ensures that expected performance is no worse than the usual subsea search path, and it might be much better.
The methods proposed in this work provide desirable lower-bound guarantees for a single robot as well as teams of robots. Significantly, we demonstrate that existing planning algorithms may be easily adapted to use our proposed methods. We present our theoretical guarantees in the context of subsea search applications and demonstrate the utility of our proposed methods through simulation experiments and field trials using real autonomous underwater vehicles (AUVs). We show that our worst-case guarantees may be achieved despite non-idealities such as sub-optimal short paths used to construct the longer receding horizon path and unreliable communication in multi-agent planning. In addition to theoretical guarantees, an important contribution of this work is to describe specific implementation solutions needed to integrate and implement these ideas for real-time operation on AUVs. / Doctor of Philosophy / This work is inspired by robotic search applications where a robot or team of robots is equipped with sensors and tasked to autonomously acquire as much information as possible from a region of interest. To accomplish this task, robots must plan paths through the region of interest that maximize the effectiveness of the sensors they carry. Receding horizon path planning is a popular approach to addressing the computationally expensive task of planning long paths because it allows robotic agents with limited computational resources to iteratively construct a long path by solving for an optimal short path, traversing a portion of the short path, and repeating the process until a receding horizon path of the desired length has been constructed. However, receding horizon paths do not retain the optimality properties of the short paths from which they are constructed and may perform quite poorly in the context of achieving the robotic search objective. The primary contributions of this work address the worst-case performance of receding horizon paths by developing methods of using terminal rewards in the construction of receding horizon paths. The methods proposed in this work provide desirable lower-bound guarantees for a single robot as well as teams of robots. We present our theoretical guarantees in the context of subsea search applications and demonstrate the utility of our proposed methods through simulation experiments and field trials using real autonomous underwater vehicles (AUVs). In addition to theoretical guarantees, an important contribution of this work is to describe specific implementation solutions needed to integrate and implement these ideas for real-time operation on AUVs.
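The receding-horizon loop described above can be sketched in a toy grid-world form. Everything below is illustrative: the reward field, the horizon length, and in particular the terminal-reward heuristic are invented for this sketch, whereas the guarantees in the dissertation depend on a terminal reward constructed from a specific reference search path.

# Receding-horizon planning sketch: plan an optimal short path (by enumeration),
# take one step, then replan. Grid, rewards, and terminal heuristic are made up.
import itertools

MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def plan_step(pos, reward, horizon, terminal_reward):
    """Score every move sequence of length `horizon` by collected reward plus a
    terminal reward at the endpoint; return the first move of the best sequence."""
    rows, cols = len(reward), len(reward[0])
    best_score, best_first = float("-inf"), None
    for seq in itertools.product(MOVES, repeat=horizon):
        r, c = pos
        collected, visited, ok = 0.0, {pos}, True
        for m in seq:
            r, c = r + MOVES[m][0], c + MOVES[m][1]
            if not (0 <= r < rows and 0 <= c < cols):
                ok = False
                break
            if (r, c) not in visited:
                collected += reward[r][c]
                visited.add((r, c))
        if not ok:
            continue
        score = collected + terminal_reward((r, c))
        if score > best_score:
            best_score, best_first = score, seq[0]
    return best_first

if __name__ == "__main__":
    reward = [[0, 1, 0, 0], [0, 0, 3, 0], [0, 0, 0, 5], [0, 2, 0, 0]]

    def terminal(p):
        # hypothetical heuristic: best remaining cell value, discounted by distance
        return max(reward[r][c] / (1 + abs(r - p[0]) + abs(c - p[1]))
                   for r in range(4) for c in range(4))

    pos = (0, 0)
    for _ in range(6):                       # receding-horizon loop: plan, take one step, repeat
        move = plan_step(pos, reward, horizon=3, terminal_reward=terminal)
        pos = (pos[0] + MOVES[move][0], pos[1] + MOVES[move][1])
        reward[pos[0]][pos[1]] = 0           # reward is consumed once visited
    print("final position:", pos)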
|
17 |
Automatic Classification of Fish in Underwater Video; Pattern Matching - Affine Invariance and Beyond. Gundam, Madhuri, 15 May 2015
Underwater video is used by marine biologists to observe, identify, and quantify living marine resources. Video sequences are typically analyzed manually, which is a time-consuming and laborious process; automating it would save significant time and cost. This work proposes a technique for automatic fish classification in underwater video. The steps involved are background subtraction, fish region tracking, and classification using shape features. Background subtraction separates moving objects from their surrounding environment. Tracking associates multiple views of the same fish in consecutive frames; this step is especially important since recognizing and classifying one or a few of the views as a species of interest may allow labeling the whole sequence as that species. Shape features are extracted from each object using Fourier descriptors and presented to a nearest-neighbor classifier. Finally, the nearest-neighbor results are combined in a probabilistic-like framework to classify an entire sequence.
The majority of existing pattern matching techniques focus on affine invariance, mainly because rotation, scale, translation and shear are common image transformations. However, in some situations, other transformations may be modeled as a small deformation on top of an affine transformation. The proposed algorithm complements existing Fourier transform-based pattern matching methods in such situations. First, the spatial-domain pattern is decomposed into non-overlapping concentric circular rings centered at the middle of the pattern. The Fourier transforms of the rings are computed and then mapped to the polar domain. The algorithm assumes that the individual rings are rotated with respect to each other, and these variable angles of rotation provide information about the directional features of the pattern. The angle of rotation is determined starting from the Fourier transform of the outermost ring and moving inwards to the innermost ring. Two approaches, one using a dynamic programming algorithm and the other a greedy algorithm, are used to determine the directional features of the pattern.
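A minimal sketch of the Fourier-descriptor / nearest-neighbour stage of the pipeline is given below. The toy contours, the number of retained harmonics, and the invariance normalisation are illustrative assumptions rather than the parameters used in the thesis.

# Shape classification with Fourier descriptors and a nearest-neighbour rule.
# Contours are synthetic (circle, ellipse); descriptors use harmonic magnitudes.
import numpy as np

def fourier_descriptors(contour, n_keep=8):
    """Contour as complex points x + iy. Keep the n_keep lowest positive and negative
    harmonics (skipping the DC term for translation invariance), use magnitudes for
    rotation invariance, and normalise by the largest magnitude for scale invariance."""
    z = np.asarray(contour, dtype=complex)
    F = np.fft.fft(z)
    idx = list(range(1, n_keep + 1)) + list(range(-n_keep, 0))
    d = np.abs(F[idx])
    return d / (d.max() + 1e-12)

def nearest_neighbor(query, gallery):
    """gallery: list of (label, descriptor). Returns the label of the closest descriptor."""
    return min((np.linalg.norm(query - d), lab) for lab, d in gallery)[1]

def circle(n=64, r=1.0):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return r * np.exp(1j * t)

def ellipse(n=64, a=2.0, b=1.0):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return a * np.cos(t) + 1j * b * np.sin(t)

if __name__ == "__main__":
    gallery = [("circle", fourier_descriptors(circle())),
               ("ellipse", fourier_descriptors(ellipse()))]
    query = fourier_descriptors(3.0 * circle() + (5 + 2j))   # scaled and translated circle
    print("predicted class:", nearest_neighbor(query, gallery))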
|
18 |
Interactive visualization of financial data: Development of a visual data mining tool. Saltin, Joakim, January 2012
In this project, a prototype visual data mining tool was developed, allowing users to interactively investigate large multi-dimensional datasets visually (using 2D visualization techniques) through so-called drill-down, roll-up and slicing operations. The project included all steps of the development, from writing specifications and designing the program to implementing and evaluating it. Using ideas from data warehousing, custom methods for storing pre-computed aggregations of data (commonly referred to as materialized views) and retrieving data from them were developed and implemented in order to achieve higher performance on large datasets. View materialization enables the program to fetch or calculate a view from other views, which can yield significant performance gains when view sizes are much smaller than the underlying raw dataset. The choice of which views to materialize was automated using a well-known algorithm, the greedy algorithm for view materialization, which greedily selects a subset of all possible views that is likely (but not guaranteed) to yield the best performance gain. The use of materialized views was shown to have good potential to increase performance on large datasets, with an average speedup (compared to on-the-fly queries) between 20 and 70 for a test dataset containing 500,000 rows. The end result was a program combining flexibility with good performance, which was also reflected in good scores in a user-acceptance test with participants from the company where the project was carried out.
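The greedy view-selection idea referenced above (due to Harinarayan, Rajaraman and Ullman) can be sketched as follows. The lattice, the view sizes, and the cost model (the cost of answering a view equals the size of its cheapest materialized ancestor) are simplified, hypothetical examples, not the configuration used in the project.

# Greedy selection of k views to materialize: repeatedly pick the view whose
# materialization most reduces the total cost of answering every view.
def greedy_view_selection(views, size, ancestors, k):
    """views: view names; size[v]: row count; ancestors[v]: views (including v) that
    can answer v. The top view (raw data) is assumed already materialized."""
    top = max(views, key=lambda v: size[v])
    materialized = {top}

    def cost(v, mat):
        # cheapest materialized view that can answer v
        return min(size[a] for a in ancestors[v] if a in mat)

    for _ in range(k):
        def benefit(c):
            return sum(cost(v, materialized) - cost(v, materialized | {c}) for v in views)
        best = max((v for v in views if v not in materialized), key=benefit)
        materialized.add(best)
    return materialized

if __name__ == "__main__":
    # toy 3-dimension lattice over dimensions {a, b, c}; "abc" plays the role of raw data
    size = {"abc": 500_000, "ab": 50_000, "ac": 80_000, "bc": 200_000,
            "a": 5_000, "b": 10_000, "c": 20_000, "": 1}
    ancestors = {v: {u for u in size if set(v) <= set(u)} for v in size}
    print(greedy_view_selection(size.keys(), size, ancestors, k=3))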
|
19 |
Signal reconstruction from incomplete and misplaced measurements. Sastry, Challa; Hennenfent, Gilles; Herrmann, Felix J., January 2007
Constrained by practical and economic considerations, one often uses seismic data with missing traces. The use of such data results in image artifacts and poor spatial resolution. Sometimes, due to practical limitations, measurements may be available on a perturbed grid instead of on the designated grid. Due to algorithmic requirements, when such measurements are treated as lying on the designated grid, the recovery procedures may introduce additional artifacts. This paper interpolates incomplete data onto a regular grid via the Fourier domain, using a recently developed greedy algorithm. The basic objective is to study experimentally how large a perturbation in the measurement coordinates can be tolerated while still treating the measurements on the perturbed grid as if they were on the designated grid and achieving faithful recovery. Our experimental work shows that for compressible signals, a uniformly distributed perturbation can be offset by taking slightly more measurements.
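A hedged sketch of greedy, Fourier-domain recovery of a signal from an incomplete set of samples, in the spirit of the procedure studied here, is given below. The test signal, the sampling mask, and the number of iterations are toy assumptions; the paper works with seismic traces and a specific recently developed greedy solver.

# Greedy (matching-pursuit style) recovery of a Fourier-sparse signal from samples
# observed at a subset of grid locations, interpolated back onto the full grid.
import numpy as np

def greedy_fourier_recovery(samples, sample_idx, n, n_iter=10):
    coeffs = np.zeros(n, dtype=complex)
    residual = np.asarray(samples, dtype=complex).copy()
    t = np.arange(n)
    for _ in range(n_iter):
        full = np.zeros(n, dtype=complex)
        full[sample_idx] = residual                      # zero-filled residual on the regular grid
        k = np.argmax(np.abs(np.fft.fft(full)))          # most correlated Fourier atom
        atom_obs = np.exp(2j * np.pi * k * t / n)[sample_idx] / n
        amp = (atom_obs.conj() @ residual) / (atom_obs.conj() @ atom_obs)
        coeffs[k] += amp
        residual -= amp * atom_obs
    return np.fft.ifft(coeffs)                           # interpolated signal on the full grid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 256
    t = np.arange(n)
    x = np.cos(2 * np.pi * 7 * t / n) + 0.5 * np.sin(2 * np.pi * 21 * t / n)
    idx = np.sort(rng.choice(n, size=96, replace=False))  # irregular, incomplete sampling
    x_hat = greedy_fourier_recovery(x[idx], idx, n, n_iter=8)
    print("max reconstruction error:", float(np.max(np.abs(x_hat.real - x))))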
|
20 |
Mesures de similarité pour cartes généralisées / Similarity measures between generalized maps. Combier, Camille, 28 November 2012
A generalized map is a topological model that implicitly represents a set of cells (vertices, edges, faces, volumes, ...) together with their incidence and adjacency relations, using a set of darts and involutions. Generalized maps are used in particular to model 3D meshes and images. To date, few tools exist for analysing and comparing generalized maps; our goal is to define a set of error-tolerant tools for comparing them. We first define a similarity measure based on the size of the common part of two generalized maps, called the maximum common submap. We define two types of submaps, partial and induced: the induced submap must preserve all involutions, whereas the partial submap allows some involutions not to be preserved, by analogy with partial subgraphs, in which not all edges need be present. We then define a set of operations that modify the darts and sewings of a generalized map, together with the associated edit distance. The edit distance is the minimal cost over all sequences of operations transforming one generalized map into the other. Through a substitution operation, this distance can take labels into account; labels are attached to darts and add information to the generalized maps. We also show that, for certain cost functions, our edit distance can be computed directly from the maximum common submap. Computing the edit distance is an NP-hard problem. We propose a greedy algorithm that computes an approximation of our map edit distance in polynomial time, together with a set of heuristics based on descriptors of the neighbourhood of the darts to guide the greedy algorithm; we evaluate these heuristics on randomly generated test sets for which a bound on the distance is known. Finally, we discuss applications of our similarity measures to image and mesh analysis. We compare our edit distance on generalized maps with the graph edit distance commonly used in structural pattern recognition, define heuristics that take into account labels of generalized maps modelling images and meshes, and highlight the qualitative aspect of our matching, which puts image regions and mesh points in correspondence.
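The greedy approximation idea can be illustrated with a generic assignment-style sketch. The dart descriptors, the cost model, and the example maps below are invented; they are not the edit operations or the heuristics defined in the thesis.

# Greedy approximation of an edit distance between two sets of labelled elements:
# match the cheapest substitution pairs first, then charge insertions/deletions.
def greedy_edit_distance(darts1, darts2, sub_cost, ins_cost=1.0, del_cost=1.0):
    pairs = sorted(((sub_cost(a, b), i, j)
                    for i, a in enumerate(darts1)
                    for j, b in enumerate(darts2)))
    used1, used2, total = set(), set(), 0.0
    for c, i, j in pairs:
        if i in used1 or j in used2:
            continue
        if c >= ins_cost + del_cost:          # cheaper to delete and insert than substitute
            break
        used1.add(i); used2.add(j)
        total += c
    total += del_cost * (len(darts1) - len(used1))
    total += ins_cost * (len(darts2) - len(used2))
    return total

if __name__ == "__main__":
    # descriptors: (label, degree-like neighbourhood summary), purely hypothetical
    g1 = [(0, 2), (0, 3), (1, 2), (2, 1)]
    g2 = [(0, 2), (1, 2), (2, 2), (2, 1), (0, 1)]
    cost = lambda a, b: (a[0] != b[0]) + 0.5 * abs(a[1] - b[1])
    print("approximate edit distance:", greedy_edit_distance(g1, g2, cost))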
|