531
Computations on Massive Data Sets : Streaming Algorithms and Two-party Communication / Calculs sur des grosses données : algorithmes de streaming et communication entre deux joueurs
Konrad, Christian 05 July 2013 (has links)
Dans cette thèse, on considère deux modèles de calcul qui abordent des problèmes qui se posent lors du traitement des grosses données. Le premier modèle est le modèle de streaming. Lors du traitement des grosses données, un accès aux données de façon aléatoire est trop couteux. Les algorithmes de streaming ont un accès restreint aux données: ils lisent les données de façon séquentielle (par passage) une fois ou peu de fois. De plus, les algorithmes de streaming utilisent une mémoire d'accès aléatoire de taille sous-linéaire dans la taille des données. Le deuxième modèle est le modèle de communication. Lors du traitement des données par plusieurs entités de calcul situées à des endroits différents, l'échange des messages pour la synchronisation de leurs calculs est souvent un goulet d'étranglement. Il est donc préférable de minimiser la quantité de communication. Un modèle particulier est la communication à sens unique entre deux participants. Dans ce modèle, deux participants calculent un résultat en fonction des données qui sont partagées entre eux et la communication se réduit à un seul message. On étudie les problèmes suivants: 1) Les couplages dans le modèle de streaming. L'entrée du problème est un flux d'arêtes d'un graphe G=(V,E) avec n=|V|. On recherche un algorithme de streaming qui calcule un couplage de grande taille en utilisant une mémoire de taille O(n polylog n). L'algorithme glouton remplit ces contraintes et calcule un couplage de taille au moins 1/2 fois la taille d'un couplage maximum. Une question ouverte depuis longtemps demande si l'algorithme glouton est optimal si aucune hypothèse sur l'ordre des arêtes dans le flux n'est faite. Nous montrons qu'il y a un meilleur algorithme que l'algorithme glouton si les arêtes du graphe sont dans un ordre uniformément aléatoire. De plus, nous montrons qu'avec deux passages on peut calculer un couplage de taille strictement supérieure à 1/2 fois la taille d'un couplage maximum sans contraintes sur l'ordre des arêtes. 2) Les semi-couplages en streaming et en communication. Un semi-couplage dans un graphe biparti G=(A,B,E) est un sous-ensemble d'arêtes qui couple tous les sommets de type A exactement une fois aux sommets de type B, de façon pas forcément injective. L'objectif est de minimiser le nombre maximal de sommets de type A qui sont couplés au même sommet de type B. Pour ce problème, nous présentons un algorithme qui, pour tout 0<=ε<=1, calcule une O(n^((1-ε)/2))-approximation en utilisant une mémoire de taille Õ(n^(1+ε)). De plus, nous montrons des bornes supérieures et des bornes inférieures pour la complexité de communication entre deux participants pour ce problème, ainsi que de nouveaux résultats concernant la structure des semi-couplages. 3) Validité des fichiers XML dans le modèle de streaming. Un fichier XML de taille n est une séquence de balises ouvrantes et fermantes. Une DTD est un ensemble de contraintes de validité locales d'un fichier XML. Nous étudions des algorithmes de streaming pour tester si un fichier XML satisfait les contraintes décrites dans une DTD. Notre résultat principal est un algorithme de streaming qui fait O(log n) passages, utilise 3 flux auxiliaires et une mémoire de taille O(log^2 n). De plus, pour le problème de validation des fichiers XML qui décrivent des arbres binaires, nous présentons des algorithmes en un passage et en deux passages qui utilisent une mémoire de taille sous-linéaire. 4) Correction d'erreur pour la distance du cantonnier. Alice et Bob ont des ensembles de n points sur une grille en d dimensions.
Alice envoie un échantillon de petite taille à Bob qui, après réception, déplace ses points pour que la distance du cantonnier entre les points d'Alice et les points de Bob diminue. Pour tout k>0, nous montrons qu'il y a un protocole presque optimal de communication avec coût de communication Õ(kd) tel que les déplacements des points effectués par Bob aboutissent à un facteur d'approximation de O(d) par rapport aux meilleurs déplacements de k points. / In this PhD thesis, we consider two computational models that address problems that arise when processing massive data sets. The first model is the Data Streaming Model. When processing massive data sets, random access to the input data is very costly. Therefore, streaming algorithms only have restricted access to the input data: They sequentially scan the input data once or only a few times. In addition, streaming algorithms use a random access memory of sublinear size in the length of the input. Sequential input access and sublinear memory are drastic limitations when designing algorithms. The major goal of this PhD thesis is to explore the limitations and the strengths of the streaming model. The second model is the Communication Model. When data is processed by multiple computational units at different locations, then the message exchange of the participating parties for synchronizing their calculations is often a bottleneck. The amount of communication should hence be as little as possible. A particular setting is the one-way two-party communication setting. Here, two parties collectively compute a function of the input data that is split among the two parties, and the whole message exchange reduces to a single message from one party to the other one. We study the following four problems in the context of streaming algorithms and one-way two-party communication: (1) Matchings in the Streaming Model. We are given a stream of edges of a graph G=(V,E) with n=|V|, and the goal is to design a streaming algorithm that computes a matching using a random access memory of size O(n polylog n). The Greedy matching algorithm fits into this setting and computes a matching of size at least 1/2 times the size of a maximum matching. A long-standing open question is whether the Greedy algorithm is optimal if no assumption about the order of the input stream is made. We show that it is possible to improve on the Greedy algorithm if the input stream is in uniform random order. Furthermore, we show that with two passes an approximation ratio strictly larger than 1/2 can be obtained if no assumption on the order of the input stream is made. (2) Semi-matchings in Streaming and in Two-party Communication. A semi-matching in a bipartite graph G=(A,B,E) is a subset of edges that matches all A vertices exactly once to B vertices, not necessarily in an injective way. The goal is to minimize the maximal number of A vertices that are matched to the same B vertex. We show that for any 0<=ε<=1, there is a one-pass streaming algorithm that computes an O(n^((1-ε)/2))-approximation using Õ(n^(1+ε)) space. Furthermore, we provide upper and lower bounds on the two-party communication complexity of this problem, as well as new results on the structure of semi-matchings. (3) Validity of XML Documents in the Streaming Model. An XML document of length n is a sequence of opening and closing tags. A DTD is a set of local validity constraints of an XML document. We study streaming algorithms for checking whether an XML document fulfills the validity constraints of a given DTD.
Our main result is an O(log n)-pass streaming algorithm with 3 auxiliary streams and O(log^2 n) space for this problem. Furthermore, we present one-pass and two-pass sublinear-space streaming algorithms for checking validity of XML documents that encode binary trees. (4) Budget-Error-Correcting under Earth-Mover-Distance. We study the following one-way two-party communication problem. Alice and Bob have sets of n points on a d-dimensional grid [Δ]^d for an integer Δ. Alice sends a small sketch of her points to Bob and Bob adjusts his point set towards Alice's point set so that the Earth-Mover-Distance of Bob's points and Alice's points decreases. For any k>0, we show that there is an almost tight randomized protocol with communication cost Õ(kd) such that Bob's adjustments lead to an O(d)-approximation compared to the k best possible adjustments that Bob could make.
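To make the Greedy baseline of problem (1) concrete, here is a minimal one-pass sketch (illustrative Python, not code from the thesis): an edge is kept exactly when both of its endpoints are still unmatched, which uses O(n) words of memory and yields at least half of a maximum matching.

```python
def greedy_streaming_matching(edge_stream):
    """One-pass Greedy matching: keep an edge iff both endpoints are still free.

    Uses O(n) memory (the set of matched vertices plus the matching itself)
    and guarantees a matching of size at least 1/2 of a maximum matching.
    """
    matched = set()   # vertices already covered by the matching
    matching = []     # selected edges
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: a path 1-2-3-4 streamed in adversarial order.
print(greedy_streaming_matching([(2, 3), (1, 2), (3, 4)]))  # [(2, 3)] -> half of the optimum
```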
532
Déconvolution aveugle parcimonieuse en imagerie échographique avec un algorithme CLEAN adaptatif / Sparse blind deconvolution in ultrasound imaging using an adaptive CLEAN algorithm
Chira, Liviu-Teodor 17 October 2013 (has links)
L'imagerie médicale ultrasonore est une modalité en perpétuelle évolution et notamment en post-traitement où il s'agit d'améliorer la résolution et le contraste des images. Ces améliorations devraient alors aider le médecin à mieux distinguer les tissus examinés, améliorant ainsi le diagnostic médical. Il existe déjà une large palette de techniques "hardware" et "software". Dans ce travail nous nous sommes focalisés sur la mise en oeuvre de techniques dites de "déconvolution aveugle", ces techniques temporelles utilisant l'enveloppe du signal comme information de base. Elles sont capables de reconstruire des images parcimonieuses, c'est-à-dire des images de diffuseurs dépourvues de bruit spéculaire. Les principales étapes de ce type de méthodes consistent en i) l'estimation aveugle de la fonction d'étalement du point (PSF), ii) l'estimation des diffuseurs en supposant l'environnement exploré parcimonieux et iii) la reconstruction d'images par reconvolution avec une PSF "idéale". La méthode proposée a été comparée avec des techniques faisant référence dans le domaine de l'imagerie médicale en utilisant des signaux synthétiques, des séquences ultrasonores réelles (1D) et des images ultrasonores (2D) ayant des statistiques différentes. La méthode, qui offre un temps d'exécution très réduit par rapport aux techniques concurrentes, est adaptée pour les images présentant une quantité réduite ou moyenne de diffuseurs. / Ultrasound imaging is continuously advancing, in particular towards higher resolution, in order to help physicians better observe and distinguish the examined tissues. A large range of techniques already exists to obtain the best results, both hardware- and signal-processing-based. This work focuses on post-processing techniques of blind deconvolution in ultrasound imaging; the implemented algorithm works in the time domain and uses the envelope signal as its input information. It is a blind deconvolution technique that is able to reconstruct reflectors and eliminate the diffusive speckle noise. The main steps are: the blind estimation of the point spread function (PSF), the estimation of reflectors under a sparsity assumption on the examined environment, and the reconstruction of the image by reconvolving the sparse tissue map with an ideal PSF. The proposed method was compared with classical techniques of medical image reconstruction using synthetic signals, real ultrasound sequences (1D) and ultrasound images (2D), including two types of statistically different images. The method is suitable for images that represent tissue with a reduced or moderate number of scatterers. It also offers a lower execution time than its direct competitors.
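The CLEAN-style loop summarized above (find the strongest echo, record a reflector, subtract a scaled and shifted copy of the PSF, iterate) can be illustrated on a toy 1-D signal as below. This sketch assumes the PSF is already known, whereas the thesis estimates it blindly, and it omits the final reconvolution with an "ideal" PSF; all parameters are placeholders.

```python
import numpy as np

def clean_1d(rf_line, psf, n_iter=50, gain=0.5):
    """Toy CLEAN-style sparse deconvolution of a 1-D envelope/RF line.

    Iteratively locates the strongest remaining sample, records a scaled
    reflector there, and subtracts gain * amplitude * (shifted PSF) from
    the residual. Returns the sparse reflector map and the residual.
    """
    residual = rf_line.astype(float).copy()
    reflectors = np.zeros_like(residual)
    half = len(psf) // 2
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))          # strongest remaining echo
        amp = gain * residual[k]
        reflectors[k] += amp
        lo, hi = max(0, k - half), min(len(residual), k - half + len(psf))
        residual[lo:hi] -= amp * psf[lo - (k - half):hi - (k - half)]
    return reflectors, residual

# Tiny demo: two reflectors blurred by a Gaussian PSF plus a little noise.
t = np.arange(-8, 9)
psf = np.exp(-0.5 * (t / 2.0) ** 2)
truth = np.zeros(128); truth[40] = 1.0; truth[90] = 0.6
observed = np.convolve(truth, psf, mode="same") + 0.01 * np.random.randn(128)
sparse_map, res = clean_1d(observed, psf, n_iter=200, gain=0.2)
```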
533
Recherche automatisée de motifs dans les arbres phylogénétiques / Automatic phylogenetic tree pattern matching
Bigot, Thomas 05 June 2013 (has links)
La phylogénie permet de reconstituer l'histoire évolutive de séquences ainsi que des espèces qui les portent. Les récents progrès des méthodes de séquençage ont permis une inflation du nombre de séquences disponibles et donc du nombre d'arbres de gènes qu'il est possible de construire. La question qui se pose est alors d'optimiser la recherche d'informations dans ces arbres. Cette recherche doit être à la fois exhaustive et efficace. Pour ce faire, mon travail de thèse a consisté en l'écriture puis en l'utilisation d'un ensemble de programmes capables de parcourir et d'annoter les arbres phylogénétiques. Cet ensemble de programmes porte le nom de TPMS (Tree Pattern Matching Suite). Le premier de ces programmes (tpms_query) permet d'effectuer l'interrogation de collections à l'aide d'un formalisme dédié. Les possibilités qu'il offre sont : la détection de transferts horizontaux : si, dans un arbre de gènes, une espèce est branchée au milieu d'un groupe monophylétique d'espèces avec lesquelles elle n'est pas apparentée, on peut supposer qu'il s'agit d'un transfert horizontal, si ces organismes sont des procaryotes ou des eucaryotes unicellulaires. La détection d'orthologie : si une partie d'un arbre de gènes correspond exactement à l'arbre des espèces, on peut alors supposer que ces gènes sont un ensemble de gènes d'orthologues. La validation de phylogénies connues : quand l'arbre des espèces donne lieu à des débats, il est possible d'interroger une large collection d'arbres de gènes pour voir combien de familles de gènes correspondent à chaque hypothèse. Un autre programme, tpms_computations, permet d'effectuer des opérations en parallèle sur tous les arbres, et propose notamment l'enracinement automatique des arbres via différents critères, ainsi que l'extraction de sous-arbres d'orthologues (séquence unique par espèce). Il propose aussi une méthode de détection automatique d'incongruences. La thèse présente le contexte, les différents algorithmes à la base de ces programmes, ainsi que plusieurs utilisations qui en ont été faites. / Phylogeny allows the reconstruction of the evolutionary history of sequences and of the species that carry them. Recent progress in sequencing methods has produced a growing number of available sequences, and hence of gene trees that one can build. One of the resulting issues is to optimise the extraction of information from these trees. Such an extraction should be both complete and efficient. To address this, my thesis consisted in writing and then using a suite of programs whose aim is to browse and annotate phylogenetic trees. This program suite is named TPMS (Tree Pattern Matching Suite). It browses and annotates trees with several algorithms. The first of them, tpms_query, consists in querying collections using a dedicated formalism. This allows one to detect horizontal transfers (if, in a gene tree, a species is nested in a monophyletic group of unrelated species, one can infer a horizontal transfer, provided these organisms are prokaryotes or unicellular eukaryotes), to detect orthology (if a part of a gene tree exactly matches the species tree, one can suppose these genes form a set of orthologues), and to validate known phylogenies (when controversy exists concerning the species tree, it is possible to query a large collection of gene trees and count the families matching each hypothesis). Another program, tpms_computations, performs parallel operations on all the trees, such as automatic rooting of trees according to different criteria and the extraction of orthologous subtrees (one sequence per species).
It also allows automatic detection of incongruences. The thesis introduces the context, the algorithms on which the programs are based, and several uses that have been made of them.
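Purely as an illustration of the orthology-detection idea (a gene-tree clade whose species content coincides with a species-tree clade suggests a set of orthologues), and not as actual TPMS code, one can compare clades of trees written as nested tuples; a real gene tree would first have its gene identifiers mapped to species names.

```python
def leaf_set(tree):
    """Leaf names of a tree written as nested tuples (a leaf is any non-tuple)."""
    if not isinstance(tree, tuple):
        return frozenset([tree])
    s = frozenset()
    for child in tree:
        s |= leaf_set(child)
    return s

def clades(tree):
    """All clades (leaf sets of the root, internal nodes and leaves) of a tree."""
    out = {leaf_set(tree)}
    if isinstance(tree, tuple):
        for child in tree:
            out |= clades(child)
    return out

def congruent_clades(gene_tree, species_tree):
    """Gene-tree clades whose species content exactly matches a species-tree clade,
    a rough proxy for 'this subtree looks like a set of orthologues'."""
    return clades(gene_tree) & clades(species_tree)

# Tiny example: ((A,B),C) species tree; the gene tree groups A with C instead.
species_tree = (("A", "B"), "C")
gene_tree = (("A", "C"), "B")
print(congruent_clades(gene_tree, species_tree))
```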
534
Impactos do auxílio-alimentação nos índices antropométricos e no consumo de grupos de alimentos e de nutrientes / Impacts of food aid on anthropometric indices and consumption of food groups and nutrients
Santiago, Letícia Alves Tadeu 01 April 2019 (has links)
O cerne dos programas de auxílio-alimentação no Brasil foi a necessidade de combater a desnutrição que acometia uma considerável parte da população brasileira. A desnutrição foi superada no país, que atualmente enfrenta um outro problema relacionado à alimentação, que é o crescente número de pessoas com sobrepeso e obesidade. As transformações nos padrões nutricionais e alimentares, influenciadas pelas mudanças sociais, econômicas e demográficas que ocorreram ao longo das últimas décadas, contribuíram com o aumento do número de casos de pessoas com excesso de peso e de doenças associadas. Trata-se de problemas de saúde pública que geram perdas econômicas e de bem-estar. Diante desse cenário e da escassa literatura que avaliou os efeitos dos programas de auxílio-alimentação ao longo dos anos, esta pesquisa buscou analisar os impactos desses programas nos índices antropométricos, especificamente no excesso de peso e na obesidade, e também no consumo de grupos de alimentos e componentes alimentares dos trabalhadores beneficiados. Para tanto, utilizaram-se os dados da Pesquisa de Orçamentos Familiares 2008/2009 e o método Propensity Score Matching. Os resultados apontaram que receber auxílio-alimentação aumenta as chances de obesidade entre as mulheres e de sobrepeso entre os homens. A análise por estratos de renda evidenciou que as chances de excesso de peso e obesidade são maiores entre os homens mais pobres, do primeiro estrato, e de obesidade para as mulheres do segundo estrato. Em relação aos impactos nos grupos de alimentos, foi observado um aumento no consumo de alimentos pertencentes ao grupo de Frutas; Farinhas e massas, Bebidas e Preparações Mistas e uma redução no consumo de Grãos e Legumes, entre as mulheres beneficiárias. Já para os homens que recebem o benefício, foi observado aumento no consumo de alimentos dos grupos das Farinhas e massas, Óleos e gorduras, Bebidas, Pizzas e salgados e de Oleaginosas, em comparação aos não beneficiários. Os resultados referentes à ingestão de nutrientes revelaram que as beneficiárias consumiram mais: Energia, Proteína; Carboidrato, Colesterol; Cálcio; Magnésio, Fósforo; Ferro; Selênio; Retinol; Vitaminas: A, B1, B2, B3, Equivalente a B3, B6; B12 e Folato; Açúcar total e de adição. Já os homens beneficiários ingeriram maiores quantidades de: Energia; Carboidrato; Magnésio, Selênio, Vitaminas: B1, B3, B6, E e Folato; Ácidos graxos: poliinsaturados, poliinsaturado 18:2 e trans total; e Açúcar total e de adição. Conclui-se, portanto, que os programas de alimentação dos trabalhadores estão contribuindo, muitas vezes, com a piora da saúde dos assistidos, em especial os mais pobres. Em parte, os programas de auxílio-alimentação reduzem a qualidade alimentar dos beneficiários e beneficiárias, pois eles consomem maior quantidade de alimentos e de componentes alimentares nocivos ao equilíbrio nutricional, ao mesmo tempo que também ingerem mais nutrientes benéficos ao correto funcionamento do organismo. / Food assistance programs in Brazil had as their main goal the control of malnutrition, which affected a large part of the population. However, malnutrition is today much reduced in Brazil, and the country faces another problem: the increasing number of overweight and obese people. Changes in nutritional and food patterns, driven by the social, economic and demographic evolution of the last decades, have contributed to increasing the number of overweight people and of related diseases in the population. These problems cause economic and welfare losses.
Nevertheless, the literature still lacks in-depth studies on this topic. In this context, this research aims to analyse the impact of a food assistance program for employees on overweight and obesity indices, as well as on the consumption of food groups and nutrients by Brazilian workers. For that, we use the database of the Brazilian Expenditure Survey 2008/09 and apply the Propensity Score Matching method. Our results show that being a participant in the food assistance program increases the probability of obesity among women and of overweight among men. They also point out that the probability of being overweight is greater for poor men, and that the probability of being obese is greater for both poor men and poor women. Regarding the impact on food consumption, the results show, for women, an increase in the consumption of fruits, flours and pastas, drinks and mixed preparations, and a reduction in the consumption of grains and vegetables. For men, the results point to an increase in the consumption of flours and pastas, oils and fats, drinks, pizzas and savoury snacks, and oilseeds. Regarding nutrients, women participants raise their consumption of energy, carbohydrate, proteins, cholesterol, calcium, phosphorus, iron, selenium, magnesium, retinol, total and added sugar, folate and vitamins A, B1, B2, B3, B3 equivalent, B6 and B12. Men participants, in turn, increase their consumption of energy, carbohydrate, selenium, magnesium, total and added sugar, vitamins B1, B3, B6 and E, folate, and fatty acids (polyunsaturated, polyunsaturated 18:2 and total trans). In summary, our results support the conclusion that this Brazilian food assistance program has contributed, in many cases, to deteriorating the health of its participants, especially the poorest, and that the main channel for this is a reduction in the quality of the food and nutrients they consume. But the program also contributes to increasing the consumption of some important nutrients.
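A minimal sketch of the nearest-neighbour Propensity Score Matching estimator mentioned above is given below, on synthetic data; the covariates, the logistic specification and the outcome are placeholders and do not reproduce the actual POF 2008/2009 analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_att(X, treated, outcome):
    """Nearest-neighbour propensity score matching (1:1, with replacement).

    X       : (n, p) array of covariates
    treated : (n,) boolean array, True for food-aid beneficiaries
    outcome : (n,) array, e.g. BMI or an obesity indicator
    Returns the average treatment effect on the treated (ATT).
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    c_idx = np.where(~treated)[0]
    effects = []
    for i in t_idx:
        j = c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]   # closest control by score
        effects.append(outcome[i] - outcome[j])
    return float(np.mean(effects))

# Synthetic illustration only (no real POF data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))
outcome = 25 + 0.8 * treated + 0.5 * X[:, 0] + rng.normal(size=500)
print(psm_att(X, treated, outcome))
```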
535
Modélisation gros grain de macromolécules végétales : champ de force paramétré par dynamique moléculaire et application à des assemblages cellulose-xylane / Coarse grain modelling of plant cell wall macromolecules : force field deduced from molecular dynamics and application to cellulose/xylan assembly
Li, Liang 20 December 2013 (links)
La compréhension de la relation structure-propriétés des parois des cellules végétales s'appuie de plus en plus sur l'utilisation d'approches de modélisation moléculaire en général et de dynamique moléculaire en particulier. A ce jour, le poids numérique que représente une telle démarche à l'échelle de l'atome est la plupart du temps incompatible avec les puissances de calcul disponibles. C'est pourquoi des méthodes d'approximation sont indispensables pour pouvoir mettre en œuvre des simulations numériques à l'échelle de systèmes supramoléculaires réalistes. Dans le cadre de cette thèse, un modèle de dynamique moléculaire, dit « gros grain », a été mis au point à l'échelle du monomère de macromolécules pariétales. Les paramètres de ce modèle ont été calibrés à l'aide de simulations de dynamique moléculaire à l'échelle de l'atome. Ce modèle a fait l'objet de quatre applications : adsorption d'une chaine de xylane sur une surface de cellulose cristalline, arrachement d'une chaine de xylane adsorbée sur une surface de cellulose cristalline par une pointe AFM, adsorption d'une phase amorphe de xylane sur une surface de cellulose cristalline et adsorption d'une phase amorphe de xylane sur un monocristal de cellulose exposant trois surfaces différentes. Des effets de structuration au voisinage de la cellulose sont observés. / Nowadays, the understanding of the structure-properties relationship of plant cell walls relies more and more on molecular modelling approaches, and on molecular dynamics in particular. To date, the numerical weight of such an approach is usually out of the reach of available computing power if the atomic scale is used. As a consequence, building approximate methods is of crucial importance to perform numerical simulations of realistic supramolecular systems. Within the framework of this PhD, a "coarse grain" molecular dynamics model was built at the scale of the monomers of plant cell wall macromolecules, its parameters being fixed with the help of atom-scale molecular dynamics simulations. Then, several numerical studies were carried out: a single xylan chain was adsorbed on a crystalline cellulose surface, a single xylan chain was pulled from a crystalline cellulose surface with the tip of an AFM cantilever, an amorphous xylan phase was adsorbed on a cellulose surface, and an amorphous xylan phase was adsorbed on a cellulose crystal exposing three different surfaces. Local structuring effects were observed.
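As an illustration of what a monomer-scale coarse-grain model looks like (one bead per monomer, harmonic bonds between consecutive beads, Lennard-Jones non-bonded terms; angle and torsion terms are omitted here), here is a small sketch with placeholder parameters; in the thesis these parameters are calibrated against atom-scale molecular dynamics.

```python
import numpy as np

# Placeholder parameters; in a real model they would be fitted to atomistic MD
# (e.g. by inverting bond-length and pair distributions from all-atom trajectories).
K_BOND, R0 = 500.0, 0.52       # kJ/mol/nm^2, nm  (one bead per monomer)
EPS, SIGMA = 1.5, 0.47         # kJ/mol, nm

def bond_energy(r):
    """Harmonic bond between consecutive beads of a chain."""
    return 0.5 * K_BOND * (r - R0) ** 2

def lj_energy(r):
    """Lennard-Jones non-bonded interaction between beads."""
    x = (SIGMA / r) ** 6
    return 4.0 * EPS * (x * x - x)

def chain_energy(coords, excluded_neighbours=1):
    """Total energy of one coarse-grained chain given bead coordinates (n, 3)."""
    n = len(coords)
    e = 0.0
    for i in range(n - 1):                                  # bonded terms
        e += bond_energy(np.linalg.norm(coords[i + 1] - coords[i]))
    for i in range(n):                                      # non-bonded terms
        for j in range(i + 1 + excluded_neighbours, n):
            e += lj_energy(np.linalg.norm(coords[j] - coords[i]))
    return e

beads = np.outer(np.arange(10), [0.52, 0.0, 0.0])   # straight 10-mer along x
print(chain_energy(beads))
```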
536
Allocations de ressources dans les réseaux sans fil énergétiquement efficaces / Radio Resource Management for Green Wireless Networks
De Mari, Matthieu 01 July 2015 (links)
Dans le cadre de cette thèse, nous nous intéressons plus particulièrement à deux techniques permettant d'améliorer l'efficacité énergétique ou spectrale des réseaux sans fil. Dans la première partie de cette thèse, nous proposons de combiner les capacités de prédiction du contexte futur de transmission au classique et connu tradeoff latence - efficacité énergétique, amenant à ce que l'on nommera un réseau proactif tolérant à la latence. L'objectif dans ce genre de problèmes consiste à définir des politiques de transmission optimales pour un ensemble d'utilisateurs, qui garantissent à chacun de pouvoir accomplir une transmission avant un certain délai, tout en minimisant la puissance totale consommée au niveau de chaque utilisateur. Nous considérons dans un premier temps le problème mono-utilisateur, qui permet alors d'introduire les concepts de tolérance à la latence, d'optimisation et de contrôle de puissance qui sont utilisés dans la première partie de cette thèse. L'extension à un système multi-utilisateurs est ensuite considérée. L'analyse révèle alors que l'optimisation multi-utilisateur pose problème du fait de sa complexité mathématique. Mais cette complexité peut néanmoins être contournée grâce aux récentes avancées dans le domaine de la théorie des jeux à champs moyens, théorie qui permet de transiter d'un jeu multi-utilisateur vers un jeu à champ moyen, à plus faible complexité. Les simulations numériques démontrent que les stratégies de puissance retournées par l'approche jeu à champ moyen approchent notablement les stratégies optimales lorsqu'elles peuvent être calculées, et dépassent les performances des heuristiques communes lorsque l'optimum n'est plus calculable, comme c'est le cas lorsque le canal varie au cours du temps. Dans la seconde partie de cette thèse, nous investiguons un possible problème dual au problème précédent. Plus spécifiquement, nous considérons une approche d'optimisation de l'efficacité spectrale, à configuration de puissance constante. Pour ce faire, nous proposons alors d'étudier l'impact sur le réseau des récentes avancées en classification d'interférence. L'analyse conduite révèle que le système peut bénéficier d'une adaptation des traitements d'interférence faits à chaque récepteur. Ces gains observés peuvent également être améliorés par deux altérations de la démarche d'optimisation. La première propose de redéfinir les groupes d'interféreurs de cellules concurrentes, supposés transmettre sur les mêmes ressources spectrales, l'objectif étant alors de former des paires d'interféreurs "amis", capables de traiter efficacement leurs interférences réciproques. La seconde altération porte le nom de "Virtual Handover" : lorsque la classification d'interférence est considérée, l'access point offrant le meilleur SNR n'est plus nécessairement le meilleur access point auquel assigner un utilisateur. Pour cette raison, il est donc nécessaire de laisser la possibilité au système de pouvoir choisir par lui-même la façon dont il procède aux assignations des utilisateurs. Le processus d'optimisation se décompose donc en trois parties : i) Définir les coalitions d'utilisateurs assignés à chaque access point ; ii) Définir les groupes d'interféreurs transmettant sur chaque ressource spectrale ; et iii) Définir les stratégies de transmission et les traitements d'interférence optimaux. L'objectif de l'optimisation est alors de maximiser l'efficacité spectrale totale du système après traitement de l'interférence.
Les différents algorithmes utilisés pour résoudre, étape par étape, l'optimisation globale du système sont détaillés. Enfin, des simulations numériques permettent de mettre en évidence les gains de performance potentiels offerts par notre démarche d'optimisation. / In this thesis, we investigate two techniques used for enhancing the energy or spectral efficiency of the network. In the first part of the thesis, we propose to combine the network's future context prediction capabilities with the well-known latency vs. energy efficiency tradeoff. In that sense, we consider a proactive delay-tolerant scheduling problem. In this problem, the objective consists of defining the optimal power strategies of a set of competing users, which minimize the individual power consumption while ensuring a complete requested transmission before a given deadline. We first investigate the single-user version of the problem, which serves as a preliminary to the concepts of delay tolerance, proactive scheduling, power control and optimization used throughout the first half of this thesis. We then investigate the extension of the problem to a multiuser context. The conducted analysis of the multiuser optimization problem leads to a non-cooperative dynamic game, which has an inherent mathematical complexity. In order to address this complexity issue, we propose to exploit recent theoretical results from Mean Field Game theory, in order to transition to a more tractable game with lower complexity. The numerical simulations provided demonstrate that the power strategies returned by the Mean Field Game closely approach the optimal power strategies when they can be computed (e.g. in constant-channel scenarios), and outperform the reference heuristics in more complex scenarios where the optimal power strategies cannot be easily computed. In the second half of the thesis, we investigate a dual problem to the previous optimization problem: we seek to optimize the total spectral efficiency of the system in a constant short-term power configuration. To do so, we propose to exploit recent advances in interference classification. The conducted analysis reveals that the system benefits from adapting the interference processing techniques and spectral efficiencies used by each pair of Access Point (AP) and User Equipment (UE). The performance gains offered by interference classification can also be enhanced by considering two improvements. First, we propose to define the optimal groups of interferers: the interferers in the same group transmit over the same spectral resources and thus interfere, but can process interference according to interference classification. Second, we define the concept of 'Virtual Handover': when interference classification is considered, the optimal Access Point for a user is not necessarily the one providing the maximal SNR. For this reason, defining the AP-UE assignments makes sense when interference classification is considered. The optimization process is then threefold: we must define i) the optimal interference processing technique and spectral efficiencies used by each AP-UE pair in the system; ii) the matching of interferers transmitting over the same spectral resources; and iii) the optimal AP-UE assignments.
Matching and interference classification algorithms are extensively detailed in this thesis, and numerical simulations are also provided, demonstrating the performance gain offered by the threefold optimization procedure compared to reference scenarios where interference is either avoided through orthogonalization or exclusively treated as noise.
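A small numeric illustration of the latency versus energy tradeoff that the single-user delay-tolerant problem exploits is given below; it assumes a Shannon-type convex power-rate law and is not the thesis model, but it shows why spreading a fixed payload over the whole deadline minimizes the total energy.

```python
def energy(bits, slots, bandwidth=1.0, noise=1.0):
    """Energy to send `bits` at a constant rate over `slots` time slots,
    assuming a Shannon-type power-rate law p(r) = noise * (2**(r / bandwidth) - 1)."""
    rate = bits / slots
    power = noise * (2 ** (rate / bandwidth) - 1)
    return slots * power

BITS, DEADLINE = 20.0, 10
for used_slots in (1, 2, 5, 10):
    print(used_slots, round(energy(BITS, used_slots), 1))
# Using all 10 slots available before the deadline is by far the cheapest:
# 1 slot ~ 1048575.0, 2 slots ~ 2046.0, 5 slots ~ 75.0, 10 slots ~ 30.0
```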
537
Représentation de signaux robuste aux bruits - Application à la détection et l'identification des signaux d'alarme / Signal representation robust to noise - Application to the detection and identification of alarm signals
El jili, Fatimetou 17 December 2018 (links)
Ces travaux ont pour application la détection et l'identification des signaux audio et particulièrement des signaux d'alarmes de voitures prioritaires. Dans un premier temps, nous proposons une méthode de détection des signaux d'alarme dans un environnement bruité, fondée sur des techniques d'analyse temps-fréquence des signaux. Cette méthode permet de détecter et d'identifier des signaux d'alarmes noyés dans du bruit, y compris pour des rapports signal à bruit négatifs. Puis nous proposons une quantification des signaux robuste aux bruits de transmission. Il s'agit de remplacer chaque niveau de bit d'un vecteur d'échantillons temporels ou fréquentiels par un mot binaire de même longueur fourni par un codeur correcteur d'erreur. Dans une première approche, chaque niveau de bits est quantifié indépendamment des autres selon le critère de minimisation de la distance de Hamming. Dans une seconde approche, pour réduire l'erreur de quantification à robustesse égale, les différents niveaux de bits sont quantifiés successivement selon un algorithme de type matching pursuit. Cette quantification donne aux signaux une forme spécifique permettant par la suite de les reconnaitre facilement parmi d'autres signaux. Nous proposons donc enfin deux méthodes de détection et d'identification des signaux fondées sur la quantification robuste, opérant dans le domaine temporel ou dans le domaine fréquentiel, par minimisation de la distance entre les signaux reçus restreints à leurs bits de poids fort et les signaux de référence. Ces méthodes permettent de détecter et d'identifier les signaux dans des environnements à rapport signal à bruit très faible et ceci grâce à la quantification. Par ailleurs, la première méthode, fondée sur la signature temps-fréquence, s'avère plus performante avec les signaux quantifiés. / This work targets the detection and identification of audio signals, and in particular alarm signals from priority vehicles. First, we propose a method for detecting alarm signals in a noisy environment, based on time-frequency signal analysis. This method makes it possible to detect and identify alarm signals embedded in noise, even with negative signal-to-noise ratios. Then we propose a signal quantization robust against transmission noise. This involves replacing each bit level of a vector of time or frequency samples with a binary word of the same length provided by an error-correcting encoder. In a first approach, each bit level is quantized independently of the others according to the Hamming distance minimization criterion. In a second approach, to reduce the quantization error at equal robustness, the different bit levels are quantized successively by a matching pursuit algorithm. This quantization gives the signals a specific shape that allows them to be easily recognized among other signals. Finally, we propose two methods for detecting and identifying signals based on robust quantization, operating in the time domain or in the frequency domain, by minimizing the distance between the received signals restricted to their most significant bits and the reference signals. These methods make it possible to detect and identify signals in environments with very low signal-to-noise ratios, thanks to quantization. In addition, the first method, based on the time-frequency signature, is more efficient with quantized signals.
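To make the first quantization approach concrete, the sketch below replaces each bit level of a block of samples by the closest codeword (in Hamming distance) of a small binary linear code; the generator matrix, block length and bit depth are illustrative assumptions, not the code actually used in this work.

```python
import itertools
import numpy as np

# Toy generator matrix of a small binary linear code (illustrative only).
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
CODEWORDS = np.array([(np.array(m) @ G) % 2
                      for m in itertools.product((0, 1), repeat=G.shape[0])])

def robust_quantize(samples, n_bits=8):
    """Replace each bit level of a block of samples by its nearest codeword
    (Hamming distance), as in the first approach described above.
    `samples` must contain exactly G.shape[1] integers in [0, 2**n_bits)."""
    bits = (samples[:, None] >> np.arange(n_bits)[::-1]) & 1   # (block, n_bits), MSB first
    for level in range(n_bits):
        plane = bits[:, level]                                 # one bit per sample
        dists = np.count_nonzero(CODEWORDS != plane, axis=1)   # Hamming distances
        bits[:, level] = CODEWORDS[np.argmin(dists)]
    weights = 1 << np.arange(n_bits)[::-1]
    return bits @ weights

block = np.array([200, 198, 201, 64, 66, 65, 130, 131])
print(robust_quantize(block))   # each bit plane snapped onto the code
```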
538
Proposta de redução da dose de radiação na mamografia digital utilizando novos algoritmos de filtragem de ruído Poisson / Proposal of radiation dose reduction in digital mammography using new algorithms for Poisson noise filtering
Oliveira, Helder Cesar Rodrigues de 19 February 2016 (links)
O objetivo deste trabalho é apresentar um novo método para a remoção do ruído Poisson em imagens de mamografia digital adquiridas com baixa dosagem de radiação. Sabe-se que a mamografia por raios X é o exame mais eficiente para a detecção precoce do câncer de mama, aumentando consideravelmente as chances de cura da doença. No entanto, a radiação absorvida pela paciente durante o exame ainda é um problema a ser tratado. Estudos indicam que a exposição à radiação pode induzir a formação do câncer em algumas mulheres radiografadas. Apesar desse número ser significativamente baixo em relação ao número de mulheres que são salvas pelo exame, existe a necessidade do desenvolvimento de meios que viabilizem a diminuição da dose de radiação empregada. No entanto, uma redução na dose de radiação piora a qualidade da imagem pela diminuição da relação sinal-ruído, prejudicando o diagnóstico médico e a detecção precoce da doença. Nesse sentido, a proposta deste trabalho é apresentar um método para a filtragem do ruído Poisson que é adicionado às imagens mamográficas quando adquiridas com baixa dosagem de radiação, fazendo com que elas apresentem qualidade equivalente à das imagens adquiridas com a dose padrão de radiação. O algoritmo proposto foi desenvolvido baseado em adaptações de algoritmos bem estabelecidos na literatura, como a filtragem no domínio Wavelet, aqui usando o Shrink-thresholding (WTST), e o Block-matching and 3D Filtering (BM3D). Os resultados obtidos com imagens mamográficas adquiridas com phantom e também com imagens clínicas mostraram que o método proposto é capaz de filtrar o ruído adicional incorporado nas imagens sem perda aparente de informação. / The aim of this work is to present a novel method for removing the Poisson noise in digital mammography images acquired with reduced radiation dose. It is known that X-ray mammography is the most effective exam for the early detection of breast cancer, greatly increasing the chances of curing the disease. However, the radiation absorbed by the patient during the exam is still a problem to be treated. Some studies showed that mammography can induce breast cancer in a few women. Although this number is significantly low compared to the number of women who are saved by the exam, it is important to develop methods that enable a reduction of the radiation dose used in the exam. However, dose reduction leads to a decrease in image quality through a lower signal-to-noise ratio, impairing medical diagnosis and the early detection of the disease. In this sense, the purpose of this study is to propose a new method to reduce Poisson noise in mammographic images acquired with low radiation dose, in order to achieve the same quality as those acquired with the standard dose. The method is based on well-established algorithms in the literature, such as filtering in the wavelet domain, here using shrink-thresholding (WTST), and Block-Matching and 3D Filtering (BM3D). Results using phantom and clinical images showed that the proposed algorithm is capable of filtering the additional noise in images without apparent loss of information.
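The wavelet shrink-thresholding building block can be sketched as follows, here combined with an Anscombe transform so that the Poisson noise becomes approximately Gaussian with unit variance; this is a generic illustration (using the PyWavelets package), not the exact WTST/BM3D pipeline proposed in the thesis, and the wavelet, decomposition level and threshold are placeholder choices.

```python
import numpy as np
import pywt

def anscombe(x):
    """Variance-stabilising transform: Poisson noise -> ~unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an unbiased inverse would differ slightly)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def wavelet_poisson_denoise(image, wavelet="db4", level=3, k=3.0):
    """Denoise a low-dose (Poisson-noisy) image by soft-thresholding its
    wavelet detail coefficients in the Anscombe domain."""
    y = anscombe(image.astype(float))
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    thresholded = [approx]
    for cH, cV, cD in details:
        thresholded.append(tuple(pywt.threshold(c, k, mode="soft") for c in (cH, cV, cD)))
    return inverse_anscombe(pywt.waverec2(thresholded, wavelet))

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(20, 200, 256), (256, 1))   # synthetic ramp, phantom-like
noisy = rng.poisson(clean).astype(float)
denoised = wavelet_poisson_denoise(noisy)
```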
539
Instrumentos públicos de incentivo às exportações e desempenho de estreantes no mercado internacional / Public export promotion and new exporters performance: evidence from Brazilian manufacturing firms
Alvarez, Rodrigo Baggi Prieto 14 June 2013 (links)
A compreensão da dinâmica de persistência e evasão da atividade exportadora é fundamental para o desenho de incentivos adequados às firmas estreantes no mercado internacional e encontra respaldo nos modelos da nova teoria do comércio internacional. O propósito deste trabalho é investigar os determinantes do desempenho de firmas industriais brasileiras estreantes no mercado internacional, em termos de probabilidade de sobrevivência e evolução do valor exportado, com especial atenção aos impactos dos instrumentos de apoio Drawback, BNDES Exim e Proex. Para tanto, serão analisadas empresas estreantes no comércio exterior entre os anos de 1998 e 2003, configurando um painel desbalanceado com 8,5 mil firmas. Por meio de análise econométrica para dados em painel e estimação de modelos de duração, verificou-se que a função de sobrevivência e o crescimento do valor médio exportado no tempo diferem claramente entre firmas que utilizam um dos benefícios e aquelas que não o fazem. Também se pode afirmar que os custos irreversíveis de entrada no comércio internacional não são desprezíveis entre as firmas industriais analisadas, o que indica que os programas públicos de promoção de exportações devam se concentrar na (i) redução da taxa de abandono das recém-exportadoras e (ii) na minimização dos custos fixos associados aos investimentos para entrada no mercado exportador. / Understanding the dynamics of persistence in and exit from export activity is essential for the design of export promotion schemes for new exporters and of international trade policies. Several results point to the importance of taking the specific industry sector into account when implementing public policies, which is supported by the new models of international trade theory. The purpose of this work is to investigate the determinants of the performance of Brazilian industrial new exporters, with particular attention to the impacts of Drawback, BNDES Exim and Proex. For this, we analyzed firms that started exporting between 1998 and 2003, constituting an unbalanced panel of 8,500 firms. Through panel data analysis and the estimation of duration models, we found that the survival function and the growth of exports clearly differ between companies that use one of the programs and those that do not. One can also say that sunk costs are not negligible among the industrial firms analyzed, which indicates that public export promotion should focus on (i) reducing the dropout rate of new exporters and (ii) minimizing the fixed sunk costs associated with the initial investments required to begin exporting.
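To illustrate the duration-analysis side of the methodology, here is a hand-rolled Kaplan-Meier sketch on invented export spells; the figures are synthetic, whereas the thesis estimates duration models on the actual unbalanced panel of roughly 8,500 firms.

```python
from collections import Counter

def kaplan_meier(durations, exited):
    """Kaplan-Meier survival curve.

    durations : years each firm was observed exporting
    exited    : 1 if the firm stopped exporting (event), 0 if censored
    Returns a list of (time, survival probability) points.
    """
    events = Counter(t for t, e in zip(durations, exited) if e)
    at_risk = len(durations)
    surv, curve = 1.0, []
    for t in sorted(set(durations)):
        d = events.get(t, 0)
        surv *= 1.0 - d / at_risk                  # standard product-limit update
        curve.append((t, surv))
        at_risk -= sum(1 for u in durations if u == t)
    return curve

# Synthetic spells: beneficiary firms (e.g. Drawback users) vs. the rest.
beneficiaries = kaplan_meier([1, 2, 3, 4, 5, 6, 6], [1, 0, 1, 0, 1, 0, 0])
others        = kaplan_meier([1, 1, 2, 2, 3, 4, 5], [1, 1, 1, 0, 1, 1, 0])
print(beneficiaries)
print(others)
```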
540
Rotulação de símbolos matemáticos manuscritos via casamento de expressões / Labeling of Handwritten Mathematical Symbols via Expression Matching
Honda, Willian Yukio 23 January 2013 (links)
O problema de reconhecimento de expressões matemáticas manuscritas envolve três subproblemas importantes: segmentação de símbolos, reconhecimento de símbolos e análise estrutural de expressões. Para avaliar métodos e técnicas de reconhecimento, eles precisam ser testados sobre conjuntos de amostras representativos do domínio de aplicação. Uma das preocupações que tem sido apontada ultimamente é a quase inexistência de bases de dados públicas de expressões matemáticas, o que dificulta o desenvolvimento e a comparação de diferentes abordagens. Em geral, os resultados de reconhecimento apresentados na literatura restringem-se a conjuntos de dados pequenos, não disponíveis publicamente, e muitas vezes formados por dados que visam avaliar apenas alguns aspectos específicos do reconhecimento. No caso de expressões online, para treinar e testar reconhecedores de símbolos, as amostras são em geral obtidas solicitando-se que as pessoas escrevam uma série de símbolos individualmente e repetidas vezes. Tal tarefa é monótona e cansativa. Uma abordagem alternativa para obter amostras de símbolos seria solicitar aos usuários a transcrição de expressões modelo previamente definidas. Dessa forma, a escrita dos símbolos seria realizada de forma natural, menos monótona, e várias amostras de símbolos poderiam ser obtidas de uma única expressão. Para evitar o trabalho de anotar manualmente cada símbolo das expressões transcritas, este trabalho propõe um método para casamento de expressões matemáticas manuscritas, no qual símbolos de uma expressão transcrita por um usuário são associados aos correspondentes símbolos (previamente identificados) da expressão modelo. O método proposto é baseado em uma formulação que reduz o problema a um problema de associação simples, no qual os custos são definidos em termos de características dos símbolos e estrutura da expressão. Resultados experimentais utilizando o método proposto mostram taxas médias de associação correta superiores a 99%. / The problem of recognizing handwritten mathematical expressions includes three important subproblems: symbol segmentation, symbol recognition, and structural analysis of expressions. In order to evaluate recognition methods and techniques, they should be tested on sample sets that are representative of the application domain. One of the concerns repeatedly raised lately is the near non-existence of public, representative datasets of mathematical expressions, which hinders the development and comparison of different approaches. In general, recognition results reported in the literature are restricted to small datasets, not publicly available, and often consisting of data aimed only at evaluating some specific aspects of recognition. In the case of online expressions, to train and test symbol recognizers, samples are in general obtained by asking users to write a series of symbols individually and repeatedly. Such a task is boring and tiring. An alternative approach for obtaining symbol samples would be to ask users to transcribe previously defined model expressions. By doing so, writing would be more natural and less boring, and several symbol samples could be obtained from a single transcription. To avoid the task of manually labeling the symbols of the transcribed expressions, this work proposes a method for handwritten expression matching, in which symbols of a transcribed expression are assigned to the corresponding (previously identified) symbols of the model expression.
The proposed method is based on a formulation that reduces the matching problem to a linear assignment problem, where costs are defined based on symbol features and expression structure. Experimental results using the proposed method show that a mean correct assignment rate above 99% is achieved.
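A minimal sketch of the underlying linear assignment step is shown below, using scipy's Hungarian-algorithm solver and toy positional features; the real cost function of the method also encodes expression structure, and the symbols and coordinates here are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_symbols(transcribed, model):
    """Assign each transcribed symbol to a symbol of the model expression.

    Both arguments are lists of (feature_vector, label) pairs; labels of the
    transcribed symbols are unknown (None). Features here are toy normalised
    positions. Returns (transcribed index, model index, inherited label) tuples.
    """
    T = np.array([f for f, _ in transcribed], dtype=float)
    M = np.array([f for f, _ in model], dtype=float)
    cost = np.linalg.norm(T[:, None, :] - M[None, :, :], axis=2)   # pairwise feature distance
    rows, cols = linear_sum_assignment(cost)                       # optimal 1-to-1 assignment
    return [(i, j, model[j][1]) for i, j in zip(rows, cols)]

# Toy model expression "a + b" and a user transcription of it.
model = [((0.1, 0.5), "a"), ((0.5, 0.5), "+"), ((0.9, 0.5), "b")]
transcribed = [((0.88, 0.47), None), ((0.12, 0.55), None), ((0.52, 0.5), None)]
print(label_symbols(transcribed, model))
# -> each handwritten symbol inherits the label of its matched model symbol
```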