11 |
Complexity issues in counting, polynomial evaluation and zero finding / Complexité de problèmes de comptage, d'évaluation et de recherche de racines de polynômes. Briquel, Irénée, 29 November 2011 (has links)
Dans cette thèse, nous cherchons à comparer la complexité booléenne classique et la complexité algébrique, en étudiant des problèmes sur les polynômes. Nous considérons les modèles de calcul algébriques de Valiant et de Blum, Shub et Smale (BSS). Pour étudier les classes de complexité algébriques, il est naturel de partir des résultats et des questions ouvertes dans le cas booléen, et de regarder ce qu'il en est dans le contexte algébrique. La comparaison des résultats obtenus dans ces deux domaines permet ainsi d'enrichir notre compréhension des deux théories. La première partie suit cette approche. En considérant un polynôme canoniquement associé à toute formule booléenne, nous obtenons un lien entre les questions de complexité booléenne sur la formule booléenne et les questions de complexité algébrique sur le polynôme. Nous avons étudié la complexité du calcul de ce polynôme dans le modèle de Valiant en fonction de la complexité de la formule booléenne, et avons obtenu des analogues algébriques à certains résultats booléens. Nous avons aussi pu utiliser des méthodes algébriques pour améliorer certains résultats booléens, en particulier de meilleures réductions de comptage. Une autre motivation aux modèles de calcul algébriques est d'offrir un cadre pour l'analyse d'algorithmes continus. La seconde partie suit cette approche. Nous sommes partis d'algorithmes nouveaux pour la recherche de zéros approchés d'un système de n polynômes complexes à n inconnues. Jusqu'à présent il s'agissait d'algorithmes pour le modèle BSS. Nous avons étudié l'implémentabilité de ces algorithmes sur un ordinateur booléen et proposons un algorithme booléen. / In the present thesis, we try to compare the classical boolean complexity with the algebraic complexity, by studying problems related to polynomials. We consider the algebraic models from Valiant and from Blum, Shub and Smale (BSS). To study the algebraic complexity classes, one can start from results and open questions from the boolean case, and look at their translation in the algebraic context. The comparison of the results obtained in the two settings thus enriches our understanding of both complexity theories. The first part follows this framework. By considering a polynomial canonically associated to a boolean formula, we get a link between boolean complexity issues on the formula and algebraic complexity problems on the polynomial. We studied the complexity of computing the polynomial in Valiant's model, as a function of the complexity of the boolean formula, and found algebraic counterparts to some boolean results. Along the way, we could also use algebraic methods to improve boolean results, in particular by getting better counting reductions. Another motivation for algebraic models of computation is to offer an elegant framework for the study of numerical algorithms. The second part of this thesis follows this approach. We started from new algorithms for the search of approximate zeros of complex systems of n polynomials in n variables. Up to now, these were BSS machine algorithms. We studied the implementation of these algorithms on digital computers, and propose an algorithm using floating-point arithmetic for this problem.
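The abstract does not spell out which canonical polynomial it attaches to a boolean formula, but a standard arithmetization gives the flavor of the counting/evaluation link: map ¬f to 1−f, f∧g to f·g, and f∨g to f+g−f·g, so that the polynomial agrees with the formula on {0,1}^n and its sum over all boolean assignments counts the satisfying ones. The sketch below (Python with sympy, with an invented formula encoding) is an assumption about this construction, not the thesis's exact definition.

```python
# A minimal sketch (not necessarily the construction used in the thesis): arithmetize a
# boolean formula into a polynomial that agrees with it on {0,1}^n, so that summing the
# polynomial over all boolean assignments counts satisfying assignments (#SAT).
import itertools
import sympy as sp

def arithmetize(node):
    """node is ('var', symbol) | ('not', f) | ('and', f, g) | ('or', f, g)."""
    kind = node[0]
    if kind == 'var':
        return node[1]
    if kind == 'not':
        return 1 - arithmetize(node[1])
    if kind == 'and':
        return sp.expand(arithmetize(node[1]) * arithmetize(node[2]))
    if kind == 'or':
        f, g = arithmetize(node[1]), arithmetize(node[2])
        return sp.expand(f + g - f * g)
    raise ValueError(kind)

x, y, z = sp.symbols('x y z')
formula = ('and', ('or', ('var', x), ('var', y)), ('not', ('var', z)))  # (x or y) and not z
poly = arithmetize(formula)
count = sum(poly.subs({x: a, y: b, z: c})
            for a, b, c in itertools.product((0, 1), repeat=3))
print(poly, count)  # counting satisfying assignments reduces to summing the polynomial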
|
12 |
Разработка распределенной системы управления коммуникациями с клиентами : магистерская диссертация / Development of a distributed system for managing communications with clients. Ткачук, Д. В. (Tkachuk, D. V.), January 2023 (has links)
Цель работы – разработка распределённой системы управления коммуникациями с клиентами под нужды компании АО «Эр-Телеком Холдинг». Объект исследования – класс информационных систем для уведомления пользователей. Методы исследования: анализ, систематизация и обобщение данных о предыдущей версии системы нотификации клиентов, сравнение и анализ инструментов, технологий и принципов построения информационных систем. Результаты работы: разработана система массовой нотификации клиентов. Выпускная квалификационная работа выполнена в текстовом редакторе Microsoft Word и представлена в твёрдой копии. / The goal of the work is to develop a distributed system for managing communications with clients for the needs of ER-Telecom Holding JSC. The object of study is a class of information systems for notifying users. Research methods: analysis, systematization and generalization of data on the previous version of the customer notification system, and comparison and analysis of tools, technologies and principles for building information systems. Results of the work: a system for mass notification of clients has been developed. The final qualifying work was prepared in the Microsoft Word text editor and submitted in hard copy.
|
13 |
Mobility management and mobile server dispatching in fixed-to-mobile and mobile-to-mobile edge computing. Wang, Jingrong, 12 August 2019 (has links)
Mobile edge computing (MEC) has been considered as a promising technology to handle computation-intensive and latency-sensitive tasks for mobile user equipments (UEs) in next-generation mobile networks. Mobile UEs can offload these tasks to nearby edge servers, which are typically deployed on base stations (BSs) that are equipped with computation resources. Thus, the task execution latency as well as the energy consumption of mobile devices can be reduced.
Mobility management, which associates UEs with the appropriate BSs, plays a fundamental role in MEC. In the existing handover decision-making process, communication costs dominate; in the edge computing scenario, however, computation capacity constraints should also be considered. Due to user mobility, mobile UEs are nonuniformly distributed over time and space: edge servers in hot-spot areas can be overloaded while others are underloaded. When edge servers are densely deployed, each UE may have multiple choices for offloading its tasks. In contrast, if edge servers are sparsely deployed, UEs may have only one option for task offloading, which aggravates the unbalanced workload of the deployed edge servers. Therefore, how to serve dynamic hot-spot areas needs to be addressed in different edge server deployment scenarios.
Considering these two scenarios, two problems are addressed in this thesis: 1) with densely deployed edge servers, how each mobile UE can independently choose the appropriate edge server without full system information, and 2) with sparsely deployed edge servers, how to serve dynamic hot-spot areas in an efficient and flexible way. First, with BSs densely deployed in hot-spot areas, mobile UEs can offload their tasks to one of the available edge servers nearby. However, precise full system information such as server workload can be hard to synchronize in real time, and sharing it introduces extra signaling overhead for mobility management decision-making. Thus, a user-centric reinforcement-learning-based mobility management scheme is proposed to handle system uncertainties: each UE observes the task latency and automatically learns the optimal mobility management strategy through trial and feedback. Simulation results show that the proposed scheme is superior in dealing with system uncertainties; compared with the traditional received signal strength (RSS)-based handover scheme, it reduces the task execution latency by about 30%.
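The thesis's learning scheme is not specified here beyond "trial and feedback", so the following is only a toy epsilon-greedy bandit sketch of a user-centric selector that learns per-server latency from its own observations; the server names and latency numbers are invented for illustration.

```python
# A minimal epsilon-greedy sketch of the general idea (not the thesis's exact scheme):
# each UE treats nearby edge servers as bandit arms and learns from observed task latency.
import random

class UserCentricSelector:
    def __init__(self, server_ids, epsilon=0.1):
        self.q = {s: 0.0 for s in server_ids}   # running estimate of latency per server
        self.n = {s: 0 for s in server_ids}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:       # explore occasionally
            return random.choice(list(self.q))
        return min(self.q, key=self.q.get)       # exploit: lowest estimated latency

    def feedback(self, server, latency):
        self.n[server] += 1
        self.q[server] += (latency - self.q[server]) / self.n[server]  # incremental mean

# usage: the observed latency reflects the (unknown) channel and server-load conditions
selector = UserCentricSelector(['bs1', 'bs2', 'bs3'])
for _ in range(1000):
    s = selector.choose()
    observed_latency = {'bs1': 40, 'bs2': 25, 'bs3': 60}[s] + random.gauss(0, 5)
    selector.feedback(s, observed_latency)
print(selector.q)  # estimates converge toward the per-server mean latencies
```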
Second, fixed edge servers that are sparsely deployed around mobile UEs are not flexible enough to deal with time-varying task offloading. Dispatching mobile servers is formulated as a variable-sized bin-packing problem with geographic constraints, and a novel online unmanned aerial vehicle (UAV)-mounted edge server dispatching scheme is proposed to provide flexible mobile-to-mobile edge computing services. UAVs are dispatched to appropriate hover locations by identifying the hot-spot areas sequentially, and a theoretical analysis provides a worst-case performance guarantee. Extensive evaluations driven by real-world mobile requests show that, for a given task finish time, the mobile dispatching scheme serves 59% more users on average than the fixed deployment, and server utilization reaches 98% during the daytime with intensive task requests. Utilizing both fixed and mobile edge servers satisfies even more UE demands with fewer dispatched UAVs and better server utilization.
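As a rough illustration of sequential hot-spot identification (an assumed greedy stand-in, not the thesis's online bin-packing algorithm with its worst-case guarantee), a UAV can repeatedly be placed at the candidate hover location that covers the most unserved requests within its radius, up to its computing capacity. All coordinates and parameters below are made up.

```python
# A greedy stand-in for sequential hot-spot identification: place each UAV where it
# covers the most unserved requests within its coverage radius, up to its capacity.
import math

def dispatch_uavs(requests, candidate_spots, radius, capacity, num_uavs):
    """requests and candidate_spots are lists of (x, y) points."""
    unserved = list(requests)
    plan = []
    for _ in range(num_uavs):
        best_spot, best_cover = None, []
        for spot in candidate_spots:
            cover = [r for r in unserved if math.dist(spot, r) <= radius][:capacity]
            if len(cover) > len(best_cover):
                best_spot, best_cover = spot, cover
        if not best_cover:
            break                                   # no remaining hot spot worth serving
        plan.append((best_spot, best_cover))
        unserved = [r for r in unserved if r not in best_cover]
    return plan, unserved

plan, leftover = dispatch_uavs(
    requests=[(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)],
    candidate_spots=[(0, 0), (5, 5)], radius=2.0, capacity=3, num_uavs=2)
print([(spot, len(users)) for spot, users in plan], len(leftover))
```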
To sum up, not only the communication conditions but also the computation limitations have an impact on edge server selection and mobility management in MEC. Moreover, dispatching mobile edge servers can be an effective and flexible way to supplement the fixed servers and deal with dynamic offloading requests. / Graduate
|
14 |
Décompositions parcimonieuses pour l'analyse avancée de données en spectrométrie pour la Santé / Sparse decompositions for advanced data analysis of hyperspectral data in biological applications. Rapin, Jérémy, 19 December 2014 (has links)
La séparation de sources en aveugle (SSA) vise à rechercher des signaux sources inconnus et mélangés de manière inconnue au sein de plusieurs observations. Cette approche très générique et non-supervisée ne fournit cependant pas nécessairement des résultats exploitables. Il est alors nécessaire d'ajouter des contraintes, notamment physiques, afin de privilégier la recherche de sources ayant une structure particulière. La factorisation en matrices positives (non-negative matrix factorization, NMF), qui fait plus précisément l'objet de cette thèse, recherche ainsi des sources positives observées au travers de mélanges linéaires positifs. L'ajout de davantage d'information reste cependant souvent nécessaire afin de pouvoir séparer les sources. Nous nous intéressons ainsi au concept de parcimonie qui permet d'améliorer le contraste entre celles-ci tout en produisant des approches très robustes, en particulier au bruit. Nous montrons qu'afin d'obtenir des solutions stables, les contraintes de positivité et la régularisation parcimonieuse doivent être appliquées de manière adéquate. Aussi, l'utilisation de la parcimonie dans un espace transformé potentiellement redondant, permettant de capturer la structure de la plupart des signaux naturels, se révèle difficile à appliquer aux côtés de la contrainte de positivité dans l'espace direct. Nous proposons ainsi un nouvel algorithme de NMF parcimonieuse, appelé nGMCA (non-negative Generalized Morphological Component Analysis), qui surmonte ces difficultés via l'utilisation de techniques de calcul proximal. Des expérimentations sur des données simulées montrent que cet algorithme est robuste à une contamination par du bruit additif Gaussien, à l'aide d'une gestion automatique du paramètre de parcimonie. Des comparaisons avec des algorithmes de l'état-de-l'art en NMF sur des données réalistes montrent l'efficacité ainsi que la robustesse de l'approche proposée. Finalement, nous appliquons nGMCA sur des données de chromatographie en phase liquide - spectrométrie de masse (liquid chromatography - mass spectrometry, LC-MS). L'observation de ces données montre qu'elles sont contaminées par du bruit multiplicatif, lequel détériore grandement les résultats des algorithmes de NMF. Une extension de nGMCA conçue pour prendre en compte ce type de bruit à l'aide d'un a priori non-stationnaire permet alors d'obtenir d'excellents résultats sur des données réelles annotées. / Blind source separation aims at extracting unknown source signals from observations where these sources are mixed together by an unknown process. However, this very generic and unsupervised approach does not always provide exploitable results. Therefore, it is often necessary to add more constraints, generally arising from physical considerations, in order to favor the recovery of sources with a particular sought-after structure. Non-negative matrix factorization (NMF), which is the main focus of this thesis, aims at searching for non-negative sources which are observed through non-negative linear mixtures. In some cases, further information still remains necessary in order to correctly separate the sources. Here, we focus on the sparsity concept, which helps improve the contrast between the sources while providing very robust approaches, even when the data are contaminated by noise. We show that in order to obtain stable solutions, the non-negativity and sparsity constraints must be applied adequately.
In addition, using sparsity in a potentially redundant transformed domain can capture the structure of most natural signals, but this kind of regularization proves difficult to apply together with the non-negativity constraint in the direct domain. We therefore propose a sparse NMF algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), which overcomes these difficulties by making use of proximal calculus techniques. Experiments on simulated data show that this algorithm is robust to additive Gaussian noise contamination, with automatic control of the sparsity parameter. The algorithm also proves more efficient and robust than other state-of-the-art NMF algorithms on realistic data. Finally, we apply nGMCA to liquid chromatography - mass spectrometry (LC-MS) data. Observation of these data shows that they are contaminated by multiplicative noise, which greatly deteriorates the results of NMF algorithms. An extension of nGMCA designed to take this type of noise into account, through the use of a non-stationary prior, obtains excellent results on annotated real data.
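nGMCA itself is not reproduced here, but the toy sketch below illustrates the kind of proximal update it builds on: alternating gradient steps on the two factors, where the sources receive a non-negative soft-thresholding (the proximal operator of the l1 penalty plus non-negativity) and the mixing matrix a plain non-negative projection. The dimensions, step sizes, and fixed sparsity parameter are placeholders; the published algorithm manages the threshold automatically and covers the transformed-domain and multiplicative-noise variants discussed above.

```python
# A simplified illustration of the ingredients nGMCA combines (not the published
# algorithm): projected/proximal gradient steps for
#   min_{A,S >= 0}  0.5 * ||Y - A S||_F^2 + lam * ||S||_1
import numpy as np

def prox_nonneg_l1(S, thresh):
    """Proximal operator of lam*||.||_1 plus non-negativity: non-negative soft-thresholding."""
    return np.maximum(S - thresh, 0.0)

def sparse_nmf(Y, rank, lam=0.1, iters=200, rng=np.random.default_rng(0)):
    A = np.abs(rng.standard_normal((Y.shape[0], rank)))
    S = np.abs(rng.standard_normal((rank, Y.shape[1])))
    for _ in range(iters):
        Ls = np.linalg.norm(A.T @ A, 2) + 1e-12            # Lipschitz constant of the data term in S
        S = prox_nonneg_l1(S - (A.T @ (A @ S - Y)) / Ls, lam / Ls)
        La = np.linalg.norm(S @ S.T, 2) + 1e-12            # Lipschitz constant of the data term in A
        A = np.maximum(A - ((A @ S - Y) @ S.T) / La, 0.0)  # non-negative projection only
    return A, S

Y = np.abs(np.random.default_rng(1).standard_normal((64, 200)))
A, S = sparse_nmf(Y, rank=5)
print(np.linalg.norm(Y - A @ S) / np.linalg.norm(Y))       # relative reconstruction error
```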
|
15 |
Complexité de problèmes de comptage, d'évaluation et de recherche de racines de polynômes. Briquel, Irénée, 29 November 2011 (has links) (PDF)
In this thesis, we compare the classical boolean complexity with the algebraic complexity by studying problems related to polynomials. We consider the algebraic models of computation of Valiant and of Blum, Shub and Smale (BSS). To study the algebraic complexity classes, it is natural to start from results and open questions in the boolean case and to look at their translation in the algebraic context. The comparison of the results obtained in the two settings thus enriches our understanding of both complexity theories. The first part follows this approach. By considering a polynomial canonically associated with any boolean formula, we obtain a link between boolean complexity questions on the formula and algebraic complexity questions on the polynomial. We studied the complexity of computing this polynomial in Valiant's model as a function of the complexity of the boolean formula, and obtained algebraic counterparts to some boolean results. We were also able to use algebraic methods to improve boolean results, in particular by getting better counting reductions. Another motivation for algebraic models of computation is to offer a framework for the analysis of continuous algorithms. The second part follows this approach. We started from new algorithms for the search of approximate zeros of a system of n complex polynomials in n unknowns. Up to now, these were algorithms for the BSS model. We studied the implementability of these algorithms on a boolean computer and propose a boolean algorithm.
|
16 |
Characterization of modified neutron fields with americium-beryllium and californium-252 sources. Exline, Peter Riley, 23 May 2011 (has links)
There are a variety of uses for reference neutron fields, including detector response and dosimeter studies. The Georgia Institute of Technology has a 252Cf spontaneous fission source and an AmBe (α, n) source available for use in its research programs. In addition, it has iron, lead, beryllium, tantalum, heavy water, and polyethylene spheres to modify the neutron energy distributions from these neutron sources. This research characterized the neutron leakage spectra from each source inside spherical shells using a Bonner sphere spectrometer. All the measured neutron fields were also computed with a Monte Carlo code to determine the neutron fluence rate and ambient dose equivalent rate. The comparison of experimental data and calculations is used to provide further insight into the neutron spectra as modified by the spheres. The characterization of these modified sources will provide data to assist in using the resulting neutron fields in other research activities.
To measure each neutron field combination, one of the two sources was placed in the center of an attenuating sphere. The neutron field was first measured at a variety of source-to-detector distances with a Bonner sphere system. The spectrometer measurements, specifically the count rates of the different Bonner spheres as a function of distance from the source, are fitted to obtain corrections for room scatter and air scatter of neutrons using the Eisenhauer, Schwartz, and Johnson method. Using these corrections, the count rates free of room return are obtained at 1 m from the source and unfolded using the BUMS software to obtain the reported fluence and dose equivalent rates.
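As a much-simplified illustration of that fitting step (not the full Eisenhauer, Schwartz, and Johnson formalism, which also treats air scatter and geometry factors), one can fit a 1/r² direct component plus a distance-independent room-return term to the measured count rates; all numbers below are invented.

```python
# A simplified sketch: separate the 1/r^2 direct component from a distance-independent
# room-return term by fitting measured count rates versus source-detector distance.
import numpy as np
from scipy.optimize import curve_fit

def model(d, direct_at_1m, room_return):
    return direct_at_1m / d**2 + room_return             # counts/s at distance d (m)

distances = np.array([0.5, 0.75, 1.0, 1.5, 2.0, 3.0])        # hypothetical measurement points
counts = np.array([4120., 1870., 1065., 495., 300., 160.])   # hypothetical count rates

params, cov = curve_fit(model, distances, counts, p0=[1000.0, 50.0])
direct_at_1m, room_return = params
print(f"direct component at 1 m: {direct_at_1m:.0f} c/s, room return: {room_return:.0f} c/s")
# the room-return-free count rates would then be passed to the unfolding step (e.g., BUMS)
```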
These results are compared to those generated by the Monte Carlo N-Particle (MCNP) code. Models were built in MCNP for each source and moderating-sphere combination, and the neutron fluence and dose rates were tallied during the MCNP simulations. The unfolded experimental data and the MCNP calculations showed good agreement for most of the source-attenuating sphere combinations, thereby reinforcing the experimental results.
|
17 |
Advanced Signal Processing Techniques for Single Trial Electroencephalography Signal Classification for Brain Computer Interface Applications. Li, Kun, 31 December 2010 (has links)
A brain-computer interface (BCI) is a direct communication channel between the brain and a computer. It allows users to control their environment without the need to control muscle activity [1-2]. The P300 Speller is a well-known and widely used BCI system developed by Farwell and Donchin in 1988 [3]. The accuracy of the P300-BCI Speller, measured as the percentage of communicated characters correctly identified by the system, depends on the ability to detect the P300 event-related potential (ERP) component within the ongoing electroencephalography (EEG) signal. Different techniques have been tested to reduce the number of trials that need to be averaged together to allow reliable detection of the P300 response, and some of them have achieved high accuracies in multiple-trial P300 detection. However, the accuracy of single-trial P300 detection still needs to be improved. In this research, two single-trial P300 response classification methods were designed: one based on independent component analysis (ICA) with blind tracking and the other based on variance analysis. The purpose of both methods is to detect a chosen character in real time in the P300-BCI Speller. The experimental results demonstrate that the proposed methods dramatically reduce the signal processing time, improve the data communication rate, and achieve overall accuracies of 79.1% for the ICA-based method and 84.8% for the variance-analysis-based method on the single-trial P300 classification task. Both methods outperformed single-trial stepwise linear discriminant analysis (SWLDA), which has been considered the most accurate and practical technique for the P300-BCI Speller.
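The thesis's variance-analysis classifier is not described in detail in the abstract, so the following is only a guess at the general flavor: score each post-stimulus epoch by its variance in the P300 latency window and, among the six row and six column flashes of the speller matrix, pick the row and column with the highest scores. The sampling rate, window, and synthetic data are placeholders.

```python
# A generic single-trial sketch (an assumption, not the thesis's exact classifier):
# score each flash epoch by post-stimulus variance and pick the best row and column.
import numpy as np

def p300_scores(epochs, fs=240, window=(0.25, 0.45)):
    """epochs: (n_flashes, n_channels, n_samples), each time-locked to one flash."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    seg = epochs[:, :, lo:hi]
    return seg.var(axis=2).mean(axis=1)   # variance in the P300 window, averaged over channels

rng = np.random.default_rng(0)
epochs = rng.standard_normal((12, 8, 200))            # 6 row + 6 column flashes, 8 channels (synthetic)
pulse = 3.0 * np.sin(np.linspace(0.0, np.pi, 48))     # P300-like deflection inside the scoring window
epochs[2, :, 60:108] += pulse                          # attended row flashed as stimulus 2
epochs[9, :, 60:108] += pulse                          # attended column flashed as stimulus 9
scores = p300_scores(epochs)
row, col = scores[:6].argmax(), 6 + scores[6:].argmax()
print(row, col)   # the attended character lies at the intersection of this row and column
```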
|
18 |
Ανάλυση συνιστωσών σήματος σε ηλεκτροεγκεφαλογράφημα / Signal component analysis in electroencephalography. Γκρούμας, Γεώργιος, 13 January 2015 (has links)
Στην παρούσα διπλωματική εργασία γίνεται μελέτη της μεθόδου ανάλυσης ανεξάρτητων συνιστωσών στο ηλεκτροεγκεφαλογράφημα. Αφού εξετάσουμε το κομμάτι της φυσιολογίας του εγκεφάλου θα δοθεί ένα μαθηματικό υπόβαθρο της ανάλυσης ανεξάρτητων συνιστωσών. Στη συνέχεια θα γίνει μια βιβλιογραφική έρευνα στη σύγκριση αλγορίθμων της ανάλυσης ανεξάρτητων συνιστωσών όταν εφαρμόζονται σε ηλεκτροεγκεφαλογραφήματα με στόχο την βέλτιστη εξαγωγή παρασίτων. Στο τέλος θα γίνει εφαρμογή της μεθόδου της ανάλυσης ανεξάρτητων συνιστωσών σε πραγματικά δεδομένα ηλεκτροεγκεφαλογραφήματος 64 καναλιών μέσω του περιβάλλοντος του Matlab. Στόχος της εφαρμογής αυτής είναι ο διαχωρισμός των ανεξάρτητων συνιστωσών μη-εγκεφαλικής προέλευσης και η αφαίρεση τους από τα αρχικά δεδομένα. / This thesis studies the method of independent component analysis applied to the electroencephalogram. After reviewing the physiology of the brain, a mathematical background of independent component analysis is given. A literature review then compares independent component analysis algorithms applied to electroencephalograms, with the goal of optimal artifact extraction. Finally, independent component analysis is applied to real 64-channel EEG data in the Matlab environment. The aim of this application is to separate the independent components of non-brain origin and remove them from the original data.
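The thesis works in Matlab, but the same workflow can be sketched in Python with scikit-learn's FastICA: decompose the multichannel recording into independent components, zero out those judged to be of non-brain origin (in practice identified by visual inspection or by correlation with EOG/EMG reference channels), and project back. The data and the indices of the rejected components below are placeholders.

```python
# A minimal sketch of the ICA artifact-removal workflow described above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 10000))        # stand-in for a 64-channel EEG (channels x samples)

ica = FastICA(n_components=20, random_state=0, max_iter=500)
sources = ica.fit_transform(eeg.T)            # (samples, components)

bad = [0, 3]                                  # indices of components judged to be artifacts
sources_clean = sources.copy()
sources_clean[:, bad] = 0.0                   # remove the non-brain (e.g., ocular/muscle) components

eeg_clean = ica.inverse_transform(sources_clean).T   # back to channels x samples
print(eeg_clean.shape)
```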
|
19 |
PERFORMANCE RESULTS USING DATA QUALITY ENCAPSULATION (DQE) AND BEST SOURCE SELECTION (BSS) IN AERONAUTICAL TELEMETRY ENVIRONMENTS. Geoghegan, Mark; Schumacher, Robert, 10 1900 (has links)
Flight test telemetry environments can be particularly challenging due to RF shadowing, interference, multipath propagation, antenna pattern variations, and large operating ranges. In cases where the link quality is unacceptable, applying multiple receiving assets to a single test article can significantly improve the overall link reliability. The process of combining multiple received streams into a single consolidated stream is called Best Source Selection (BSS). Recent developments in BSS technology include a description of the maximum likelihood detection approach for combining multiple bit sources, and an efficient protocol for providing the real-time data quality metrics necessary for optimal BSS performance. This approach is being standardized and will be included in Appendix 2G of IRIG-106-17. This paper describes the application of this technology and presents performance results obtained during flight testing.
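The maximum-likelihood combining idea can be illustrated with a toy sketch (not the IRIG-106 Appendix 2G specification, whose quality metrics and frame alignment are richer): when each time-aligned stream carries an estimate of its own bit error probability, the ML decision for every bit position is a vote weighted by log((1-p)/p). The streams and error probabilities below are invented.

```python
# A simplified sketch of quality-weighted (maximum-likelihood) bit combining across
# multiple received streams; not the IRIG-106 Appendix 2G protocol itself.
import math

def combine_bits(streams, bit_error_probs):
    """streams: list of equal-length 0/1 lists; bit_error_probs: estimated BER per stream."""
    weights = [math.log((1.0 - p) / p) for p in bit_error_probs]   # log-likelihood-ratio weights
    out = []
    for bits in zip(*streams):
        score = sum(w if b else -w for b, w in zip(bits, weights))
        out.append(1 if score > 0 else 0)
    return out

# three receiving assets with different estimated link quality
streams = [
    [1, 0, 1, 1, 0, 0, 1, 0],     # good link
    [1, 0, 1, 0, 0, 0, 1, 0],     # one bit flipped
    [0, 1, 1, 1, 0, 1, 1, 0],     # poor link
]
print(combine_bits(streams, bit_error_probs=[0.01, 0.05, 0.2]))
```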
|
20 |
Srovnání agentích platforem pro bezdrátové senzorové sítě / Agents in Wireless Sensor Networks. Melo, Jakub, January 2013 (has links)
This thesis deals with agent platforms for wireless sensor networks. Wireless sensor networks, together with the software and hardware tools used to program them, are introduced at the beginning of the thesis. The following chapter is devoted to agents and their possible use in wireless sensor networks. Two agent platforms, Agilla and WSageNt, are presented in the rest of the thesis, and the final part summarizes the main differences between the two platforms.
|