791
Baryonic acoustic oscillations with emission line galaxies at intermediate redshift: the large-scale structure of the universe. Comparat, Johan, 21 June 2013 (has links)
In this PhD, I demonstrate the feasibility of target selection for bright emission line galaxies, and I now understand the main physical mechanisms driving the efficiency of a selection, in particular its relation to the parent photometry.
A puzzling issue remains: I could not yet quantitatively estimate the impact of dust on the selection efficiency. I hope to address this question with the data set described in chapter 4. Apart from emission line galaxy target selection, I investigated, at first order, the two main systematic errors on the determination of the BAO scale that we expect when using emission line galaxies as tracers of matter. First, I showed that the incompleteness in the redshift distribution, due to measuring the redshift with [OII], is related to the instrumental resolution. I find there are two interesting regimes: for an observation of the brightest [OII] emitters, a moderate resolution is sufficient, whereas for a fainter survey, the higher the resolution, the better. Secondly, I estimated the linear galaxy bias of the selections discussed before and find they are highly biased. On the one hand, this is great news for observers, as the time required to observe at a given signal-to-noise in the power spectrum decreases with the square of the bias. On the other hand, it constitutes a new challenge for reconstruction algorithms and the making of mock catalogs. The work in progress described in the last chapter shows I am starting to handle these questions in a robust manner.
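The scaling stated above can be put into a toy sketch (illustrative numbers only, not taken from the thesis): if the observing time needed to reach a fixed signal-to-noise in the power spectrum scales as 1/b², a tracer with bias b = 2 needs a quarter of the time of an unbiased tracer.

```python
def relative_observing_time(bias, reference_bias=1.0):
    """Relative observing time to reach a fixed signal-to-noise in the
    power spectrum, under the stated 1/b^2 scaling (shot noise neglected)."""
    return (reference_bias / bias) ** 2

# A tracer twice as biased needs a quarter of the observing time.
print(relative_observing_time(2.0))  # → 0.25
```

This is why highly biased tracers are attractive to observers, even though, as noted above, they complicate reconstruction and mock-making.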
792
Measuring monthly crime statistics in Germany and the Czech Republic in relation to the European migration crisis. ŠÍPEK, Matěj, January 2019 (has links)
The diploma thesis sets a complex goal: to investigate the dependence of monthly (and, where data permit, annual) crime in selected European countries in relation to the European migration crisis. Germany and the Czech Republic were the selected countries, and monthly and annual crime indexes were used to pursue the goal. Four hypotheses were formulated. Hypothesis 1 (H1): monthly crime indexes in the Czech Republic and Germany do not correlate. Hypothesis 2 (H2): monthly crime indexes in the Czech Republic and Germany have different theoretical distributions. Hypothesis 3 (H3): annual crime indexes in the Czech Republic and Germany do not correlate. Hypothesis 4 (H4): annual crime indexes in the Czech Republic and Germany have different theoretical distributions. The practical part found that the monthly crime indexes in the two countries do not correlate and have different theoretical distributions (hypotheses H1 and H2 were confirmed), while the annual crime indexes correlate strongly and have the same theoretical distributions (hypotheses H3 and H4 were not confirmed). The likely reason for this outcome is the different time periods of the data: monthly crime covered January 2011 to December 2016, whereas monthly crime for Germany had to be derived from annual figures, which, owing to data-availability problems, were based on the years 1987 to 2016. The use of different periods was forced by the poor availability of German data. The methodology of the theoretical part is mainly a review of the available literature: professional literature, valid legislation, and Internet resources were studied.
The methodology of the practical part is based on statistical methods for data processing with tables, graphs, and basic calculations, which make it possible to verify the hypotheses. Basic descriptive methods were applied in order: formulation of the statistical survey, scaling, measurement, and elementary statistical processing. Basic mathematical methods were applied in order: non-parametric testing, linear regression analysis, and linear correlation analysis. Data were collected as follows: a) Czech crime statistics, including monthly figures, were downloaded from the databases of the Czech Police, and population counts for individual years were taken from the Czech Statistical Office; crime indexes were calculated from these data. b) The database of the BKA (Federal Criminal Police Office of Germany) did not publish monthly crime statistics, so the annual crime figures for Germany were taken from an article published by ntv.de, and population counts for individual years were obtained from the Federal Statistical Office of Germany; crime indexes were calculated from these data. The thesis closes by pointing out solutions to possible crisis situations, including problems involving aspects of migration: prevention, with a sensitive link to migration issues, is of great importance in reducing crime, helping in areas associated with the emergence of, for example, migration waves and thus preventing crisis situations. The thesis did not deal with the political aspects of the hypotheses examined. Methodologically, the diploma thesis can be characterized as a report on an applied research solution in which the quantitative dimension prevails.
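The index computation and correlation test described above can be sketched in pure Python (the numbers below are made up for illustration; the thesis used real police data and proper statistical testing):

```python
def crime_index(crimes, population, per=100_000):
    """Crime index: offences per `per` inhabitants, the usual normalization."""
    return crimes / population * per

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Illustrative annual totals (not real data) turned into indexes.
cz = [crime_index(c, 10_500_000) for c in (317_000, 325_000, 310_000, 288_000)]
de = [crime_index(c, 82_000_000) for c in (5_990_000, 6_050_000, 5_930_000, 5_800_000)]
print(round(pearson_r(cz, de), 3))
```

Normalizing both countries to a per-100,000 index is what makes series from populations of very different sizes comparable at all.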
793
Electromagnetic scattering by gratings and random rough surfaces: implementation of high performance algorithms for solving eigenvalue problems and problems with initial conditions. Pan, Cihui, 02 December 2015 (has links)
We study electromagnetic diffraction by gratings and random rough surfaces. The C-method is an exact method developed for this purpose. It is based on Maxwell's equations in covariant form, written in a nonorthogonal coordinate system. The C-method leads to an eigenvalue problem; the diffracted field is expanded as a linear combination of the eigensolutions satisfying the outgoing-wave condition. We focus on the numerical aspect of the C-method, trying to develop an efficient implementation of this exact method. For gratings, we developed a new version of the C-method that leads to a differential system with initial conditions; this new version can be used to study multilayer gratings with a homogeneous medium. We also implemented high performance algorithms for the original versions of the C-method.
In particular, we developed a parallel QR algorithm designed specifically for the C-method, a variant of the QR algorithm based on three techniques (fast shifts, parallel bulge chasing, and parallel aggressive early deflation, AED), together with a spectral projection method, to solve the eigenvalue problem more efficiently. Experiments show that the computation time can be reduced significantly.
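The QR algorithm at the heart of this work can be illustrated in its most basic, unshifted form (no bulge chasing or AED, and pure Python rather than a parallel implementation; this is only a sketch of the underlying iteration):

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR factorization of a square matrix given
    as a list of rows: A = Q R with Q orthonormal, R upper triangular."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]  # j-th column of A
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iterations=100):
    """Unshifted QR iteration A_{k+1} = R_k Q_k: for well-behaved
    symmetric matrices, the diagonal converges to the eigenvalues."""
    n = len(A)
    for _ in range(iterations):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)]

print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # ≈ [3.0, 1.0]
```

The shifts, bulge chasing, and deflation techniques named above are precisely what turn this slow textbook iteration into a practical method for the large matrices the C-method produces.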
794
Optimization of electromagnetic induction methods for soil engineering. Pareilh-Peyrou, Mathias, 19 December 2016 (has links)
This study was conducted in the framework of a CIFRE doctoral contract (Conventions Industrielles de Formation par la Recherche), in collaboration with the geophysics pole of Ginger CEBTP in Clermont-Ferrand and the Laboratoire Magmas et Volcans of the UCA. Its main objective is to improve the performance of sub-surface electromagnetic induction methods. The principle is to retrieve more information from electromagnetic data while keeping a prospecting method close to current field practice. Electromagnetic (EM) methods are geophysical methods based on measuring variations of magnetic fields in order to determine the electrical characteristics of soils. EM devices are inductive and therefore do not require contact with the ground, so they can be deployed faster than most other geophysical methods (seismic, electrical profiles, gravimetry).
In this framework, several developments were carried out to improve EM prospecting tools. Field surveying was improved through the development of an automated acquisition prototype: a conductivity meter (EM-31) mounted on a fibreglass cart, together with a GPS receiver and a computer running a purpose-built Python program that ensures continuous, geo-referenced data recording. This thesis also presents a correction procedure for the EM-31 readings, in particular a correction for the height of the device above the ground. A Matlab program was likewise written for automated EM data processing; it provides the basic tools for processing and properly visualizing the data. Two case studies were conducted during this doctoral work. The first concerns about a hundred kilometers of linear prospecting along the flood-protection dykes of the Loire river (France). This study highlights the difficulties of large-scale geophysical prospecting and identifies its specific issues, notably the management of large volumes of data, which constrains the choice of survey methodology and motivates automated processing procedures. The second case study concerns the use of EM devices on volcanic terrain. EM prospecting has proved very effective for mapping numerous archaeological sites; however, soils and rocks in volcanic regions are known to have strong magnetic effects. The purpose here, in the context of an archaeological survey, is to determine more precisely the magnetic effects of the subsoil on EM measurements.
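The continuous, geo-referenced recording loop described above can be sketched as follows. The hardware-facing `read_gps` and `read_conductivity` callables are hypothetical stand-ins for illustration; the actual prototype talks to an EM-31 and a GPS receiver over their own interfaces.

```python
import csv
import io

def log_survey(read_gps, read_conductivity, n_samples):
    """Pair each conductivity reading with a GPS fix and emit CSV rows,
    one per sample, so every measurement is geo-referenced."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["lat", "lon", "conductivity_mS_per_m"])
    for _ in range(n_samples):
        lat, lon = read_gps()
        sigma = read_conductivity()
        writer.writerow([lat, lon, sigma])
    return buf.getvalue()

# Simulated instruments, for the sketch only.
fixes = iter([(45.761, 3.081), (45.762, 3.082)])
readings = iter([12.4, 13.1])
print(log_survey(lambda: next(fixes), lambda: next(readings), 2))
```

Writing a flat, timestamp-free CSV keeps the acquisition side trivial; all interpretation (height correction, gridding, mapping) can then happen offline in the processing chain.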
795
Automated deployment of large-scale web service compositions. Leite, Leonardo Alexandre Ferreira, 26 May 2014 (has links)
The deployment of large-scale web service compositions presents several challenges, such as routine infrastructure failures, technological heterogeneity, distribution of the system across different organizations, and frequent updating of the services in operation. In this master's thesis, we study how automated deployment supported by middleware can help overcome such challenges. For this purpose, we developed the CHOReOS Enactment Engine, a middleware system that enables the distributed, automated deployment of web service compositions on a virtualized infrastructure, operating in the cloud computing model known as Platform as a Service. The middleware is evaluated qualitatively, by comparison with ad-hoc deployment approaches, and quantitatively, by its scalability with respect to the deployment time of service compositions.
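One recurring concern when automating the deployment of a composition is starting services in dependency order. The sketch below illustrates only that idea with a hypothetical toy composition; it is not the CHOReOS Enactment Engine API, which is a Java middleware.

```python
from graphlib import TopologicalSorter

def deployment_order(dependencies):
    """Return a start order in which every service comes after the
    services it depends on. `dependencies` maps service -> set of deps."""
    return list(TopologicalSorter(dependencies).static_order())

# Toy composition: the orchestrator needs both backends up first.
order = deployment_order({
    "orchestrator": {"payment", "inventory"},
    "payment": {"database"},
    "inventory": {"database"},
    "database": set(),
})
print(order)
```

Independent branches of such an order (here, `payment` and `inventory`) can be deployed in parallel, which is one place where an automated engine beats ad-hoc scripts on deployment time.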
796
A computationally efficient evolutionary algorithm for distribution system reconfiguration. Santos, Augusto Cesar dos, 24 April 2009 (has links)
Energy restoration in radial distribution systems usually involves network reconfiguration to restore electricity to the out-of-service areas. The main approaches to energy restoration in large-scale distribution systems have been evolutionary algorithms (EAs). After a fault has been identified and the faulted zone isolated, the algorithm must find solutions that: 1) supply energy to the largest possible number of consumers; 2) minimize the number of switching operations; 3) respect the operational constraints of the system; 4) reduce the total resistive losses; 5) generate exclusively radial configurations; and 6) are obtained in real time.
This work uses a new data structure for manipulating graphs, called node-depth encoding (NDE), that produces exclusively radial and connected configurations, guaranteeing that all potential solutions generated by the algorithm satisfy items (1) and (5). Moreover, we propose an EA based on the NDE that is capable of finding adequate restoration plans in real time for large-scale distribution systems with thousands of switches and buses.
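The node-depth encoding mentioned above stores a tree as a DFS-ordered list of (node, depth) pairs, so a whole subtree is a contiguous slice of the list; that locality is what makes the EA's reconfiguration operators cheap. A minimal sketch of the idea, much simplified from the thesis:

```python
def subtree_slice(nde, root_index):
    """Given a node-depth encoding (list of (node, depth) pairs in DFS
    order), return the slice bounds of the subtree rooted at root_index:
    it extends until the next entry whose depth is <= the root's depth."""
    root_depth = nde[root_index][1]
    end = root_index + 1
    while end < len(nde) and nde[end][1] > root_depth:
        end += 1
    return root_index, end

# Feeder rooted at a substation: node 0 -> {1 -> {2, 3}, 4}
nde = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1)]
start, end = subtree_slice(nde, 1)
print(nde[start:end])  # → [(1, 1), (2, 2), (3, 2)]
```

Moving such a slice under a different parent (adjusting depths by a constant) always yields another tree, which is how the encoding produces only radial, connected configurations.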
797
A large-scale model simulator based on the Scalable Simulation Framework (SSF). Jahnecke, Alexandre Nogueira, 06 July 2007 (has links)
This dissertation proposes a large-scale model simulator integrated into the Automatic Distributed Simulation Environment (ASDA), a tool that supports the development and use of distributed simulation and that has been the object of studies at the Laboratory of Distributed Systems and Concurrent Programming (LaSDPC) at ICMC-USP. The proposed simulator allows ASDA to build models and programs that simulate large-scale queuing models, making the tool more complete. The simulator is based on a public standard for large-scale distributed simulation named Scalable Simulation Framework (SSF).
The prototype developed is a client-server program with three main components: a compiler, which translates models written in a modeling language into a C++ simulation program; the SSF module, which defines the API used by the simulation programs; and a runtime environment, which runs the simulation programs, analyzes the results, and passes them to a report builder. The simulator also contributes further studies on simulation, distributed simulation, and systems modeling using the tools developed by our group.
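At the core of any SSF-style simulator sits a discrete-event kernel that always processes the earliest pending event. The sketch below shows only that kernel idea, not the SSF API itself (which defines Entity, Process, and Event abstractions):

```python
import heapq

class EventQueue:
    """Future-event list: pop events in nondecreasing timestamp order."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal timestamps stay FIFO

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, self._seq, action))
        self._seq += 1

    def run(self):
        log = []
        while self._heap:
            time, _, action = heapq.heappop(self._heap)
            log.append((time, action))
        return log

q = EventQueue()
q.schedule(5.0, "departure")
q.schedule(1.0, "arrival")
q.schedule(3.0, "arrival")
print(q.run())  # → [(1.0, 'arrival'), (3.0, 'arrival'), (5.0, 'departure')]
```

Distributing such a simulation then becomes the problem of keeping several of these queues synchronized, which is exactly what frameworks like SSF standardize.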
798
Enem and the historical course of the concept of evaluation: implications of and for educational policies. Bravo, Maria Helena de Aguiar, 29 June 2017 (has links)
External, large-scale evaluation has taken an increasingly central role in education management in Brazil and has been presented, since the 1990s, as one of the main instruments for decision-making in public educational policies. This movement, which brought such assessments more prominently into the debate and into the literature on evaluation and educational quality itself, materialized in a series of federal government initiatives focused on evaluation, notably the creation of the Exame Nacional do Ensino Médio (Enem), the object of this dissertation. With the general objective of exposing and discussing the concept of evaluation underlying Enem throughout its history, pursuing its explicit and implicit objects and purposes through qualitative documentary research, the analysis focused on the conceptual content that could be captured in the varied documents that make up Enem's theoretical foundations. Assuming that, over the years, the exam consolidated itself as a State policy rather than a government policy, occupying different positions depending on the educational policies associated with it, and correspondingly adopting or revealing underlying conceptions of evaluation or of proficiency measurement with distinct implications, we observed that Enem has moved ever closer to the functions of a university entrance exam, being gradually dissociated from its functions of evaluating secondary education. In this sense, an adequate conceptual approach within the field of evaluation as it relates to Enem is justified, on the one hand, by the possibility of expanding the field itself and, on the other, by the need to clarify and overcome conceptual inaccuracies that may be limiting the scope and potential of many similar evaluations under way.
799
Optimization of coupled processes: production planning and cutting stock. Silva, Carla Taviane Lucke da, 15 January 2009 (has links)
In many manufacturing industries (for example, paper, furniture, metallurgy, textiles), lot-sizing decisions interact with other production planning and scheduling decisions, such as distribution and the cutting process. Usually, however, these decisions are treated separately, which reduces the solution space, ignores the interdependence between decisions, and raises total costs. In this thesis, we study the production process of small furniture factories, which consists of cutting the large plates available in stock into several types of pieces that are subsequently processed in other stages, on equipment with limited capacity, to finally compose the ordered products; the cutting and drilling machines are possible bottlenecks, and their capacities have to be taken into account. The lot-sizing and cutting stock problems are coupled in a linear integer optimization model whose objective is to minimize production, inventory, setup, and raw-material waste costs simultaneously. The model captures the tradeoff between anticipating the production of certain products, which increases inventory costs, and reducing raw-material waste by obtaining better combinations of pieces. The impact of demand uncertainty (the order book plus an estimated extra quantity) was smoothed by a rolling planning horizon strategy and by decision variables that represent extra production for the expected demand at the best moment, aiming at total cost minimization. Two heuristic methods are developed to solve a simplification of the proposed mathematical model, which has a high degree of complexity.
Computational experiments with instances generated from real data collected at a small furniture factory, an analysis of the results, conclusions, and perspectives for this work are presented.
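The cutting side of the coupled problem can be illustrated with a one-dimensional first-fit-decreasing sketch. The thesis deals with two-dimensional plates and a full integer program; this toy version only shows how piece combinations determine waste, which is the quantity the coupled model trades off against inventory cost.

```python
def first_fit_decreasing(piece_lengths, plate_length):
    """Assign pieces to plates: each piece goes on the first open plate
    with enough remaining length, pieces taken longest first.
    Returns (layout of pieces per plate, total leftover length)."""
    plates = []  # remaining length of each open plate
    layout = []  # pieces assigned to each plate
    for piece in sorted(piece_lengths, reverse=True):
        for i, rest in enumerate(plates):
            if piece <= rest:
                plates[i] -= piece
                layout[i].append(piece)
                break
        else:  # no open plate fits: start a new one
            plates.append(plate_length - piece)
            layout.append([piece])
    return layout, sum(plates)

layout, waste = first_fit_decreasing([5, 4, 3, 3, 2, 2], plate_length=10)
print(layout, waste)  # → [[5, 4], [3, 3, 2, 2]] 1
```

Producing some pieces earlier than needed can enable denser combinations like the second plate above, which is exactly the inventory-versus-waste compromise the coupled model captures.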
800
Architecture design for a large-scale wireless sensor network. Koné, Cheick Tidjane, 18 October 2011 (has links)
This thesis considers large-scale wireless sensor networks (LSWSNs) of around a million nodes. The questions addressed are: how can one predict the correct operation and compute, before deployment, the performance of such a network, knowing that no simulator can handle more than about 100,000 nodes? And how should it be configured to guarantee performance, scalability, robustness, and longevity?
The solution proposed in this thesis is based on a two-tiered heterogeneous WSN architecture in which the lower tier is composed of sensors and the upper tier of collectors. The first contribution is a multi-channel self-organization algorithm that partitions the lower-tier network into several disjoint sub-networks, each with one collector and one frequency channel, while respecting the principle of frequency reuse. The second contribution is the optimization of collector deployment, since the number of collectors equals the number of sub-networks. The problems treated were: optimizing sink locations for a predefined number of sinks, and minimizing the number of sinks (or the financial cost tied to that number) for a predefined number of hops in the sub-networks. An intuitive and appropriate solution for ensuring both network performance and cost is to partition the lower-tier network into sub-networks balanced in number of hops; to this end, the sinks are laid out on a regular geographical grid (square, triangular, etc.). Theoretical studies and experimental simulation of the topology models show, as a function of application requirements (node density, application load, etc.) and physical constraints (radio range, surveillance zone), the methodology for choosing and computing the best deployment solutions.
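The square-grid layout discussed above can be turned into a simple sizing sketch (a crude illustrative model only; the thesis derives the actual tradeoffs from theoretical studies and simulations):

```python
import math

def sinks_for_square_grid(area_side, radio_range, max_hops):
    """Number of sinks on a square grid so that every sensor lies within
    max_hops radio hops of a sink: each sink covers a square cell of
    half-width max_hops * radio_range (idealized, obstacle-free model)."""
    cell_side = 2 * max_hops * radio_range
    per_axis = math.ceil(area_side / cell_side)
    return per_axis ** 2

# 10 km x 10 km field, 100 m radio range, at most 5 hops to a sink.
print(sinks_for_square_grid(10_000, 100, 5))  # → 100
```

The quadratic growth in the number of sinks as the hop budget shrinks is the cost side of the performance-versus-cost balance described above.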