281

Data Mining Meets HCI: Making Sense of Large Graphs

Chau, Duen Horng 01 July 2012
We have entered the age of big data. Massive datasets are now common in science, government and enterprises. Yet, making sense of these data remains a fundamental challenge. Where do we start our analysis? Where do we go next? How do we visualize our findings? We answer these questions by bridging Data Mining and Human-Computer Interaction (HCI) to create tools for making sense of graphs with billions of nodes and edges, focusing on: (1) Attention Routing: we introduce this idea, based on anomaly detection, that automatically draws people's attention to interesting areas of the graph to start their analyses. We present two examples: Polonium unearths malware from 37 billion machine-file relationships; NetProbe fingers bad guys who commit auction fraud. (2) Mixed-Initiative Sensemaking: we present two examples that combine machine inference and visualization to help users locate the next areas of interest: Apolo guides users in exploring large graphs by learning from a few examples of user interest; Graphite finds interesting subgraphs based only on fuzzy descriptions drawn graphically. (3) Scaling Up: we show how to enable interactive analytics of large graphs by leveraging Hadoop, staging of operations, and approximate computation. This thesis contributes to data mining, HCI, and importantly their intersection, including: interactive systems and algorithms that scale; theories that unify graph mining approaches; and paradigms that overcome fundamental challenges in visual analytics. Our work is making an impact on academia and society: Polonium protects 120 million people worldwide from malware; NetProbe made headlines on CNN, WSJ and USA Today; Pegasus won an open-source software award; Apolo helps DARPA detect insider threats and prevent exfiltration. We hope our Big Data Mantra "Machine for Attention Routing, Human for Interaction" will inspire more innovations at the crossroads of data mining and HCI.
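As a rough illustration of the attention-routing idea (anomaly scores that suggest where to start looking in a large graph), here is a minimal Python sketch. The scoring rule (degree deviation from the neighbourhood average) and the synthetic graph are stand-ins; the thesis's actual systems such as Polonium and NetProbe use belief propagation at far larger scale.

```python
# A toy "attention routing" pass: score each node by how much its
# degree deviates from the average degree of its neighbours, and
# surface the highest-scoring nodes as starting points for analysis.
# Graph and scoring rule are illustrative assumptions, not the
# thesis's algorithms.
import networkx as nx

def attention_scores(G):
    scores = {}
    for v in G.nodes():
        neigh = list(G.neighbors(v))
        if not neigh:
            scores[v] = 0.0
            continue
        avg = sum(G.degree(u) for u in neigh) / len(neigh)
        scores[v] = abs(G.degree(v) - avg)
    return scores

G = nx.barabasi_albert_graph(1000, 2, seed=0)     # synthetic graph
top = sorted(attention_scores(G).items(), key=lambda kv: -kv[1])[:5]
print(top)  # nodes most worth a first look
```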
282

Recyclage des candidats dans l'algorithme Metropolis à essais multiples / Recycling candidates in the multiple-try Metropolis algorithm

Groiez, Assia 03 1900
Markov chain Monte Carlo (MCMC) algorithms are methods for sampling from probability distributions. These tools are based on the path of a Markov chain whose stationary distribution is the distribution to be sampled. Given their relative ease of application, they are one of the most popular approaches in the statistical community, especially in Bayesian analysis, and are widely used for sampling from complex and/or high-dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis algorithm, see [10]), interest in these methods, as well as the range of algorithms available, has kept growing year after year. Although the Metropolis-Hastings algorithm (see [8]) can be considered one of the most general Markov chain Monte Carlo algorithms, it is also one of the easiest to understand and explain, making it an ideal algorithm for beginners. As such, it has been extended by several researchers. The multiple-try Metropolis (MTM) algorithm, proposed by [9], is considered an interesting development in this field, but unfortunately its implementation is quite expensive in terms of computing time. Recently, a new algorithm was developed by [1]: the revisited multiple-try Metropolis algorithm (revisited MTM), obtained by expressing the standard MTM method as a Metropolis-Hastings algorithm on an extended space. The objective of this work is first to present MCMC methods, and then to study and analyze the Metropolis-Hastings and standard MTM algorithms, to give readers a better understanding of how these methods are implemented. A second objective is to examine the advantages and drawbacks of the revisited MTM algorithm, to see whether it meets the expectations of the statistical community. Finally, we attempt to combat the sedentarity of the revisited MTM algorithm (its tendency to remain at the current state), which leads to a brand-new algorithm. The new algorithm performs well when the number of candidates generated at each iteration is small, but its performance deteriorates as the number of candidates grows.
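A minimal sketch of one step of the standard MTM algorithm may help fix ideas. With a symmetric Gaussian proposal and the usual weight choice, the selection weights reduce to w(y, x) = pi(y). The target density, proposal scale and number of candidates below are arbitrary stand-ins, and the revisited variant of [1] is not shown.

```python
# Standard multiple-try Metropolis with a symmetric Gaussian proposal,
# so w(y, x) = pi(y). Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
pi = lambda x: np.exp(-0.5 * x**2)   # unnormalised target (standard normal)

def mtm_step(x, k=5, sigma=1.0):
    ys = x + sigma * rng.normal(size=k)        # k candidate draws
    wy = pi(ys)
    y = rng.choice(ys, p=wy / wy.sum())        # select one candidate
    xs = y + sigma * rng.normal(size=k - 1)    # reference points from y
    wx = pi(xs).sum() + pi(x)                  # current state completes the set
    accept = min(1.0, wy.sum() / wx)           # generalised MH ratio
    return y if rng.uniform() < accept else x

x, chain = 0.0, []
for _ in range(5000):
    x = mtm_step(x)
    chain.append(x)
print(np.mean(chain), np.var(chain))  # should approach 0 and 1
```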
283

股票群的隨機行走模型與內在結構 - 以1996-1999年美國股票S&P500為例之初步分析 / Random walk model and underlying structure - a primitive study of collections of US stocks over 1996-1999

黃鈺峰, Huang, Yu Feng Unknown Date
Starting from the correlation matrix of stock prices and using results from random matrix theory, we find that the stock market does not behave as a purely random process, implying that stocks are correlated with one another; over long time horizons, however, the logarithmic returns of stock prices follow a random walk. Based on these observations, we propose two different coupled random walk models, attempting to show that the correlations between stocks can be incorporated into a coupled random walk model, and we investigate this issue through the mean square log-return (MSLR). Finally, to understand these correlations and use them to probe the underlying structure of the stock market, we construct minimum spanning trees from the price correlation matrix. The results show that the larger the time scale, the denser the tree, with the company "GE" almost always at the center, suggesting that the stock market possesses a meaningful structural indicator.
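The correlation-to-tree pipeline described above is short enough to sketch. The following Python toy uses synthetic log-returns in place of the 1996-1999 S&P 500 data and the common correlation distance d_ij = sqrt(2(1 - rho_ij)); parameters are illustrative assumptions.

```python
# Correlation matrix -> correlation distance -> minimum spanning tree,
# on synthetic returns standing in for real stock data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 20))          # 250 days x 20 fake stocks
rho = np.corrcoef(returns, rowvar=False)      # 20 x 20 correlation matrix
dist = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))  # correlation distance

G = nx.Graph()
n = dist.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)
hub = max(mst.degree, key=lambda kv: kv[1])   # most connected ("GE"-like) hub
print(hub)
```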
284

Price and volatility relationships in the Australian electricity market

Higgs, Helen January 2006
This thesis presents a collection of papers that have been published, accepted or submitted for publication. They assess price, volatility and market relationships in the five regional electricity markets in the Australian National Electricity Market (NEM): namely, New South Wales (NSW), Queensland (QLD), South Australia (SA), the Snowy Mountains Hydroelectric Scheme (SNO) and Victoria (VIC). The transmission networks that link the regional systems via interconnectors across the eastern states have played an important role in connecting the regional markets into an efficient national electricity market. During peak periods, the interconnectors become congested and the NEM separates into its regions, promoting price differences across the market and exacerbating reliability problems in regional utilities. This thesis is motivated in part by the fact that assessing prices and volatility within and between regional markets allows for better forecasts by electricity producers, transmitters and retailers, and for the efficient distribution of energy on a national level. The first two papers explore whether the lagged price and volatility information flows of the connected spot electricity markets can be used to forecast the pricing behaviour of individual markets. A multivariate generalised autoregressive conditional heteroskedasticity (MGARCH) model is used to identify the source and magnitude of price and volatility spillovers within (intra-relationship) and across (inter-relationship) the various spot markets. The results show that prices in one market can be explained by their own one-period lagged prices and are independent of the lagged spot prices of any other market when daily data are employed. This implies that the regional spot electricity markets are not fully integrated. However, there is also evidence of a large number of significant own-volatility and cross-volatility spillovers in all five markets, indicating that shocks in some markets will affect price volatility in others. Similar conclusions are obtained when the daily data are disaggregated into peak and off-peak periods, suggesting that the spot electricity markets are still rather isolated. These results inspired the research underlying the third paper of the thesis, on modelling the dynamics of spot electricity prices in each regional market. A family of volatility models comprising generalised autoregressive conditional heteroskedasticity (GARCH), RiskMetrics, normal Asymmetric Power ARCH (APARCH), Student APARCH and skewed Student APARCH is used to model the time-varying variance in prices, with news arrival as proxied by the contemporaneous volume of demand, together with time-of-day, day-of-week and month-of-year effects, as exogenous explanatory variables. The important contribution of this paper lies in the use of the latter two methodologies, the Student APARCH and skewed Student APARCH, which take account of the skewness and fat-tailed characteristics of electricity spot price series. The results indicate significant innovation spillovers (ARCH effects) and volatility spillovers (GARCH effects) in the conditional standard deviation equation, even with market and calendar effects included. Intraday prices also exhibit significant asymmetric responses of volatility to the flow of information (that is, positive shocks or good news are associated with higher volatility than negative shocks or bad news).

The fourth research paper attempts to capture the salient features of price hikes or spikes in wholesale electricity markets. The results show that electricity prices exhibit stronger mean-reversion after a price spike than during normal periods, suggesting that the electricity price quickly returns from an extreme position (such as a price spike) to equilibrium; that is, extreme price spikes are short-lived. Mean-reversion can be measured in a regime separate from the normal regime by using Markov transition probabilities to identify the different regimes. The fifth and final paper investigates whether interstate/regional trade has enhanced the efficiency of each spot electricity market. Multiple variance ratio tests are used to determine whether Australian spot electricity markets follow a random walk; that is, whether they are informationally efficient. The results indicate that, despite the presence of a national market, only the Victorian market during the off-peak period is informationally (or market) efficient and follows a random walk. This thesis makes a significant contribution to estimating the volatility and efficiency of wholesale electricity prices by employing four advanced time series techniques that have not previously been explored in the Australian context. An understanding of the modelling and forecastability of electricity spot price volatility across and within the Australian spot markets is vital for generators, distributors and market regulators. Such an understanding influences the pricing of derivative contracts traded on the electricity markets and enables market participants to better manage their financial risks.
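To make the random-walk test in the fifth paper concrete, here is a minimal Python sketch of a single (homoskedastic) variance ratio statistic on synthetic data; the thesis applies multiple variance ratio tests to actual Australian spot prices, which is a more elaborate procedure than this toy.

```python
# Variance ratio VR(q) = Var(q-period log returns) / (q * Var(1-period
# log returns)); values close to 1 are consistent with a random walk.
# Synthetic prices stand in for electricity spot price data.
import numpy as np

def variance_ratio(prices, q):
    r = np.diff(np.log(prices))                      # 1-period returns
    rq = np.log(prices[q:]) - np.log(prices[:-q])    # overlapping q-period
    return rq.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(2)
walk = np.exp(np.cumsum(rng.normal(0, 0.01, size=2000)))  # a random walk
print([round(variance_ratio(walk, q), 3) for q in (2, 4, 8, 16)])
```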
285

Especificação de um modelo para explicação e projeção de retornos do IBRX-100 / Specification of a model for explaining and forecasting IBRX-100 returns

Mendes, Daniel Lorenzo 24 August 2015
In this work, we propose a reduced-form econometric model, estimated by ordinary least squares (OLS) and based on macroeconomic variables, to explain the quarterly returns of the IBRX-100 stock index between 2001 and 2015. We also test the predictive efficiency of the model and conclude that the forecast error, estimated over a moving window with OLS re-estimated at each round and an auxiliary VAR used to project the regressors, is significantly lower than the forecast error associated with the random walk hypothesis at the one-quarter-ahead horizon.
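A minimal sketch of the rolling-window evaluation described above follows, with synthetic stand-ins for the returns and macro regressors. The thesis projects the regressors with an auxiliary VAR; for brevity this toy uses their realized values, which is an assumption that favours the OLS model.

```python
# Rolling-window OLS one-step-ahead forecasts vs. the random-walk
# (zero-return) benchmark, compared by mean squared forecast error.
import numpy as np

rng = np.random.default_rng(3)
T, window = 120, 60                      # quarters of data, window length
X = rng.normal(size=(T, 2))              # two fake macro regressors
y = X @ np.array([0.5, -0.3]) + 0.1 * rng.normal(size=T)

ols_err, rw_err = [], []
for t in range(window, T - 1):
    Xw, yw = X[t - window:t], y[t - window:t]
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)  # re-estimate each round
    ols_err.append(y[t + 1] - X[t + 1] @ beta)      # model forecast error
    rw_err.append(y[t + 1])              # random walk predicts zero return
print(np.mean(np.square(ols_err)), np.mean(np.square(rw_err)))
```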
286

Investigação do comportamento do câmbio nominal brasileiro em relação aos fundamentos econômicos baseados na Regra de Taylor / Investigation of the behavior of the Brazilian nominal exchange rate in relation to economic fundamentals based on the Taylor rule

Miguens, Gabriel Perlott 17 February 2017
The objective of this paper is to analyze the relationship between the Brazilian nominal exchange rate and economic fundamentals defined according to the Taylor rule. A permanent-transitory decomposition was applied to identify how the model variables respond to transitory and permanent shocks over time. The interest is in how this relationship evolved during Brazil's floating exchange rate period, which began in 1999. At the same time, we look for evidence that fluctuations of the Brazilian nominal exchange rate do not follow a random walk process in the modern era of floating exchange rates. The results show that the model variables are cointegrated and that transitory shocks play an important role in the fluctuations of the Brazilian nominal exchange rate, while permanent shocks dominate the fluctuations of the model's economic fundamentals. Moreover, the results suggest that the behavior of the Brazilian nominal exchange rate should not be considered a random walk process.
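A generic cointegration check of the kind underlying this analysis can be sketched with statsmodels; the series below are synthetic stand-ins for the exchange rate and a Taylor-rule fundamental, and the permanent-transitory decomposition itself is not shown.

```python
# Engle-Granger cointegration test between two synthetic I(1) series,
# one built as noise around the other so they are cointegrated.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(4)
fundamental = np.cumsum(rng.normal(size=500))        # I(1) fundamental
fx = fundamental + rng.normal(scale=0.5, size=500)   # cointegrated rate

t_stat, p_value, _ = coint(fx, fundamental)
print(p_value)  # small p-value -> reject "no cointegration"
```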
287

Les Théorèmes limites pour des processus stationnaires / Limit theorems for stationary processes

Lam, Hoang Chuong 25 June 2012
We study the spectral measure of stationary transformations and apply it to the ergodic theorem and the central limit theorem. We also study martingales, giving a new proof of the central limit theorem that does not use Fourier analysis. For the central limit theorem for random walks in a random environment in dimension one, we give two methods: martingale approximation and the method of moments. The martingale method solves Dirichlet's equation (I - P)h = 0, while the method of moments solves Poisson's equation (I - P)h = f. Finally, the second method can be used to prove the Einstein relation for reversible diffusions in a random environment in one dimension.
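To make the setting concrete, here is a toy simulation of a one-dimensional random walk in a random environment whose rescaled endpoint exhibits the CLT behaviour the thesis proves; it illustrates the model only, not the martingale or moment arguments, and all parameters are illustrative assumptions.

```python
# 1-D random walk in a random environment: each site gets a random
# right-jump probability, fixed once; many walks are then simulated
# and the endpoint statistics rescaled by sqrt(n) as in the CLT.
import numpy as np

rng = np.random.default_rng(5)
L = 10001
p = rng.uniform(0.4, 0.6, size=L)        # quenched random environment

def walk(n_steps):
    x = L // 2
    for _ in range(n_steps):
        x += 1 if rng.uniform() < p[x] else -1
    return x - L // 2

n = 400
ends = np.array([walk(n) for _ in range(2000)])
print(ends.mean() / np.sqrt(n), ends.std() / np.sqrt(n))  # CLT scaling
```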
288

Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numériques / Stratified Monte Carlo methods for numerical integration and simulation

Fakhereddine, Rana 26 September 2013
Monte Carlo (MC) methods are numerical methods that use random numbers to solve, on computers, problems from the applied sciences and engineering. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0,1)^s is divided into N subcubes of equal measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature; the case of estimating the measure of a subset of I^s is treated in particular detail. The variance of the MCS method can be bounded by O(1/N^{1+1/s}). The results of numerical experiments in dimensions 2, 3 and 4 show that the bounds obtained are tight. We then propose a hybrid method between MCS and LHS that has the properties of both techniques, with one random point in each subcube and with the projections of the points on each coordinate axis evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0,1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is also bounded by O(1/N^{1+1/s}); the order of the bound is validated by numerical experiments in dimensions 2, 3 and 4. Next, we present an approach to the random walk method using the variance reduction techniques analyzed previously. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. Particles sampled from the initial distribution undergo a Gaussian move in each time step. The particles are renumbered according to their positions at every step, and the random numbers that give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation, which models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides whether coalescence occurs. If the particles are ordered by increasing size at every time step and the random numbers are replaced by stratified points, a variance reduction is observed compared with the usual MC algorithm.
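The simple stratification (MCS) analysed above is easy to sketch in two dimensions. The following Python toy places one uniform point in each of N = n^2 subcubes of the unit square and estimates the measure of a quarter disc, comparing the variance with plain MC; the test set and sample sizes are illustrative choices, not the thesis's experiments.

```python
# MCS stratified sampling vs. plain Monte Carlo for estimating the
# measure of the quarter disc {x^2 + y^2 < 1} in [0,1)^2 (i.e. pi/4).
import numpy as np

rng = np.random.default_rng(6)
n = 32                                    # N = n*n strata

def mcs_estimate():
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    x = (i.ravel() + rng.uniform(size=n * n)) / n   # one point per stratum
    y = (j.ravel() + rng.uniform(size=n * n)) / n
    return np.mean(x**2 + y**2 < 1.0)

def plain_mc_estimate():
    x, y = rng.uniform(size=(2, n * n))             # same budget, unstratified
    return np.mean(x**2 + y**2 < 1.0)

mcs = [mcs_estimate() for _ in range(200)]
mc = [plain_mc_estimate() for _ in range(200)]
print(np.var(mcs), np.var(mc))            # MCS variance is much smaller
```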
289

Quasi real-time model for security of water distribution network / Modèle quasi-temps réel pour la sécurité des réseaux d’alimentation en eau potable

Ung, Hervé 05 February 2016
The aim of this thesis is to model the propagation of a contaminant inside a water distribution network equipped with real-time sensors. There are three research directions: solving the transport equations, identifying the contamination source, and placing the sensors. The classical model for the transport of a chemical product in a water distribution network uses 1-D advection-reaction equations with the hypothesis of perfect mixing at junctions. It is proposed to improve the predictions by adding a model of imperfect mixing at double-T junctions and by considering the dispersion effect in pipes, which takes into account a 3-D velocity profile. The first enhancement is built with the help of a design of experiments based on Delaunay triangulation, CFD simulations and the Kriging interpolation method. The second uses the adjoint formulation of the transport equations, applied with a particle backtracking algorithm and a random walk that models radial diffusion in the cross-section of a pipe. The source identification problem consists in finding the contamination origin, its injection time and its duration from the positive and negative responses given by the sensors. The solution to this inverse problem is computed by solving the adjoint transport equations with a backtracking formulation.
The method gives a list of potential sources and a ranking of those most likely to be the real source of contamination, as a function of how much, in percentage terms, each can explain the positive responses of the sensors. The sensor placement is chosen to maximize the ranking of the real source of contamination among the potential sources. Two solutions are proposed: the first uses a greedy algorithm combined with a Monte Carlo method; the second uses a local search method on graphs. Finally, the methods are applied to a real test case in the following order: sensor placement, source identification, and estimation of the contamination propagation.
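The greedy-plus-Monte-Carlo flavour of sensor placement can be sketched briefly. In the Python toy below, contamination scenarios are random reachable sets on a synthetic graph and sensors are added one at a time to maximize scenario coverage; the network, the scenario model and the coverage objective are all illustrative assumptions, simpler than the thesis's ranking-based objective.

```python
# Greedy sensor placement: simulate random contamination scenarios,
# then repeatedly add the node that detects the most uncovered ones.
import networkx as nx
import random

random.seed(7)
G = nx.random_geometric_graph(60, 0.25, seed=7)

# Each scenario: the set of nodes a contaminant reaches from one source.
scenarios = [set(nx.single_source_shortest_path_length(G, s, cutoff=3))
             for s in random.sample(list(G.nodes()), 30)]

sensors, covered = [], set()
for _ in range(5):                         # place 5 sensors greedily
    best = max(G.nodes(),
               key=lambda v: sum(1 for k, sc in enumerate(scenarios)
                                 if k not in covered and v in sc))
    sensors.append(best)
    covered |= {k for k, sc in enumerate(scenarios) if best in sc}
print(sensors, len(covered), "of", len(scenarios), "scenarios detected")
```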
290

Manipulation et propagation de photons intriqués en fréquence et étude des marches aléatoires en fréquence / Manipulation and propagation of frequency entangled photons and study of quantum random walks in the frequency domain

Galmes, Batiste 14 March 2016
This manuscript presents a theoretical and experimental study of quantum effects taking place in the frequency domain. On one side, we report a two-photon interference experiment in which both the entanglement of the photons and their manipulation take place in the frequency domain. We show that this interference pattern is sensitive to the dispersion experienced by the two photons and allows us to perform nonlocal dispersion cancellation. On the other side, we study the implementation of a quantum walk based on phase modulation. We predict an interesting behavior of these quantum walks and suggest a physical implementation.
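For readers unfamiliar with quantum walks, here is a minimal Python sketch of a generic discrete-time walk on a line (Hadamard coin plus conditional shift). The thesis realises an analogous walk in the frequency domain through phase modulation, which this position-space toy does not capture; all parameters are illustrative.

```python
# Discrete-time quantum walk on a line: a 2-state coin is tossed with
# a Hadamard gate, then the walker shifts left or right conditioned on
# the coin. The position spread grows linearly in the number of steps
# (ballistic), unlike the sqrt(n) spread of a classical random walk.
import numpy as np

steps = 50
size = 2 * steps + 1
amp = np.zeros((size, 2), dtype=complex)   # amplitudes[position, coin]
amp[steps, 0] = 1 / np.sqrt(2)
amp[steps, 1] = 1j / np.sqrt(2)            # symmetric initial coin state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
for _ in range(steps):
    amp = amp @ H.T                        # coin toss on every site
    new = np.zeros_like(amp)
    new[:-1, 0] = amp[1:, 0]               # coin 0 shifts left
    new[1:, 1] = amp[:-1, 1]               # coin 1 shifts right
    amp = new

positions = np.arange(size) - steps
prob = (np.abs(amp) ** 2).sum(axis=1)
mean = (positions * prob).sum()
print(np.sqrt(((positions - mean) ** 2 * prob).sum()))  # ~ linear in steps
```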
