About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Polynomial Expansion-Based Displacement Calculation on FPGA / Polynomexpansions-baserad förskjutningsberäkning på FPGA

Ehrenstråhle, Carl January 2016 (has links)
This thesis implements a system for calculating the displacement between two consecutive video frames. The displacement is calculated using a polynomial expansion-based algorithm. A unit-tested, bottom-up approach is successfully used to design and implement the system, and the resulting design and implementation are described in detail. The chosen algorithm and its computational details are presented to provide context for the implemented system. Some of the major issues and their impact on the system are discussed.
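The algorithm family referenced here is best known from Farnebäck's dense optical flow method, of which OpenCV ships an implementation. As a software point of reference (not the thesis's FPGA design; the frame filenames below are placeholders), a minimal sketch:

    import cv2
    import numpy as np

    # Two consecutive frames, loaded as grayscale (placeholder filenames).
    prev_frame = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
    next_frame = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    # Farnebäck's method fits a quadratic polynomial to each pixel
    # neighbourhood and derives displacement from the coefficient change.
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5,   # pyramid downscaling between levels
        levels=3,        # number of pyramid levels
        winsize=15,      # averaging window for the polynomial fit
        iterations=3,    # iterations per pyramid level
        poly_n=5,        # neighbourhood size of the polynomial expansion
        poly_sigma=1.2,  # Gaussian std weighting the neighbourhood
        flags=0)

    dx, dy = flow[..., 0], flow[..., 1]   # per-pixel displacement field
    print("mean displacement magnitude:", float(np.hypot(dx, dy).mean()))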
2

Comparison of accelerated recursive polynomial expansions for electronic structure calculations

Joneus, Carl, Wretstam, Oskar, Enander, Filip January 2015 (has links)
In electronic structure calculations the computational cost is of great importance, because large systems can contain a huge number of electrons. One effective method for such calculations is density matrix purification. Although the cost of this method is relatively low compared to other existing methods, there is room for improvement. In this paper a method proposed by Emanuel Rubensson and a method proposed by Jaehoon Kim & Yousung Jung were compared with respect to efficiency, simplicity and robustness. Both are improved methods for computing the density matrix by accelerated polynomial expansion. Rubensson's method consists of two different algorithms, and results showed that both performed better than Kim & Jung's method in terms of efficiency, the property on which both methods focus. The major differences between them were identified in terms of adaptivity: the methods require different inputs that demand different levels of knowledge about the system. Kim & Jung's method, which requires less knowledge, can however benefit efficiency-wise from more information in order to optimize the algorithm for the system. Results also showed that both methods were stable, but since they were only tested with arbitrarily assumed input arguments, no conclusion about their general stability could be drawn.
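Both methods accelerate purification, i.e. the construction of the density matrix as a matrix polynomial of the Hamiltonian. For orientation, a minimal sketch of the plain (unaccelerated) McWeeny scheme that such methods improve upon, assuming a symmetric Hamiltonian and a known chemical potential mu:

    import numpy as np

    def mcweeny_purification(H, mu, n_iter=40):
        """Zero-temperature density matrix P ~ step(mu*I - H) by purification.

        Map the spectrum of H linearly into [0, 1] so that eigenvalues below
        mu land above 1/2, then iterate x -> 3x^2 - 2x^3, which drives every
        eigenvalue to 0 or 1 (1/2 is the only unstable fixed point).
        """
        eigs = np.linalg.eigvalsh(H)       # only the spectral bounds are needed;
        lmin, lmax = eigs[0], eigs[-1]     # in practice use e.g. Gershgorin discs
        scale = 2.0 * max(lmax - mu, mu - lmin)
        X = (mu * np.eye(len(H)) - H) / scale + 0.5 * np.eye(len(H))
        for _ in range(n_iter):
            X2 = X @ X
            X = 3.0 * X2 - 2.0 * X2 @ X    # McWeeny polynomial via matrix products
        return X

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    H = (A + A.T) / 2                       # toy symmetric "Hamiltonian"
    P = mcweeny_purification(H, mu=0.0)
    print("occupations:", np.round(np.linalg.eigvalsh(P), 3))   # each near 0 or 1
    print("electron count (trace):", round(float(np.trace(P)), 3))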
3

Detection of interesting areas in images by using convexity and rotational symmetries

Karlsson, Linda January 2002 (has links)
There are several methods available to find areas of interest, but most fail at detecting such areas in cluttered scenes. In this paper two methods are presented and tested from a qualitative perspective. The first is the darg operator, which is used to detect three-dimensional convex or concave objects by calculating the derivative of the argument of the gradient in one direction for four rotated versions of the image; the four versions are thereafter added together in their original orientation. A multi-scale version is recommended to avoid the problem that the standard deviation of the Gaussians, combined with the derivatives, controls the scale of the object that is detected.

The second feature detected in this paper is rotational symmetries, with the help of approximative polynomial expansion. This approach is used in order to minimize the number and sizes of the filters used for a correlation between a representation of the orientation and filters matching the rotational symmetries of order 0, 1 and 2. With this method a particular type of rotational symmetry can be extracted by using both the order and the orientation of the result. To improve the method's selectivity a normalized inhibition is applied to the result, which strongly suppresses the other two resulting pixel values when one is high.

Neither method is enough by itself to give a definite answer as to whether the image contains an area of interest, since several other things have these types of features. They can, on the other hand, give an indication of where in the image the feature is found.
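A minimal sketch of the darg building block, under the definition above and assuming Gaussian-derivative gradients (only one of the four rotated versions; the rotation-and-summation and the multi-scale extension are omitted):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def darg_x(image, sigma=2.0):
        """Derivative along x of the argument (angle) of the image gradient.

        With arg = arctan2(gy, gx), the chain rule gives
            d(arg)/dx = (gx * dgy/dx - gy * dgx/dx) / (gx**2 + gy**2),
        which sidesteps the 2*pi wrap-around of differentiating the angle
        image directly.
        """
        gx  = gaussian_filter(image, sigma, order=(0, 1))   # d/dx
        gy  = gaussian_filter(image, sigma, order=(1, 0))   # d/dy
        gxx = gaussian_filter(image, sigma, order=(0, 2))   # d(gx)/dx
        gyx = gaussian_filter(image, sigma, order=(1, 1))   # d(gy)/dx
        mag2 = gx**2 + gy**2
        return (gx * gyx - gy * gxx) / (mag2 + 1e-12)       # guard flat regions

    # A bright blob is locally convex: the gradient angle rotates steadily
    # along x, so |darg_x| responds inside the blob but not on straight edges.
    y, x = np.mgrid[-32:32, -32:32].astype(float)
    blob = np.exp(-(x**2 + y**2) / (2 * 10.0**2))
    print("peak |darg_x| response:", float(np.abs(darg_x(blob)).max()))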
4

Essays on numerical solutions to forward-backward stochastic differential equations and their applications in finance

Zhang, Liangliang 30 October 2017 (has links)
In this thesis, we provide convergent numerical solutions to non-linear forward-backward stochastic differential equations (FBSDEs). Applications in mathematical finance, financial economics and financial econometrics are discussed. Numerical examples show the effectiveness of our methods.
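The thesis's own schemes are not reproduced here; for orientation, a minimal sketch of the standard backward Euler / least-squares Monte Carlo approach for a simple decoupled FBSDE, with conditional expectations approximated by polynomial regression:

    import numpy as np

    rng = np.random.default_rng(1)

    # Decoupled FBSDE:  dX = sigma dW,  -dY = f(Y, Z) dt - Z dW,  Y_T = g(X_T).
    sigma, T, n_steps, n_paths = 1.0, 1.0, 50, 20_000
    dt = T / n_steps
    g = lambda x: np.maximum(x, 0.0)      # terminal condition
    f = lambda y, z: -0.05 * y            # driver (here a linear discounting term)

    # Forward Euler-Maruyama simulation of X, started at X_0 = 0.
    dW = rng.standard_normal((n_steps, n_paths)) * np.sqrt(dt)
    X = np.vstack([np.zeros((1, n_paths)), sigma * np.cumsum(dW, axis=0)])

    def cond_exp(target, x, deg=4):
        """Least-squares proxy for E[target | x]: regress on polynomials of x."""
        return np.polyval(np.polyfit(x, target, deg), x)

    # Backward induction for (Y, Z).
    Y = g(X[-1])
    for k in range(n_steps - 1, 0, -1):
        Z = cond_exp(Y * dW[k], X[k]) / dt      # Z_k ~ E[Y_{k+1} dW_k | X_k] / dt
        Y = cond_exp(Y + f(Y, Z) * dt, X[k])    # explicit backward Euler step
    # At t = 0, X_0 is deterministic, so conditioning reduces to a plain mean.
    Z0 = np.mean(Y * dW[0]) / dt
    Y0 = np.mean(Y + f(Y, Z0) * dt)
    print("Y_0 estimate:", round(float(Y0), 4))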
5

Graph Theoretical Modelling of Electrical Distribution Grids

Kohler, Iris 01 June 2021 (has links) (PDF)
This thesis deals with the applications of graph theory to the electrical distribution networks that transmit electricity from the generators that produce it to the consumers that use it. Specifically, we establish the substation and bus network as graph theoretical models for this major piece of electrical infrastructure. We also generate substation and bus networks for a wide range of existing data from both synthetic and real grids and show several properties of these graphs, such as density, degeneracy, and planarity. We also motivate future research into the definition of a graph family containing bus and substation networks and the classification of that family as having polynomial expansion.
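The graph statistics mentioned above are all readily computed with networkx; a minimal sketch on a stand-in lattice (the thesis's substation/bus construction from grid data is not reproduced):

    import networkx as nx

    # Stand-in for a substation or bus network; a real one would be built from
    # grid data, with vertices for substations/buses and edges for lines.
    G = nx.grid_2d_graph(5, 8)

    # Density: fraction of all possible edges that are present.
    print("density:", round(nx.density(G), 4))

    # Degeneracy: the largest k such that some subgraph has minimum degree k,
    # obtained as the maximum core number.
    print("degeneracy:", max(nx.core_number(G).values()))

    # Planarity: can the graph be drawn in the plane without edge crossings?
    is_planar, _ = nx.check_planarity(G)
    print("planar:", is_planar)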
6

Legendre Polynomial Expansion of the Electron Boltzmann Equation Applied to the Discharge in Argon

Sosov, Yuriy 20 June 2006 (has links)
No description available.
7

La décomposition en polynôme du chaos pour l'amélioration de l'assimilation de données ensembliste en hydraulique fluviale / Polynomial chaos expansion in fluvial hydraulics in Ensemble data assimilation framework

El Moçayd, Nabil 01 March 2017 (has links)
This work deals with the formulation of a surrogate model for the shallow water equations in fluvial hydraulics using a polynomial chaos expansion. This reduced model is used in place of the direct model to reduce the computational cost of ensemble methods in uncertainty quantification and data assimilation. The context of the study is flood forecasting and the management of water resources. This manuscript is composed of five parts, each divided into chapters. The first part presents the state of the art of uncertainty quantification and data assimilation in the field of hydraulics, as well as the objectives of this thesis. We present the framework of flood forecasting, its stakes and the tools available (numerical and observational) to predict the dynamics of rivers. In particular, we present the future SWOT mission, which aims to measure water heights in rivers with global coverage at high resolution, and we highlight the contribution of these measurements and their complementarity with in-situ measurements. The second part presents the shallow water equations, which describe the flows in rivers. We are particularly interested in a 1D representation of the equations and formulate a numerical discretization of them, as implemented in the Mascaret software. The last chapter of this part proposes some simplifications of the shallow water equations. The third part of this manuscript presents uncertainty quantification and reduced-order methods. We present the probabilistic context in which problems of uncertainty quantification and sensitivity analysis are well defined, and then propose to reduce the dimension of a stochastic problem when dealing with random fields in the context of geophysical models. The methods of polynomial chaos expansion are then presented, in particular the different strategies for computing the polynomial coefficients. This methodological section concludes with a chapter devoted to ensemble-based data assimilation (especially the Ensemble Kalman filter) and the use of surrogate models in this framework. The fourth part of this manuscript is dedicated to the results. The first step is to identify the sources of uncertainty in hydraulics that should be quantified and subsequently reduced. An article under review details the method and the validation of a polynomial surrogate model for the shallow water equations in steady state, when the uncertainty is mainly carried by the friction coefficients and the upstream inflow; the study is conducted on the river Garonne. It is shown that the statistical moments, the probability density and the spatial covariance matrix of the water height are efficiently and precisely estimated using the reduced model, whose construction requires only a few tens of integrations of the direct model. The surrogate model is then used to reduce the computational cost of the Ensemble Kalman filter in a synthetic SWOT-like data assimilation exercise, whose aim is to reconstruct the spatialized friction coefficients and the upstream inflow. We are interested precisely in the spatial representation of the data as seen by SWOT: global coverage of the network and spatial averaging between the observed pixels. We show in particular that, at a given computational budget (2500 simulations of the direct model), the results of a data assimilation analysis based on the polynomial surrogate model are better than those obtained with the classical Ensemble Kalman filter. We then turn to the construction of the reduced model in unsteady conditions. Assuming first that the uncertainty is carried by the friction coefficients, we assess whether the polynomial coefficients need to be recomputed over time and over data assimilation cycles; for this work only pointwise in-situ data were considered. Assuming in a second step that the uncertainty is carried by the upstream inflow of the network, which is a time-dependent vector, a Karhunen-Loève decomposition is used to reduce the dimension of the uncertain space to its first three modes, making it possible to carry out the data assimilation exercise. Finally, the conclusions and perspectives of this work are presented in the fifth part.
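A minimal sketch of the surrogate idea, assuming a scalar output, a single standard-normal uncertain input, and a probabilists'-Hermite chaos basis fitted by least squares from a handful of runs (the hydraulic solver is replaced by a stand-in function):

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(2)

    def direct_model(xi):
        """Stand-in for the expensive direct solver (e.g. a 1D hydraulic model);
        xi is the standard-normal germ parameterizing the uncertain input."""
        return np.sin(0.8 * xi) + 0.1 * xi**2

    # Fit the polynomial chaos coefficients by regression on a few model runs.
    degree, n_runs = 6, 40
    xi_train = rng.standard_normal(n_runs)
    V = hermevander(xi_train, degree)          # basis He_0 .. He_degree
    coeffs, *_ = np.linalg.lstsq(V, direct_model(xi_train), rcond=None)

    # The surrogate is cheap enough for large ensembles (or an EnKF).
    xi_big = rng.standard_normal(100_000)
    surrogate = hermevander(xi_big, degree) @ coeffs

    # Orthogonality E[He_j He_k] = k! delta_jk gives moments from coefficients.
    var_pc = sum(coeffs[k]**2 * factorial(k) for k in range(1, degree + 1))
    print("mean (coefficient 0):", round(float(coeffs[0]), 4))
    print("variance from PC coefficients:", round(float(var_pc), 4))
    print("variance from sampled surrogate:", round(float(surrogate.var()), 4))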
8

Random Matrix Analysis of Future Multi Cell MU-MIMO Networks / Analyse des réseaux multi-cellulaires multi-utilisateurs futurs par la théorie des matrices aléatoires

Müller, Axel 13 November 2014 (has links)
Future wireless communication systems will need to feature multi-cell heterogeneous architectures consisting of improved macro cells and very dense small cells in order to support the exponentially rising demand for physical-layer throughput. Such structures cause unprecedented levels of inter- and intra-cell interference, which needs to be mitigated or, ideally, exploited in order to improve the overall spectral efficiency of the network. Techniques like massive multiple-input multiple-output (MIMO), cooperation, etc., which also help with interference management, will grow the already large heterogeneous architectures into truly enormous networks that defy theoretical analysis via traditional statistical methods. Accordingly, in this thesis we apply and improve the known framework of large random matrix theory (RMT) to analyse the interference problem and propose solutions centred around new precoding schemes that rely on insights from large-system analysis. First, we propose and analyse a new family of precoding schemes that reduce the computational precoding complexity of base stations equipped with a large number of antennas, while maintaining most of the interference mitigation capabilities of conventional, close-to-optimal regularized zero forcing. Second, we propose an interference-aware linear precoder, based on an intuitive trade-off and recent results on multi-cell regularized zero forcing, that allows small cells to effectively mitigate the interference they induce with minimal cooperation. In order to facilitate utilization of the analytic RMT approach for future generations of interested researchers, we also provide a comprehensive tutorial on the practical application of RMT to communication problems.
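The regularized zero forcing baseline referred to above is compact in linear-algebra terms; a minimal sketch, assuming a base station with M antennas serving K single-antenna users (the reduced-complexity family proposed in the thesis replaces the matrix inverse and is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(3)
    M, K, alpha, P = 64, 8, 0.1, 1.0   # antennas, users, regularization, power

    # Downlink channel: row k is user k's channel to the M antennas.
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    # Regularized zero forcing: W proportional to H^H (H H^H + alpha I)^(-1).
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    W *= np.sqrt(P) / np.linalg.norm(W)        # total power constraint

    # The effective channel H W should be nearly diagonal: RZF trades a little
    # signal power for strong suppression of inter-user interference.
    G = H @ W
    sig = np.abs(np.diag(G))**2
    leak = (np.abs(G)**2).sum(axis=1) - sig
    print("per-user SIR (dB):", np.round(10 * np.log10(sig / leak), 1))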
9

Moments method for random matrices with applications to wireless communication. / La méthode des moments pour les matrices aléatoires avec application à la communication sans fil

Masucci, Antonia Maria 29 November 2011 (has links)
In this thesis, we focus on the analysis of the moments method, showing its importance in the application of random matrices to wireless communication. The study is conducted in the free probability framework, where the concept of free convolution/deconvolution can be used to predict the spectrum of sums or products of random matrices that are asymptotically free. We show that the moments method is appealing and powerful for deriving the moments and asymptotic moments in cases where the property of asymptotic freeness does not hold. In particular, we focus on Gaussian random matrices of finite dimensions and on structured matrices such as Vandermonde matrices. We derive explicit series expansions of the eigenvalue distribution of various models, such as noncentral Wishart distributions as well as correlated zero-mean Wishart distributions. The inference framework we describe is flexible enough to apply to repeated combinations of random matrices. The results we present are implemented by generating subsets, permutations, and equivalence relations, and we developed a Matlab routine that performs convolution or deconvolution numerically in terms of a set of input moments. We apply this inference framework to the study of cognitive networks, as well as to the study of wireless networks with high mobility: we analyze the asymptotic moments of random Vandermonde matrices with entries on the unit circle, and use them together with polynomial expansion detectors to design a low-complexity linear MMSE decoder that recovers the signal transmitted by mobile users to a base station (or two base stations) represented by uniform linear arrays.
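The combinatorial moment formulas are the thesis's subject and are not reproduced here; a minimal Monte Carlo sketch of the quantity they describe, assuming a random Vandermonde matrix with phases uniform on the unit circle and the usual 1/sqrt(N) scaling:

    import numpy as np

    rng = np.random.default_rng(4)
    N, L, n_trials = 256, 64, 200     # rows, columns (aspect ratio c = L/N), trials

    def vandermonde_moments(max_order=4):
        """Monte Carlo estimate of m_p = E[(1/L) tr((V^H V)^p)]."""
        acc = np.zeros(max_order)
        for _ in range(n_trials):
            theta = rng.uniform(0.0, 2.0 * np.pi, L)    # uniform phases
            V = np.exp(-1j * np.outer(np.arange(N), theta)) / np.sqrt(N)
            W = V.conj().T @ V                          # L x L Gram matrix
            Wp = np.eye(L, dtype=complex)
            for p in range(max_order):
                Wp = Wp @ W
                acc[p] += np.trace(Wp).real / L
        return acc / n_trials

    # m_1 is exactly 1 by construction; higher moments depend on c = L/N.
    print("empirical moments m_1..m_4:", np.round(vandermonde_moments(), 3))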
