11

Fault Tolerance in Linear Algebraic Methods using Erasure Coded Computations

Xuejiao Kang (5929862) 16 January 2019 (has links)
As parallel and distributed systems scale to hundreds of thousands of cores and beyond, fault tolerance becomes increasingly important -- particularly on systems with limited I/O capacity and bandwidth. Error correcting codes (ECCs) are used in communication systems, where errors arise when bits in a message are silently corrupted; such codes can detect and correct erroneous bits. Erasure codes, an instance of error correcting codes that deal with data erasures, are widely used in storage systems. An erasure code adds redundancy to the data to tolerate erasures.

In this thesis, erasure coded computations are proposed as a novel approach to dealing with processor faults in parallel and distributed systems. We first give a brief review of traditional fault tolerance methods, error correcting codes, and erasure coded storage. The benefits and challenges of erasure coded computations with respect to coding schemes, fault models, and system support are also presented.

In the first part of my thesis, I demonstrate the novel concept of erasure coded computations for linear system solvers. Erasure coding augments a given problem instance with redundant data. This augmented problem is executed in a fault-oblivious manner in a faulty parallel environment. In the event of faults, we show how the true solution can be computed from potentially fault-prone solutions using a computationally inexpensive procedure. The results on diverse linear systems show that our technique has several important advantages: (i) as the hardware platform scales in size and in number of faults, our scheme yields increasing improvement in resource utilization compared to traditional schemes; (ii) the proposed scheme is easy to code, as the core algorithm remains the same; (iii) the general scheme is flexible enough to accommodate a range of computation and communication trade-offs.

We propose a new coding scheme for augmenting the input matrix that satisfies the recovery equations of erasure coding with high probability in the event of random failures. This coding scheme also minimizes fill (non-zero elements introduced by the coding block), while being amenable to efficient partitioning across processing nodes. Our experimental results show that the scheme adds minimal overhead for fault tolerance, yields excellent parallel efficiency and scalability, and is robust to different fault arrival models and fault rates.

Building on these results, we show how the overhead associated with our problem augmentation techniques for linear system solvers can be minimized, to optimality. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point during execution, we only solve a system of the same size as the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as adding only the minimal amount of computation needed to tolerate the observed faults. We present, in detail, the augmentation process, the parallel formulation, and the performance of our method. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance.

Based on the promising results for linear system solvers, we apply the concept of erasure coded computation to eigenvalue problems, which arise in many applications including machine learning and scientific simulations. Erasure coded computation is used to design a fault tolerant eigenvalue solver. The original eigenvalue problem is reformulated into a generalized eigenvalue problem defined on appropriate augmented matrices. We present the augmentation scheme, the necessary conditions for the augmentation blocks, and proofs of equivalence of the original eigenvalue problem and the reformulated generalized eigenvalue problem. Finally, we show how the eigenvalues can be recovered from the augmented system in the event of faults.

We present detailed experiments that demonstrate the excellent convergence properties of our fault tolerant TraceMin eigensolver in the average case. In the worst case, where the row-column pairs with the most impact on the eigenvalues are erased, we present a novel scheme that computes the augmentation blocks as the computation proceeds, using estimates of the leverage scores of row-column pairs as they are computed by the iterative process. We demonstrate low overhead, excellent scalability in the number of faults, and robustness to different fault arrival models and fault rates for our method.

In summary, this thesis presents a novel approach to fault tolerance based on erasure coded computations, demonstrates it in the context of important linear algebra kernels, and validates its performance on a diverse set of problems on scalable parallel computing platforms. As parallel systems scale to hundreds of thousands of processing cores and beyond, these techniques present the most scalable fault tolerant mechanisms currently available.
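The general idea of erasure coded computation can be illustrated, in a deliberately simplified form, with a coded distributed matrix-vector product: one redundant "parity" block lets the result be recovered even if a worker's partial result is lost. This is a minimal sketch of the concept only, not the thesis's augmentation scheme for solvers; the 3-worker setup and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

# Split the rows of A across two "workers" and add one coded (parity) block,
# so the full product A @ x can be recovered from any 2 of the 3 partial results.
A1, A2 = A[:3], A[3:]
blocks = {"w1": A1, "w2": A2, "parity": A1 + A2}   # simple sum code, tolerates 1 erasure

partial = {name: blk @ x for name, blk in blocks.items()}

# Simulate a fault: worker 2's result is erased.
del partial["w2"]

# Recover the missing block from the parity block, then assemble A @ x.
y2 = partial["parity"] - partial["w1"]
y = np.concatenate([partial["w1"], y2])

assert np.allclose(y, A @ x)   # recovered despite the erased partial result
```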
12

Fluxo de potência ótimo com restrições de estabilidade / Stability constrained Optimal Power Flow

Ana Cecilia Moreno Alamo 06 July 2015 (has links)
In this work, transient stability constraints are incorporated into the Optimal Power Flow (OPF) problem by approximating the differential-equation constraints of the stability problem with a set of equivalent algebraic equations derived from numerical integration procedures. An original contribution of this dissertation is the proposal of a multi-step optimization procedure, which minimizes convergence problems and speeds up computation. The proposed optimization procedure was successfully tested on a small 3-machine power system, with the generated powers as control variables.
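The kind of reformulation the abstract describes, turning stability ODEs into algebraic constraints via numerical integration, can be sketched with the classical swing equation discretized by the trapezoidal rule. The single-machine model, parameter names, and values below are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np

def swing_residuals(delta, omega, Pm, Pe_max, M=0.1, D=0.05, dt=0.01):
    """Trapezoidal-rule algebraic residuals of the swing equation
        M * d(omega)/dt = Pm - Pe_max*sin(delta) - D*omega,   d(delta)/dt = omega.
    A stability-constrained OPF would impose r == 0 as equality constraints."""
    f_d = omega                                          # delta'
    f_w = (Pm - Pe_max * np.sin(delta) - D * omega) / M  # omega'
    r_delta = delta[1:] - delta[:-1] - 0.5 * dt * (f_d[1:] + f_d[:-1])
    r_omega = omega[1:] - omega[:-1] - 0.5 * dt * (f_w[1:] + f_w[:-1])
    return np.concatenate([r_delta, r_omega])

# Example: residuals of a (non-converged) flat trajectory guess over 50 steps.
n = 50
r = swing_residuals(np.zeros(n), np.zeros(n), Pm=0.8, Pe_max=1.5)
print(r.shape)  # (98,) -> 2*(n-1) algebraic equality constraints
```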
13

Coupling between transport, mechanical properties and degradation by dissolution of rock reservoir / Couplage entre transport, comportement mécanique et dégradation par dissolution de réservoirs de roche

Wojtacki, Kajetan Tomasz 16 December 2015 (has links)
The aim of this thesis is to analyse the evolution of the effective mechanical and transport properties of an aquifer rock subjected to progressive chemical degradation due to CO2 dissolution. The proposed study focuses on long-term and far-field conditions, when degradation of the porous matrix can be assumed to be homogeneous at the sample scale. It is well known that the morphology of the pore network and of the solid skeleton determines the major macroscopic properties of the rock (permeability, stiffness); therefore, modelling of such a porous material should be based on a morphological and statistical characterisation of the investigated rocks. First, in order to obtain statistically equivalent representations of real specimens, a reconstruction method inspired by the natural process of sandstone formation is developed. The generated samples are selected so as to satisfy the morphological information extracted from microtomographic images of a natural rock sample. Secondly, a methodology for estimating the effective mechanical properties of the generated samples, based directly on regular meshes treated as binary images, is presented. The effective mechanical behaviour is obtained within the framework of periodic homogenization; however, due to the lack of geometrical periodicity of the considered samples, two different approaches are developed: RVE reconstruction by reflectional symmetry, and a fixed-point method using an additional homogeneous layer spread over the considered geometry. The evolution of permeability is estimated in the classical way, using an upscaling method in the form of Darcy's law. Finally, the chemical dissolution of the material is tackled in a simplified way by performing morphological dilation of the pore phase. A detailed analysis of the evolution of the morphological descriptors linked to the modifications of the microstructure during the dissolution steps is provided, together with the relation between morphological properties, permeability, and elastic moduli. The methodology developed in this work can easily be applied to other classes of heterogeneous materials.
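The simplified dissolution step described above, morphological dilation of the pore phase, can be sketched on a synthetic binary microstructure. The random-sphere geometry, grid size, and porosity tracking below are illustrative assumptions, not the thesis's reconstructed sandstone samples.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic 3D microstructure: True = pore, False = solid (random spherical pores).
n = 64
grid = np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"), axis=-1)
pore = np.zeros((n, n, n), dtype=bool)
for c in rng.integers(0, n, size=(30, 3)):
    pore |= np.sum((grid - c) ** 2, axis=-1) < 5 ** 2

# Progressive "dissolution": each step dilates the pore phase by one voxel layer.
structure = ndimage.generate_binary_structure(3, 1)  # 6-connected neighbourhood
for step in range(5):
    print(f"step {step}: porosity = {pore.mean():.3f}")
    pore = ndimage.binary_dilation(pore, structure=structure)
```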
14

Failure Prediction for Composite Materials with Generalized Standard Models

Zhenyuan Gao (7481801) 17 October 2019 (has links)
Despite the advances of analytical and numerical methods for composite materials, it is still challenging to predict the onset and evolution of their different failure mechanisms. Because most failure mechanisms are thermodynamically irreversible processes, it is beneficial to model them within a unified thermodynamic framework. Noting the advantages of so-called generalized standard models (GSMs) in this regard, the objective of this work is to formulate constitutive models, in a generalized standard manner, for several main failure mechanisms: brittle fracture, interlaminar delamination, and fatigue behavior for both continuum damage and delamination.

For brittle fracture, the numerical difficulties caused by damage and strain localization in traditional finite element analysis will be addressed and overcome. A nonlocal damage model utilizing an integral-type regularization technique will be derived based on a recently developed "local" continuum damage model. The objective is to make this model not only rigorously handle brittle fracture, but also incorporate common damage behavior such as damage anisotropy, distinct tensile and compressive damage behavior, and damage deactivation. A fully explicit integration scheme for the present model will be developed and implemented.

For fatigue continuum damage, a viscodamage model, which can handle frequently observed brittle damage phenomena, is developed to produce stress-dependent fatigue damage evolution. The governing equation for damage evolution is derived using an incremental method. A class of closed-form incremental constitutive relations is derived.

For interlaminar delamination, a cohesive zone model (CZM) will be proposed. Focus is placed on making the associated cohesive elements capable of reproducing experimental critical energy release rate versus mode-mixture-ratio relationships. To achieve this goal, each cohesive element is idealized as a deformable string exhibiting path-dependent damage behavior. A damage model having a path dependence function will be developed, constructed such that each cohesive element can exhibit designated, possibly sophisticated mixed-mode behavior. The rate form of the cohesive law will subsequently be derived.

Finally, a CZM for interlaminar fatigue, capable of handling brittle damage behavior, is developed to produce realistic interlaminar crack propagation under high-cycle fatigue. An implicit integration scheme, which can handle complex separation paths and mixed-mode delamination, is developed. Many numerical examples will be utilized to clearly demonstrate the capabilities of the proposed nonlocal damage model, continuum fatigue damage model, and CZMs for quasi-static and fatigue delamination.
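A flavour of the traction-separation behaviour that cohesive zone models encode can be given by a one-dimensional bilinear cohesive law with an irreversible damage variable. This generic textbook form, and its parameter names and values, are illustrative assumptions, not the path-dependent mixed-mode model developed in the thesis.

```python
import numpy as np

def bilinear_cohesive(delta_history, K0=1e4, delta0=0.01, delta_f=0.1):
    """1D bilinear cohesive law: traction = (1 - d) * K0 * delta, with damage d
    driven irreversibly by the largest separation seen so far."""
    tractions, d, delta_max = [], 0.0, 0.0
    for delta in delta_history:
        delta_max = max(delta_max, abs(delta))
        if delta_max > delta0:
            # Damage grows from 0 at delta0 to 1 at delta_f and never decreases.
            d = max(d, min(1.0, (delta_f / delta_max)
                                 * (delta_max - delta0) / (delta_f - delta0)))
        tractions.append((1.0 - d) * K0 * delta)
    return np.array(tractions)

# Loading, partial unloading (stiffness degraded but no healing), then reloading to failure.
path = np.concatenate([np.linspace(0, 0.05, 50),
                       np.linspace(0.05, 0.02, 20),
                       np.linspace(0.02, 0.12, 60)])
t = bilinear_cohesive(path)
print(t.max(), t[-1])  # peak traction, and ~0 traction after full failure
```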
15

RANDOMIZED NUMERICAL LINEAR ALGEBRA APPROACHES FOR APPROXIMATING MATRIX FUNCTIONS

Evgenia-Maria Kontopoulou (9179300) 28 July 2020 (has links)
This work explores how randomization can be exploited to deliver sophisticated algorithms with provable bounds for: (i) the approximation of matrix functions, such as the log-determinant and the von Neumann entropy; and (ii) the low-rank approximation of matrices. Our algorithms are inspired by recent advances in Randomized Numerical Linear Algebra (RandNLA), an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. The main goal of this work is to encourage the practical use of RandNLA approaches to solve Big Data bottlenecks at industrial level. Our extensive evaluation tests are complemented by a thorough theoretical analysis that proves the accuracy of the proposed algorithms and highlights their scalability as the volume of data increases. Finally, the low computational time and memory consumption, combined with simple implementation schemes that can easily be extended to parallel and distributed environments, render our algorithms suitable for use in the development of highly efficient real-world software.
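One representative RandNLA computation of the kind discussed here is estimating the log-determinant of a symmetric positive definite matrix by combining a Taylor expansion of log(I - B) with a Hutchinson-style stochastic trace estimator. The sketch below is a minimal illustration with assumed parameter choices; it is not presented as the thesis's algorithm or its error bounds.

```python
import numpy as np

def logdet_rand(A, num_probes=30, order=25, seed=0):
    """Randomized log-det for SPD A: log det(A) = n*log(alpha) - sum_k tr(B^k)/k,
    with B = I - A/alpha and the traces estimated by Hutchinson's method."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]

    # Rough upper bound on the largest eigenvalue via a few power iterations.
    v = rng.standard_normal(n)
    for _ in range(20):
        v = A @ v
        v /= np.linalg.norm(v)
    alpha = 1.05 * (v @ A @ v)

    estimate = n * np.log(alpha)
    for _ in range(num_probes):
        g = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
        w = g.copy()
        for k in range(1, order + 1):
            w = w - (A @ w) / alpha                   # w = B^k g, built iteratively
            estimate -= (g @ w) / (k * num_probes)    # accumulate -tr(B^k)/k
    return estimate

# Quick check on a random SPD matrix against the exact value.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 50))
A = X.T @ X / 200 + 0.5 * np.eye(50)
print(logdet_rand(A), np.linalg.slogdet(A)[1])
```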
16

Optimisation de transfert de données pour les processeurs pluri-coeurs, appliqué à l'algèbre linéaire et aux calculs sur stencils / Optimization of data transfer on many-core processors, applied to dense linear algebra and stencil computations

Ho, Minh Quan 05 July 2018 (has links)
The upcoming Exascale target in High Performance Computing (HPC) and disruptive achievements in artificial intelligence are giving rise to alternative, non-conventional many-core architectures, with the energy efficiency typical of embedded systems while providing the same software ecosystem as classic HPC platforms. A key enabler of energy-efficient computing on many-core architectures is the exploitation of data locality, specifically the use of scratchpad memories in combination with DMA engines in order to overlap computation and communication. Such a software paradigm raises considerable programming challenges for both the vendor and the application developer.

In this thesis, we tackle the memory transfer and performance issues, as well as the programming challenges, of memory- and compute-intensive HPC applications on the Kalray MPPA many-core architecture. With the first, memory-bound use case of the lattice Boltzmann method (LBM), we provide generic and fundamental techniques for decomposing three-dimensional iterative stencil problems onto clustered many-core processors fitted with scratchpad memories and DMA engines. The developed DMA-based streaming and overlapping algorithm delivers a 33% performance gain over the default cache-based implementation. High-dimensional stencil computation suffers from a serious I/O bottleneck and limited on-chip memory space. We developed a new in-place LBM propagation algorithm, which reduces the memory footprint by half and yields 1.5 times higher performance-per-byte efficiency than the state-of-the-art out-of-place algorithm. On the compute-intensive side, with dense linear algebra computations, we build an optimized matrix multiplication benchmark based on the exploitation of scratchpad memory and efficient asynchronous DMA communication. These techniques are then extended to a DMA module of the BLIS framework, which allows us to instantiate an optimized and portable level-3 BLAS (Basic Linear Algebra Subprograms) library on any DMA-based architecture in less than 100 lines of code. We achieve 75% of peak performance on the MPPA processor with the matrix multiplication operation (GEMM) from the standard BLAS library, without having to write thousands of lines of laboriously optimized code for the same result.
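The scratchpad-plus-DMA pattern behind the GEMM results can be sketched as a blocked matrix multiply in which each operand tile is staged into a small local buffer before use; that staging point is where an asynchronous DMA prefetch of the next tile would overlap with computation on the current one. The tile size and the purely sequential NumPy stand-in for DMA transfers are illustrative assumptions, not the MPPA/BLIS implementation.

```python
import numpy as np

def blocked_gemm(A, B, tile=64):
    """Blocked C = A @ B in which every operand block is first copied into a
    small 'scratchpad' buffer, mimicking the staging step a DMA engine would
    perform asynchronously on a clustered many-core chip."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=A.dtype)
            for p in range(0, k, tile):
                # "DMA get": stage the operand tiles into local buffers.
                # (On real hardware the *next* tiles would be prefetched here
                #  while the current ones are being multiplied.)
                a_loc = np.array(A[i:i + tile, p:p + tile])
                b_loc = np.array(B[p:p + tile, j:j + tile])
                acc += a_loc @ b_loc
            C[i:i + tile, j:j + tile] = acc   # "DMA put" of the finished tile
    return C

A = np.random.default_rng(0).standard_normal((200, 300))
B = np.random.default_rng(1).standard_normal((300, 150))
assert np.allclose(blocked_gemm(A, B), A @ B)
```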
17

Practical Numerical Trajectory Optimization via Indirect Methods

Sean M. Nolan (5930771) 15 June 2023 (has links)
Numerical trajectory optimization is helpful not only for mission planning but also for design space exploration and quantifying vehicle performance. Direct methods for solving optimal control problems, which first discretize the problem before applying the necessary conditions of optimality, dominate the field of trajectory optimization because they are easier for the user to set up and are less reliant on forming a good initial guess. On the other hand, many consider indirect methods, which apply the necessary conditions of optimality prior to discretization, too difficult to use for practical applications. Indirect methods, though, provide very high quality solutions, easily accessible sensitivity information, and faster convergence given a sufficiently good guess. Those strengths make indirect methods especially well-suited for generating large data sets for system analysis and worth revisiting.

Recent advancements in the application of indirect methods have already mitigated many of the often cited issues. Automatic derivation of the necessary conditions with computer algebra systems has eliminated the manual step, which was time-intensive and error-prone. Furthermore, regularization techniques have reduced problems that traditionally needed many phases and complex staging, like those with inequality path constraints, to a significantly easier to handle single arc. Finally, continuation methods can circumvent the small radius of convergence of indirect methods by gradually changing the problem and using previously found solutions as guesses.

The new optimal control problem solver Giuseppe incorporates and builds upon these advancements to make indirect methods more accessible and easily used. It seeks to enable greater research and creative approaches to problem solving by being more flexible and extensible than previous solvers. The solver accomplishes this by implementing a modular design with well-defined internal interfaces. Moreover, it allows the user easy access to and manipulation of component objects and functions, to be used in the way best suited to solve a problem.

A new technique simplifies and automates what was the predominant roadblock to using continuation: the generation of an initial guess for the seed solution. Reliable generation of a guess sufficient for convergence previously required advanced knowledge of optimal control theory or sometimes the incorporation of an entirely separate optimization method. With the new method, a user only needs to supply initial states, a control profile, and a time span over which to integrate. The guess generator then produces a guess for the "primal" problem through propagation of the initial value problem. It then estimates the "dual" (adjoint) variables by the Gauss-Newton method for solving the nonlinear least-squares problem. The decoupled approach prevents poorly guessed dual variables from altering the relatively easily guessed primal variables. As a result, this method is simpler to use, faster to iterate, and much more reliable than previous guess generation techniques.

Leveraging the continuation process also allows for greater insight into the solution space, as there is only a small marginal cost to producing additional nearby solutions. As a result, a user can quickly generate large families of solutions by sweeping parameters and modifying constraints. These families provide much greater insight into the general problem and the underlying system than is obtainable with single point solutions. One can extend these analyses to high-dimensional spaces through the construction of compound continuation strategies expressible as directed trees.

Lastly, a study of common convergence issues explicates their causes and recommends mitigation strategies. In this area, the continuation process also serves an important role. Adaptive step-size routines usually suffice to handle common sensitivity issues, and scaling constraints is simpler and out-performs scaling parameters directly. Issues arise when a cost functional becomes insensitive to the control, which a small control cost mitigates. The best performance of the solver requires proper sizing of the smoothing parameters used in regularization methods. An asymptotic increase in the magnitude of the adjoint variables indicates the approach of a feasibility boundary of the solution space.

These techniques for indirect methods greatly facilitate their use and enable the generation of large libraries of high-quality optimal trajectories for complex problems. In the future, these libraries can give a detailed account of vehicle performance throughout its flight envelope, feed higher-level system analyses, or inform real-time control applications.
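The warm-started continuation idea, reusing each converged solution as the guess for a slightly harder problem, can be illustrated on a classic two-point boundary value problem. The Bratu-type equation and the parameter sweep below are illustrative stand-ins, not the Giuseppe solver or one of the thesis's trajectory problems.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Bratu-type BVP: y'' + lam * exp(y) = 0, y(0) = y(1) = 0.
# For larger lam a cold start struggles, so we sweep lam and warm-start
# each solve from the previously converged solution (continuation).
def make_rhs(lam):
    return lambda x, y: np.vstack([y[1], -lam * np.exp(y[0])])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 51)
y_guess = np.zeros((2, x.size))          # trivial guess for the easiest problem

for lam in np.linspace(0.5, 3.0, 11):    # gradually harden the problem
    sol = solve_bvp(make_rhs(lam), bc, x, y_guess)
    assert sol.success, f"continuation step failed at lam={lam:.2f}"
    x, y_guess = sol.x, sol.y            # warm start for the next step
    print(f"lam = {lam:.2f}, max y = {sol.y[0].max():.4f}")
```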
18

Approche probabiliste du comportement mécanique des composites thermoplastiques assemblés par soudage laser / Probabilistic approach of thermoplastics composites mechanical behaviour assembled by laser welding

Oumarou Mairagouna, Mamane 09 November 2012 (has links)
Thermoplastic composite materials are used in an increasingly wide range of applications as a result of their recyclability and their ability to be joined by fusion of the polymer, also called welding. Among these assembly techniques, laser welding offers the best alternatives: beyond the high mechanical strength and good aesthetic appearance it provides, this assembly technique does not create damage within the composite material, unlike methods such as riveting, screwing or bolting. The purpose of this study is to propose a probabilistic failure model for the laser beam assembly of a continuous-fibre thermoplastic composite. A detailed description of the material is first performed by a multi-scale approach, which aims to predict the macroscopic behaviour of the base composite knowing the local fluctuations of its microstructure. The mechanical characterisation of the assembly is then conducted through multi-axial tests using a specific device (Arcan-Mines) which takes into account the confinement of the laser weld seam. This enables an elasto-plastic behaviour model based on the generalized Drucker-Prager criterion to be proposed. Acoustic emission tests allowed the assumption of a weakest-link mechanism within the weld seam; failure is then evaluated through the Weibull statistical model. A probabilistic failure criterion based on the first and second invariants of the stress tensor is finally proposed.
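The Weibull weakest-link evaluation mentioned above can be illustrated with a short maximum-likelihood fit of the two-parameter Weibull model to failure-stress data and the resulting failure probability. The synthetic data and parameter values are illustrative assumptions, not measurements from the thesis.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic failure stresses (MPa) standing in for weld-seam strength data.
sigma_f = weibull_min.rvs(c=8.0, scale=60.0, size=40, random_state=7)

# Two-parameter Weibull fit (location fixed at 0): weakest-link model
#   P_f(sigma) = 1 - exp(-(sigma / sigma_0)**m)
m, _, sigma_0 = weibull_min.fit(sigma_f, floc=0)
print(f"Weibull modulus m = {m:.2f}, scale sigma_0 = {sigma_0:.1f} MPa")

# Failure probability predicted at a given applied stress.
sigma_applied = 50.0
p_fail = 1.0 - np.exp(-(sigma_applied / sigma_0) ** m)
print(f"P_f({sigma_applied} MPa) = {p_fail:.3f}")
```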
19

Dimensioning Of Corona Control Rings For EHV/UHV Line Hardware And Substations

Chatterjee, Sreenita 10 1900 (has links) (PDF)
High voltage (EHV and UHV) transmission facilitates the transfer of large amounts of power over long distances. However, due to the inherent geometry, line and substation hardware of the EHV and UHV class generate high electric fields, which result in local ionisation of the air called corona discharges. Apart from producing audible noise in the form of a frying or hissing sound, corona produces significant electromagnetic interference in the radio range. The limit for this corona-generated Radio Interference (RI) is stipulated by international standards, which are strictly to be followed. In line and substation hardware, corona control rings are generally employed to limit or avoid corona. Standard dimensions of corona rings are not available for the EHV and UHV class. In most cases, their design is based either on a trial-and-error method or on empirical extrapolation; only in certain specific cases is the dimensioning of the rings carried out using electric field calculations. In none of these approaches are the unavoidable surface abrasions, which can lead to corona, considered. There are also efforts to account for nominal surface irregularity by using a surface roughness factor, which is highly heuristic. In order to address this practically relevant problem, the present work was taken up.

The intended exercise requires accurate field computation and a suitable criterion for checking corona onset. For the first part, the Surface Charge Simulation Method is adopted with a newly proposed sub-modelling technique. The surface of the toroid is discretised into curvilinear patches with a linear approximation for the surface charge density. Owing to its high accuracy, Galerkin's method of moments formulation is employed. The problem of singularity encountered in the numerical approach is handled using a method based on Duffy's transformation. The developed codes have also been validated against standard geometries. After a survey of the relevant literature, the 'Critical Avalanche Criterion' is chosen for its simplicity and applicability to the problem. Through a detailed simulation, the effect of avalanche space charge in reducing the corona onset voltage is found to be around 1.5%, and hence it is not considered further. For utilities not interested in a detailed calculation procedure for dimensioning corona rings, design curves are developed for circular corona rings of both the 400 kV and 765 kV class, with surface roughness factors in the range 0.8 to 1.

In the second part of the work, a methodology for dimensioning is developed wherein the inevitable surface abrasion, in the form of minute protrusions, can be accounted for. It is first shown that even though considerable field intensification occurs at the protrusions, such localised modification need not lead to corona. It is then shown that, by varying the minor radius of the corona ring, it is possible to obtain a design in which the prescribed surface abrasion does not lead to corona onset.

In summary, the present work has developed a reliable methodology for the design of corona rings with prescribed surface abrasions. It involved the development of an efficient field computation technique for handling minute surface protrusions and the use of an appropriate criterion for assessing corona inception. It has also provided design curves for EHV and UHV class corona rings with surface roughness factors specified in the range 0.8 to 1.0.
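The basic sizing question, how large the ring's minor radius must be before its surface field drops below a corona onset level, can be sketched with a crude back-of-envelope estimate: the thin-torus surface-field approximation E ≈ V / (r ln(8R/r)) for an isolated ring, compared against one common form of Peek's empirical onset formula. Both formulas, the assumed ring major radius, and the roughness factor are illustrative assumptions and are not the thesis's Surface Charge Simulation Method or its critical avalanche criterion.

```python
import numpy as np

def ring_surface_field(V_peak, R, r):
    """Peak surface field (V/m) of an isolated thin toroid (major radius R,
    minor radius r, r << R) at potential V_peak, using the thin-ring estimate
    E ~ V / (r * ln(8R/r)). Ignores the conductor, towers and ground plane."""
    return V_peak / (r * np.log(8.0 * R / r))

def peek_onset_field(r, delta=1.0, m=0.8):
    """One common form of Peek's empirical corona-onset field (V/m, peak) for a
    smooth round conductor of radius r (m); m is a surface roughness factor."""
    r_cm = r * 100.0
    return 30e5 * delta * m * (1.0 + 0.301 / np.sqrt(delta * r_cm))  # 30 kV/cm -> V/m

# 765 kV (line-to-line) system: peak phase-to-ground voltage on the hardware.
V_peak = 765e3 * np.sqrt(2.0 / 3.0)
R = 0.4                                   # assumed ring major radius (m)
for r in (0.02, 0.04, 0.06, 0.08):        # candidate minor radii (m)
    E = ring_surface_field(V_peak, R, r)
    E_on = peek_onset_field(r)
    print(f"r = {r*100:.0f} cm: E_surf = {E/1e5:.1f} kV/cm, "
          f"onset = {E_on/1e5:.1f} kV/cm, margin ok: {E < E_on}")
```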
20

Analyse spectrale et calcul numérique pour l'équation de Boltzmann / Spectral analysis and numerical calculus for the Boltzmann equation

Jrad, Ibrahim 27 June 2018 (has links)
In this thesis, we study the solutions of the Boltzmann equation. We are interested in the spatially homogeneous framework, in which the solution f(t; x; v) depends only on the time t and the velocity v. We consider singular cross-sections (the so-called non-cutoff case) in the Maxwellian case. For the study of the Cauchy problem, we consider a fluctuation of the solution around the Maxwellian distribution, followed by a decomposition of this fluctuation in the spectral basis associated with the quantum harmonic oscillator. First, we compute the solutions numerically using symbolic computation methods and the spectral decomposition in Hermite functions, considering both regular initial data and initial conditions of distribution type. Next, we prove that there is no longer a global-in-time solution for a large initial condition that changes sign (which does not contradict the global existence of a weak solution for a positive initial condition; see for example Villani, Arch. Rational Mech. Anal., 1998).
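The spectral decomposition used in the numerical part, expanding the velocity profile in the Hermite-function eigenbasis of the quantum harmonic oscillator, can be sketched with Gauss-Hermite quadrature. The test profile and truncation order below are illustrative assumptions, not the thesis's actual computations on the Boltzmann equation.

```python
import numpy as np
from numpy.polynomial import hermite as H
from math import factorial, pi, sqrt

def hermite_function(n, v):
    """Orthonormal Hermite function (harmonic-oscillator eigenfunction):
    psi_n(v) = H_n(v) * exp(-v^2/2) / sqrt(2^n * n! * sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return H.hermval(v, coeffs) * np.exp(-v ** 2 / 2.0) / norm

# Assumed test profile: a perturbed Maxwellian-like function of the velocity.
f = lambda v: (1.0 + v + v ** 2) * np.exp(-v ** 2 / 2.0)

# Expansion coefficients c_n = \int f(v) psi_n(v) dv via Gauss-Hermite quadrature:
# \int exp(-v^2) g(v) dv ~ sum_i w_i g(x_i), with g(v) = f(v) psi_n(v) exp(v^2).
nodes, weights = H.hermgauss(40)
N = 12
coeffs = np.array([np.sum(weights * f(nodes) * hermite_function(n, nodes)
                          * np.exp(nodes ** 2)) for n in range(N)])

# Reconstruct f from the truncated expansion and check the error on a grid.
v = np.linspace(-4, 4, 200)
f_rec = sum(c * hermite_function(n, v) for n, c in enumerate(coeffs))
print("max reconstruction error:", np.max(np.abs(f_rec - f(v))))
```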
