  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Matriz de massa de ordem elevada, dispersão de velocidades e reflexões espúrias / High order mass matrix, velocity dispersion and spurious wave reflection

Noronha Neto, Celso de Carvalho 16 May 2008 (has links)
The main subject of this work is to qualify, quantify and implement the numerical behavior of structures discretized by the finite element method. Only dynamic one-dimensional linear elements are investigated, but the proposed formulation extends to the two- and three-dimensional linear dynamic cases. The work begins with an introduction to the theme. With some mathematical development, the term related to the numerical error can be isolated analytically. By raising the order of the truncation error, a high-precision numerical response is obtained. Inspired by the Newmark time integrator, elements that are unconditionally stable with respect to the so-called spurious effects are designed. The evanescent effect is a spurious phenomenon in which the wave propagates along the structure subject to a purely numerical damping in the spatial domain. Another effect analyzed here is spurious wave reflection. When two adjacent elements have different lengths, a reflected wave (two waves, in the case of the beam element) appears at their interface. This wave, whose origin is purely mathematical, exists because of the difference between the absolute masses and stiffnesses of the elements involved, even when both elements have the same physical properties. The ratio between the time increment and the period of oscillation is conveniently employed as the main parameter to quantify the discretization in the time domain. In the spatial domain, the parameter used is the ratio between the element length and the wavelength.
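The Newmark time integrator that inspires the thesis can be sketched as follows. This is a minimal illustration of the classical average-acceleration variant (beta = 1/4, gamma = 1/2), which is unconditionally stable for linear problems; it is not the high-order mass-matrix formulation developed in the thesis, and the two-element bar, matrices and parameter values are illustrative assumptions.

```python
import numpy as np

def newmark_step(M, K, u, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark-beta scheme for M u'' + K u = 0.

    beta=1/4, gamma=1/2 (average acceleration) is unconditionally
    stable for linear systems and conserves the discrete energy.
    """
    # Predictors from the Taylor expansions of displacement and velocity
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Enforce the equation of motion at the new time level:
    # (M + beta dt^2 K) a_new = -K u_pred
    A = M + beta * dt**2 * K
    a_new = np.linalg.solve(A, -K @ u_pred)
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new

# Two-DOF bar with a consistent mass matrix (rho*A*L/6 pattern), unit stiffness
M = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
K = np.array([[1.0, -1.0], [-1.0, 1.0]])
u = np.array([1.0, 0.0])
v = np.zeros(2)
a = np.linalg.solve(M, -K @ u)          # consistent initial acceleration
for _ in range(100):
    u, v, a = newmark_step(M, K, u, v, a, dt=0.1)
```

For this linear, undamped system the scheme keeps the total energy 0.5 v'Mv + 0.5 u'Ku constant to machine precision, which is the stability property the thesis builds on.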
2

Investigating the potential for improving the accuracy of weather and climate forecasts by varying numerical precision in computer models

Thornes, Tobias January 2018 (has links)
Accurate forecasts of weather and climate will become increasingly important as the world adapts to anthropogenic climatic change. The accuracy of forecasts is limited by the computer power available to forecast centres, which determines the maximum resolution, ensemble size and complexity of atmospheric models. Furthermore, faster supercomputers are increasingly energy-hungry and unaffordable to run. In this thesis, a new means of making computer simulations more efficient is presented that could lead to more accurate forecasts without increasing computational costs. This 'scale-selective reduced precision' technique builds on previous work showing that weather models can be run with almost all real numbers represented in 32-bit precision or lower without any impact on forecast accuracy, challenging the paradigm that 64 bits of numerical precision are necessary for sufficiently accurate computations. The observational and model errors inherent in weather and climate simulations, combined with the sensitive dependence on initial conditions of the atmosphere and atmospheric models, render such high precision unnecessary, especially at small scales. The 'scale-selective' technique introduced here therefore represents smaller, less influential scales of motion with less precision. Experiments are described in which reduced precision is emulated on conventional hardware and applied to three models of increasing complexity. In a three-scale extension of the Lorenz '96 toy model, it is demonstrated that high-resolution scale-dependent-precision forecasts are more accurate than low-resolution high-precision forecasts of a similar computational cost. A spectral model based on the Surface Quasi-Geostrophic Equations is used to determine a power law describing how far precision can safely be reduced as a function of spatial scale; and experiments using four historical test cases in an open-source version of the real-world Integrated Forecasting System demonstrate that a similar power law holds for the spectral part of this model. It is concluded that the scale-selective approach could be beneficially employed to optimally balance forecast cost and accuracy if utilised on real reduced-precision hardware.
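Emulating reduced precision on conventional hardware, as described above, amounts to discarding low-order significand bits of each float. The following is a hedged sketch of that idea, not necessarily the emulation tooling used in the thesis; the function name and bit counts are assumptions.

```python
import numpy as np

def reduce_precision(x, significand_bits):
    """Emulate a float with fewer significand bits by round-to-nearest.

    Splits x into mantissa * 2**exponent (0.5 <= |mantissa| < 1) and
    keeps only the requested number of mantissa bits, mimicking
    reduced-precision arithmetic on ordinary 64-bit hardware.
    """
    m, e = np.frexp(x)
    scale = 2.0 ** significand_bits
    return np.ldexp(np.round(m * scale) / scale, e)

# A scale-selective scheme would call this with fewer bits for the
# smaller, less influential scales of motion, e.g.:
large_scale = reduce_precision(np.pi, 23)   # roughly single precision
small_scale = reduce_precision(np.pi, 10)   # far coarser, half-like
```

Because `np.frexp` and `np.ldexp` are vectorised, the same function applies unchanged to whole model state arrays.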
4

Estimation of Wordlengths for Fixed-Point Implementations using Polynomial Chaos Expansions

Rahman, Mushfiqur January 2023 (has links)
Due to advances in digital computing, much of the baseband signal processing of a communication system has moved from the analog domain into the digital domain. Within the domain of digital communication systems, Software Defined Radios (SDRs) allow the majority of signal processing tasks to be implemented in reconfigurable digital hardware. However, this comes at the cost of higher power and resource requirements. Therefore, highly efficient custom hardware implementations are needed to make SDRs feasible for practical use. The need for efficient custom hardware motivates the use of fixed-point arithmetic in the implementation of Digital Signal Processing (DSP) algorithms. This conversion to finite-precision arithmetic introduces quantization noise into the system, which significantly affects the system's performance metrics. As a result, characterizing quantization noise and its effects within a DSP system is an important challenge that needs to be addressed. Current models significantly over-estimate the quantization effects, resulting in an over-allocation of hardware resources to implement a system. Polynomial Chaos Expansion (PCE) is a method that is currently gaining attention for modelling uncertainty in engineering systems. Although it has been used to analyze quantization effects in DSP systems, previous investigations have been limited to simple examples. The purpose of this thesis is therefore to introduce new techniques that allow the application of PCE to be scaled up to larger DSP blocks with many noise sources. Additionally, the thesis introduces design space exploration algorithms that leverage the accuracy of PCE simulations to estimate bitwidths for fixed-point implementations of DSP systems. The advantages of using PCE over current modelling techniques are presented through its application to case studies relevant to practice. These case studies include sine generators, Infinite Impulse Response (IIR) filters, Finite Impulse Response (FIR) filters, FM demodulators and Phase-Locked Loops (PLLs). / Thesis / Master of Applied Science (MASc)
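The quantization noise at issue above can be made concrete with the classical additive-noise model that PCE-based analysis aims to improve on: rounding to a fixed-point grid with step size Δ injects an error that is approximately uniform on [-Δ/2, Δ/2], with variance Δ²/12. This sketch is an illustration of that baseline model under assumed signal statistics, not the thesis's PCE method.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)      # assumed full-scale input signal
frac_bits = 8
step = 2.0 ** -frac_bits

err = quantize(x, frac_bits) - x
# Classical model: err ~ Uniform(-step/2, step/2), variance step**2 / 12.
# Propagating many such sources through a filter is where the classical
# model becomes pessimistic and a PCE-style analysis can be tighter.
print(err.var(), step**2 / 12)
```

Each extra fractional bit halves Δ and so cuts the noise variance by a factor of four, which is why over-estimating the noise directly over-allocates bitwidth.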
5

Développement d’algorithmes d’imagerie et de reconstruction sur architectures à unités de traitements parallèles pour des applications en contrôle non destructif / Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

Pedron, Antoine 28 May 2013 (has links)
This thesis work lies at the interface between the scientific domain of ultrasonic non-destructive testing (NDT) and algorithm-architecture matching. Ultrasonic NDT comprises a set of techniques used to examine a material, whether in production or in maintenance, without causing damage. In order to detect possible defects and determine their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST within the CIVA software platform. The evolution of acquisition hardware implies a continuous growth of data volumes, and consequently more and more computing power is needed to keep reconstructions interactive. The multicore evolution of general-purpose processors (GPPs), together with the arrival of new architectures such as GPUs, now makes it possible to accelerate these algorithms. The goal of this thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization possibilities. The first parallelizes readily on GPPs, whereas on GPUs it requires intensive use of atomic instructions. For the second, the parallelism is easier to express, but loop-nest scheduling on GPPs, as well as thread scheduling and a good use of shared memory on GPUs, are necessary to obtain efficient results. For this purpose, OpenMP, CUDA and OpenCL were used and compared through chosen benchmarks. The integration of these prototypes into the CIVA platform highlighted a set of issues related to code maintenance and long-term durability.
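The atomic-instruction requirement mentioned above arises because many ultrasound samples can contribute to the same image cell, so the accumulation is a scatter-add: on a GPU each colliding contribution needs an atomic add. The sketch below shows the access pattern only, using NumPy's unbuffered `np.add.at` as a sequential stand-in for atomics; the array sizes and values are invented for illustration and have nothing to do with CIVA's actual algorithms.

```python
import numpy as np

# Each sample contributes an amplitude to one image cell; several
# samples may map to the same cell, so contributions must accumulate.
image = np.zeros(16)
pixel_idx = np.array([3, 3, 7, 3, 7, 0])            # target cell per sample
amplitudes = np.array([1.0, 0.5, 2.0, 0.25, 1.0, 4.0])

# np.add.at is unbuffered: repeated indices accumulate correctly,
# the same guarantee an atomicAdd provides to concurrent GPU threads.
np.add.at(image, pixel_idx, amplitudes)
```

Note that the naive `image[pixel_idx] += amplitudes` would silently drop colliding contributions (only the last write per index survives), which is precisely the race an atomic add prevents in the parallel version.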
6

Methodologies for FPGA Implementation of Finite Control Set Model Predictive Control for Electric Motor Drives

Lao, Alex January 2019 (has links)
Model predictive control is a popular research focus in electric motor control, as it allows designers to specify optimization goals directly and exhibits fast transient response. The availability of faster and more affordable computers makes it possible to implement these algorithms in real time. Real-time implementation is not without challenges, however, as these algorithms exhibit high computational complexity. Field-programmable gate arrays are a potential solution to these high computational requirements; however, they can be time-consuming to develop for. In this thesis, we present a methodology that reduces the size and development time of field-programmable gate array based fixed-point model predictive motor controllers using automated numerical analysis, optimization and code generation. The methods can be applied to other domains where model predictive control is used. Here, we demonstrate the benefits of our methodology by using it to build a motor controller at various sampling rates for an interior permanent magnet synchronous motor, tested in simulation at up to 125 kHz. Performance is then evaluated on a physical test bench with sampling rates up to 35 kHz, limited by the inverter. Our results show that the low latency achievable in our design allows for the exclusion of the delay compensation common in other implementations, and that automated reduction of numerical precision allows the controller design to be compacted. / Thesis / Master of Applied Science (MASc)
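The control strategy underlying this record, finite-control-set MPC, exploits the fact that a two-level inverter has only a finite set of switch states: at each sampling instant the controller predicts one step ahead for every state and applies the one minimizing a tracking cost. The sketch below illustrates that enumeration for a simple RL load, a deliberately simplified stand-in for the thesis's interior-PM motor model; all parameter values are assumptions.

```python
import numpy as np

# Assumed RL-load and inverter parameters (illustrative only)
R, L, dt, Vdc = 0.5, 1e-3, 1e-4, 48.0

# Two-level three-phase inverter: 8 switch states (bits = phase legs a,b,c)
states = np.arange(8)

def voltage(s):
    """Alpha-beta voltage vector (as a complex number) for switch state s."""
    a, b, c = (s >> 2) & 1, (s >> 1) & 1, s & 1
    return (2 / 3) * Vdc * (a
                            + b * np.exp(2j * np.pi / 3)
                            + c * np.exp(-2j * np.pi / 3))

def fcs_mpc_step(i_now, i_ref):
    """Enumerate every switch state, predict the current one sample
    ahead with forward Euler, and return the cost-minimizing state."""
    costs = []
    for s in states:
        i_pred = i_now + dt * (voltage(s) - R * i_now) / L
        costs.append(abs(i_ref - i_pred))          # current-tracking cost
    return int(states[int(np.argmin(costs))])

best = fcs_mpc_step(i_now=0.0 + 0.0j, i_ref=5.0 + 0.0j)
```

Because the whole optimization is an exhaustive search over a handful of candidates, it maps naturally onto parallel FPGA logic, which is what makes the low-latency, delay-compensation-free implementation in the thesis plausible.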
