11

Dynamic response of laterally-loaded piles

Thammarak, Punchet 20 October 2009 (has links)
The laterally-loaded pile has long been a topic of research interest. Several models of the soil surrounding a pile have been developed for simulating lateral pile behavior, ranging from simple spring-and-dashpot models to sophisticated three-dimensional finite-element models. However, results from the available pile-soil models are not accurate because of inherent approximations or constraints. In the spring-and-dashpot representation, the real and imaginary stiffnesses are calculated by idealizing the soil domain as a series of plane-strain slices, which leads to unrealistic pile behavior at low frequencies, while three-dimensional finite-element analysis is computationally very demanding. This dissertation therefore seeks to contribute procedures that are computationally cost-effective while keeping the accuracy of the computed response identical or close to that of the three-dimensional finite-element solution. Because the purely elastic soil-displacement variation in the azimuthal direction is known, the surrounding soil can be formulated as an equivalent one-dimensional model, leading to a significant reduction in computational cost. The pile with the conventional soil-slice model is explored first. Next, models with shear stresses between soil slices, both including and neglecting the vertical soil displacement, are investigated. Excellent agreement between the proposed models and three-dimensional finite-element solutions is achieved at only a small additional computational cost.
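For reference, a generic dynamic-Winkler (spring-and-dashpot) idealization of the kind this abstract refers to can be written as follows; this is a standard textbook form, not necessarily the exact formulation used in the dissertation.

```latex
% Generic beam-on-dynamic-Winkler-foundation model (illustrative only):
% each soil "slice" reacts with a complex, frequency-dependent impedance.
\[
  K(\omega) = k(\omega) + i\,\omega\, c(\omega)
\]
\[
  EI\,\frac{\mathrm{d}^4 U(z)}{\mathrm{d}z^4}
  \;-\; m\,\omega^2\, U(z)
  \;+\; K(\omega)\, U(z) \;=\; 0 ,
  \qquad u(z,t) = U(z)\, e^{i\omega t}
\]
```

Here EI is the pile flexural rigidity, m its mass per unit length, and k(ω) and ωc(ω) are the real and imaginary parts of the soil stiffness mentioned above.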
12

Redução do custo computacional do algoritmo RRT através de otimização por eliminação / Reduction in the computational cost of the RRT algorithm through optimization by elimination

Vieira, Hiparco Lins 15 July 2014 (has links)
The application of sampling-based techniques in path-planning algorithms has become more widespread year by year. In this group, one of the most widely used algorithms is the Rapidly-exploring Random Tree (RRT), which relies on incremental sampling of configurations to compute the robot's path efficiently while avoiding obstacles. Many efforts have been made to reduce the computational cost of the RRT, targeting in particular applications that require quick responses, e.g., dynamic environments. One of the dilemmas posed by the RRT arises in its motion-primitive generation step. If many primitives are generated, enabling the robot to perform a broad range of basic movements, a significant computational cost is incurred. On the other hand, when only a few primitives are generated, and thus only a few basic movements are allowed, the robot may be unable to find a solution to the problem even if one exists. Motivated by this problem, an optimized method for primitive generation is proposed. The method is compared with the traditional and random primitive-generation methods, considering not only the computational cost of each but also the quality of the solutions obtained. The optimized method is applied to the RRT algorithm, which is then used in a case study in a dynamic environment; there, the modified RRT is evaluated in terms of the computational cost of its planning and replanning. The simulations are carried out in two simulators, one written in Python and one in Matlab, and are used to assess the effectiveness and efficiency of the proposed algorithm.
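A minimal sketch of an RRT with a fixed set of motion primitives is given below to illustrate the trade-off described in the abstract; the names (`rrt`, `primitives`, `collision_free`) and all numeric values are illustrative assumptions, not the thesis implementation or its elimination-based optimization.

```python
import math
import random

# Minimal RRT sketch (illustrative only, not the thesis implementation).
# A fixed set of motion primitives -- (turn angle, step length) pairs -- is
# used to extend the tree; more primitives give better coverage but raise
# the computational cost of every extension.

def rrt(start, goal, primitives, collision_free, iterations=2000, goal_tol=0.5):
    # Each node is (x, y, heading); parents maps a node to its predecessor.
    nodes = [start]
    parents = {start: None}
    for _ in range(iterations):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        # Nearest tree node to the sample (position only).
        near = min(nodes, key=lambda n: (n[0] - sample[0]) ** 2 + (n[1] - sample[1]) ** 2)
        best, best_d = None, float("inf")
        for dtheta, step in primitives:            # try every primitive
            theta = near[2] + dtheta
            cand = (near[0] + step * math.cos(theta),
                    near[1] + step * math.sin(theta),
                    theta)
            d = (cand[0] - sample[0]) ** 2 + (cand[1] - sample[1]) ** 2
            if collision_free(near, cand) and d < best_d:
                best, best_d = cand, d
        if best is None:
            continue
        nodes.append(best)
        parents[best] = near
        if math.hypot(best[0] - goal[0], best[1] - goal[1]) < goal_tol:
            path = [best]                          # reconstruct the path
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
    return None

# Usage: 8 primitives in an obstacle-free toy world.
primitives = [(a, 0.5) for a in (-0.6, -0.3, 0.0, 0.3, 0.6)] + [(a, 1.0) for a in (-0.3, 0.0, 0.3)]
path = rrt((1.0, 1.0, 0.0), (9.0, 9.0), primitives, lambda a, b: True)
```

Each extension tries every primitive, so doubling the number of primitives roughly doubles the per-iteration cost, which is the dilemma the proposed generation method addresses.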
14

Otimização de um modelo de propagação com múltiplos obstáculos na troposfera utilizando algoritmo genético / Optimization of a propagation model with multiple obstacles in the troposphere using genetic algorithms

Vilanova, Antonio Carlos 01 February 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This thesis presents a methodology for optimizing parameters of a model of electromagnetic-wave propagation in the troposphere. The propagation model is based on parabolic equations solved by the split-step Fourier method; it performs well over irregular terrain and in situations where the refractivity varies with distance. Searching for optimal parameters in models involving electromagnetic waves requires a large computational cost, especially over large search spaces. To reduce the computational cost of determining the parameter values that maximize the field strength at a given observer position, an application called EP-AG was developed. The application has two main modules. The first is the propagation module, which estimates the electric field over a given irregular terrain with refractivity varying with distance. The second is the optimization module, which finds the antenna height and operating frequency that maximize the field at a given position on the terrain. Initially, only the propagation module was run, using different terrain and refractivity profiles; the results, presented as contour plots and field profiles, demonstrated the efficiency of the model. Subsequently, to evaluate the optimization by genetic algorithms, two quite different configurations were used with respect to terrain irregularity, refractivity profile, and search-space size. In each configuration an observation point was chosen at which the electric-field value served as the comparison metric, and the optimal parameter values were determined both by the brute-force method and by genetic-algorithm optimization. The results showed that for small search spaces there was practically no reduction in computational cost; for large search spaces, however, the reduction was very significant, with relative errors much smaller than those obtained by the brute-force method. / Doctor of Science
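To illustrate the optimization module, the sketch below runs a simple genetic algorithm over antenna height and operating frequency; `field_strength` is a hypothetical stand-in for the parabolic-equation propagation module, and all bounds and GA settings are assumptions rather than EP-AG's actual values.

```python
import random

# Sketch of the optimization module only (illustrative; the real EP-AG couples
# a split-step Fourier parabolic-equation model). `field_strength` is a
# placeholder for the electric field predicted at the observation point.

def field_strength(height_m, freq_mhz):
    # Hypothetical stand-in for the propagation module.
    return -((height_m - 22.0) ** 2) / 50.0 - ((freq_mhz - 3100.0) ** 2) / 1e5

BOUNDS = {"height_m": (5.0, 50.0), "freq_mhz": (1000.0, 6000.0)}

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(ind, rate=0.2):
    child = dict(ind)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.05 * (hi - lo))))
    return child

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def genetic_search(pop_size=30, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: field_strength(ind["height_m"], ind["freq_mhz"]), reverse=True)
        elite = pop[: pop_size // 3]                 # keep the fittest third
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=lambda ind: field_strength(ind["height_m"], ind["freq_mhz"]))

best = genetic_search()   # dict with the optimized height and frequency
```

The GA evaluates only a population of candidate (height, frequency) pairs per generation, which is where the saving over a brute-force sweep of a large search space comes from.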
15

[pt] PROPAGAÇÃO DE INCERTEZAS VIA EXPANSÃO POR CAOS POLINOMIAL EM SIMULAÇÃO DE RESERVATÓRIOS DE PETRÓLEO / [en] UNCERTAINTY PROPAGATION USING POLYNOMIAL CHAOS EXPANSION IN OIL RESERVOIR MODELS

17 November 2021 (has links)
This work investigates reducing the computational cost of computing the main statistics of the outputs of uncertainty-propagation models. To this end, we present an alternative to the traditional Monte Carlo method, called Polynomial Chaos, which is suitable for problems in which the number of uncertain variables is not very large. In the Polynomial Chaos method, the expectation and variance of the simulator output are estimated directly, as functions of the probability distributions of the uncertain input variables. The main advantage of Polynomial Chaos is that the number of evaluation points needed for a good estimate of the output statistics is smaller than with Monte Carlo. Applications of Polynomial Chaos to oil-reservoir simulation are presented; since this is a preliminary implementation, only propagation problems with at most four uncertain variables are treated, although the method can be applied to higher-dimensional problems. The main results are applied to two synthetic oil-reservoir models.
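In its standard form (a sketch of the general method, not necessarily the expansion order or basis used in this work), the polynomial chaos surrogate and the resulting statistics read:

```latex
% Polynomial chaos surrogate of the simulator output Y (standard form):
\[
  Y \;\approx\; \sum_{k=0}^{P} c_k\, \Psi_k(\boldsymbol{\xi}),
  \qquad \Psi_0 \equiv 1, \quad
  \langle \Psi_j \Psi_k \rangle = 0 \ \ (j \neq k)
\]
% Mean and variance follow directly from the coefficients:
\[
  \mathbb{E}[Y] = c_0,
  \qquad
  \operatorname{Var}[Y] = \sum_{k=1}^{P} c_k^{2}\,\langle \Psi_k^{2} \rangle
\]
```

Because the statistics come straight from the coefficients, only enough simulator runs to estimate the c_k are needed, rather than the large samples required by Monte Carlo.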
16

Lid driven cavity flow using stencil-based numerical methods

Juujärvi, Hannes, Kinnunen, Isak January 2022 (has links)
In this report, the regular finite difference method (FDM) and a least-squares radial basis function-generated finite difference method (RBF-FD-LS) are used to solve the two-dimensional incompressible Navier-Stokes equations for the lid-driven cavity problem. The Navier-Stokes equations are solved in the stream function-vorticity formulation. The purpose of the report is to compare FDM and RBF-FD-LS with respect to accuracy and computational cost. Both methods were implemented in MATLAB and the problem was solved for Reynolds numbers of 100, 400, and 1000. We present the solutions obtained as well as the results of the comparison, discuss the results, and draw conclusions. We conclude that RBF-FD-LS is more accurate when the grid spacing is held constant, while it costs more than FDM at comparable accuracy.
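As an illustration of the stream function-vorticity FDM baseline (a minimal sketch under assumed grid size, Reynolds number, and iteration counts; not the report's MATLAB implementation or its RBF-FD-LS counterpart):

```python
import numpy as np

# Minimal stream function-vorticity FDM sketch for the lid-driven cavity
# (illustrative only; Re, grid size, time step and iteration counts are
# assumptions, not the values studied in the report).

N, Re, U_lid = 41, 100.0, 1.0          # grid points per side, Reynolds number, lid speed
h = 1.0 / (N - 1)
dt = 0.2 * Re * h * h / 4.0            # conservative explicit time step
psi = np.zeros((N, N))                 # stream function, psi[i, j] at (x_i, y_j)
omg = np.zeros((N, N))                 # vorticity

for step in range(5000):
    # 1) Poisson solve for psi: laplacian(psi) = -omega (a few Jacobi sweeps).
    for _ in range(30):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * omg[1:-1, 1:-1])
    # 2) Wall vorticity from Thom's formula (psi = 0 on all walls).
    omg[:, -1] = -2.0 * psi[:, -2] / h**2 - 2.0 * U_lid / h   # moving lid (y = 1)
    omg[:, 0]  = -2.0 * psi[:, 1]  / h**2                     # bottom wall
    omg[0, :]  = -2.0 * psi[1, :]  / h**2                     # left wall
    omg[-1, :] = -2.0 * psi[-2, :] / h**2                     # right wall
    # 3) Explicit Euler step of the vorticity transport equation.
    u =  (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)           # u = d(psi)/dy
    v = -(psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)           # v = -d(psi)/dx
    domg_dx = (omg[2:, 1:-1] - omg[:-2, 1:-1]) / (2 * h)
    domg_dy = (omg[1:-1, 2:] - omg[1:-1, :-2]) / (2 * h)
    lap = (omg[2:, 1:-1] + omg[:-2, 1:-1] + omg[1:-1, 2:] + omg[1:-1, :-2]
           - 4 * omg[1:-1, 1:-1]) / h**2
    omg[1:-1, 1:-1] += dt * (-u * domg_dx - v * domg_dy + lap / Re)
```

An RBF-FD-LS variant would replace the fixed five-point stencils above with stencil weights generated from radial basis functions, which is the comparison the report carries out.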
17

Sheet Metal Forming Simulations with Elastic Dies: Emphasis on Computational Cost

Allesson, Sara January 2019 (has links)
The car industry produces many car parts by sheet metal forming, where one of the most time-consuming phases is the development and manufacturing of new forming tools. Today, when a new tool is evaluated for usability, a forming simulation is conducted to predict possible failures before manufacturing. The assumption is then that the tools are rigid and the only deformable part is the sheet metal itself. This is not the case, however, since the tools also deform during the forming process. Previous research, which forms the basis of this thesis, included a model with only elastic tools and showed results of high accuracy compared with a rigid setup. That simulation is, however, not practical for daily use, since it requires high computational power and has a long simulation time.  The aim and scope of this thesis is to evaluate how the computational cost of a sheet metal forming simulation with elastic tools can be reduced, using the software LS-DYNA. A small deviation in the forming result is acceptable, and the aim is to run the simulation with a 50-75 % reduction in time on fewer cores than the approximately 14 hours and 800 CPUs that the simulation requires today. The first step is to alter the geometry of the tools and evaluate the impact on the deformations of the blank: the elastic solid parts that undergo only small deformations are deleted and replaced by rigid surfaces, making the model partly elastic. Next, different decomposition methods are studied to determine which makes the simulation run faster. Finally, a scaling analysis is conducted to determine how much computational power should be used to run the simulations as efficiently as possible, and which part of the simulation affects the simulation time the most. The major-strain results of the partly elastic model correlate closely with those of the fully elastic model, as well as with production measurements of a formed blank. The computational time is reduced by over 90 % when using approximately 65 % of the initial computational power; if the simulations are run with even fewer cores, about 10 % of the initial number of CPUs, the simulation time is still reduced by over 70 %. The conclusion of this work is that a partly elastic sheet metal forming simulation can be run much more efficiently than a fully elastic model, without compromising the reliability of the forming results. This is achieved by reducing the number of elements, evaluating the decomposition method, and conducting a scaling analysis of how efficiently the computational power is used. / Reduced Lead Time through Advanced Die Structure Analysis - Swedish innovation agency Vinnova
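A rough CPU-hour comparison based on the approximate figures quoted above (purely illustrative; the wall times are only bounded by the quoted "over 90 %" and "over 70 %" reductions):

```python
# Rough CPU-hour comparison from the approximate figures quoted above
# (14 h on 800 cores for the fully elastic reference run); illustrative only.
ref_cores, ref_hours = 800, 14.0
ref_cpu_hours = ref_cores * ref_hours                    # ~11,200 CPU-hours

# Partly elastic model, ~65 % of the cores, >90 % shorter wall time:
cores_a, hours_a = 0.65 * ref_cores, 0.10 * ref_hours
print(cores_a * hours_a)                                 # ~728 CPU-hours

# Partly elastic model, ~10 % of the cores, >70 % shorter wall time:
cores_b, hours_b = 0.10 * ref_cores, 0.30 * ref_hours
print(cores_b * hours_b)                                 # ~336 CPU-hours
```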
18

Pore-Scale Simulation of Cathode Catalyst Layers in Proton Exchange Membrane Fuel Cells (PEMFCs)

ZHENG, WEIBO 11 July 2019 (has links)
No description available.
19

Optimisation, analyse et comparaison de méthodes numériques déterministes par la dynamique des gaz raréfiés / Optimization, analysis and comparison of deterministic numerical methods for rarefied gas dynamics

Herouard, Nicolas 05 December 2014 (has links)
During the atmospheric re-entry of a space vehicle, the rarefied air flow around the body is governed by a kinetic model derived from the Boltzmann equation, which describes the evolution of a distribution function of gas molecules in phase space, a six-dimensional space in the general case. A deterministic numerical simulation of this flow therefore requires large computational resources, both in memory storage and in CPU time. The aim of this work is to reduce those resources, using two approaches. The first is a method for optimizing the size of the discrete velocity grid used in the computation by predicting the shape of the distribution functions in velocity space, under the assumption that the gas is close to thermodynamic equilibrium. The second is an attempt to exploit the asymptotic-preservation properties of Discontinuous Galerkin schemes, already established for linear neutron transport, which allow the effects of kinetic boundary layers to be captured even when they are not resolved by the mesh, whereas classical methods (such as Finite Volumes) require very refined meshes along the direction normal to the walls. In a final part, we compare the performance of these Discontinuous Galerkin schemes with several classical Finite Volume schemes, applied to the BGK model in a simple case, paying particular attention to their near-wall behavior and numerical boundary conditions.
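For reference, the BGK relaxation model referred to above is usually written in the following standard form (with relaxation time τ and the local Maxwellian built from the moments of f):

```latex
% Standard BGK relaxation model (usual form):
\[
  \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  \;=\; \frac{1}{\tau}\bigl(\mathcal{M}[f] - f\bigr),
  \qquad
  \mathcal{M}[f](\mathbf{v}) \;=\;
  \frac{\rho}{(2\pi R T)^{3/2}}
  \exp\!\Bigl(-\frac{\lvert \mathbf{v}-\mathbf{u}\rvert^{2}}{2 R T}\Bigr)
\]
% rho, u, T are the density, bulk velocity and temperature (moments of f),
% R is the specific gas constant, and tau is the relaxation time.
```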
20

Multi-level Decoupled Optimization of Wind Turbine Structures Using Coefficients of Approximating Functions as Design Variables

Lee, Jin Woo January 2017 (has links)
No description available.
