1

STRESSES AND ELASTIC CONSTANTS OF CRYSTALLINE SODIUM, FROM MOLECULAR DYNAMICS.

SCHIFERL, SHEILA KLEIN. January 1984 (has links)
The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration, and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. The fluctuation averages make a large contribution to the elastic constants, and the uncertainties in these averages are the dominant uncertainties in the elastic constants. The strictly volume-dependent terms are very large. The ensemble correction is small but significant at higher temperatures. Surprisingly, the volume derivatives of the two-body potential make large contributions to the stresses and the elastic constants. The effects of finite potential range and finite system size are discussed, as well as the effects of quantum corrections and electronic excitations. The agreement of theory and experiment is very good for the magnitudes of C₁₁ and C₁₂. The magnitude of C₄₄ is consistently small by ∼9 kbar for finite temperatures. This discrepancy is most likely due to the neglect of three-body contributions to the potential. The agreement of theory and experiment is excellent for the temperature dependences of all three elastic constants. This result illustrates a definite advantage of MD compared to lattice dynamics for conditions where classical statistics are valid. MD methods involve direct calculations of anharmonic effects; no perturbation treatment is necessary.
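The fluctuation expressions referred to above have a standard form in the MD literature; assuming the usual Squire–Holt–Hoover statement (notation is mine, not the thesis's), the isothermal elastic constants are:

```latex
% U: total potential, \epsilon: strain, \sigma: instantaneous (Born) stress,
% V: volume, N: number of atoms, T: temperature
C_{ijkl}
  = \frac{1}{V}\left\langle \frac{\partial^{2} U}{\partial\epsilon_{ij}\,\partial\epsilon_{kl}} \right\rangle
  - \frac{V}{k_{B}T}\Bigl( \langle \sigma_{ij}\sigma_{kl}\rangle - \langle\sigma_{ij}\rangle\langle\sigma_{kl}\rangle \Bigr)
  + \frac{N k_{B} T}{V}\bigl( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \bigr)
```

The second, fluctuation term is the one the abstract identifies as both a large contribution to the elastic constants and the dominant source of their uncertainty.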
2

Numerical simulations of cold atom ratchets in dissipative optical lattices

Rapp, Anthony P. 13 August 2019 (has links)
No description available.
3

Real-Time Prediction-Driven Dynamics Simulation to Mitigate Frame Time Variation

Buck, Mackinnon A 01 December 2021 (has links) (PDF)
Real-time physics engines have seen recent performance improvements through techniques like hardware acceleration and artificial intelligence. However, state-of-the-art physics simulation technology fails to account for the variation in simulation complexity over time. Sudden increases in contact frequency between simulated bodies can momentarily increase the processing time per frame. To solve this, we present a prediction-driven real-time dynamics method that uses a memory-efficient graph-based state buffer to minimize the cost of mispredictions. This buffer, which is generated by a separate thread running the physics pipeline, allows physics computation to temporarily run slower than real-time without affecting the frame rate of the host application. The main thread, whose role in dynamics computation is limited to querying the simulation state and regenerating mispredicted state, sees a significant reduction in time spent per frame on dynamics computation when our multi-threaded prediction pipeline is enabled. Thus, our technique enables interactive multimedia applications to increase the computational budget for graphics at no cost perceptible to the end user. Furthermore, our method guarantees determinism and low input latency, making it suitable for competitive games and other real-time interactive applications. We also provide a C++ API to integrate custom game logic with the prediction engine to further minimize the frequency of mispredictions.
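As a sketch of the buffering scheme described above (a hypothetical, simplified structure: the thesis exposes a C++ API and a graph-based buffer, reduced here to a dictionary keyed by frame and input):

```python
import threading

class PredictionBuffer:
    """Stores physics states computed ahead of real time by a worker thread.

    States are keyed by (frame, input_hash) so that a mispredicted input
    invalidates only the states derived from it.
    """

    def __init__(self, step_fn):
        self.step_fn = step_fn          # advances a state by one frame
        self.states = {}                # (frame, input_hash) -> state
        self.lock = threading.Lock()

    def predict(self, state, frame, predicted_inputs, horizon):
        """Worker thread: simulate `horizon` frames ahead using predicted inputs."""
        for k in range(horizon):
            inp = predicted_inputs(frame + k)
            state = self.step_fn(state, inp)
            with self.lock:
                self.states[(frame + k + 1, hash(inp))] = state

    def query(self, frame, actual_input, fallback_state):
        """Main thread: reuse a predicted state, or regenerate on mispredict."""
        with self.lock:
            hit = self.states.get((frame, hash(actual_input)))
        if hit is not None:
            return hit                   # prediction was correct: free frame
        # Misprediction: recompute just this frame on the main thread.
        return self.step_fn(fallback_state, actual_input)
```

On a correct prediction the main thread pays only a lookup; only a misprediction forces a same-frame recompute, which is what the graph layout in the thesis is designed to keep cheap.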
4

Dynamická simulace tuhých těles na programovatelných GPU / Dynamic simulation of rigid bodies using programmable GPUs

Cséfalvay, Szabolcs January 2011 (has links)
The goal of this work is to create a program which simulates the dynamics of rigid bodies and their systems using GPGPU with an emphasis on speed and stability. The result is a physics engine that uses the CUDA architecture. It runs entirely on the GPU, handles collision detection, collision response and different forces like friction, gravity, contact forces, etc. It supports spheres, rods (which are similar to cylinders), springs, boxes and planes. It's also possible to construct compound objects by connecting basic primitives.
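The engine itself runs in CUDA; as a language-neutral illustration of the collision-response step such an engine performs per contact, here is a minimal impulse resolution for two spheres (the names and the restitution model are assumptions, not taken from the thesis):

```python
import numpy as np

def resolve_sphere_collision(p1, v1, m1, r1, p2, v2, m2, r2, restitution=0.5):
    """Apply an impulse along the contact normal if two spheres overlap.

    Returns updated velocities; position correction is left to the integrator.
    """
    normal = p2 - p1
    dist = np.linalg.norm(normal)
    if dist >= r1 + r2 or dist == 0.0:
        return v1, v2                      # no contact
    normal /= dist
    rel_vel = np.dot(v2 - v1, normal)      # approach speed along the normal
    if rel_vel > 0.0:
        return v1, v2                      # already separating
    # Impulse magnitude for two bodies with inverse masses 1/m1 and 1/m2.
    j = -(1.0 + restitution) * rel_vel / (1.0 / m1 + 1.0 / m2)
    return v1 - (j / m1) * normal, v2 + (j / m2) * normal
```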
5

  • Contribution à l'intégration de la modélisation et la simulation multi-physique pour la conception des systèmes mécatroniques / Contribution to the integration of multiphysics modelling and simulation for the design of mechatronic systems

Hammadi, Moncef 12 January 2012 (has links)
The difficulty of integrating multi-physics simulation into the design of mechatronic systems stems, among other factors, from interoperability problems between simulation tools, which make multidisciplinary optimization hard to carry out. In this thesis, we developed an integrated design approach to overcome this obstacle. The approach relies on an integration platform that couples diverse modeling and simulation tools. The multi-physics behaviour of components at the detailed level is captured by meta-models, which are also used for multidisciplinary optimization of the mechatronic system's components; these meta-models likewise serve to integrate the multi-physics behaviour of components and mechatronic modules into system-level simulation. The approach was validated on the design of an electric vehicle. The conceptual modeling level was carried out with the Systems Modeling Language SysML, and verification of an acceleration performance test was performed with the Modelica modeling language. The vehicle's electric power-conversion module, including its bonding wires, was modeled in 3D CAD, and its multi-physics behaviour was verified with the finite element method. Meta-models were then built using response-surface techniques and radial-basis-function neural networks, and used to perform bi-level geometric optimizations of the power converter and the bonding wires. The electro-thermal behaviour of the power converter and the thermo-mechanical behaviour of the bonding wires were thereby integrated at system level through the meta-models. The results demonstrate the flexibility of the approach in terms of meta-model exchange and multidisciplinary optimization, yielding a substantial reduction in design time while maintaining the desired accuracy.
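As an illustration of the kind of meta-model used here, a minimal Gaussian radial-basis-function surrogate fitted to samples of an expensive simulation (a self-contained NumPy sketch; the kernel choice, shape parameter, and stand-in response are assumptions, not the thesis's actual models):

```python
import numpy as np

class RBFSurrogate:
    """Gaussian RBF meta-model: fit to (X, y) samples, then predict cheaply."""

    def __init__(self, eps=1.0):
        self.eps = eps                  # kernel shape parameter (assumed)

    def _kernel(self, A, B):
        # Pairwise squared distances between rows of A and rows of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.eps * d2)

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        # Solve K w = y for the RBF weights (tiny jitter for conditioning).
        K = self._kernel(self.X, self.X)
        self.w = np.linalg.solve(K + 1e-10 * np.eye(len(K)), np.asarray(y, float))
        return self

    def predict(self, Xnew):
        return self._kernel(np.asarray(Xnew, float), self.X) @ self.w

# Usage: replace an expensive electro-thermal FE run by a surrogate.
# X_train: sampled geometric parameters; y_train: simulated response.
X_train = np.random.rand(30, 2)
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2   # stand-in response
model = RBFSurrogate(eps=4.0).fit(X_train, y_train)
print(model.predict(np.array([[0.5, 0.5]])))
```

Once fitted, the surrogate can be evaluated thousands of times inside a bi-level optimization loop at negligible cost compared with the finite-element model it stands in for.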
6

Cable path optimization methods with cascade structures for industrial robot arms using physical simulators / 物理シミュレータを活用した産業用ロボットアームのためのカスケード構造を有するケーブル経路最適化手法に関する研究

Iwamura, Shintaro 23 March 2023 (has links)
Kyoto University / New-system doctoral course / Doctor of Engineering / Degree register Kō No. 24606 / Eng.D. No. 5112 / Shinsei||Kō||1978 (University Library) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Fumitoshi Matsuno; Professor Atsushi Matsubara; Professor Kazuhiro Izui / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DGAM
7

Thermal Bimorph Micro-Cantilever Based Nano-Calorimeter for Sensing of Energetic Materials

Kang, Seokwon May 2012 (has links)
The objective of this study is to develop a robust, portable nano-calorimeter sensor for detecting energetic materials, primarily explosives, combustible materials and propellants. A micro-cantilever sensor array is actuated thermally using a bimorph structure consisting of sub-micron gold (Au: 400 nm) and silicon nitride (Si3N4: 600 nm) thin-film layers. An array of micro-heaters is integrated with the microcantilevers at their base. On electrically activating the micro-heaters at different actuation currents, the microcantilevers undergo thermo-mechanical deformation due to the difference in coefficients of thermal expansion. This deformation is tracked by monitoring the ray reflected from a laser illuminating the individual microcantilevers (i.e., the optical-lever principle). In the presence of explosive vapors, the bending response of a microcantilever changes under the thermal stresses induced by temperature changes from adsorption and combustion reactions (catalyzed by the gold surface). Since sensor sensitivity is enhanced by optimum geometry as well as operating conditions (e.g., the temperature distribution within the microcantilever, power supply, and analyte concentration), a parametric study was performed to find optimum values by varying thickness and length along with heater power. Also, for the geometry of this study, nano-coatings of high-thermal-conductivity materials (e.g., carbon nanotubes: CNTs) over the microcantilever surface maximize the thermally induced stress, which enhances sensor sensitivity. For this purpose, CNTs are synthesized by a post-growth method over metal catalyst arrays (e.g., palladium chloride: PdCl2) pre-deposited by the dip-pen nanolithography (DPN) technique. The threshold current for differential actuation of the microcantilevers is correlated with the catalytic activity of a particular explosive (combustible vapor) over the metal (Au) catalyst and the corresponding vapor pressure. Numerical modeling is also explored to study the variation of temperature, species concentration and deflection of individual microcantilevers as a function of actuation current. Joule heating in the resistive heating elements was coupled with gaseous combustion at the heated surface to obtain the temperature profile, and hence the deflection of a microcantilever through the thermally induced stress-strain relationship. The threshold current used for specific detection and identification of individual explosive samples is predicted to depend on the chemical kinetics and the vapor pressure. The simulation results showed trends similar to the experimental bending responses of the microcantilever sensors to explosive vapors (e.g., acetone and 2-propanol) as a function of actuation current.
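The bimorph deflection mechanism described above is commonly estimated with Timoshenko's bimetallic-strip formula; a sketch under assumed room-temperature properties for Au and Si3N4 (illustrative values only — the thesis's actual model is the coupled numerical one described in the abstract):

```python
def bimorph_tip_deflection(L, t1, t2, E1, E2, a1, a2, dT):
    """Tip deflection of a two-layer cantilever heated uniformly by dT.

    Uses Timoshenko's bimetallic-strip curvature kappa, then
    delta = kappa * L^2 / 2 for a uniform small-deflection bend.
    Layer 1 / layer 2: thicknesses t, Young's moduli E, expansion coeffs a.
    """
    m = t1 / t2
    n = E1 / E2
    h = t1 + t2
    kappa = (6.0 * (a2 - a1) * dT * (1.0 + m) ** 2) / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )
    return 0.5 * kappa * L ** 2   # sign encodes the bending direction

# Assumed properties (not from the thesis):
# Au: E ~ 79 GPa, alpha ~ 14.2e-6 /K; Si3N4: E ~ 250 GPa, alpha ~ 3.3e-6 /K.
delta = bimorph_tip_deflection(L=200e-6, t1=400e-9, t2=600e-9,
                               E1=79e9, E2=250e9,
                               a1=14.2e-6, a2=3.3e-6, dT=10.0)
print(f"tip deflection ~ {delta * 1e6:.2f} um")
```

With these assumed numbers a 10 K rise bends a 200 um cantilever by a few micrometers, which is the scale of signal the optical-lever readout resolves.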
8

Soluções numéricas em um modelo de tecidos baseado na superfície de Cosserat / Numerical solutions to a cloth model based on the Cosserat surface

Monteiro, Leandro de Pinho 12 August 2018 (has links)
Advisor: Wu Shin-Ting / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Physics-based models are currently in wide use for computer simulation of garments and fabrics in general. They can be divided into two approaches: particle mechanics and continuum mechanics. Particle mechanics is today the most widely used approach, with advantages such as simplicity of formulation and good computational performance. Continuum mechanics is recognized as the more physically accurate approach, but it carries a high computational cost. This low performance is largely due to the numerical solution used to obtain the successive states of the cloth over time, since the formulation requires solving systems of equations in a large number of variables. The solutions found in the literature are finite-element or finite-difference techniques with semi-implicit or implicit integration, all of nonlinear computational complexity. This work therefore analyzes the suitability of explicit methods, which have linear behavior, for a physical model founded on the theory of the Cosserat surface, whose authors claim to have proved that it can produce natural folds under pure compression forces. Discretizing this model spatially with the finite-difference technique, explicit methods are evaluated for solving the resulting ordinary differential equations. Based on this evaluation, the Verlet method was selected, and a rectangular-mesh dynamics simulator with an interactive graphical interface was implemented. This enabled a practical validation of the model, demonstrating its superiority in producing folds compared with other existing models. / Master's program in Computer Engineering / Master in Electrical Engineering
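The Verlet scheme selected above updates positions from the two previous states without storing velocities explicitly; a minimal sketch for one mass point of a cloth mesh (the spring force and constants are illustrative assumptions, not the thesis's model):

```python
def verlet_step(x, x_prev, accel, dt):
    """Position (Stoermer-)Verlet: x_{n+1} = 2 x_n - x_{n-1} + a(x_n) dt^2.

    Explicit and O(dt^2) accurate; cost per step is linear in mesh size.
    """
    return tuple(2.0 * xi - xpi + ai * dt * dt
                 for xi, xpi, ai in zip(x, x_prev, accel(x)))

# Illustrative force: a node pulled toward the origin by a spring, plus gravity.
def accel(x, k=50.0, g=-9.81):
    ax, ay, az = (-k * xi for xi in x)
    return (ax, ay, az + g)

x_prev = x = (0.0, 0.0, 0.1)           # start at rest
for _ in range(100):
    x, x_prev = verlet_step(x, x_prev, accel, dt=0.01), x
print(x)
```

Applied independently to every node of the rectangular mesh, each step costs a constant amount of work per node, which is the linear behavior the dissertation contrasts with implicit solvers.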
9

Evaluating Visual Quality of Secondary Motion Simulation Techniques: A Survey on Stylized 3D Game Character Cloth and Hair

Burman, Adam January 2022 (has links)
Background. Secondary motion is a principle of animation: movement that occurs as a result of other movement, such as swinging hair or clothes. In 3D animation, such as in games, it is often simulated instead of animated manually. In game projects with time limitations, it is useful to know to what degree these simulations impact visual quality in order to decide whether they should be prioritized, and how the results of various methods compare to each other. To simulate in real time means that physics simulations run during gameplay. Baked animations, on the other hand, are simulations that have already been processed and saved as animation data; they are less dynamic but also less performance-intensive. Objectives. The aim of this thesis is to evaluate the impact of three sets of animations by conducting a survey where each set is compared. The three sets are: animations featuring real-time simulations, baked simulations, and animations without simulation. The goal is to acquire a metric from the comparisons that gives insight into the visual-quality impact of each method. Methods. Three animation sets were created. A survey was then conducted using a questionnaire featuring side-by-side video comparisons of the animation sets. The videos showed a stylized character running, walking, or jumping through an empty environment. Pairwise similarity judgements were collected by asking participants to rate each video against the others. The questionnaire results were analyzed using a method that is part of the analytic hierarchy process: the data from each comparison were averaged, placed into pairwise comparison matrices, and used to calculate priority vectors. The consistency of the comparisons was also calculated. Results. The priority vectors show the ratios by which each animation set was preferred over the others. In the priority vector for all animations combined, the set without simulations ranked at twenty-four percent, the real-time set at thirty-three percent, and the baked set highest at forty-three percent. The comparisons were calculated to have very high consistency, which strengthens the result. Conclusions. The results show the impact of adding simulated secondary motion. The simulations appear to improve visual quality, but the margin is not extreme. The calculated ratios could be used to argue for or against a game project's prioritization of secondary-motion simulation, depending on the project's time constraints and access to preexisting simulation methods. It should be noted that the video-comparison format did not showcase all the advantages of each method, such as authoring accessibility, technical performance, or dynamicity. As such, it is uncertain how fair the comparisons of the baked and real-time simulations are in a more general sense. Nevertheless, the results are considered to give at least a partial insight into how these methods compare.
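The analysis step described above derives a priority vector from each pairwise comparison matrix; a minimal sketch using the principal eigenvector and Saaty's consistency ratio (the matrix values below are made up, not the survey's data):

```python
import numpy as np

def ahp_priorities(M):
    """Priority vector and consistency ratio for a pairwise comparison
    matrix M (reciprocal: M[i][j] = preference of option i over option j)."""
    M = np.asarray(M, float)
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalized priorities
    n = len(M)
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)  # Saaty's random indices
    return w, (ci / ri if ri else None)

# Hypothetical 3x3 comparison of {no simulation, real-time, baked}:
M = [[1,   1/2, 1/3],
     [2,   1,   1/2],
     [3,   2,   1  ]]
w, cr = ahp_priorities(M)
print(w, cr)   # priorities sum to 1; cr << 0.1 indicates high consistency
```

A consistency ratio well below 0.1 corresponds to the "very high consistency" the thesis reports, which is what licenses reading the priority vector as a preference metric.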
10

PaVo un tri parallèle adaptatif / PaVo. An Adaptive Parallel Sorting Algorithm.

Durand, Marie 25 October 2013 (has links)
Demanding gamers acquire, as early as they can, graphics cards able to satisfy their thirst for immersion in games whose precision, realism and interactivity keep intensifying over time. Since the advent of graphics cards dedicated to general-purpose computation, gamers are no longer their only customers. We first examine the benefit of these specific parallel architectures for large-scale physics simulations. This study lets us highlight one bottleneck in particular that limits simulation performance. Consider a typical case: cracks in a complex structure such as a reinforced-concrete dam can be modelled by a set of particles, with the cohesion of the simulated matter ensured by the interactions between them. Each particle is represented in memory by a set of physical parameters that must be consulted for every force computation between two particles. For computations to be fast, the data of particles that are close in space must therefore be close in memory. Otherwise, the number of cache misses rises and the memory-bandwidth limit may be reached, especially in parallel, bounding performance. The challenge is to maintain this data organization in memory throughout the simulation despite particle movements. Standard sorting algorithms are not suitable because they systematically sort all the elements; moreover, they work on dense structures, which implies many data movements in memory. We propose PaVo, an adaptive sort, meaning it exploits pre-existing order in a sequence. In addition, PaVo maintains gaps in the structure, distributed so as to reduce the number of memory movements needed. We present an extensive experimental study and compare the results against several well-known sorts. Reducing memory accesses matters even more for large-scale simulations on parallel architectures. We detail a parallel version of PaVo and evaluate its benefit. To account for application irregularity, the workload is balanced dynamically by work-stealing. We distribute data in memory automatically so as to exploit hierarchical architectures: tasks are pre-assigned to cores according to this distribution, and the steal engine is adapted to favor steals of tasks whose data are close in memory.
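To illustrate the "gaps" idea above (in the spirit of library sort; this sketch is not PaVo itself, and all names are invented): keeping empty slots interspersed lets a displaced element be re-inserted by shifting only up to the nearest gap instead of the whole tail.

```python
GAP = None  # empty-slot marker

def with_gaps(seq, every=4):
    """Spread a gap after every `every` elements of seq."""
    out = []
    for i, x in enumerate(seq):
        out.append(x)
        if (i + 1) % every == 0:
            out.append(GAP)
    return out

def reinsert(arr, i):
    """Move arr[i] left to its sorted position, shifting only as far as
    the nearest gap - cheap when the sequence is already mostly sorted."""
    x = arr[i]
    arr[i] = GAP
    j = i
    while j > 0 and arr[j - 1] is not GAP and arr[j - 1] > x:
        arr[j] = arr[j - 1]      # shift one slot right, into the freed cell
        j -= 1
    arr[j] = x

# Nearly sorted particle keys with gaps; 5 and 6 drifted behind 7.
arr = with_gaps([1, 2, 3, 4, 7, 5, 6, 8])
reinsert(arr, arr.index(5))      # each re-insertion stops at the nearest gap
reinsert(arr, arr.index(6))
print(arr)                       # [1, 2, 3, 4, None, 5, 6, 7, None, 8]
```

The shifts above stop at the gap left after element 4, which is the mechanism by which a mostly-sorted particle array is maintained without moving its dense tail.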
