  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
671

Extrémní učící se stroje pro předpovídání časových řad / Extreme learning machines for time series prediction

Zmeškal, Jiří January 2018 (has links)
This thesis examines the use of extreme learning machines and echo state networks for time series forecasting, with the possibility of GPU acceleration. Such predictions are part of nearly everyone's daily life through applications in weather forecasting, stock market prediction, power consumption forecasting, and many more. The thesis first familiarizes the reader with the theoretical basis of extreme learning machines and echo state networks, which randomly generate the majority of the neural network's parameters and thereby avoid iterative training. Secondly, it demonstrates the use of programming tools, such as ND4J and the CUDA toolkit, to create the author's own programs. Finally, the prediction capability and the benefit of GPU acceleration are tested.
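The core idea summarized above, randomly generated hidden-layer parameters with only the linear output layer trained, can be sketched in a few lines of NumPy. This is a generic illustration, not the thesis's ND4J/CUDA implementation; the window length, hidden size, and regularization are illustrative choices.

```python
import numpy as np

def elm_train(X, y, hidden=64, seed=0, reg=1e-6):
    """Train a single-hidden-layer extreme learning machine.
    Input weights and biases are drawn at random and never updated;
    only the linear output layer is solved, via ridge least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights (fixed)
    b = rng.normal(size=hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# one-step-ahead forecasting on a noiseless sine wave,
# using a sliding window of the last 8 samples as features
t = np.arange(500) * 0.1
series = np.sin(t)
lag = 8
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]
W, b, beta = elm_train(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)   # forecast error stays small on held-out windows
```

Because no iterative optimization is involved, training reduces to one matrix solve, which is what makes the method attractive for GPU acceleration.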
672

Digitální metody zpracování trojrozměrného zobrazení v rentgenové tomografii a holografické mikroskopii / The Three-Dimensional Digital Imaging Methods for X-ray Computed Tomography and Digital Holographic Microscopy

Kvasnica, Lukáš January 2015 (has links)
This dissertation deals with methods for processing image data in X-ray microtomography and digital holographic microscopy. The work aims to significantly accelerate algorithms for tomographic reconstruction and for image reconstruction in holographic microscopy by means of optimization and the use of massively parallel GPUs. In the field of microtomography, new GPU (graphics processing unit) accelerated implementations of filtered back projection and of back projection filtration of derived data are presented, along with a technique for orientation normalization and evaluation of 3D tomographic data. In the part devoted to holographic microscopy, the individual steps of the complete image processing procedure are described. This part introduces a new, original technique for phase unwrapping and for correcting image phase damaged by optical vortices in the wrapped phase. The implementation of methods for compensating phase deformation and for tracking cells is then described. In conclusion, the Q-PHASE software is briefly introduced: a complete bundle of all the algorithms needed for holographic microscope control and holographic image processing.
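The filtered back projection step mentioned above can be illustrated with a small CPU-only NumPy sketch. The dissertation's contribution is the GPU-accelerated version; this toy instead reconstructs a disc from its analytic parallel-beam sinogram and makes no claim about the actual implementation.

```python
import numpy as np

def ramp_filter(sino):
    """Apply the ramp filter to each projection row in the Fourier domain."""
    n = sino.shape[1]
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))

def backproject(sino, angles, size):
    """Smear each filtered projection back across the image along its angle."""
    recon = np.zeros((size, size))
    xs = np.arange(size) - size / 2 + 0.5
    X, Y = np.meshgrid(xs, xs)
    for proj, theta in zip(sino, angles):
        s = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate per pixel
        idx = np.clip(np.round(s + size / 2).astype(int), 0, sino.shape[1] - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles)

# analytic parallel-beam sinogram of a centered disc of radius 20:
# the line integral at detector offset s is 2*sqrt(r^2 - s^2)
size, r = 128, 20.0
angles = np.linspace(0, np.pi, 180, endpoint=False)
s = np.arange(size) - size / 2 + 0.5
proj = 2.0 * np.sqrt(np.maximum(r**2 - s**2, 0.0))
sino = np.tile(proj, (len(angles), 1))              # same profile at every angle
recon = backproject(ramp_filter(sino), angles, size)
```

The backprojection loop over angles is embarrassingly parallel over pixels, which is precisely the structure the GPU implementations exploit.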
673

Vizualizace značených buněk modelového organismu / Visualization of Marked Cells of a Model Organism

Kubíček, Radek Unknown Date (has links)
This master's thesis focuses on volumetric data rendering and on highlighting and visualizing selected cells of model organisms. The data are captured by a confocal deconvolution microscope; the input forms one large volumetric block composed of separate slices. This block is rendered by a suitable method, and the cells marked by GFP (Green Fluorescent Protein) or by chlorophyll fluorescence are then identified and visualized. The principal aim of this work is to find an effective method for this highlighting, ideally one that works without a manual check. Given the structure of the data, a fully automatic method proved hard to achieve, so a method requiring manual intervention suffices. The last step is to embed the results of this work into the FluorCam application, the data visualizer for the confocal deconvolution microscope.
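As an illustration of what an automatic highlighting step could look like, a classic global approach is Otsu's threshold on the fluorescence intensities. This is a generic sketch under synthetic data, not the method or the FluorCam code from the thesis.

```python
import numpy as np

def otsu_threshold(vol, bins=256):
    """Pick the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(vol, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # cumulative weight of the dim class
    m = np.cumsum(p * centers)             # cumulative first moment
    mt = m[-1]                             # global mean intensity
    # between-class variance; guard against empty classes at the ends
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

# synthetic volume: dim background plus one bright "cell" blob
rng = np.random.default_rng(0)
vol = rng.normal(0.2, 0.03, size=(32, 32, 32))
vol[10:20, 10:20, 10:20] = rng.normal(0.8, 0.03, size=(10, 10, 10))
t = otsu_threshold(vol)
mask = vol > t                             # voxels to highlight
```

On real confocal stacks, uneven illumination and autofluorescence are what make such a purely global criterion insufficient, motivating the manual fallback described above.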
674

On GPU Assisted Polar Decoding : Evaluating the Parallelization of the Successive Cancellation Algorithm using Graphics Processing Units / Polärkodning med hjälp av GPU:er : En utvärdering av parallelliseringsmöjligheterna av Successive Cancellation-algoritmen med hjälp av grafikprocessorer

Nordqvist, Siri January 2023 (has links)
In telecommunication, messages sent through a wireless medium often experience noise interfering with the signal in a way that corrupts the messages. As the demand for high throughput in mobile networks increases, algorithms that can detect and correct these corrupted messages quickly and accurately are of interest to the industry. Polar codes have been chosen by the Third Generation Partnership Project as the error correction code for 5G New Radio control channels. This thesis work investigated whether the polar code Successive Cancellation (SC) algorithm could be parallelized and whether a graphics processing unit (GPU) can be used to optimize its execution time. The SC algorithm was enhanced with tree pruning and with GPU support to leverage parallelization. The difference in execution time between the concurrent and sequential versions of the SC algorithm, with and without tree pruning, was evaluated. The tree-pruning SC algorithm almost always offered shorter execution times than the SC algorithm without tree pruning. However, GPU support did not reduce the execution time in these tests; based on these results, it is therefore not certain that a GPU can improve this type of enhanced SC algorithm.
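For concreteness, the successive cancellation recursion the thesis parallelizes can be written down compactly as a plain sequential CPU sketch with min-sum LLR updates. The frozen-bit pattern below is an arbitrary example, and this is not the thesis's GPU code.

```python
import numpy as np

def polar_encode(u):
    """Polar transform in natural bit order: x = (enc(u1) XOR enc(u2), enc(u2))."""
    if len(u) == 1:
        return list(u)
    h = len(u) // 2
    left, right = polar_encode(u[:h]), polar_encode(u[h:])
    return [a ^ b for a, b in zip(left, right)] + right

def sc_decode(llr, frozen):
    """Successive cancellation; returns (decoded u-domain bits, re-encoded codeword)."""
    if len(llr) == 1:
        bit = 0 if frozen[0] else int(llr[0] < 0)
        return [bit], [bit]
    h = len(llr) // 2
    # f-step (min-sum): LLR of the XOR of the two code halves
    f = [np.sign(llr[i]) * np.sign(llr[i + h]) * min(abs(llr[i]), abs(llr[i + h]))
         for i in range(h)]
    u_l, x_l = sc_decode(f, frozen[:h])
    # g-step: fold in the already-decoded left half as a known partial sum
    g = [llr[i + h] + (1 - 2 * x_l[i]) * llr[i] for i in range(h)]
    u_r, x_r = sc_decode(g, frozen[h:])
    return u_l + u_r, [a ^ b for a, b in zip(x_l, x_r)] + x_r

# round trip over a noiseless BPSK channel, N = 8, rate 1/2
frozen = [1, 1, 1, 0, 1, 0, 0, 0]          # 1 marks a frozen-to-zero position
u = [0, 0, 0, 1, 0, 0, 1, 1]               # info bits sit at the non-frozen slots
x = polar_encode(u)
llr = [4.0 * (1 - 2 * b) for b in x]       # positive LLR means "bit is 0"
u_hat, _ = sc_decode(llr, frozen)
```

The sequential dependency is visible here: every g-step waits for the left subtree's decisions, which is exactly why naive SC is hard to parallelize and why tree pruning of rate-0/rate-1 subtrees helps.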
675

Real-time 3D-based Virtual Eye Contact for Video Communication

Waizenegger, Wolfgang 09 August 2019 (has links)
A major problem that decreases the naturalness of conversations via video communication is missing eye contact. While a person is looking at the display, he or she is recorded by cameras that are usually attached next to the display frame. With the advent of massively parallel computer hardware, and in particular very powerful consumer graphics cards, it became possible to process multiple input views simultaneously for real-time 3D reconstruction. A greater number of input views mitigates occlusion problems and leads to a more complete set of 3D data available for view synthesis. In this thesis, novel algorithms are proposed that enable high-quality real-time 3D reconstruction, on-line alignment of photometric camera parameters, and automatic, user-independent estimation of the eye contact cameras. The real-time 3D analysis consists of two complementary approaches: a shape-based algorithm, and a patch-based technique that evaluates 3D hypotheses by comparing image textures. Before rendering, texture from the different views needs to be aligned. For this purpose, a novel algorithm for on-line photometric adjustment of the camera parameters is proposed. The photometric adjustment is carried out iteratively, in alternation with a 3D registration of the respective views; in this way, the quality of the photometric parameters is directly linked to the 3D analysis results and vice versa. Based on the textured 3D data, the eye contact view is rendered. An important prerequisite for this task is the estimation of a suitable virtual eye contact camera. A novel approach is formulated that adapts automatically to arbitrary new users: the eye contact camera is dynamically adjusted to the current eye positions of the users. In this way, a virtual communication environment is created that allows for a more natural conversation.
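The photometric alignment step can be illustrated in miniature: model each camera pair's mismatch as a gain and an offset, and solve for them by least squares over corresponding pixels. This is a simplified stand-in for the thesis's iterative, registration-coupled procedure, run here on synthetic data.

```python
import numpy as np

def fit_gain_offset(src, dst):
    """Least-squares gain a and offset b such that a*src + b ~= dst,
    over pixels assumed to correspond after 3D registration."""
    A = np.stack([src.ravel(), np.ones(src.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return a, b

# synthetic pair: the second view is a photometrically shifted copy plus noise
rng = np.random.default_rng(3)
view1 = rng.uniform(0.1, 0.9, size=(64, 64))
view2 = 1.2 * view1 + 0.1 + rng.normal(0, 0.005, size=view1.shape)
a, b = fit_gain_offset(view1, view2)
corrected = a * view1 + b                  # view1 mapped into view2's photometry
```

In the full pipeline the set of corresponding pixels itself depends on the 3D registration, which is why gain/offset estimation and registration are alternated rather than solved once.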
676

Traction Adaptive Motion Planning for Autonomous Racing / Tractionadaptiv rörelseplanering för autonom racing

Raikar, Shekhar January 2022 (has links)
Autonomous driving technology is evolving at an accelerated pace. The road environment is always uncertain, which may require an autonomous vehicle to perform evasive manoeuvres. Evasive behaviour to avoid accidents in a critical situation is analogous to autonomous racing, which operates at the limits of stable vehicle handling. In autonomous racing, the vehicle must operate in highly nonlinear conditions, such as high-speed manoeuvres through sharp turns, obstacle avoidance, and slippery road surfaces. These dynamically changing racing situations require advanced path planning with obstacle avoidance executed in real time. The motion planning problem for autonomous racing is therefore analogous to safe and reliable autonomous vehicle operation in critical situations. This thesis project evaluates the application of traction-adaptive motion planning to autonomous racing on different road surfaces, for a small-scale test vehicle in real time. The evaluation is based on a state-of-the-art algorithm that combines optimization, trajectory rollout, and constraint adaptation, called "Sampling Augmented Real-Time Iteration (SAARTI)". SAARTI plans motion and control with respect to time-varying vehicle actuation capabilities, taking locally adaptive traction for different parts of the track into account as a constraint. The SAARTI framework was first adapted to work with the Small-Vehicles-for-Autonomy (SVEA) system; the whole system was then simulated in a ROS (Robot Operating System) based SVEA simulator with a hardware-in-the-loop setup. The same setup was later used for real-time experiments carried out on the SVEA vehicles, in which different critical scenarios were tested.
Emphasis was placed on the experimental results; the results therefore also account for computationally intensive localization inputs, with the motion planner running in real time rather than in a simulation setup. The experiments showed the impact of planning motions according to an approximately correct friction estimate. Traction variation clearly affected the lap time and the trajectory taken by the test vehicle: lap time degraded significantly when the assumed coefficient of friction was far from the real one, and it increased markedly at high assumed friction values, where excessive over-estimation of the available traction led to oscillatory motion and lane exits. In the non-adaptive scenario, the test vehicle performed best when the friction parameter given to the algorithm was approximately equal to the real friction value.
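The core idea, sampling candidate trajectories subject to a traction constraint, can be caricatured with a point-mass model in which each candidate's acceleration must stay inside the friction circle |a| <= mu*g. This is a deliberately minimal sketch, not SAARTI itself, which adapts tire-force constraints inside a real-time iteration optimization scheme.

```python
import numpy as np

def rollout(state, ax, ay, dt=0.1, steps=20):
    """Integrate a point mass under constant acceleration (one trajectory candidate)."""
    x, y, vx, vy = state
    pts = []
    for _ in range(steps):
        vx += ax * dt; vy += ay * dt
        x += vx * dt; y += vy * dt
        pts.append((x, y))
    return np.array(pts)

def plan(state, goal, mu, g=9.81, samples=300, seed=0):
    """Sample accelerations inside the friction circle |a| <= mu*g and keep
    the rollout whose endpoint lands closest to the goal."""
    rng = np.random.default_rng(seed)
    r = mu * g * np.sqrt(rng.uniform(size=samples))   # uniform over the disc
    th = rng.uniform(0.0, 2.0 * np.pi, samples)
    best_cmd, best_cost = (0.0, 0.0), np.inf
    for ax, ay in zip(r * np.cos(th), r * np.sin(th)):
        cost = np.linalg.norm(rollout(state, ax, ay)[-1] - np.asarray(goal))
        if cost < best_cost:
            best_cmd, best_cost = (ax, ay), cost
    return best_cmd, best_cost

state = (0.0, 0.0, 5.0, 0.0)              # at the origin, moving 5 m/s in +x
goal = (10.0, 5.0)                        # reaching it needs lateral acceleration
_, cost_dry = plan(state, goal, mu=0.9)   # dry asphalt: large friction circle
_, cost_ice = plan(state, goal, mu=0.05)  # icy patch: tiny friction circle
```

Shrinking the friction circle directly shrinks the reachable set of endpoints, which is the mechanism by which a locally adapted friction estimate changes the planned trajectory.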
677

Repeatable high-resolution statistical downscaling through deep learning

Quesada-Chacón, Dánnell, Barfus, Klemens, Bernhofer, Christian 04 June 2024 (has links)
One of the major obstacles to designing solutions against the imminent climate crisis is the scarcity of high spatio-temporal resolution model projections for variables such as precipitation. This kind of information is crucial for impact studies in fields like hydrology, agronomy, ecology, and risk management. The datasets with the currently highest spatial resolution on a daily scale for projected conditions fail to represent complex local variability. We used deep-learning-based statistical downscaling methods to obtain daily 1 km resolution gridded precipitation data for the Eastern Ore Mountains in Saxony, Germany. We built upon the well-established climate4R framework, adding modifications to its base code and introducing skip-connection-based deep learning architectures such as U-Net and U-Net++. We also aimed to address known general reproducibility issues by creating a containerized environment with multi-GPU (graphics processing unit) support and TensorFlow's deterministic operations. The perfect-prognosis approach was applied using the ERA5 reanalysis and the ReKIS (Regional Climate Information System for Saxony, Saxony-Anhalt, and Thuringia) dataset. The results were validated with the robust VALUE framework. The introduced architectures show a clear performance improvement over previous statistical downscaling benchmarks. The best-performing architecture had only a small increase in the total number of parameters relative to the benchmark and a training time of less than 6 min on one NVIDIA A100 GPU. Characteristics of the deep learning model configurations that make them suitable for this specific task were identified, tested, and discussed. Full model repeatability was achieved when employing the same physical GPU, which is key to building trust in deep learning applications.
The EURO-CORDEX dataset is meant to be coupled with the trained models to generate a high-resolution ensemble, which can serve as input to multi-purpose impact models.
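In miniature, the perfect-prognosis setup amounts to learning a mapping from coarse-scale predictors to local high-resolution values on observed data, then applying it to model output. The sketch below uses one least-squares model per fine-grid cell as a stand-in for the paper's U-Net-style architectures, on synthetic data; the fixed seed also makes the run bitwise repeatable, mirroring the paper's determinism concern.

```python
import numpy as np

rng = np.random.default_rng(42)            # fixed seed: the run is repeatable
# synthetic "reanalysis" predictors: n days x p coarse-grid variables
n, p, fine_cells = 2000, 6, 50
X = rng.normal(size=(n, p))
true_W = rng.normal(size=(p, fine_cells))
# fine-grid "observations" generated from the predictors plus small noise
Y = X @ true_W + 0.01 * rng.normal(size=(n, fine_cells))

# perfect prognosis: fit the predictor -> local-value mapping on observations
# (one linear least-squares model per fine cell, a stand-in for the U-Net)
W_hat, *_ = np.linalg.lstsq(X[:1500], Y[:1500], rcond=None)
pred = X[1500:] @ W_hat                    # downscale the held-out days
rmse = np.sqrt(np.mean((pred - Y[1500:]) ** 2))
```

In the actual study the predictors are ERA5 fields, the targets come from ReKIS, and the linear models are replaced by U-Net/U-Net++ networks; the train-on-observations, apply-to-projections logic is the same.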
678

Level Up CFD - GPU-Beschleunigung in Ansys Fluent

Findeisen, Fabian 20 June 2024 (has links)
Calculation speed is a critical factor in computational fluid dynamics (CFD). Especially for transient calculations or the simulation of extensive models, computations on high-performance computers with several hundred cores can quickly become a time-consuming task taking days or even weeks. The presentation offers a detailed look at the possibilities of GPU acceleration in Ansys Fluent and highlights the potential of this technology. It begins by introducing the new GPU solver in Ansys Fluent. This solver uses the computing power of graphics processing units (GPUs) to perform CFD calculations more efficiently than conventional CPU-based solvers through extreme parallelization. An additional advantage of this approach is a significant reduction in energy consumption and hardware investment costs. Benchmarks of CPU-based versus GPU-based solutions are then presented for various use cases, illustrating the performance and efficiency of GPU solvers compared to CPU solvers. For example, the external airflow around a vehicle can be computed ten times faster with the Coupled GPU Solver on an Nvidia A100 GPU than on conventional HPC hardware with 48 cores. The presentation also gives an overview of the current range of functions in Ansys Fluent and its future development roadmap, as well as the licensing and hardware requirements, which helps in understanding the resources needed to adopt this technology in one's own projects. Finally, the presentation looks ahead to the application of artificial intelligence (AI) to CFD: as AI technology advances, new possibilities open up for improving and accelerating CFD calculations. Overall, the presentation provides a comprehensive overview of GPU acceleration in modern CFD software and of future developments in this area.
679

Méthodes de rendu à base de vidéos et applications à la réalité virtuelle / Video-based rendering methods and applications to virtual reality

Nozick, Vincent 07 June 2006 (has links) (PDF)
Given a set of cameras filming the same scene, video-based rendering consists of generating new images of that scene from new viewpoints. The user thus has the impression of moving a virtual camera through the scene, while in reality all the cameras are fixed. Some computationally expensive video-based rendering methods rely on a 3D reconstruction of the scene and produce images of very high quality; other methods aim instead at real-time rendering. The plane sweep method, on which most of our work is based, belongs to this latter category. The principle of the plane sweep method is to discretize the scene into parallel planes and to process each point of these planes separately, in order to determine whether or not it lies on the surface of an object in the scene. The results make it possible to generate a new image of the scene from a new viewpoint. This method is particularly well suited to an optimal use of graphics card resources, which is why it achieves real-time rendering. Our main contribution to this method concerns the way of estimating whether a point of a plane lies on an object's surface. On the one hand, we propose a new scoring computation that improves the visual result while making navigation of the virtual camera smoother. On the other hand, we present an adaptation of the plane sweep method that handles partial occlusions. Given the applications of video-based rendering in virtual reality, we also propose an improvement of plane sweep for virtual reality, notably the creation of stereoscopic image pairs that allow the reconstructed scene to be viewed in relief.
Our improvement computes the second view at low cost, whereas most competing methods are forced to perform two independent renderings; it is based on sharing the data common to the two stereoscopic views. Finally, in the context of using plane sweep in virtual reality, we present a method for removing pseudoscopic movements. These pseudoscopic movements appear when the observer moves in front of a stereoscopic image: the proportions of the virtual scene appear distorted and objects seem to move abnormally. The correction method we propose is applicable both to classical computer graphics rendering methods and to the plane sweep method. All the methods we present make extensive use of the graphics card's processor via shader programs, and all generate images in real time. A consumer-grade computer, a video acquisition device and a good graphics card are sufficient to run them. Plane sweep has numerous applications, particularly in virtual reality, video games, 3D television and surveillance.
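In a rectified two-camera setting, the plane sweep described above reduces to testing fronto-parallel planes, i.e. disparity hypotheses: warp one view onto the other for each plane, score photo-consistency, and keep the best plane per pixel. The following is a minimal NumPy sketch with plain SSD scoring on synthetic images; the thesis's method runs on the GPU with shader programs, more cameras, and a different scoring scheme.

```python
import numpy as np

def plane_sweep_depth(left, right, max_disp):
    """Score each disparity plane by color consistency; keep the best per pixel."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # warp the right image by d pixels onto the left view for this plane
        shifted = np.roll(right, d, axis=1)
        c = (left - shifted) ** 2
        c[:, :d] = np.inf                  # no valid data wrapped in from the border
        cost[d] = c
    return np.argmin(cost, axis=0)         # winning plane index = estimated disparity

# synthetic rectified pair: random texture shifted by a known disparity of 5 px
rng = np.random.default_rng(1)
right = rng.uniform(size=(32, 64))
left = np.roll(right, 5, axis=1)
disp = plane_sweep_depth(left, right, max_disp=8)
```

Each plane is scored independently of the others, so the sweep maps naturally onto one GPU pass per plane, which is the property the thesis exploits for real-time rendering.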
680

GPU Accelerated Study of Heat Transfer and Fluid Flow by Lattice Boltzmann Method on CUDA

Ren, Qinlong January 2016 (has links)
The lattice Boltzmann method (LBM) has been developed over the past two decades as a powerful numerical approach for simulating complex fluid flow and heat transfer phenomena. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, the ability to handle complex geometries, and its highly parallel nature. The lattice Boltzmann method has been applied to a variety of fluid behaviors and heat transfer processes, including conjugate heat transfer, magnetic and electric fields, diffusion and mixing, chemical reactions, multiphase flow, phase change, non-isothermal flow in porous media, microfluidics, fluid-structure interactions in biological systems, and so on. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been a popular topic for decades as a way to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as powerful high-performance workstations in recent years. Unlike CPUs, a GPU with thousands of cores is cheap.
For example, the GPU (GeForce GTX TITAN) used in the current work has 2688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture, which includes both hardware and a programming environment, in 2007 made GPU computing attractive. Due to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with a clear performance benefit. In the current work, LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and the immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, the PCM melting process in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems, all on a CUDA computational platform. It is demonstrated that LBM is an efficient method for simulating complex heat transfer problems using a GPU on CUDA.
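A minimal D2Q9 lattice Boltzmann step (equilibrium, BGK collision, streaming) fits in a short NumPy sketch. The dissertation's CUDA kernels parallelize exactly this per-node update; the code below is a generic CPU illustration with an arbitrary relaxation time, not the author's implementation.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium on the D2Q9 lattice."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def step(f, tau=0.8):
    """One LBM update: compute moments, BGK collision, then periodic streaming."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau           # relax toward equilibrium
    for i in range(9):                                     # stream along each velocity
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f

# start from a small density bump at rest and run a few steps
nx = ny = 32
rho0 = 1.0 + 0.1 * np.exp(-((np.arange(nx) - 16)**2
                            + (np.arange(ny)[:, None] - 16)**2) / 20)
f = equilibrium(rho0, np.zeros((ny, nx)), np.zeros((ny, nx)))
mass0 = f.sum()                            # collision and streaming both conserve mass
for _ in range(50):
    f = step(f)
```

Each node's collision depends only on local data and streaming is a fixed-neighbor shift, which is why the method maps so well onto one CUDA thread per lattice node.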
