141

Dosimétrie pour des applications de radiothérapie en utilisant les processeurs graphiques / Monte Carlo dosimetry on GPU for radiation therapy applications

Lemaréchal, Yannick 22 June 2016
Prostate cancer is the most frequently diagnosed cancer in France each year and is responsible for about 10% of cancer-related deaths. The main treatments are surgery and radiation therapy; the latter concerns about 60% to 70% of patients treated in oncology. Radiation therapy consists in delivering the highest possible dose to a tumor target, via ionizing radiation, while minimizing the dose delivered to the surrounding healthy tissues and organs at risk (OAR). This practice requires flawless control of the dose delivered to the patient, since a deviation from the medical prescription can reduce the efficiency of the treatment of the tumor volumes and can also seriously harm the patient through excessive irradiation of healthy tissue. An accurate way to evaluate the delivered dose is to simulate the radiation-matter interactions inside the patient with Monte Carlo simulation (MCS). This requires considerable computing power, especially to simulate the billions of particles needed for the dosimetric evaluation; the time needed to obtain a satisfactory result can range from a few hours to several days. In this context, the Monte Carlo simulation engine GGEMS (GPU GEant4-based Monte Carlo Simulation), based on graphics processing units (GPUs), was developed. The modeled physics effects are based on the well-established and validated generic Monte Carlo code Geant4. The software handles several types of simulations, such as external beam radiation therapy and low-dose-rate and high-dose-rate brachytherapy, which required accurate modeling and the use of several geometry types: voxelized, analytical and meshed volumes. For external beam radiotherapy, no Monte Carlo code on GPU architectures took the whole treatment device into account; we therefore developed a parameterized source model that scrupulously reproduces the emission beam and runs on the GPU. The jaw geometries were modeled analytically, the multi-leaf collimator was built from a set of triangles (a mesh), and electron navigation in a voxelized volume was also developed. Using the Novalis TrueBeam® STx accelerator as an example, we can perform Monte Carlo simulations that faithfully reproduce this linear accelerator; the whole device was validated through comparisons with experimental measurements and with reference Geant4 Monte Carlo simulations. Finally, we developed a Monte Carlo simulation platform on GPU architectures for brachytherapy and external beam radiotherapy applications. The platform includes photon and electron navigation and handles voxelized, analytical (cylinder, box) and meshed volumes. The particle emission sources are modeled to faithfully reproduce their reference models. The acceleration factors relative to Geant4 range from 40 to 568 depending on the application. Applying GGEMS under clinical conditions, notably in brachytherapy, is the next development step.
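As a concrete picture of what a voxel-based Monte Carlo dose engine like GGEMS iterates billions of times, here is a minimal single-threaded sketch in Python: a photon random-walks through a voxel grid of attenuation coefficients and deposits energy at each sampled interaction. The phantom, the cross-section value and the isotropic "scatter" are placeholders, not GGEMS physics; on the GPU, each photon history would map to one thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 64^3 water-like phantom: linear attenuation coefficients in
# cm^-1 plus a dose grid. All values are placeholders, not clinical data.
N, VOX = 64, 0.25                     # voxels per side, voxel edge (cm)
mu = np.full((N, N, N), 0.2)          # homogeneous attenuation map
dose = np.zeros_like(mu)              # energy deposited per voxel (a.u.)

def simulate_photon(energy=1.0):
    """One photon history: enter through the top face, sample free paths
    from the local attenuation, deposit a fixed energy fraction at each
    interaction, then re-emit isotropically (a crude scatter stand-in)."""
    pos = np.array([N * VOX / 2, N * VOX / 2, 0.0])
    direction = np.array([0.0, 0.0, 1.0])
    while energy > 0.01:
        i, j, k = np.floor(pos / VOX).astype(int)
        if not (0 <= i < N and 0 <= j < N and 0 <= k < N):
            return                                   # photon left the phantom
        step = rng.exponential(1.0 / mu[i, j, k])    # free path length (cm)
        pos = pos + step * direction
        i, j, k = np.floor(pos / VOX).astype(int)
        if not (0 <= i < N and 0 <= j < N and 0 <= k < N):
            return
        deposited = 0.3 * energy
        dose[i, j, k] += deposited                   # score the dose
        energy -= deposited
        direction = rng.normal(size=3)               # isotropic re-emission
        direction /= np.linalg.norm(direction)

for _ in range(10_000):       # an engine like GGEMS runs ~1e9 such histories,
    simulate_photon()         # one per GPU thread
```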
142

Extraction hybride et description structurelle de caractères pour une reconnaissance efficace de texte dans les documents hétérogènes scannés : Méthodes et Algorithmes parallèles / Hybrid extraction and structural description of characters for effective text recognition in heterogeneous scanned documents : Methods and Parallel Algorithms

Soua, Mahmoud 08 November 2016
Optical Character Recognition (OCR) is a process that converts text images into editable text documents. Such systems are widely used today in dematerialization applications such as mail sorting and invoice management. In this context, the aim of this thesis is to propose an OCR system that offers a better compromise between recognition rate and processing speed, enabling reliable, real-time document dematerialization. To be recognized, the text is first extracted from the background, then segmented into disjoint characters, which are described from their structural characteristics; finally, the characters are recognized by matching their descriptors against a predefined base. Text extraction remains difficult in heterogeneous scanned documents with a complex, noisy background, where the text may be confused with a textured or multicolored background, or distorted by digitization noise. Character description, for its part, is often computationally complex (geometric transformations, large numbers of features) or poorly discriminating when the chosen features are sensitive to variations in scale, font, style, and so on. We therefore adapt binarization to heterogeneous scanned documents, and we provide a highly discriminating description of characters based on the study of their structure through their horizontal and vertical projections. To ensure real-time processing, we parallelize the developed algorithms on the graphics processor (GPU). The main contributions of the proposed OCR system are as follows. First, a new text-extraction method for heterogeneous scanned documents containing text regions with complex or homogeneous backgrounds: an image-analysis step classifies the document regions into image regions (text on a complex background) and text regions (text on a homogeneous background); textual information is extracted from text regions with a hybrid K-means-based classification method (CHK) that we developed, which combines local and global approaches and improves the quality of the text/background separation while minimizing distortion, and image regions are first enhanced with Gamma Correction (GC) before CHK is applied. Experiments show that this text-extraction method reaches a character recognition rate of 98.5% on heterogeneous scanned documents. Second, a Unified Character Descriptor based on the study of character structure: it unifies the descriptors of the horizontal and vertical projections of the characters into a sufficient set of features for more efficient discrimination. Its advantages are both high performance and computational simplicity; it supports the recognition of alphanumeric and multi-scale characters and reaches a character recognition rate of 100% for a given font and size. Third, a parallelization of the character recognition system on the GPU, a flexible and powerful architecture that offers an effective solution for accelerating compute-intensive image-processing algorithms. Our implementation combines fine- and coarse-grained parallelization strategies to speed up the stages of the OCR chain; CPU-GPU communication costs are avoided and careful memory management is ensured. The effectiveness of our implementation is validated through extensive experiments.
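A minimal sketch of the projection idea behind such a descriptor: the row and column ink profiles of a binary glyph are resampled to a fixed length, concatenated and normalized, then matched by nearest neighbor. The feature set and matching rule of the thesis's actual Unified Character Descriptor are richer; everything below is illustrative.

```python
import numpy as np

def unified_descriptor(glyph, bins=16):
    """Toy projection-based character descriptor: concatenate the normalized
    horizontal and vertical projection profiles of a binary glyph image."""
    glyph = (glyph > 0).astype(float)
    h_proj = glyph.sum(axis=1)             # ink per row
    v_proj = glyph.sum(axis=0)             # ink per column

    def resample(p, n):                    # fixed length -> scale invariance
        return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(p)), p)

    d = np.concatenate([resample(h_proj, bins), resample(v_proj, bins)])
    return d / (np.linalg.norm(d) + 1e-9)  # unit norm -> ink-mass invariance

def recognize(glyph, templates):
    """Match against a dict of label -> descriptor by cosine similarity."""
    d = unified_descriptor(glyph)
    return max(templates, key=lambda label: d @ templates[label])
```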
143

Accelerating Java on Embedded GPU

P. Joseph, Iype January 2014
Multicore CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are omnipresent in today's market-leading smartphones and tablets. With CPUs and GPUs getting more complex, maximizing hardware utilization is becoming problematic. The challenges of GPGPU (General-Purpose computing using GPUs) on embedded platforms differ from their desktop counterparts because of memory and computational limitations. This thesis evaluates the performance and energy efficiency achieved by offloading Java applications to an embedded GPU. The existing solutions in the literature address the techniques and benefits of offloading Java on desktop- or server-grade GPUs, not on embedded GPUs. Our research is focused on providing a framework for accelerating Java programs on embedded GPUs. Our experiments were conducted on a Freescale i.MX6Q SabreLite board, which encompasses a quad-core ARM Cortex-A9 CPU and a Vivante GC2000 GPU supporting the OpenCL 1.1 Embedded Profile. We successfully accelerated Java code and reduced energy consumption by employing two approaches, namely JNI-OpenCL and JOCL, a popular Java binding for OpenCL. These approaches can easily be implemented on other platforms by embedded Java programmers to exploit the computational power of GPUs. Our results show up to an 8x increase in performance and a 3x decrease in energy consumption compared to embedded CPU-only execution of the Java programs. To the best of our knowledge, this is the first work on accelerating Java on an embedded GPU.
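Both the JNI-OpenCL and JOCL routes reduce to the same host-side OpenCL offload pattern: build a kernel, copy buffers to the device, launch one work-item per element, and read the result back. Here is a sketch of that pattern, written with Python's pyopencl for consistency with the other sketches in this listing; JOCL exposes near one-to-one Java equivalents of these calls.

```python
import numpy as np
import pyopencl as cl

# Trivial element-wise kernel standing in for a real workload.
src = """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
"""

ctx = cl.create_some_context()          # pick an OpenCL device (e.g. the GPU)
queue = cl.CommandQueue(ctx)

a = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, src).build()     # JIT-compile for the device
prog.square(queue, a.shape, None, a_buf, out_buf)   # 1024 work-items

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)    # read the result back to the host
print(out[:4])                          # [0. 1. 4. 9.]
```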
144

Aceleração por GPU de serviços em sistemas robóticos focado no processamento de tempo real de nuvem de pontos 3D / GPU Acceleration of robotic systems services focused in real-time processing of 3D point clouds

Leonardo Milhomem Franco Christino 03 February 2016
This master's project, abbreviated as GPUServices, fits in the context of research and development of processing methods for three-dimensional sensor data applied to mobile robotics. Such methods, called services in this project, include 3D point cloud preprocessing algorithms with data segmentation, the separation and identification of planar zones (ground, roads), and the detection of elements of interest (curbs, obstacles). Because of the large amount of data to be processed in a short time, these services use GPU parallel processing to perform partial or complete processing of the data. The target application area is services for an ADAS system (autonomous and intelligent vehicles), which pushes them toward real-time processing because of the autonomous-driving context. The services are divided into stages according to the project methodology, always striving for acceleration through the inherent parallelism. The pre-project consists of organizing a development environment able to coordinate all the technologies used, exploit parallelism, and integrate with the system already used by the autonomous car. The first service intelligently extracts the data from the sensor used in the project (a multi-beam Velodyne laser sensor), which is necessary because of numerous read errors and the raw receiving format, and delivers the data in a matrix structure. The second service, in cooperation with the first, corrects the spatial instability of the sensor caused by its mounting base not being perfectly parallel to the ground and by the damping of the vehicle. The third service separates the environment into semantic zones such as the ground plane and the regions below and above the ground. The fourth service, similar to the third, performs a pre-segmentation of street curbs. The fifth service segments the objects of the environment, separating them into blobs. The sixth service uses all the previous ones to detect and segment street curbs. The data received from the sensor form a 3D point cloud with great potential for exploiting parallelism based on the locality of the information. The main difficulty, however, is the high data rate received from the sensor (around 700,000 points/sec), and this is the motivation of this project: to use the sensor's full potential efficiently through GPU parallel programming, thereby offering users data-processing services that make the implementation of ADAS systems easier and faster.
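A toy illustration of the third and fourth services: labeling an organized, Velodyne-style cloud (rings x azimuths x xyz) into ground / below / above zones by leveled height, and flagging curb candidates as small but sharp ring-wise height jumps. The thresholds and the flat-ground assumption are illustrative, not the thesis's algorithms.

```python
import numpy as np

def split_semantic_zones(cloud, ground_z=0.0, tol=0.15):
    """Label each point of an organized cloud (rings x azimuths x xyz) as
    'ground', 'below' or 'above' by its leveled height (meters)."""
    z = cloud[..., 2]
    labels = np.full(z.shape, "above", dtype=object)
    labels[np.abs(z - ground_z) <= tol] = "ground"
    labels[z < ground_z - tol] = "below"
    return labels

def curb_candidates(cloud, jump=(0.05, 0.25)):
    """Curb pre-segmentation cue: along each scan ring, a curb shows up as a
    small but sharp height step between neighboring azimuths."""
    dz = np.abs(np.diff(cloud[..., 2], axis=1))   # ring-wise height steps
    lo, hi = jump
    return (dz > lo) & (dz < hi)                  # mask of candidate edges

# toy usage on a random 16-ring, 360-azimuth cloud
cloud = np.random.rand(16, 360, 3)
print(split_semantic_zones(cloud).shape, curb_candidates(cloud).sum())
```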
145

Modélisation et calcul parallèle pour le Web SIG 3D / Modeling and Parallel Computation for 3D WebGIS

Cellier, Fabien 31 January 2014
This thesis focuses on displaying and manipulating 3D models from Geographic Information Systems (GIS) at interactive rates directly in a web browser. Its main contributions are the visualization of high-resolution 3D terrains, the simplification of irregular meshes on the GPU, and the creation of a new browser API for heavy, efficient computation (GP/GPU parallelism) without compromising security. The first approach proposed, for terrain visualization, builds on browsers' recent efforts to become a versatile platform. Using the new plugin-free 3D APIs, we created a client that visualizes terrain models streamed over HTTP. It fits into today's Web-GIS ecosystems (desktop and mobile) through the standard protocols of the field, provided by the OGC (Open Geospatial Consortium). This prototype is part of industrial partnerships between ATOS Worldline and its GIS customers, notably the IGN (the French national institute of geographic and forest information) with the Géoportail (http://www.geoportail.gouv.fr) and its mapping APIs. 3D in the browser has its own challenges, different from those known from desktop applications: on top of data-transfer problems come the restrictions and constraints of JavaScript. These constraints led us to rethink the reference terrain-visualization algorithms to account for browser specificities; in particular, we took advantage of network latency to manage the connections between mesh parts dynamically without significantly impacting rendering speed. Beyond 3D visualization, and although JavaScript allows task parallelism, data parallelism remains essentially absent from web browsers. This observation, coupled with JavaScript's low processing throughput, was a major obstacle to our goal of a complete, high-performance GIS platform integrated into the browser. We therefore designed and developed the WebCLWorkers, a high-performance GP/GPU web computing API that meets the simplicity and security requirements inherent to the Web. Unlike existing work, which relies on precompiled code or sets performance aside, we sought the right compromise: a language close to script, yet secure and efficient, using the OpenCL APIs as the execution engine. Our API proposal interested the Mozilla Foundation, which asked us to take part in drafting the WebCL standard within the Khronos Group (alongside Mozilla, Samsung, Nokia, Google, AMD, and others). With these new computing resources, we then proposed a parallel simplification algorithm for irregular meshes. While the state of the art relied mainly on regular grids for parallelism (outside the Web) or on simplification via clustering and kd-trees, no solution offered both parallel simplification and intermediate models usable for progressive visualization on irregular grids. Our solution is a three-step algorithm that uses implicit priorities and local minima to perform the simplification; its degree of parallelism is linearly related to the number of points and triangles of the mesh to process. In sum, the thesis proposes an innovative approach to plugin-free 3D WebGIS visualization, tools that bring comfortable GP/GPU computing power to the browser, and a parallel simplification method for irregular meshes that allows levels of detail to be visualized directly in web browsers, on PCs as well as mobile phones and tablets. Building on these first results, it becomes possible to carry all the rich functionality of desktop GIS clients over to the web browser.
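The "local minima with implicit priorities" rule that makes such a simplification parallel can be sketched compactly: an edge may collapse only if its cost key is the minimum over all edges sharing one of its endpoints, so the selected edges are pairwise non-adjacent and every collapse in the set can run simultaneously (one GPU thread per edge). The sketch below is sequential Python and illustrates only that selection rule, not the thesis's full three-step algorithm.

```python
import numpy as np

def independent_collapse_set(edges, cost):
    """Return indices of edges whose (cost, index) key is minimal among all
    edges incident to either endpoint; the index breaks cost ties so two
    adjacent edges can never both be selected."""
    key = [(cost[e], e) for e in range(len(edges))]
    best = {}                                  # vertex -> min incident key
    for e, (a, b) in enumerate(edges):
        for v in (a, b):
            best[v] = min(best.get(v, (np.inf, -1)), key[e])
    return [e for e, (a, b) in enumerate(edges)
            if key[e] == best[a] and key[e] == best[b]]

# Example: a fan of edges around vertex 0; only the cheapest one collapses
# this round, the rest become candidates again in the next round.
edges = [(0, 1), (0, 2), (0, 3), (2, 3)]
cost = [0.5, 0.2, 0.9, 0.4]
print(independent_collapse_set(edges, cost))   # -> [1]
```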
146

Simulating High Detail Brush Painting on Mobile Devices : Using OpenGL, Data-Driven Modeling and GPU Computation / Simulering av penselmålning med hög detaljrikedom på mobila enheter

Blanco Paananen, Adrian January 2016
This report presents FastBrush, an advanced implementation of real-time brush simulation that achieves high detail with a large number of bristles and is lightweight enough to be implemented for mobile devices. The final result of this system has far higher detail than available consumer painting applications: paintbrushes have up to a thousand bristles, and while Adobe Photoshop can only simulate up to a hundred bristles in real time, FastBrush captures the full detail of a brush with up to a thousand bristles in real time on mobile devices. Simple multidimensional data-driven modeling is used to create a deformation table, which makes it possible to compute the physics of the brush deformations in near-constant time for the entire brush; the physics overhead of a large number of bristles thus becomes negligible. The results show a large potential for the use of data-driven models in high-detail brush simulations.
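The deformation-table idea can be sketched in a few lines: deflection is precomputed on a coarse (pressure, tilt) grid and queried at runtime by bilinear interpolation, so the per-frame cost is constant regardless of bristle count. The "measured" deflection function below is a made-up stand-in for FastBrush's data-driven model, and the grid resolution is arbitrary.

```python
import numpy as np

PRESSURES = np.linspace(0.0, 1.0, 8)          # table axes (coarse on purpose)
TILTS = np.linspace(0.0, np.pi / 3, 8)

def measured_deflection(pressure, tilt):
    return pressure * (0.5 + 0.5 * np.sin(tilt))   # placeholder "physics"

# Precomputed 8x8 deformation table, built once offline.
TABLE = np.array([[measured_deflection(p, t) for t in TILTS]
                  for p in PRESSURES])

def lookup(pressure, tilt):
    """Bilinear interpolation into the precomputed deformation table."""
    pi = np.clip(np.searchsorted(PRESSURES, pressure) - 1, 0, len(PRESSURES) - 2)
    ti = np.clip(np.searchsorted(TILTS, tilt) - 1, 0, len(TILTS) - 2)
    fp = (pressure - PRESSURES[pi]) / (PRESSURES[pi + 1] - PRESSURES[pi])
    ft = (tilt - TILTS[ti]) / (TILTS[ti + 1] - TILTS[ti])
    top = TABLE[pi, ti] * (1 - ft) + TABLE[pi, ti + 1] * ft
    bot = TABLE[pi + 1, ti] * (1 - ft) + TABLE[pi + 1, ti + 1] * ft
    return top * (1 - fp) + bot * fp

print(lookup(0.42, 0.3))   # one cheap query per frame, shared by all bristles
```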
147

Slab and Powder-Snow avalanche animation on the GPU

Åleskog, Jonathan, Cheh, Daniel January 2021
Background: The video game industry has yet to achieve a physically based real-time avalanche simulation because of the sheer complexity of modeling the behavior of snow avalanches. An avalanche is made of snow, so it requires a snow simulation, which is itself already complex. However, a real-time snow simulation has appeared recently, making this thesis worth investigating. Objectives: The proposed method takes advantage of a real-time snow simulation framework to animate slab and powder-snow avalanches. The powder-snow avalanche is divided into two parts: a loose-snow avalanche forming the core, and a powder cloud of snow swirled up into the air. Animating the avalanches requires tuning the parameters to obtain the different types and computing the release area on a terrain. Lastly, a survey was sent out to assess the avalanches' viability for use in games. Performance was measured and analyzed together with the data gathered from the survey. Methods: Two particle systems were used to animate the avalanches. The Discrete Element Method was used to animate the slab and loose-snow avalanches, whereas the powder-cloud avalanche used the Smoothed Particle Hydrodynamics method. A procedural Voronoi pattern was used to generate the slabs and a hypertexture to render the powder cloud, while a fluid renderer was used to render the snow. Results: The proposed avalanche animations were measured and analyzed. The slab and loose-snow avalanches reached real-time performance depending on the number of particles and on whether the scenes were rendered with shading; the powder-snow avalanche did not fulfill the real-time performance criteria. The survey was used to verify whether the proposed animations were viable for games: both the slab and loose-snow avalanches were judged viable for use in games, while the powder-snow avalanche was not. Conclusions: The proposed slab and loose-snow avalanche animations ran in real time with a dynamic particle count of 300k or lower without shaded rendering, or 75k or lower with shaded rendering. Both were seen as viable for use in video games. The powder-snow avalanche could not reach a real-time performance of over 30 frames per second and was not seen as viable for use in video games; further research is needed there.
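A minimal sketch of procedural Voronoi slab generation: seed points are scattered over the release area and every cell of the snow cover is assigned to its nearest seed, each region becoming one rigid slab fragment. The seed count and the brute-force distance computation are illustrative choices, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def voronoi_slabs(width, height, n_seeds=12):
    """Assign every cell of a width x height snow cover to its nearest of
    n_seeds random seed points; the returned integer labels are slab ids."""
    seeds = rng.uniform(0, [width, height], size=(n_seeds, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    cells = np.stack([xs, ys], axis=-1).astype(float)       # (H, W, 2)
    # squared distance from every cell to every seed, then nearest-seed label
    d2 = ((cells[:, :, None, :] - seeds[None, None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=2)                                # (H, W) slab ids

labels = voronoi_slabs(64, 64)
print(np.unique(labels))            # one integer id per slab fragment
```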
148

Contribution de la Lattice Boltzmann Method à l’étude de l’enveloppe du bâtiment / Lattice Boltzmann Method applied to Building Physics

Walther, Édouard 29 January 2016
Reducing building energy consumption and estimating the durability of structures are ongoing challenges in the current regulatory framework and construction practice. They require a significant increase in the level of detail in the simulation of the physical phenomena of civil engineering to achieve a reliable prediction of the behavior of structures. The building is the seat of coupled, multi-scale phenomena ranging from the micro (or even nano) scale to the macro scale, implying complex couplings between materials, such as the sorption-desorption processes that influence mechanical resistance, mass transfer, thermal conductivity, energy storage and durability. Applied numerical methods can solve some of these problems by resorting to multigrid computing, multi-scale coupling or massive parallelization in order to substantially reduce computing time. The present work, which covers several simulations in building physics, evaluates the suitability of the lattice Boltzmann method. This numerical method is built on a grid (hence "lattice") and is said to be "mesoscopic": starting from a statistical-thermodynamics description of the behavior of a group of microscopic fluid particles, it yields a consistent extrapolation to the macroscopic behavior. After a study of the comparative advantages of the method and of the oscillatory behavior it exhibits in certain configurations, we present: an application to the homogenized diffusive properties of cementitious materials during hydration, solved on the LMT cluster; and an application to building energy, with the behavior of a solar dynamic wall in forced convection, computed on a graphics card in order to evaluate its potential.
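For readers unfamiliar with the method, here is a worked minimal D2Q9 lattice Boltzmann step in Python: distributions stream along nine lattice directions and relax toward a local equilibrium (BGK collision). The domain size, relaxation time and quiescent initial state are toy choices, unrelated to the thesis's test cases.

```python
import numpy as np

W = np.array([4/9] + [1/9]*4 + [1/36]*4)                 # D2Q9 weights
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])              # lattice velocities
TAU = 0.6                                                # relaxation time
NX = NY = 32
f = np.ones((9, NY, NX)) * W[:, None, None]              # fluid at rest

def equilibrium(rho, ux, uy):
    """Standard second-order equilibrium: w_i*rho*(1 + 3cu + 4.5cu^2 - 1.5u^2)."""
    cu = C[:, 0, None, None]*ux + C[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return W[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(100):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (C[:, 0, None, None]*f).sum(axis=0) / rho       # macroscopic velocity
    uy = (C[:, 1, None, None]*f).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / TAU            # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(np.roll(f[i], C[i, 0], axis=1), C[i, 1], axis=0)

print(rho.mean())                                        # mass is conserved
```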
149

Vizualizace a editace voxelů pro 3D tisk v real-time / Real-time voxel visualization and editing for 3D printing

Kužel, Vojtěch January 2021
In this thesis, we explore detailed voxel scene compression methods and the editing thereof, with the goal of designing an interactive voxel viewer/editor, e.g. for a 3D printing application. We present state-of-the-art GPU-compatible data structures and compare them. On top of the chosen data structure, we build standard editing tools known from 2D, capable of changing voxel color in real time even on lower-end machines.
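One common GPU-friendly voxel compression scheme of the kind compared in such work is per-chunk palette indexing; the thesis does not necessarily choose this structure, but it illustrates the trade-off well: a small palette of distinct colors per chunk plus one index per voxel gives O(1) reads and cheap real-time color edits at a fraction of raw RGBA memory. A minimal sketch:

```python
import numpy as np

class PaletteChunk:
    """One 16^3 chunk storing a palette of RGBA colors plus a uint8 palette
    index per voxel (so at most 256 distinct colors per chunk here)."""
    SIZE = 16

    def __init__(self):
        self.palette = [0x00000000]                   # entry 0 = empty voxel
        self.index = np.zeros((self.SIZE,)*3, dtype=np.uint8)

    def set(self, x, y, z, rgba):
        if rgba not in self.palette:
            self.palette.append(rgba)                 # grow palette lazily
        self.index[x, y, z] = self.palette.index(rgba)

    def get(self, x, y, z):
        return self.palette[self.index[x, y, z]]

    def nbytes(self):
        return self.index.nbytes + 4 * len(self.palette)

c = PaletteChunk()
c.set(1, 2, 3, 0xFF0000FF)                            # paint one red voxel
print(hex(c.get(1, 2, 3)), c.nbytes(), "bytes vs", 16**3 * 4, "raw")
```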
150

Applying spatially and temporally adaptive techniques for faster DEM-based snow simulation

Andreasson, Simon, Östergaard, Linus January 2023
Background. Physically based snow simulation is computationally expensive and not yet applicable to real-time applications. Some of the prime factors for this cost are the complex physics, the large number of particles, and the small time step required for a high-quality, stable simulation. Simplified methods, such as height maps, are used instead to emulate snow accumulation. One way of improving performance is to find ways of doing fewer computations; in the field of computer graphics, adaptive methods have been developed to focus computation where it is most needed, and those works serve as inspiration for this thesis. Objectives. This thesis aims to reduce the total particle workload of an existing Discrete Element Method (DEM) application, thereby improving performance. The aim consists of the following objectives: integrate a spatial method that lessens the total number of particles through particle merging and splitting, and implement a temporal method that lessens the workload by freezing certain particles in time. The performance of both techniques is then tested and analyzed in multiple scenarios. Methods. Spatially and temporally adaptive methods were implemented in an existing snow simulator; they were measured and compared using quantitative tests in three different scenes with varying particle counts. Results. Performance tests show that both the spatial and the temporal adaptivity reduce execution time compared to the base method. The improvement from temporal adaptivity is consistently around 1.25x, while spatial adaptivity shows a larger range of improvements, between 1.23x and 2.86x; combining both adaptive techniques provides an improvement of up to 3.58x. Conclusions. Both spatially and temporally adaptive techniques are viable ways to improve the performance of a DEM-based snow simulation. The current implementation has some issues with performance overhead and with the visual results when using spatial adaptivity, but there is a lot of potential for the future.
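Both adaptive ideas fit in a few lines each: temporal adaptivity freezes particles whose speed falls below a threshold, and spatial adaptivity merges particle pairs while conserving mass and momentum (splitting applies the inverse). The threshold and the pairing policy below are illustrative, not the thesis's implementation.

```python
import numpy as np

def freeze_mask(vel, eps=1e-3):
    """Temporal adaptivity sketch: particles slower than eps are frozen and
    skipped by the integrator until something wakes them again."""
    return np.linalg.norm(vel, axis=1) < eps

def merge_pairs(pos, vel, mass, i, j):
    """Spatial adaptivity sketch: merge particle pairs (i, j) into single
    coarser particles at the center of mass, conserving mass and momentum."""
    m = mass[i] + mass[j]
    p = (pos[i]*mass[i, None] + pos[j]*mass[j, None]) / m[:, None]
    v = (vel[i]*mass[i, None] + vel[j]*mass[j, None]) / m[:, None]
    keep = np.setdiff1d(np.arange(len(mass)), np.concatenate([i, j]))
    return (np.vstack([pos[keep], p]),
            np.vstack([vel[keep], v]),
            np.concatenate([mass[keep], m]))

# toy usage: merge particles 0&1 and 2&3 of a 6-particle system
pos, vel, mass = np.random.rand(6, 3), np.zeros((6, 3)), np.ones(6)
i, j = np.array([0, 2]), np.array([1, 3])
pos2, vel2, mass2 = merge_pairs(pos, vel, mass, i, j)
print(len(mass2), mass2.sum())       # 4 particles left, total mass still 6.0
print(freeze_mask(vel2).sum())       # all 4 are slow enough to freeze
```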
