11 |
Adaptive Isoflächenextraktion aus großen Volumendaten. Helbig, Andreas. 15 November 2007 (has links) (PDF)
Aus besonders großen Volumendaten extrahierte Isoflächen besitzen eine kaum beherrschbare Anzahl an Polygonen, weshalb die Extraktion von adaptiven, also bezüglich einer geometrischen Fehlermetrik reduzierten, Isoflächen wünschenswert ist. Ein häufiges Problem gängiger adaptiver Verfahren ist, dass sie Datenstrukturen verwenden, die gerade für große Daten besonders viel Hauptspeicher benötigen und daher nicht direkt anwendbar sind. Nachdem auf die Grundlagen zur Isoflächenextraktion eingegangen wurde, wird im Rahmen dieser Diplomarbeit ein auf Dual Contouring basierendes Verfahren entworfen, das die adaptive Isoflächenextraktion aus sehr großen Volumendaten auch bei begrenztem Hauptspeicher mit einem zeitlich vertretbaren Aufwand erlaubt. Der verwendete Octree wird dazu nur implizit aufgebaut und temporär nicht benötigte Daten werden unter Nutzung von Out-of-core-Techniken in den Sekundärspeicher ausgelagert. Die verschiedenen Implementierungsansätze werden unter Berücksichtigung maximaler Effizienz verglichen. Die Tauglichkeit des Verfahrens wird an verschiedenen sehr großen Testdatensätzen nachgewiesen. / Isosurfaces extracted from massive volume data consist of a barely manageable number of polygons. Hence adaptive isosurfaces, i.e. isosurfaces reduced with respect to a geometric error metric, are desirable. Popular adaptive methods frequently require an amount of main memory that makes them infeasible for large data sets. After covering the fundamentals of isosurface extraction, this thesis develops a Dual Contouring-based method that allows adaptive isosurfaces to be extracted from very large volume data even with limited main memory and at an acceptable cost in time. The required octree is only built implicitly, and temporarily unneeded data is swapped out to secondary storage using out-of-core techniques. The different implementation approaches are compared with regard to maximum efficiency, and the suitability of the method is demonstrated on several very large test data sets.
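For illustration, a minimal C++ sketch of the out-of-core idea this abstract relies on: temporarily unneeded volume blocks are swapped to secondary storage under a fixed main-memory budget and read back on demand. This is not the thesis implementation; the block size, the scratch-file naming and the LRU eviction policy are assumptions.

```cpp
// Minimal sketch of out-of-core block paging for volume data: blocks that do
// not fit into a fixed main-memory budget are written to scratch files on
// secondary storage and read back on demand.
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

class BlockCache {
public:
    BlockCache(std::size_t blockVoxels, std::size_t maxResidentBlocks)
        : blockVoxels_(blockVoxels), maxResident_(maxResidentBlocks) {}

    // Returns the voxel block with the given id, loading it from disk if needed.
    std::vector<std::uint16_t>& acquire(std::uint64_t blockId) {
        auto it = resident_.find(blockId);
        if (it == resident_.end()) {
            evictIfNeeded();
            std::vector<std::uint16_t> data(blockVoxels_, 0);
            loadFromDisk(blockId, data);               // stays empty if never written
            lru_.push_front(blockId);
            it = resident_.emplace(blockId, Entry{std::move(data), lru_.begin()}).first;
        } else {
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);  // mark as recently used
        }
        return it->second.data;
    }

private:
    struct Entry {
        std::vector<std::uint16_t> data;
        std::list<std::uint64_t>::iterator lruPos;
    };

    void evictIfNeeded() {
        while (resident_.size() >= maxResident_) {
            std::uint64_t victim = lru_.back();        // least recently used block
            writeToDisk(victim, resident_[victim].data);
            resident_.erase(victim);
            lru_.pop_back();
        }
    }

    std::string path(std::uint64_t id) const { return "block_" + std::to_string(id) + ".bin"; }

    void writeToDisk(std::uint64_t id, const std::vector<std::uint16_t>& d) const {
        std::ofstream out(path(id), std::ios::binary);
        out.write(reinterpret_cast<const char*>(d.data()),
                  static_cast<std::streamsize>(d.size() * sizeof(std::uint16_t)));
    }

    void loadFromDisk(std::uint64_t id, std::vector<std::uint16_t>& d) const {
        std::ifstream in(path(id), std::ios::binary);
        if (in) in.read(reinterpret_cast<char*>(d.data()),
                        static_cast<std::streamsize>(d.size() * sizeof(std::uint16_t)));
    }

    std::size_t blockVoxels_;
    std::size_t maxResident_;
    std::unordered_map<std::uint64_t, Entry> resident_;
    std::list<std::uint64_t> lru_;                     // front = most recently used
};
```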
|
12 |
Interactive out-of-core rendering and filtering of one billion stars measured by the ESA Gaia mission. Alsegård, Adam. January 2018 (has links)
The purpose of this thesis was to visualize the 1.7 billion stars released by the European Space Agency, as the second data release (DR2) of their Gaia mission, in the open-source software OpenSpace at interactive frame rates, and to filter the data in real time. An additional implementation goal was to streamline the data pipeline so that astronomers could use OpenSpace as a visualization tool in their research. An out-of-core rendering technique has been implemented where the data is streamed from disk during runtime. To be able to stream the data, it first has to be read, sorted into an octree structure and then stored as binary files in a preprocessing step.

The results of this report show that the entire DR2 dataset can be read from multiple files in a folder and stored as binary values in about seven hours. This step determines which values the user will be able to filter by and only has to be done once for a specific dataset. An octree can then be created in about 5 to 60 minutes, where the user can define whether the stars should be filtered by any of the previously stored values. Only values used in the rendering are stored in the octree. If the created octree fits in the computer's working memory, the entire octree is loaded asynchronously on start-up; otherwise only a binary file with the structure of the octree is read during start-up, while the actual star data is streamed from disk during runtime.

Once loaded, the data is streamed to the GPU. Only visible stars are uploaded, and the application keeps track of which nodes have already been uploaded to eliminate redundant updates. The inner nodes of the octree store the brightest stars of all their descendants as a level-of-detail cache that can be used when the nodes are small enough in screen space.

The previous star rendering in OpenSpace has been improved by dividing the rendering phase into two passes. The first pass renders into a framebuffer object, while the second pass performs a tone mapping of the values. The rendering can be done either with billboard instancing or point splatting; the latter is generally the faster alternative. The user can also switch between VBOs and SSBOs when updating the buffers. The latter is faster but requires OpenGL 4.3, which Apple products do not currently support.

The rendering runs at interactive frame rates for both flat and curved screens, such as domes/planetariums. The user can also switch datasets during rendering, as well as render technique, buffer objects, color settings and many other properties. It is also possible to turn time on and see the stars move with their calculated space velocity, or transverse velocity if the star lacks radial velocity measurements. The calculations omit the gravitational rotation.

The purpose of the thesis has been fulfilled, as it is possible to fly through the entire DR2 dataset on a moderate desktop computer and filter the data in real time. However, the main contribution of the project may be that the groundwork has been laid in OpenSpace for astronomers to actually use it as a tool when visualizing their own datasets, and for continuing to explore the coming Gaia releases.
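As an illustration of the level-of-detail traversal described above, a minimal C++ sketch (not the OpenSpace code): inner nodes whose projected size is small enough are drawn from their cached brightest stars, while larger nodes are refined and leaves that are not yet resident are requested from a streaming thread. The node layout, the screen-size metric and all thresholds are assumptions.

```cpp
// Minimal sketch of octree LOD selection with asynchronous streaming requests.
#include <cmath>
#include <vector>

struct OctreeNode {
    float centerX, centerY, centerZ, halfSize;   // bounding cube in world units
    std::vector<float> lodStars;                 // brightest stars of the subtree (cache)
    std::vector<float> allStars;                 // full data, empty until streamed in
    bool resident = false;                       // true once allStars is in memory
    OctreeNode* children[8] = {nullptr};
};

// Rough apparent size of a node: half size divided by distance to the camera.
static float screenSpaceSize(const OctreeNode& n, float camX, float camY, float camZ) {
    float dx = n.centerX - camX, dy = n.centerY - camY, dz = n.centerZ - camZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return n.halfSize / (dist + 1e-6f);
}

// Collects the star buffers to draw this frame and the nodes that still need
// to be loaded from disk by a background streaming thread.
void selectNodes(OctreeNode& node, float camX, float camY, float camZ,
                 float lodThreshold,
                 std::vector<const std::vector<float>*>& toDraw,
                 std::vector<OctreeNode*>& toStream) {
    bool isLeaf = node.children[0] == nullptr;
    if (!isLeaf) {
        if (screenSpaceSize(node, camX, camY, camZ) < lodThreshold) {
            toDraw.push_back(&node.lodStars);    // LOD cache suffices at this distance
        } else {
            for (OctreeNode* child : node.children)
                if (child) selectNodes(*child, camX, camY, camZ, lodThreshold, toDraw, toStream);
        }
        return;
    }
    if (node.resident) {
        toDraw.push_back(&node.allStars);        // full-resolution leaf data
    } else {
        toStream.push_back(&node);               // ask the streaming thread for it
        toDraw.push_back(&node.lodStars);        // draw the cached subset meanwhile
    }
}
```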
|
13 |
Visualisation et traitements interactifs de grilles régulières 3D haute-résolution virtualisées sur GPU. Application aux données biomédicales pour la microscopie virtuelle en environnement HPC. / Interactive visualisation and processing of high-resolution regular 3D grids virtualised on GPU. Application to biomedical data for virtual microscopy in HPC environment. Courilleau, Nicolas. 29 August 2019 (has links)
La visualisation de données est un aspect important de la recherche scientifique dans de nombreux domaines. Elle permet d'aider à comprendre les phénomènes observés voire simulés et d'en extraire des informations à des fins notamment de validations expérimentales ou tout simplement pour de la revue de projet. Nous nous intéressons dans le cadre de cette étude doctorale à la visualisation de données volumiques en imagerie médicale et biomédicale, obtenues grâce à des appareils d'acquisition générant des champs scalaires ou vectoriels représentés sous forme de grilles régulières 3D. La taille croissante des données, due à la précision grandissante des appareils d'acquisition, impose d'adapter les algorithmes de visualisation afin de pouvoir gérer de telles volumétries. De plus, les GPU utilisés en visualisation de données volumiques, se trouvant être particulièrement adaptés à ces problématiques, disposent d'une quantité de mémoire très limitée comparée aux données à visualiser. La question se pose alors de savoir comment dissocier les unités de calcul, permettant la visualisation, de celles de stockage. Les algorithmes se basant sur le principe dit "out-of-core" sont les solutions permettant de gérer de larges ensembles de données volumiques. Dans cette thèse, nous proposons un pipeline complet permettant de visualiser et de traiter, en temps réel sur GPU, des volumes de données dépassant très largement les capacités mémoire des CPU et GPU. L'intérêt de notre pipeline provient de son approche de gestion de données "out-of-core" permettant de virtualiser la mémoire, qui se trouve être particulièrement adaptée aux données volumiques. De plus, cette approche repose sur une structure d'adressage virtuel entièrement gérée et maintenue sur GPU. Nous validons notre modèle grâce à plusieurs applications de visualisation et de traitement en temps réel. Tout d'abord, nous proposons un microscope virtuel interactif permettant la visualisation 3D auto-stéréoscopique de piles d'images haute résolution. Puis nous validons l'adaptabilité de notre structure à tous types de données grâce à un microscope virtuel multimodal. Enfin, nous démontrons les capacités multi-rôles de notre structure grâce à une application de visualisation et de traitement concourant en temps réel.
/ Data visualisation is an essential aspect of scientific research in many fields. It helps to understand observed or even simulated phenomena and to extract information from them for purposes such as experimental validation or simply for project review. The focus of this thesis is the visualisation of volume data in medical and biomedical imaging. The acquisition devices used to acquire the data generate scalar or vector fields represented in the form of regular 3D grids. The increasing accuracy of the acquisition devices implies an increasing size of the volume data, which requires adapting the visualisation algorithms in order to manage such volumes. Moreover, visualisation mostly relies on GPUs because they are well suited to such problems. However, they possess a very limited amount of memory compared to the generated volume data. The question then arises as to how to dissociate the computation units, which perform the visualisation, from the storage units. Algorithms based on the so-called "out-of-core" principle are the solutions for managing large volume data sets. In this thesis, we propose a complete GPU-based pipeline allowing real-time visualisation and processing of volume data that are significantly larger than the CPU and GPU memory capacities. The interest of the pipeline comes from its GPU-based out-of-core addressing structure, which virtualises the memory and is well suited to volume data management. We validate our approach using different real-time applications of visualisation and processing. First, we propose an interactive virtual microscope allowing 3D auto-stereoscopic visualisation of stacks of high-resolution images. Then, we verify the adaptability of our structure to all data types with a multimodal virtual microscope. Finally, we demonstrate the multi-role capabilities of our structure through a concurrent real-time visualisation and processing application.
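For illustration, a minimal CPU-side C++ sketch of the virtual-addressing idea summarised above (the thesis maintains such a structure on the GPU): a page table maps virtual brick coordinates to slots of a fixed-size brick cache, and a cache miss triggers loading the brick from out-of-core storage. The brick-id encoding, the round-robin eviction and the loader callback are assumptions.

```cpp
// Minimal sketch of a virtualised volume: page-table lookup plus brick cache.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

class VirtualVolume {
public:
    using BrickLoader = std::function<void(std::uint64_t brickId, std::uint8_t* dst)>;

    VirtualVolume(int brickSize, int cacheBricks, BrickLoader loader)
        : brickSize_(brickSize),
          cache_(static_cast<std::size_t>(cacheBricks) * brickSize * brickSize * brickSize),
          slotToBrick_(cacheBricks, kEmpty),
          loader_(std::move(loader)) {}

    // Samples one voxel at a virtual (x, y, z) coordinate, paging in its brick on a miss.
    std::uint8_t sample(int x, int y, int z) {
        std::uint64_t id = brickId(x / brickSize_, y / brickSize_, z / brickSize_);
        int slot = resolve(id);
        std::size_t voxelsPerBrick = static_cast<std::size_t>(brickSize_) * brickSize_ * brickSize_;
        std::size_t local = static_cast<std::size_t>(z % brickSize_) * brickSize_ * brickSize_ +
                            static_cast<std::size_t>(y % brickSize_) * brickSize_ +
                            static_cast<std::size_t>(x % brickSize_);
        return cache_[slot * voxelsPerBrick + local];
    }

private:
    static constexpr std::uint64_t kEmpty = ~0ull;

    static std::uint64_t brickId(int bx, int by, int bz) {
        return (static_cast<std::uint64_t>(bz) << 40) |
               (static_cast<std::uint64_t>(by) << 20) |
                static_cast<std::uint64_t>(bx);           // assumes < 2^20 bricks per axis
    }

    // Page-table lookup: returns the cache slot of the brick, loading it on a miss.
    int resolve(std::uint64_t id) {
        auto it = pageTable_.find(id);
        if (it != pageTable_.end()) return it->second;
        int slot = nextVictim_;                           // simple round-robin eviction
        nextVictim_ = (nextVictim_ + 1) % static_cast<int>(slotToBrick_.size());
        if (slotToBrick_[slot] != kEmpty) pageTable_.erase(slotToBrick_[slot]);
        std::size_t voxelsPerBrick = static_cast<std::size_t>(brickSize_) * brickSize_ * brickSize_;
        loader_(id, cache_.data() + static_cast<std::size_t>(slot) * voxelsPerBrick);
        slotToBrick_[slot] = id;
        pageTable_[id] = slot;
        return slot;
    }

    int brickSize_;
    std::vector<std::uint8_t> cache_;                     // all cached bricks, back to back
    std::vector<std::uint64_t> slotToBrick_;              // which brick occupies each slot
    std::unordered_map<std::uint64_t, int> pageTable_;    // virtual brick id -> cache slot
    BrickLoader loader_;
    int nextVictim_ = 0;
};
```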
|
14 |
Out-of-Core GPU Path Tracing on Large Instanced Scenes via Geometry Streaming. Berchtold, Jeremy. 01 June 2022 (has links) (PDF)
We present a technique for out-of-core GPU path tracing of arbitrarily large scenes that is compatible with hardware-accelerated ray tracing. Our technique improves upon previous works by subdividing the scene spatially into streamable chunks that are loaded using a priority system that maximizes ray throughput and minimizes GPU memory usage. This allows for arbitrarily large scaling of scene complexity. Our system required under 19 minutes to render a solid-color version of Disney's Moana Island scene (39.3 million instances, 261.1 million unique quads, and 82.4 billion instanced quads) at a resolution of 1024x429 and 1024 spp on an RTX 5000 (24GB of memory in total, 22GB used: a 13GB geometry cache, with the remainder for temporary buffers and storage) (Wald et al.). As a scalability test, our system rendered 26 Moana Island scenes without multi-level instancing (1.02 billion instances, 2.14 trillion instanced quads, ~230GB if all resident) in under 1h 28m. Compared to state-of-the-art hardware-accelerated renders of the Moana Island scene, our system can render larger scenes on a single GPU. Our system is faster than the previous out-of-core approach and is able to render larger scenes than previous in-core approaches given the same memory constraints (Hellmuth, Zellman et al., Wald).
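As an illustration of the priority-driven streaming described above, a minimal C++ sketch (not the paper's system): rays are queued per spatial chunk, and the resident set is refilled with whichever chunks currently have the most waiting rays, so memory is spent where it maximises ray throughput. The chunk loading, eviction and tracing callbacks are stubs, and the ray representation is an assumption.

```cpp
// Minimal sketch of ray-queue-driven chunk scheduling for out-of-core tracing.
#include <cstddef>
#include <vector>

struct Ray { float origin[3]; float direction[3]; };

struct Chunk {
    std::vector<Ray> waitingRays;     // rays that need this chunk's geometry next
    bool resident = false;            // geometry currently uploaded to the GPU
};

// Picks the next chunk to load: the non-resident chunk with the most queued rays.
int nextChunkToLoad(const std::vector<Chunk>& chunks) {
    int best = -1;
    std::size_t bestCount = 0;
    for (int i = 0; i < static_cast<int>(chunks.size()); ++i) {
        if (!chunks[i].resident && chunks[i].waitingRays.size() > bestCount) {
            best = i;
            bestCount = chunks[i].waitingRays.size();
        }
    }
    return best;                      // -1 means no chunk has pending work
}

// One scheduling round: load the highest-priority chunk (evicting another if the
// budget is full), trace its queued rays, and let the callback requeue survivors.
void scheduleRound(std::vector<Chunk>& chunks, int residentBudget, int& residentCount,
                   void (*loadChunk)(int), void (*evictChunk)(int),
                   void (*traceRays)(int, std::vector<Ray>&)) {
    int next = nextChunkToLoad(chunks);
    if (next < 0) return;
    if (residentCount >= residentBudget) {
        // Evict the resident chunk with the fewest waiting rays to make room.
        int victim = -1;
        std::size_t fewest = static_cast<std::size_t>(-1);
        for (int i = 0; i < static_cast<int>(chunks.size()); ++i) {
            if (chunks[i].resident && chunks[i].waitingRays.size() < fewest) {
                victim = i;
                fewest = chunks[i].waitingRays.size();
            }
        }
        if (victim >= 0) { evictChunk(victim); chunks[victim].resident = false; --residentCount; }
    }
    loadChunk(next);
    chunks[next].resident = true;
    ++residentCount;
    traceRays(next, chunks[next].waitingRays);   // rays leaving the chunk are requeued elsewhere
    chunks[next].waitingRays.clear();
}
```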
|
15 |
ANALYSIS OF VERY LARGE SCALE IMAGE DATA USING OUT-OF-CORE TECHNIQUE AND AUTOMATED 3D RECONSTRUCTION USING CALIBRATED IMAGES. Hassan Raju, Chandrashekara. 28 September 2007 (has links)
No description available.
|
16 |
Integrated compiler optimizations for tensor contractions. Gao, Xiaoyang. 07 January 2008 (has links)
No description available.
|
17 |
Real-time Visualization of Massive 3D Models on GPU Parallel Architectures. Peng, Chao. 24 April 2013 (has links)
Real-time rendering of massive 3D models has been recognized as a challenging task due to the limited computational power and memory available in a workstation. Most existing acceleration techniques, such as mesh simplification algorithms with hierarchical data structures, suffer from their inherently sequential execution. As data complexity increases due to fundamental advances in modeling and simulation technologies, 3D models become highly complex and require gigabytes of storage. Consequently, visualizing such large datasets becomes a computationally intensive process where sequential solutions are unable to satisfy the demands of real-time rendering.
Recently, the Graphics Processing Unit (GPU) has been praised as a massively parallel architecture, not only for its significant improvements in performance but also for its programmability for general-purpose computation. Today's GPUs allow researchers to solve problems by delivering fine-grained parallel implementations. In this dissertation, I concentrate on the design of parallel algorithms for real-time rendering of massive 3D polygonal models targeting modern GPU architectures. As a result, the delivered rendering system supports high-performance visualization of 3D models composed of hundreds of millions of polygons on a single commodity workstation. / Ph. D.
|
18 |
Out-of-Core Multi-Resolution Volume Rendering of Large Data Sets. Lundell, Fredrik. January 2011 (has links)
A modality device can today capture high-resolution volumetric data sets, and as data resolutions increase, so do the challenges of processing volumetric data through a visualization pipeline. Standard volume rendering pipelines often use a graphics processing unit (GPU) to accelerate rendering performance by taking advantage of the parallel architecture of such devices. Unfortunately, graphics cards have limited amounts of video memory (VRAM), causing a bottleneck in a standard pipeline. Multi-resolution techniques can be used to efficiently modify the rendering pipeline, allowing each sub-domain within the volume to be represented at a different resolution. The active resolution distribution is temporarily stored in VRAM for rendering, and the inactive parts are stored on secondary memory layers such as system RAM or disk. The active resolution set can be optimized to produce high-quality renderings while minimizing the amount of storage required. This is done by using a dynamic compression scheme which optimizes the visual quality by evaluating user-input data. The optimized resolution of each sub-domain is then streamed on demand to VRAM from the secondary memory layers. Rendering a multi-resolution data set requires some extra care at the boundaries between sub-domains. To avoid artifacts, an intrablock interpolation (II) sampling scheme capable of creating smooth transitions between sub-domains at arbitrary resolutions can be used. The result is a highly optimized rendering pipeline complemented by a preprocessing pipeline, together capable of rendering large volumetric data sets in real time.
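For illustration, a minimal C++ sketch of the resolution-selection step described above (not the thesis implementation): every sub-domain starts at its coarsest level and is greedily refined in priority order until a VRAM budget is exhausted. The priority values, the per-level sizes and the budget are assumptions, and each block is assumed to provide at least one level.

```cpp
// Minimal sketch of greedy per-block resolution selection under a VRAM budget.
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

struct Block {
    std::vector<std::size_t> levelBytes;  // memory cost per resolution level, coarse -> fine
    float priority = 0.0f;                // importance derived from user input / transfer function
    int chosenLevel = 0;                  // index into levelBytes, starts at the coarsest level
};

// Greedy refinement: repeatedly refine the highest-priority block whose next
// level still fits into the remaining budget. Returns total bytes used.
std::size_t selectResolutions(std::vector<Block>& blocks, std::size_t budgetBytes) {
    std::size_t used = 0;
    for (const Block& b : blocks) used += b.levelBytes[0];    // coarsest level of every block

    using Item = std::pair<float, std::size_t>;               // (priority, block index)
    std::priority_queue<Item> candidates;
    for (std::size_t i = 0; i < blocks.size(); ++i) candidates.push({blocks[i].priority, i});

    while (!candidates.empty()) {
        std::size_t i = candidates.top().second;
        candidates.pop();
        Block& b = blocks[i];
        if (b.chosenLevel + 1 >= static_cast<int>(b.levelBytes.size())) continue;  // already finest
        std::size_t extra = b.levelBytes[b.chosenLevel + 1] - b.levelBytes[b.chosenLevel];
        if (used + extra > budgetBytes) continue;              // this refinement does not fit
        b.chosenLevel += 1;
        used += extra;
        candidates.push({b.priority, i});                      // may be refined again later
    }
    return used;
}
```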
|
19 |
Fluxo do Vetor Gradiente e Modelos Deformáveis Out-of-Core para Segmentação de Imagens / Gradient vector flow and out-of-core image segmentation by deformable models. Leandro Schaeffer Marturelli. 07 April 2006 (has links)
Limitações de memória principal podem diminuir a performance de aplicativos de segmentação de imagens para grandes volumes ou mesmo impedir seu funcionamento. Nesse trabalho nós integramos o modelo das T-Superfícies com um método de extração de iso-superfícies Out-of-Core, formando um esquema de segmentação para imagens de grande volume. A T-Superfície é um modelo deformável paramétrico baseado em uma triangulação do domínio da imagem, um modelo discreto de superfície e um threshold da imagem. Técnicas de extração de iso-superfícies foram implementadas usando o método Out-of-Core que usa estruturas kd-tree, chamadas técnicas de Meta-Células. Usando essas técnicas, apresentamos uma versão Out-of-Core de um método de segmentação baseado nas T-Superfícies e em iso-superfícies. O Fluxo do Vetor Gradiente (GVF) é um campo vetorial baseado em equações diferenciais parciais. Esse método é aplicado em conjunto com o modelo das Snakes para segmentação de imagens através de extração de contorno. A idéia principal é usar uma equação de difusão-reação para gerar um novo campo de força externa que deixa o modelo menos sensível à inicialização e melhora a habilidade das Snakes de extrair bordas com concavidades acentuadas. Nesse trabalho, primeiramente serão revistos resultados sobre condições de otimização global do GVF e feitas algumas considerações numéricas. Além disso, serão apresentadas uma análise analítica do GVF e uma análise no domínio da frequência, as quais oferecem elementos para discutir a dependência dos parâmetros do modelo. Ainda, será discutida a solução numérica do GVF baseada no método SOR. Observamos também que o modelo pode ser estendido para Domínios Multiplamente Conexos e aplicamos uma metodologia de pré-processamento que pode tornar o método mais eficiente. / Main memory limitations can lower the performance of segmentation applications for large images or even prevent them from running at all. In this work we integrate the T-Surfaces model and Out-of-Core isosurface generation methods into a general framework for the segmentation of large image volumes. T-Surfaces is a parametric deformable model based on a triangulation of the image domain, a discrete surface model and an image threshold. Isosurface generation techniques have been implemented through an Out-of-Core method that uses a kd-tree structure, called the Meta-Cell technique. By using the Meta-Cell framework, we present an Out-of-Core version of a segmentation method based on T-Surfaces and isosurface extraction. The Gradient Vector Flow (GVF) is an approach based on partial differential equations. This method has been applied together with snake models for image segmentation through boundary extraction. The key idea is to use a diffusion-reaction PDE in order to generate a new external force field that makes snake models less sensitive to initialization and improves the snakes' ability to move into boundary concavities. In this work, we first review basic results about global optimization conditions of the GVF and numerical considerations of usual GVF schemes. Besides, we present an analytical analysis of the GVF and a frequency domain analysis, which give elements to discuss the dependency on the parameter values. We also discuss the numerical solution of the GVF based on an SOR method. We observe that the model can be extended to multiply connected domains and apply an image preprocessing approach in order to increase the GVF efficiency.
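For reference, the standard Gradient Vector Flow formulation of Xu and Prince that the abstract builds on: the field v(x, y) = (u(x, y), v(x, y)) minimises the functional below, and its Euler-Lagrange equations are the diffusion-reaction system that is solved numerically (e.g. by SOR). Here f denotes an edge map derived from the image and, as in the original paper, v denotes both the field and its second component.

```latex
% Energy functional minimised by the GVF field v(x,y) = (u(x,y), v(x,y)),
% where f is an edge map computed from the image and \mu controls smoothing:
E(\mathbf{v}) = \iint \mu \left( u_x^2 + u_y^2 + v_x^2 + v_y^2 \right)
              + |\nabla f|^2 \, \lvert \mathbf{v} - \nabla f \rvert^2 \; dx \, dy
% Its Euler-Lagrange equations, the diffusion-reaction system solved numerically:
\mu \nabla^2 u - (u - f_x)\,\left(f_x^2 + f_y^2\right) = 0, \qquad
\mu \nabla^2 v - (v - f_y)\,\left(f_x^2 + f_y^2\right) = 0
```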
|
20 |
Adaptive Isoflächenextraktion aus großen Volumendaten. Helbig, Andreas. 17 September 2007 (has links)
Aus besonders großen Volumendaten extrahierte Isoflächen besitzen eine kaum beherrschbare Anzahl an Polygonen, weshalb die Extraktion von adaptiven, also bezüglich einer geometrischen Fehlermetrik reduzierten, Isoflächen wünschenswert ist. Ein häufiges Problem gängiger adaptiver Verfahren ist, dass sie Datenstrukturen verwenden, die gerade für große Daten besonders viel Hauptspeicher benötigen und daher nicht direkt anwendbar sind. Nachdem auf die Grundlagen zur Isoflächenextraktion eingegangen wurde, wird im Rahmen dieser Diplomarbeit ein auf Dual Contouring basierendes Verfahren entworfen, das die adaptive Isoflächenextraktion aus sehr großen Volumendaten auch bei begrenztem Hauptspeicher mit einem zeitlich vertretbaren Aufwand erlaubt. Der verwendete Octree wird dazu nur implizit aufgebaut und temporär nicht benötigte Daten werden unter Nutzung von Out-of-core-Techniken in den Sekundärspeicher ausgelagert. Die verschiedenen Implementierungsansätze werden unter Berücksichtigung maximaler Effizienz verglichen. Die Tauglichkeit des Verfahrens wird an verschiedenen sehr großen Testdatensätzen nachgewiesen. / Isosurfaces extracted from massive volume data consist of a barely manageable number of polygons. Hence adaptive isosurfaces, i.e. isosurfaces reduced with respect to a geometric error metric, are desirable. Popular adaptive methods frequently require an amount of main memory that makes them infeasible for large data sets. After covering the fundamentals of isosurface extraction, this thesis develops a Dual Contouring-based method that allows adaptive isosurfaces to be extracted from very large volume data even with limited main memory and at an acceptable cost in time. The required octree is only built implicitly, and temporarily unneeded data is swapped out to secondary storage using out-of-core techniques. The different implementation approaches are compared with regard to maximum efficiency, and the suitability of the method is demonstrated on several very large test data sets.
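For illustration, a minimal C++ sketch of the Dual Contouring step this method builds on (not the thesis implementation): the isosurface crossings on the edges of one cell are estimated by linear interpolation and a single dual vertex is placed inside the cell. The full method minimises a quadratic error function (QEF) from Hermite data; the centroid used here is a common simplification, and the corner ordering is an assumption.

```cpp
// Minimal sketch of dual-vertex placement for one cell in Dual Contouring,
// using the centroid of edge crossings instead of a full QEF minimisation.
#include <array>

struct Vec3 { float x, y, z; };

// Corner offsets of a unit cell (x = bit 0, y = bit 1, z = bit 2) and its
// 12 edges as corner-index pairs.
static const std::array<Vec3, 8> kCorners = {{
    {0,0,0}, {1,0,0}, {0,1,0}, {1,1,0}, {0,0,1}, {1,0,1}, {0,1,1}, {1,1,1}}};
static const std::array<std::array<int, 2>, 12> kEdges = {{
    {0,1},{2,3},{4,5},{6,7}, {0,2},{1,3},{4,6},{5,7}, {0,4},{1,5},{2,6},{3,7}}};

// Places the dual vertex of a cell given its eight corner scalar values.
// Returns false if the cell does not cross the isosurface.
bool dualVertex(const std::array<float, 8>& corner, float isovalue, Vec3& out) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    int crossings = 0;
    for (const auto& e : kEdges) {
        float a = corner[e[0]] - isovalue;
        float b = corner[e[1]] - isovalue;
        if ((a < 0.0f) == (b < 0.0f)) continue;        // edge does not cross the isosurface
        float t = a / (a - b);                         // linear interpolation along the edge
        const Vec3& p = kCorners[e[0]];
        const Vec3& q = kCorners[e[1]];
        sum.x += p.x + t * (q.x - p.x);
        sum.y += p.y + t * (q.y - p.y);
        sum.z += p.z + t * (q.z - p.z);
        ++crossings;
    }
    if (crossings == 0) return false;
    out = {sum.x / crossings, sum.y / crossings, sum.z / crossings};  // centroid of crossings
    return true;
}
```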
|