About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A novel fully progressive lossy-to-lossless coder for arbitrarily-connected triangle-mesh models of images and other bivariate functions

Guo, Jiacheng 16 August 2018 (has links)
A new progressive lossy-to-lossless coding method for arbitrarily-connected triangle-mesh models of bivariate functions is proposed. The algorithm employs a novel representation of a mesh dataset called a bivariate-function description (BFD) tree, and codes the tree in an efficient manner. The proposed coder yields a particularly compact description of the mesh connectivity by coding only the constrained edges that are not locally preferred Delaunay (locally PD). Experimental results show our method to be vastly superior to previously proposed coding frameworks in both lossless and progressive coding performance. For lossless coding, the proposed method produces coded bitstreams that are 27.3% and 68.1% smaller than those generated by the Edgebreaker and Wavemesh methods, respectively. Progressive coding performance is measured in terms of the PSNR of function reconstructions generated from the meshes decoded at intermediate stages. The experimental results show that the function approximations obtained with the proposed approach are vastly superior to those yielded by the image tree (IT) method, the scattered data coding (SDC) method, the average-difference image tree (ADIT) method, and the Wavemesh method, with average improvements of 4.70 dB, 10.06 dB, 2.92 dB, and 10.19 dB in PSNR, respectively. The proposed coding approach can also be combined with a mesh generator to form a highly effective mesh-based image coding system, which is evaluated against the popular JPEG2000 codec on images that are nearly piecewise smooth. The images are compressed with the mesh-based image coder and the JPEG2000 codec at fixed compression rates, and the quality of the resulting reconstructions is measured in terms of PSNR. The images obtained with our method are shown to have better quality than those produced by the JPEG2000 codec, with an average improvement of 3.46 dB. / Graduate
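The connectivity-coding idea above hinges on testing whether an edge is locally preferred Delaunay. As a hypothetical illustration (not the thesis's actual coder), the standard incircle predicate underlying such a test can be sketched as follows:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), whose vertices must be given in counter-clockwise order."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
           - (bx * bx + by * by) * (ax * cy - ay * cx)
           + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0.0

def is_locally_delaunay(a, b, c, d):
    """The edge (a, b), shared by triangles (a, b, c) and (b, a, d), is
    locally Delaunay when the opposite vertex d lies outside (or on)
    the circumcircle of (a, b, c)."""
    return not in_circumcircle(a, b, c, d)
```

An edge failing this test (and not removable because it is constrained) is exactly the kind of edge the coder must spend bits on.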
2

Geometric Reasoning with Mesh-based Shape Representation in Product Development

Adhikary, Nepal January 2013 (has links) (PDF)
Triangle meshes have become an increasingly popular shape representation. Given the ease of standardization they allow and the proliferation of devices (scanners, range images) that capture and output shape information as meshes, this representation is now used in applications such as virtual reality, medical imaging, rapid prototyping, digital art and entertainment, simulation and analysis, and product design and development. In product development, manipulation of mesh models is required in applications such as visualization, analysis, simulation, and rapid prototyping. This manipulation includes annotation, interactive viewing, slicing, re-meshing, mesh optimization, mesh segmentation, simplification, and editing. Of these, editing has received the least attention. A mesh model often requires editing, either locally or globally, depending on the application. With the increased use of meshes, it is desirable to have formal reasoning tools that enable manipulation of mesh models in product development. A mesh model may contain artifacts such as self-intersections, overlapping triangles, inconsistent triangle normals, and gaps or holes with or without islands, so the mesh must be repaired before further processing. An automatic algorithm is proposed to repair and fill arbitrary holes while maintaining curvature continuity across the boundaries of the hole. The algorithm uses slices across the hole to first identify curves that bridge the hole. These curves are then used to find the surface patch that fills the hole. The proposed algorithm works for arbitrary holes in any mesh model, irrespective of the type of underlying surface, and is able to preserve features of the model that are missing from the input. Since editing during product development is mostly feature based, an automatic algorithm is proposed to recognize shape features by directly clustering the triangles constituting a feature in a mesh model.
Shape features addressed in the thesis are volumetric features, associated with either the addition or the removal of a finite volume. The algorithm involves two steps: isolating features in 2D slices, followed by a 3D traversal to cluster all the triangles in the feature. Editing a mesh model mainly implies editing local volumetric features in that model. An automatic algorithm is proposed for parametric editing of volumetric features in the mesh model. The proposed algorithm eliminates the need for the original CAD model when manipulating any volumetric feature in the mesh model based on feature parameters. An automatic algorithm to manipulate global shape parameters of the object using the mesh model is also developed. Global shape parameters include thickness, drafts, and axes of symmetry. As mesh models do not explicitly carry this information, global editing of mesh models (other than for visualization) has not been attempted thus far. This thesis proposes the use of a mid-surface to identify and manipulate global shape parameters for a class of objects classified as thin-walled objects. Mid-curves are first identified on slices of the part, and the mid-surface is then obtained from these mid-curves. Results of the implementation are presented and discussed, along with the scope for future work.
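As an illustration of the mesh-repair setting discussed above, a minimal sketch of hole detection, i.e. finding the boundary loops that a hole-filling algorithm starts from, might look as follows. It assumes a manifold mesh in which every boundary vertex has exactly two boundary neighbours; the function names are our own, not the thesis's:

```python
from collections import defaultdict

def boundary_edges(triangles):
    """Edges used by exactly one triangle are boundary edges, i.e. they
    lie on the rim of a hole (or of an open mesh)."""
    count = defaultdict(int)
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n == 1]

def hole_loops(triangles):
    """Chain boundary edges into closed vertex loops; each loop bounds
    one hole.  Assumes every boundary vertex has exactly two boundary
    neighbours (manifold mesh)."""
    adj = defaultdict(list)
    for a, b in boundary_edges(triangles):
        adj[a].append(b)
        adj[b].append(a)
    seen, loops = set(), []
    for start in adj:
        if start in seen:
            continue
        loop, prev, cur = [start], None, start
        while True:
            seen.add(cur)
            nxt = next(v for v in adj[cur] if v != prev)
            if nxt == start:
                break
            loop.append(nxt)
            prev, cur = cur, nxt
        loops.append(loop)
    return loops
```

A hole filler would then bridge each returned loop with new triangles, which is where the thesis's slice-based curve construction comes in.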
3

Mesh models of images, their generation, and their application in image scaling

Mostafavian, Ali 22 January 2019 (has links)
Triangle-mesh modeling, as one of the approaches for representing images based on nonuniform sampling, has become quite popular and beneficial in many applications. In this thesis, image representation using triangle-mesh models and its application in image scaling are studied, and two new methods, namely the SEMMG and MIS methods, are proposed, each solving a different problem. In particular, the SEMMG method addresses the problem of image representation by producing effective mesh models for representing grayscale images through the minimization of squared error. The MIS method addresses the image-scaling problem for grayscale images that are approximately piecewise smooth, using triangle-mesh models. The SEMMG method, which is proposed for addressing the mesh-generation problem, is developed from an earlier work that uses a greedy-point-insertion (GPI) approach to generate a mesh model with an explicit representation of discontinuities (ERD). After in-depth analyses of two existing methods for generating ERD models, several weaknesses are identified and specifically addressed to improve the quality of the generated models, leading to the proposal of the SEMMG method. The performance of the SEMMG method is then evaluated by comparing the quality of the meshes it produces with those obtained by eight competing methods, namely, the error-diffusion (ED) method of Yang, the modified Garland-Heckbert (MGH) method, the ERDED and ERDGPI methods of Tu and Adams, the Garcia-Vintimilla-Sappa (GVS) method, the hybrid wavelet triangulation (HWT) method of Phichet, the binary space partition (BSP) method of Sarkis, and the adaptive triangular meshes (ATM) method of Liu. For this evaluation, the error between the original and reconstructed images obtained from each method under comparison is measured in terms of the PSNR.
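For reference, the PSNR figure used throughout these evaluations is derived from the mean squared error between the original and reconstructed images. A minimal sketch, assuming 8-bit grayscale images given as flat pixel sequences, is:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two equally-sized grayscale images given as
    flat sequences of pixel values; `peak` is the maximum pixel value
    (255 for 8-bit images)."""
    mse = sum((o - r) ** 2
              for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Higher is better; an improvement of a few dB, as reported below, corresponds to a substantially lower mean squared error.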
Moreover, in the case of the competing methods whose implementations are available, the subjective quality is compared in addition to the PSNR. Evaluation results show that the reconstructed images obtained from the SEMMG method are better than those obtained by the competing methods in terms of both PSNR and subjective quality. More specifically, in the case of the methods with implementations, the results collected from 350 test cases show that the SEMMG method outperforms the ED, MGH, ERDED, and ERDGPI schemes in approximately 100%, 89%, 99%, and 85% of cases, respectively. Moreover, in the case of the methods without implementations, we show that the PSNRs of the reconstructed images produced by the SEMMG method are on average 3.85, 0.75, 2, and 1.10 dB higher than those obtained by the GVS, HWT, BSP, and ATM methods, respectively. Furthermore, for a given PSNR, the SEMMG method is shown to produce much smaller meshes than the GVS and BSP methods, with approximately 65% to 80% fewer vertices and 10% to 60% fewer triangles, respectively. Therefore, the SEMMG method is shown to be capable of producing triangular meshes of higher quality and smaller size (i.e., fewer vertices or triangles) that can be effectively used for image representation. Besides the superior image approximations achieved with the SEMMG method, this work also makes contributions by addressing the problem of image scaling. For this purpose, the application of triangle-mesh models in image scaling is studied. Some of the mesh-based image-scaling approaches proposed to date employ mesh models associated with an approximating function that is continuous everywhere, which inevitably yields edge blurring in the process of image scaling.
Moreover, other mesh-based image-scaling approaches that employ approximating functions with discontinuities are often based on mesh simplification, where the method starts with an extremely large initial mesh, leading to very slow mesh generation with a high memory cost. In this thesis, however, we propose a new mesh-based image-scaling (MIS) method that, first, employs an approximating function with selected discontinuities to better maintain sharpness at edges. Second, unlike most other discontinuity-preserving mesh-based methods, the proposed MIS method is not based on mesh simplification. Instead, our MIS method employs a mesh-refinement scheme, starting from a very simple mesh and iteratively refining it until it reaches a desirable size. In developing the MIS method, the performance of our SEMMG method, which is proposed for image representation, is examined in the application of image scaling. Although the SEMMG method is not designed for solving the image-scaling problem, examining its performance in this application helps to better understand the potential shortcomings of using a mesh generator in image scaling. Through this examination, several shortcomings are found, and different techniques are devised to address them. By applying these techniques, a new effective mesh-generation method called MISMG is developed that can be used for image scaling. The MISMG method is then combined with a scaling transformation and a subdivision-based model-rasterization algorithm, yielding the proposed MIS method for scaling grayscale images that are approximately piecewise smooth.
The performance of our MIS method is then evaluated by comparing the quality of the scaled images it produces with those obtained from five well-known raster-based methods, namely, bilinear interpolation, the bicubic interpolation of Keys, the directional cubic convolution interpolation (DCCI) method of Zhou et al., the new edge-directed image interpolation (NEDI) method of Li and Orchard, and the recent super-resolution method using convolutional neural networks (SRCNN) of Dong et al. Since our main goal is to produce scaled images of higher subjective quality with the least amount of edge blurring, the quality of the scaled images is first compared through a subjective evaluation, followed by several objective evaluations. The results of the subjective evaluation show that the proposed MIS method was ranked best overall in almost 67% of the cases, with the best average rank of 2 out of 6, among 380 collected rankings from 20 images and 19 participants. Moreover, visual inspection of the scaled images obtained with the different methods shows that the proposed MIS method produces scaled images of better quality, with more accurate and sharper edges. Furthermore, in the case of the mesh-based image-scaling methods for which no implementation is available, the MIS method is conceptually compared, using theoretical analysis, to two mesh-based methods, namely, the subdivision-based image-representation (SBIR) method of Liao et al. and the curvilinear feature driven image-representation (CFDIR) method of Zhou et al. / Graduate
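The greedy-point-insertion (GPI) strategy underlying SEMMG-style mesh generation can be illustrated with a simplified one-dimensional analogue (this is not the thesis's algorithm, merely the core idea): start from the coarsest approximation and repeatedly insert the sample where the current piecewise-linear fit errs the most.

```python
def greedy_insert_1d(samples, budget):
    """1D analogue of greedy point insertion: start with the two
    endpoints as knots, then repeatedly insert the sample where the
    current piecewise-linear approximation has the largest absolute
    error, until `budget` knots are placed."""
    n = len(samples)
    knots = [0, n - 1]
    while len(knots) < budget:
        worst_i, worst_err = None, 0.0
        for a, b in zip(knots, knots[1:]):
            for i in range(a + 1, b):
                t = (i - a) / (b - a)
                approx = (1 - t) * samples[a] + t * samples[b]
                err = abs(samples[i] - approx)
                if err > worst_err:
                    worst_i, worst_err = i, err
        if worst_i is None:  # approximation is already exact
            break
        knots.append(worst_i)
        knots.sort()
    return knots
```

In the 2D mesh setting the insertion triggers a Delaunay retriangulation and the error is the squared difference between the image and the mesh's piecewise-linear interpolant, but the refine-where-worst loop is the same.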
4

GPGPU separation of opaque and transparent mesh polygons

Tännström, Ulf Nilsson January 2014 (has links)
Context: By doing a depth-prepass in a tiled forward renderer, pixels can be prevented from being shaded more than once, and more aggressive culling of lights that might contribute to tiles can be performed. In order to produce artifact-free rendering, only meshes containing fully opaque polygons can be included in the depth-prepass. This limits the benefit of the depth-prepass for scenes containing large, mostly opaque meshes that have some transparent portions. Objectives: The objective of this thesis was to classify the polygons of a mesh as either opaque or transparent using the GPU, and then to separate the polygons into two different vertex buffers depending on the classification. This allows all opaque polygons in a scene to be used in the depth-prepass, potentially increasing rendering performance. Methods: An implementation was written in OpenCL and used to measure the time it took to separate the polygons in meshes of different complexity. The polygon-separation times were then compared to the time it took to load the meshes into the game. The effect the polygon separation had on rendering times was also investigated. Results: The results showed that polygon-separation times were highly dependent on the number of polygons and the texture resolution. It took roughly 350 ms to separate a mesh with 100k polygons and a 2048x2048 texture, while the same mesh with a 1024x1024 texture took a quarter of the time. In the test scene used, rendering times differed only slightly. Conclusions: Whether the polygon separation should be performed when loading the mesh or when exporting it depends on the game. For games with a lower geometric and textural detail level it may be feasible to separate the polygons each time the mesh is loaded, but for most games it would be recommended to perform it once, when exporting the mesh. / Using a depth-prepass with a tiled forward renderer can reduce the time it takes to render a scene.
However, meshes containing transparent parts cannot be included in this prepass without introducing rendering artifacts. This limits the usefulness of the depth-prepass in scenarios with large meshes that contain small transparent portions. This thesis attempts to solve this by using the graphics card to split meshes into two parts: one containing only opaque polygons and one containing only transparent polygons.
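The classification step described in the Objectives can be sketched as follows. This is a CPU-side Python illustration only (the thesis uses OpenCL on the GPU), and it samples the texture's alpha channel at just the corner UVs and the centroid of each triangle, whereas a robust implementation would rasterize every texel the triangle covers; all names are illustrative.

```python
def classify_triangles(triangles_uv, alpha, w, h):
    """Split triangles into (opaque, transparent) lists by sampling the
    texture's alpha channel.  `triangles_uv` is a list of triangles,
    each a tuple of three (u, v) pairs in [0, 1); `alpha` is a
    row-major list of h*w alpha values in [0, 1]."""
    def sample(u, v):
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        return alpha[y * w + x]

    opaque, transparent = [], []
    for tri in triangles_uv:
        uvs = list(tri)
        # Also sample the centroid as a cheap interior probe.
        uvs.append((sum(u for u, _ in uvs) / 3.0,
                    sum(v for _, v in uvs) / 3.0))
        if all(sample(u, v) >= 1.0 for u, v in uvs):
            opaque.append(tri)
        else:
            transparent.append(tri)
    return opaque, transparent
```

The opaque list would then be written to its own vertex buffer and included in the depth-prepass, while the transparent list is rendered afterwards.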
5

Contributions to objective and subjective visual quality assessment of 3D models

GUO, Jinjiang 06 October 2016 (has links)
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey of the different sources of artifacts in digital graphics and of current objective and subjective visual quality assessments of those artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived-distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth to evaluate the performance of well-known geometric attributes and metrics in predicting the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively.
To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual qualities of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on optimal combinations of geometry and texture quality measures. The proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
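An "optimal combination" of geometry and texture measures of the kind described can be illustrated under the simplifying assumption of a single linear weight fitted by grid search against the subjective scores (the thesis's actual metrics are more elaborate; the function names are our own):

```python
def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def fit_combination(geom, tex, mos, steps=100):
    """Grid-search the weight alpha in
        metric = alpha * geom + (1 - alpha) * tex
    that maximizes |Pearson correlation| with the subjective scores
    (mean opinion scores, `mos`).  Returns (alpha, correlation)."""
    best_alpha, best_corr = 0.0, -1.0
    for k in range(steps + 1):
        alpha = k / steps
        combined = [alpha * g + (1 - alpha) * t for g, t in zip(geom, tex)]
        corr = abs(pearson(combined, mos))
        if corr > best_corr:
            best_alpha, best_corr = alpha, corr
    return best_alpha, best_corr
```

In practice the correlation would be computed on a training subset of the 136 models and validated on the rest to avoid overfitting the weight.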
6

Efficient generation and rendering of tube geometry in Unreal Engine: Utilizing compute shaders for 3D line generation

Woxler, Platon January 2021 (has links)
Massive graph visualization in an immersive environment, such as virtual reality (VR) or augmented reality (AR), can improve users' understanding when exploring data in new ways. Making the most of such a visualization requires interactive components that are fast enough to remain responsive. By rendering the edges of the graph as shaded lines that imitate three-dimensional (3D) lines or tubes, one can circumvent technical limitations. This method works well enough on traditional two-dimensional (2D) monitors, but representing tubes as flat lines in a virtual environment (VE) makes for a less immersive user experience than visualizing true 3D geometry. In order to meet these requirements, i.e., speed and visual fidelity, we need a time-efficient way of producing tubular meshes. This thesis project explores how one can generate tubular geometry utilizing compute shaders in the modern game engine Unreal Engine (UE). Exploiting the parallel computing power of the graphics processing unit (GPU), we use compute shaders to generate a tubular mesh following a predetermined path. The result of the project is an open-source plugin for UE that can generate tubular geometry at rapid rates. While it gives no major advantage when generating smaller models, compared to a sequential implementation, the compute-shader implementation creates and renders models > 40× faster when generating 10⁶ tube segments. A secondary effect of generating most of the data on the GPU is that we avoid bottlenecks that can occur when surpassing the bandwidth of the central processing unit (CPU) to GPU data transfer. Using this tool, researchers can more easily explore information visualization in a VE. Furthermore, this thesis promotes extended development of mesh generation using compute shaders in UE.
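The tube generation described above can be sketched on the CPU as follows (the thesis implements this as a compute shader in UE; here a fixed up-vector replaces properly transported frames, so the path must not run parallel to the z-axis, and all names are illustrative):

```python
import math

def tube_mesh(path, radius=0.1, sides=8):
    """Sweep a `sides`-gon ring along the polyline `path` (a list of 3D
    points) and stitch consecutive rings with triangles.  Returns
    (vertices, triangles) with triangles as index triples."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v):
        l = math.sqrt(sum(x * x for x in v))
        return tuple(x / l for x in v)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    up = (0.0, 0.0, 1.0)
    verts, tris = [], []
    for i, p in enumerate(path):
        # Tangent from the neighbouring points (one-sided at the ends).
        t = norm(sub(path[min(i + 1, len(path) - 1)], path[max(i - 1, 0)]))
        side = norm(cross(t, up))
        upv = cross(side, t)  # unit length since side and t are orthonormal
        for k in range(sides):
            a = 2.0 * math.pi * k / sides
            verts.append(tuple(c + radius * (math.cos(a) * s + math.sin(a) * u)
                               for c, s, u in zip(p, side, upv)))
    # Two triangles per quad between ring i and ring i+1.
    for i in range(len(path) - 1):
        for k in range(sides):
            i0 = i * sides + k
            i1 = i * sides + (k + 1) % sides
            tris.append((i0, i1, i1 + sides))
            tris.append((i0, i1 + sides, i0 + sides))
    return verts, tris
```

Because every ring, and every quad, is computed independently, this maps naturally onto a compute shader with one thread per ring vertex, which is what makes the GPU version so much faster for large segment counts.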
