141

Global Illumination on Modern GPUs

Zhang, Fan January 2022 (has links)
This thesis implements Monte Carlo path tracing and voxel cone tracing for global illumination on the GPU and compares their performance and visual results. The Monte Carlo path tracing algorithm is implemented in CUDA to perform parallel computation on the GPU and accelerate rendering. Voxel cone tracing, a global illumination algorithm suited to real-time rendering, runs in OpenGL through the GPU graphics pipeline. The results show that Monte Carlo path tracing takes over 10 hours on a single CPU core and around 4 hours on 4 cores, while on the GPU it takes around 48 minutes; voxel cone tracing on the same GPU takes 2 ms. The images generated by Monte Carlo path tracing contain far more transparency, reflection, and shadow detail than those produced by the voxel cone tracing algorithm. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
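For orientation, the core of the Monte Carlo estimator compared here can be sketched in a few lines of C++. The scene intersection and hemisphere sampling below are hypothetical placeholders, and the thesis runs the equivalent per-pixel loop as a CUDA kernel rather than on the CPU:

```cpp
// Minimal Monte Carlo path tracing sketch (illustrative only; the thesis
// implements the equivalent per-pixel loop as a CUDA kernel).
// All scene types and routines here are hypothetical placeholders.
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, Vec3 b)   { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

struct Hit { bool valid; Vec3 emission, albedo; };

// Placeholder intersection and sampling: a real renderer would trace
// against actual geometry and importance-sample the BRDF.
Hit intersect(Vec3 /*origin*/, Vec3 /*dir*/) { return {true, {0, 0, 0}, {0.7, 0.7, 0.7}}; }
Vec3 sampleHemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    return {u(rng), u(rng), u(rng)};  // not normalized; sketch only
}

// Estimate radiance along one ray with a fixed bounce budget.
Vec3 radiance(Vec3 origin, Vec3 dir, int depth, std::mt19937& rng) {
    if (depth == 0) return {0, 0, 0};
    Hit h = intersect(origin, dir);
    if (!h.valid) return {0, 0, 0};                  // ray escaped the scene
    Vec3 bounceDir = sampleHemisphere(rng);           // diffuse bounce
    Vec3 incoming  = radiance(origin, bounceDir, depth - 1, rng);
    return add(h.emission, mul(h.albedo, incoming));  // emitted + reflected light
}

int main() {
    std::mt19937 rng(42);
    const int samplesPerPixel = 256;
    Vec3 sum{0, 0, 0};
    for (int s = 0; s < samplesPerPixel; ++s)          // average many random paths
        sum = add(sum, radiance({0, 0, 0}, {0, 0, -1}, 4, rng));
    Vec3 pixel = scale(sum, 1.0 / samplesPerPixel);
    std::printf("pixel estimate: %f %f %f\n", pixel.x, pixel.y, pixel.z);
}
```

Averaging hundreds of such random paths per pixel is what makes the estimate converge to a clean image, and also what makes the method so expensive on a single CPU core.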
142

A New Way for Mapping Texture onto 3D Face Model

Xiang, Changsheng January 2015 (has links)
No description available.
143

Photon tracing na GPU / Photon Tracing on GPU

Galacz, Roman January 2013 (has links)
The subject of this thesis is the acceleration of the photon mapping method on a graphics card. Photon mapping is a method for computing nearly realistic global illumination of a scene. The computation itself is relatively time-consuming, so accelerating it is a hot topic in the field of computer graphics. Photon mapping is described in detail, from photon tracing to rendering of the scene. The thesis then focuses on spatial subdivision structures, especially the uniform grid. The design and implementation of an application that computes photon mapping on the GPU, achieved through OpenGL and CUDA interoperability, are described in the next part of the thesis. Finally, the application is tested thoroughly, and the achieved results are reviewed in the conclusion of the thesis.
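To illustrate the uniform grid the thesis favours for spatial subdivision, a minimal CPU-side sketch might look like the following; the structure and names are hypothetical, and the thesis builds the corresponding grid on the GPU through OpenGL and CUDA interoperability:

```cpp
// Minimal uniform-grid sketch for photon gathering (illustrative only;
// the structure and names are hypothetical, and the thesis builds the
// equivalent grid on the GPU).
#include <cstdio>
#include <vector>

struct Photon { float x, y, z; float power; };

struct UniformGrid {
    float cellSize;
    int   res;                                   // cells per axis
    std::vector<std::vector<Photon>> cells;      // one bucket per cell

    UniformGrid(float cell, int resolution)
        : cellSize(cell), res(resolution), cells(resolution * resolution * resolution) {}

    // Map a world-space position to a flat cell index
    // (scene assumed to lie in [0, res*cellSize)^3).
    int index(float x, float y, float z) const {
        int ix = static_cast<int>(x / cellSize);
        int iy = static_cast<int>(y / cellSize);
        int iz = static_cast<int>(z / cellSize);
        return (iz * res + iy) * res + ix;
    }

    void insert(const Photon& p) { cells[index(p.x, p.y, p.z)].push_back(p); }

    // Sum photon power in the cell containing (x, y, z); a real estimator
    // would also visit neighbouring cells within the gather radius.
    float gather(float x, float y, float z) const {
        float sum = 0.0f;
        for (const Photon& p : cells[index(x, y, z)]) sum += p.power;
        return sum;
    }
};

int main() {
    UniformGrid grid(1.0f, 8);
    grid.insert({2.5f, 3.1f, 0.4f, 0.2f});
    grid.insert({2.7f, 3.4f, 0.6f, 0.3f});
    std::printf("gathered power: %f\n", grid.gather(2.6f, 3.2f, 0.5f));
}
```

Mapping a photon to its cell is a constant-time operation, which is one reason the uniform grid is attractive for GPU construction compared with deeper hierarchical structures.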
144

Generation of 3D autostereoscopic integral images using computer simulated imaging systems

Salih, Shafik January 2015 (has links)
The production of artificial three-dimensional (3-D) images has been the aim of much research over hundreds of years. 3-D images are images that create a sense of depth when viewed; they are closer to real-world scenes than 2-D images because of the 3-D effect, or sense of depth, that they provide. The sense of depth can be caused by binocular cues, including convergence and parallax. Convergence is created by the difference between the viewing-axis angles of the left and right eyes. Parallax is the effect of one eye seeing a view of the scene that is inherently shifted relative to the view seen by the other eye. Several techniques have targeted the creation of 3-D images with these cues. A technique is preferred when it can create 3-D images that the viewer can watch without wearing special glasses and without viewer fatigue. Integral photography, invented in 1908, is able to meet these requirements, and several techniques, research projects, and studies based on it have been published. The purposes of this thesis include the computer simulation of flexible integral photography systems; the computer generation of good-quality static and animated 3-D integral images using the simulated systems; the optimisation of the generation process to be more accurate, less expensive, more effective, and faster; and the production of a portable specialist software tool to achieve these targets. New techniques and algorithms are needed to meet these purposes. A literature survey was carried out on the research and studies closest to the subject of computer-generated integral images, and these were compared with the new techniques introduced in this study to demonstrate their advantages and necessity. The technique closest to the suggested ones was implemented with more developed tools so that the quality of its integral images could be compared with that of the integral images produced by the tools and algorithms proposed in this thesis. A method to simulate an imaging system and produce integral images based on the new technique of dividing the view volume of the scene is introduced, explained, demonstrated, and implemented in a program designed for this purpose. To optimise processing time and image quality, this method is further developed, new features are added to the resulting integral images, and better performance is achieved by introducing the method of Displacing the Virtual Camera Target (DCT). Application software with a graphical user interface was designed and implemented to allow users to select the required parameters of the imaging system and the required features of the resulting integral images. The software tool, which is based on the developed techniques and employs OpenGL, is useful for simulating imaging systems and tuning their parameters before the actual systems are built, saving time and materials in their design. The introduced techniques and software tools are faster, more effective, and cheaper original methods for optimising both integral imaging systems and the quality of integral images. Because they employ the portable application programming interface OpenGL, these software tools can be used on a wide range of devices and platforms. With these methods, integral imaging systems were simulated and optimised, and good-quality static and animated integral images were created.
145

Proposta para aceleração de desempenho de algoritmos de visão computacional em sistemas embarcados / A proposal for accelerating the performance of computer vision algorithms in embedded systems

Curvello, André Márcio de Lima 10 June 2016 (has links)
This work presents a benchmark for evaluating the performance of the embedded WandBoard Quad platform in image processing, considering the use of its Vivante GC2000 GPU to execute routines written in OpenGL ES 2.0. To this end, image filters were executed on both the CPU and the GPU. Filters are among the most common operations in image processing; they work through convolution, a technique built on successive matrix multiplications, which explains the high computational cost of image filtering algorithms. The use of the GPU in embedded systems is therefore an interesting alternative that makes image processing feasible on such systems: besides exploiting a resource present in a wide range of devices on the market, it can accelerate the image processing algorithms that underpin computer vision applications such as facial recognition and gesture recognition, applications that are increasingly in demand in modern embedded systems. To support this goal, comparative performance studies were carried out between systems and between libraries that help exploit the resources of multicore processors. To demonstrate the potential of the subject and support the proposal of this work, a benchmark was run as a sequence of tests targeting a model application that executes the Sobel filter on a stream of images captured from a webcam. The application was executed directly on the embedded CPU and on the embedded GPU. Running on the GPU via OpenGL ES 2.0 achieved nearly 10 times the performance of the CPU and, when readback times are included, a total performance gain of up to 4 times.
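For reference, the convolution at the heart of the Sobel filter can be sketched on the CPU as follows; the grayscale buffer layout is a hypothetical simplification, and the thesis performs the equivalent per-pixel work in an OpenGL ES 2.0 fragment shader on the embedded GPU:

```cpp
// Minimal CPU Sobel filter sketch over a grayscale buffer (illustrative only;
// the thesis performs the equivalent per-pixel computation in an
// OpenGL ES 2.0 fragment shader on the embedded GPU).
#include <cmath>
#include <cstdio>
#include <vector>

// 3x3 Sobel kernels for horizontal and vertical gradients.
static const int GX[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
static const int GY[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

// Convolve both kernels at each interior pixel and store the gradient magnitude.
std::vector<float> sobel(const std::vector<float>& img, int w, int h) {
    std::vector<float> out(img.size(), 0.0f);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float gx = 0.0f, gy = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx) {
                    float v = img[(y + ky) * w + (x + kx)];
                    gx += GX[ky + 1][kx + 1] * v;
                    gy += GY[ky + 1][kx + 1] * v;
                }
            out[y * w + x] = std::sqrt(gx * gx + gy * gy);
        }
    }
    return out;
}

int main() {
    const int w = 4, h = 4;
    // A simple vertical edge: dark left half, bright right half.
    std::vector<float> img = {0, 0, 1, 1,  0, 0, 1, 1,  0, 0, 1, 1,  0, 0, 1, 1};
    std::vector<float> edges = sobel(img, w, h);
    std::printf("edge response at (1,1): %f\n", edges[1 * w + 1]);
}
```

Each output pixel requires eighteen multiply-accumulate operations (nine per kernel) and is independent of every other pixel, which is exactly the workload shape that benefits from the GPU's per-pixel parallelism.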
146

Simulation for LEGO Mindstorms robotics

Tian, Yuan January 2008 (has links)
The LEGO® MINDSTORMS® toolkit can be used to help students learn basic programming and engineering concepts. Software that is widely used with LEGO MINDSTORMS is ROBOLAB, developed by Professor Chris Rogers of Tufts University, Boston, United States. It has been adopted in about 10,000 schools in the United States and other countries and is used to program LEGO MINDSTORMS robots in its icon-based programming environment. However, this software provides no debugging features for LEGO MINDSTORMS programs: users cannot test a program before downloading it onto the LEGO robotics hardware. In this project, we develop a simulator for LEGO MINDSTORMS that simulates the motions of LEGO robots in a virtual 3D environment. We use ODE (Open Dynamics Engine) and OpenGL, combined with ROBOLAB. The simulator allows users to test their ROBOLAB programs before downloading them onto the LEGO MINDSTORMS hardware, and users who do not have the hardware can use it to learn ROBOLAB programming skills, testing and debugging their programs in the simulator. The simulator can track and display program execution as the simulation runs, which helps users learn and understand basic robotics programming concepts. An introduction to the overall structure and architecture of the simulator is given, followed by a detailed description of each component in the system and the techniques used to implement each feature of the simulator. Discussion based on several test results is then given, leading to the conclusion that the simulator is able to accurately represent the actions of robots under certain assumptions and conditions.
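To give a flavour of the physics layer, the sketch below sets up and steps a minimal rigid-body world with the standard ODE C API; it is illustrative only, and the actual simulator couples such a world to OpenGL rendering and to the execution of ROBOLAB programs:

```cpp
// Minimal rigid-body stepping sketch with the ODE C API (illustrative only;
// the actual simulator couples this kind of world to OpenGL rendering and
// ROBOLAB program execution). Assumes the standard ODE headers and library.
#include <ode/ode.h>
#include <cstdio>

int main() {
    dInitODE();
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, 0, -9.81);          // gravity along -z

    dBodyID body = dBodyCreate(world);             // a single falling body
    dBodySetPosition(body, 0, 0, 2.0);
    dMass m;
    dMassSetBox(&m, 1.0, 0.2, 0.2, 0.2);           // density 1, 20 cm cube
    dBodySetMass(body, &m);

    for (int i = 0; i < 100; ++i)                  // advance the world by 1 s
        dWorldQuickStep(world, 0.01);

    const dReal* pos = dBodyGetPosition(body);
    std::printf("body height after 1 s: %f\n", (double)pos[2]);

    dWorldDestroy(world);
    dCloseODE();
}
```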
147

Navigation, Visualisation and Editing of Very Large 2D Graphics Scenes

Kempe, Marcus, Åbjörnsson, Carl January 2004 (has links)
The project has been carried out at, and in association with, Micronic Laser Systems AB in Täby, Sweden. Micronic Laser Systems manufactures laser pattern generators for the semiconductor and display markets. Laser pattern generators are used to create photomasks, which are a key component in the microlithographic process of manufacturing microchips and displays.

An essential problem in all modern semiconductor manufacturing is the constantly decreasing feature sizes and the increasing use of resolution enhancement techniques (RET), leading to ever-growing sizes of the datasets describing the semiconductors. When dataset sizes reach magnitudes of hundreds of gigabytes, visualisation, navigation, and editing of any such dataset become extremely difficult. As of today this problem has no satisfactory solution.

The project proposes a geometry engine that can deal effectively with the ever-growing dataset sizes of modern semiconductor lithography. This involves a new approach to handling data, a new format for spatial description of the datasets, hardware-accelerated rendering, and support for multiprocessor and distributed systems. The project has been carried out without requiring changes to existing data formats, and the resulting application runs on Micronic's existing hardware platforms.

The performance of the new viewer system surpasses the old implementations by a varying factor. If rendering speed is the comparative factor, the new system is about 10-20 times faster than its old counterparts. In some cases, when hard disk access speed is the limiting factor, the new implementation is only slightly faster or equally fast. Finally, spatial indexing allows some operations that previously took several hours to complete in a few seconds, by eliminating all unnecessary disk-reading operations.
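The abstract does not disclose the actual spatial index format, but the pruning it describes can be illustrated with a simple quadtree: subtrees whose bounds fall outside the query window are skipped entirely, which is what eliminates unnecessary disk reads when each node corresponds to a region of the dataset on disk. The sketch below is hypothetical C++ and not Micronic's implementation:

```cpp
// Minimal quadtree sketch for 2D spatial indexing (illustrative only; the
// structure and names here are hypothetical, not the actual index format).
#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

struct Rect {
    float x, y, w, h;
    bool contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct QuadTree {
    Rect bounds;
    std::vector<std::pair<float, float>> points;   // payload would be geometry records
    std::unique_ptr<QuadTree> child[4];

    explicit QuadTree(Rect b) : bounds(b) {}

    void insert(float px, float py) {
        if (!bounds.contains(px, py)) return;
        if (points.size() < 4 && !child[0]) { points.emplace_back(px, py); return; }  // bucket capacity 4
        if (!child[0]) subdivide();
        for (auto& c : child) c->insert(px, py);   // only the containing quadrant keeps it
    }

    void subdivide() {
        float hw = bounds.w / 2, hh = bounds.h / 2;
        child[0] = std::make_unique<QuadTree>(Rect{bounds.x,      bounds.y,      hw, hh});
        child[1] = std::make_unique<QuadTree>(Rect{bounds.x + hw, bounds.y,      hw, hh});
        child[2] = std::make_unique<QuadTree>(Rect{bounds.x,      bounds.y + hh, hw, hh});
        child[3] = std::make_unique<QuadTree>(Rect{bounds.x + hw, bounds.y + hh, hw, hh});
    }

    // Count points inside a query window, skipping subtrees whose bounds do
    // not overlap it -- the same pruning that avoids useless disk reads.
    int query(Rect win) const {
        if (win.x + win.w <= bounds.x || bounds.x + bounds.w <= win.x ||
            win.y + win.h <= bounds.y || bounds.y + bounds.h <= win.y)
            return 0;
        int n = 0;
        for (const auto& p : points) if (win.contains(p.first, p.second)) ++n;
        if (child[0]) for (const auto& c : child) n += c->query(win);
        return n;
    }
};

int main() {
    QuadTree tree({0, 0, 100, 100});
    tree.insert(10, 10); tree.insert(12, 14); tree.insert(80, 75);
    std::printf("hits in window: %d\n", tree.query({0, 0, 20, 20}));
}
```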
148

Public news network: digital sampling to create a hybrid media feed

Stenner, Jack Eric 30 September 2004 (has links)
A software application called Public News Network (PNN) is created in this thesis, which functions to produce an aesthetic experience in the viewer. The application engenders this experience by presenting a three-dimensional virtual world that the viewer can navigate using the computer mouse and keyboard. As the viewer navigates the environment she sees irregularly shaped objects resting on an infinite ground plane, and hears an ethereal wind. As the viewer nears the objects, the sound transforms into the sound of television static and text is displayed which identifies this object as representative of an episode of the evening news. The viewer "touches" the episode and a "disembodied" transcript of the broadcast begins to scroll across the screen. With further interaction, video of the broadcast streams across the surfaces of the environment, distorted by the shapes upon which it flows. The viewer can further manipulate and repurpose the broadcast by searching for words contained within the transcript. The results of this search are reassembled into a new, re-contextualized display of video containing the search terms stripped from their original, pre-packaged context. It is this willful manipulation that completes the opportunity for true meaning to appear.
149

Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization

Liu, Feng 12 January 2007 (has links)
Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own real-time shading language standard, and none of these shading languages is fully platform independent. Although real-time programmable shader technology can be used to build 3D applications on a single system, this platform-dependent limitation keeps shader technology away from 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats into platform-independent shaders and embedding them into eXtensible 3D (X3D) scenes for 3D web applications. The framework includes a back-end core shader converter, which translates shaders between different shading languages through an intermediate XML layer, and a shader library containing a basic set of shaders that developers can load and extend. The framework is then applied to several applications in biomolecular visualization.
150

MPEG-4 Facial Feature Point Editor / Editor för MPEG-4 "feature points"

Lundberg, Jonas January 2002 (has links)
The use of computer-animated interactive faces in film, TV, and games is ever growing, with new application areas emerging on the Internet and in mobile environments. Morph targets are one of the most popular methods for animating a face. Until now, 3D artists have had to design each morph target defined by the MPEG-4 standard by hand, which is a very monotonous and tedious task. With the newly developed method of Facial Motion Cloning [11], the heavy work is lifted from the artists: from an already animated face model, the morph targets can now be copied onto a new static face model. For the Facial Motion Cloning process, a subset of the feature points specified by the MPEG-4 standard must be defined; the purpose of this is to correlate the facial features of the two faces. The goal of this project is to develop a graphical editor in which artists can define the feature points for a face model. The feature points are saved in a file format that can be used by Facial Motion Cloning software.
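As a rough illustration, a feature-point record produced by such an editor might look like the sketch below; the actual file format consumed by the Facial Motion Cloning software is not specified in the abstract, so the layout and field names here are hypothetical:

```cpp
// Minimal sketch of storing MPEG-4 facial feature points (illustrative only;
// the actual file format used by the editor and the Facial Motion Cloning
// software is not specified here, so this layout is hypothetical).
#include <cstdio>
#include <vector>

struct FeaturePoint {
    int   group;    // MPEG-4 feature point group number (example values below)
    int   index;    // index within the group, so "group.index" names the point
    int   vertex;   // vertex id on the static face model chosen by the artist
    float x, y, z;  // position of that vertex
};

// Write one line per feature point: "group.index vertex x y z".
void save(const char* path, const std::vector<FeaturePoint>& pts) {
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    for (const FeaturePoint& p : pts)
        std::fprintf(f, "%d.%d %d %f %f %f\n", p.group, p.index, p.vertex, p.x, p.y, p.z);
    std::fclose(f);
}

int main() {
    std::vector<FeaturePoint> pts = {
        {2, 1, 1187, 0.0f, -6.2f, 8.9f},   // example values only
        {3, 5, 412,  3.1f,  4.0f, 7.5f},   // example values only
    };
    save("feature_points.fp", pts);
    std::printf("saved %zu feature points\n", pts.size());
}
```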
