141 |
WaldBoost na GPU / WaldBoost on GPU
Polok, Lukáš January 2009 (has links)
Image recognition, and machine vision in general, is a quickly evolving field due to the boom of cheap and powerful computing technology. Image recognition has many different applications across a wide spectrum of industries, ranging from communications through security to entertainment. Algorithms for image recognition are still evolving and are often quite computationally demanding, which is why some authors implement them on specialized hardware accelerators. This work describes an implementation of image recognition using the WaldBoost algorithm on the graphics accelerator (GPU) platform.
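The evaluation loop at the heart of a WaldBoost detector is simple: weak-classifier responses are accumulated per scanned window and compared against per-stage thresholds derived from Wald's sequential probability ratio test, so most background windows are rejected after only a few stages. A minimal CPU-side Python sketch of this idea follows; the weak classifiers and the thresholds theta_a/theta_b are placeholders, and the thesis itself maps this per-window loop onto the GPU.

```python
# A minimal sketch of WaldBoost window classification (hypothetical weak
# classifiers and thresholds; the thesis maps this loop onto the GPU).

def waldboost_classify(window, weak_classifiers, theta_a, theta_b):
    """Classify one scanned image window with a WaldBoost cascade.

    weak_classifiers: list of callables mapping a window to a real-valued response
    theta_a[t] / theta_b[t]: early acceptance / rejection thresholds after stage t
    Returns True (object) or False (background), usually before all stages run.
    """
    h = 0.0
    for t, weak in enumerate(weak_classifiers):
        h += weak(window)          # accumulate the boosted response
        if h >= theta_a[t]:        # confident acceptance: stop early
            return True
        if h <= theta_b[t]:        # confident rejection: stop early
            return False
    return h >= 0.0                # undecided after the last stage: sign test
```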
|
142 |
Water Animation using Coupled SPH and Wave Equations
Varun Ramakrishnan (13273275) 19 April 2023 (has links)
This thesis project addresses the need for an interactive, real-time water animation technique that can showcase visually convincing effects such as splashes and breaking waves while being computationally inexpensive. Our method couples SPH and wave equations in a one-way manner to simulate the behavior of water in real-time, leveraging OpenGL's Compute Shaders for interactive performance and a novel Uniform Grid implementation. Through a review of related literature on real-time simulation methods of fluids and water animation, this thesis presents a feasible algorithm, animations to showcase interesting water effects, and a comparison of computational costs between SPH, wave equations, and the coupled approach. The program renders a water body with a planar surface and discrete particles. This project aims to provide a solution that can meet the needs of various water animation use-cases, such as games and movies, by offering a computationally efficient technique that can animate water to behave plausibly and showcase essential effects in real-time.
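The wave-equation half of such a coupling typically amounts to an explicit finite-difference update of a height field once per frame, with SPH particles layered on top for splashes. Below is a rough NumPy sketch of one such height-field step, included only as an illustration: the grid spacing, damping factor and fixed boundaries are assumptions, and the thesis performs the equivalent update in OpenGL compute shaders.

```python
import numpy as np

def step_heightfield(h_prev, h_curr, c=1.0, dt=0.016, dx=1.0, damping=0.996):
    """One explicit finite-difference step of the 2D wave equation
    h_tt = c^2 * (h_xx + h_yy) on a regular grid; boundary rows and
    columns are pinned to zero for simplicity."""
    lap = (np.roll(h_curr, 1, axis=0) + np.roll(h_curr, -1, axis=0) +
           np.roll(h_curr, 1, axis=1) + np.roll(h_curr, -1, axis=1) -
           4.0 * h_curr) / dx ** 2
    h_next = damping * (2.0 * h_curr - h_prev + (c * dt) ** 2 * lap)
    h_next[0, :] = h_next[-1, :] = 0.0
    h_next[:, 0] = h_next[:, -1] = 0.0
    return h_next
```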
|
143 |
Global Illumination on Modern GPUs
Zhang, Fan January 2022 (has links)
This thesis implements Monte Carlo path tracing and voxel cone tracing for global illumination on the GPU and compares their performance and visual results. The Monte Carlo path tracing algorithm is implemented in CUDA to perform parallel computing on the GPU and accelerate the computation. Voxel cone tracing, a global illumination algorithm for real-time computing, runs in OpenGL through the GPU graphics pipeline. The results show that Monte Carlo path tracing takes over 10 hours on a single CPU core and around 4 hours with 4 cores; on the GPU it takes around 48 minutes, while voxel cone tracing on the same GPU takes 2 ms. The image generated by Monte Carlo path tracing contains much more transparency, reflection, and shadow detail than that produced by the voxel cone tracing algorithm. / The thesis work was carried out at the Department of Science and Technology (ITN) at the Faculty of Science and Engineering, Linköping University.
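At the core of a Monte Carlo path tracer is a stochastic estimate of the rendering equation; the sketch below shows that estimator for a single Lambertian surface point using cosine-weighted hemisphere sampling, where the cosine and pdf terms cancel. It is only an illustration in Python: the `incoming_radiance` callback stands in for the recursive ray tracing that the thesis implements in CUDA.

```python
import math
import random

def sample_cosine_hemisphere():
    """Sample a direction about the local normal (0, 0, 1) with pdf cos(theta)/pi."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def estimate_diffuse_radiance(albedo, incoming_radiance, n_samples=256):
    """Monte Carlo estimate of reflected radiance at a Lambertian surface point.

    incoming_radiance(direction) returns the radiance arriving from that direction.
    With cosine-weighted sampling the cos(theta) and pdf terms cancel, so the
    estimator reduces to albedo * mean(L_in).
    """
    total = 0.0
    for _ in range(n_samples):
        total += incoming_radiance(sample_cosine_hemisphere())
    return albedo * total / n_samples
```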
|
144 |
A New Way for Mapping Texture onto 3D Face Model
Xiang, Changsheng January 2015 (has links)
No description available.
|
145 |
Photon tracing na GPU / Photon Tracing on GPU
Galacz, Roman January 2013 (has links)
The subject of this thesis is the acceleration of the photon mapping method on a graphics card. Photon mapping is a method for computing near-realistic global illumination of a scene. The computation itself is relatively time-consuming, so accelerating it is a hot issue in the field of computer graphics. Photon mapping is described in detail, from photon tracing to rendering of the scene. The thesis then focuses on spatial subdivision structures, especially the uniform grid. The design and implementation of an application computing photon mapping on the GPU, achieved through OpenGL and CUDA interoperability, is described in the next part of the thesis. Lastly, the application is thoroughly tested, and the achieved results are reviewed in the conclusion of the thesis.
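The uniform grid mentioned above boils down to bucketing photons by their quantised positions so that, at shading time, only photons in the cells around a query point need to be examined. A small CPU-side Python sketch of that idea is given below; the photon layout and cell size are illustrative assumptions, and the thesis implements the structure in CUDA with OpenGL interoperability.

```python
from collections import defaultdict

def build_photon_grid(photons, cell_size):
    """Bucket photons into a uniform grid: cell index -> list of photons.

    photons: iterable of (x, y, z, power) tuples (an illustrative layout).
    """
    grid = defaultdict(list)
    for p in photons:
        cell = (int(p[0] // cell_size), int(p[1] // cell_size), int(p[2] // cell_size))
        grid[cell].append(p)
    return grid

def photons_near(grid, point, cell_size):
    """Gather photons from the 3x3x3 block of cells around a shading point."""
    cx, cy, cz = (int(point[i] // cell_size) for i in range(3))
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
    return found
```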
|
146 |
Generation of 3D autostereoscopic integral images using computer simulated imaging systems
Salih, Shafik January 2015 (has links)
Production of artificial three-dimensional (3-D) images has been the aim of much research over hundreds of years. 3-D images are images that create a sense of depth when viewed. They are closer to real-world scenes than 2-D images because of the 3-D effect, or sense of depth, that they provide. The sense of depth can be caused by binocular cues, including convergence and parallax. Convergence is created by the difference between the angles of the left-eye and right-eye viewing axes. Parallax is the effect of one eye seeing a view of the scene that is inherently shifted relative to the view seen by the other eye. Several techniques have targeted the creation of 3-D images with these cues. A technique is preferable when it can create 3-D images that the viewer can view without wearing special glasses and without viewer fatigue. Integral photography, invented in 1908, is able to meet these requirements, and several techniques, research projects, and studies based on it have been published. The purposes of this thesis include the computer simulation of flexible integral photography systems; the computer generation of good-quality static and animated 3-D integral images using the simulated systems; optimising the generation process to be more accurate, less expensive, more effective, and faster; and producing a portable specialist software tool to achieve these targets. New techniques and algorithms are needed to meet these purposes. A literature survey was carried out of the research and studies closest to the subject of computer-generated integral images; these were compared with the new techniques introduced in this study to demonstrate the advantages and the necessity of the new techniques. The technique closest to the suggested ones was implemented using more developed tools in order to compare the quality of its integral images with the integral images produced using the tools and algorithms proposed in this thesis. A method to simulate an imaging system and produce integral images based on the new technique of dividing the view volume of the scene is introduced, explained, proved, and implemented in a program designed for this purpose. To optimise the processing time and the image quality, this method is further developed, new features are added to the resulting integral images, and better performance is achieved by introducing the method of Displacing the Virtual Camera Target (DCT). Application software with a graphical user interface is designed and implemented to allow users to select the required parameters of the imaging system and the required features of the resulting integral images. The software tool, which is based on the developed techniques and employs OpenGL, is useful for simulating imaging systems and tuning their parameters before the actual implementation of these systems, thereby saving time and materials when designing them. The introduced techniques and software tools are faster, more effective, and cheaper original methods that help optimise both integral imaging systems and the quality of integral images. Because they employ the portable application interface OpenGL, these software tools can be used on a wide range of devices and platforms. With these methods, integral imaging systems are simulated and optimised, and good-quality static and animated integral images were created.
|
147 |
Proposta para aceleração de desempenho de algoritmos de visão computacional em sistemas embarcados / A proposal for performance acceleration of computer vision algorithms in embedded systems
Curvello, André Márcio de Lima 10 June 2016 (has links)
O presente trabalho apresenta um benchmark para avaliar o desempenho de uma plataforma embarcada WandBoard Quad no processamento de imagens, considerando o uso da sua GPU Vivante GC2000 na execução de rotinas usando OpenGL ES 2.0. Para esse fim, foi tomado por base a execução de filtros de imagem em CPU e GPU. Os filtros são as aplicações mais comumente utilizadas em processamento de imagens, que por sua vez operam por meio de convoluções, técnica esta que faz uso de sucessivas multiplicações matriciais, o que justifica um alto custo computacional dos algoritmos de filtros de imagem em processamento de imagens. Dessa forma, o emprego da GPU em sistemas embarcados é uma interessante alternativa que torna viável a realização de processamento de imagem nestes sistemas, pois além de fazer uso de um recurso presente em uma grande gama de dispositivos presentes no mercado, é capaz de acelerar a execução de algoritmos de processamento de imagem, que por sua vez são a base para aplicações de visão computacional tais como reconhecimento facial, reconhecimento de gestos, dentre outras. Tais aplicações tornam-se cada vez mais requisitadas em um cenário de uso e consumo em aplicações modernas de sistemas embarcados. Para embasar esse objetivo foram realizados estudos comparativos de desempenho entre sistemas e entre bibliotecas capazes de auxiliar no aproveitamento de recursos de processadores multicore. Para comprovar o potencial do assunto abordado e fundamentar a proposta do presente trabalho, foi realizado um benchmark na forma de uma sequência de testes, tendo como alvo uma aplicação modelo que executa o algoritmo do Filtro de Sobel sobre um fluxo de imagens capturadas de uma webcam. A aplicação foi executada diretamente na CPU e também na GPU embarcada. Como resultado, a execução em GPU por meio de OpenGL ES 2.0 alcançou desempenho quase 10 vezes maior com relação à execução em CPU, e considerando tempos de readback, obteve ganho de desempenho total de até 4 vezes. / This work presents a benchmark for evaluating the performance of an embedded WandBoard Quad platform in image processing, considering the use of its Vivante GC2000 GPU to execute routines using OpenGL ES 2.0. To this end, it relies on the execution of image filters on the CPU and on the GPU. Filters are the most commonly used applications in image processing; they operate through convolutions, a technique that makes use of successive matrix multiplications, which explains the high computational cost of image-filter algorithms. Thus, using the GPU in embedded systems is an interesting alternative that makes image processing feasible on these systems: besides making use of a feature present in a wide range of devices on the market, it is able to accelerate image processing algorithms, which in turn are the basis for computer vision applications such as facial recognition and gesture recognition, among others. Such applications are increasingly demanded in modern embedded systems. To support this goal, comparative performance studies were carried out between systems and between libraries capable of assisting in the use of multicore processor resources. To demonstrate the potential of the subject and to support the proposal of this work, a benchmark was performed as a sequence of tests targeting a model application that runs the Sobel filter algorithm on a stream of images captured from a webcam. The application was executed directly on the embedded CPU and also on the embedded GPU. As a result, execution on the GPU via OpenGL ES 2.0 achieved nearly 10 times higher performance than execution on the CPU, and, taking readback times into account, a total performance gain of up to 4 times.
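The Sobel filter used as the benchmark workload is a pair of 3x3 convolutions approximating the horizontal and vertical image gradients, combined into a gradient magnitude. A plain NumPy reference sketch is shown below for illustration; the benchmark itself evaluates the same kernel per pixel in OpenGL ES 2.0 shaders on the embedded GPU.

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude of a 2D grayscale image using the Sobel operator."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float32)
    ky = kx.T  # vertical-gradient kernel
    h, w = gray.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    # Direct 3x3 sliding-window evaluation over interior pixels (borders stay zero).
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = float(np.sum(patch * kx))
            gy[y, x] = float(np.sum(patch * ky))
    return np.sqrt(gx ** 2 + gy ** 2)
```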
|
148 |
Simulation for LEGO Mindstorms robotics
Tian, Yuan January 2008 (has links)
The LEGO® MINDSTORMS® toolkit can be used to help students learn basic programming and engineering concepts. Software that is widely used with LEGO MINDSTORMS is ROBOLAB, developed by Professor Chris Rogers of Tufts University, Boston, United States. It has been adopted in about 10,000 schools in the United States and other countries, and is used to program LEGO MINDSTORMS robots in its icon-based programming environment. However, this software does not provide debugging features for LEGO MINDSTORMS programs: users cannot test a program before downloading it onto the LEGO robotics hardware. In this project, we develop a simulator for LEGO MINDSTORMS that simulates the motions of LEGO robots in a virtual 3D environment. We use ODE (Open Dynamics Engine) and OpenGL, combined with ROBOLAB. The simulator allows users to test their ROBOLAB programs before downloading them onto the LEGO MINDSTORMS hardware. Users who do not have the hardware may use the simulator to learn ROBOLAB programming skills, testing and debugging their programs in the simulator. The simulator can track and display program execution as the simulation runs, which helps users learn and understand basic robotics programming concepts. An introduction to the overall structure and architecture of the simulator is given, followed by a detailed description of each component in the system, presenting the techniques used to implement each feature of the simulator. Discussion based on several test results is then given, leading to the conclusion that the simulator is able to accurately represent the actions of robots under certain assumptions and conditions.
|
149 |
Navigation, Visualisation and Editing of Very Large 2D Graphics Scenes
Kempe, Marcus, Åbjörnsson, Carl January 2004 (has links)
The project has been carried out at, and in association with, Micronic Laser Systems AB in Täby, Sweden. Micronic Laser Systems manufactures laser pattern generators for the semiconductor and display markets. Laser pattern generators are used to create photomasks, which are a key component in the microlithographic process of manufacturing microchips and displays.

An essential problem in all modern semiconductor manufacturing is the constantly decreasing feature sizes and the increasing use of resolution enhancement techniques (RET), leading to ever-growing sizes of the datasets describing the semiconductors. When dataset sizes reach magnitudes of hundreds of gigabytes, visualisation, navigation and editing of any such dataset become extremely difficult. As of today this problem has no satisfying solution.

The project aims at proposing a geometry engine that can deal effectively with the ever-growing sizes of modern semiconductor lithography. This involves a new approach to handling data, a new format for spatial description of the datasets, hardware-accelerated rendering, and support for multiprocessor and distributed systems. The project has been executed without implying changes to existing data formats, and the resulting application is executable on Micronic's currently existing hardware platforms.

The performance of the new viewer system surpasses any old implementation by a varying factor. If rendering speed is the comparative factor, the new system is about 10-20 times faster than its old counterparts. In some cases, when hard disk access speed is the limiting factor, the new implementation is only slightly faster or as fast. Finally, spatial indexing allows some operations that previously took several hours to complete in a few seconds, by eliminating all unnecessary disk-reading operations.
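A spatial index of the kind described above lets a viewer load and render only the geometry whose bounding rectangles intersect the current viewport. The Python sketch below shows one common way to do this with a small quadtree over axis-aligned rectangles; it illustrates the general idea only and is not the format actually used by Micronic's tools.

```python
class QuadTree:
    """Minimal quadtree over axis-aligned rectangles (x0, y0, x1, y1)."""

    def __init__(self, bounds, capacity=8, depth=0, max_depth=12):
        self.bounds, self.capacity = bounds, capacity
        self.depth, self.max_depth = depth, max_depth
        self.items, self.children = [], None

    def insert(self, rect, payload):
        if self.children is not None:
            child = self._child_containing(rect)
            if child is not None:
                child.insert(rect, payload)
                return
        self.items.append((rect, payload))
        if self.children is None and len(self.items) > self.capacity and self.depth < self.max_depth:
            self._split()

    def query(self, view):
        """Return payloads whose rectangles intersect the view rectangle."""
        if not _intersects(self.bounds, view):
            return []
        hits = [p for r, p in self.items if _intersects(r, view)]
        if self.children is not None:
            for child in self.children:
                hits.extend(child.query(view))
        return hits

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        quads = ((x0, y0, mx, my), (mx, y0, x1, my), (x0, my, mx, y1), (mx, my, x1, y1))
        self.children = [QuadTree(b, self.capacity, self.depth + 1, self.max_depth) for b in quads]
        straddling = []  # items that span a midline stay at this node
        for rect, payload in self.items:
            child = self._child_containing(rect)
            (child.items if child is not None else straddling).append((rect, payload))
        self.items = straddling

    def _child_containing(self, rect):
        for child in self.children:
            if _contains(child.bounds, rect):
                return child
        return None


def _intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])


def _contains(outer, inner):
    return outer[0] <= inner[0] and outer[1] <= inner[1] and outer[2] >= inner[2] and outer[3] >= inner[3]
```

A viewer built on top of such an index would call query(viewport) each frame and read from disk only the geometry it returns.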
|
150 |
Public news network: digital sampling to create a hybrid media feed
Stenner, Jack Eric 30 September 2004 (has links)
A software application called Public News Network (PNN) is created in this thesis, which functions to produce an aesthetic experience in the viewer. The application engenders this experience by presenting a three-dimensional virtual world that the viewer can navigate using the computer mouse and keyboard. As the viewer navigates the environment she sees irregularly shaped objects resting on an infinite ground plane, and hears an ethereal wind. As the viewer nears the objects, the sound transforms into the sound of television static and text is displayed which identifies this object as representative of an episode of the evening news. The viewer "touches" the episode and a "disembodied" transcript of the broadcast begins to scroll across the screen. With further interaction, video of the broadcast streams across the surfaces of the environment, distorted by the shapes upon which it flows. The viewer can further manipulate and repurpose the broadcast by searching for words contained within the transcript. The results of this search are reassembled into a new, re-contextualized display of video containing the search terms stripped from their original, pre-packaged context. It is this willful manipulation that completes the opportunity for true meaning to appear.
|