  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Delay sensitive delivery of rich images over WLAN in telemedicine applications

Sankara Krishnan, Shivaranjani 27 May 2009 (has links)
Transmission of medical images over WLANs presents a great challenge, since diagnostic content mandates lossless delivery. The large size of these images, coupled with the low acceptance of traditional lossy compression techniques within the medical community, compounds the problem. These factors are of enormous significance in a hospital setting in the context of real-time image collaboration. However, recent advances in medical image compression, such as diagnostically lossless compression, have made a solution to this difficult problem feasible. The growing popularity of high-speed wireless LANs in enterprise applications and the introduction of the draft 802.11n standard make the problem pertinent. The thesis makes recommendations on the degree of compression to perform for specific instances of image communication, based on the image size and the underlying network devices and their topology. Our analysis found that in most cases only a portion of the image, typically its region of interest, can meet the time deadline. This dictates the need for an adaptive method that maximizes the percentage of the image delivered to the receiver within the deadline. The problem of maximizing delivery of regions of interest within the deadline is modeled as a multi-commodity flow problem in this work. Though this model provides an optimal solution, it is NP-hard and hence cannot be solved online in dynamic networks. An approximation algorithm that uses a greedy approach to flow allocation is proposed to serve connection requests in real time.
While solving the integer programming model is not feasible under time constraints, the heuristic provides a near-optimal solution to the problem of maximizing the reliable delivery of regions of interest of medical images within delay deadlines. This scenario typically arises when new connection requests are placed after the initial flow allocations have been made.
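The greedy flow-allocation idea can be sketched in miniature. Below is a simplified earliest-deadline-first allocation over a single shared link, not the thesis's multi-commodity formulation; `Request`, `greedy_allocate`, and the single-link model are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Request:
    roi_bytes: float   # size of the region of interest to deliver
    deadline_s: float  # delivery deadline in seconds

def greedy_allocate(requests, link_bps):
    """Earliest-deadline-first sketch: give each request as much of the
    shared link as its deadline allows, then move to the next request.
    Returns the fraction of each ROI delivered on time."""
    delivered = []
    t = 0.0
    for req in sorted(requests, key=lambda r: r.deadline_s):
        budget = max(0.0, req.deadline_s - t) * link_bps  # bytes still sendable
        sent = min(req.roi_bytes, budget)
        delivered.append(sent / req.roi_bytes)
        t += sent / link_bps
    return delivered
```

With two 1000-byte ROIs on a 1000 B/s link and deadlines of 1.0 s and 1.5 s, the first request is fully served and the second only half, illustrating why only part of an image typically meets its deadline.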
52

Real-time photographic local tone reproduction using summed-area tables / Reprodução fotográfica local de tons em tempo real usando tabelas de áreas acumuladas

Slomp, Marcos Paulo Berteli January 2008 (has links)
High dynamic range (HDR) rendering is becoming an increasingly popular technique in computer graphics. The challenge consists in mapping the large range of intensities in the rendered images to the much narrower one supported by display devices in a way that preserves contrast details. Local tone-mapping operators effectively perform the required compression by adapting the luminance level of each pixel with respect to its neighborhood.
While they generate significantly better results than global operators, their computational costs are considerably higher, which has prevented their use in real-time applications. This work presents a real-time technique for approximating the photographic local tone-reproduction operator. It runs entirely on the GPU and is significantly faster than existing implementations that produce similar results. Our approach is based on the use of summed-area tables for accelerating the convolution of the local neighborhoods with a box filter, and provides an attractive solution for HDR rendering applications that require high performance without compromising image quality. A survey of prefix-sum algorithms and a possible improvement to one of them are also presented.
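The summed-area-table box filter at the core of such a technique can be sketched on the CPU. This is an illustration of the general idea with NumPy, not the GPU implementation; the function names are ours:

```python
import numpy as np

def summed_area_table(img):
    # Two cumulative sums build the SAT; entry (y, x) holds the sum of
    # all pixels above and to the left of (y, x), inclusive.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, x0, y0, x1, y1):
    """Mean over the inclusive rectangle [y0..y1] x [x0..x1] in O(1)
    using four SAT lookups (inclusion-exclusion)."""
    total = sat[y1, x1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    area = (y1 - y0 + 1) * (x1 - x0 + 1)
    return total / area
```

Once the table is built, a box filter of any radius costs the same four lookups per pixel, which is what makes the neighborhood convolutions of local tone mapping affordable in real time.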
55

[en] REAL TIME RENDERING USING HIGH DYNAMIC RANGE ILLUMINATION MAPS / [pt] RENDERIZAÇÃO EM TEMPO REAL UTILIZANDO MAPAS DE ILUMINAÇÃO EM ALTA PRECISÃO

RODRIGO PEREIRA MARTINS 23 October 2006 (has links)
[pt] The main goal of computer graphics is image synthesis. These images may be either generated by computer or the result of digital manipulation of photographs. Different methods for image capture and digital photography have changed the importance of the digital image. For computer-generated images, the search for more realistic results matters to the film and game-development industries, among others. One of the major revolutions in current computer graphics concerns high dynamic range images. These images represent the next level of image representation, since their values are truly proportional to the lighting conditions in a scene and can encode the dynamic range found in the real world, which is impossible for traditional 24-bit-per-pixel images. When high dynamic range images are used to encode the lighting conditions in a scene, they are called radiance maps or illumination maps. The main focus of this dissertation is real-time rendering techniques using illumination maps, known as image-based lighting. This work presents the concepts behind high dynamic range images, their physical foundations in the theory of light, a series of important works on manipulating such images, and a discussion of the pipeline of real-time applications that use high dynamic range. Finally, techniques for using high-precision illumination maps in real time are presented. / [en] In 1997, the seminal work by Paul Debevec and Jitendra Malik on the generation of HDR (High Dynamic Range) images from ordinary LDR (Low Dynamic Range) cameras greatly facilitated the generation of light probes. In consequence, this caused a boom of work on rendering objects with images of light from the real world, which is known as image-based lighting.
The present dissertation studies this new area, focusing on the real-time compositing of synthetic objects into real images. It proposes a real-time rendering pipeline for 3D games, in the simple case of static scenes, adapting the non-real-time technique presented by Paul Debevec in 1998. There is no written work about this adaptation in the literature, although there are references to developments in this direction by graphics-card manufacturers. The dissertation also presents an experiment with diffuse objects. Moreover, the author gives ideas towards the solution of shadow problems for diffuse objects.
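For diffuse objects, image-based lighting amounts to a cosine-weighted sum of environment radiance over the hemisphere. A minimal sketch, assuming the environment map has already been reduced to sampled directions with solid angles (our simplification, not the dissertation's pipeline):

```python
def diffuse_irradiance(samples, normal):
    """Approximate the diffuse irradiance at a surface with the given
    normal by summing cosine-weighted environment radiance.
    `samples` is a list of (direction, radiance, solid_angle) tuples,
    with unit direction vectors as 3-tuples."""
    total = 0.0
    for d, radiance, dw in samples:
        cos_t = sum(a * b for a, b in zip(d, normal))
        if cos_t > 0.0:              # only the upper hemisphere contributes
            total += radiance * cos_t * dw
    return total
```

A real pipeline would iterate over a latitude-longitude HDR light probe and precompute this sum per normal direction into an irradiance map; the loop above is the underlying arithmetic.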
56

[en] CUTAWAY ALGORITHM WITH CONTEXT PRESERVATION FOR RESERVOIR MODEL VISUALIZATION / [pt] ALGORITMO DE CORTE COM PRESERVAÇÃO DE CONTEXTO PARA VISUALIZAÇÃO DE MODELOS DE RESERVATÓRIO

LUIZ FELIPE NETTO 11 January 2017 (has links)
[en] Numerical simulation of black-oil reservoirs is widely used in the oil and gas industry. The reservoir is represented by a model of hexahedral cells with associated properties, and numerical simulation is used to predict fluid behavior in the model. Specialists analyze such simulations by inspecting the three-dimensional model in an interactive graphical environment. In this work, we propose a new cutaway algorithm with context preservation to aid inspection of the model. The main goal is to allow the specialist to visualize the wells and their vicinity.
The wells represent the object of interest that must remain visible, while the three-dimensional model (the context) is preserved in the vicinity as far as possible. In this way, it is possible to visualize the distribution of cell properties together with the object of interest. The proposed algorithm makes use of graphics processing units and is valid for arbitrary objects of interest. An extension to the algorithm is also proposed that decouples the cut section from the camera, allowing analysis of the cut model from different points of view. The effectiveness of the proposed algorithm is demonstrated by a set of results based on actual reservoir models.
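The cut test itself can be illustrated with a deliberately simplified cut volume. We assume a cylinder around the eye-to-well axis purely for the sketch; the algorithm described above computes cut surfaces on the GPU for arbitrary objects of interest:

```python
def is_cut(p, eye, interest, radius):
    """Sketch of a cutaway test: a cell center p (3-tuple) is removed
    when it lies between the eye and the object of interest and within
    `radius` of the eye-to-interest axis, so the well stays visible."""
    ax = [i - e for i, e in zip(interest, eye)]   # eye-to-interest axis
    ap = [c - e for c, e in zip(p, eye)]
    axis_len2 = sum(a * a for a in ax)
    t = sum(a * b for a, b in zip(ap, ax)) / axis_len2  # position along axis
    if not 0.0 <= t <= 1.0:
        return False                               # behind the eye or past the well
    closest = [e + t * a for e, a in zip(eye, ax)]
    d2 = sum((c - q) ** 2 for c, q in zip(p, closest))
    return d2 <= radius * radius
```

Cells for which `is_cut` returns true would be discarded during rendering, opening a corridor of visibility toward the well while the remaining context is drawn normally.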
57

Zobrazení scény pomocí hlubokých stínových map / Rendering Using Deep Shadowmaps

Rejent, Tomáš January 2014 (has links)
Rendering shadows of transparent objects in real-time applications is difficult; the number of usable methods is limited by the available computing power. The Depth Peeling and Dual Depth Peeling methods are described in this document. These allow rendering of transparent objects without the need to sort them. Deep Shadow Maps are described as a method for rendering shadows of transparent objects. These methods were used to create a demonstration application that renders transparent objects and their shadows, including colored ones. The application is built upon OpenGL and the Qt framework. An evaluation of rendering speed with respect to various parameters is also part of this work.
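The quantity a deep shadow map stores per texel is a visibility (transmittance) function along the light ray. A simplified scalar sketch, accumulated through depth-sorted semi-transparent layers (real deep shadow maps also compress the resulting curve):

```python
def transmittance_function(layers):
    """Given (depth, alpha) layers intersected along a light ray,
    return the piecewise-constant visibility function as a list of
    (depth, transmittance) samples: the fraction of light that
    penetrates past each layer."""
    t = 1.0
    out = [(0.0, 1.0)]                 # full visibility before any layer
    for depth, alpha in sorted(layers):
        t *= (1.0 - alpha)             # each layer attenuates what passes through
        out.append((depth, t))
    return out
```

To shade a receiver at depth d, one would look up the last sample with depth not exceeding d; colored shadows extend the same idea to one transmittance per color channel.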
58

Real-time Realistic Rendering And High Dynamic Range Image Display And Compression

Xu, Ruifeng 01 January 2005 (has links)
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of displaying such images on conventional low dynamic range devices and of developing data compression algorithms to store and recover the correspondingly large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues. Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes, and shows the efficacy of, a novel level-set-based tone mapping method that preserves visual details in the display of high dynamic range images on low dynamic range display devices. The level set method is used to extract the high-frequency information from HDR images; the details are then added to the range-compressed low-frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include the need to reduce excessively large storage and transmission costs. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on classical JPEG compression: it first converts the raw image into the RGBE representation, and then sends the color base and common exponent to classical discrete-cosine-transform-based compression and lossless compression, respectively. The other is based on the wavelet transform.
It first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet-based JPEG2000 encoder for entropy compression and bit-stream truncation to meet the desired bit rate. We believe that these and similar contributions will make wide application of high dynamic range images possible. The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering, and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithms. This dissertation proposes two noise reduction algorithms to obtain high-quality synthetic images. The first models the distribution of noise in the wavelet domain using a Laplacian function, and then suppresses the noise using a Bayesian method. The other extends the bilateral filtering method to reduce all types of Monte Carlo noise in a unified way. Both methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds another dimension to the already expensive light transport simulation problem. This dissertation presents a pre-computation-based method: it pre-computes the surface radiance for each basis lighting and animation key frame, and then renders the objects by synthesizing the pre-computed data in real time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework: light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates. Rendering has important applications in mixed reality, where consistent lighting and shadows between real scenes and virtual scenes are important features of visual integration.
The dissertation proposes to render the virtual objects by irradiance rendering using live-captured environmental lighting. This research also introduces a virtual shadow generation method that computes shadows cast by virtual objects onto the real background. We conclude the dissertation by discussing a number of future directions for rendering research and presenting our proposed approaches.
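The RGBE conversion used as the front end of the JPEG-based codec is the standard shared-exponent encoding from Radiance .hdr files: three 8-bit mantissas plus one common exponent byte. A sketch (exact rounding details vary between implementations):

```python
import math

def float_to_rgbe(r, g, b):
    """Encode a linear RGB triple into the 4-byte RGBE shared-exponent
    format: the largest channel determines the common exponent, and all
    three channels are scaled into 8-bit mantissas."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)   # v == mantissa * 2**exponent, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_to_float(r8, g8, b8, e8):
    """Decode 4 RGBE bytes back to linear RGB."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - 128 - 8)    # 2**(exponent - 136)
    return (r8 * f, g8 * f, b8 * f)
```

The mantissa bytes ("color base") and the exponent byte then go through DCT-based and lossless compression respectively, as described above.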
59

The State of Live Facial Puppetry in Online Entertainment

Gren, Lisa, Lindberg, Denny January 2024 (has links)
Avatars are used more and more in online communication, in both games and social media. At the same time, technology for facial puppetry, where the expressions of the user are transferred to the avatar, has developed rapidly. Why is it, then, that facial puppetry is conspicuous by its absence? This thesis analyzes the available and upcoming solutions for facial puppetry, whether a common framework or library can exist, and what can be done to simplify the process for developers who want to implement facial puppetry. A survey was conducted to get a better understanding of the technology. It showed that there is no standard yet for describing facial expressions, but that part of the market is converging towards a common format. It also showed that there is no existing interface that can handle communication with tracking devices or translation between different expression formats. Several prototypes for recording and streaming facial expression data from different sources were implemented as a practical test, to evaluate the complexity of implementing real-time facial puppetry. This showed that it is not always possible to integrate the available tracking solutions into an existing project, and when integration was possible it required a lot of work. The best way to get tracking right now seems to be to implement a standalone program for tracking that streams the tracked data to the main application. In summary, it is the poor integrability of the solutions, together with the wide variety of facial expression formats, that makes things problematic for developers. A piece of software that acts as a bridge between the tracking solutions and the game could allow for translation between different formats and simplify implementation of support. In the future, instead of working towards making all tracking solutions output standardized tracking data, research should look further into how to build a framework that can handle different configurations.
/ The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
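The proposed bridge between expression formats is, at its simplest, a channel-renaming layer between a tracker's output and a game rig. A minimal sketch; the mapping table and rig channel names here are hypothetical, chosen only to illustrate the translation step:

```python
# Hypothetical mapping from tracker blendshape names to a game rig's
# channel names; real formats expose on the order of 50 channels.
TRACKER_TO_RIG = {
    "jawOpen": "mouth_open",
    "eyeBlinkLeft": "blink_L",
    "eyeBlinkRight": "blink_R",
}

def translate_frame(frame, mapping=TRACKER_TO_RIG):
    """Rename one frame of tracked expression weights to the target
    rig's channel names, dropping channels the rig does not support."""
    return {mapping[k]: v for k, v in frame.items() if k in mapping}
```

A standalone tracking program, as recommended above, would apply this translation before streaming each frame to the main application, so the game never needs to know which tracker produced the data.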
60

Echantillonage d'importance des sources de lumières réalistes / Importance Sampling of Realistic Light Sources

Lu, Heqi 27 February 2014 (has links)
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive due to their ability to capture faithfully the far-field and near-field effects, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods.
The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light-field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. Performance remains interactive as long as visibility is computed using our shadow-map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach. Since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for multiple importance sampling, which allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we are able to find a good balance between the different sampling techniques without any prior knowledge.
Our method is effective, since it reduces the variance on average for all of our test scenes with different light sources, visibility complexities, and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
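The standard way to combine a light-based and a BRDF-based strategy is multiple importance sampling with the balance heuristic. A one-dimensional toy estimator illustrating the weighting, not the thesis's second-order balancing optimization:

```python
import random

def balance_heuristic(pdf_a, pdf_b):
    """MIS weight for a sample drawn from strategy A when strategy B
    could also have produced it (the balance heuristic)."""
    return pdf_a / (pdf_a + pdf_b)

def mis_estimate(f, sample_light, pdf_light, sample_brdf, pdf_brdf, n=1000):
    """Toy one-sample-per-strategy MIS estimator of the integral of f,
    combining a light-based and a BRDF-based sampling strategy."""
    total = 0.0
    for _ in range(n):
        x = sample_light()                       # draw from the light strategy
        total += balance_heuristic(pdf_light(x), pdf_brdf(x)) * f(x) / pdf_light(x)
        y = sample_brdf()                        # draw from the BRDF strategy
        total += balance_heuristic(pdf_brdf(y), pdf_light(y)) * f(y) / pdf_brdf(y)
    return total / n
```

The weights of the two strategies always sum to one at any point, which keeps the combined estimator unbiased; the thesis's contribution is choosing how much effort to give each strategy, which this sketch leaves uniform.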
