  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Light field remote vision / Algorithmes de traitement et de visualisation pour la vision plénoptique à grande distance

Nieto, Grégoire 03 October 2017 (has links)
Light fields have gathered much interest during the past few years. Captured from a plenoptic camera or a camera array, they sample the plenoptic function, which provides rich information about the radiance of any ray passing through the observed scene. They offer a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting and matting, for instance. Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we address the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate the light field reconstruction challenge as an image-based rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to establish correspondences between the input views and the target view. A new image is generated by the joint use of the input images and the geometric proxy, often by projecting the input images onto the target point of view and blending them in intensity. A naive color blending of the input images does not guarantee the coherence of the synthesized image. We therefore propose a direct multi-scale approach based on Laplacian pyramids to blend the source images at all frequencies, thus preventing rendering artifacts. However, the imperfection of the geometric proxy is also a main cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image, yielding a better-conditioned linear system to solve and removing the high-frequency noise due to the geometric proxy. Some scenes are very challenging to reconstruct because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities defeat the rules of parallax. We propose an original method based on the local approximation of the sparsely sampled plenoptic space to generate a new viewpoint without the need for any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces. Lastly, we discuss the question of the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects hidden behind a camouflage. The proposed algorithms are illustrated by results on both structured (camera arrays) and unstructured plenoptic datasets.
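The multi-scale blending described in this abstract can be illustrated with a small sketch: each source image is decomposed into a Laplacian pyramid, per-view blending weights are applied band by band, and the result is collapsed back. This is a minimal numpy illustration of the general technique, not the author's implementation; the crude average-pooling reduce and nearest-neighbour expand stand in for proper Gaussian filtering, and image sides are assumed to be powers of two.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (a crude stand-in for a Gaussian reduce)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # nearest-neighbour expand, cropped back to the target shape
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # band-pass residuals at each scale, plus a final low-pass residual
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr

def blend_multiscale(images, weights, levels=3):
    # blend each frequency band separately, then collapse the pyramid
    pyrs = [laplacian_pyramid(im, levels) for im in images]
    total = sum(weights)
    bands = [sum(w * p[k] for w, p in zip(weights, pyrs)) / total
             for k in range(levels + 1)]
    out = bands[-1]
    for lap in reversed(bands[:-1]):
        out = upsample(out, lap.shape) + lap
    return out
```

Because the pyramid decomposition is exact, blending a set of identical views with any normalized weights reconstructs the view itself; the interesting behaviour appears when the views disagree and each band is averaged separately.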
12

Transmissão progressiva de imagens sintetizadas de light field

Souza, Wallace Bruno Silva de 25 July 2018 (has links)
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2018. / This work proposes an optimized rate-distortion method to transmit synthesized light field images. Briefly, a light field image can be understood as four-dimensional (4D) data with both spatial and angular resolution, where each two-dimensional subimage of the 4D data is a particular perspective, that is, a sub-aperture image (SAI). This work aims to modify and improve a previous proposal named Progressive Light Field Communication (PLFC), which addresses image synthesis for the different focal-point images requested by a user. Like PLFC, this work tries to provide enough information to the user so that, as the transmission progresses, he can synthesize his own focal-point images without the need to transmit new images. Thus, the first proposed modification concerns how the user's initial cache should be chosen, defining an ideal number of SAIs to send at the beginning of the transmission. An improvement of the additional-image selection process is also proposed by means of a refinement algorithm, which is applied even during cache initialization. This new selection process works with dynamic quantization parameters (QPs) during encoding and considers not only the immediate gains in the quality of the synthesized image but also the subsequent syntheses. This idea was already presented in PLFC but had not been satisfactorily implemented. Moreover, this work proposes an automatic way to calculate the Lagrange multiplier that controls the influence of the future benefit associated with transmitting an SAI. Finally, a simplified manner of obtaining this future benefit is described, reducing the computational complexity involved. A system like this has many uses; for example, it can be used to identify an element in a light field image by appropriately adjusting the focus. The obtained results are presented and discussed, showing significant gains of up to 32.8% over the previous PLFC in terms of BD-Rate, and of up to 85.8% compared with trivial light field data transmission.
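The Lagrangian trade-off at the heart of such a rate-distortion scheduler can be sketched as a greedy loop: at each step, send the sub-aperture image whose expected distortion reduction best justifies its bit cost, J = D - λR. The SAI identifiers, rates and gains below are hypothetical illustration values, not taken from the dissertation, and the static gains are a simplification: in the PLFC setting the gains would be re-estimated after each transmission to account for subsequent syntheses.

```python
def transmission_order(candidates, lam):
    """Greedy rate-distortion ordering of sub-aperture images (SAIs).

    candidates: dict mapping SAI id -> (rate_bits, distortion_gain).
    lam: Lagrange multiplier weighting rate against distortion gain.
    Returns SAI ids in decreasing order of Lagrangian benefit J = D - lam * R.
    """
    remaining = dict(candidates)
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda i: remaining[i][1] - lam * remaining[i][0])
        order.append(best)
        del remaining[best]
    return order

# hypothetical SAIs: id -> (rate in bits, distortion reduction)
sais = {0: (100, 50), 1: (10, 30), 2: (50, 50)}
order = transmission_order(sais, lam=0.1)
```

With these numbers, SAI 2 wins first (50 - 5 = 45 beats 50 - 10 = 40 and 30 - 1 = 29), so the cheap-but-useful view is preferred over the equally useful but costlier one.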
13

The Study of Nonlinear Optical Properties of Diacrylate Using Z-SCAN Technique

Li, Ming-Hong 02 July 2012 (has links)
Polymer liquid crystals combine the advantages of polymers in the chemical industry with those of liquid crystals in the display industry, so they have attracted increasing attention in science and technology. Diacrylate is a photosensitive polymer liquid crystal, so it can be applied to optical storage. He-Ne laser-induced polymerization in the diacrylate mesogens RM257 and RM82 had been verified in a previous study. Furthermore, holographic patterns can be recorded in RM257 and RM82 by controlling both the sample temperature and the exposure time. In this study, we investigate the nonlinear optical properties of diacrylate using the Z-scan technique. "Z-scan" is a simple yet highly sensitive single-beam experimental technique that can be used to measure both nonlinear absorption and nonlinear refraction. We measured the absorption of diacrylate under He-Ne laser irradiation using the Z-scan technique, in order to investigate why the He-Ne laser induces polymerization in both RM257 and RM82.
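For context on the measurement the abstract describes: for a thin sample with a small on-axis nonlinear phase shift ΔΦ0, the closed-aperture Z-scan trace follows the standard expression T(x) = 1 + 4ΔΦ0·x / ((x²+9)(x²+1)) with x = z/z0, and the peak-valley transmittance difference ΔT_pv ≈ 0.406·ΔΦ0 is what one reads off to extract the nonlinear refraction. The sketch below is a generic numeric illustration of that relation, not tied to the RM257/RM82 measurements in the thesis.

```python
import numpy as np

def zscan_transmittance(x, dphi0):
    """Normalized closed-aperture Z-scan transmittance for a thin sample
    with small on-axis nonlinear phase shift dphi0; x = z / z0."""
    return 1.0 + 4.0 * dphi0 * x / ((x**2 + 9.0) * (x**2 + 1.0))

# sample the trace densely around the focus and read off the peak-valley
x = np.linspace(-10.0, 10.0, 20001)
dphi0 = 0.1
t = zscan_transmittance(x, dphi0)
delta_t_pv = t.max() - t.min()   # expected ~ 0.406 * dphi0
```

Inverting that relation (ΔΦ0 ≈ ΔT_pv / 0.406) is how a measured trace is converted into a phase shift, and from there into the nonlinear refractive index of the material.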
14

A QUANTITATIVE STUDY OF THE RADIANCE DISTRIBUTION AND ITS VARIATION IN OCEAN SURFACE WATERS

Wei, Jianwei 21 February 2013 (has links)
The radiance distribution provides complete information about the geometrical structure of the ambient light field within the ocean. This thesis presents a quantitative study of the radiance field in dynamic ocean waters. The study starts with the development of a novel radiance camera for measuring the full spherical radiance distribution at the ocean surface and at depth. Nonlinear response functions are designed and advanced radiometric calibrations are developed. The resulting camera measures the radiance distribution in absolute units over an extremely high dynamic range at fast rates. With the newly obtained radiance data, I have examined the fine structure of both the downwelling and upwelling radiance distributions and their variation with depth in optically diverse water types. The fully specified radiance distribution data are used to derive all apparent optical properties and some inherent optical properties, including the absorption coefficient. With the camera fixed at shallow depths, I have observed and quantified the disturbance of the radiance distribution by sea surface waves. The radiance is found to fluctuate anisotropically in both amplitude and periodicity. Typical spatial structures of the dynamic radiance field are identified and shown to be related to the surface waves and the solar zenith angle. The variability in the radiance field also propagates to the irradiance field; it is pronounced in irradiance depth profiles measured in the upper layers of the ocean. The statistics of the irradiance fluctuations along the water depth, including the dominant frequency and the coefficient of variation, are derived using wavelet techniques and fitted to novel analytic models. The results from the irradiance depth-profile decomposition agree with theoretical models and other independent measurements.
This thesis work represents the first attempt to quantify the full light field and its variability in dynamic ocean waters and is of significant relevance to many other optics-related applications.
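The step from a measured radiance distribution to the apparent optical properties mentioned above is a hemispheric integral; for instance, planar downwelling irradiance is E_d = ∫₀^{2π} ∫₀^{π/2} L(θ, φ) cos θ sin θ dθ dφ. A quadrature sketch follows; the grid resolution is a placeholder and is unrelated to the camera described in the abstract.

```python
import numpy as np

def _trapz(y, x, axis=-1):
    # minimal trapezoidal rule, kept local for self-containment
    ys = np.moveaxis(y, axis, -1)
    dx = np.diff(x)
    return np.sum(0.5 * (ys[..., 1:] + ys[..., :-1]) * dx, axis=-1)

def downwelling_irradiance(L, theta, phi):
    """Planar downwelling irradiance from a gridded radiance distribution.

    L: radiance samples, shape (len(theta), len(phi)).
    theta: zenith angles in [0, pi/2]; phi: azimuths in [0, 2*pi] (radians).
    Evaluates E_d = integral of L * cos(theta) * sin(theta) dtheta dphi.
    """
    integrand = L * (np.cos(theta) * np.sin(theta))[:, None]
    return _trapz(_trapz(integrand, phi, axis=1), theta, axis=0)

# sanity check: an isotropic unit radiance over the upper hemisphere
# integrates to E_d = pi
theta = np.linspace(0.0, np.pi / 2, 501)
phi = np.linspace(0.0, 2.0 * np.pi, 721)
E_d = downwelling_irradiance(np.ones((theta.size, phi.size)), theta, phi)
```

The upwelling irradiance and the average cosines follow from the same radiance grid with the obvious changes of hemisphere and weighting, which is why a fully specified radiance distribution determines all the apparent optical properties.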
15

Multi-dimensional digital signal integration with applications in image, video and light field processing

Sevcenco, Ioana Speranta 16 August 2018 (has links)
Multi-dimensional digital signals have become an intertwined part of day to day life, from digital images and videos used to capture and share life experiences, to more powerful scene representations such as light field images, which open the gate to previously challenging tasks, such as post capture refocusing or eliminating visible occlusions from a scene. This dissertation delves into the world of multi-dimensional signal processing and introduces a tool of particular use for gradient based solutions of well-known signal processing problems. Specifically, a technique to reconstruct a signal from a given gradient data set is developed in the case of two dimensional (2-D), three dimensional (3-D) and four dimensional (4-D) digital signals. The reconstruction technique is multiresolution in nature, and begins by using the given gradient to generate a multi-dimensional Haar wavelet decomposition of the signals of interest, and then reconstructs the signal by Haar wavelet synthesis, performed on successive resolution levels. The challenges in developing this technique are non-trivial and are brought about by the applications at hand. For example, in video content replacement, the gradient data from which a video sequence needs to be reconstructed is a combination of gradient values that belong to different video sequences. In most cases, such operations disrupt the conservative nature of the gradient data set. The effects of the non-conservative nature of the newly generated gradient data set are attenuated by using an iterative Poisson solver at each resolution level during the reconstruction. A second and more important challenge is brought about by the increase in signal dimensionality. In a previous approach, an intermediate extended signal with symmetric region of support is obtained, and the signal of interest is extracted from it. This approach is reasonable in 2-D, but becomes less appealing as the signal dimensionality increases. 
To avoid generating data that is then discarded, a new approach is proposed, in which signal extension is no longer performed. Instead, different procedures are suggested to generate a non-symmetric Haar wavelet decomposition of the signals of interest. In the case of 2-D and 3-D signals, ways to obtain this decomposition exactly from the given gradient data and the average value of the signal are proposed. In addition, ways to approximate a subset of decomposition coefficients are introduced and the visual consequences of such approximations are studied in the special case of 2-D digital images. Several ways to approximate the same subset of decomposition coefficients are developed in the special case of 4-D light field images. Experiments run on various 2-D, 3-D and 4-D test signals are included to provide an insight on the performance of the reconstruction technique. The value of the multi-dimensional reconstruction technique is then demonstrated by including it in a number of signal processing applications. First, an efficient algorithm is developed with the purpose of combining information from the gradient of a set of 2-D images with different regions in focus or different exposure times, with the purpose of generating an all-in-focus image or revealing details that were lost due to improper exposure setting. Moving on to 3-D signal processing applications, two video editing problems are studied and gradient based solutions are presented. In the first one, the objective is to seamlessly place content from one video sequence in another, while in the second one, to combine elements from two video sequences and generate a transparency effect. Lastly, a gradient based technique for editing 4-D scene representations (light fields) is presented, as well as a technique to combine information from two light fields with the purpose of generating a light field with more details of the imaged scene. 
All these applications show that the developed technique is a reliable tool for gradient domain based solutions of signal processing problems.
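As a baseline for the reconstruction problem treated in this abstract: when the gradient field is conservative (no editing has mixed gradients from different sources), a 2-D signal can be recovered from its forward differences and a single anchor value by plain cumulative summation. The toy sketch below covers only that consistent case; it is the non-conservative, higher-dimensional setting that motivates the Haar wavelet decomposition and iterative Poisson solver of the dissertation.

```python
import numpy as np

def integrate_gradient(gx, gy, anchor):
    """Recover u of shape (h, w) from forward differences
    gx[i, j] = u[i, j+1] - u[i, j]   (shape (h, w-1))
    gy[i, j] = u[i+1, j] - u[i, j]   (shape (h-1, w))
    and the corner value anchor = u[0, 0].
    Exact only for a conservative (curl-free) gradient field.
    """
    h, w = gy.shape[0] + 1, gx.shape[1] + 1
    u = np.empty((h, w))
    # integrate down the first column, then across each row
    u[:, 0] = anchor + np.concatenate(([0.0], np.cumsum(gy[:, 0])))
    u[:, 1:] = u[:, :1] + np.cumsum(gx, axis=1)
    return u
```

For a non-conservative field this path-dependent integration would give different answers along different paths, which is exactly the inconsistency the multiresolution reconstruction attenuates with a Poisson solver at each level.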
16

Depth Estimation from Structured Light Fields

Li, Yan 03 July 2020 (has links) (PDF)
Light fields have become popular as a new geometric representation of 3D scenes. Composed of multiple views, they offer great potential to improve depth perception of a scene. Light fields can be captured by different camera sensors, where different acquisitions give rise to different representations: mainly a line of camera views (the 3D light field representation) or a grid of camera views (the 4D light field representation). When the capture positions are uniformly distributed, the outputs are structured light fields. This thesis focuses on depth estimation from structured light fields. The light field representations (or setups) differ not only in dimensionality (3D versus 4D) but also in the density, or baseline, of the camera views. Rather than aiming only at reconstructing high-quality depth from dense (narrow-baseline) light fields, we pursue a more general objective: reconstructing depth from a wide range of light field setups. Hence, a series of depth estimation methods for light fields, both traditional and deep learning-based, is presented in this thesis. Extra effort is devoted to achieving high performance in terms of depth accuracy and computational efficiency. Specifically: 1) a robust traditional framework is put forward for estimating depth in sparse (wide-baseline) light fields, combining cost calculation, window-based filtering and optimization; 2) this framework is extended with new or alternative components to 4D light fields, and the extended framework predicts depth independently of the number of views and/or the baseline of the 4D light field; 3) two new deep learning-based methods are proposed for narrow-baseline light fields, where features are learned from Epipolar Plane Images and light field images; one of the methods is designed as a lightweight model for more practical goals; 4) to address the lack of suitable datasets, a large-scale and diverse synthetic wide-baseline dataset with labeled data is created, and a new lightweight deep model is proposed for wide-baseline 4D light fields. This model also works on narrow-baseline 4D light fields when trained on narrow-baseline datasets. Evaluations are made on public light field datasets. Experimental results show that the proposed depth estimation methods handle a wide range of light field setups and achieve high-quality depth, with some even outperforming state-of-the-art methods. / Doctorat en Sciences de l'ingénieur et technologie
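The photo-consistency idea behind the traditional framework above can be sketched for a toy 1-D light field: hypothesize a disparity, shift every view back towards the centre view accordingly, and keep the hypothesis with the lowest matching cost. Integer shifts, a single global disparity and a plain SSD cost are simplifying assumptions for illustration; the thesis framework adds per-pixel costs, window-based filtering and optimization.

```python
import numpy as np

def best_disparity(views, d_candidates):
    """Pick the disparity that best explains a 1-D light field.

    views: list of 1-D signals from equally spaced cameras along a line.
    For each integer disparity d, warp view k by -d * (k - centre) and
    accumulate the sum of squared differences against the centre view.
    """
    c = len(views) // 2
    costs = []
    for d in d_candidates:
        cost = 0.0
        for k, v in enumerate(views):
            aligned = np.roll(v, -d * (k - c))
            cost += float(np.sum((aligned - views[c]) ** 2))
        costs.append(cost)
    return d_candidates[int(np.argmin(costs))]

# synthetic check: views of a small bright bar shifted by a known disparity
base = np.zeros(64)
base[30:34] = 1.0
d_true, n_views = 2, 5
centre = n_views // 2
views = [np.roll(base, d_true * (k - centre)) for k in range(n_views)]
```

In an Epipolar Plane Image this is the familiar statement that scene points trace lines whose slope is the disparity; sweeping hypotheses and scoring alignment is the cost-calculation stage of the framework.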
17

Realistické zobrazování voxelových scén v reálném čase / Real-Time Photorealistic Rendering of Voxel Scenes

Flajšingr, Petr January 2021 (has links)
The subject of this thesis is an implementation of realistic rendering of voxel scenes using a graphics card. The work explains the fundamentals of realistic rendering and the voxel representation of visual data. It also presents selected hierarchical structures usable for acceleration and describes the design of a solution, focusing on the representation of voxel data and their rendering. The thesis describes the libraries and algorithms created as part of the project, and evaluates the time and memory requirements of the application along with its graphical output.
18

Light-Field Style Transfer

Hart, David Marvin 01 November 2019 (has links)
For many years, light fields have been a unique way of capturing a scene. By using a particular set of optics, a light field camera is able to, in a single moment, take images of the same scene from multiple perspectives. These perspectives can be used to calculate the scene geometry and allow for effects not possible with standard photographs, such as refocus and the creation of novel views. Neural style transfer is the process of training a neural network to render photographs in the style of a particular painting or piece of art. This is a simple process for a single photograph, but naively applying style transfer to each view in a light field generates inconsistencies in coloring between views. Because of these inconsistencies, common light field effects break down. We propose a style transfer method for light fields that maintains consistency between different views of the scene. This is done by using warping techniques based on the depth estimation of the scene. These warped images are then used to compare areas of similarity between views and incorporate differences into the loss function of the style transfer network. Additionally, this is done in a post-training fashion, which removes the need for a light field training set.
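The cross-view penalty described above can be reduced to a masked photometric term: compare a stylized view with another stylized view warped into its frame, restricted to pixels where the depth-based warp is valid, and add that penalty to the style-transfer loss. This standalone numpy sketch shows only that term; the warping operator and the network are assumed to exist elsewhere, and the exact loss used in the thesis may differ.

```python
import numpy as np

def consistency_loss(stylized_a, warped_b, valid_mask):
    """Masked mean squared colour difference between stylized view a and
    stylized view b warped into a's frame.

    stylized_a, warped_b: float arrays of shape (h, w, 3).
    valid_mask: (h, w) array, 1 where the depth-based warp is valid.
    Returns the summed squared difference per valid pixel.
    """
    diff = (stylized_a - warped_b) ** 2
    n = max(float(valid_mask.sum()), 1.0)
    return float((diff * valid_mask[..., None]).sum() / n)
```

Driving this term to zero is what keeps colours stable across views, so that depth-dependent effects such as refocus still work on the stylized light field.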
19

From focal stack to light field : Reconstruction of a lightfield

Joujo, Johannes January 2024 (has links)
A light field requires a focal stack for its generation, and in this study the focal stack is created using the software Blender. A focal stack is the folder containing images of the same scene taken by a stationary camera with different focal lengths; the focal length is the parameter that decides which regions of the scene are in focus. An add-on is developed to enable users to create a focal stack from a scene in Blender, allowing them to specify the desired focal lengths and the number of images. Once the focal stack is generated, it is used in light field generation algorithms, which are evaluated on the time taken to generate the light field and the average PSNR value compared to the focal stack. The algorithms are first evaluated independently with focal stack sizes of 35, 41, 50, 60 and 70 images, and are then compared to each other. The study's objective was to create a Blender add-on and use the generated focal stacks to assess the performance of light field generation methods. The study demonstrated that light field creation was more effective with the SART algorithms than with the algorithm requiring model training beforehand.
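The average-PSNR evaluation mentioned above is the usual peak signal-to-noise ratio computed per reconstructed image and averaged over the stack; a minimal sketch, with the image shapes and peak value as generic placeholders:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction, both with values in [0, peak]."""
    mse = float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))
    return float('inf') if mse == 0.0 else 10.0 * np.log10(peak * peak / mse)

def average_psnr(refs, tests, peak=255.0):
    """Mean PSNR over corresponding image pairs, e.g. a focal stack and
    the matching views re-rendered from the generated light field."""
    return sum(psnr(r, t, peak) for r, t in zip(refs, tests)) / len(refs)
```

Higher values indicate closer reconstructions; identical images give infinite PSNR, which is why averages are only meaningful over imperfect reconstructions.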
20

Architectures et apports de systèmes de vision light-field pour la vision par ordinateur / Designs and contributions of light-field vision systems for computer vision

Riou, Cécile 13 December 2017 (has links)
This thesis deals with light-field cameras as cameras with 3D capabilities. The raw images acquired with these systems are generally unusable directly; the main obstacle to their use lies in the complex processing of the recorded images. This thesis aims to overcome these limitations by focusing on multi-view and multi-camera devices. Moreover, as one of the targeted application domains is industrial vision, the images are acquired in natural lighting in order to preserve the possibility of applying conventional vision processing to them. The work is based on three axes: the study and optical design of multi-camera and multi-view light-field systems, the calibration of these devices and the development of algorithms, and finally their application to demonstrate the interest of these cameras in various fields.
