1 |
Applications of 3D computational photography to marine science. Scott-Murray, Amy. January 2017.
This thesis represents the first dedicated study of the application of computational photography in marine science. It deals chiefly with the acquisition and use of photogrammetrically derived 3D organism models. The use of 3D models as 'virtual specimens' means that they may be securely archived and accessed by anyone in any part of the world. Interactive 3D objects enhance learning by engaging the viewer in a participatory manner, and can help to clarify features that are unclear in photographs or diagrams. Measurements may be taken from these models for morphometric work, either manually or in an automated process. Digital 3D models permit the collection of novel metrics such as volume and surface area, which are very difficult to obtain by traditional means. These, and other metrics taken from 3D models, are a key step towards automating the species identification process. Where an organism changes over time, photogrammetry offers the ability to compare its shape mathematically before and after the change. Sponge plasticity in response to stress and injury is quantified and visualised here for the first time. An array of networked underwater cameras was constructed for simultaneous capture of image sets. The philosophy of adapting simple, cheap consumer hardware is continued for the imaging and quantification of marine particulates. A restricted light field imaging system is described, together with techniques for image processing and data extraction. The techniques described are shown to be as effective as traditional instruments and methods for particulate measurement. The array cameras used a novel epoxy encapsulation technique which offers significant weight and cost advantages over traditional metal pressure housings; its application to standalone autonomous marine cameras is also described. A fully synchronised autonomous in situ photogrammetry array is now possible. This will permit the non-invasive archiving and examination of organisms that may be damaged by recovery to the surface.
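Once a photogrammetric model has been scaled and made watertight, metrics such as surface area and volume follow directly from the mesh. A minimal sketch using the open-source trimesh library (the file name is a placeholder, and this is illustrative rather than the thesis's own tooling):

```python
import trimesh

# Load a photogrammetric reconstruction (path is a placeholder).
mesh = trimesh.load("specimen.ply")

# Volume is only meaningful for a closed (watertight) surface.
if not mesh.is_watertight:
    mesh.fill_holes()

print(f"surface area: {mesh.area:.2f} (squared model units)")
print(f"volume:       {mesh.volume:.2f} (cubed model units)")
```

Units are those of the model, so the mesh must first be scaled against a known reference in the scene for the metrics to be physically meaningful.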
|
2 |
Digital Stack Photography and Its Applications. Hu, Jun. January 2014.
This work centers on digital stack photography and its applications. A stack of images refers, in a broad sense, to an ensemble of associated images taken while varying one or more parameters of the system configuration or setting. An image stack captures and contains potentially more information than any of its constituent images. Digital stack photography (DST) techniques exploit this rich information to render a synthesized image that oversteps the limitations of a digital camera's capabilities.

This work considers in particular two basic DST problems, both long-standing challenges, and their applications. One is high-dynamic-range (HDR) imaging of non-stationary dynamic scenes, in which the stacked images vary in exposure conditions. The other is large-scale panorama composition from multiple images. In this case, the component images are related to each other by the spatial relation among the subdomains of the same scene that they jointly cover and capture. We consider the non-conventional, practical, and challenging situations where the spatial overlap among the sub-images is sparse (S), irregular in geometry and imprecise with respect to the designed geometry (I), and the captured data over the overlap zones are noisy (N) or lacking in features. We refer to these conditions simply as the S.I.N. conditions.

The two problems share common challenges; for example, both face the dominant problem of image alignment for seamless and artifact-free image composition. Our solutions to these common problems are manifested differently in each particular problem, as a result of adaptation to the specific properties of each type of image ensemble. For the exposure stack, existing alignment approaches struggled to overcome three main challenges: inconsistency in brightness, large displacement in dynamic scenes, and pixel saturation. We exploit solutions in three aspects. First, we introduce a model that addresses and admits changes in both geometric configurations and optical conditions, while following the traditional optical-flow description. Previous models treated these two types of changes as mutually exclusive, handling one or the other. Next, we extend the pixel-based optical-flow model to a patch-based model. The advantages are two-fold: a patch has texture and local content that individual pixels fail to present, and it offers opportunities for faster processing, such as via two-scale or multi-scale processing. The extended model is then solved efficiently with an EM-like algorithm, which is reliable in the presence of large displacement. Third, we present a generative model for reducing or eliminating the typical artifacts that arise as a side effect of inadequate alignment of clipped pixels. A patch-based texture synthesis is combined with the patch-based alignment to achieve an artifact-free result.

For large-scale panorama composition under the S.I.N. conditions, we have developed an effective solution scheme that significantly reduces both processing time and artifacts. Previously existing approaches can be roughly categorized as either geometry-based or feature-based composition. In the former, one relies on precise knowledge of the system geometry, by design and/or calibration. It works well with a far-away scene, in which case there is only limited variation in projective geometry among the sub-images. However, the system geometry is not invariant to physical conditions such as thermal and stress variation. Composition with this approach is typically done in the spatial domain. The other approach is more robust to geometric and optical conditions. It works surprisingly well with feature-rich and stationary scenes, but not in the absence of recognizable features. Composition based on feature matching is typically done in the spatial-gradient domain. In short, both approaches are challenged by the S.I.N. conditions. On certain snapshot data sets obtained and contributed by Brady et al., these methods either fail in composition or render images with visually disturbing artifacts. To overcome the S.I.N. conditions, we have reconciled these two approaches, making successful and complementary use of both the a priori, approximate information about the geometric system configuration and the feature information from the image data. We also designed and developed a software architecture with careful extraction of primitive function modules that can be efficiently implemented and executed in parallel. In addition to much faster processing, the resulting images are clearer and sharper at the overlap zones, without typical ghosting artifacts. / Dissertation
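The patch-based alignment and generative model above are specific to the thesis; as a baseline illustration of the exposure-stack half of the problem, OpenCV's standard HDR pipeline shows the capture-to-radiance-map flow for a static stack (file names and exposure times below are placeholders):

```python
import cv2
import numpy as np

# Exposure stack of the same scene (paths and times are placeholders).
paths = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
images = [cv2.imread(p) for p in paths]
times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)  # seconds

# MTB alignment compensates small camera shake only; large object motion
# is exactly what the patch-based flow described above is designed to handle.
cv2.createAlignMTB().process(images, images)

# Recover the camera response curve, then merge into a radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone map the radiance map for an 8-bit display.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```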
|
3 |
Reconfigurable Snapshot HDR Imaging Using Coded Masks. Alghamdi, Masheal M. 10 July 2021.
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or on hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging, devising a spatially varying modulation mask in the hardware combined with a deep learning algorithm to reconstruct the HDR image.

In this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors and can instead be switched between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible, yet robust, way to modulate per-pixel exposures, and that the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrated that our method outperforms the state of the art in terms of visual perception quality.

We leverage transfer learning to overcome the lack of sufficiently large HDR datasets. We show how transferring from a different large-scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction.
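To make the per-pixel exposure idea concrete, here is a toy forward model of a coded-mask capture; the attenuation levels, tile layout, and 8-bit sensor are illustrative assumptions, not the thesis's calibrated design:

```python
import numpy as np

def coded_capture(radiance, mask, bit_depth=8):
    """Simulate one coded-mask exposure: per-pixel attenuation,
    sensor saturation, then quantization to the sensor bit depth."""
    full_scale = 2 ** bit_depth - 1
    modulated = radiance * mask              # spatially varying exposure
    clipped = np.clip(modulated, 0.0, 1.0)   # saturation at full well
    return np.round(clipped * full_scale) / full_scale

# Hypothetical 2x2 tile of attenuation levels repeated across the sensor:
# each neighborhood then sees four different exposures of the same scene.
tile = np.array([[1.0, 0.5], [0.25, 0.125]])
h, w = 512, 512
mask = np.tile(tile, (h // 2, w // 2))

radiance = np.random.rand(h, w) * 4.0   # synthetic scene, up to 4x sensor range
raw = coded_capture(radiance, mask)     # what a reconstruction network would see
```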
|
4 |
De-Emphasis of Distracting Image Regions Using Texture Power Maps. Su, Sara L.; Durand, Frédo; Agrawala, Maneesh.
We present a post-processing technique that selectively reduces the salience of distracting regions in an image. Computational models of attention predict that texture variation influences bottom-up attention mechanisms. Our method reduces the spatial variation of texture using power maps, high-order features describing local frequency content in an image. Modification of the power maps results in effective regional de-emphasis. We validate our results quantitatively via a human-subject search experiment and qualitatively with eye-tracking data. / Singapore-MIT Alliance (SMA)
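The paper's exact power-map construction is not reproduced here; as a loose illustration of "local frequency power", one band of such a map can be sketched as the locally pooled energy of a band-pass response (the filter scales are arbitrary choices):

```python
import numpy as np
from scipy import ndimage

def power_map_band(gray, center_sigma=1.0, surround_sigma=4.0, pool_sigma=8.0):
    """One illustrative power-map band: squared difference-of-Gaussians
    (band-pass) response, pooled over a local neighborhood."""
    band = (ndimage.gaussian_filter(gray, center_sigma)
            - ndimage.gaussian_filter(gray, surround_sigma))
    return ndimage.gaussian_filter(band ** 2, pool_sigma)

# De-emphasis would then attenuate the band-pass component wherever
# this map is high inside the distracting region.
```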
|
5 |
Image relighting using shading proxies / Reiluminação de imagens utilizando shading proxies. Henz, Bernardo. January 2014.
We present a practical solution to the problem of single-image relighting of objects with arbitrary shapes. It is based on a shading-ratio image obtained from the original and target lighting applied to shading proxies (warped versions of 3-D models that approximate the objects to be relit). Our approach is flexible and robust, being applicable to objects with non-uniform albedos. We demonstrate its effectiveness by relighting a large number of photographs, paintings, and drawings containing a variety of objects of different materials. In addition to relighting, our technique can estimate smooth normal and depth maps from pictures, perform intrinsic-image decomposition, and transfer lighting to line drawings. Preliminary evaluation has shown that our technique produces convincing results, and novice users can relight images in just a couple of minutes.
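The core shading-ratio operation reduces to a per-pixel multiplication; a minimal sketch under a Lambertian image = albedo × shading assumption, with the proxy rendering and warping steps omitted:

```python
import numpy as np

def relight(image, shading_orig, shading_target, eps=1e-4):
    """Relight an image by the ratio of target to original shading, both
    rendered from the warped shading proxy (rendering/warping not shown)."""
    ratio = (shading_target + eps) / (shading_orig + eps)  # shading-ratio image
    return np.clip(image * ratio, 0.0, 1.0)
```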
|
6 |
Learning Consistent Visual Synthesis. Gao, Chen. 22 August 2022.
With the rapid development of photography, we can easily record the 3D world by taking photos and videos. In traditional images and videos, the viewer observes the scene from fixed viewpoints and cannot navigate the scene or edit the 2D observation afterward.
Thus, visual content editing and synthesis have become essential tasks in computer vision.
However, achieving high-quality visual synthesis often requires a complex and expensive multi-camera setup, which is not practical for daily use because most people have only one cellphone camera. A single camera, on the other hand, cannot provide enough multi-view constraints to synthesize consistent visual content.
Therefore, in this thesis, I address this challenging single-camera visual synthesis problem by leveraging different regularizations. I study three consistent synthesis problems: time-consistent synthesis, view-consistent synthesis, and view-time-consistent synthesis. I show how we can take cellphone-captured monocular images and videos as input to model the scene and consistently synthesize new content for an immersive viewing experience. / Doctor of Philosophy / With the rapid development of photography, we can easily record the 3D world by taking photos and videos. More recently, we have incredible cameras on cell phones, which enable us to take pro-level photos and videos. These powerful cellphones even have advanced computational photography features built in. However, these features focus on faithfully recording the world during capture. We can only view the photos and videos as they are; we cannot navigate the scene, edit the 2D observation, or synthesize new content afterward.
Thus, visual content editing and synthesis have become essential tasks in computer vision. We know that achieving high-quality visual synthesis often requires a complex and expensive multi-camera setup, which is not practical for daily use because most people have only one cellphone camera. A single camera, on the other hand, is not enough to synthesize consistent visual content.
Therefore, in this thesis, I address this challenging single-camera visual synthesis problem by leveraging different regularizations. I study three consistent synthesis problems: time-consistent synthesis, view-consistent synthesis, and view-time-consistent synthesis. I show how we can take cellphone-captured monocular images and videos as input to model the scene and consistently synthesize new content for an immersive viewing experience.
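As a concrete (and hypothetical) instance of the kind of regularization involved, a common time-consistency term penalizes a synthesized frame against the previous frame warped forward by optical flow; a minimal PyTorch sketch, not the thesis's specific losses:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (N, C, H, W) by `flow` (N, 2, H, W), in pixels."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).unsqueeze(0).to(frame)  # (1, 2, H, W) pixel coords
    coords = grid + flow
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    gx = 2 * coords[:, 0] / (w - 1) - 1
    gy = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

def temporal_consistency_loss(synth_t, synth_t1, flow_t1_to_t):
    """L1 penalty between frame t+1 and frame t warped into its view."""
    return (warp(synth_t, flow_t1_to_t) - synth_t1).abs().mean()
```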
|
7 |
Programmable Image-Based Light Capture for Previsualization. Lindsay, Clifford. 02 April 2013.
Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his "mind's eye". Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called On-set previs, which provides a real-time view with little to no processing. Other types of previs, such as Technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation, and are usually employed by visual effects crews rather than by cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements, to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, displayed interactively on a mobile capture and rendering platform.
This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework that forms the main contribution of this dissertation comprises: 1) a novel programmable camera architecture that provides programmability for low-level features together with a visual programming interface; 2) new algorithms that analyze and decompose the scene photometrically; and 3) a previs interface that leverages the previous two tiers to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene with multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within it. Since our method is based on a Lambertian reflectance assumption, it works well where that assumption holds, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Scenes containing lights whose colors are too similar can also lead to degenerate cases in relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged for multi-illuminant white balancing and light-color estimation in a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white-balance methods and show that ours is superior when at least one of the light colors is known a priori.
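The dissertation's Symmetric lighting derivation is not reproduced here; as context for the inverse problem it solves, a toy two-illuminant Lambertian forward model (all arrays and light colors below are synthetic stand-ins):

```python
import numpy as np

def render(albedo, s1, s2, c1, c2):
    """Toy two-illuminant Lambertian model: pixel color = albedo times the
    sum of each light's color weighted by its per-pixel shading term.
    albedo: (H, W, 3); s1, s2: (H, W) shading; c1, c2: (3,) light colors."""
    return albedo * (s1[..., None] * c1 + s2[..., None] * c2)

# Relighting amounts to re-rendering with new light colors once the per-light
# shading terms are recovered -- the inversion Symmetric lighting performs.
h, w = 64, 64
albedo = np.random.rand(h, w, 3)
s1, s2 = np.random.rand(h, w), np.random.rand(h, w)  # stand-in shading terms
original = render(albedo, s1, s2, np.array([1.0, 0.9, 0.8]), np.array([0.3, 0.4, 0.9]))
relit = render(albedo, s1, s2, np.array([0.9, 0.2, 0.2]), np.array([0.2, 0.9, 0.3]))
```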
|
8 |
Light field editing and rendering / Édition et rendu de champs de lumière. Hog, Matthieu. 21 November 2018.
By imaging a scene from different viewpoints, a light field captures a great deal of information about the scene geometry. Thanks to recent developments in acquisition devices (mainly plenoptic cameras and camera arrays), light field imaging has become a serious alternative for 3D content capture and related problems. The goal of this thesis is twofold. One of the main applications of light field imaging is its ability to produce new views from a single capture. In the first part, we propose new image rendering techniques for two cases that deviate from mainstream light field image rendering. We first propose a full pipeline for focused plenoptic cameras, addressing calibration, depth estimation, and image rendering. We then move to the problem of view synthesis, seeking to generate intermediate views given only the 4 corner views of a light field. Image editing is a common step in media production. For 2D images and videos, many commercial tools exist; for light fields, however, the problem is rather unexplored. In the second part, we propose new and efficient light field editing techniques. We first propose a new graph-based pixel-wise segmentation method that, from a sparse set of user inputs, segments all the views of a light field simultaneously. We then propose an automatic light field over-segmentation approach that makes use of the computational power of GPUs. This approach further decreases the computational requirements of light field segmentation, and we extend it to light field video segmentation.
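The thesis's rendering pipelines are not reproduced here; as background on how the views of a light field combine into new images, the classic shift-and-add synthetic refocus can be sketched as follows (integer-pixel shifts and a grayscale (U, V, H, W) array are simplifying assumptions):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Classic shift-and-add refocus: average all (u, v) views after shifting
    each one proportionally to its offset from the central view."""
    U, V, H, W = lightfield.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))   # vertical parallax shift
            dx = int(round(alpha * (v - vc)))   # horizontal parallax shift
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# alpha selects the depth plane brought into focus; alpha = 0 reproduces
# the plain average of the views.
```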
|